public inbox for gentoo-commits@lists.gentoo.org
From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.16 commit in: /
Date: Wed, 20 Jun 2018 19:44:16 +0000 (UTC)
Message-ID: <1529523847.39d0fad6ec600bb1c3d7cb58750a0f1f96b7bf7b.mpagano@gentoo>

commit:     39d0fad6ec600bb1c3d7cb58750a0f1f96b7bf7b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 20 19:44:07 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 20 19:44:07 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=39d0fad6

Linux patch 4.16.17

 0000_README              |     4 +
 1016_linux-4.16.17.patch | 10919 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 10923 insertions(+)

diff --git a/0000_README b/0000_README
index 83e0c3b..c683722 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-4.16.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.16.16
 
+Patch:  1016_linux-4.16.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.16.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-4.16.17.patch b/1016_linux-4.16.17.patch
new file mode 100644
index 0000000..c408309
--- /dev/null
+++ b/1016_linux-4.16.17.patch
@@ -0,0 +1,10919 @@
+diff --git a/Documentation/devicetree/bindings/display/panel/panel-common.txt b/Documentation/devicetree/bindings/display/panel/panel-common.txt
+index 557fa765adcb..5d2519af4bb5 100644
+--- a/Documentation/devicetree/bindings/display/panel/panel-common.txt
++++ b/Documentation/devicetree/bindings/display/panel/panel-common.txt
+@@ -38,7 +38,7 @@ Display Timings
+   require specific display timings. The panel-timing subnode expresses those
+   timings as specified in the timing subnode section of the display timing
+   bindings defined in
+-  Documentation/devicetree/bindings/display/display-timing.txt.
++  Documentation/devicetree/bindings/display/panel/display-timing.txt.
+ 
+ 
+ Connectivity
+diff --git a/Documentation/devicetree/bindings/dma/renesas,rcar-dmac.txt b/Documentation/devicetree/bindings/dma/renesas,rcar-dmac.txt
+index 891db41e9420..98d7898fcd78 100644
+--- a/Documentation/devicetree/bindings/dma/renesas,rcar-dmac.txt
++++ b/Documentation/devicetree/bindings/dma/renesas,rcar-dmac.txt
+@@ -25,6 +25,7 @@ Required Properties:
+ 		- "renesas,dmac-r8a7794" (R-Car E2)
+ 		- "renesas,dmac-r8a7795" (R-Car H3)
+ 		- "renesas,dmac-r8a7796" (R-Car M3-W)
++		- "renesas,dmac-r8a77965" (R-Car M3-N)
+ 		- "renesas,dmac-r8a77970" (R-Car V3M)
+ 
+ - reg: base address and length of the registers block for the DMAC
+diff --git a/Documentation/devicetree/bindings/net/renesas,ravb.txt b/Documentation/devicetree/bindings/net/renesas,ravb.txt
+index b4dc455eb155..d159807c2155 100644
+--- a/Documentation/devicetree/bindings/net/renesas,ravb.txt
++++ b/Documentation/devicetree/bindings/net/renesas,ravb.txt
+@@ -17,6 +17,7 @@ Required properties:
+ 
+       - "renesas,etheravb-r8a7795" for the R8A7795 SoC.
+       - "renesas,etheravb-r8a7796" for the R8A7796 SoC.
++      - "renesas,etheravb-r8a77965" for the R8A77965 SoC.
+       - "renesas,etheravb-r8a77970" for the R8A77970 SoC.
+       - "renesas,etheravb-r8a77980" for the R8A77980 SoC.
+       - "renesas,etheravb-r8a77995" for the R8A77995 SoC.
+diff --git a/Documentation/devicetree/bindings/pinctrl/allwinner,sunxi-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/allwinner,sunxi-pinctrl.txt
+index 09789fdfa749..4dc4c354c72b 100644
+--- a/Documentation/devicetree/bindings/pinctrl/allwinner,sunxi-pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/allwinner,sunxi-pinctrl.txt
+@@ -55,9 +55,9 @@ pins it needs, and how they should be configured, with regard to muxer
+ configuration, drive strength and pullups. If one of these options is
+ not set, its actual value will be unspecified.
+ 
+-This driver supports the generic pin multiplexing and configuration
+-bindings. For details on each properties, you can refer to
+-./pinctrl-bindings.txt.
++Allwinner A1X Pin Controller supports the generic pin multiplexing and
++configuration bindings. For details on each properties, you can refer to
++ ./pinctrl-bindings.txt.
+ 
+ Required sub-node properties:
+   - pins
+diff --git a/Documentation/devicetree/bindings/serial/amlogic,meson-uart.txt b/Documentation/devicetree/bindings/serial/amlogic,meson-uart.txt
+index 8ff65fa632fd..c06c045126fc 100644
+--- a/Documentation/devicetree/bindings/serial/amlogic,meson-uart.txt
++++ b/Documentation/devicetree/bindings/serial/amlogic,meson-uart.txt
+@@ -21,7 +21,7 @@ Required properties:
+ - interrupts : identifier to the device interrupt
+ - clocks : a list of phandle + clock-specifier pairs, one for each
+ 	   entry in clock names.
+-- clocks-names :
++- clock-names :
+    * "xtal" for external xtal clock identifier
+    * "pclk" for the bus core clock, either the clk81 clock or the gate clock
+    * "baud" for the source of the baudrate generator, can be either the xtal
+diff --git a/Documentation/devicetree/bindings/serial/mvebu-uart.txt b/Documentation/devicetree/bindings/serial/mvebu-uart.txt
+index 2ae2fee7e023..b7e0e32b9ac6 100644
+--- a/Documentation/devicetree/bindings/serial/mvebu-uart.txt
++++ b/Documentation/devicetree/bindings/serial/mvebu-uart.txt
+@@ -24,7 +24,7 @@ Required properties:
+     - Must contain two elements for the extended variant of the IP
+       (marvell,armada-3700-uart-ext): "uart-tx" and "uart-rx",
+       respectively the UART TX interrupt and the UART RX interrupt. A
+-      corresponding interrupts-names property must be defined.
++      corresponding interrupt-names property must be defined.
+     - For backward compatibility reasons, a single element interrupts
+       property is also supported for the standard variant of the IP,
+       containing only the UART sum interrupt. This form is deprecated
+diff --git a/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt b/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt
+index cf504d0380ae..88f947c47adc 100644
+--- a/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt
++++ b/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt
+@@ -41,6 +41,8 @@ Required properties:
+     - "renesas,hscif-r8a7795" for R8A7795 (R-Car H3) HSCIF compatible UART.
+     - "renesas,scif-r8a7796" for R8A7796 (R-Car M3-W) SCIF compatible UART.
+     - "renesas,hscif-r8a7796" for R8A7796 (R-Car M3-W) HSCIF compatible UART.
++    - "renesas,scif-r8a77965" for R8A77965 (R-Car M3-N) SCIF compatible UART.
++    - "renesas,hscif-r8a77965" for R8A77965 (R-Car M3-N) HSCIF compatible UART.
+     - "renesas,scif-r8a77970" for R8A77970 (R-Car V3M) SCIF compatible UART.
+     - "renesas,hscif-r8a77970" for R8A77970 (R-Car V3M) HSCIF compatible UART.
+     - "renesas,scif-r8a77995" for R8A77995 (R-Car D3) SCIF compatible UART.
+diff --git a/Documentation/devicetree/bindings/vendor-prefixes.txt b/Documentation/devicetree/bindings/vendor-prefixes.txt
+index ae850d6c0ad3..8ff7eadc8bef 100644
+--- a/Documentation/devicetree/bindings/vendor-prefixes.txt
++++ b/Documentation/devicetree/bindings/vendor-prefixes.txt
+@@ -181,6 +181,7 @@ karo	Ka-Ro electronics GmbH
+ keithkoep	Keith & Koep GmbH
+ keymile	Keymile GmbH
+ khadas	Khadas
++kiebackpeter    Kieback & Peter GmbH
+ kinetic Kinetic Technologies
+ kingnovel	Kingnovel Technology Co., Ltd.
+ kosagi	Sutajio Ko-Usagi PTE Ltd.
+diff --git a/Makefile b/Makefile
+index 55554f392115..02a4f7f8c613 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 16
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+ 
+diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
+index 45a6b9b7af2a..6a4e7341ecd3 100644
+--- a/arch/arm/boot/compressed/Makefile
++++ b/arch/arm/boot/compressed/Makefile
+@@ -117,11 +117,9 @@ ccflags-y := -fpic -mno-single-pic-base -fno-builtin -I$(obj)
+ asflags-y := -DZIMAGE
+ 
+ # Supply kernel BSS size to the decompressor via a linker symbol.
+-KBSS_SZ = $(shell $(CROSS_COMPILE)nm $(obj)/../../../../vmlinux | \
+-		perl -e 'while (<>) { \
+-			$$bss_start=hex($$1) if /^([[:xdigit:]]+) B __bss_start$$/; \
+-			$$bss_end=hex($$1) if /^([[:xdigit:]]+) B __bss_stop$$/; \
+-		}; printf "%d\n", $$bss_end - $$bss_start;')
++KBSS_SZ = $(shell echo $$(($$($(CROSS_COMPILE)nm $(obj)/../../../../vmlinux | \
++		sed -n -e 's/^\([^ ]*\) [AB] __bss_start$$/-0x\1/p' \
++		       -e 's/^\([^ ]*\) [AB] __bss_stop$$/+0x\1/p') )) )
+ LDFLAGS_vmlinux = --defsym _kernel_bss_size=$(KBSS_SZ)
+ # Supply ZRELADDR to the decompressor via a linker symbol.
+ ifneq ($(CONFIG_AUTO_ZRELADDR),y)
+diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
+index 45c8823c3750..517e0e18f0b8 100644
+--- a/arch/arm/boot/compressed/head.S
++++ b/arch/arm/boot/compressed/head.S
+@@ -29,19 +29,19 @@
+ #if defined(CONFIG_DEBUG_ICEDCC)
+ 
+ #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K) || defined(CONFIG_CPU_V7)
+-		.macro	loadsp, rb, tmp
++		.macro	loadsp, rb, tmp1, tmp2
+ 		.endm
+ 		.macro	writeb, ch, rb
+ 		mcr	p14, 0, \ch, c0, c5, 0
+ 		.endm
+ #elif defined(CONFIG_CPU_XSCALE)
+-		.macro	loadsp, rb, tmp
++		.macro	loadsp, rb, tmp1, tmp2
+ 		.endm
+ 		.macro	writeb, ch, rb
+ 		mcr	p14, 0, \ch, c8, c0, 0
+ 		.endm
+ #else
+-		.macro	loadsp, rb, tmp
++		.macro	loadsp, rb, tmp1, tmp2
+ 		.endm
+ 		.macro	writeb, ch, rb
+ 		mcr	p14, 0, \ch, c1, c0, 0
+@@ -57,7 +57,7 @@
+ 		.endm
+ 
+ #if defined(CONFIG_ARCH_SA1100)
+-		.macro	loadsp, rb, tmp
++		.macro	loadsp, rb, tmp1, tmp2
+ 		mov	\rb, #0x80000000	@ physical base address
+ #ifdef CONFIG_DEBUG_LL_SER3
+ 		add	\rb, \rb, #0x00050000	@ Ser3
+@@ -66,8 +66,8 @@
+ #endif
+ 		.endm
+ #else
+-		.macro	loadsp,	rb, tmp
+-		addruart \rb, \tmp
++		.macro	loadsp,	rb, tmp1, tmp2
++		addruart \rb, \tmp1, \tmp2
+ 		.endm
+ #endif
+ #endif
+@@ -561,8 +561,6 @@ not_relocated:	mov	r0, #0
+ 		bl	decompress_kernel
+ 		bl	cache_clean_flush
+ 		bl	cache_off
+-		mov	r1, r7			@ restore architecture number
+-		mov	r2, r8			@ restore atags pointer
+ 
+ #ifdef CONFIG_ARM_VIRT_EXT
+ 		mrs	r0, spsr		@ Get saved CPU boot mode
+@@ -1297,7 +1295,7 @@ phex:		adr	r3, phexbuf
+ 		b	1b
+ 
+ @ puts corrupts {r0, r1, r2, r3}
+-puts:		loadsp	r3, r1
++puts:		loadsp	r3, r2, r1
+ 1:		ldrb	r2, [r0], #1
+ 		teq	r2, #0
+ 		moveq	pc, lr
+@@ -1314,8 +1312,8 @@ puts:		loadsp	r3, r1
+ @ putc corrupts {r0, r1, r2, r3}
+ putc:
+ 		mov	r2, r0
++		loadsp	r3, r1, r0
+ 		mov	r0, #0
+-		loadsp	r3, r1
+ 		b	2b
+ 
+ @ memdump corrupts {r0, r1, r2, r3, r10, r11, r12, lr}
+@@ -1365,6 +1363,8 @@ __hyp_reentry_vectors:
+ 
+ __enter_kernel:
+ 		mov	r0, #0			@ must be 0
++		mov	r1, r7			@ restore architecture number
++		mov	r2, r8			@ restore atags pointer
+  ARM(		mov	pc, r4		)	@ call kernel
+  M_CLASS(	add	r4, r4, #1	)	@ enter in Thumb mode for M class
+  THUMB(		bx	r4		)	@ entry point is always ARM for A/R classes
+diff --git a/arch/arm/boot/dts/bcm-cygnus.dtsi b/arch/arm/boot/dts/bcm-cygnus.dtsi
+index 699fdf94d139..9fe4f5a6379e 100644
+--- a/arch/arm/boot/dts/bcm-cygnus.dtsi
++++ b/arch/arm/boot/dts/bcm-cygnus.dtsi
+@@ -69,7 +69,7 @@
+ 		timer@20200 {
+ 			compatible = "arm,cortex-a9-global-timer";
+ 			reg = <0x20200 0x100>;
+-			interrupts = <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 11 IRQ_TYPE_EDGE_RISING>;
+ 			clocks = <&periph_clk>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/da850.dtsi b/arch/arm/boot/dts/da850.dtsi
+index c66cf7895363..3cf97f4dac24 100644
+--- a/arch/arm/boot/dts/da850.dtsi
++++ b/arch/arm/boot/dts/da850.dtsi
+@@ -46,8 +46,6 @@
+ 		pmx_core: pinmux@14120 {
+ 			compatible = "pinctrl-single";
+ 			reg = <0x14120 0x50>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			#pinctrl-cells = <2>;
+ 			pinctrl-single,bit-per-mux;
+ 			pinctrl-single,register-width = <32>;
+diff --git a/arch/arm/boot/dts/dm8148-evm.dts b/arch/arm/boot/dts/dm8148-evm.dts
+index d6657b3bae84..85d7b5148b0a 100644
+--- a/arch/arm/boot/dts/dm8148-evm.dts
++++ b/arch/arm/boot/dts/dm8148-evm.dts
+@@ -10,7 +10,7 @@
+ 
+ / {
+ 	model = "DM8148 EVM";
+-	compatible = "ti,dm8148-evm", "ti,dm8148";
++	compatible = "ti,dm8148-evm", "ti,dm8148", "ti,dm814";
+ 
+ 	memory@80000000 {
+ 		device_type = "memory";
+diff --git a/arch/arm/boot/dts/dm8148-t410.dts b/arch/arm/boot/dts/dm8148-t410.dts
+index 63883b3479f9..6418f9cdbe83 100644
+--- a/arch/arm/boot/dts/dm8148-t410.dts
++++ b/arch/arm/boot/dts/dm8148-t410.dts
+@@ -9,7 +9,7 @@
+ 
+ / {
+ 	model = "HP t410 Smart Zero Client";
+-	compatible = "hp,t410", "ti,dm8148";
++	compatible = "hp,t410", "ti,dm8148", "ti,dm814";
+ 
+ 	memory@80000000 {
+ 		device_type = "memory";
+diff --git a/arch/arm/boot/dts/dm8168-evm.dts b/arch/arm/boot/dts/dm8168-evm.dts
+index c72a2132aa82..1d030d567307 100644
+--- a/arch/arm/boot/dts/dm8168-evm.dts
++++ b/arch/arm/boot/dts/dm8168-evm.dts
+@@ -10,7 +10,7 @@
+ 
+ / {
+ 	model = "DM8168 EVM";
+-	compatible = "ti,dm8168-evm", "ti,dm8168";
++	compatible = "ti,dm8168-evm", "ti,dm8168", "ti,dm816";
+ 
+ 	memory@80000000 {
+ 		device_type = "memory";
+diff --git a/arch/arm/boot/dts/dra62x-j5eco-evm.dts b/arch/arm/boot/dts/dra62x-j5eco-evm.dts
+index fee0547f7302..31b824ad5d29 100644
+--- a/arch/arm/boot/dts/dra62x-j5eco-evm.dts
++++ b/arch/arm/boot/dts/dra62x-j5eco-evm.dts
+@@ -10,7 +10,7 @@
+ 
+ / {
+ 	model = "DRA62x J5 Eco EVM";
+-	compatible = "ti,dra62x-j5eco-evm", "ti,dra62x", "ti,dm8148";
++	compatible = "ti,dra62x-j5eco-evm", "ti,dra62x", "ti,dm8148", "ti,dm814";
+ 
+ 	memory@80000000 {
+ 		device_type = "memory";
+diff --git a/arch/arm/boot/dts/imx51-zii-rdu1.dts b/arch/arm/boot/dts/imx51-zii-rdu1.dts
+index 5306b78de0ca..380afcafeb16 100644
+--- a/arch/arm/boot/dts/imx51-zii-rdu1.dts
++++ b/arch/arm/boot/dts/imx51-zii-rdu1.dts
+@@ -518,7 +518,7 @@
+ 	};
+ 
+ 	touchscreen@20 {
+-		compatible = "syna,rmi4_i2c";
++		compatible = "syna,rmi4-i2c";
+ 		reg = <0x20>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_ts>;
+@@ -536,8 +536,8 @@
+ 
+ 		rmi4-f11@11 {
+ 			reg = <0x11>;
+-			touch-inverted-y;
+-			touch-swapped-x-y;
++			touchscreen-inverted-y;
++			touchscreen-swapped-x-y;
+ 			syna,sensor-type = <1>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/logicpd-som-lv.dtsi b/arch/arm/boot/dts/logicpd-som-lv.dtsi
+index a30ee9fcb3ae..4fabe4e9283f 100644
+--- a/arch/arm/boot/dts/logicpd-som-lv.dtsi
++++ b/arch/arm/boot/dts/logicpd-som-lv.dtsi
+@@ -26,7 +26,7 @@
+ 		gpio = <&gpio1 3 0>;   /* gpio_3 */
+ 		startup-delay-us = <70000>;
+ 		enable-active-high;
+-		vin-supply = <&vmmc2>;
++		vin-supply = <&vaux3>;
+ 	};
+ 
+ 	/* HS USB Host PHY on PORT 1 */
+@@ -82,6 +82,7 @@
+ 		twl_audio: audio {
+ 			compatible = "ti,twl4030-audio";
+ 			codec {
++				ti,hs_extmute_gpio = <&gpio2 25 GPIO_ACTIVE_HIGH>;
+ 			};
+ 		};
+ 	};
+@@ -195,6 +196,7 @@
+ 		pinctrl-single,pins = <
+ 			OMAP3_CORE1_IOPAD(0x21ba, PIN_INPUT | MUX_MODE0)        /* i2c1_scl.i2c1_scl */
+ 			OMAP3_CORE1_IOPAD(0x21bc, PIN_INPUT | MUX_MODE0)        /* i2c1_sda.i2c1_sda */
++			OMAP3_CORE1_IOPAD(0x20ba, PIN_OUTPUT | MUX_MODE4)        /* gpmc_ncs6.gpio_57 */
+ 		>;
+ 	};
+ };
+@@ -209,7 +211,7 @@
+ 	};
+ 	wl127x_gpio: pinmux_wl127x_gpio_pin {
+ 		pinctrl-single,pins = <
+-			OMAP3_WKUP_IOPAD(0x2a0c, PIN_INPUT | MUX_MODE4)		/* sys_boot0.gpio_2 */
++			OMAP3_WKUP_IOPAD(0x2a0a, PIN_INPUT | MUX_MODE4)		/* sys_boot0.gpio_2 */
+ 			OMAP3_WKUP_IOPAD(0x2a0c, PIN_OUTPUT | MUX_MODE4)	/* sys_boot1.gpio_3 */
+ 		>;
+ 	};
+@@ -244,6 +246,11 @@
+ #include "twl4030.dtsi"
+ #include "twl4030_omap3.dtsi"
+ 
++&vaux3 {
++	regulator-min-microvolt = <2800000>;
++	regulator-max-microvolt = <2800000>;
++};
++
+ &twl {
+ 	twl_power: power {
+ 		compatible = "ti,twl4030-power-idle-osc-off", "ti,twl4030-power-idle";
+diff --git a/arch/arm/boot/dts/omap4.dtsi b/arch/arm/boot/dts/omap4.dtsi
+index 475904894b86..e554b6e039f3 100644
+--- a/arch/arm/boot/dts/omap4.dtsi
++++ b/arch/arm/boot/dts/omap4.dtsi
+@@ -163,10 +163,10 @@
+ 
+ 			cm2: cm2@8000 {
+ 				compatible = "ti,omap4-cm2", "simple-bus";
+-				reg = <0x8000 0x3000>;
++				reg = <0x8000 0x2000>;
+ 				#address-cells = <1>;
+ 				#size-cells = <1>;
+-				ranges = <0 0x8000 0x3000>;
++				ranges = <0 0x8000 0x2000>;
+ 
+ 				cm2_clocks: clocks {
+ 					#address-cells = <1>;
+@@ -250,11 +250,11 @@
+ 
+ 				prm: prm@6000 {
+ 					compatible = "ti,omap4-prm";
+-					reg = <0x6000 0x3000>;
++					reg = <0x6000 0x2000>;
+ 					interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+ 					#address-cells = <1>;
+ 					#size-cells = <1>;
+-					ranges = <0 0x6000 0x3000>;
++					ranges = <0 0x6000 0x2000>;
+ 
+ 					prm_clocks: clocks {
+ 						#address-cells = <1>;
+diff --git a/arch/arm/include/uapi/asm/siginfo.h b/arch/arm/include/uapi/asm/siginfo.h
+deleted file mode 100644
+index d0513880be21..000000000000
+--- a/arch/arm/include/uapi/asm/siginfo.h
++++ /dev/null
+@@ -1,13 +0,0 @@
+-#ifndef __ASM_SIGINFO_H
+-#define __ASM_SIGINFO_H
+-
+-#include <asm-generic/siginfo.h>
+-
+-/*
+- * SIGFPE si_codes
+- */
+-#ifdef __KERNEL__
+-#define FPE_FIXME	0	/* Broken dup of SI_USER */
+-#endif /* __KERNEL__ */
+-
+-#endif
+diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c
+index 6b38d7a634c1..c15318431986 100644
+--- a/arch/arm/kernel/machine_kexec.c
++++ b/arch/arm/kernel/machine_kexec.c
+@@ -95,6 +95,27 @@ void machine_crash_nonpanic_core(void *unused)
+ 		cpu_relax();
+ }
+ 
++void crash_smp_send_stop(void)
++{
++	static int cpus_stopped;
++	unsigned long msecs;
++
++	if (cpus_stopped)
++		return;
++
++	atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
++	smp_call_function(machine_crash_nonpanic_core, NULL, false);
++	msecs = 1000; /* Wait at most a second for the other cpus to stop */
++	while ((atomic_read(&waiting_for_crash_ipi) > 0) && msecs) {
++		mdelay(1);
++		msecs--;
++	}
++	if (atomic_read(&waiting_for_crash_ipi) > 0)
++		pr_warn("Non-crashing CPUs did not react to IPI\n");
++
++	cpus_stopped = 1;
++}
++
+ static void machine_kexec_mask_interrupts(void)
+ {
+ 	unsigned int i;
+@@ -120,19 +141,8 @@ static void machine_kexec_mask_interrupts(void)
+ 
+ void machine_crash_shutdown(struct pt_regs *regs)
+ {
+-	unsigned long msecs;
+-
+ 	local_irq_disable();
+-
+-	atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
+-	smp_call_function(machine_crash_nonpanic_core, NULL, false);
+-	msecs = 1000; /* Wait at most a second for the other cpus to stop */
+-	while ((atomic_read(&waiting_for_crash_ipi) > 0) && msecs) {
+-		mdelay(1);
+-		msecs--;
+-	}
+-	if (atomic_read(&waiting_for_crash_ipi) > 0)
+-		pr_warn("Non-crashing CPUs did not react to IPI\n");
++	crash_smp_send_stop();
+ 
+ 	crash_save_cpu(regs, smp_processor_id());
+ 	machine_kexec_mask_interrupts();
+diff --git a/arch/arm/mach-davinci/board-da830-evm.c b/arch/arm/mach-davinci/board-da830-evm.c
+index f673cd7a6766..fb7c44cdadcb 100644
+--- a/arch/arm/mach-davinci/board-da830-evm.c
++++ b/arch/arm/mach-davinci/board-da830-evm.c
+@@ -205,12 +205,17 @@ static const short da830_evm_mmc_sd_pins[] = {
+ 	-1
+ };
+ 
++#define DA830_MMCSD_WP_PIN		GPIO_TO_PIN(2, 1)
++#define DA830_MMCSD_CD_PIN		GPIO_TO_PIN(2, 2)
++
+ static struct gpiod_lookup_table mmc_gpios_table = {
+ 	.dev_id = "da830-mmc.0",
+ 	.table = {
+ 		/* gpio chip 1 contains gpio range 32-63 */
+-		GPIO_LOOKUP("davinci_gpio.1", 2, "cd", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("davinci_gpio.1", 1, "wp", GPIO_ACTIVE_LOW),
++		GPIO_LOOKUP("davinci_gpio.0", DA830_MMCSD_CD_PIN, "cd",
++			    GPIO_ACTIVE_LOW),
++		GPIO_LOOKUP("davinci_gpio.0", DA830_MMCSD_WP_PIN, "wp",
++			    GPIO_ACTIVE_LOW),
+ 	},
+ };
+ 
+diff --git a/arch/arm/mach-davinci/board-da850-evm.c b/arch/arm/mach-davinci/board-da850-evm.c
+index d898a94f6eae..631363293887 100644
+--- a/arch/arm/mach-davinci/board-da850-evm.c
++++ b/arch/arm/mach-davinci/board-da850-evm.c
+@@ -763,12 +763,17 @@ static const short da850_evm_mcasp_pins[] __initconst = {
+ 	-1
+ };
+ 
++#define DA850_MMCSD_CD_PIN		GPIO_TO_PIN(4, 0)
++#define DA850_MMCSD_WP_PIN		GPIO_TO_PIN(4, 1)
++
+ static struct gpiod_lookup_table mmc_gpios_table = {
+ 	.dev_id = "da830-mmc.0",
+ 	.table = {
+ 		/* gpio chip 2 contains gpio range 64-95 */
+-		GPIO_LOOKUP("davinci_gpio.2", 0, "cd", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("davinci_gpio.2", 1, "wp", GPIO_ACTIVE_LOW),
++		GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_CD_PIN, "cd",
++			    GPIO_ACTIVE_LOW),
++		GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_WP_PIN, "wp",
++			    GPIO_ACTIVE_LOW),
+ 	},
+ };
+ 
+diff --git a/arch/arm/mach-davinci/board-dm355-evm.c b/arch/arm/mach-davinci/board-dm355-evm.c
+index d6b11907380c..9aedec083dbf 100644
+--- a/arch/arm/mach-davinci/board-dm355-evm.c
++++ b/arch/arm/mach-davinci/board-dm355-evm.c
+@@ -19,6 +19,7 @@
+ #include <linux/gpio.h>
+ #include <linux/gpio/machine.h>
+ #include <linux/clk.h>
++#include <linux/dm9000.h>
+ #include <linux/videodev2.h>
+ #include <media/i2c/tvp514x.h>
+ #include <linux/spi/spi.h>
+@@ -109,12 +110,15 @@ static struct platform_device davinci_nand_device = {
+ 	},
+ };
+ 
++#define DM355_I2C_SDA_PIN	GPIO_TO_PIN(0, 15)
++#define DM355_I2C_SCL_PIN	GPIO_TO_PIN(0, 14)
++
+ static struct gpiod_lookup_table i2c_recovery_gpiod_table = {
+-	.dev_id = "i2c_davinci",
++	.dev_id = "i2c_davinci.1",
+ 	.table = {
+-		GPIO_LOOKUP("davinci_gpio", 15, "sda",
++		GPIO_LOOKUP("davinci_gpio.0", DM355_I2C_SDA_PIN, "sda",
+ 			    GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
+-		GPIO_LOOKUP("davinci_gpio", 14, "scl",
++		GPIO_LOOKUP("davinci_gpio.0", DM355_I2C_SCL_PIN, "scl",
+ 			    GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
+ 	},
+ };
+@@ -179,11 +183,16 @@ static struct resource dm355evm_dm9000_rsrc[] = {
+ 	},
+ };
+ 
++static struct dm9000_plat_data dm335evm_dm9000_platdata;
++
+ static struct platform_device dm355evm_dm9000 = {
+ 	.name		= "dm9000",
+ 	.id		= -1,
+ 	.resource	= dm355evm_dm9000_rsrc,
+ 	.num_resources	= ARRAY_SIZE(dm355evm_dm9000_rsrc),
++	.dev		= {
++		.platform_data = &dm335evm_dm9000_platdata,
++	},
+ };
+ 
+ static struct tvp514x_platform_data tvp5146_pdata = {
+diff --git a/arch/arm/mach-davinci/board-dm644x-evm.c b/arch/arm/mach-davinci/board-dm644x-evm.c
+index 85e6fb33b1ee..50b246e315d1 100644
+--- a/arch/arm/mach-davinci/board-dm644x-evm.c
++++ b/arch/arm/mach-davinci/board-dm644x-evm.c
+@@ -17,6 +17,7 @@
+ #include <linux/i2c.h>
+ #include <linux/platform_data/pcf857x.h>
+ #include <linux/platform_data/at24.h>
++#include <linux/platform_data/gpio-davinci.h>
+ #include <linux/mtd/mtd.h>
+ #include <linux/mtd/rawnand.h>
+ #include <linux/mtd/partitions.h>
+@@ -596,12 +597,15 @@ static struct i2c_board_info __initdata i2c_info[] =  {
+ 	},
+ };
+ 
++#define DM644X_I2C_SDA_PIN	GPIO_TO_PIN(2, 12)
++#define DM644X_I2C_SCL_PIN	GPIO_TO_PIN(2, 11)
++
+ static struct gpiod_lookup_table i2c_recovery_gpiod_table = {
+-	.dev_id = "i2c_davinci",
++	.dev_id = "i2c_davinci.1",
+ 	.table = {
+-		GPIO_LOOKUP("davinci_gpio", 44, "sda",
++		GPIO_LOOKUP("davinci_gpio.0", DM644X_I2C_SDA_PIN, "sda",
+ 			    GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
+-		GPIO_LOOKUP("davinci_gpio", 43, "scl",
++		GPIO_LOOKUP("davinci_gpio.0", DM644X_I2C_SCL_PIN, "scl",
+ 			    GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
+ 	},
+ };
+diff --git a/arch/arm/mach-davinci/board-dm646x-evm.c b/arch/arm/mach-davinci/board-dm646x-evm.c
+index cb0a41e83582..4c458f714101 100644
+--- a/arch/arm/mach-davinci/board-dm646x-evm.c
++++ b/arch/arm/mach-davinci/board-dm646x-evm.c
+@@ -534,11 +534,12 @@ static struct vpif_display_config dm646x_vpif_display_config = {
+ 	.set_clock	= set_vpif_clock,
+ 	.subdevinfo	= dm646x_vpif_subdev,
+ 	.subdev_count	= ARRAY_SIZE(dm646x_vpif_subdev),
++	.i2c_adapter_id = 1,
+ 	.chan_config[0] = {
+ 		.outputs = dm6467_ch0_outputs,
+ 		.output_count = ARRAY_SIZE(dm6467_ch0_outputs),
+ 	},
+-	.card_name	= "DM646x EVM",
++	.card_name	= "DM646x EVM Video Display",
+ };
+ 
+ /**
+@@ -676,6 +677,7 @@ static struct vpif_capture_config dm646x_vpif_capture_cfg = {
+ 	.setup_input_channel_mode = setup_vpif_input_channel_mode,
+ 	.subdev_info = vpif_capture_sdev_info,
+ 	.subdev_count = ARRAY_SIZE(vpif_capture_sdev_info),
++	.i2c_adapter_id = 1,
+ 	.chan_config[0] = {
+ 		.inputs = dm6467_ch0_inputs,
+ 		.input_count = ARRAY_SIZE(dm6467_ch0_inputs),
+@@ -696,6 +698,7 @@ static struct vpif_capture_config dm646x_vpif_capture_cfg = {
+ 			.fid_pol = 0,
+ 		},
+ 	},
++	.card_name = "DM646x EVM Video Capture",
+ };
+ 
+ static void __init evm_init_video(void)
+diff --git a/arch/arm/mach-davinci/board-omapl138-hawk.c b/arch/arm/mach-davinci/board-omapl138-hawk.c
+index 62eb7d668890..10a027253250 100644
+--- a/arch/arm/mach-davinci/board-omapl138-hawk.c
++++ b/arch/arm/mach-davinci/board-omapl138-hawk.c
+@@ -123,12 +123,16 @@ static const short hawk_mmcsd0_pins[] = {
+ 	-1
+ };
+ 
++#define DA850_HAWK_MMCSD_CD_PIN		GPIO_TO_PIN(3, 12)
++#define DA850_HAWK_MMCSD_WP_PIN		GPIO_TO_PIN(3, 13)
++
+ static struct gpiod_lookup_table mmc_gpios_table = {
+ 	.dev_id = "da830-mmc.0",
+ 	.table = {
+-		/* CD: gpio3_12: gpio60: chip 1 contains gpio range 32-63*/
+-		GPIO_LOOKUP("davinci_gpio.0", 28, "cd", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("davinci_gpio.0", 29, "wp", GPIO_ACTIVE_LOW),
++		GPIO_LOOKUP("davinci_gpio.0", DA850_HAWK_MMCSD_CD_PIN, "cd",
++			    GPIO_ACTIVE_LOW),
++		GPIO_LOOKUP("davinci_gpio.0", DA850_HAWK_MMCSD_WP_PIN, "wp",
++			    GPIO_ACTIVE_LOW),
+ 	},
+ };
+ 
+diff --git a/arch/arm/mach-davinci/dm646x.c b/arch/arm/mach-davinci/dm646x.c
+index 6fc06a6ad4f8..137227b33397 100644
+--- a/arch/arm/mach-davinci/dm646x.c
++++ b/arch/arm/mach-davinci/dm646x.c
+@@ -495,7 +495,8 @@ static u8 dm646x_default_priorities[DAVINCI_N_AINTC_IRQ] = {
+ 	[IRQ_DM646X_MCASP0TXINT]        = 7,
+ 	[IRQ_DM646X_MCASP0RXINT]        = 7,
+ 	[IRQ_DM646X_RESERVED_3]         = 7,
+-	[IRQ_DM646X_MCASP1TXINT]        = 7,    /* clockevent */
++	[IRQ_DM646X_MCASP1TXINT]        = 7,
++	[IRQ_TINT0_TINT12]              = 7,    /* clockevent */
+ 	[IRQ_TINT0_TINT34]              = 7,    /* clocksource */
+ 	[IRQ_TINT1_TINT12]              = 7,    /* DSP timer */
+ 	[IRQ_TINT1_TINT34]              = 7,    /* system tick */
+diff --git a/arch/arm/mach-keystone/pm_domain.c b/arch/arm/mach-keystone/pm_domain.c
+index fe57e2692629..abca83d22ff3 100644
+--- a/arch/arm/mach-keystone/pm_domain.c
++++ b/arch/arm/mach-keystone/pm_domain.c
+@@ -29,6 +29,7 @@ static struct dev_pm_domain keystone_pm_domain = {
+ 
+ static struct pm_clk_notifier_block platform_domain_notifier = {
+ 	.pm_domain = &keystone_pm_domain,
++	.con_ids = { NULL },
+ };
+ 
+ static const struct of_device_id of_keystone_table[] = {
+diff --git a/arch/arm/mach-omap1/ams-delta-fiq.c b/arch/arm/mach-omap1/ams-delta-fiq.c
+index 793a24a53c52..d7ca9e2b40d2 100644
+--- a/arch/arm/mach-omap1/ams-delta-fiq.c
++++ b/arch/arm/mach-omap1/ams-delta-fiq.c
+@@ -58,22 +58,24 @@ static irqreturn_t deferred_fiq(int irq, void *dev_id)
+ 		irq_num = gpio_to_irq(gpio);
+ 		fiq_count = fiq_buffer[FIQ_CNT_INT_00 + gpio];
+ 
+-		while (irq_counter[gpio] < fiq_count) {
+-			if (gpio != AMS_DELTA_GPIO_PIN_KEYBRD_CLK) {
+-				struct irq_data *d = irq_get_irq_data(irq_num);
+-
+-				/*
+-				 * It looks like handle_edge_irq() that
+-				 * OMAP GPIO edge interrupts default to,
+-				 * expects interrupt already unmasked.
+-				 */
+-				if (irq_chip && irq_chip->irq_unmask)
++		if (irq_counter[gpio] < fiq_count &&
++				gpio != AMS_DELTA_GPIO_PIN_KEYBRD_CLK) {
++			struct irq_data *d = irq_get_irq_data(irq_num);
++
++			/*
++			 * handle_simple_irq() that OMAP GPIO edge
++			 * interrupts default to since commit 80ac93c27441
++			 * requires interrupt already acked and unmasked.
++			 */
++			if (irq_chip) {
++				if (irq_chip->irq_ack)
++					irq_chip->irq_ack(d);
++				if (irq_chip->irq_unmask)
+ 					irq_chip->irq_unmask(d);
+ 			}
+-			generic_handle_irq(irq_num);
+-
+-			irq_counter[gpio]++;
+ 		}
++		for (; irq_counter[gpio] < fiq_count; irq_counter[gpio]++)
++			generic_handle_irq(irq_num);
+ 	}
+ 	return IRQ_HANDLED;
+ }
+diff --git a/arch/arm/mach-omap2/powerdomain.c b/arch/arm/mach-omap2/powerdomain.c
+index 76eb6ec5f157..1e6a967cd2d5 100644
+--- a/arch/arm/mach-omap2/powerdomain.c
++++ b/arch/arm/mach-omap2/powerdomain.c
+@@ -188,7 +188,7 @@ static int _pwrdm_state_switch(struct powerdomain *pwrdm, int flag)
+ 				       ((prev & OMAP_POWERSTATE_MASK) << 0));
+ 			trace_power_domain_target_rcuidle(pwrdm->name,
+ 							  trace_state,
+-							  smp_processor_id());
++							  raw_smp_processor_id());
+ 		}
+ 		break;
+ 	default:
+@@ -518,7 +518,7 @@ int pwrdm_set_next_pwrst(struct powerdomain *pwrdm, u8 pwrst)
+ 	if (arch_pwrdm && arch_pwrdm->pwrdm_set_next_pwrst) {
+ 		/* Trace the pwrdm desired target state */
+ 		trace_power_domain_target_rcuidle(pwrdm->name, pwrst,
+-						  smp_processor_id());
++						  raw_smp_processor_id());
+ 		/* Program the pwrdm desired target state */
+ 		ret = arch_pwrdm->pwrdm_set_next_pwrst(pwrdm, pwrst);
+ 	}
+diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
+index 4c375e11ae95..af4ee2cef2f9 100644
+--- a/arch/arm/vfp/vfpmodule.c
++++ b/arch/arm/vfp/vfpmodule.c
+@@ -257,7 +257,7 @@ static void vfp_raise_exceptions(u32 exceptions, u32 inst, u32 fpscr, struct pt_
+ 
+ 	if (exceptions == VFP_EXCEPTION_ERROR) {
+ 		vfp_panic("unhandled bounce", inst);
+-		vfp_raise_sigfpe(FPE_FIXME, regs);
++		vfp_raise_sigfpe(FPE_FLTINV, regs);
+ 		return;
+ 	}
+ 
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx-p23x-q20x.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx-p23x-q20x.dtsi
+index aeb6d21a3bec..afc4001689fd 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx-p23x-q20x.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx-p23x-q20x.dtsi
+@@ -248,3 +248,7 @@
+ 	pinctrl-0 = <&uart_ao_a_pins>;
+ 	pinctrl-names = "default";
+ };
++
++&usb0 {
++	status = "okay";
++};
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
+index 9671f1e3c74a..40c674317987 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
+@@ -271,3 +271,15 @@
+ 	pinctrl-0 = <&uart_ao_a_pins>;
+ 	pinctrl-names = "default";
+ };
++
++&usb0 {
++	status = "okay";
++};
++
++&usb2_phy0 {
++	/*
++	 * even though the schematics don't show it:
++	 * HDMI_5V is also used as supply for the USB VBUS.
++	 */
++	phy-supply = <&hdmi_5v>;
++};
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-nexbox-a95x.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-nexbox-a95x.dts
+index 271f14279180..0fdebcc698a6 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-nexbox-a95x.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-nexbox-a95x.dts
+@@ -251,3 +251,7 @@
+ 	pinctrl-0 = <&uart_ao_a_pins>;
+ 	pinctrl-names = "default";
+ };
++
++&usb0 {
++	status = "okay";
++};
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi
+index 7005068346a0..26de81a24fd5 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi
+@@ -185,3 +185,7 @@
+ 	pinctrl-0 = <&uart_ao_a_pins>;
+ 	pinctrl-names = "default";
+ };
++
++&usb0 {
++	status = "okay";
++};
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+index c8514110b9da..7f542992850f 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+@@ -57,6 +57,67 @@
+ 			no-map;
+ 		};
+ 	};
++
++	soc {
++		usb0: usb@c9000000 {
++			status = "disabled";
++			compatible = "amlogic,meson-gxl-dwc3";
++			#address-cells = <2>;
++			#size-cells = <2>;
++			ranges;
++
++			clocks = <&clkc CLKID_USB>;
++			clock-names = "usb_general";
++			resets = <&reset RESET_USB_OTG>;
++			reset-names = "usb_otg";
++
++			dwc3: dwc3@c9000000 {
++				compatible = "snps,dwc3";
++				reg = <0x0 0xc9000000 0x0 0x100000>;
++				interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>;
++				dr_mode = "host";
++				maximum-speed = "high-speed";
++				snps,dis_u2_susphy_quirk;
++				phys = <&usb3_phy>, <&usb2_phy0>, <&usb2_phy1>;
++			};
++		};
++	};
++};
++
++&apb {
++	usb2_phy0: phy@78000 {
++		compatible = "amlogic,meson-gxl-usb2-phy";
++		#phy-cells = <0>;
++		reg = <0x0 0x78000 0x0 0x20>;
++		clocks = <&clkc CLKID_USB>;
++		clock-names = "phy";
++		resets = <&reset RESET_USB_OTG>;
++		reset-names = "phy";
++		status = "okay";
++	};
++
++	usb2_phy1: phy@78020 {
++		compatible = "amlogic,meson-gxl-usb2-phy";
++		#phy-cells = <0>;
++		reg = <0x0 0x78020 0x0 0x20>;
++		clocks = <&clkc CLKID_USB>;
++		clock-names = "phy";
++		resets = <&reset RESET_USB_OTG>;
++		reset-names = "phy";
++		status = "okay";
++	};
++
++	usb3_phy: phy@78080 {
++		compatible = "amlogic,meson-gxl-usb3-phy";
++		#phy-cells = <0>;
++		reg = <0x0 0x78080 0x0 0x20>;
++		interrupts = <GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>;
++		clocks = <&clkc CLKID_USB>, <&clkc_AO CLKID_AO_CEC_32K>;
++		clock-names = "phy", "peripheral";
++		resets = <&reset RESET_USB_OTG>, <&reset RESET_USB_OTG>;
++		reset-names = "phy", "peripheral";
++		status = "okay";
++	};
+ };
+ 
+ &ethmac {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dts b/arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dts
+index 1448c3dba08e..572b01ae8de1 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dts
+@@ -413,3 +413,7 @@
+ 	status = "okay";
+ 	vref-supply = <&vddio_ao18>;
+ };
++
++&usb0 {
++	status = "okay";
++};
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxm.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxm.dtsi
+index 19a798d2ae2f..fc53ed7afc11 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxm.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxm.dtsi
+@@ -117,6 +117,19 @@
+ 	};
+ };
+ 
++&apb {
++	usb2_phy2: phy@78040 {
++		compatible = "amlogic,meson-gxl-usb2-phy";
++		#phy-cells = <0>;
++		reg = <0x0 0x78040 0x0 0x20>;
++		clocks = <&clkc CLKID_USB>;
++		clock-names = "phy";
++		resets = <&reset RESET_USB_OTG>;
++		reset-names = "phy";
++		status = "okay";
++	};
++};
++
+ &clkc_AO {
+ 	compatible = "amlogic,meson-gxm-aoclkc", "amlogic,meson-gx-aoclkc";
+ };
+@@ -137,3 +150,7 @@
+ &hdmi_tx {
+ 	compatible = "amlogic,meson-gxm-dw-hdmi", "amlogic,meson-gx-dw-hdmi";
+ };
++
++&dwc3 {
++	phys = <&usb3_phy>, <&usb2_phy0>, <&usb2_phy1>, <&usb2_phy2>;
++};
+diff --git a/arch/arm64/boot/dts/broadcom/stingray/stingray-sata.dtsi b/arch/arm64/boot/dts/broadcom/stingray/stingray-sata.dtsi
+index 4b5465da81d8..8c68e0c26f1b 100644
+--- a/arch/arm64/boot/dts/broadcom/stingray/stingray-sata.dtsi
++++ b/arch/arm64/boot/dts/broadcom/stingray/stingray-sata.dtsi
+@@ -36,11 +36,11 @@
+ 		#size-cells = <1>;
+ 		ranges = <0x0 0x0 0x67d00000 0x00800000>;
+ 
+-		sata0: ahci@210000 {
++		sata0: ahci@0 {
+ 			compatible = "brcm,iproc-ahci", "generic-ahci";
+-			reg = <0x00210000 0x1000>;
++			reg = <0x00000000 0x1000>;
+ 			reg-names = "ahci";
+-			interrupts = <GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 321 IRQ_TYPE_LEVEL_HIGH>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			status = "disabled";
+@@ -52,9 +52,9 @@
+ 			};
+ 		};
+ 
+-		sata_phy0: sata_phy@212100 {
++		sata_phy0: sata_phy@2100 {
+ 			compatible = "brcm,iproc-sr-sata-phy";
+-			reg = <0x00212100 0x1000>;
++			reg = <0x00002100 0x1000>;
+ 			reg-names = "phy";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -66,11 +66,11 @@
+ 			};
+ 		};
+ 
+-		sata1: ahci@310000 {
++		sata1: ahci@10000 {
+ 			compatible = "brcm,iproc-ahci", "generic-ahci";
+-			reg = <0x00310000 0x1000>;
++			reg = <0x00010000 0x1000>;
+ 			reg-names = "ahci";
+-			interrupts = <GIC_SPI 347 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 323 IRQ_TYPE_LEVEL_HIGH>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			status = "disabled";
+@@ -82,9 +82,9 @@
+ 			};
+ 		};
+ 
+-		sata_phy1: sata_phy@312100 {
++		sata_phy1: sata_phy@12100 {
+ 			compatible = "brcm,iproc-sr-sata-phy";
+-			reg = <0x00312100 0x1000>;
++			reg = <0x00012100 0x1000>;
+ 			reg-names = "phy";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -96,11 +96,11 @@
+ 			};
+ 		};
+ 
+-		sata2: ahci@120000 {
++		sata2: ahci@20000 {
+ 			compatible = "brcm,iproc-ahci", "generic-ahci";
+-			reg = <0x00120000 0x1000>;
++			reg = <0x00020000 0x1000>;
+ 			reg-names = "ahci";
+-			interrupts = <GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 325 IRQ_TYPE_LEVEL_HIGH>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			status = "disabled";
+@@ -112,9 +112,9 @@
+ 			};
+ 		};
+ 
+-		sata_phy2: sata_phy@122100 {
++		sata_phy2: sata_phy@22100 {
+ 			compatible = "brcm,iproc-sr-sata-phy";
+-			reg = <0x00122100 0x1000>;
++			reg = <0x00022100 0x1000>;
+ 			reg-names = "phy";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -126,11 +126,11 @@
+ 			};
+ 		};
+ 
+-		sata3: ahci@130000 {
++		sata3: ahci@30000 {
+ 			compatible = "brcm,iproc-ahci", "generic-ahci";
+-			reg = <0x00130000 0x1000>;
++			reg = <0x00030000 0x1000>;
+ 			reg-names = "ahci";
+-			interrupts = <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 327 IRQ_TYPE_LEVEL_HIGH>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			status = "disabled";
+@@ -142,9 +142,9 @@
+ 			};
+ 		};
+ 
+-		sata_phy3: sata_phy@132100 {
++		sata_phy3: sata_phy@32100 {
+ 			compatible = "brcm,iproc-sr-sata-phy";
+-			reg = <0x00132100 0x1000>;
++			reg = <0x00032100 0x1000>;
+ 			reg-names = "phy";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -156,11 +156,11 @@
+ 			};
+ 		};
+ 
+-		sata4: ahci@330000 {
++		sata4: ahci@100000 {
+ 			compatible = "brcm,iproc-ahci", "generic-ahci";
+-			reg = <0x00330000 0x1000>;
++			reg = <0x00100000 0x1000>;
+ 			reg-names = "ahci";
+-			interrupts = <GIC_SPI 351 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 329 IRQ_TYPE_LEVEL_HIGH>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			status = "disabled";
+@@ -172,9 +172,9 @@
+ 			};
+ 		};
+ 
+-		sata_phy4: sata_phy@332100 {
++		sata_phy4: sata_phy@102100 {
+ 			compatible = "brcm,iproc-sr-sata-phy";
+-			reg = <0x00332100 0x1000>;
++			reg = <0x00102100 0x1000>;
+ 			reg-names = "phy";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -186,11 +186,11 @@
+ 			};
+ 		};
+ 
+-		sata5: ahci@400000 {
++		sata5: ahci@110000 {
+ 			compatible = "brcm,iproc-ahci", "generic-ahci";
+-			reg = <0x00400000 0x1000>;
++			reg = <0x00110000 0x1000>;
+ 			reg-names = "ahci";
+-			interrupts = <GIC_SPI 353 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			status = "disabled";
+@@ -202,9 +202,9 @@
+ 			};
+ 		};
+ 
+-		sata_phy5: sata_phy@402100 {
++		sata_phy5: sata_phy@112100 {
+ 			compatible = "brcm,iproc-sr-sata-phy";
+-			reg = <0x00402100 0x1000>;
++			reg = <0x00112100 0x1000>;
+ 			reg-names = "phy";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -216,11 +216,11 @@
+ 			};
+ 		};
+ 
+-		sata6: ahci@410000 {
++		sata6: ahci@120000 {
+ 			compatible = "brcm,iproc-ahci", "generic-ahci";
+-			reg = <0x00410000 0x1000>;
++			reg = <0x00120000 0x1000>;
+ 			reg-names = "ahci";
+-			interrupts = <GIC_SPI 355 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			status = "disabled";
+@@ -232,9 +232,9 @@
+ 			};
+ 		};
+ 
+-		sata_phy6: sata_phy@412100 {
++		sata_phy6: sata_phy@122100 {
+ 			compatible = "brcm,iproc-sr-sata-phy";
+-			reg = <0x00412100 0x1000>;
++			reg = <0x00122100 0x1000>;
+ 			reg-names = "phy";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -246,11 +246,11 @@
+ 			};
+ 		};
+ 
+-		sata7: ahci@420000 {
++		sata7: ahci@130000 {
+ 			compatible = "brcm,iproc-ahci", "generic-ahci";
+-			reg = <0x00420000 0x1000>;
++			reg = <0x00130000 0x1000>;
+ 			reg-names = "ahci";
+-			interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			status = "disabled";
+@@ -262,9 +262,9 @@
+ 			};
+ 		};
+ 
+-		sata_phy7: sata_phy@422100 {
++		sata_phy7: sata_phy@132100 {
+ 			compatible = "brcm,iproc-sr-sata-phy";
+-			reg = <0x00422100 0x1000>;
++			reg = <0x00132100 0x1000>;
+ 			reg-names = "phy";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra186-p3310.dtsi b/arch/arm64/boot/dts/nvidia/tegra186-p3310.dtsi
+index a8baad7b80df..13f57fff1477 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra186-p3310.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra186-p3310.dtsi
+@@ -46,7 +46,7 @@
+ 				compatible = "ethernet-phy-ieee802.3-c22";
+ 				reg = <0x0>;
+ 				interrupt-parent = <&gpio>;
+-				interrupts = <TEGRA_MAIN_GPIO(M, 5) IRQ_TYPE_LEVEL_HIGH>;
++				interrupts = <TEGRA_MAIN_GPIO(M, 5) IRQ_TYPE_LEVEL_LOW>;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld11.dtsi b/arch/arm64/boot/dts/socionext/uniphier-ld11.dtsi
+index cd7c2d0a1f64..4939ab25b506 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-ld11.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-ld11.dtsi
+@@ -330,7 +330,7 @@
+ 			mmc-ddr-1_8v;
+ 			mmc-hs200-1_8v;
+ 			mmc-pwrseq = <&emmc_pwrseq>;
+-			cdns,phy-input-delay-legacy = <4>;
++			cdns,phy-input-delay-legacy = <9>;
+ 			cdns,phy-input-delay-mmc-highspeed = <2>;
+ 			cdns,phy-input-delay-mmc-ddr = <3>;
+ 			cdns,phy-dll-delay-sdclk = <21>;
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+index 8a3276ba2da1..ef8b9a4d8910 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+@@ -435,7 +435,7 @@
+ 			mmc-ddr-1_8v;
+ 			mmc-hs200-1_8v;
+ 			mmc-pwrseq = <&emmc_pwrseq>;
+-			cdns,phy-input-delay-legacy = <4>;
++			cdns,phy-input-delay-legacy = <9>;
+ 			cdns,phy-input-delay-mmc-highspeed = <2>;
+ 			cdns,phy-input-delay-mmc-ddr = <3>;
+ 			cdns,phy-dll-delay-sdclk = <21>;
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+index 234fc58cc599..a1724f7e70fa 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+@@ -336,7 +336,7 @@
+ 			mmc-ddr-1_8v;
+ 			mmc-hs200-1_8v;
+ 			mmc-pwrseq = <&emmc_pwrseq>;
+-			cdns,phy-input-delay-legacy = <4>;
++			cdns,phy-input-delay-legacy = <9>;
+ 			cdns,phy-input-delay-mmc-highspeed = <2>;
+ 			cdns,phy-input-delay-mmc-ddr = <3>;
+ 			cdns,phy-dll-delay-sdclk = <21>;
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 8e32a6f28f00..be1e2174bb66 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -75,6 +75,7 @@
+ #define ARM_CPU_IMP_CAVIUM		0x43
+ #define ARM_CPU_IMP_BRCM		0x42
+ #define ARM_CPU_IMP_QCOM		0x51
++#define ARM_CPU_IMP_NVIDIA		0x4E
+ 
+ #define ARM_CPU_PART_AEM_V8		0xD0F
+ #define ARM_CPU_PART_FOUNDATION		0xD00
+@@ -98,6 +99,9 @@
+ #define QCOM_CPU_PART_FALKOR		0xC00
+ #define QCOM_CPU_PART_KRYO		0x200
+ 
++#define NVIDIA_CPU_PART_DENVER		0x003
++#define NVIDIA_CPU_PART_CARMEL		0x004
++
+ #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
+ #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
+ #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
+@@ -112,6 +116,8 @@
+ #define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
+ #define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR)
+ #define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO)
++#define MIDR_NVIDIA_DENVER MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_DENVER)
++#define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL)
+ 
+ #ifndef __ASSEMBLY__
+ 
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index 9ae31f7e2243..b3fb0ccd6010 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -25,6 +25,7 @@
+ #include <linux/sched/signal.h>
+ #include <linux/sched/task_stack.h>
+ #include <linux/mm.h>
++#include <linux/nospec.h>
+ #include <linux/smp.h>
+ #include <linux/ptrace.h>
+ #include <linux/user.h>
+@@ -249,15 +250,20 @@ static struct perf_event *ptrace_hbp_get_event(unsigned int note_type,
+ 
+ 	switch (note_type) {
+ 	case NT_ARM_HW_BREAK:
+-		if (idx < ARM_MAX_BRP)
+-			bp = tsk->thread.debug.hbp_break[idx];
++		if (idx >= ARM_MAX_BRP)
++			goto out;
++		idx = array_index_nospec(idx, ARM_MAX_BRP);
++		bp = tsk->thread.debug.hbp_break[idx];
+ 		break;
+ 	case NT_ARM_HW_WATCH:
+-		if (idx < ARM_MAX_WRP)
+-			bp = tsk->thread.debug.hbp_watch[idx];
++		if (idx >= ARM_MAX_WRP)
++			goto out;
++		idx = array_index_nospec(idx, ARM_MAX_WRP);
++		bp = tsk->thread.debug.hbp_watch[idx];
+ 		break;
+ 	}
+ 
++out:
+ 	return bp;
+ }
+ 
+@@ -1458,9 +1464,7 @@ static int compat_ptrace_gethbpregs(struct task_struct *tsk, compat_long_t num,
+ {
+ 	int ret;
+ 	u32 kdata;
+-	mm_segment_t old_fs = get_fs();
+ 
+-	set_fs(KERNEL_DS);
+ 	/* Watchpoint */
+ 	if (num < 0) {
+ 		ret = compat_ptrace_hbp_get(NT_ARM_HW_WATCH, tsk, num, &kdata);
+@@ -1471,7 +1475,6 @@ static int compat_ptrace_gethbpregs(struct task_struct *tsk, compat_long_t num,
+ 	} else {
+ 		ret = compat_ptrace_hbp_get(NT_ARM_HW_BREAK, tsk, num, &kdata);
+ 	}
+-	set_fs(old_fs);
+ 
+ 	if (!ret)
+ 		ret = put_user(kdata, data);
+@@ -1484,7 +1487,6 @@ static int compat_ptrace_sethbpregs(struct task_struct *tsk, compat_long_t num,
+ {
+ 	int ret;
+ 	u32 kdata = 0;
+-	mm_segment_t old_fs = get_fs();
+ 
+ 	if (num == 0)
+ 		return 0;
+@@ -1493,12 +1495,10 @@ static int compat_ptrace_sethbpregs(struct task_struct *tsk, compat_long_t num,
+ 	if (ret)
+ 		return ret;
+ 
+-	set_fs(KERNEL_DS);
+ 	if (num < 0)
+ 		ret = compat_ptrace_hbp_set(NT_ARM_HW_WATCH, tsk, num, &kdata);
+ 	else
+ 		ret = compat_ptrace_hbp_set(NT_ARM_HW_BREAK, tsk, num, &kdata);
+-	set_fs(old_fs);
+ 
+ 	return ret;
+ }
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index eb2d15147e8d..e904f4ed49ff 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -243,7 +243,8 @@ void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
+ 	 * If we were single stepping, we want to get the step exception after
+ 	 * we return from the trap.
+ 	 */
+-	user_fastforward_single_step(current);
++	if (user_mode(regs))
++		user_fastforward_single_step(current);
+ }
+ 
+ static LIST_HEAD(undef_hook);
+diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
+index dabfc1ecda3d..12145874c02b 100644
+--- a/arch/arm64/mm/kasan_init.c
++++ b/arch/arm64/mm/kasan_init.c
+@@ -204,7 +204,7 @@ void __init kasan_init(void)
+ 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+ 
+ 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
+-			   pfn_to_nid(virt_to_pfn(lm_alias(_text))));
++			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+ 
+ 	kasan_populate_zero_shadow((void *)KASAN_SHADOW_START,
+ 				   (void *)mod_shadow_start);
+@@ -224,7 +224,7 @@ void __init kasan_init(void)
+ 
+ 		kasan_map_populate((unsigned long)kasan_mem_to_shadow(start),
+ 				   (unsigned long)kasan_mem_to_shadow(end),
+-				   pfn_to_nid(virt_to_pfn(start)));
++				   early_pfn_to_nid(virt_to_pfn(start)));
+ 	}
+ 
+ 	/*
+diff --git a/arch/hexagon/include/asm/io.h b/arch/hexagon/include/asm/io.h
+index 9e8621d94ee9..e17262ad125e 100644
+--- a/arch/hexagon/include/asm/io.h
++++ b/arch/hexagon/include/asm/io.h
+@@ -216,6 +216,12 @@ static inline void memcpy_toio(volatile void __iomem *dst, const void *src,
+ 	memcpy((void *) dst, src, count);
+ }
+ 
++static inline void memset_io(volatile void __iomem *addr, int value,
++			     size_t size)
++{
++	memset((void __force *)addr, value, size);
++}
++
+ #define PCI_IO_ADDR	(volatile void __iomem *)
+ 
+ /*
+diff --git a/arch/hexagon/lib/checksum.c b/arch/hexagon/lib/checksum.c
+index 617506d1a559..7cd0a2259269 100644
+--- a/arch/hexagon/lib/checksum.c
++++ b/arch/hexagon/lib/checksum.c
+@@ -199,3 +199,4 @@ csum_partial_copy_nocheck(const void *src, void *dst, int len, __wsum sum)
+ 	memcpy(dst, src, len);
+ 	return csum_partial(dst, len, sum);
+ }
++EXPORT_SYMBOL(csum_partial_copy_nocheck);
+diff --git a/arch/mips/boot/dts/img/boston.dts b/arch/mips/boot/dts/img/boston.dts
+index 2cd49b60e030..f7aad80c69ab 100644
+--- a/arch/mips/boot/dts/img/boston.dts
++++ b/arch/mips/boot/dts/img/boston.dts
+@@ -51,6 +51,8 @@
+ 		ranges = <0x02000000 0 0x40000000
+ 			  0x40000000 0 0x40000000>;
+ 
++		bus-range = <0x00 0xff>;
++
+ 		interrupt-map-mask = <0 0 0 7>;
+ 		interrupt-map = <0 0 0 1 &pci0_intc 1>,
+ 				<0 0 0 2 &pci0_intc 2>,
+@@ -79,6 +81,8 @@
+ 		ranges = <0x02000000 0 0x20000000
+ 			  0x20000000 0 0x20000000>;
+ 
++		bus-range = <0x00 0xff>;
++
+ 		interrupt-map-mask = <0 0 0 7>;
+ 		interrupt-map = <0 0 0 1 &pci1_intc 1>,
+ 				<0 0 0 2 &pci1_intc 2>,
+@@ -107,6 +111,8 @@
+ 		ranges = <0x02000000 0 0x16000000
+ 			  0x16000000 0 0x100000>;
+ 
++		bus-range = <0x00 0xff>;
++
+ 		interrupt-map-mask = <0 0 0 7>;
+ 		interrupt-map = <0 0 0 1 &pci2_intc 1>,
+ 				<0 0 0 2 &pci2_intc 2>,
+diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h
+index 0cbf3af37eca..a7d0b836f2f7 100644
+--- a/arch/mips/include/asm/io.h
++++ b/arch/mips/include/asm/io.h
+@@ -307,7 +307,7 @@ static inline void iounmap(const volatile void __iomem *addr)
+ #if defined(CONFIG_CPU_CAVIUM_OCTEON) || defined(CONFIG_LOONGSON3_ENHANCEMENT)
+ #define war_io_reorder_wmb()		wmb()
+ #else
+-#define war_io_reorder_wmb()		do { } while (0)
++#define war_io_reorder_wmb()		barrier()
+ #endif
+ 
+ #define __BUILD_MEMORY_SINGLE(pfx, bwlq, type, irq)			\
+@@ -377,6 +377,8 @@ static inline type pfx##read##bwlq(const volatile void __iomem *mem)	\
+ 		BUG();							\
+ 	}								\
+ 									\
++	/* prevent prefetching of coherent DMA data prematurely */	\
++	rmb();								\
+ 	return pfx##ioswab##bwlq(__mem, __val);				\
+ }
+ 
+diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c
+index d4240aa7f8b1..0f9ccd76a8ea 100644
+--- a/arch/parisc/kernel/drivers.c
++++ b/arch/parisc/kernel/drivers.c
+@@ -448,7 +448,8 @@ static int match_by_id(struct device * dev, void * data)
+  * Checks all the children of @parent for a matching @id.  If none
+  * found, it allocates a new device and returns it.
+  */
+-static struct parisc_device * alloc_tree_node(struct device *parent, char id)
++static struct parisc_device * __init alloc_tree_node(
++			struct device *parent, char id)
+ {
+ 	struct match_id_data d = {
+ 		.id = id,
+@@ -825,8 +826,8 @@ void walk_lower_bus(struct parisc_device *dev)
+  * devices which are not physically connected (such as extra serial &
+  * keyboard ports).  This problem is not yet solved.
+  */
+-static void walk_native_bus(unsigned long io_io_low, unsigned long io_io_high,
+-                            struct device *parent)
++static void __init walk_native_bus(unsigned long io_io_low,
++	unsigned long io_io_high, struct device *parent)
+ {
+ 	int i, devices_found = 0;
+ 	unsigned long hpa = io_io_low;
+diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
+index 4065b5e48c9d..5e26dbede5fc 100644
+--- a/arch/parisc/kernel/smp.c
++++ b/arch/parisc/kernel/smp.c
+@@ -423,8 +423,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+ }
+ 
+ #ifdef CONFIG_PROC_FS
+-int __init
+-setup_profiling_timer(unsigned int multiplier)
++int setup_profiling_timer(unsigned int multiplier)
+ {
+ 	return -EINVAL;
+ }
+diff --git a/arch/parisc/kernel/time.c b/arch/parisc/kernel/time.c
+index f7e684560186..42a873226a04 100644
+--- a/arch/parisc/kernel/time.c
++++ b/arch/parisc/kernel/time.c
+@@ -205,7 +205,7 @@ static int __init rtc_init(void)
+ device_initcall(rtc_init);
+ #endif
+ 
+-void read_persistent_clock(struct timespec *ts)
++void read_persistent_clock64(struct timespec64 *ts)
+ {
+ 	static struct pdc_tod tod_data;
+ 	if (pdc_tod_read(&tod_data) == 0) {
+diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
+index 9f421641a35c..16b077801a5f 100644
+--- a/arch/powerpc/include/asm/topology.h
++++ b/arch/powerpc/include/asm/topology.h
+@@ -91,6 +91,7 @@ extern int start_topology_update(void);
+ extern int stop_topology_update(void);
+ extern int prrn_is_enabled(void);
+ extern int find_and_online_cpu_nid(int cpu);
++extern int timed_topology_update(int nsecs);
+ #else
+ static inline int start_topology_update(void)
+ {
+@@ -108,16 +109,12 @@ static inline int find_and_online_cpu_nid(int cpu)
+ {
+ 	return 0;
+ }
++static inline int timed_topology_update(int nsecs)
++{
++	return 0;
++}
+ #endif /* CONFIG_NUMA && CONFIG_PPC_SPLPAR */
+ 
+-#if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_NEED_MULTIPLE_NODES)
+-#if defined(CONFIG_PPC_SPLPAR)
+-extern int timed_topology_update(int nsecs);
+-#else
+-#define	timed_topology_update(nsecs)
+-#endif /* CONFIG_PPC_SPLPAR */
+-#endif /* CONFIG_HOTPLUG_CPU || CONFIG_NEED_MULTIPLE_NODES */
+-
+ #include <asm-generic/topology.h>
+ 
+ #ifdef CONFIG_SMP
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index c27557aff394..e96b8e1cbd8c 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -864,6 +864,17 @@ static void init_fallback_flush(void)
+ 	int cpu;
+ 
+ 	l1d_size = ppc64_caches.l1d.size;
++
++	/*
++	 * If there is no d-cache-size property in the device tree, l1d_size
++	 * could be zero. That leads to the loop in the asm wrapping around to
++	 * 2^64-1, and then walking off the end of the fallback area and
++	 * eventually causing a page fault which is fatal. Just default to
++	 * something vaguely sane.
++	 */
++	if (!l1d_size)
++		l1d_size = (64 * 1024);
++
+ 	limit = min(ppc64_bolted_size(), ppc64_rma_size);
+ 
+ 	/*
+diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
+index 6038e2e7aee0..876d4f294fdd 100644
+--- a/arch/powerpc/kvm/booke.c
++++ b/arch/powerpc/kvm/booke.c
+@@ -305,6 +305,13 @@ void kvmppc_core_queue_fpunavail(struct kvm_vcpu *vcpu)
+ 	kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_FP_UNAVAIL);
+ }
+ 
++#ifdef CONFIG_ALTIVEC
++void kvmppc_core_queue_vec_unavail(struct kvm_vcpu *vcpu)
++{
++	kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_ALTIVEC_UNAVAIL);
++}
++#endif
++
+ void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu)
+ {
+ 	kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DECREMENTER);
+diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c
+index 9033c8194eda..ccc421503363 100644
+--- a/arch/powerpc/platforms/cell/spufs/sched.c
++++ b/arch/powerpc/platforms/cell/spufs/sched.c
+@@ -1093,7 +1093,7 @@ static int show_spu_loadavg(struct seq_file *s, void *private)
+ 		LOAD_INT(c), LOAD_FRAC(c),
+ 		count_active_contexts(),
+ 		atomic_read(&nr_spu_contexts),
+-		idr_get_cursor(&task_active_pid_ns(current)->idr));
++		idr_get_cursor(&task_active_pid_ns(current)->idr) - 1);
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
+index de470caf0784..fc222a0c2ac4 100644
+--- a/arch/powerpc/platforms/powernv/memtrace.c
++++ b/arch/powerpc/platforms/powernv/memtrace.c
+@@ -82,19 +82,6 @@ static const struct file_operations memtrace_fops = {
+ 	.open	= simple_open,
+ };
+ 
+-static void flush_memory_region(u64 base, u64 size)
+-{
+-	unsigned long line_size = ppc64_caches.l1d.size;
+-	u64 end = base + size;
+-	u64 addr;
+-
+-	base = round_down(base, line_size);
+-	end = round_up(end, line_size);
+-
+-	for (addr = base; addr < end; addr += line_size)
+-		asm volatile("dcbf 0,%0" : "=r" (addr) :: "memory");
+-}
+-
+ static int check_memblock_online(struct memory_block *mem, void *arg)
+ {
+ 	if (mem->state != MEM_ONLINE)
+@@ -132,10 +119,6 @@ static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
+ 	walk_memory_range(start_pfn, end_pfn, (void *)MEM_OFFLINE,
+ 			  change_memblock_state);
+ 
+-	/* RCU grace period? */
+-	flush_memory_region((u64)__va(start_pfn << PAGE_SHIFT),
+-			    nr_pages << PAGE_SHIFT);
+-
+ 	lock_device_hotplug();
+ 	remove_memory(nid, start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+ 	unlock_device_hotplug();
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 04807c7f64cc..1225d9add766 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -11,6 +11,7 @@ config RISCV
+ 	select ARCH_WANT_FRAME_POINTERS
+ 	select CLONE_BACKWARDS
+ 	select COMMON_CLK
++	select DMA_DIRECT_OPS
+ 	select GENERIC_CLOCKEVENTS
+ 	select GENERIC_CPU_DEVICES
+ 	select GENERIC_IRQ_SHOW
+@@ -88,9 +89,6 @@ config PGTABLE_LEVELS
+ config HAVE_KPROBES
+ 	def_bool n
+ 
+-config DMA_DIRECT_OPS
+-	def_bool y
+-
+ menu "Platform type"
+ 
+ choice
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index 324568d33921..f6561b783b61 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -52,7 +52,7 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+ # Add -lgcc so rv32 gets static muldi3 and lshrdi3 definitions.
+ # Make sure only to export the intended __vdso_xxx symbol offsets.
+ quiet_cmd_vdsold = VDSOLD  $@
+-      cmd_vdsold = $(CC) $(KCFLAGS) -nostdlib $(SYSCFLAGS_$(@F)) \
++      cmd_vdsold = $(CC) $(KCFLAGS) $(call cc-option, -no-pie) -nostdlib $(SYSCFLAGS_$(@F)) \
+                            -Wl,-T,$(filter-out FORCE,$^) -o $@.tmp -lgcc && \
+                    $(CROSS_COMPILE)objcopy \
+                            $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@
+diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
+index 97fe29316476..1851eaeee131 100644
+--- a/arch/sh/Kconfig
++++ b/arch/sh/Kconfig
+@@ -9,6 +9,7 @@ config SUPERH
+ 	select HAVE_IDE if HAS_IOPORT_MAP
+ 	select HAVE_MEMBLOCK
+ 	select HAVE_MEMBLOCK_NODE_MAP
++	select NO_BOOTMEM
+ 	select ARCH_DISCARD_MEMBLOCK
+ 	select HAVE_OPROFILE
+ 	select HAVE_GENERIC_DMA_COHERENT
+diff --git a/arch/sh/kernel/cpu/sh2/probe.c b/arch/sh/kernel/cpu/sh2/probe.c
+index 4205f6d42b69..a5bd03642678 100644
+--- a/arch/sh/kernel/cpu/sh2/probe.c
++++ b/arch/sh/kernel/cpu/sh2/probe.c
+@@ -43,7 +43,11 @@ void __ref cpu_probe(void)
+ #endif
+ 
+ #if defined(CONFIG_CPU_J2)
++#if defined(CONFIG_SMP)
+ 	unsigned cpu = hard_smp_processor_id();
++#else
++	unsigned cpu = 0;
++#endif
+ 	if (cpu == 0) of_scan_flat_dt(scan_cache, NULL);
+ 	if (j2_ccr_base) __raw_writel(0x80000303, j2_ccr_base + 4*cpu);
+ 	if (cpu != 0) return;
+diff --git a/arch/sh/kernel/setup.c b/arch/sh/kernel/setup.c
+index b95c411d0333..b075b030218a 100644
+--- a/arch/sh/kernel/setup.c
++++ b/arch/sh/kernel/setup.c
+@@ -11,7 +11,6 @@
+ #include <linux/ioport.h>
+ #include <linux/init.h>
+ #include <linux/initrd.h>
+-#include <linux/bootmem.h>
+ #include <linux/console.h>
+ #include <linux/root_dev.h>
+ #include <linux/utsname.h>
+diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
+index ce0bbaa7e404..4034035fbede 100644
+--- a/arch/sh/mm/init.c
++++ b/arch/sh/mm/init.c
+@@ -211,59 +211,15 @@ void __init allocate_pgdat(unsigned int nid)
+ 
+ 	NODE_DATA(nid) = __va(phys);
+ 	memset(NODE_DATA(nid), 0, sizeof(struct pglist_data));
+-
+-	NODE_DATA(nid)->bdata = &bootmem_node_data[nid];
+ #endif
+ 
+ 	NODE_DATA(nid)->node_start_pfn = start_pfn;
+ 	NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
+ }
+ 
+-static void __init bootmem_init_one_node(unsigned int nid)
+-{
+-	unsigned long total_pages, paddr;
+-	unsigned long end_pfn;
+-	struct pglist_data *p;
+-
+-	p = NODE_DATA(nid);
+-
+-	/* Nothing to do.. */
+-	if (!p->node_spanned_pages)
+-		return;
+-
+-	end_pfn = pgdat_end_pfn(p);
+-
+-	total_pages = bootmem_bootmap_pages(p->node_spanned_pages);
+-
+-	paddr = memblock_alloc(total_pages << PAGE_SHIFT, PAGE_SIZE);
+-	if (!paddr)
+-		panic("Can't allocate bootmap for nid[%d]\n", nid);
+-
+-	init_bootmem_node(p, paddr >> PAGE_SHIFT, p->node_start_pfn, end_pfn);
+-
+-	free_bootmem_with_active_regions(nid, end_pfn);
+-
+-	/*
+-	 * XXX Handle initial reservations for the system memory node
+-	 * only for the moment, we'll refactor this later for handling
+-	 * reservations in other nodes.
+-	 */
+-	if (nid == 0) {
+-		struct memblock_region *reg;
+-
+-		/* Reserve the sections we're already using. */
+-		for_each_memblock(reserved, reg) {
+-			reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);
+-		}
+-	}
+-
+-	sparse_memory_present_with_active_regions(nid);
+-}
+-
+ static void __init do_init_bootmem(void)
+ {
+ 	struct memblock_region *reg;
+-	int i;
+ 
+ 	/* Add active regions with valid PFNs. */
+ 	for_each_memblock(memory, reg) {
+@@ -279,9 +235,12 @@ static void __init do_init_bootmem(void)
+ 
+ 	plat_mem_setup();
+ 
+-	for_each_online_node(i)
+-		bootmem_init_one_node(i);
++	for_each_memblock(memory, reg) {
++		int nid = memblock_get_region_node(reg);
+ 
++		memory_present(nid, memblock_region_memory_base_pfn(reg),
++			memblock_region_memory_end_pfn(reg));
++	}
+ 	sparse_init();
+ }
+ 
+@@ -322,7 +281,6 @@ void __init paging_init(void)
+ {
+ 	unsigned long max_zone_pfns[MAX_NR_ZONES];
+ 	unsigned long vaddr, end;
+-	int nid;
+ 
+ 	sh_mv.mv_mem_init();
+ 
+@@ -377,21 +335,7 @@ void __init paging_init(void)
+ 	kmap_coherent_init();
+ 
+ 	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
+-
+-	for_each_online_node(nid) {
+-		pg_data_t *pgdat = NODE_DATA(nid);
+-		unsigned long low, start_pfn;
+-
+-		start_pfn = pgdat->bdata->node_min_pfn;
+-		low = pgdat->bdata->node_low_pfn;
+-
+-		if (max_zone_pfns[ZONE_NORMAL] < low)
+-			max_zone_pfns[ZONE_NORMAL] = low;
+-
+-		printk("Node %u: start_pfn = 0x%lx, low = 0x%lx\n",
+-		       nid, start_pfn, low);
+-	}
+-
++	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+ 	free_area_init_nodes(max_zone_pfns);
+ }
+ 
+diff --git a/arch/sh/mm/numa.c b/arch/sh/mm/numa.c
+index 05713d190247..830e8b3684e4 100644
+--- a/arch/sh/mm/numa.c
++++ b/arch/sh/mm/numa.c
+@@ -8,7 +8,6 @@
+  * for more details.
+  */
+ #include <linux/module.h>
+-#include <linux/bootmem.h>
+ #include <linux/memblock.h>
+ #include <linux/mm.h>
+ #include <linux/numa.h>
+@@ -26,9 +25,7 @@ EXPORT_SYMBOL_GPL(node_data);
+  */
+ void __init setup_bootmem_node(int nid, unsigned long start, unsigned long end)
+ {
+-	unsigned long bootmap_pages;
+ 	unsigned long start_pfn, end_pfn;
+-	unsigned long bootmem_paddr;
+ 
+ 	/* Don't allow bogus node assignment */
+ 	BUG_ON(nid >= MAX_NUMNODES || nid <= 0);
+@@ -48,25 +45,9 @@ void __init setup_bootmem_node(int nid, unsigned long start, unsigned long end)
+ 					     SMP_CACHE_BYTES, end));
+ 	memset(NODE_DATA(nid), 0, sizeof(struct pglist_data));
+ 
+-	NODE_DATA(nid)->bdata = &bootmem_node_data[nid];
+ 	NODE_DATA(nid)->node_start_pfn = start_pfn;
+ 	NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
+ 
+-	/* Node-local bootmap */
+-	bootmap_pages = bootmem_bootmap_pages(end_pfn - start_pfn);
+-	bootmem_paddr = memblock_alloc_base(bootmap_pages << PAGE_SHIFT,
+-				       PAGE_SIZE, end);
+-	init_bootmem_node(NODE_DATA(nid), bootmem_paddr >> PAGE_SHIFT,
+-			  start_pfn, end_pfn);
+-
+-	free_bootmem_with_active_regions(nid, end_pfn);
+-
+-	/* Reserve the pgdat and bootmap space with the bootmem allocator */
+-	reserve_bootmem_node(NODE_DATA(nid), start_pfn << PAGE_SHIFT,
+-			     sizeof(struct pglist_data), BOOTMEM_DEFAULT);
+-	reserve_bootmem_node(NODE_DATA(nid), bootmem_paddr,
+-			     bootmap_pages << PAGE_SHIFT, BOOTMEM_DEFAULT);
+-
+ 	/* It's up */
+ 	node_set_online(nid);
+ 
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 39cd0b36c790..9296b41ac342 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3331,7 +3331,8 @@ static void intel_pmu_cpu_starting(int cpu)
+ 
+ 	cpuc->lbr_sel = NULL;
+ 
+-	flip_smm_bit(&x86_pmu.attr_freeze_on_smi);
++	if (x86_pmu.version > 1)
++		flip_smm_bit(&x86_pmu.attr_freeze_on_smi);
+ 
+ 	if (!cpuc->shared_regs)
+ 		return;
+@@ -3494,6 +3495,8 @@ static __initconst const struct x86_pmu core_pmu = {
+ 	.cpu_dying		= intel_pmu_cpu_dying,
+ };
+ 
++static struct attribute *intel_pmu_attrs[];
++
+ static __initconst const struct x86_pmu intel_pmu = {
+ 	.name			= "Intel",
+ 	.handle_irq		= intel_pmu_handle_irq,
+@@ -3524,6 +3527,8 @@ static __initconst const struct x86_pmu intel_pmu = {
+ 	.format_attrs		= intel_arch3_formats_attr,
+ 	.events_sysfs_show	= intel_event_sysfs_show,
+ 
++	.attrs			= intel_pmu_attrs,
++
+ 	.cpu_prepare		= intel_pmu_cpu_prepare,
+ 	.cpu_starting		= intel_pmu_cpu_starting,
+ 	.cpu_dying		= intel_pmu_cpu_dying,
+@@ -3902,8 +3907,6 @@ __init int intel_pmu_init(void)
+ 
+ 	x86_pmu.max_pebs_events		= min_t(unsigned, MAX_PEBS_EVENTS, x86_pmu.num_counters);
+ 
+-
+-	x86_pmu.attrs			= intel_pmu_attrs;
+ 	/*
+ 	 * Quirk: v2 perfmon does not report fixed-purpose events, so
+ 	 * assume at least 3 events, when not running in a hypervisor:
+diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h
+index b3e32b010ab1..c2c01f84df75 100644
+--- a/arch/x86/include/asm/insn.h
++++ b/arch/x86/include/asm/insn.h
+@@ -208,4 +208,22 @@ static inline int insn_offset_immediate(struct insn *insn)
+ 	return insn_offset_displacement(insn) + insn->displacement.nbytes;
+ }
+ 
++#define POP_SS_OPCODE 0x1f
++#define MOV_SREG_OPCODE 0x8e
++
++/*
++ * Intel SDM Vol.3A 6.8.3 states;
++ * "Any single-step trap that would be delivered following the MOV to SS
++ * instruction or POP to SS instruction (because EFLAGS.TF is 1) is
++ * suppressed."
++ * This function returns true if @insn is MOV SS or POP SS. On these
++ * instructions, single stepping is suppressed.
++ */
++static inline int insn_masking_exception(struct insn *insn)
++{
++	return insn->opcode.bytes[0] == POP_SS_OPCODE ||
++		(insn->opcode.bytes[0] == MOV_SREG_OPCODE &&
++		 X86_MODRM_REG(insn->modrm.bytes[0]) == 2);
++}
++
+ #endif /* _ASM_X86_INSN_H */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 4b0539a52c4c..e2201c9c3f20 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1019,6 +1019,7 @@ struct kvm_x86_ops {
+ 
+ 	bool (*has_wbinvd_exit)(void);
+ 
++	u64 (*read_l1_tsc_offset)(struct kvm_vcpu *vcpu);
+ 	void (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);
+ 
+ 	void (*get_exit_info)(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2);
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index c895f38a7a5e..0b2330e19169 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -751,6 +751,9 @@ static const struct _tlb_table intel_tlb_table[] = {
+ 	{ 0x5d, TLB_DATA_4K_4M,		256,	" TLB_DATA 4 KByte and 4 MByte pages" },
+ 	{ 0x61, TLB_INST_4K,		48,	" TLB_INST 4 KByte pages, full associative" },
+ 	{ 0x63, TLB_DATA_1G,		4,	" TLB_DATA 1 GByte pages, 4-way set associative" },
++	{ 0x6b, TLB_DATA_4K,		256,	" TLB_DATA 4 KByte pages, 8-way associative" },
++	{ 0x6c, TLB_DATA_2M_4M,		128,	" TLB_DATA 2 MByte or 4 MByte pages, 8-way associative" },
++	{ 0x6d, TLB_DATA_1G,		16,	" TLB_DATA 1 GByte pages, fully associative" },
+ 	{ 0x76, TLB_INST_2M_4M,		8,	" TLB_INST 2-MByte or 4-MByte pages, fully associative" },
+ 	{ 0xb0, TLB_INST_4K,		128,	" TLB_INST 4 KByte pages, 4-way set associative" },
+ 	{ 0xb1, TLB_INST_2M_4M,		4,	" TLB_INST 2M pages, 4-way, 8 entries or 4M pages, 4-way entries" },
+diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
+index fb095ba0c02f..f24cd9f1799a 100644
+--- a/arch/x86/kernel/kexec-bzimage64.c
++++ b/arch/x86/kernel/kexec-bzimage64.c
+@@ -398,11 +398,10 @@ static void *bzImage64_load(struct kimage *image, char *kernel,
+ 	 * little bit simple
+ 	 */
+ 	efi_map_sz = efi_get_runtime_map_size();
+-	efi_map_sz = ALIGN(efi_map_sz, 16);
+ 	params_cmdline_sz = sizeof(struct boot_params) + cmdline_len +
+ 				MAX_ELFCOREHDR_STR_LEN;
+ 	params_cmdline_sz = ALIGN(params_cmdline_sz, 16);
+-	kbuf.bufsz = params_cmdline_sz + efi_map_sz +
++	kbuf.bufsz = params_cmdline_sz + ALIGN(efi_map_sz, 16) +
+ 				sizeof(struct setup_data) +
+ 				sizeof(struct efi_setup_data);
+ 
+@@ -410,7 +409,7 @@ static void *bzImage64_load(struct kimage *image, char *kernel,
+ 	if (!params)
+ 		return ERR_PTR(-ENOMEM);
+ 	efi_map_offset = params_cmdline_sz;
+-	efi_setup_data_offset = efi_map_offset + efi_map_sz;
++	efi_setup_data_offset = efi_map_offset + ALIGN(efi_map_sz, 16);
+ 
+ 	/* Copy setup header onto bootparams. Documentation/x86/boot.txt */
+ 	setup_header_size = 0x0202 + kernel[0x0201] - setup_hdr_offset;
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 0715f827607c..6f4d42377fe5 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -370,6 +370,10 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
+ 	if (insn->opcode.bytes[0] == BREAKPOINT_INSTRUCTION)
+ 		return 0;
+ 
++	/* We should not singlestep on the exception masking instructions */
++	if (insn_masking_exception(insn))
++		return 0;
++
+ #ifdef CONFIG_X86_64
+ 	/* Only x86_64 has RIP relative instructions */
+ 	if (insn_rip_relative(insn)) {
+diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
+index 85c7ef23d99f..c84bb5396958 100644
+--- a/arch/x86/kernel/uprobes.c
++++ b/arch/x86/kernel/uprobes.c
+@@ -299,6 +299,10 @@ static int uprobe_init_insn(struct arch_uprobe *auprobe, struct insn *insn, bool
+ 	if (is_prefix_bad(insn))
+ 		return -ENOTSUPP;
+ 
++	/* We should not singlestep on the exception masking instructions */
++	if (insn_masking_exception(insn))
++		return -ENOTSUPP;
++
+ 	if (x86_64)
+ 		good_insns = good_insns_64;
+ 	else
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index dc97f2544b6f..5d13abecb384 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -1223,7 +1223,7 @@ static int kvm_hv_hypercall_complete_userspace(struct kvm_vcpu *vcpu)
+ 	struct kvm_run *run = vcpu->run;
+ 
+ 	kvm_hv_hypercall_set_result(vcpu, run->hyperv.u.hcall.result);
+-	return 1;
++	return kvm_skip_emulated_instruction(vcpu);
+ }
+ 
+ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index dbbd762359a9..569aa55d0aba 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -1313,12 +1313,23 @@ static void init_sys_seg(struct vmcb_seg *seg, uint32_t type)
+ 	seg->base = 0;
+ }
+ 
++static u64 svm_read_l1_tsc_offset(struct kvm_vcpu *vcpu)
++{
++	struct vcpu_svm *svm = to_svm(vcpu);
++
++	if (is_guest_mode(vcpu))
++		return svm->nested.hsave->control.tsc_offset;
++
++	return vcpu->arch.tsc_offset;
++}
++
+ static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+ {
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 	u64 g_tsc_offset = 0;
+ 
+ 	if (is_guest_mode(vcpu)) {
++		/* Write L1's TSC offset.  */
+ 		g_tsc_offset = svm->vmcb->control.tsc_offset -
+ 			       svm->nested.hsave->control.tsc_offset;
+ 		svm->nested.hsave->control.tsc_offset = offset;
+@@ -3188,6 +3199,7 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
+ 	/* Restore the original control entries */
+ 	copy_vmcb_control_area(vmcb, hsave);
+ 
++	svm->vcpu.arch.tsc_offset = svm->vmcb->control.tsc_offset;
+ 	kvm_clear_exception_queue(&svm->vcpu);
+ 	kvm_clear_interrupt_queue(&svm->vcpu);
+ 
+@@ -3348,10 +3360,12 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
+ 	/* We don't want to see VMMCALLs from a nested guest */
+ 	clr_intercept(svm, INTERCEPT_VMMCALL);
+ 
++	svm->vcpu.arch.tsc_offset += nested_vmcb->control.tsc_offset;
++	svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset;
++
+ 	svm->vmcb->control.virt_ext = nested_vmcb->control.virt_ext;
+ 	svm->vmcb->control.int_vector = nested_vmcb->control.int_vector;
+ 	svm->vmcb->control.int_state = nested_vmcb->control.int_state;
+-	svm->vmcb->control.tsc_offset += nested_vmcb->control.tsc_offset;
+ 	svm->vmcb->control.event_inj = nested_vmcb->control.event_inj;
+ 	svm->vmcb->control.event_inj_err = nested_vmcb->control.event_inj_err;
+ 
+@@ -3901,12 +3915,6 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 
+ 	switch (msr_info->index) {
+-	case MSR_IA32_TSC: {
+-		msr_info->data = svm->vmcb->control.tsc_offset +
+-			kvm_scale_tsc(vcpu, rdtsc());
+-
+-		break;
+-	}
+ 	case MSR_STAR:
+ 		msr_info->data = svm->vmcb->save.star;
+ 		break;
+@@ -4066,9 +4074,6 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 		svm->vmcb->save.g_pat = data;
+ 		mark_dirty(svm->vmcb, VMCB_NPT);
+ 		break;
+-	case MSR_IA32_TSC:
+-		kvm_write_tsc(vcpu, msr);
+-		break;
+ 	case MSR_IA32_SPEC_CTRL:
+ 		if (!msr->host_initiated &&
+ 		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
+@@ -5142,9 +5147,8 @@ static int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+ 		}
+ 
+ 		if (!ret && svm) {
+-			trace_kvm_pi_irte_update(svm->vcpu.vcpu_id,
+-						 host_irq, e->gsi,
+-						 vcpu_info.vector,
++			trace_kvm_pi_irte_update(host_irq, svm->vcpu.vcpu_id,
++						 e->gsi, vcpu_info.vector,
+ 						 vcpu_info.pi_desc_addr, set);
+ 		}
+ 
+@@ -6967,6 +6971,7 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
+ 
+ 	.has_wbinvd_exit = svm_has_wbinvd_exit,
+ 
++	.read_l1_tsc_offset = svm_read_l1_tsc_offset,
+ 	.write_tsc_offset = svm_write_tsc_offset,
+ 
+ 	.set_tdp_cr3 = set_tdp_cr3,
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index e3b589e28264..c779f0970126 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -2638,18 +2638,15 @@ static void setup_msrs(struct vcpu_vmx *vmx)
+ 		vmx_update_msr_bitmap(&vmx->vcpu);
+ }
+ 
+-/*
+- * reads and returns guest's timestamp counter "register"
+- * guest_tsc = (host_tsc * tsc multiplier) >> 48 + tsc_offset
+- * -- Intel TSC Scaling for Virtualization White Paper, sec 1.3
+- */
+-static u64 guest_read_tsc(struct kvm_vcpu *vcpu)
++static u64 vmx_read_l1_tsc_offset(struct kvm_vcpu *vcpu)
+ {
+-	u64 host_tsc, tsc_offset;
++	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ 
+-	host_tsc = rdtsc();
+-	tsc_offset = vmcs_read64(TSC_OFFSET);
+-	return kvm_scale_tsc(vcpu, host_tsc) + tsc_offset;
++	if (is_guest_mode(vcpu) &&
++	    (vmcs12->cpu_based_vm_exec_control & CPU_BASED_USE_TSC_OFFSETING))
++		return vcpu->arch.tsc_offset - vmcs12->tsc_offset;
++
++	return vcpu->arch.tsc_offset;
+ }
+ 
+ /*
+@@ -3272,9 +3269,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ #endif
+ 	case MSR_EFER:
+ 		return kvm_get_msr_common(vcpu, msr_info);
+-	case MSR_IA32_TSC:
+-		msr_info->data = guest_read_tsc(vcpu);
+-		break;
+ 	case MSR_IA32_SPEC_CTRL:
+ 		if (!msr_info->host_initiated &&
+ 		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
+@@ -3392,9 +3386,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			return 1;
+ 		vmcs_write64(GUEST_BNDCFGS, data);
+ 		break;
+-	case MSR_IA32_TSC:
+-		kvm_write_tsc(vcpu, msr_info);
+-		break;
+ 	case MSR_IA32_SPEC_CTRL:
+ 		if (!msr_info->host_initiated &&
+ 		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
+@@ -4281,12 +4272,6 @@ static void vmx_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
+ 	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
+ }
+ 
+-static void vmx_flush_tlb_ept_only(struct kvm_vcpu *vcpu)
+-{
+-	if (enable_ept)
+-		vmx_flush_tlb(vcpu, true);
+-}
+-
+ static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
+ {
+ 	ulong cr0_guest_owned_bits = vcpu->arch.cr0_guest_owned_bits;
+@@ -9039,7 +9024,7 @@ static void vmx_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
+ 	} else {
+ 		sec_exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
+ 		sec_exec_control |= SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+-		vmx_flush_tlb_ept_only(vcpu);
++		vmx_flush_tlb(vcpu, true);
+ 	}
+ 	vmcs_write32(SECONDARY_VM_EXEC_CONTROL, sec_exec_control);
+ 
+@@ -9067,7 +9052,7 @@ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu, hpa_t hpa)
+ 	    !nested_cpu_has2(get_vmcs12(&vmx->vcpu),
+ 			     SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
+ 		vmcs_write64(APIC_ACCESS_ADDR, hpa);
+-		vmx_flush_tlb_ept_only(vcpu);
++		vmx_flush_tlb(vcpu, true);
+ 	}
+ }
+ 
+@@ -10338,6 +10323,16 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ 	return true;
+ }
+ 
++static int nested_vmx_check_apic_access_controls(struct kvm_vcpu *vcpu,
++					  struct vmcs12 *vmcs12)
++{
++	if (nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) &&
++	    !page_address_valid(vcpu, vmcs12->apic_access_addr))
++		return -EINVAL;
++	else
++		return 0;
++}
++
+ static int nested_vmx_check_apicv_controls(struct kvm_vcpu *vcpu,
+ 					   struct vmcs12 *vmcs12)
+ {
+@@ -10906,11 +10901,8 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 		vmcs_write64(GUEST_IA32_PAT, vmx->vcpu.arch.pat);
+ 	}
+ 
+-	if (vmcs12->cpu_based_vm_exec_control & CPU_BASED_USE_TSC_OFFSETING)
+-		vmcs_write64(TSC_OFFSET,
+-			vcpu->arch.tsc_offset + vmcs12->tsc_offset);
+-	else
+-		vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset);
++	vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset);
++
+ 	if (kvm_has_tsc_control)
+ 		decache_tsc_multiplier(vmx);
+ 
+@@ -10952,7 +10944,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 		}
+ 	} else if (nested_cpu_has2(vmcs12,
+ 				   SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
+-		vmx_flush_tlb_ept_only(vcpu);
++		vmx_flush_tlb(vcpu, true);
+ 	}
+ 
+ 	/*
+@@ -11006,6 +10998,9 @@ static int check_vmentry_prereqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ 	if (nested_vmx_check_msr_bitmap_controls(vcpu, vmcs12))
+ 		return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
+ 
++	if (nested_vmx_check_apic_access_controls(vcpu, vmcs12))
++		return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
++
+ 	if (nested_vmx_check_tpr_shadow_controls(vcpu, vmcs12))
+ 		return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
+ 
+@@ -11124,6 +11119,7 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
+ 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ 	u32 msr_entry_idx;
+ 	u32 exit_qual;
++	int r;
+ 
+ 	enter_guest_mode(vcpu);
+ 
+@@ -11133,26 +11129,21 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
+ 	vmx_switch_vmcs(vcpu, &vmx->nested.vmcs02);
+ 	vmx_segment_cache_clear(vmx);
+ 
+-	if (prepare_vmcs02(vcpu, vmcs12, from_vmentry, &exit_qual)) {
+-		leave_guest_mode(vcpu);
+-		vmx_switch_vmcs(vcpu, &vmx->vmcs01);
+-		nested_vmx_entry_failure(vcpu, vmcs12,
+-					 EXIT_REASON_INVALID_STATE, exit_qual);
+-		return 1;
+-	}
++	if (vmcs12->cpu_based_vm_exec_control & CPU_BASED_USE_TSC_OFFSETING)
++		vcpu->arch.tsc_offset += vmcs12->tsc_offset;
++
++	r = EXIT_REASON_INVALID_STATE;
++	if (prepare_vmcs02(vcpu, vmcs12, from_vmentry, &exit_qual))
++		goto fail;
+ 
+ 	nested_get_vmcs12_pages(vcpu, vmcs12);
+ 
++	r = EXIT_REASON_MSR_LOAD_FAIL;
+ 	msr_entry_idx = nested_vmx_load_msr(vcpu,
+ 					    vmcs12->vm_entry_msr_load_addr,
+ 					    vmcs12->vm_entry_msr_load_count);
+-	if (msr_entry_idx) {
+-		leave_guest_mode(vcpu);
+-		vmx_switch_vmcs(vcpu, &vmx->vmcs01);
+-		nested_vmx_entry_failure(vcpu, vmcs12,
+-				EXIT_REASON_MSR_LOAD_FAIL, msr_entry_idx);
+-		return 1;
+-	}
++	if (msr_entry_idx)
++		goto fail;
+ 
+ 	/*
+ 	 * Note no nested_vmx_succeed or nested_vmx_fail here. At this point
+@@ -11161,6 +11152,14 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
+ 	 * the success flag) when L2 exits (see nested_vmx_vmexit()).
+ 	 */
+ 	return 0;
++
++fail:
++	if (vmcs12->cpu_based_vm_exec_control & CPU_BASED_USE_TSC_OFFSETING)
++		vcpu->arch.tsc_offset -= vmcs12->tsc_offset;
++	leave_guest_mode(vcpu);
++	vmx_switch_vmcs(vcpu, &vmx->vmcs01);
++	nested_vmx_entry_failure(vcpu, vmcs12, r, exit_qual);
++	return 1;
+ }
+ 
+ /*
+@@ -11732,6 +11731,9 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
+ 
+ 	leave_guest_mode(vcpu);
+ 
++	if (vmcs12->cpu_based_vm_exec_control & CPU_BASED_USE_TSC_OFFSETING)
++		vcpu->arch.tsc_offset -= vmcs12->tsc_offset;
++
+ 	if (likely(!vmx->fail)) {
+ 		if (exit_reason == -1)
+ 			sync_vmcs12(vcpu, vmcs12);
+@@ -11769,7 +11771,7 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
+ 	} else if (!nested_cpu_has_ept(vmcs12) &&
+ 		   nested_cpu_has2(vmcs12,
+ 				   SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
+-		vmx_flush_tlb_ept_only(vcpu);
++		vmx_flush_tlb(vcpu, true);
+ 	}
+ 
+ 	/* This is needed for same reason as it was needed in prepare_vmcs02 */
+@@ -12237,7 +12239,7 @@ static int vmx_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+ 		vcpu_info.pi_desc_addr = __pa(vcpu_to_pi_desc(vcpu));
+ 		vcpu_info.vector = irq.vector;
+ 
+-		trace_kvm_pi_irte_update(vcpu->vcpu_id, host_irq, e->gsi,
++		trace_kvm_pi_irte_update(host_irq, vcpu->vcpu_id, e->gsi,
+ 				vcpu_info.vector, vcpu_info.pi_desc_addr, set);
+ 
+ 		if (set)
+@@ -12410,6 +12412,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
+ 
+ 	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
+ 
++	.read_l1_tsc_offset = vmx_read_l1_tsc_offset,
+ 	.write_tsc_offset = vmx_write_tsc_offset,
+ 
+ 	.set_tdp_cr3 = vmx_set_cr3,
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index cf08ac8a910c..f3a1f9f3fb29 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -112,7 +112,7 @@ module_param(ignore_msrs, bool, S_IRUGO | S_IWUSR);
+ static bool __read_mostly report_ignored_msrs = true;
+ module_param(report_ignored_msrs, bool, S_IRUGO | S_IWUSR);
+ 
+-unsigned int min_timer_period_us = 500;
++unsigned int min_timer_period_us = 200;
+ module_param(min_timer_period_us, uint, S_IRUGO | S_IWUSR);
+ 
+ static bool __read_mostly kvmclock_periodic_sync = true;
+@@ -1459,7 +1459,7 @@ static void kvm_track_tsc_matching(struct kvm_vcpu *vcpu)
+ 
+ static void update_ia32_tsc_adjust_msr(struct kvm_vcpu *vcpu, s64 offset)
+ {
+-	u64 curr_offset = vcpu->arch.tsc_offset;
++	u64 curr_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu);
+ 	vcpu->arch.ia32_tsc_adjust_msr += offset - curr_offset;
+ }
+ 
+@@ -1501,7 +1501,9 @@ static u64 kvm_compute_tsc_offset(struct kvm_vcpu *vcpu, u64 target_tsc)
+ 
+ u64 kvm_read_l1_tsc(struct kvm_vcpu *vcpu, u64 host_tsc)
+ {
+-	return vcpu->arch.tsc_offset + kvm_scale_tsc(vcpu, host_tsc);
++	u64 tsc_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu);
++
++	return tsc_offset + kvm_scale_tsc(vcpu, host_tsc);
+ }
+ EXPORT_SYMBOL_GPL(kvm_read_l1_tsc);
+ 
+@@ -2331,6 +2333,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			return 1;
+ 		vcpu->arch.smbase = data;
+ 		break;
++	case MSR_IA32_TSC:
++		kvm_write_tsc(vcpu, msr_info);
++		break;
+ 	case MSR_SMI_COUNT:
+ 		if (!msr_info->host_initiated)
+ 			return 1;
+@@ -2570,6 +2575,9 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_IA32_UCODE_REV:
+ 		msr_info->data = vcpu->arch.microcode_version;
+ 		break;
++	case MSR_IA32_TSC:
++		msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset;
++		break;
+ 	case MSR_MTRRcap:
+ 	case 0x200 ... 0x2ff:
+ 		return kvm_mtrr_get_msr(vcpu, msr_info->index, &msr_info->data);
+@@ -6545,12 +6553,13 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
+ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
+ {
+ 	unsigned long nr, a0, a1, a2, a3, ret;
+-	int op_64_bit, r;
+-
+-	r = kvm_skip_emulated_instruction(vcpu);
++	int op_64_bit;
+ 
+-	if (kvm_hv_hypercall_enabled(vcpu->kvm))
+-		return kvm_hv_hypercall(vcpu);
++	if (kvm_hv_hypercall_enabled(vcpu->kvm)) {
++		if (!kvm_hv_hypercall(vcpu))
++			return 0;
++		goto out;
++	}
+ 
+ 	nr = kvm_register_read(vcpu, VCPU_REGS_RAX);
+ 	a0 = kvm_register_read(vcpu, VCPU_REGS_RBX);
+@@ -6571,7 +6580,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
+ 
+ 	if (kvm_x86_ops->get_cpl(vcpu) != 0) {
+ 		ret = -KVM_EPERM;
+-		goto out;
++		goto out_error;
+ 	}
+ 
+ 	switch (nr) {
+@@ -6591,12 +6600,14 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
+ 		ret = -KVM_ENOSYS;
+ 		break;
+ 	}
+-out:
++out_error:
+ 	if (!op_64_bit)
+ 		ret = (u32)ret;
+ 	kvm_register_write(vcpu, VCPU_REGS_RAX, ret);
++
++out:
+ 	++vcpu->stat.hypercalls;
+-	return r;
++	return kvm_skip_emulated_instruction(vcpu);
+ }
+ EXPORT_SYMBOL_GPL(kvm_emulate_hypercall);
+ 
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index ce5b2ebd5701..6609cb6c91af 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -992,7 +992,17 @@ xadd:			if (is_imm8(insn->off))
+ 			break;
+ 
+ 		case BPF_JMP | BPF_JA:
+-			jmp_offset = addrs[i + insn->off] - addrs[i];
++			if (insn->off == -1)
++				/* -1 jmp instructions will always jump
++				 * backwards two bytes. Explicitly handling
++				 * this case avoids wasting too many passes
++				 * when there are long sequences of replaced
++				 * dead code.
++				 */
++				jmp_offset = -2;
++			else
++				jmp_offset = addrs[i + insn->off] - addrs[i];
++
+ 			if (!jmp_offset)
+ 				/* optimize out nop jumps */
+ 				break;
+@@ -1191,6 +1201,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 	for (pass = 0; pass < 20 || image; pass++) {
+ 		proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
+ 		if (proglen <= 0) {
++out_image:
+ 			image = NULL;
+ 			if (header)
+ 				bpf_jit_binary_free(header);
+@@ -1201,8 +1212,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 			if (proglen != oldproglen) {
+ 				pr_err("bpf_jit: proglen=%d != oldproglen=%d\n",
+ 				       proglen, oldproglen);
+-				prog = orig_prog;
+-				goto out_addrs;
++				goto out_image;
+ 			}
+ 			break;
+ 		}
+@@ -1239,7 +1249,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 		prog = orig_prog;
+ 	}
+ 
+-	if (!prog->is_func || extra_pass) {
++	if (!image || !prog->is_func || extra_pass) {
+ out_addrs:
+ 		kfree(addrs);
+ 		kfree(jit_data);
+diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
+index 826898701045..19c1ff542387 100644
+--- a/arch/x86/xen/enlighten_hvm.c
++++ b/arch/x86/xen/enlighten_hvm.c
+@@ -65,6 +65,19 @@ static void __init xen_hvm_init_mem_mapping(void)
+ {
+ 	early_memunmap(HYPERVISOR_shared_info, PAGE_SIZE);
+ 	HYPERVISOR_shared_info = __va(PFN_PHYS(shared_info_pfn));
++
++	/*
++	 * The virtual address of the shared_info page has changed, so
++	 * the vcpu_info pointer for VCPU 0 is now stale.
++	 *
++	 * The prepare_boot_cpu callback will re-initialize it via
++	 * xen_vcpu_setup, but we can't rely on that to be called for
++	 * old Xen versions (xen_have_vector_callback == 0).
++	 *
++	 * It is, in any case, bad to have a stale vcpu_info pointer
++	 * so reset it now.
++	 */
++	xen_vcpu_info_reset(0);
+ }
+ 
+ static void __init init_hvm_pv_info(void)
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index c2033a232a44..58d030517b0f 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1142,18 +1142,16 @@ int blkcg_init_queue(struct request_queue *q)
+ 	rcu_read_lock();
+ 	spin_lock_irq(q->queue_lock);
+ 	blkg = blkg_create(&blkcg_root, q, new_blkg);
++	if (IS_ERR(blkg))
++		goto err_unlock;
++	q->root_blkg = blkg;
++	q->root_rl.blkg = blkg;
+ 	spin_unlock_irq(q->queue_lock);
+ 	rcu_read_unlock();
+ 
+ 	if (preloaded)
+ 		radix_tree_preload_end();
+ 
+-	if (IS_ERR(blkg))
+-		return PTR_ERR(blkg);
+-
+-	q->root_blkg = blkg;
+-	q->root_rl.blkg = blkg;
+-
+ 	ret = blk_throtl_init(q);
+ 	if (ret) {
+ 		spin_lock_irq(q->queue_lock);
+@@ -1161,6 +1159,13 @@ int blkcg_init_queue(struct request_queue *q)
+ 		spin_unlock_irq(q->queue_lock);
+ 	}
+ 	return ret;
++
++err_unlock:
++	spin_unlock_irq(q->queue_lock);
++	rcu_read_unlock();
++	if (preloaded)
++		radix_tree_preload_end();
++	return PTR_ERR(blkg);
+ }
+ 
+ /**
+@@ -1367,17 +1372,12 @@ void blkcg_deactivate_policy(struct request_queue *q,
+ 	__clear_bit(pol->plid, q->blkcg_pols);
+ 
+ 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
+-		/* grab blkcg lock too while removing @pd from @blkg */
+-		spin_lock(&blkg->blkcg->lock);
+-
+ 		if (blkg->pd[pol->plid]) {
+ 			if (pol->pd_offline_fn)
+ 				pol->pd_offline_fn(blkg->pd[pol->plid]);
+ 			pol->pd_free_fn(blkg->pd[pol->plid]);
+ 			blkg->pd[pol->plid] = NULL;
+ 		}
+-
+-		spin_unlock(&blkg->blkcg->lock);
+ 	}
+ 
+ 	spin_unlock_irq(q->queue_lock);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 96de7aa4f62a..00e16588b169 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -118,6 +118,25 @@ void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part,
+ 	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight, &mi);
+ }
+ 
++static void blk_mq_check_inflight_rw(struct blk_mq_hw_ctx *hctx,
++				     struct request *rq, void *priv,
++				     bool reserved)
++{
++	struct mq_inflight *mi = priv;
++
++	if (rq->part == mi->part)
++		mi->inflight[rq_data_dir(rq)]++;
++}
++
++void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
++			 unsigned int inflight[2])
++{
++	struct mq_inflight mi = { .part = part, .inflight = inflight, };
++
++	inflight[0] = inflight[1] = 0;
++	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight_rw, &mi);
++}
++
+ void blk_freeze_queue_start(struct request_queue *q)
+ {
+ 	int freeze_depth;
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index 88c558f71819..ecc86b6efdec 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -185,7 +185,9 @@ static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
+ }
+ 
+ void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part,
+-			unsigned int inflight[2]);
++		      unsigned int inflight[2]);
++void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
++			 unsigned int inflight[2]);
+ 
+ static inline void blk_mq_put_dispatch_budget(struct blk_mq_hw_ctx *hctx)
+ {
+diff --git a/block/genhd.c b/block/genhd.c
+index 9656f9e9f99e..8f34897159f5 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -82,6 +82,18 @@ void part_in_flight(struct request_queue *q, struct hd_struct *part,
+ 	}
+ }
+ 
++void part_in_flight_rw(struct request_queue *q, struct hd_struct *part,
++		       unsigned int inflight[2])
++{
++	if (q->mq_ops) {
++		blk_mq_in_flight_rw(q, part, inflight);
++		return;
++	}
++
++	inflight[0] = atomic_read(&part->in_flight[0]);
++	inflight[1] = atomic_read(&part->in_flight[1]);
++}
++
+ struct hd_struct *__disk_get_part(struct gendisk *disk, int partno)
+ {
+ 	struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
+diff --git a/block/partition-generic.c b/block/partition-generic.c
+index 08dabcd8b6ae..db57cced9b98 100644
+--- a/block/partition-generic.c
++++ b/block/partition-generic.c
+@@ -145,13 +145,15 @@ ssize_t part_stat_show(struct device *dev,
+ 		jiffies_to_msecs(part_stat_read(p, time_in_queue)));
+ }
+ 
+-ssize_t part_inflight_show(struct device *dev,
+-			struct device_attribute *attr, char *buf)
++ssize_t part_inflight_show(struct device *dev, struct device_attribute *attr,
++			   char *buf)
+ {
+ 	struct hd_struct *p = dev_to_part(dev);
++	struct request_queue *q = part_to_disk(p)->queue;
++	unsigned int inflight[2];
+ 
+-	return sprintf(buf, "%8u %8u\n", atomic_read(&p->in_flight[0]),
+-		atomic_read(&p->in_flight[1]));
++	part_in_flight_rw(q, p, inflight);
++	return sprintf(buf, "%8u %8u\n", inflight[0], inflight[1]);
+ }
+ 
+ #ifdef CONFIG_FAIL_MAKE_REQUEST
+diff --git a/drivers/acpi/acpi_watchdog.c b/drivers/acpi/acpi_watchdog.c
+index ebb626ffb5fa..4bde16fb97d8 100644
+--- a/drivers/acpi/acpi_watchdog.c
++++ b/drivers/acpi/acpi_watchdog.c
+@@ -12,23 +12,64 @@
+ #define pr_fmt(fmt) "ACPI: watchdog: " fmt
+ 
+ #include <linux/acpi.h>
++#include <linux/dmi.h>
+ #include <linux/ioport.h>
+ #include <linux/platform_device.h>
+ 
+ #include "internal.h"
+ 
++static const struct dmi_system_id acpi_watchdog_skip[] = {
++	{
++		/*
++		 * On Lenovo Z50-70 there are two issues with the WDAT
++		 * table. First some of the instructions use RTC SRAM
++		 * to store persistent information. This does not work well
++		 * with Linux RTC driver. Second, more important thing is
++		 * that the instructions do not actually reset the system.
++		 *
++		 * On this particular system iTCO_wdt seems to work just
++		 * fine so we prefer that over WDAT for now.
++		 *
++		 * See also https://bugzilla.kernel.org/show_bug.cgi?id=199033.
++		 */
++		.ident = "Lenovo Z50-70",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "20354"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Z50-70"),
++		},
++	},
++	{}
++};
++
++static const struct acpi_table_wdat *acpi_watchdog_get_wdat(void)
++{
++	const struct acpi_table_wdat *wdat = NULL;
++	acpi_status status;
++
++	if (acpi_disabled)
++		return NULL;
++
++	if (dmi_check_system(acpi_watchdog_skip))
++		return NULL;
++
++	status = acpi_get_table(ACPI_SIG_WDAT, 0,
++				(struct acpi_table_header **)&wdat);
++	if (ACPI_FAILURE(status)) {
++		/* It is fine if there is no WDAT */
++		return NULL;
++	}
++
++	return wdat;
++}
++
+ /**
+  * Returns true if this system should prefer ACPI based watchdog instead of
+  * the native one (which are typically the same hardware).
+  */
+ bool acpi_has_watchdog(void)
+ {
+-	struct acpi_table_header hdr;
+-
+-	if (acpi_disabled)
+-		return false;
+-
+-	return ACPI_SUCCESS(acpi_get_table_header(ACPI_SIG_WDAT, 0, &hdr));
++	return !!acpi_watchdog_get_wdat();
+ }
+ EXPORT_SYMBOL_GPL(acpi_has_watchdog);
+ 
+@@ -41,12 +82,10 @@ void __init acpi_watchdog_init(void)
+ 	struct platform_device *pdev;
+ 	struct resource *resources;
+ 	size_t nresources = 0;
+-	acpi_status status;
+ 	int i;
+ 
+-	status = acpi_get_table(ACPI_SIG_WDAT, 0,
+-				(struct acpi_table_header **)&wdat);
+-	if (ACPI_FAILURE(status)) {
++	wdat = acpi_watchdog_get_wdat();
++	if (!wdat) {
+ 		/* It is fine if there is no WDAT */
+ 		return;
+ 	}
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 8e63d937babb..807e1ae67b7c 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -2150,10 +2150,10 @@ int __init acpi_scan_init(void)
+ 	acpi_cmos_rtc_init();
+ 	acpi_container_init();
+ 	acpi_memory_hotplug_init();
++	acpi_watchdog_init();
+ 	acpi_pnp_init();
+ 	acpi_int340x_thermal_init();
+ 	acpi_amba_init();
+-	acpi_watchdog_init();
+ 	acpi_init_lpit();
+ 
+ 	acpi_scan_add_handler(&generic_device_handler);
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 46cde0912762..b7846d8d3e87 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -364,6 +364,19 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "XPS 13 9360"),
+ 		},
+ 	},
++	/*
++	 * ThinkPad X1 Tablet(2016) cannot do suspend-to-idle using
++	 * the Low Power S0 Idle firmware interface (see
++	 * https://bugzilla.kernel.org/show_bug.cgi?id=199057).
++	 */
++	{
++	.callback = init_no_lps0,
++	.ident = "ThinkPad X1 Tablet(2016)",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "20GGA00L00"),
++		},
++	},
+ 	{},
+ };
+ 
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 1d396b6e6000..738fb22978dd 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -699,7 +699,7 @@ static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,
+ 
+ 	DPRINTK("ENTER\n");
+ 
+-	ahci_stop_engine(ap);
++	hpriv->stop_engine(ap);
+ 
+ 	rc = sata_link_hardreset(link, sata_ehc_deb_timing(&link->eh_context),
+ 				 deadline, &online, NULL);
+@@ -725,7 +725,7 @@ static int ahci_p5wdh_hardreset(struct ata_link *link, unsigned int *class,
+ 	bool online;
+ 	int rc;
+ 
+-	ahci_stop_engine(ap);
++	hpriv->stop_engine(ap);
+ 
+ 	/* clear D2H reception area to properly wait for D2H FIS */
+ 	ata_tf_init(link->device, &tf);
+@@ -789,7 +789,7 @@ static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
+ 
+ 	DPRINTK("ENTER\n");
+ 
+-	ahci_stop_engine(ap);
++	hpriv->stop_engine(ap);
+ 
+ 	for (i = 0; i < 2; i++) {
+ 		u16 val;
+diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
+index a9d996e17d75..824bd399f02e 100644
+--- a/drivers/ata/ahci.h
++++ b/drivers/ata/ahci.h
+@@ -365,6 +365,13 @@ struct ahci_host_priv {
+ 	 * be overridden anytime before the host is activated.
+ 	 */
+ 	void			(*start_engine)(struct ata_port *ap);
++	/*
++	 * Optional ahci_stop_engine override, if not set this gets set to the
++	 * default ahci_stop_engine during ahci_save_initial_config, this can
++	 * be overridden anytime before the host is activated.
++	 */
++	int			(*stop_engine)(struct ata_port *ap);
++
+ 	irqreturn_t 		(*irq_handler)(int irq, void *dev_instance);
+ 
+ 	/* only required for per-port MSI(-X) support */
+diff --git a/drivers/ata/ahci_mvebu.c b/drivers/ata/ahci_mvebu.c
+index de7128d81e9c..0045dacd814b 100644
+--- a/drivers/ata/ahci_mvebu.c
++++ b/drivers/ata/ahci_mvebu.c
+@@ -62,6 +62,60 @@ static void ahci_mvebu_regret_option(struct ahci_host_priv *hpriv)
+ 	writel(0x80, hpriv->mmio + AHCI_VENDOR_SPECIFIC_0_DATA);
+ }
+ 
++/**
++ * ahci_mvebu_stop_engine
++ *
++ * @ap:	Target ata port
++ *
++ * Errata Ref#226 - SATA Disk HOT swap issue when connected through
++ * Port Multiplier in FIS-based Switching mode.
++ *
++ * To avoid the issue, according to design, the bits[11:8, 0] of
++ * register PxFBS are cleared when Port Command and Status (0x18) bit[0]
++ * changes its value from 1 to 0, i.e. falling edge of Port
++ * Command and Status bit[0] sends PULSE that resets PxFBS
++ * bits[11:8; 0].
++ *
++ * This function is used to override function of "ahci_stop_engine"
++ * from libahci.c by adding the mvebu work around(WA) to save PxFBS
++ * value before the PxCMD ST write of 0, then restore PxFBS value.
++ *
++ * Return: 0 on success; Error code otherwise.
++ */
++int ahci_mvebu_stop_engine(struct ata_port *ap)
++{
++	void __iomem *port_mmio = ahci_port_base(ap);
++	u32 tmp, port_fbs;
++
++	tmp = readl(port_mmio + PORT_CMD);
++
++	/* check if the HBA is idle */
++	if ((tmp & (PORT_CMD_START | PORT_CMD_LIST_ON)) == 0)
++		return 0;
++
++	/* save the port PxFBS register for later restore */
++	port_fbs = readl(port_mmio + PORT_FBS);
++
++	/* setting HBA to idle */
++	tmp &= ~PORT_CMD_START;
++	writel(tmp, port_mmio + PORT_CMD);
++
++	/*
++	 * bit #15 PxCMD signal doesn't clear PxFBS,
++	 * restore the PxFBS register right after clearing the PxCMD ST,
++	 * no need to wait for the PxCMD bit #15.
++	 */
++	writel(port_fbs, port_mmio + PORT_FBS);
++
++	/* wait for engine to stop. This could be as long as 500 msec */
++	tmp = ata_wait_register(ap, port_mmio + PORT_CMD,
++				PORT_CMD_LIST_ON, PORT_CMD_LIST_ON, 1, 500);
++	if (tmp & PORT_CMD_LIST_ON)
++		return -EIO;
++
++	return 0;
++}
++
+ #ifdef CONFIG_PM_SLEEP
+ static int ahci_mvebu_suspend(struct platform_device *pdev, pm_message_t state)
+ {
+@@ -112,6 +166,8 @@ static int ahci_mvebu_probe(struct platform_device *pdev)
+ 	if (rc)
+ 		return rc;
+ 
++	hpriv->stop_engine = ahci_mvebu_stop_engine;
++
+ 	if (of_device_is_compatible(pdev->dev.of_node,
+ 				    "marvell,armada-380-ahci")) {
+ 		dram = mv_mbus_dram_info();
+diff --git a/drivers/ata/ahci_qoriq.c b/drivers/ata/ahci_qoriq.c
+index 2685f28160f7..cfdef4d44ae9 100644
+--- a/drivers/ata/ahci_qoriq.c
++++ b/drivers/ata/ahci_qoriq.c
+@@ -96,7 +96,7 @@ static int ahci_qoriq_hardreset(struct ata_link *link, unsigned int *class,
+ 
+ 	DPRINTK("ENTER\n");
+ 
+-	ahci_stop_engine(ap);
++	hpriv->stop_engine(ap);
+ 
+ 	/*
+ 	 * There is a errata on ls1021a Rev1.0 and Rev2.0 which is:
+diff --git a/drivers/ata/ahci_xgene.c b/drivers/ata/ahci_xgene.c
+index c2b5941d9184..ad58da7c9aff 100644
+--- a/drivers/ata/ahci_xgene.c
++++ b/drivers/ata/ahci_xgene.c
+@@ -165,7 +165,7 @@ static int xgene_ahci_restart_engine(struct ata_port *ap)
+ 				    PORT_CMD_ISSUE, 0x0, 1, 100))
+ 		  return -EBUSY;
+ 
+-	ahci_stop_engine(ap);
++	hpriv->stop_engine(ap);
+ 	ahci_start_fis_rx(ap);
+ 
+ 	/*
+@@ -421,7 +421,7 @@ static int xgene_ahci_hardreset(struct ata_link *link, unsigned int *class,
+ 	portrxfis_saved = readl(port_mmio + PORT_FIS_ADDR);
+ 	portrxfishi_saved = readl(port_mmio + PORT_FIS_ADDR_HI);
+ 
+-	ahci_stop_engine(ap);
++	hpriv->stop_engine(ap);
+ 
+ 	rc = xgene_ahci_do_hardreset(link, deadline, &online);
+ 
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index 7adcf3caabd0..e5d90977caec 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -560,6 +560,9 @@ void ahci_save_initial_config(struct device *dev, struct ahci_host_priv *hpriv)
+ 	if (!hpriv->start_engine)
+ 		hpriv->start_engine = ahci_start_engine;
+ 
++	if (!hpriv->stop_engine)
++		hpriv->stop_engine = ahci_stop_engine;
++
+ 	if (!hpriv->irq_handler)
+ 		hpriv->irq_handler = ahci_single_level_irq_intr;
+ }
+@@ -897,9 +900,10 @@ static void ahci_start_port(struct ata_port *ap)
+ static int ahci_deinit_port(struct ata_port *ap, const char **emsg)
+ {
+ 	int rc;
++	struct ahci_host_priv *hpriv = ap->host->private_data;
+ 
+ 	/* disable DMA */
+-	rc = ahci_stop_engine(ap);
++	rc = hpriv->stop_engine(ap);
+ 	if (rc) {
+ 		*emsg = "failed to stop engine";
+ 		return rc;
+@@ -1310,7 +1314,7 @@ int ahci_kick_engine(struct ata_port *ap)
+ 	int busy, rc;
+ 
+ 	/* stop engine */
+-	rc = ahci_stop_engine(ap);
++	rc = hpriv->stop_engine(ap);
+ 	if (rc)
+ 		goto out_restart;
+ 
+@@ -1549,7 +1553,7 @@ int ahci_do_hardreset(struct ata_link *link, unsigned int *class,
+ 
+ 	DPRINTK("ENTER\n");
+ 
+-	ahci_stop_engine(ap);
++	hpriv->stop_engine(ap);
+ 
+ 	/* clear D2H reception area to properly wait for D2H FIS */
+ 	ata_tf_init(link->device, &tf);
+@@ -2075,14 +2079,14 @@ void ahci_error_handler(struct ata_port *ap)
+ 
+ 	if (!(ap->pflags & ATA_PFLAG_FROZEN)) {
+ 		/* restart engine */
+-		ahci_stop_engine(ap);
++		hpriv->stop_engine(ap);
+ 		hpriv->start_engine(ap);
+ 	}
+ 
+ 	sata_pmp_error_handler(ap);
+ 
+ 	if (!ata_dev_enabled(ap->link.device))
+-		ahci_stop_engine(ap);
++		hpriv->stop_engine(ap);
+ }
+ EXPORT_SYMBOL_GPL(ahci_error_handler);
+ 
+@@ -2129,7 +2133,7 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 		return;
+ 
+ 	/* set DITO, MDAT, DETO and enable DevSlp, need to stop engine first */
+-	rc = ahci_stop_engine(ap);
++	rc = hpriv->stop_engine(ap);
+ 	if (rc)
+ 		return;
+ 
+@@ -2189,7 +2193,7 @@ static void ahci_enable_fbs(struct ata_port *ap)
+ 		return;
+ 	}
+ 
+-	rc = ahci_stop_engine(ap);
++	rc = hpriv->stop_engine(ap);
+ 	if (rc)
+ 		return;
+ 
+@@ -2222,7 +2226,7 @@ static void ahci_disable_fbs(struct ata_port *ap)
+ 		return;
+ 	}
+ 
+-	rc = ahci_stop_engine(ap);
++	rc = hpriv->stop_engine(ap);
+ 	if (rc)
+ 		return;
+ 
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index c016829a38fd..513b260bcff1 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -175,8 +175,8 @@ static void ata_eh_handle_port_resume(struct ata_port *ap)
+ { }
+ #endif /* CONFIG_PM */
+ 
+-static void __ata_ehi_pushv_desc(struct ata_eh_info *ehi, const char *fmt,
+-				 va_list args)
++static __printf(2, 0) void __ata_ehi_pushv_desc(struct ata_eh_info *ehi,
++				 const char *fmt, va_list args)
+ {
+ 	ehi->desc_len += vscnprintf(ehi->desc + ehi->desc_len,
+ 				     ATA_EH_DESC_LEN - ehi->desc_len,
+diff --git a/drivers/ata/sata_highbank.c b/drivers/ata/sata_highbank.c
+index aafb8cc03523..e67815b896fc 100644
+--- a/drivers/ata/sata_highbank.c
++++ b/drivers/ata/sata_highbank.c
+@@ -410,7 +410,7 @@ static int ahci_highbank_hardreset(struct ata_link *link, unsigned int *class,
+ 	int rc;
+ 	int retry = 100;
+ 
+-	ahci_stop_engine(ap);
++	hpriv->stop_engine(ap);
+ 
+ 	/* clear D2H reception area to properly wait for D2H FIS */
+ 	ata_tf_init(link->device, &tf);
+diff --git a/drivers/char/agp/uninorth-agp.c b/drivers/char/agp/uninorth-agp.c
+index c381c8e396fc..79d8c84693a1 100644
+--- a/drivers/char/agp/uninorth-agp.c
++++ b/drivers/char/agp/uninorth-agp.c
+@@ -195,7 +195,7 @@ static int uninorth_insert_memory(struct agp_memory *mem, off_t pg_start, int ty
+ 	return 0;
+ }
+ 
+-int uninorth_remove_memory(struct agp_memory *mem, off_t pg_start, int type)
++static int uninorth_remove_memory(struct agp_memory *mem, off_t pg_start, int type)
+ {
+ 	size_t i;
+ 	u32 *gp;
+@@ -470,7 +470,7 @@ static int uninorth_free_gatt_table(struct agp_bridge_data *bridge)
+ 	return 0;
+ }
+ 
+-void null_cache_flush(void)
++static void null_cache_flush(void)
+ {
+ 	mb();
+ }
+diff --git a/drivers/clk/clk-mux.c b/drivers/clk/clk-mux.c
+index 39cabe157163..4f6a048aece6 100644
+--- a/drivers/clk/clk-mux.c
++++ b/drivers/clk/clk-mux.c
+@@ -101,10 +101,18 @@ static int clk_mux_set_parent(struct clk_hw *hw, u8 index)
+ 	return 0;
+ }
+ 
++static int clk_mux_determine_rate(struct clk_hw *hw,
++				  struct clk_rate_request *req)
++{
++	struct clk_mux *mux = to_clk_mux(hw);
++
++	return clk_mux_determine_rate_flags(hw, req, mux->flags);
++}
++
+ const struct clk_ops clk_mux_ops = {
+ 	.get_parent = clk_mux_get_parent,
+ 	.set_parent = clk_mux_set_parent,
+-	.determine_rate = __clk_mux_determine_rate,
++	.determine_rate = clk_mux_determine_rate,
+ };
+ EXPORT_SYMBOL_GPL(clk_mux_ops);
+ 
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 5698d2fac1af..665b64f0b0f8 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -426,9 +426,9 @@ static bool mux_is_better_rate(unsigned long rate, unsigned long now,
+ 	return now <= rate && now > best;
+ }
+ 
+-static int
+-clk_mux_determine_rate_flags(struct clk_hw *hw, struct clk_rate_request *req,
+-			     unsigned long flags)
++int clk_mux_determine_rate_flags(struct clk_hw *hw,
++				 struct clk_rate_request *req,
++				 unsigned long flags)
+ {
+ 	struct clk_core *core = hw->core, *parent, *best_parent = NULL;
+ 	int i, num_parents, ret;
+@@ -488,6 +488,7 @@ clk_mux_determine_rate_flags(struct clk_hw *hw, struct clk_rate_request *req,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(clk_mux_determine_rate_flags);
+ 
+ struct clk *__clk_lookup(const char *name)
+ {
+diff --git a/drivers/clk/imx/clk-imx6ul.c b/drivers/clk/imx/clk-imx6ul.c
+index 85c118164469..c95034584747 100644
+--- a/drivers/clk/imx/clk-imx6ul.c
++++ b/drivers/clk/imx/clk-imx6ul.c
+@@ -461,7 +461,7 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
+ 	clk_set_rate(clks[IMX6UL_CLK_AHB], 99000000);
+ 
+ 	/* Change periph_pre clock to pll2_bus to adjust AXI rate to 264MHz */
+-	clk_set_parent(clks[IMX6UL_CLK_PERIPH_CLK2_SEL], clks[IMX6UL_CLK_PLL3_USB_OTG]);
++	clk_set_parent(clks[IMX6UL_CLK_PERIPH_CLK2_SEL], clks[IMX6UL_CLK_OSC]);
+ 	clk_set_parent(clks[IMX6UL_CLK_PERIPH], clks[IMX6UL_CLK_PERIPH_CLK2]);
+ 	clk_set_parent(clks[IMX6UL_CLK_PERIPH_PRE], clks[IMX6UL_CLK_PLL2_BUS]);
+ 	clk_set_parent(clks[IMX6UL_CLK_PERIPH], clks[IMX6UL_CLK_PERIPH_PRE]);
+diff --git a/drivers/clocksource/timer-imx-tpm.c b/drivers/clocksource/timer-imx-tpm.c
+index 557ed25b42e3..d175b9545581 100644
+--- a/drivers/clocksource/timer-imx-tpm.c
++++ b/drivers/clocksource/timer-imx-tpm.c
+@@ -20,6 +20,7 @@
+ #define TPM_SC				0x10
+ #define TPM_SC_CMOD_INC_PER_CNT		(0x1 << 3)
+ #define TPM_SC_CMOD_DIV_DEFAULT		0x3
++#define TPM_SC_TOF_MASK			(0x1 << 7)
+ #define TPM_CNT				0x14
+ #define TPM_MOD				0x18
+ #define TPM_STATUS			0x1c
+@@ -29,6 +30,7 @@
+ #define TPM_C0SC_MODE_SHIFT		2
+ #define TPM_C0SC_MODE_MASK		0x3c
+ #define TPM_C0SC_MODE_SW_COMPARE	0x4
++#define TPM_C0SC_CHF_MASK		(0x1 << 7)
+ #define TPM_C0V				0x24
+ 
+ static void __iomem *timer_base;
+@@ -205,9 +207,13 @@ static int __init tpm_timer_init(struct device_node *np)
+ 	 * 4) Channel0 disabled
+ 	 * 5) DMA transfers disabled
+ 	 */
++	/* make sure counter is disabled */
+ 	writel(0, timer_base + TPM_SC);
++	/* TOF is W1C */
++	writel(TPM_SC_TOF_MASK, timer_base + TPM_SC);
+ 	writel(0, timer_base + TPM_CNT);
+-	writel(0, timer_base + TPM_C0SC);
++	/* CHF is W1C */
++	writel(TPM_C0SC_CHF_MASK, timer_base + TPM_C0SC);
+ 
+ 	/* increase per cnt, div 8 by default */
+ 	writel(TPM_SC_CMOD_INC_PER_CNT | TPM_SC_CMOD_DIV_DEFAULT,
+diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
+index a8bec064d14a..ebde031ebd50 100644
+--- a/drivers/cpufreq/Kconfig.arm
++++ b/drivers/cpufreq/Kconfig.arm
+@@ -70,16 +70,6 @@ config ARM_BRCMSTB_AVS_CPUFREQ
+ 
+ 	  Say Y, if you have a Broadcom SoC with AVS support for DFS or DVFS.
+ 
+-config ARM_BRCMSTB_AVS_CPUFREQ_DEBUG
+-	bool "Broadcom STB AVS CPUfreq driver sysfs debug capability"
+-	depends on ARM_BRCMSTB_AVS_CPUFREQ
+-	help
+-	  Enabling this option turns on debug support via sysfs under
+-	  /sys/kernel/debug/brcmstb-avs-cpufreq. It is possible to read all and
+-	  write some AVS mailbox registers through sysfs entries.
+-
+-	  If in doubt, say N.
+-
+ config ARM_EXYNOS5440_CPUFREQ
+ 	tristate "SAMSUNG EXYNOS5440"
+ 	depends on SOC_EXYNOS5440
+diff --git a/drivers/cpufreq/brcmstb-avs-cpufreq.c b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+index 7281a2c19c36..726fb4db139e 100644
+--- a/drivers/cpufreq/brcmstb-avs-cpufreq.c
++++ b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+@@ -49,13 +49,6 @@
+ #include <linux/platform_device.h>
+ #include <linux/semaphore.h>
+ 
+-#ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG
+-#include <linux/ctype.h>
+-#include <linux/debugfs.h>
+-#include <linux/slab.h>
+-#include <linux/uaccess.h>
+-#endif
+-
+ /* Max number of arguments AVS calls take */
+ #define AVS_MAX_CMD_ARGS	4
+ /*
+@@ -182,88 +175,11 @@ struct private_data {
+ 	void __iomem *base;
+ 	void __iomem *avs_intr_base;
+ 	struct device *dev;
+-#ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG
+-	struct dentry *debugfs;
+-#endif
+ 	struct completion done;
+ 	struct semaphore sem;
+ 	struct pmap pmap;
+ };
+ 
+-#ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG
+-
+-enum debugfs_format {
+-	DEBUGFS_NORMAL,
+-	DEBUGFS_FLOAT,
+-	DEBUGFS_REV,
+-};
+-
+-struct debugfs_data {
+-	struct debugfs_entry *entry;
+-	struct private_data *priv;
+-};
+-
+-struct debugfs_entry {
+-	char *name;
+-	u32 offset;
+-	fmode_t mode;
+-	enum debugfs_format format;
+-};
+-
+-#define DEBUGFS_ENTRY(name, mode, format)	{ \
+-	#name, AVS_MBOX_##name, mode, format \
+-}
+-
+-/*
+- * These are used for debugfs only. Otherwise we use AVS_MBOX_PARAM() directly.
+- */
+-#define AVS_MBOX_PARAM1		AVS_MBOX_PARAM(0)
+-#define AVS_MBOX_PARAM2		AVS_MBOX_PARAM(1)
+-#define AVS_MBOX_PARAM3		AVS_MBOX_PARAM(2)
+-#define AVS_MBOX_PARAM4		AVS_MBOX_PARAM(3)
+-
+-/*
+- * This table stores the name, access permissions and offset for each hardware
+- * register and is used to generate debugfs entries.
+- */
+-static struct debugfs_entry debugfs_entries[] = {
+-	DEBUGFS_ENTRY(COMMAND, S_IWUSR, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(STATUS, S_IWUSR, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(VOLTAGE0, 0, DEBUGFS_FLOAT),
+-	DEBUGFS_ENTRY(TEMP0, 0, DEBUGFS_FLOAT),
+-	DEBUGFS_ENTRY(PV0, 0, DEBUGFS_FLOAT),
+-	DEBUGFS_ENTRY(MV0, 0, DEBUGFS_FLOAT),
+-	DEBUGFS_ENTRY(PARAM1, S_IWUSR, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(PARAM2, S_IWUSR, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(PARAM3, S_IWUSR, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(PARAM4, S_IWUSR, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(REVISION, 0, DEBUGFS_REV),
+-	DEBUGFS_ENTRY(PSTATE, 0, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(HEARTBEAT, 0, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(MAGIC, S_IWUSR, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(SIGMA_HVT, 0, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(SIGMA_SVT, 0, DEBUGFS_NORMAL),
+-	DEBUGFS_ENTRY(VOLTAGE1, 0, DEBUGFS_FLOAT),
+-	DEBUGFS_ENTRY(TEMP1, 0, DEBUGFS_FLOAT),
+-	DEBUGFS_ENTRY(PV1, 0, DEBUGFS_FLOAT),
+-	DEBUGFS_ENTRY(MV1, 0, DEBUGFS_FLOAT),
+-	DEBUGFS_ENTRY(FREQUENCY, 0, DEBUGFS_NORMAL),
+-};
+-
+-static int brcm_avs_target_index(struct cpufreq_policy *, unsigned int);
+-
+-static char *__strtolower(char *s)
+-{
+-	char *p;
+-
+-	for (p = s; *p; p++)
+-		*p = tolower(*p);
+-
+-	return s;
+-}
+-
+-#endif /* CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG */
+-
+ static void __iomem *__map_region(const char *name)
+ {
+ 	struct device_node *np;
+@@ -516,238 +432,6 @@ brcm_avs_get_freq_table(struct device *dev, struct private_data *priv)
+ 	return table;
+ }
+ 
+-#ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG
+-
+-#define MANT(x)	(unsigned int)(abs((x)) / 1000)
+-#define FRAC(x)	(unsigned int)(abs((x)) - abs((x)) / 1000 * 1000)
+-
+-static int brcm_avs_debug_show(struct seq_file *s, void *data)
+-{
+-	struct debugfs_data *dbgfs = s->private;
+-	void __iomem *base;
+-	u32 val, offset;
+-
+-	if (!dbgfs) {
+-		seq_puts(s, "No device pointer\n");
+-		return 0;
+-	}
+-
+-	base = dbgfs->priv->base;
+-	offset = dbgfs->entry->offset;
+-	val = readl(base + offset);
+-	switch (dbgfs->entry->format) {
+-	case DEBUGFS_NORMAL:
+-		seq_printf(s, "%u\n", val);
+-		break;
+-	case DEBUGFS_FLOAT:
+-		seq_printf(s, "%d.%03d\n", MANT(val), FRAC(val));
+-		break;
+-	case DEBUGFS_REV:
+-		seq_printf(s, "%c.%c.%c.%c\n", (val >> 24 & 0xff),
+-			   (val >> 16 & 0xff), (val >> 8 & 0xff),
+-			   val & 0xff);
+-		break;
+-	}
+-	seq_printf(s, "0x%08x\n", val);
+-
+-	return 0;
+-}
+-
+-#undef MANT
+-#undef FRAC
+-
+-static ssize_t brcm_avs_seq_write(struct file *file, const char __user *buf,
+-				  size_t size, loff_t *ppos)
+-{
+-	struct seq_file *s = file->private_data;
+-	struct debugfs_data *dbgfs = s->private;
+-	struct private_data *priv = dbgfs->priv;
+-	void __iomem *base, *avs_intr_base;
+-	bool use_issue_command = false;
+-	unsigned long val, offset;
+-	char str[128];
+-	int ret;
+-	char *str_ptr = str;
+-
+-	if (size >= sizeof(str))
+-		return -E2BIG;
+-
+-	memset(str, 0, sizeof(str));
+-	ret = copy_from_user(str, buf, size);
+-	if (ret)
+-		return ret;
+-
+-	base = priv->base;
+-	avs_intr_base = priv->avs_intr_base;
+-	offset = dbgfs->entry->offset;
+-	/*
+-	 * Special case writing to "command" entry only: if the string starts
+-	 * with a 'c', we use the driver's __issue_avs_command() function.
+-	 * Otherwise, we perform a raw write. This should allow testing of raw
+-	 * access as well as using the higher level function. (Raw access
+-	 * doesn't clear the firmware return status after issuing the command.)
+-	 */
+-	if (str_ptr[0] == 'c' && offset == AVS_MBOX_COMMAND) {
+-		use_issue_command = true;
+-		str_ptr++;
+-	}
+-	if (kstrtoul(str_ptr, 0, &val) != 0)
+-		return -EINVAL;
+-
+-	/*
+-	 * Setting the P-state is a special case. We need to update the CPU
+-	 * frequency we report.
+-	 */
+-	if (val == AVS_CMD_SET_PSTATE) {
+-		struct cpufreq_policy *policy;
+-		unsigned int pstate;
+-
+-		policy = cpufreq_cpu_get(smp_processor_id());
+-		/* Read back the P-state we are about to set */
+-		pstate = readl(base + AVS_MBOX_PARAM(0));
+-		if (use_issue_command) {
+-			ret = brcm_avs_target_index(policy, pstate);
+-			return ret ? ret : size;
+-		}
+-		policy->cur = policy->freq_table[pstate].frequency;
+-	}
+-
+-	if (use_issue_command) {
+-		ret = __issue_avs_command(priv, val, false, NULL);
+-	} else {
+-		/* Locking here is not perfect, but is only for debug. */
+-		ret = down_interruptible(&priv->sem);
+-		if (ret)
+-			return ret;
+-
+-		writel(val, base + offset);
+-		/* We have to wake up the firmware to process a command. */
+-		if (offset == AVS_MBOX_COMMAND)
+-			writel(AVS_CPU_L2_INT_MASK,
+-			       avs_intr_base + AVS_CPU_L2_SET0);
+-		up(&priv->sem);
+-	}
+-
+-	return ret ? ret : size;
+-}
+-
+-static struct debugfs_entry *__find_debugfs_entry(const char *name)
+-{
+-	int i;
+-
+-	for (i = 0; i < ARRAY_SIZE(debugfs_entries); i++)
+-		if (strcasecmp(debugfs_entries[i].name, name) == 0)
+-			return &debugfs_entries[i];
+-
+-	return NULL;
+-}
+-
+-static int brcm_avs_debug_open(struct inode *inode, struct file *file)
+-{
+-	struct debugfs_data *data;
+-	fmode_t fmode;
+-	int ret;
+-
+-	/*
+-	 * seq_open(), which is called by single_open(), clears "write" access.
+-	 * We need write access to some files, so we preserve our access mode
+-	 * and restore it.
+-	 */
+-	fmode = file->f_mode;
+-	/*
+-	 * Check access permissions even for root. We don't want to be writing
+-	 * to read-only registers. Access for regular users has already been
+-	 * checked by the VFS layer.
+-	 */
+-	if ((fmode & FMODE_WRITER) && !(inode->i_mode & S_IWUSR))
+-		return -EACCES;
+-
+-	data = kmalloc(sizeof(*data), GFP_KERNEL);
+-	if (!data)
+-		return -ENOMEM;
+-	/*
+-	 * We use the same file system operations for all our debug files. To
+-	 * produce specific output, we look up the file name upon opening a
+-	 * debugfs entry and map it to a memory offset. This offset is then used
+-	 * in the generic "show" function to read a specific register.
+-	 */
+-	data->entry = __find_debugfs_entry(file->f_path.dentry->d_iname);
+-	data->priv = inode->i_private;
+-
+-	ret = single_open(file, brcm_avs_debug_show, data);
+-	if (ret)
+-		kfree(data);
+-	file->f_mode = fmode;
+-
+-	return ret;
+-}
+-
+-static int brcm_avs_debug_release(struct inode *inode, struct file *file)
+-{
+-	struct seq_file *seq_priv = file->private_data;
+-	struct debugfs_data *data = seq_priv->private;
+-
+-	kfree(data);
+-	return single_release(inode, file);
+-}
+-
+-static const struct file_operations brcm_avs_debug_ops = {
+-	.open		= brcm_avs_debug_open,
+-	.read		= seq_read,
+-	.write		= brcm_avs_seq_write,
+-	.llseek		= seq_lseek,
+-	.release	= brcm_avs_debug_release,
+-};
+-
+-static void brcm_avs_cpufreq_debug_init(struct platform_device *pdev)
+-{
+-	struct private_data *priv = platform_get_drvdata(pdev);
+-	struct dentry *dir;
+-	int i;
+-
+-	if (!priv)
+-		return;
+-
+-	dir = debugfs_create_dir(BRCM_AVS_CPUFREQ_NAME, NULL);
+-	if (IS_ERR_OR_NULL(dir))
+-		return;
+-	priv->debugfs = dir;
+-
+-	for (i = 0; i < ARRAY_SIZE(debugfs_entries); i++) {
+-		/*
+-		 * The DEBUGFS_ENTRY macro generates uppercase strings. We
+-		 * convert them to lowercase before creating the debugfs
+-		 * entries.
+-		 */
+-		char *entry = __strtolower(debugfs_entries[i].name);
+-		fmode_t mode = debugfs_entries[i].mode;
+-
+-		if (!debugfs_create_file(entry, S_IFREG | S_IRUGO | mode,
+-					 dir, priv, &brcm_avs_debug_ops)) {
+-			priv->debugfs = NULL;
+-			debugfs_remove_recursive(dir);
+-			break;
+-		}
+-	}
+-}
+-
+-static void brcm_avs_cpufreq_debug_exit(struct platform_device *pdev)
+-{
+-	struct private_data *priv = platform_get_drvdata(pdev);
+-
+-	if (priv && priv->debugfs) {
+-		debugfs_remove_recursive(priv->debugfs);
+-		priv->debugfs = NULL;
+-	}
+-}
+-
+-#else
+-
+-static void brcm_avs_cpufreq_debug_init(struct platform_device *pdev) {}
+-static void brcm_avs_cpufreq_debug_exit(struct platform_device *pdev) {}
+-
+-#endif /* CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG */
+-
+ /*
+  * To ensure the right firmware is running we need to
+  *    - check the MAGIC matches what we expect
+@@ -1020,11 +704,8 @@ static int brcm_avs_cpufreq_probe(struct platform_device *pdev)
+ 		return ret;
+ 
+ 	brcm_avs_driver.driver_data = pdev;
+-	ret = cpufreq_register_driver(&brcm_avs_driver);
+-	if (!ret)
+-		brcm_avs_cpufreq_debug_init(pdev);
+ 
+-	return ret;
++	return cpufreq_register_driver(&brcm_avs_driver);
+ }
+ 
+ static int brcm_avs_cpufreq_remove(struct platform_device *pdev)
+@@ -1036,8 +717,6 @@ static int brcm_avs_cpufreq_remove(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	brcm_avs_cpufreq_debug_exit(pdev);
+-
+ 	priv = platform_get_drvdata(pdev);
+ 	iounmap(priv->base);
+ 	iounmap(priv->avs_intr_base);
+diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
+index b9bd827caa22..1b4d465cc5d9 100644
+--- a/drivers/firmware/efi/libstub/arm64-stub.c
++++ b/drivers/firmware/efi/libstub/arm64-stub.c
+@@ -97,6 +97,16 @@ efi_status_t handle_kernel_image(efi_system_table_t *sys_table_arg,
+ 		u32 offset = !IS_ENABLED(CONFIG_DEBUG_ALIGN_RODATA) ?
+ 			     (phys_seed >> 32) & mask : TEXT_OFFSET;
+ 
++		/*
++		 * With CONFIG_RANDOMIZE_TEXT_OFFSET=y, TEXT_OFFSET may not
++		 * be a multiple of EFI_KIMG_ALIGN, and we must ensure that
++		 * we preserve the misalignment of 'offset' relative to
++		 * EFI_KIMG_ALIGN so that statically allocated objects whose
++		 * alignment exceeds PAGE_SIZE appear correctly aligned in
++		 * memory.
++		 */
++		offset |= TEXT_OFFSET % EFI_KIMG_ALIGN;
++
+ 		/*
+ 		 * If KASLR is enabled, and we have some randomness available,
+ 		 * locate the kernel at a randomized offset in physical memory.
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+index 09d35051fdd6..3fabf9f97022 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+@@ -419,9 +419,11 @@ int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx, unsigned ring_id)
+ 
+ 	if (other) {
+ 		signed long r;
+-		r = dma_fence_wait_timeout(other, false, MAX_SCHEDULE_TIMEOUT);
++		r = dma_fence_wait(other, true);
+ 		if (r < 0) {
+-			DRM_ERROR("Error (%ld) waiting for fence!\n", r);
++			if (r != -ERESTARTSYS)
++				DRM_ERROR("Error (%ld) waiting for fence!\n", r);
++
+ 			return r;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 62c3d9cd6ef1..0492aff87382 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -748,12 +748,13 @@ static int kfd_ioctl_get_clock_counters(struct file *filep,
+ 	struct timespec64 time;
+ 
+ 	dev = kfd_device_by_id(args->gpu_id);
+-	if (dev == NULL)
+-		return -EINVAL;
+-
+-	/* Reading GPU clock counter from KGD */
+-	args->gpu_clock_counter =
+-		dev->kfd2kgd->get_gpu_clock_counter(dev->kgd);
++	if (dev)
++		/* Reading GPU clock counter from KGD */
++		args->gpu_clock_counter =
++			dev->kfd2kgd->get_gpu_clock_counter(dev->kgd);
++	else
++		/* Node without GPU resource */
++		args->gpu_clock_counter = 0;
+ 
+ 	/* No access to rdtsc. Using raw monotonic time */
+ 	getrawmonotonic64(&time);
+diff --git a/drivers/gpu/drm/drm_dumb_buffers.c b/drivers/gpu/drm/drm_dumb_buffers.c
+index 39ac15ce4702..9e2ae02f31e0 100644
+--- a/drivers/gpu/drm/drm_dumb_buffers.c
++++ b/drivers/gpu/drm/drm_dumb_buffers.c
+@@ -65,12 +65,13 @@ int drm_mode_create_dumb_ioctl(struct drm_device *dev,
+ 		return -EINVAL;
+ 
+ 	/* overflow checks for 32bit size calculations */
+-	/* NOTE: DIV_ROUND_UP() can overflow */
++	if (args->bpp > U32_MAX - 8)
++		return -EINVAL;
+ 	cpp = DIV_ROUND_UP(args->bpp, 8);
+-	if (!cpp || cpp > 0xffffffffU / args->width)
++	if (cpp > U32_MAX / args->width)
+ 		return -EINVAL;
+ 	stride = cpp * args->width;
+-	if (args->height > 0xffffffffU / stride)
++	if (args->height > U32_MAX / stride)
+ 		return -EINVAL;
+ 
+ 	/* test for wrap-around */
+diff --git a/drivers/gpu/drm/exynos/exynos_mixer.c b/drivers/gpu/drm/exynos/exynos_mixer.c
+index dc5d79465f9b..66b7cc2128e7 100644
+--- a/drivers/gpu/drm/exynos/exynos_mixer.c
++++ b/drivers/gpu/drm/exynos/exynos_mixer.c
+@@ -485,7 +485,7 @@ static void vp_video_buffer(struct mixer_context *ctx,
+ 			chroma_addr[1] = chroma_addr[0] + 0x40;
+ 		} else {
+ 			luma_addr[1] = luma_addr[0] + fb->pitches[0];
+-			chroma_addr[1] = chroma_addr[0] + fb->pitches[0];
++			chroma_addr[1] = chroma_addr[0] + fb->pitches[1];
+ 		}
+ 	} else {
+ 		luma_addr[1] = 0;
+@@ -494,6 +494,7 @@ static void vp_video_buffer(struct mixer_context *ctx,
+ 
+ 	spin_lock_irqsave(&ctx->reg_slock, flags);
+ 
++	vp_reg_write(ctx, VP_SHADOW_UPDATE, 1);
+ 	/* interlace or progressive scan mode */
+ 	val = (test_bit(MXR_BIT_INTERLACE, &ctx->flags) ? ~0 : 0);
+ 	vp_reg_writemask(ctx, VP_MODE, val, VP_MODE_LINE_SKIP);
+@@ -507,21 +508,23 @@ static void vp_video_buffer(struct mixer_context *ctx,
+ 	vp_reg_write(ctx, VP_IMG_SIZE_Y, VP_IMG_HSIZE(fb->pitches[0]) |
+ 		VP_IMG_VSIZE(fb->height));
+ 	/* chroma plane for NV12/NV21 is half the height of the luma plane */
+-	vp_reg_write(ctx, VP_IMG_SIZE_C, VP_IMG_HSIZE(fb->pitches[0]) |
++	vp_reg_write(ctx, VP_IMG_SIZE_C, VP_IMG_HSIZE(fb->pitches[1]) |
+ 		VP_IMG_VSIZE(fb->height / 2));
+ 
+ 	vp_reg_write(ctx, VP_SRC_WIDTH, state->src.w);
+-	vp_reg_write(ctx, VP_SRC_HEIGHT, state->src.h);
+ 	vp_reg_write(ctx, VP_SRC_H_POSITION,
+ 			VP_SRC_H_POSITION_VAL(state->src.x));
+-	vp_reg_write(ctx, VP_SRC_V_POSITION, state->src.y);
+-
+ 	vp_reg_write(ctx, VP_DST_WIDTH, state->crtc.w);
+ 	vp_reg_write(ctx, VP_DST_H_POSITION, state->crtc.x);
++
+ 	if (test_bit(MXR_BIT_INTERLACE, &ctx->flags)) {
++		vp_reg_write(ctx, VP_SRC_HEIGHT, state->src.h / 2);
++		vp_reg_write(ctx, VP_SRC_V_POSITION, state->src.y / 2);
+ 		vp_reg_write(ctx, VP_DST_HEIGHT, state->crtc.h / 2);
+ 		vp_reg_write(ctx, VP_DST_V_POSITION, state->crtc.y / 2);
+ 	} else {
++		vp_reg_write(ctx, VP_SRC_HEIGHT, state->src.h);
++		vp_reg_write(ctx, VP_SRC_V_POSITION, state->src.y);
+ 		vp_reg_write(ctx, VP_DST_HEIGHT, state->crtc.h);
+ 		vp_reg_write(ctx, VP_DST_V_POSITION, state->crtc.y);
+ 	}
+@@ -711,6 +714,15 @@ static irqreturn_t mixer_irq_handler(int irq, void *arg)
+ 
+ 		/* interlace scan need to check shadow register */
+ 		if (test_bit(MXR_BIT_INTERLACE, &ctx->flags)) {
++			if (test_bit(MXR_BIT_VP_ENABLED, &ctx->flags) &&
++			    vp_reg_read(ctx, VP_SHADOW_UPDATE))
++				goto out;
++
++			base = mixer_reg_read(ctx, MXR_CFG);
++			shadow = mixer_reg_read(ctx, MXR_CFG_S);
++			if (base != shadow)
++				goto out;
++
+ 			base = mixer_reg_read(ctx, MXR_GRAPHIC_BASE(0));
+ 			shadow = mixer_reg_read(ctx, MXR_GRAPHIC_BASE_S(0));
+ 			if (base != shadow)
+diff --git a/drivers/gpu/drm/exynos/regs-mixer.h b/drivers/gpu/drm/exynos/regs-mixer.h
+index c311f571bdf9..189cfa2470a8 100644
+--- a/drivers/gpu/drm/exynos/regs-mixer.h
++++ b/drivers/gpu/drm/exynos/regs-mixer.h
+@@ -47,6 +47,7 @@
+ #define MXR_MO				0x0304
+ #define MXR_RESOLUTION			0x0310
+ 
++#define MXR_CFG_S			0x2004
+ #define MXR_GRAPHIC0_BASE_S		0x2024
+ #define MXR_GRAPHIC1_BASE_S		0x2044
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 0f7324a686ca..d729b2b4b66d 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -740,7 +740,7 @@ static inline enum dsi_cmd_dst_format dsi_get_cmd_fmt(
+ 	switch (mipi_fmt) {
+ 	case MIPI_DSI_FMT_RGB888:	return CMD_DST_FORMAT_RGB888;
+ 	case MIPI_DSI_FMT_RGB666_PACKED:
+-	case MIPI_DSI_FMT_RGB666:	return VID_DST_FORMAT_RGB666;
++	case MIPI_DSI_FMT_RGB666:	return CMD_DST_FORMAT_RGB666;
+ 	case MIPI_DSI_FMT_RGB565:	return CMD_DST_FORMAT_RGB565;
+ 	default:			return CMD_DST_FORMAT_RGB888;
+ 	}
+diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c
+index c178563fcd4d..456622b46335 100644
+--- a/drivers/gpu/drm/msm/msm_fbdev.c
++++ b/drivers/gpu/drm/msm/msm_fbdev.c
+@@ -92,8 +92,7 @@ static int msm_fbdev_create(struct drm_fb_helper *helper,
+ 
+ 	if (IS_ERR(fb)) {
+ 		dev_err(dev->dev, "failed to allocate fb\n");
+-		ret = PTR_ERR(fb);
+-		goto fail;
++		return PTR_ERR(fb);
+ 	}
+ 
+ 	bo = msm_framebuffer_bo(fb, 0);
+@@ -151,13 +150,7 @@ static int msm_fbdev_create(struct drm_fb_helper *helper,
+ 
+ fail_unlock:
+ 	mutex_unlock(&dev->struct_mutex);
+-fail:
+-
+-	if (ret) {
+-		if (fb)
+-			drm_framebuffer_remove(fb);
+-	}
+-
++	drm_framebuffer_remove(fb);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 07376de9ff4c..37ec3411297b 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -132,17 +132,19 @@ static void put_pages(struct drm_gem_object *obj)
+ 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+ 
+ 	if (msm_obj->pages) {
+-		/* For non-cached buffers, ensure the new pages are clean
+-		 * because display controller, GPU, etc. are not coherent:
+-		 */
+-		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
+-			dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,
+-					msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
++		if (msm_obj->sgt) {
++			/* For non-cached buffers, ensure the new
++			 * pages are clean because display controller,
++			 * GPU, etc. are not coherent:
++			 */
++			if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
++				dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,
++					     msm_obj->sgt->nents,
++					     DMA_BIDIRECTIONAL);
+ 
+-		if (msm_obj->sgt)
+ 			sg_free_table(msm_obj->sgt);
+-
+-		kfree(msm_obj->sgt);
++			kfree(msm_obj->sgt);
++		}
+ 
+ 		if (use_pages(obj))
+ 			drm_gem_put_pages(obj, msm_obj->pages, true, false);
+diff --git a/drivers/gpu/drm/omapdrm/dss/hdmi4.c b/drivers/gpu/drm/omapdrm/dss/hdmi4.c
+index bf914f2ac99e..f3d7decbbe24 100644
+--- a/drivers/gpu/drm/omapdrm/dss/hdmi4.c
++++ b/drivers/gpu/drm/omapdrm/dss/hdmi4.c
+@@ -665,7 +665,7 @@ static int hdmi_audio_config(struct device *dev,
+ 			     struct omap_dss_audio *dss_audio)
+ {
+ 	struct omap_hdmi *hd = dev_get_drvdata(dev);
+-	int ret;
++	int ret = 0;
+ 
+ 	mutex_lock(&hd->lock);
+ 
+diff --git a/drivers/gpu/drm/omapdrm/dss/hdmi4_core.c b/drivers/gpu/drm/omapdrm/dss/hdmi4_core.c
+index 35ed2add6189..813ba42f2753 100644
+--- a/drivers/gpu/drm/omapdrm/dss/hdmi4_core.c
++++ b/drivers/gpu/drm/omapdrm/dss/hdmi4_core.c
+@@ -922,8 +922,13 @@ int hdmi4_core_init(struct platform_device *pdev, struct hdmi_core_data *core)
+ {
+ 	const struct hdmi4_features *features;
+ 	struct resource *res;
++	const struct soc_device_attribute *soc;
+ 
+-	features = soc_device_match(hdmi4_soc_devices)->data;
++	soc = soc_device_match(hdmi4_soc_devices);
++	if (!soc)
++		return -ENODEV;
++
++	features = soc->data;
+ 	core->cts_swmode = features->cts_swmode;
+ 	core->audio_use_mclk = features->audio_use_mclk;
+ 
+diff --git a/drivers/gpu/drm/omapdrm/dss/hdmi5.c b/drivers/gpu/drm/omapdrm/dss/hdmi5.c
+index 689cda41858b..dc36274bdc15 100644
+--- a/drivers/gpu/drm/omapdrm/dss/hdmi5.c
++++ b/drivers/gpu/drm/omapdrm/dss/hdmi5.c
+@@ -660,7 +660,7 @@ static int hdmi_audio_config(struct device *dev,
+ 			     struct omap_dss_audio *dss_audio)
+ {
+ 	struct omap_hdmi *hd = dev_get_drvdata(dev);
+-	int ret;
++	int ret = 0;
+ 
+ 	mutex_lock(&hd->lock);
+ 
+diff --git a/drivers/gpu/drm/omapdrm/omap_connector.c b/drivers/gpu/drm/omapdrm/omap_connector.c
+index a0d7b1d905e8..5cde26ac937b 100644
+--- a/drivers/gpu/drm/omapdrm/omap_connector.c
++++ b/drivers/gpu/drm/omapdrm/omap_connector.c
+@@ -121,6 +121,9 @@ static int omap_connector_get_modes(struct drm_connector *connector)
+ 	if (dssdrv->read_edid) {
+ 		void *edid = kzalloc(MAX_EDID, GFP_KERNEL);
+ 
++		if (!edid)
++			return 0;
++
+ 		if ((dssdrv->read_edid(dssdev, edid, MAX_EDID) > 0) &&
+ 				drm_edid_is_valid(edid)) {
+ 			drm_mode_connector_update_edid_property(
+@@ -139,6 +142,9 @@ static int omap_connector_get_modes(struct drm_connector *connector)
+ 		struct drm_display_mode *mode = drm_mode_create(dev);
+ 		struct videomode vm = {0};
+ 
++		if (!mode)
++			return 0;
++
+ 		dssdrv->get_timings(dssdev, &vm);
+ 
+ 		drm_display_mode_from_videomode(&vm, mode);
+@@ -200,6 +206,10 @@ static int omap_connector_mode_valid(struct drm_connector *connector,
+ 	if (!r) {
+ 		/* check if vrefresh is still valid */
+ 		new_mode = drm_mode_duplicate(dev, mode);
++
++		if (!new_mode)
++			return MODE_BAD;
++
+ 		new_mode->clock = vm.pixelclock / 1000;
+ 		new_mode->vrefresh = 0;
+ 		if (mode->vrefresh == drm_mode_vrefresh(new_mode))
+diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+index 4be0c94673f5..17d1baee522b 100644
+--- a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
++++ b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+@@ -401,12 +401,16 @@ int tiler_unpin(struct tiler_block *block)
+ struct tiler_block *tiler_reserve_2d(enum tiler_fmt fmt, uint16_t w,
+ 		uint16_t h, uint16_t align)
+ {
+-	struct tiler_block *block = kzalloc(sizeof(*block), GFP_KERNEL);
++	struct tiler_block *block;
+ 	u32 min_align = 128;
+ 	int ret;
+ 	unsigned long flags;
+ 	u32 slot_bytes;
+ 
++	block = kzalloc(sizeof(*block), GFP_KERNEL);
++	if (!block)
++		return ERR_PTR(-ENOMEM);
++
+ 	BUG_ON(!validfmt(fmt));
+ 
+ 	/* convert width/height to slots */
+diff --git a/drivers/gpu/drm/omapdrm/tcm-sita.c b/drivers/gpu/drm/omapdrm/tcm-sita.c
+index 661362d072f7..ebfdb38b4616 100644
+--- a/drivers/gpu/drm/omapdrm/tcm-sita.c
++++ b/drivers/gpu/drm/omapdrm/tcm-sita.c
+@@ -90,7 +90,7 @@ static int l2r_t2b(uint16_t w, uint16_t h, uint16_t a, int16_t offset,
+ {
+ 	int i;
+ 	unsigned long index;
+-	bool area_free;
++	bool area_free = false;
+ 	unsigned long slots_per_band = PAGE_SIZE / slot_bytes;
+ 	unsigned long bit_offset = (offset > 0) ? offset / slot_bytes : 0;
+ 	unsigned long curr_bit = bit_offset;
+diff --git a/drivers/gpu/drm/vc4/vc4_dpi.c b/drivers/gpu/drm/vc4/vc4_dpi.c
+index 72c9dbd81d7f..f185812970da 100644
+--- a/drivers/gpu/drm/vc4/vc4_dpi.c
++++ b/drivers/gpu/drm/vc4/vc4_dpi.c
+@@ -96,7 +96,6 @@ struct vc4_dpi {
+ 	struct platform_device *pdev;
+ 
+ 	struct drm_encoder *encoder;
+-	struct drm_connector *connector;
+ 
+ 	void __iomem *regs;
+ 
+@@ -164,14 +163,31 @@ static void vc4_dpi_encoder_disable(struct drm_encoder *encoder)
+ 
+ static void vc4_dpi_encoder_enable(struct drm_encoder *encoder)
+ {
++	struct drm_device *dev = encoder->dev;
+ 	struct drm_display_mode *mode = &encoder->crtc->mode;
+ 	struct vc4_dpi_encoder *vc4_encoder = to_vc4_dpi_encoder(encoder);
+ 	struct vc4_dpi *dpi = vc4_encoder->dpi;
++	struct drm_connector_list_iter conn_iter;
++	struct drm_connector *connector = NULL, *connector_scan;
+ 	u32 dpi_c = DPI_ENABLE | DPI_OUTPUT_ENABLE_MODE;
+ 	int ret;
+ 
+-	if (dpi->connector->display_info.num_bus_formats) {
+-		u32 bus_format = dpi->connector->display_info.bus_formats[0];
++	/* Look up the connector attached to DPI so we can get the
++	 * bus_format.  Ideally the bridge would tell us the
++	 * bus_format we want, but it doesn't yet, so assume that it's
++	 * uniform throughout the bridge chain.
++	 */
++	drm_connector_list_iter_begin(dev, &conn_iter);
++	drm_for_each_connector_iter(connector_scan, &conn_iter) {
++		if (connector_scan->encoder == encoder) {
++			connector = connector_scan;
++			break;
++		}
++	}
++	drm_connector_list_iter_end(&conn_iter);
++
++	if (connector && connector->display_info.num_bus_formats) {
++		u32 bus_format = connector->display_info.bus_formats[0];
+ 
+ 		switch (bus_format) {
+ 		case MEDIA_BUS_FMT_RGB888_1X24:
+@@ -199,6 +215,9 @@ static void vc4_dpi_encoder_enable(struct drm_encoder *encoder)
+ 			DRM_ERROR("Unknown media bus format %d\n", bus_format);
+ 			break;
+ 		}
++	} else {
++		/* Default to 24bit if no connector found. */
++		dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB, DPI_FORMAT);
+ 	}
+ 
+ 	if (mode->flags & DRM_MODE_FLAG_NHSYNC)
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index 19c499f5623d..84ace3b62bb0 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -448,10 +448,11 @@ config HID_LENOVO
+ 	select NEW_LEDS
+ 	select LEDS_CLASS
+ 	---help---
+-	Support for Lenovo devices that are not fully compliant with HID standard.
++	Support for IBM/Lenovo devices that are not fully compliant with HID standard.
+ 
+-	Say Y if you want support for the non-compliant features of the Lenovo
+-	Thinkpad standalone keyboards, e.g:
++	Say Y if you want support for horizontal scrolling of the IBM/Lenovo
++	Scrollpoint mice or the non-compliant features of the Lenovo Thinkpad
++	standalone keyboards, e.g:
+ 	- ThinkPad USB Keyboard with TrackPoint (supports extra LEDs and trackpoint
+ 	  configuration)
+ 	- ThinkPad Compact Bluetooth Keyboard with TrackPoint (supports Fn keys)
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index c631d2c8988d..a026cc76f4f1 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -546,6 +546,13 @@
+ #define USB_VENDOR_ID_HUION		0x256c
+ #define USB_DEVICE_ID_HUION_TABLET	0x006e
+ 
++#define USB_VENDOR_ID_IBM					0x04b3
++#define USB_DEVICE_ID_IBM_SCROLLPOINT_III			0x3100
++#define USB_DEVICE_ID_IBM_SCROLLPOINT_PRO			0x3103
++#define USB_DEVICE_ID_IBM_SCROLLPOINT_OPTICAL			0x3105
++#define USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL		0x3108
++#define USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL_PRO	0x3109
++
+ #define USB_VENDOR_ID_IDEACOM		0x1cb6
+ #define USB_DEVICE_ID_IDEACOM_IDC6650	0x6650
+ #define USB_DEVICE_ID_IDEACOM_IDC6651	0x6651
+@@ -678,6 +685,7 @@
+ #define USB_DEVICE_ID_LENOVO_TPKBD	0x6009
+ #define USB_DEVICE_ID_LENOVO_CUSBKBD	0x6047
+ #define USB_DEVICE_ID_LENOVO_CBTKBD	0x6048
++#define USB_DEVICE_ID_LENOVO_SCROLLPOINT_OPTICAL	0x6049
+ #define USB_DEVICE_ID_LENOVO_TPPRODOCK	0x6067
+ #define USB_DEVICE_ID_LENOVO_X1_COVER	0x6085
+ #define USB_DEVICE_ID_LENOVO_X1_TAB	0x60a3
+@@ -958,6 +966,7 @@
+ #define USB_DEVICE_ID_SIS817_TOUCH	0x0817
+ #define USB_DEVICE_ID_SIS_TS		0x1013
+ #define USB_DEVICE_ID_SIS1030_TOUCH	0x1030
++#define USB_DEVICE_ID_SIS10FB_TOUCH	0x10fb
+ 
+ #define USB_VENDOR_ID_SKYCABLE			0x1223
+ #define	USB_DEVICE_ID_SKYCABLE_WIRELESS_PRESENTER	0x3F07
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index 1ac4ff4d57a6..643b6eb54442 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -6,6 +6,17 @@
+  *
+  *  Copyright (c) 2012 Bernhard Seibold
+  *  Copyright (c) 2014 Jamie Lentin <jm@lentin.co.uk>
++ *
++ * Linux IBM/Lenovo Scrollpoint mouse driver:
++ * - IBM Scrollpoint III
++ * - IBM Scrollpoint Pro
++ * - IBM Scrollpoint Optical
++ * - IBM Scrollpoint Optical 800dpi
++ * - IBM Scrollpoint Optical 800dpi Pro
++ * - Lenovo Scrollpoint Optical
++ *
++ *  Copyright (c) 2012 Peter De Wachter <pdewacht@gmail.com>
++ *  Copyright (c) 2018 Peter Ganzhorn <peter.ganzhorn@gmail.com>
+  */
+ 
+ /*
+@@ -160,6 +171,17 @@ static int lenovo_input_mapping_cptkbd(struct hid_device *hdev,
+ 	return 0;
+ }
+ 
++static int lenovo_input_mapping_scrollpoint(struct hid_device *hdev,
++		struct hid_input *hi, struct hid_field *field,
++		struct hid_usage *usage, unsigned long **bit, int *max)
++{
++	if (usage->hid == HID_GD_Z) {
++		hid_map_usage(hi, usage, bit, max, EV_REL, REL_HWHEEL);
++		return 1;
++	}
++	return 0;
++}
++
+ static int lenovo_input_mapping(struct hid_device *hdev,
+ 		struct hid_input *hi, struct hid_field *field,
+ 		struct hid_usage *usage, unsigned long **bit, int *max)
+@@ -172,6 +194,14 @@ static int lenovo_input_mapping(struct hid_device *hdev,
+ 	case USB_DEVICE_ID_LENOVO_CBTKBD:
+ 		return lenovo_input_mapping_cptkbd(hdev, hi, field,
+ 							usage, bit, max);
++	case USB_DEVICE_ID_IBM_SCROLLPOINT_III:
++	case USB_DEVICE_ID_IBM_SCROLLPOINT_PRO:
++	case USB_DEVICE_ID_IBM_SCROLLPOINT_OPTICAL:
++	case USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL:
++	case USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL_PRO:
++	case USB_DEVICE_ID_LENOVO_SCROLLPOINT_OPTICAL:
++		return lenovo_input_mapping_scrollpoint(hdev, hi, field,
++							usage, bit, max);
+ 	default:
+ 		return 0;
+ 	}
+@@ -883,6 +913,12 @@ static const struct hid_device_id lenovo_devices[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_CUSBKBD) },
+ 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_CBTKBD) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_TPPRODOCK) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_IBM, USB_DEVICE_ID_IBM_SCROLLPOINT_III) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_IBM, USB_DEVICE_ID_IBM_SCROLLPOINT_PRO) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_IBM, USB_DEVICE_ID_IBM_SCROLLPOINT_OPTICAL) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_IBM, USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_IBM, USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL_PRO) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_SCROLLPOINT_OPTICAL) },
+ 	{ }
+ };
+ 
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index fd9f70a8b813..4c0d2491db08 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -174,6 +174,8 @@ static const struct i2c_hid_quirks {
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+ 	{ I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_3118,
+ 		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
++	{ USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH,
++		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ 0, 0 }
+ };
+ 
+diff --git a/drivers/hid/intel-ish-hid/ishtp/bus.c b/drivers/hid/intel-ish-hid/ishtp/bus.c
+index f272cdd9bd55..2623a567ffba 100644
+--- a/drivers/hid/intel-ish-hid/ishtp/bus.c
++++ b/drivers/hid/intel-ish-hid/ishtp/bus.c
+@@ -418,7 +418,7 @@ static struct ishtp_cl_device *ishtp_bus_add_device(struct ishtp_device *dev,
+ 		list_del(&device->device_link);
+ 		spin_unlock_irqrestore(&dev->device_list_lock, flags);
+ 		dev_err(dev->devc, "Failed to register ISHTP client device\n");
+-		kfree(device);
++		put_device(&device->dev);
+ 		return NULL;
+ 	}
+ 
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index b54ef1ffcbec..ee7a37eb159a 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -1213,8 +1213,10 @@ static int __wacom_devm_sysfs_create_group(struct wacom *wacom,
+ 	devres->root = root;
+ 
+ 	error = sysfs_create_group(devres->root, group);
+-	if (error)
++	if (error) {
++		devres_free(devres);
+ 		return error;
++	}
+ 
+ 	devres_add(&wacom->hdev->dev, devres);
+ 
+diff --git a/drivers/i2c/busses/i2c-pmcmsp.c b/drivers/i2c/busses/i2c-pmcmsp.c
+index 2aa0e83174c5..dae8ac618a52 100644
+--- a/drivers/i2c/busses/i2c-pmcmsp.c
++++ b/drivers/i2c/busses/i2c-pmcmsp.c
+@@ -564,10 +564,10 @@ static int pmcmsptwi_master_xfer(struct i2c_adapter *adap,
+ 		 * TODO: We could potentially loop and retry in the case
+ 		 * of MSP_TWI_XFER_TIMEOUT.
+ 		 */
+-		return -1;
++		return -EIO;
+ 	}
+ 
+-	return 0;
++	return num;
+ }
+ 
+ static u32 pmcmsptwi_i2c_func(struct i2c_adapter *adapter)
+diff --git a/drivers/i2c/busses/i2c-sprd.c b/drivers/i2c/busses/i2c-sprd.c
+index 25fcc3c1e32b..4053259bccb8 100644
+--- a/drivers/i2c/busses/i2c-sprd.c
++++ b/drivers/i2c/busses/i2c-sprd.c
+@@ -86,6 +86,7 @@ struct sprd_i2c {
+ 	u32 count;
+ 	int irq;
+ 	int err;
++	bool is_suspended;
+ };
+ 
+ static void sprd_i2c_set_count(struct sprd_i2c *i2c_dev, u32 count)
+@@ -283,6 +284,9 @@ static int sprd_i2c_master_xfer(struct i2c_adapter *i2c_adap,
+ 	struct sprd_i2c *i2c_dev = i2c_adap->algo_data;
+ 	int im, ret;
+ 
++	if (i2c_dev->is_suspended)
++		return -EBUSY;
++
+ 	ret = pm_runtime_get_sync(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+@@ -364,13 +368,12 @@ static irqreturn_t sprd_i2c_isr_thread(int irq, void *dev_id)
+ 	struct sprd_i2c *i2c_dev = dev_id;
+ 	struct i2c_msg *msg = i2c_dev->msg;
+ 	bool ack = !(readl(i2c_dev->base + I2C_STATUS) & I2C_RX_ACK);
+-	u32 i2c_count = readl(i2c_dev->base + I2C_COUNT);
+ 	u32 i2c_tran;
+ 
+ 	if (msg->flags & I2C_M_RD)
+ 		i2c_tran = i2c_dev->count >= I2C_FIFO_FULL_THLD;
+ 	else
+-		i2c_tran = i2c_count;
++		i2c_tran = i2c_dev->count;
+ 
+ 	/*
+ 	 * If we got one ACK from slave when writing data, and we did not
+@@ -408,14 +411,13 @@ static irqreturn_t sprd_i2c_isr(int irq, void *dev_id)
+ {
+ 	struct sprd_i2c *i2c_dev = dev_id;
+ 	struct i2c_msg *msg = i2c_dev->msg;
+-	u32 i2c_count = readl(i2c_dev->base + I2C_COUNT);
+ 	bool ack = !(readl(i2c_dev->base + I2C_STATUS) & I2C_RX_ACK);
+ 	u32 i2c_tran;
+ 
+ 	if (msg->flags & I2C_M_RD)
+ 		i2c_tran = i2c_dev->count >= I2C_FIFO_FULL_THLD;
+ 	else
+-		i2c_tran = i2c_count;
++		i2c_tran = i2c_dev->count;
+ 
+ 	/*
+ 	 * If we did not get one ACK from slave when writing data, then we
+@@ -586,11 +588,23 @@ static int sprd_i2c_remove(struct platform_device *pdev)
+ 
+ static int __maybe_unused sprd_i2c_suspend_noirq(struct device *pdev)
+ {
++	struct sprd_i2c *i2c_dev = dev_get_drvdata(pdev);
++
++	i2c_lock_adapter(&i2c_dev->adap);
++	i2c_dev->is_suspended = true;
++	i2c_unlock_adapter(&i2c_dev->adap);
++
+ 	return pm_runtime_force_suspend(pdev);
+ }
+ 
+ static int __maybe_unused sprd_i2c_resume_noirq(struct device *pdev)
+ {
++	struct sprd_i2c *i2c_dev = dev_get_drvdata(pdev);
++
++	i2c_lock_adapter(&i2c_dev->adap);
++	i2c_dev->is_suspended = false;
++	i2c_unlock_adapter(&i2c_dev->adap);
++
+ 	return pm_runtime_force_resume(pdev);
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-viperboard.c b/drivers/i2c/busses/i2c-viperboard.c
+index e4be86b3de9a..7235c7302bb7 100644
+--- a/drivers/i2c/busses/i2c-viperboard.c
++++ b/drivers/i2c/busses/i2c-viperboard.c
+@@ -337,7 +337,7 @@ static int vprbrd_i2c_xfer(struct i2c_adapter *i2c, struct i2c_msg *msgs,
+ 		}
+ 		mutex_unlock(&vb->lock);
+ 	}
+-	return 0;
++	return num;
+ error:
+ 	mutex_unlock(&vb->lock);
+ 	return error;
+diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
+index 8517d6ea91a6..6154da184fc1 100644
+--- a/drivers/infiniband/Kconfig
++++ b/drivers/infiniband/Kconfig
+@@ -62,9 +62,12 @@ config INFINIBAND_ON_DEMAND_PAGING
+ 	  pages on demand instead.
+ 
+ config INFINIBAND_ADDR_TRANS
+-	bool
++	bool "RDMA/CM"
+ 	depends on INFINIBAND
+ 	default y
++	---help---
++	  Support for RDMA communication manager (CM).
++	  This allows for a generic connection abstraction over RDMA.
+ 
+ config INFINIBAND_ADDR_TRANS_CONFIGFS
+ 	bool
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index a5367c5efbe7..b5a7897eb180 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -420,6 +420,8 @@ struct cma_hdr {
+ #define CMA_VERSION 0x00
+ 
+ struct cma_req_info {
++	struct sockaddr_storage listen_addr_storage;
++	struct sockaddr_storage src_addr_storage;
+ 	struct ib_device *device;
+ 	int port;
+ 	union ib_gid local_gid;
+@@ -899,7 +901,6 @@ static int cma_modify_qp_rtr(struct rdma_id_private *id_priv,
+ {
+ 	struct ib_qp_attr qp_attr;
+ 	int qp_attr_mask, ret;
+-	union ib_gid sgid;
+ 
+ 	mutex_lock(&id_priv->qp_mutex);
+ 	if (!id_priv->id.qp) {
+@@ -922,12 +923,6 @@ static int cma_modify_qp_rtr(struct rdma_id_private *id_priv,
+ 	if (ret)
+ 		goto out;
+ 
+-	ret = ib_query_gid(id_priv->id.device, id_priv->id.port_num,
+-			   rdma_ah_read_grh(&qp_attr.ah_attr)->sgid_index,
+-			   &sgid, NULL);
+-	if (ret)
+-		goto out;
+-
+ 	BUG_ON(id_priv->cma_dev->device != id_priv->id.device);
+ 
+ 	if (conn_param)
+@@ -1373,11 +1368,11 @@ static bool validate_net_dev(struct net_device *net_dev,
+ }
+ 
+ static struct net_device *cma_get_net_dev(struct ib_cm_event *ib_event,
+-					  const struct cma_req_info *req)
++					  struct cma_req_info *req)
+ {
+-	struct sockaddr_storage listen_addr_storage, src_addr_storage;
+-	struct sockaddr *listen_addr = (struct sockaddr *)&listen_addr_storage,
+-			*src_addr = (struct sockaddr *)&src_addr_storage;
++	struct sockaddr *listen_addr =
++			(struct sockaddr *)&req->listen_addr_storage;
++	struct sockaddr *src_addr = (struct sockaddr *)&req->src_addr_storage;
+ 	struct net_device *net_dev;
+ 	const union ib_gid *gid = req->has_gid ? &req->local_gid : NULL;
+ 	int err;
+@@ -1392,11 +1387,6 @@ static struct net_device *cma_get_net_dev(struct ib_cm_event *ib_event,
+ 	if (!net_dev)
+ 		return ERR_PTR(-ENODEV);
+ 
+-	if (!validate_net_dev(net_dev, listen_addr, src_addr)) {
+-		dev_put(net_dev);
+-		return ERR_PTR(-EHOSTUNREACH);
+-	}
+-
+ 	return net_dev;
+ }
+ 
+@@ -1532,15 +1522,51 @@ static struct rdma_id_private *cma_id_from_event(struct ib_cm_id *cm_id,
+ 		}
+ 	}
+ 
++	/*
++	 * Net namespace might be getting deleted while route lookup,
++	 * cm_id lookup is in progress. Therefore, perform netdevice
++	 * validation, cm_id lookup under rcu lock.
++	 * RCU lock along with netdevice state check, synchronizes with
++	 * netdevice migrating to different net namespace and also avoids
++	 * case where net namespace doesn't get deleted while lookup is in
++	 * progress.
++	 * If the device state is not IFF_UP, its properties such as ifindex
++	 * and nd_net cannot be trusted to remain valid without rcu lock.
++	 * net/core/dev.c change_net_namespace() ensures to synchronize with
++	 * ongoing operations on net device after device is closed using
++	 * synchronize_net().
++	 */
++	rcu_read_lock();
++	if (*net_dev) {
++		/*
++		 * If netdevice is down, it is likely that it is administratively
++		 * down or it might be migrating to different namespace.
++		 * In that case avoid further processing, as the net namespace
++		 * or ifindex may change.
++		 */
++		if (((*net_dev)->flags & IFF_UP) == 0) {
++			id_priv = ERR_PTR(-EHOSTUNREACH);
++			goto err;
++		}
++
++		if (!validate_net_dev(*net_dev,
++				 (struct sockaddr *)&req.listen_addr_storage,
++				 (struct sockaddr *)&req.src_addr_storage)) {
++			id_priv = ERR_PTR(-EHOSTUNREACH);
++			goto err;
++		}
++	}
++
+ 	bind_list = cma_ps_find(*net_dev ? dev_net(*net_dev) : &init_net,
+ 				rdma_ps_from_service_id(req.service_id),
+ 				cma_port_from_service_id(req.service_id));
+ 	id_priv = cma_find_listener(bind_list, cm_id, ib_event, &req, *net_dev);
++err:
++	rcu_read_unlock();
+ 	if (IS_ERR(id_priv) && *net_dev) {
+ 		dev_put(*net_dev);
+ 		*net_dev = NULL;
+ 	}
+-
+ 	return id_priv;
+ }
+ 
+diff --git a/drivers/infiniband/core/iwpm_util.c b/drivers/infiniband/core/iwpm_util.c
+index 81528f64061a..cb0fecc958b5 100644
+--- a/drivers/infiniband/core/iwpm_util.c
++++ b/drivers/infiniband/core/iwpm_util.c
+@@ -114,7 +114,7 @@ int iwpm_create_mapinfo(struct sockaddr_storage *local_sockaddr,
+ 			struct sockaddr_storage *mapped_sockaddr,
+ 			u8 nl_client)
+ {
+-	struct hlist_head *hash_bucket_head;
++	struct hlist_head *hash_bucket_head = NULL;
+ 	struct iwpm_mapping_info *map_info;
+ 	unsigned long flags;
+ 	int ret = -EINVAL;
+@@ -142,6 +142,9 @@ int iwpm_create_mapinfo(struct sockaddr_storage *local_sockaddr,
+ 		}
+ 	}
+ 	spin_unlock_irqrestore(&iwpm_mapinfo_lock, flags);
++
++	if (!hash_bucket_head)
++		kfree(map_info);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
+index c50596f7f98a..b28452a55a08 100644
+--- a/drivers/infiniband/core/mad.c
++++ b/drivers/infiniband/core/mad.c
+@@ -59,7 +59,7 @@ module_param_named(recv_queue_size, mad_recvq_size, int, 0444);
+ MODULE_PARM_DESC(recv_queue_size, "Size of receive queue in number of work requests");
+ 
+ static struct list_head ib_mad_port_list;
+-static u32 ib_mad_client_id = 0;
++static atomic_t ib_mad_client_id = ATOMIC_INIT(0);
+ 
+ /* Port list lock */
+ static DEFINE_SPINLOCK(ib_mad_port_list_lock);
+@@ -377,7 +377,7 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
+ 	}
+ 
+ 	spin_lock_irqsave(&port_priv->reg_lock, flags);
+-	mad_agent_priv->agent.hi_tid = ++ib_mad_client_id;
++	mad_agent_priv->agent.hi_tid = atomic_inc_return(&ib_mad_client_id);
+ 
+ 	/*
+ 	 * Make sure MAD registration (if supplied)
+diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c
+index 339b85145044..6a71cdf1fe33 100644
+--- a/drivers/infiniband/core/uverbs_ioctl.c
++++ b/drivers/infiniband/core/uverbs_ioctl.c
+@@ -191,6 +191,15 @@ static int uverbs_validate_kernel_mandatory(const struct uverbs_method_spec *met
+ 			return -EINVAL;
+ 	}
+ 
++	for (; i < method_spec->num_buckets; i++) {
++		struct uverbs_attr_spec_hash *attr_spec_bucket =
++			method_spec->attr_buckets[i];
++
++		if (!bitmap_empty(attr_spec_bucket->mandatory_attrs_bitmask,
++				  attr_spec_bucket->num_attrs))
++			return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
+index a97055dd4fbd..b5fab55cc275 100644
+--- a/drivers/infiniband/hw/hfi1/affinity.c
++++ b/drivers/infiniband/hw/hfi1/affinity.c
+@@ -412,7 +412,6 @@ static void hfi1_cleanup_sdma_notifier(struct hfi1_msix_entry *msix)
+ static int get_irq_affinity(struct hfi1_devdata *dd,
+ 			    struct hfi1_msix_entry *msix)
+ {
+-	int ret;
+ 	cpumask_var_t diff;
+ 	struct hfi1_affinity_node *entry;
+ 	struct cpu_mask_set *set = NULL;
+@@ -424,10 +423,6 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
+ 	extra[0] = '\0';
+ 	cpumask_clear(&msix->mask);
+ 
+-	ret = zalloc_cpumask_var(&diff, GFP_KERNEL);
+-	if (!ret)
+-		return -ENOMEM;
+-
+ 	entry = node_affinity_lookup(dd->node);
+ 
+ 	switch (msix->type) {
+@@ -458,6 +453,9 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
+ 	 * finds its CPU here.
+ 	 */
+ 	if (cpu == -1 && set) {
++		if (!zalloc_cpumask_var(&diff, GFP_KERNEL))
++			return -ENOMEM;
++
+ 		if (cpumask_equal(&set->mask, &set->used)) {
+ 			/*
+ 			 * We've used up all the CPUs, bump up the generation
+@@ -469,6 +467,8 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
+ 		cpumask_andnot(diff, &set->mask, &set->used);
+ 		cpu = cpumask_first(diff);
+ 		cpumask_set_cpu(cpu, &set->used);
++
++		free_cpumask_var(diff);
+ 	}
+ 
+ 	cpumask_set_cpu(cpu, &msix->mask);
+@@ -482,7 +482,6 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
+ 		hfi1_setup_sdma_notifier(msix);
+ 	}
+ 
+-	free_cpumask_var(diff);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index b27fe75c7102..6309edf811df 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -88,9 +88,9 @@
+  * pio buffers per ctxt, etc.)  Zero means use one user context per CPU.
+  */
+ int num_user_contexts = -1;
+-module_param_named(num_user_contexts, num_user_contexts, uint, S_IRUGO);
++module_param_named(num_user_contexts, num_user_contexts, int, 0444);
+ MODULE_PARM_DESC(
+-	num_user_contexts, "Set max number of user contexts to use");
++	num_user_contexts, "Set max number of user contexts to use (default: -1 will use the real (non-HT) CPU count)");
+ 
+ uint krcvqs[RXE_NUM_DATA_VL];
+ int krcvqsset;
+@@ -1209,30 +1209,49 @@ static void finalize_asic_data(struct hfi1_devdata *dd,
+ 	kfree(ad);
+ }
+ 
+-static void __hfi1_free_devdata(struct kobject *kobj)
++/**
++ * hfi1_clean_devdata - cleans up per-unit data structure
++ * @dd: pointer to a valid devdata structure
++ *
++ * It cleans up all data structures set up by
++ * by hfi1_alloc_devdata().
++ */
++static void hfi1_clean_devdata(struct hfi1_devdata *dd)
+ {
+-	struct hfi1_devdata *dd =
+-		container_of(kobj, struct hfi1_devdata, kobj);
+ 	struct hfi1_asic_data *ad;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&hfi1_devs_lock, flags);
+-	idr_remove(&hfi1_unit_table, dd->unit);
+-	list_del(&dd->list);
++	if (!list_empty(&dd->list)) {
++		idr_remove(&hfi1_unit_table, dd->unit);
++		list_del_init(&dd->list);
++	}
+ 	ad = release_asic_data(dd);
+ 	spin_unlock_irqrestore(&hfi1_devs_lock, flags);
+-	if (ad)
+-		finalize_asic_data(dd, ad);
++
++	finalize_asic_data(dd, ad);
+ 	free_platform_config(dd);
+ 	rcu_barrier(); /* wait for rcu callbacks to complete */
+ 	free_percpu(dd->int_counter);
+ 	free_percpu(dd->rcv_limit);
+ 	free_percpu(dd->send_schedule);
+ 	free_percpu(dd->tx_opstats);
++	dd->int_counter   = NULL;
++	dd->rcv_limit     = NULL;
++	dd->send_schedule = NULL;
++	dd->tx_opstats    = NULL;
+ 	sdma_clean(dd, dd->num_sdma);
+ 	rvt_dealloc_device(&dd->verbs_dev.rdi);
+ }
+ 
++static void __hfi1_free_devdata(struct kobject *kobj)
++{
++	struct hfi1_devdata *dd =
++		container_of(kobj, struct hfi1_devdata, kobj);
++
++	hfi1_clean_devdata(dd);
++}
++
+ static struct kobj_type hfi1_devdata_type = {
+ 	.release = __hfi1_free_devdata,
+ };
+@@ -1333,9 +1352,7 @@ struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev, size_t extra)
+ 	return dd;
+ 
+ bail:
+-	if (!list_empty(&dd->list))
+-		list_del_init(&dd->list);
+-	rvt_dealloc_device(&dd->verbs_dev.rdi);
++	hfi1_clean_devdata(dd);
+ 	return ERR_PTR(ret);
+ }
+ 
+diff --git a/drivers/infiniband/hw/hfi1/platform.c b/drivers/infiniband/hw/hfi1/platform.c
+index d486355880cb..cbf7faa5038c 100644
+--- a/drivers/infiniband/hw/hfi1/platform.c
++++ b/drivers/infiniband/hw/hfi1/platform.c
+@@ -199,6 +199,7 @@ void free_platform_config(struct hfi1_devdata *dd)
+ {
+ 	/* Release memory allocated for eprom or fallback file read. */
+ 	kfree(dd->platform_config.data);
++	dd->platform_config.data = NULL;
+ }
+ 
+ void get_port_type(struct hfi1_pportdata *ppd)
+diff --git a/drivers/infiniband/hw/hfi1/qsfp.c b/drivers/infiniband/hw/hfi1/qsfp.c
+index 1869f639c3ae..b5966991d647 100644
+--- a/drivers/infiniband/hw/hfi1/qsfp.c
++++ b/drivers/infiniband/hw/hfi1/qsfp.c
+@@ -204,6 +204,8 @@ static void clean_i2c_bus(struct hfi1_i2c_bus *bus)
+ 
+ void clean_up_i2c(struct hfi1_devdata *dd, struct hfi1_asic_data *ad)
+ {
++	if (!ad)
++		return;
+ 	clean_i2c_bus(ad->i2c_bus0);
+ 	ad->i2c_bus0 = NULL;
+ 	clean_i2c_bus(ad->i2c_bus1);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index 0eeabfbee192..0d8c113083ad 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -912,7 +912,7 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
+ 		obj_per_chunk = buf_chunk_size / obj_size;
+ 		num_hem = (nobj + obj_per_chunk - 1) / obj_per_chunk;
+ 		bt_chunk_num = bt_chunk_size / 8;
+-		if (table->type >= HEM_TYPE_MTT)
++		if (type >= HEM_TYPE_MTT)
+ 			num_bt_l0 = bt_chunk_num;
+ 
+ 		table->hem = kcalloc(num_hem, sizeof(*table->hem),
+@@ -920,7 +920,7 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
+ 		if (!table->hem)
+ 			goto err_kcalloc_hem_buf;
+ 
+-		if (check_whether_bt_num_3(table->type, hop_num)) {
++		if (check_whether_bt_num_3(type, hop_num)) {
+ 			unsigned long num_bt_l1;
+ 
+ 			num_bt_l1 = (num_hem + bt_chunk_num - 1) /
+@@ -939,8 +939,8 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
+ 				goto err_kcalloc_l1_dma;
+ 		}
+ 
+-		if (check_whether_bt_num_2(table->type, hop_num) ||
+-			check_whether_bt_num_3(table->type, hop_num)) {
++		if (check_whether_bt_num_2(type, hop_num) ||
++			check_whether_bt_num_3(type, hop_num)) {
+ 			table->bt_l0 = kcalloc(num_bt_l0, sizeof(*table->bt_l0),
+ 					       GFP_KERNEL);
+ 			if (!table->bt_l0)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index ec638778661c..3d056c67a339 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -71,6 +71,11 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ 			return -EINVAL;
+ 		}
+ 
++		if (wr->opcode == IB_WR_RDMA_READ) {
++			dev_err(hr_dev->dev, "Not support inline data!\n");
++			return -EINVAL;
++		}
++
+ 		for (i = 0; i < wr->num_sge; i++) {
+ 			memcpy(wqe, ((void *)wr->sg_list[i].addr),
+ 			       wr->sg_list[i].length);
+@@ -148,7 +153,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ 		     ibqp->qp_type != IB_QPT_GSI &&
+ 		     ibqp->qp_type != IB_QPT_UD)) {
+ 		dev_err(dev, "Not supported QP(0x%x)type!\n", ibqp->qp_type);
+-		*bad_wr = NULL;
++		*bad_wr = wr;
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+@@ -456,6 +461,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ 		} else {
+ 			dev_err(dev, "Illegal qp_type(0x%x)\n", ibqp->qp_type);
+ 			spin_unlock_irqrestore(&qp->sq.lock, flags);
++			*bad_wr = wr;
+ 			return -EOPNOTSUPP;
+ 		}
+ 	}
+@@ -3161,7 +3167,8 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ 		   (cur_state == IB_QPS_RTR && new_state == IB_QPS_ERR) ||
+ 		   (cur_state == IB_QPS_RTS && new_state == IB_QPS_ERR) ||
+ 		   (cur_state == IB_QPS_SQD && new_state == IB_QPS_ERR) ||
+-		   (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR)) {
++		   (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR) ||
++		   (cur_state == IB_QPS_ERR && new_state == IB_QPS_ERR)) {
+ 		/* Nothing */
+ 		;
+ 	} else {
+diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
+index 4975f3e6596e..32fafa7700e3 100644
+--- a/drivers/infiniband/hw/mlx4/mr.c
++++ b/drivers/infiniband/hw/mlx4/mr.c
+@@ -346,7 +346,7 @@ int mlx4_ib_umem_calc_optimal_mtt_size(struct ib_umem *umem, u64 start_va,
+ 	/* Add to the first block the misalignment that it suffers from. */
+ 	total_len += (first_block_start & ((1ULL << block_shift) - 1ULL));
+ 	last_block_end = current_block_start + current_block_len;
+-	last_block_aligned_end = round_up(last_block_end, 1 << block_shift);
++	last_block_aligned_end = round_up(last_block_end, 1ULL << block_shift);
+ 	total_len += (last_block_aligned_end - last_block_end);
+ 
+ 	if (total_len & ((1ULL << block_shift) - 1ULL))
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index c14ed9cc9c9e..cf7b4bda8597 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4833,9 +4833,7 @@ static void mlx5_ib_stage_cong_debugfs_cleanup(struct mlx5_ib_dev *dev)
+ static int mlx5_ib_stage_uar_init(struct mlx5_ib_dev *dev)
+ {
+ 	dev->mdev->priv.uar = mlx5_get_uars_page(dev->mdev);
+-	if (!dev->mdev->priv.uar)
+-		return -ENOMEM;
+-	return 0;
++	return PTR_ERR_OR_ZERO(dev->mdev->priv.uar);
+ }
+ 
+ static void mlx5_ib_stage_uar_cleanup(struct mlx5_ib_dev *dev)
+diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
+index 61927c165b59..4cf11063e0b5 100644
+--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
++++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
+@@ -390,7 +390,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
+ 		.name	= "IB_OPCODE_RC_SEND_ONLY_INV",
+ 		.mask	= RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK
+ 				| RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK
+-				| RXE_END_MASK,
++				| RXE_END_MASK  | RXE_START_MASK,
+ 		.length = RXE_BTH_BYTES + RXE_IETH_BYTES,
+ 		.offset = {
+ 			[RXE_BTH]	= 0,
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 7bdaf71b8221..785199990457 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -728,7 +728,6 @@ int rxe_requester(void *arg)
+ 		rollback_state(wqe, qp, &rollback_wqe, rollback_psn);
+ 
+ 		if (ret == -EAGAIN) {
+-			kfree_skb(skb);
+ 			rxe_run_task(&qp->req.task, 1);
+ 			goto exit;
+ 		}
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index d37bb9b97569..e319bd904d30 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -742,7 +742,6 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ 	err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb);
+ 	if (err) {
+ 		pr_err("Failed sending RDMA reply.\n");
+-		kfree_skb(skb);
+ 		return RESPST_ERR_RNR;
+ 	}
+ 
+@@ -954,10 +953,8 @@ static int send_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
+ 	}
+ 
+ 	err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb);
+-	if (err) {
++	if (err)
+ 		pr_err_ratelimited("Failed sending ack\n");
+-		kfree_skb(skb);
+-	}
+ 
+ err1:
+ 	return err;
+@@ -1150,7 +1147,6 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
+ 			if (rc) {
+ 				pr_err("Failed resending result. This flow is not handled - skb ignored\n");
+ 				rxe_drop_ref(qp);
+-				kfree_skb(skb_copy);
+ 				rc = RESPST_CLEANUP;
+ 				goto out;
+ 			}
+diff --git a/drivers/infiniband/ulp/srp/Kconfig b/drivers/infiniband/ulp/srp/Kconfig
+index c74ee9633041..99db8fe5173a 100644
+--- a/drivers/infiniband/ulp/srp/Kconfig
++++ b/drivers/infiniband/ulp/srp/Kconfig
+@@ -1,6 +1,6 @@
+ config INFINIBAND_SRP
+ 	tristate "InfiniBand SCSI RDMA Protocol"
+-	depends on SCSI
++	depends on SCSI && INFINIBAND_ADDR_TRANS
+ 	select SCSI_SRP_ATTRS
+ 	---help---
+ 	  Support for the SCSI RDMA Protocol over InfiniBand.  This
+diff --git a/drivers/infiniband/ulp/srpt/Kconfig b/drivers/infiniband/ulp/srpt/Kconfig
+index 31ee83d528d9..fb8b7182f05e 100644
+--- a/drivers/infiniband/ulp/srpt/Kconfig
++++ b/drivers/infiniband/ulp/srpt/Kconfig
+@@ -1,6 +1,6 @@
+ config INFINIBAND_SRPT
+ 	tristate "InfiniBand SCSI RDMA Protocol target support"
+-	depends on INFINIBAND && TARGET_CORE
++	depends on INFINIBAND && INFINIBAND_ADDR_TRANS && TARGET_CORE
+ 	---help---
+ 
+ 	  Support for the SCSI RDMA Protocol (SRP) Target driver. The
+diff --git a/drivers/input/rmi4/rmi_spi.c b/drivers/input/rmi4/rmi_spi.c
+index 76edbf2c1bce..082defc329a8 100644
+--- a/drivers/input/rmi4/rmi_spi.c
++++ b/drivers/input/rmi4/rmi_spi.c
+@@ -147,8 +147,11 @@ static int rmi_spi_xfer(struct rmi_spi_xport *rmi_spi,
+ 	if (len > RMI_SPI_XFER_SIZE_LIMIT)
+ 		return -EINVAL;
+ 
+-	if (rmi_spi->xfer_buf_size < len)
+-		rmi_spi_manage_pools(rmi_spi, len);
++	if (rmi_spi->xfer_buf_size < len) {
++		ret = rmi_spi_manage_pools(rmi_spi, len);
++		if (ret < 0)
++			return ret;
++	}
+ 
+ 	if (addr == 0)
+ 		/*
+diff --git a/drivers/input/touchscreen/atmel_mxt_ts.c b/drivers/input/touchscreen/atmel_mxt_ts.c
+index 429b694405c7..fc149ea64be7 100644
+--- a/drivers/input/touchscreen/atmel_mxt_ts.c
++++ b/drivers/input/touchscreen/atmel_mxt_ts.c
+@@ -275,7 +275,8 @@ struct mxt_data {
+ 	char phys[64];		/* device physical location */
+ 	const struct mxt_platform_data *pdata;
+ 	struct mxt_object *object_table;
+-	struct mxt_info info;
++	struct mxt_info *info;
++	void *raw_info_block;
+ 	unsigned int irq;
+ 	unsigned int max_x;
+ 	unsigned int max_y;
+@@ -450,12 +451,13 @@ static int mxt_lookup_bootloader_address(struct mxt_data *data, bool retry)
+ {
+ 	u8 appmode = data->client->addr;
+ 	u8 bootloader;
++	u8 family_id = data->info ? data->info->family_id : 0;
+ 
+ 	switch (appmode) {
+ 	case 0x4a:
+ 	case 0x4b:
+ 		/* Chips after 1664S use different scheme */
+-		if (retry || data->info.family_id >= 0xa2) {
++		if (retry || family_id >= 0xa2) {
+ 			bootloader = appmode - 0x24;
+ 			break;
+ 		}
+@@ -682,7 +684,7 @@ mxt_get_object(struct mxt_data *data, u8 type)
+ 	struct mxt_object *object;
+ 	int i;
+ 
+-	for (i = 0; i < data->info.object_num; i++) {
++	for (i = 0; i < data->info->object_num; i++) {
+ 		object = data->object_table + i;
+ 		if (object->type == type)
+ 			return object;
+@@ -1453,12 +1455,12 @@ static int mxt_update_cfg(struct mxt_data *data, const struct firmware *cfg)
+ 		data_pos += offset;
+ 	}
+ 
+-	if (cfg_info.family_id != data->info.family_id) {
++	if (cfg_info.family_id != data->info->family_id) {
+ 		dev_err(dev, "Family ID mismatch!\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	if (cfg_info.variant_id != data->info.variant_id) {
++	if (cfg_info.variant_id != data->info->variant_id) {
+ 		dev_err(dev, "Variant ID mismatch!\n");
+ 		return -EINVAL;
+ 	}
+@@ -1503,7 +1505,7 @@ static int mxt_update_cfg(struct mxt_data *data, const struct firmware *cfg)
+ 
+ 	/* Malloc memory to store configuration */
+ 	cfg_start_ofs = MXT_OBJECT_START +
+-			data->info.object_num * sizeof(struct mxt_object) +
++			data->info->object_num * sizeof(struct mxt_object) +
+ 			MXT_INFO_CHECKSUM_SIZE;
+ 	config_mem_size = data->mem_size - cfg_start_ofs;
+ 	config_mem = kzalloc(config_mem_size, GFP_KERNEL);
+@@ -1554,20 +1556,6 @@ static int mxt_update_cfg(struct mxt_data *data, const struct firmware *cfg)
+ 	return ret;
+ }
+ 
+-static int mxt_get_info(struct mxt_data *data)
+-{
+-	struct i2c_client *client = data->client;
+-	struct mxt_info *info = &data->info;
+-	int error;
+-
+-	/* Read 7-byte info block starting at address 0 */
+-	error = __mxt_read_reg(client, 0, sizeof(*info), info);
+-	if (error)
+-		return error;
+-
+-	return 0;
+-}
+-
+ static void mxt_free_input_device(struct mxt_data *data)
+ {
+ 	if (data->input_dev) {
+@@ -1582,9 +1570,10 @@ static void mxt_free_object_table(struct mxt_data *data)
+ 	video_unregister_device(&data->dbg.vdev);
+ 	v4l2_device_unregister(&data->dbg.v4l2);
+ #endif
+-
+-	kfree(data->object_table);
+ 	data->object_table = NULL;
++	data->info = NULL;
++	kfree(data->raw_info_block);
++	data->raw_info_block = NULL;
+ 	kfree(data->msg_buf);
+ 	data->msg_buf = NULL;
+ 	data->T5_address = 0;
+@@ -1600,34 +1589,18 @@ static void mxt_free_object_table(struct mxt_data *data)
+ 	data->max_reportid = 0;
+ }
+ 
+-static int mxt_get_object_table(struct mxt_data *data)
++static int mxt_parse_object_table(struct mxt_data *data,
++				  struct mxt_object *object_table)
+ {
+ 	struct i2c_client *client = data->client;
+-	size_t table_size;
+-	struct mxt_object *object_table;
+-	int error;
+ 	int i;
+ 	u8 reportid;
+ 	u16 end_address;
+ 
+-	table_size = data->info.object_num * sizeof(struct mxt_object);
+-	object_table = kzalloc(table_size, GFP_KERNEL);
+-	if (!object_table) {
+-		dev_err(&data->client->dev, "Failed to allocate memory\n");
+-		return -ENOMEM;
+-	}
+-
+-	error = __mxt_read_reg(client, MXT_OBJECT_START, table_size,
+-			object_table);
+-	if (error) {
+-		kfree(object_table);
+-		return error;
+-	}
+-
+ 	/* Valid Report IDs start counting from 1 */
+ 	reportid = 1;
+ 	data->mem_size = 0;
+-	for (i = 0; i < data->info.object_num; i++) {
++	for (i = 0; i < data->info->object_num; i++) {
+ 		struct mxt_object *object = object_table + i;
+ 		u8 min_id, max_id;
+ 
+@@ -1651,8 +1624,8 @@ static int mxt_get_object_table(struct mxt_data *data)
+ 
+ 		switch (object->type) {
+ 		case MXT_GEN_MESSAGE_T5:
+-			if (data->info.family_id == 0x80 &&
+-			    data->info.version < 0x20) {
++			if (data->info->family_id == 0x80 &&
++			    data->info->version < 0x20) {
+ 				/*
+ 				 * On mXT224 firmware versions prior to V2.0
+ 				 * read and discard unused CRC byte otherwise
+@@ -1707,24 +1680,102 @@ static int mxt_get_object_table(struct mxt_data *data)
+ 	/* If T44 exists, T5 position has to be directly after */
+ 	if (data->T44_address && (data->T5_address != data->T44_address + 1)) {
+ 		dev_err(&client->dev, "Invalid T44 position\n");
+-		error = -EINVAL;
+-		goto free_object_table;
++		return -EINVAL;
+ 	}
+ 
+ 	data->msg_buf = kcalloc(data->max_reportid,
+ 				data->T5_msg_size, GFP_KERNEL);
+-	if (!data->msg_buf) {
+-		dev_err(&client->dev, "Failed to allocate message buffer\n");
++	if (!data->msg_buf)
++		return -ENOMEM;
++
++	return 0;
++}
++
++static int mxt_read_info_block(struct mxt_data *data)
++{
++	struct i2c_client *client = data->client;
++	int error;
++	size_t size;
++	void *id_buf, *buf;
++	uint8_t num_objects;
++	u32 calculated_crc;
++	u8 *crc_ptr;
++
++	/* If info block already allocated, free it */
++	if (data->raw_info_block)
++		mxt_free_object_table(data);
++
++	/* Read 7-byte ID information block starting at address 0 */
++	size = sizeof(struct mxt_info);
++	id_buf = kzalloc(size, GFP_KERNEL);
++	if (!id_buf)
++		return -ENOMEM;
++
++	error = __mxt_read_reg(client, 0, size, id_buf);
++	if (error)
++		goto err_free_mem;
++
++	/* Resize buffer to give space for rest of info block */
++	num_objects = ((struct mxt_info *)id_buf)->object_num;
++	size += (num_objects * sizeof(struct mxt_object))
++		+ MXT_INFO_CHECKSUM_SIZE;
++
++	buf = krealloc(id_buf, size, GFP_KERNEL);
++	if (!buf) {
+ 		error = -ENOMEM;
+-		goto free_object_table;
++		goto err_free_mem;
++	}
++	id_buf = buf;
++
++	/* Read rest of info block */
++	error = __mxt_read_reg(client, MXT_OBJECT_START,
++			       size - MXT_OBJECT_START,
++			       id_buf + MXT_OBJECT_START);
++	if (error)
++		goto err_free_mem;
++
++	/* Extract & calculate checksum */
++	crc_ptr = id_buf + size - MXT_INFO_CHECKSUM_SIZE;
++	data->info_crc = crc_ptr[0] | (crc_ptr[1] << 8) | (crc_ptr[2] << 16);
++
++	calculated_crc = mxt_calculate_crc(id_buf, 0,
++					   size - MXT_INFO_CHECKSUM_SIZE);
++
++	/*
++	 * CRC mismatch can be caused by data corruption due to I2C comms
++	 * issue or else device is not using Object Based Protocol (eg i2c-hid)
++	 */
++	if ((data->info_crc == 0) || (data->info_crc != calculated_crc)) {
++		dev_err(&client->dev,
++			"Info Block CRC error calculated=0x%06X read=0x%06X\n",
++			calculated_crc, data->info_crc);
++		error = -EIO;
++		goto err_free_mem;
++	}
++
++	data->raw_info_block = id_buf;
++	data->info = (struct mxt_info *)id_buf;
++
++	dev_info(&client->dev,
++		 "Family: %u Variant: %u Firmware V%u.%u.%02X Objects: %u\n",
++		 data->info->family_id, data->info->variant_id,
++		 data->info->version >> 4, data->info->version & 0xf,
++		 data->info->build, data->info->object_num);
++
++	/* Parse object table information */
++	error = mxt_parse_object_table(data, id_buf + MXT_OBJECT_START);
++	if (error) {
++		dev_err(&client->dev, "Error %d parsing object table\n", error);
++		mxt_free_object_table(data);
++		goto err_free_mem;
+ 	}
+ 
+-	data->object_table = object_table;
++	data->object_table = (struct mxt_object *)(id_buf + MXT_OBJECT_START);
+ 
+ 	return 0;
+ 
+-free_object_table:
+-	mxt_free_object_table(data);
++err_free_mem:
++	kfree(id_buf);
+ 	return error;
+ }
+ 
+@@ -2039,7 +2090,7 @@ static int mxt_initialize(struct mxt_data *data)
+ 	int error;
+ 
+ 	while (1) {
+-		error = mxt_get_info(data);
++		error = mxt_read_info_block(data);
+ 		if (!error)
+ 			break;
+ 
+@@ -2070,16 +2121,9 @@ static int mxt_initialize(struct mxt_data *data)
+ 		msleep(MXT_FW_RESET_TIME);
+ 	}
+ 
+-	/* Get object table information */
+-	error = mxt_get_object_table(data);
+-	if (error) {
+-		dev_err(&client->dev, "Error %d reading object table\n", error);
+-		return error;
+-	}
+-
+ 	error = mxt_acquire_irq(data);
+ 	if (error)
+-		goto err_free_object_table;
++		return error;
+ 
+ 	error = request_firmware_nowait(THIS_MODULE, true, MXT_CFG_NAME,
+ 					&client->dev, GFP_KERNEL, data,
+@@ -2087,14 +2131,10 @@ static int mxt_initialize(struct mxt_data *data)
+ 	if (error) {
+ 		dev_err(&client->dev, "Failed to invoke firmware loader: %d\n",
+ 			error);
+-		goto err_free_object_table;
++		return error;
+ 	}
+ 
+ 	return 0;
+-
+-err_free_object_table:
+-	mxt_free_object_table(data);
+-	return error;
+ }
+ 
+ static int mxt_set_t7_power_cfg(struct mxt_data *data, u8 sleep)
+@@ -2155,7 +2195,7 @@ static int mxt_init_t7_power_cfg(struct mxt_data *data)
+ static u16 mxt_get_debug_value(struct mxt_data *data, unsigned int x,
+ 			       unsigned int y)
+ {
+-	struct mxt_info *info = &data->info;
++	struct mxt_info *info = data->info;
+ 	struct mxt_dbg *dbg = &data->dbg;
+ 	unsigned int ofs, page;
+ 	unsigned int col = 0;
+@@ -2483,7 +2523,7 @@ static const struct video_device mxt_video_device = {
+ 
+ static void mxt_debug_init(struct mxt_data *data)
+ {
+-	struct mxt_info *info = &data->info;
++	struct mxt_info *info = data->info;
+ 	struct mxt_dbg *dbg = &data->dbg;
+ 	struct mxt_object *object;
+ 	int error;
+@@ -2569,7 +2609,6 @@ static int mxt_configure_objects(struct mxt_data *data,
+ 				 const struct firmware *cfg)
+ {
+ 	struct device *dev = &data->client->dev;
+-	struct mxt_info *info = &data->info;
+ 	int error;
+ 
+ 	error = mxt_init_t7_power_cfg(data);
+@@ -2594,11 +2633,6 @@ static int mxt_configure_objects(struct mxt_data *data,
+ 
+ 	mxt_debug_init(data);
+ 
+-	dev_info(dev,
+-		 "Family: %u Variant: %u Firmware V%u.%u.%02X Objects: %u\n",
+-		 info->family_id, info->variant_id, info->version >> 4,
+-		 info->version & 0xf, info->build, info->object_num);
+-
+ 	return 0;
+ }
+ 
+@@ -2607,7 +2641,7 @@ static ssize_t mxt_fw_version_show(struct device *dev,
+ 				   struct device_attribute *attr, char *buf)
+ {
+ 	struct mxt_data *data = dev_get_drvdata(dev);
+-	struct mxt_info *info = &data->info;
++	struct mxt_info *info = data->info;
+ 	return scnprintf(buf, PAGE_SIZE, "%u.%u.%02X\n",
+ 			 info->version >> 4, info->version & 0xf, info->build);
+ }
+@@ -2617,7 +2651,7 @@ static ssize_t mxt_hw_version_show(struct device *dev,
+ 				   struct device_attribute *attr, char *buf)
+ {
+ 	struct mxt_data *data = dev_get_drvdata(dev);
+-	struct mxt_info *info = &data->info;
++	struct mxt_info *info = data->info;
+ 	return scnprintf(buf, PAGE_SIZE, "%u.%u\n",
+ 			 info->family_id, info->variant_id);
+ }
+@@ -2656,7 +2690,7 @@ static ssize_t mxt_object_show(struct device *dev,
+ 		return -ENOMEM;
+ 
+ 	error = 0;
+-	for (i = 0; i < data->info.object_num; i++) {
++	for (i = 0; i < data->info->object_num; i++) {
+ 		object = data->object_table + i;
+ 
+ 		if (!mxt_object_readable(object->type))
+diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
+index 9a7ffd13c7f0..4e3e3d2f51c8 100644
+--- a/drivers/iommu/dmar.c
++++ b/drivers/iommu/dmar.c
+@@ -1345,7 +1345,7 @@ void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+ 	struct qi_desc desc;
+ 
+ 	if (mask) {
+-		BUG_ON(addr & ((1 << (VTD_PAGE_SHIFT + mask)) - 1));
++		BUG_ON(addr & ((1ULL << (VTD_PAGE_SHIFT + mask)) - 1));
+ 		addr |= (1ULL << (VTD_PAGE_SHIFT + mask - 1)) - 1;
+ 		desc.high = QI_DEV_IOTLB_ADDR(addr) | QI_DEV_IOTLB_SIZE;
+ 	} else
+diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
+index 66f69af2c219..3062a154a9fb 100644
+--- a/drivers/iommu/intel_irq_remapping.c
++++ b/drivers/iommu/intel_irq_remapping.c
+@@ -1136,7 +1136,7 @@ static void intel_ir_reconfigure_irte(struct irq_data *irqd, bool force)
+ 	irte->dest_id = IRTE_DEST(cfg->dest_apicid);
+ 
+ 	/* Update the hardware only if the interrupt is in remapped mode. */
+-	if (!force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING)
++	if (force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING)
+ 		modify_irte(&ir_data->irq_2_iommu, irte);
+ }
+ 
+diff --git a/drivers/mtd/onenand/omap2.c b/drivers/mtd/onenand/omap2.c
+index 87c34f607a75..f47678be6383 100644
+--- a/drivers/mtd/onenand/omap2.c
++++ b/drivers/mtd/onenand/omap2.c
+@@ -377,56 +377,42 @@ static int omap2_onenand_read_bufferram(struct mtd_info *mtd, int area,
+ {
+ 	struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
+ 	struct onenand_chip *this = mtd->priv;
+-	dma_addr_t dma_src, dma_dst;
+-	int bram_offset;
++	struct device *dev = &c->pdev->dev;
+ 	void *buf = (void *)buffer;
++	dma_addr_t dma_src, dma_dst;
++	int bram_offset, err;
+ 	size_t xtra;
+-	int ret;
+ 
+ 	bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
+-	if (bram_offset & 3 || (size_t)buf & 3 || count < 384)
+-		goto out_copy;
+-
+-	/* panic_write() may be in an interrupt context */
+-	if (in_interrupt() || oops_in_progress)
++	/*
++	 * If the buffer address is not DMA-able, len is not long enough to make
++	 * DMA transfers profitable or panic_write() may be in an interrupt
++	 * context fallback to PIO mode.
++	 */
++	if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 ||
++	    count < 384 || in_interrupt() || oops_in_progress )
+ 		goto out_copy;
+ 
+-	if (buf >= high_memory) {
+-		struct page *p1;
+-
+-		if (((size_t)buf & PAGE_MASK) !=
+-		    ((size_t)(buf + count - 1) & PAGE_MASK))
+-			goto out_copy;
+-		p1 = vmalloc_to_page(buf);
+-		if (!p1)
+-			goto out_copy;
+-		buf = page_address(p1) + ((size_t)buf & ~PAGE_MASK);
+-	}
+-
+ 	xtra = count & 3;
+ 	if (xtra) {
+ 		count -= xtra;
+ 		memcpy(buf + count, this->base + bram_offset + count, xtra);
+ 	}
+ 
++	dma_dst = dma_map_single(dev, buf, count, DMA_FROM_DEVICE);
+ 	dma_src = c->phys_base + bram_offset;
+-	dma_dst = dma_map_single(&c->pdev->dev, buf, count, DMA_FROM_DEVICE);
+-	if (dma_mapping_error(&c->pdev->dev, dma_dst)) {
+-		dev_err(&c->pdev->dev,
+-			"Couldn't DMA map a %d byte buffer\n",
+-			count);
+-		goto out_copy;
+-	}
+ 
+-	ret = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);
+-	dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_FROM_DEVICE);
+-
+-	if (ret) {
+-		dev_err(&c->pdev->dev, "timeout waiting for DMA\n");
++	if (dma_mapping_error(dev, dma_dst)) {
++		dev_err(dev, "Couldn't DMA map a %d byte buffer\n", count);
+ 		goto out_copy;
+ 	}
+ 
+-	return 0;
++	err = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);
++	dma_unmap_single(dev, dma_dst, count, DMA_FROM_DEVICE);
++	if (!err)
++		return 0;
++
++	dev_err(dev, "timeout waiting for DMA\n");
+ 
+ out_copy:
+ 	memcpy(buf, this->base + bram_offset, count);
+@@ -439,49 +425,34 @@ static int omap2_onenand_write_bufferram(struct mtd_info *mtd, int area,
+ {
+ 	struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
+ 	struct onenand_chip *this = mtd->priv;
+-	dma_addr_t dma_src, dma_dst;
+-	int bram_offset;
++	struct device *dev = &c->pdev->dev;
+ 	void *buf = (void *)buffer;
+-	int ret;
++	dma_addr_t dma_src, dma_dst;
++	int bram_offset, err;
+ 
+ 	bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
+-	if (bram_offset & 3 || (size_t)buf & 3 || count < 384)
+-		goto out_copy;
+-
+-	/* panic_write() may be in an interrupt context */
+-	if (in_interrupt() || oops_in_progress)
++	/*
++	 * If the buffer address is not DMA-able, len is not long enough to make
++	 * DMA transfers profitable or panic_write() may be in an interrupt
++	 * context fallback to PIO mode.
++	 */
++	if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 ||
++	    count < 384 || in_interrupt() || oops_in_progress )
+ 		goto out_copy;
+ 
+-	if (buf >= high_memory) {
+-		struct page *p1;
+-
+-		if (((size_t)buf & PAGE_MASK) !=
+-		    ((size_t)(buf + count - 1) & PAGE_MASK))
+-			goto out_copy;
+-		p1 = vmalloc_to_page(buf);
+-		if (!p1)
+-			goto out_copy;
+-		buf = page_address(p1) + ((size_t)buf & ~PAGE_MASK);
+-	}
+-
+-	dma_src = dma_map_single(&c->pdev->dev, buf, count, DMA_TO_DEVICE);
++	dma_src = dma_map_single(dev, buf, count, DMA_TO_DEVICE);
+ 	dma_dst = c->phys_base + bram_offset;
+-	if (dma_mapping_error(&c->pdev->dev, dma_src)) {
+-		dev_err(&c->pdev->dev,
+-			"Couldn't DMA map a %d byte buffer\n",
+-			count);
+-		return -1;
+-	}
+-
+-	ret = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);
+-	dma_unmap_single(&c->pdev->dev, dma_src, count, DMA_TO_DEVICE);
+-
+-	if (ret) {
+-		dev_err(&c->pdev->dev, "timeout waiting for DMA\n");
++	if (dma_mapping_error(dev, dma_src)) {
++		dev_err(dev, "Couldn't DMA map a %d byte buffer\n", count);
+ 		goto out_copy;
+ 	}
+ 
+-	return 0;
++	err = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);
++	dma_unmap_page(dev, dma_src, count, DMA_TO_DEVICE);
++	if (!err)
++		return 0;
++
++	dev_err(dev, "timeout waiting for DMA\n");
+ 
+ out_copy:
+ 	memcpy(this->base + bram_offset, buf, count);
+diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
+index b1779566c5bb..3c71f1cb205f 100644
+--- a/drivers/net/can/dev.c
++++ b/drivers/net/can/dev.c
+@@ -605,7 +605,7 @@ void can_bus_off(struct net_device *dev)
+ {
+ 	struct can_priv *priv = netdev_priv(dev);
+ 
+-	netdev_dbg(dev, "bus-off\n");
++	netdev_info(dev, "bus-off\n");
+ 
+ 	netif_carrier_off(dev);
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index 32f6d2e24d66..1a1a6380c128 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -95,6 +95,7 @@ void aq_nic_cfg_start(struct aq_nic_s *self)
+ 	/*rss rings */
+ 	cfg->vecs = min(cfg->aq_hw_caps->vecs, AQ_CFG_VECS_DEF);
+ 	cfg->vecs = min(cfg->vecs, num_online_cpus());
++	cfg->vecs = min(cfg->vecs, self->irqvecs);
+ 	/* cfg->vecs should be power of 2 for RSS */
+ 	if (cfg->vecs >= 8U)
+ 		cfg->vecs = 8U;
+@@ -246,6 +247,8 @@ void aq_nic_ndev_init(struct aq_nic_s *self)
+ 
+ 	self->ndev->hw_features |= aq_hw_caps->hw_features;
+ 	self->ndev->features = aq_hw_caps->hw_features;
++	self->ndev->vlan_features |= NETIF_F_HW_CSUM | NETIF_F_RXCSUM |
++				     NETIF_F_RXHASH | NETIF_F_SG | NETIF_F_LRO;
+ 	self->ndev->priv_flags = aq_hw_caps->hw_priv_flags;
+ 	self->ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
+index 219b550d1665..faa533a0ec47 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
+@@ -80,6 +80,7 @@ struct aq_nic_s {
+ 
+ 	struct pci_dev *pdev;
+ 	unsigned int msix_entry_mask;
++	u32 irqvecs;
+ };
+ 
+ static inline struct device *aq_nic_get_dev(struct aq_nic_s *self)
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index ecc6306f940f..750007513f9d 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -267,16 +267,15 @@ static int aq_pci_probe(struct pci_dev *pdev,
+ 	numvecs = min(numvecs, num_online_cpus());
+ 	/*enable interrupts */
+ #if !AQ_CFG_FORCE_LEGACY_INT
+-	err = pci_alloc_irq_vectors(self->pdev, numvecs, numvecs,
+-				    PCI_IRQ_MSIX);
+-
+-	if (err < 0) {
+-		err = pci_alloc_irq_vectors(self->pdev, 1, 1,
+-					    PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+-		if (err < 0)
+-			goto err_hwinit;
+-	}
++	err = pci_alloc_irq_vectors(self->pdev, 1, numvecs,
++				    PCI_IRQ_MSIX | PCI_IRQ_MSI |
++				    PCI_IRQ_LEGACY);
++
++	if (err < 0)
++		goto err_hwinit;
++	numvecs = err;
+ #endif
++	self->irqvecs = numvecs;
+ 
+ 	/* net device init */
+ 	aq_nic_cfg_start(self);
+@@ -298,9 +297,9 @@ static int aq_pci_probe(struct pci_dev *pdev,
+ 	kfree(self->aq_hw);
+ err_ioremap:
+ 	free_netdev(ndev);
+-err_pci_func:
+-	pci_release_regions(pdev);
+ err_ndev:
++	pci_release_regions(pdev);
++err_pci_func:
+ 	pci_disable_device(pdev);
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 57dcb957f27c..e95fb6b43187 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -5191,6 +5191,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	}
+ 	spin_lock_init(&adapter->mbox_lock);
+ 	INIT_LIST_HEAD(&adapter->mlist.list);
++	adapter->mbox_log->size = T4_OS_LOG_MBOX_CMDS;
+ 	pci_set_drvdata(pdev, adapter);
+ 
+ 	if (func != ent->driver_data) {
+@@ -5225,8 +5226,6 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		goto out_free_adapter;
+ 	}
+ 
+-	adapter->mbox_log->size = T4_OS_LOG_MBOX_CMDS;
+-
+ 	/* PCI device has been enabled */
+ 	adapter->flags |= DEV_ENABLED;
+ 	memset(adapter->chan_map, 0xff, sizeof(adapter->chan_map));
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h
+index 3e62692af011..fa5b30f547f6 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
+@@ -87,7 +87,7 @@ do { \
+ 
+ #define HNAE_AE_REGISTER 0x1
+ 
+-#define RCB_RING_NAME_LEN 16
++#define RCB_RING_NAME_LEN (IFNAMSIZ + 4)
+ 
+ #define HNAE_LOWEST_LATENCY_COAL_PARAM	30
+ #define HNAE_LOW_LATENCY_COAL_PARAM	80
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index fd8e6937ee00..cd6d08399970 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1711,7 +1711,8 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 	for (i = 0; i < adapter->req_rx_queues; i++)
+ 		napi_schedule(&adapter->napi[i]);
+ 
+-	if (adapter->reset_reason != VNIC_RESET_FAILOVER)
++	if (adapter->reset_reason != VNIC_RESET_FAILOVER &&
++	    adapter->reset_reason != VNIC_RESET_CHANGE_PARAM)
+ 		netdev_notify_peers(netdev);
+ 
+ 	netif_carrier_on(netdev);
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index b88fae785369..33a052174c0f 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -1698,7 +1698,22 @@ static void igb_configure_cbs(struct igb_adapter *adapter, int queue,
+ 	WARN_ON(hw->mac.type != e1000_i210);
+ 	WARN_ON(queue < 0 || queue > 1);
+ 
+-	if (enable) {
++	if (enable || queue == 0) {
++		/* i210 does not allow the queue 0 to be in the Strict
++		 * Priority mode while the Qav mode is enabled, so,
++		 * instead of disabling strict priority mode, we give
++		 * queue 0 the maximum of credits possible.
++		 *
++		 * See section 8.12.19 of the i210 datasheet, "Note:
++		 * Queue0 QueueMode must be set to 1b when
++		 * TransmitMode is set to Qav."
++		 */
++		if (queue == 0 && !enable) {
++			/* max "linkspeed" idleslope in kbps */
++			idleslope = 1000000;
++			hicredit = ETH_FRAME_LEN;
++		}
++
+ 		set_tx_desc_fetch_prio(hw, queue, TX_QUEUE_PRIO_HIGH);
+ 		set_queue_mode(hw, queue, QUEUE_MODE_STREAM_RESERVATION);
+ 
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index 93eacddb6704..336562a0685d 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -918,8 +918,8 @@ void ixgbe_init_ipsec_offload(struct ixgbe_adapter *adapter)
+ 	kfree(ipsec->ip_tbl);
+ 	kfree(ipsec->rx_tbl);
+ 	kfree(ipsec->tx_tbl);
++	kfree(ipsec);
+ err1:
+-	kfree(adapter->ipsec);
+ 	netdev_err(adapter->netdev, "Unable to allocate memory for SA tables");
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+index f470d0204771..14e3a801390b 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+@@ -3427,6 +3427,9 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
+ 		hw->phy.sfp_setup_needed = false;
+ 	}
+ 
++	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED)
++		return status;
++
+ 	/* Reset PHY */
+ 	if (!hw->phy.reset_disable && hw->phy.ops.reset)
+ 		hw->phy.ops.reset(hw);
+diff --git a/drivers/net/ethernet/marvell/mvpp2.c b/drivers/net/ethernet/marvell/mvpp2.c
+index 7f1083ce23da..7f5b9b6bf007 100644
+--- a/drivers/net/ethernet/marvell/mvpp2.c
++++ b/drivers/net/ethernet/marvell/mvpp2.c
+@@ -8332,12 +8332,12 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 		if (IS_ERR(priv->axi_clk)) {
+ 			err = PTR_ERR(priv->axi_clk);
+ 			if (err == -EPROBE_DEFER)
+-				goto err_gop_clk;
++				goto err_mg_clk;
+ 			priv->axi_clk = NULL;
+ 		} else {
+ 			err = clk_prepare_enable(priv->axi_clk);
+ 			if (err < 0)
+-				goto err_gop_clk;
++				goto err_mg_clk;
+ 		}
+ 
+ 		/* Get system's tclk rate */
+@@ -8351,7 +8351,7 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 	if (priv->hw_version == MVPP22) {
+ 		err = dma_set_mask(&pdev->dev, MVPP2_DESC_DMA_MASK);
+ 		if (err)
+-			goto err_mg_clk;
++			goto err_axi_clk;
+ 		/* Sadly, the BM pools all share the same register to
+ 		 * store the high 32 bits of their address. So they
+ 		 * must all have the same high 32 bits, which forces
+@@ -8359,14 +8359,14 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 		 */
+ 		err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+ 		if (err)
+-			goto err_mg_clk;
++			goto err_axi_clk;
+ 	}
+ 
+ 	/* Initialize network controller */
+ 	err = mvpp2_init(pdev, priv);
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "failed to initialize controller\n");
+-		goto err_mg_clk;
++		goto err_axi_clk;
+ 	}
+ 
+ 	/* Initialize ports */
+@@ -8379,7 +8379,7 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 	if (priv->port_count == 0) {
+ 		dev_err(&pdev->dev, "no ports enabled\n");
+ 		err = -ENODEV;
+-		goto err_mg_clk;
++		goto err_axi_clk;
+ 	}
+ 
+ 	/* Statistics must be gathered regularly because some of them (like
+@@ -8407,8 +8407,9 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 			mvpp2_port_remove(priv->port_list[i]);
+ 		i++;
+ 	}
+-err_mg_clk:
++err_axi_clk:
+ 	clk_disable_unprepare(priv->axi_clk);
++err_mg_clk:
+ 	if (priv->hw_version == MVPP22)
+ 		clk_disable_unprepare(priv->mg_clk);
+ err_gop_clk:
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.c b/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
+index baaea6f1a9d8..6409957e1657 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
+@@ -242,18 +242,49 @@ nfp_flower_cmsg_process_one_rx(struct nfp_app *app, struct sk_buff *skb)
+ 
+ void nfp_flower_cmsg_process_rx(struct work_struct *work)
+ {
++	struct sk_buff_head cmsg_joined;
+ 	struct nfp_flower_priv *priv;
+ 	struct sk_buff *skb;
+ 
+ 	priv = container_of(work, struct nfp_flower_priv, cmsg_work);
++	skb_queue_head_init(&cmsg_joined);
+ 
+-	while ((skb = skb_dequeue(&priv->cmsg_skbs)))
++	spin_lock_bh(&priv->cmsg_skbs_high.lock);
++	skb_queue_splice_tail_init(&priv->cmsg_skbs_high, &cmsg_joined);
++	spin_unlock_bh(&priv->cmsg_skbs_high.lock);
++
++	spin_lock_bh(&priv->cmsg_skbs_low.lock);
++	skb_queue_splice_tail_init(&priv->cmsg_skbs_low, &cmsg_joined);
++	spin_unlock_bh(&priv->cmsg_skbs_low.lock);
++
++	while ((skb = __skb_dequeue(&cmsg_joined)))
+ 		nfp_flower_cmsg_process_one_rx(priv->app, skb);
+ }
+ 
+-void nfp_flower_cmsg_rx(struct nfp_app *app, struct sk_buff *skb)
++static void
++nfp_flower_queue_ctl_msg(struct nfp_app *app, struct sk_buff *skb, int type)
+ {
+ 	struct nfp_flower_priv *priv = app->priv;
++	struct sk_buff_head *skb_head;
++
++	if (type == NFP_FLOWER_CMSG_TYPE_PORT_REIFY ||
++	    type == NFP_FLOWER_CMSG_TYPE_PORT_MOD)
++		skb_head = &priv->cmsg_skbs_high;
++	else
++		skb_head = &priv->cmsg_skbs_low;
++
++	if (skb_queue_len(skb_head) >= NFP_FLOWER_WORKQ_MAX_SKBS) {
++		nfp_flower_cmsg_warn(app, "Dropping queued control messages\n");
++		dev_kfree_skb_any(skb);
++		return;
++	}
++
++	skb_queue_tail(skb_head, skb);
++	schedule_work(&priv->cmsg_work);
++}
++
++void nfp_flower_cmsg_rx(struct nfp_app *app, struct sk_buff *skb)
++{
+ 	struct nfp_flower_cmsg_hdr *cmsg_hdr;
+ 
+ 	cmsg_hdr = nfp_flower_cmsg_get_hdr(skb);
+@@ -270,7 +301,6 @@ void nfp_flower_cmsg_rx(struct nfp_app *app, struct sk_buff *skb)
+ 		nfp_flower_rx_flow_stats(app, skb);
+ 		dev_consume_skb_any(skb);
+ 	} else {
+-		skb_queue_tail(&priv->cmsg_skbs, skb);
+-		schedule_work(&priv->cmsg_work);
++		nfp_flower_queue_ctl_msg(app, skb, cmsg_hdr->type);
+ 	}
+ }
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+index 329a9b6d453a..343f9117fb57 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
++++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+@@ -98,6 +98,8 @@
+ #define NFP_FL_IPV4_TUNNEL_TYPE		GENMASK(7, 4)
+ #define NFP_FL_IPV4_PRE_TUN_INDEX	GENMASK(2, 0)
+ 
++#define NFP_FLOWER_WORKQ_MAX_SKBS	30000
++
+ #define nfp_flower_cmsg_warn(app, fmt, args...)                         \
+ 	do {                                                            \
+ 		if (net_ratelimit())                                    \
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.c b/drivers/net/ethernet/netronome/nfp/flower/main.c
+index 742d6f1575b5..646fc97f1f0b 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/main.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/main.c
+@@ -358,7 +358,7 @@ nfp_flower_spawn_phy_reprs(struct nfp_app *app, struct nfp_flower_priv *priv)
+ 		}
+ 
+ 		SET_NETDEV_DEV(repr, &priv->nn->pdev->dev);
+-		nfp_net_get_mac_addr(app->pf, port);
++		nfp_net_get_mac_addr(app->pf, repr, port);
+ 
+ 		cmsg_port_id = nfp_flower_cmsg_phys_port(phys_port);
+ 		err = nfp_repr_init(app, repr,
+@@ -517,7 +517,8 @@ static int nfp_flower_init(struct nfp_app *app)
+ 
+ 	app->priv = app_priv;
+ 	app_priv->app = app;
+-	skb_queue_head_init(&app_priv->cmsg_skbs);
++	skb_queue_head_init(&app_priv->cmsg_skbs_high);
++	skb_queue_head_init(&app_priv->cmsg_skbs_low);
+ 	INIT_WORK(&app_priv->cmsg_work, nfp_flower_cmsg_process_rx);
+ 	init_waitqueue_head(&app_priv->reify_wait_queue);
+ 
+@@ -544,7 +545,8 @@ static void nfp_flower_clean(struct nfp_app *app)
+ {
+ 	struct nfp_flower_priv *app_priv = app->priv;
+ 
+-	skb_queue_purge(&app_priv->cmsg_skbs);
++	skb_queue_purge(&app_priv->cmsg_skbs_high);
++	skb_queue_purge(&app_priv->cmsg_skbs_low);
+ 	flush_work(&app_priv->cmsg_work);
+ 
+ 	nfp_flower_metadata_cleanup(app);
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
+index 332ff0fdc038..1eca582c5846 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
++++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
+@@ -89,7 +89,10 @@ struct nfp_fl_stats_id {
+  * @mask_table:		Hash table used to store masks
+  * @flow_table:		Hash table used to store flower rules
+  * @cmsg_work:		Workqueue for control messages processing
+- * @cmsg_skbs:		List of skbs for control message processing
++ * @cmsg_skbs_high:	List of higher priority skbs for control message
++ *			processing
++ * @cmsg_skbs_low:	List of lower priority skbs for control message
++ *			processing
+  * @nfp_mac_off_list:	List of MAC addresses to offload
+  * @nfp_mac_index_list:	List of unique 8-bit indexes for non NFP netdevs
+  * @nfp_ipv4_off_list:	List of IPv4 addresses to offload
+@@ -117,7 +120,8 @@ struct nfp_flower_priv {
+ 	DECLARE_HASHTABLE(mask_table, NFP_FLOWER_MASK_HASH_BITS);
+ 	DECLARE_HASHTABLE(flow_table, NFP_FLOWER_HASH_BITS);
+ 	struct work_struct cmsg_work;
+-	struct sk_buff_head cmsg_skbs;
++	struct sk_buff_head cmsg_skbs_high;
++	struct sk_buff_head cmsg_skbs_low;
+ 	struct list_head nfp_mac_off_list;
+ 	struct list_head nfp_mac_index_list;
+ 	struct list_head nfp_ipv4_off_list;
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_app_nic.c b/drivers/net/ethernet/netronome/nfp/nfp_app_nic.c
+index 2a2f2fbc8850..b9618c37403f 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_app_nic.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_app_nic.c
+@@ -69,7 +69,7 @@ int nfp_app_nic_vnic_alloc(struct nfp_app *app, struct nfp_net *nn,
+ 	if (err)
+ 		return err < 0 ? err : 0;
+ 
+-	nfp_net_get_mac_addr(app->pf, nn->port);
++	nfp_net_get_mac_addr(app->pf, nn->dp.netdev, nn->port);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.h b/drivers/net/ethernet/netronome/nfp/nfp_main.h
+index add46e28212b..42211083b51f 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_main.h
++++ b/drivers/net/ethernet/netronome/nfp/nfp_main.h
+@@ -171,7 +171,9 @@ void nfp_net_pci_remove(struct nfp_pf *pf);
+ int nfp_hwmon_register(struct nfp_pf *pf);
+ void nfp_hwmon_unregister(struct nfp_pf *pf);
+ 
+-void nfp_net_get_mac_addr(struct nfp_pf *pf, struct nfp_port *port);
++void
++nfp_net_get_mac_addr(struct nfp_pf *pf, struct net_device *netdev,
++		     struct nfp_port *port);
+ 
+ bool nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb);
+ 
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
+index 15fa47f622aa..45cd2092e498 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
+@@ -67,23 +67,26 @@
+ /**
+  * nfp_net_get_mac_addr() - Get the MAC address.
+  * @pf:       NFP PF handle
++ * @netdev:   net_device to set MAC address on
+  * @port:     NFP port structure
+  *
+  * First try to get the MAC address from NSP ETH table. If that
+  * fails generate a random address.
+  */
+-void nfp_net_get_mac_addr(struct nfp_pf *pf, struct nfp_port *port)
++void
++nfp_net_get_mac_addr(struct nfp_pf *pf, struct net_device *netdev,
++		     struct nfp_port *port)
+ {
+ 	struct nfp_eth_table_port *eth_port;
+ 
+ 	eth_port = __nfp_port_get_eth_port(port);
+ 	if (!eth_port) {
+-		eth_hw_addr_random(port->netdev);
++		eth_hw_addr_random(netdev);
+ 		return;
+ 	}
+ 
+-	ether_addr_copy(port->netdev->dev_addr, eth_port->mac_addr);
+-	ether_addr_copy(port->netdev->perm_addr, eth_port->mac_addr);
++	ether_addr_copy(netdev->dev_addr, eth_port->mac_addr);
++	ether_addr_copy(netdev->perm_addr, eth_port->mac_addr);
+ }
+ 
+ static struct nfp_eth_table_port *
+@@ -511,16 +514,18 @@ static int nfp_net_pci_map_mem(struct nfp_pf *pf)
+ 		return PTR_ERR(mem);
+ 	}
+ 
+-	min_size =  NFP_MAC_STATS_SIZE * (pf->eth_tbl->max_index + 1);
+-	pf->mac_stats_mem = nfp_rtsym_map(pf->rtbl, "_mac_stats",
+-					  "net.macstats", min_size,
+-					  &pf->mac_stats_bar);
+-	if (IS_ERR(pf->mac_stats_mem)) {
+-		if (PTR_ERR(pf->mac_stats_mem) != -ENOENT) {
+-			err = PTR_ERR(pf->mac_stats_mem);
+-			goto err_unmap_ctrl;
++	if (pf->eth_tbl) {
++		min_size =  NFP_MAC_STATS_SIZE * (pf->eth_tbl->max_index + 1);
++		pf->mac_stats_mem = nfp_rtsym_map(pf->rtbl, "_mac_stats",
++						  "net.macstats", min_size,
++						  &pf->mac_stats_bar);
++		if (IS_ERR(pf->mac_stats_mem)) {
++			if (PTR_ERR(pf->mac_stats_mem) != -ENOENT) {
++				err = PTR_ERR(pf->mac_stats_mem);
++				goto err_unmap_ctrl;
++			}
++			pf->mac_stats_mem = NULL;
+ 		}
+-		pf->mac_stats_mem = NULL;
+ 	}
+ 
+ 	pf->vf_cfg_mem = nfp_net_pf_map_rtsym(pf, "net.vfcfg",
+diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
+index 99bb679a9801..2abee0fe3a7c 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
++++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
+@@ -281,8 +281,7 @@ nfp_nsp_wait_reg(struct nfp_cpp *cpp, u64 *reg, u32 nsp_cpp, u64 addr,
+ 		if ((*reg & mask) == val)
+ 			return 0;
+ 
+-		if (msleep_interruptible(25))
+-			return -ERESTARTSYS;
++		msleep(25);
+ 
+ 		if (time_after(start_time, wait_until))
+ 			return -ETIMEDOUT;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
+index 893ef08a4b39..eaf50e6af6b3 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
+@@ -115,8 +115,7 @@ int qed_l2_alloc(struct qed_hwfn *p_hwfn)
+ 
+ void qed_l2_setup(struct qed_hwfn *p_hwfn)
+ {
+-	if (p_hwfn->hw_info.personality != QED_PCI_ETH &&
+-	    p_hwfn->hw_info.personality != QED_PCI_ETH_ROCE)
++	if (!QED_IS_L2_PERSONALITY(p_hwfn))
+ 		return;
+ 
+ 	mutex_init(&p_hwfn->p_l2_info->lock);
+@@ -126,8 +125,7 @@ void qed_l2_free(struct qed_hwfn *p_hwfn)
+ {
+ 	u32 i;
+ 
+-	if (p_hwfn->hw_info.personality != QED_PCI_ETH &&
+-	    p_hwfn->hw_info.personality != QED_PCI_ETH_ROCE)
++	if (!QED_IS_L2_PERSONALITY(p_hwfn))
+ 		return;
+ 
+ 	if (!p_hwfn->p_l2_info)
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_rdma.c b/drivers/net/ethernet/qlogic/qede/qede_rdma.c
+index 50b142fad6b8..1900bf7e67d1 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_rdma.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_rdma.c
+@@ -238,7 +238,7 @@ qede_rdma_get_free_event_node(struct qede_dev *edev)
+ 	}
+ 
+ 	if (!found) {
+-		event_node = kzalloc(sizeof(*event_node), GFP_KERNEL);
++		event_node = kzalloc(sizeof(*event_node), GFP_ATOMIC);
+ 		if (!event_node) {
+ 			DP_NOTICE(edev,
+ 				  "qedr: Could not allocate memory for rdma work\n");
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index 3bb6b66dc7bf..f9c25912eb98 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -720,6 +720,15 @@ static struct phy_driver broadcom_drivers[] = {
+ 	.get_strings	= bcm_phy_get_strings,
+ 	.get_stats	= bcm53xx_phy_get_stats,
+ 	.probe		= bcm53xx_phy_probe,
++}, {
++	.phy_id         = PHY_ID_BCM89610,
++	.phy_id_mask    = 0xfffffff0,
++	.name           = "Broadcom BCM89610",
++	.features       = PHY_GBIT_FEATURES,
++	.flags          = PHY_HAS_INTERRUPT,
++	.config_init    = bcm54xx_config_init,
++	.ack_interrupt  = bcm_phy_ack_intr,
++	.config_intr    = bcm_phy_config_intr,
+ } };
+ 
+ module_phy_driver(broadcom_drivers);
+@@ -741,6 +750,7 @@ static struct mdio_device_id __maybe_unused broadcom_tbl[] = {
+ 	{ PHY_ID_BCMAC131, 0xfffffff0 },
+ 	{ PHY_ID_BCM5241, 0xfffffff0 },
+ 	{ PHY_ID_BCM5395, 0xfffffff0 },
++	{ PHY_ID_BCM89610, 0xfffffff0 },
+ 	{ }
+ };
+ 
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 0e0978d8a0eb..febbeeecb078 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -1377,6 +1377,15 @@ static int m88e1318_set_wol(struct phy_device *phydev,
+ 		if (err < 0)
+ 			goto error;
+ 
++		/* If a WOL event happened once, the LED[2] interrupt pin
++		 * will not be cleared unless we read the interrupt status
++		 * register. If interrupts are in use, the normal interrupt
++		 * handling will clear the WOL event. Clear the WOL event
++		 * before enabling it if !phy_interrupt_is_valid().
++		 */
++		if (!phy_interrupt_is_valid(phydev))
++			phy_read(phydev, MII_M1011_IEVENT);
++
+ 		/* Enable the WOL interrupt */
+ 		err = __phy_modify(phydev, MII_88E1318S_PHY_CSIER, 0,
+ 				   MII_88E1318S_PHY_CSIER_WOL_EIE);
+diff --git a/drivers/net/phy/microchip.c b/drivers/net/phy/microchip.c
+index 0f293ef28935..a97ac8c12c4c 100644
+--- a/drivers/net/phy/microchip.c
++++ b/drivers/net/phy/microchip.c
+@@ -20,6 +20,7 @@
+ #include <linux/ethtool.h>
+ #include <linux/phy.h>
+ #include <linux/microchipphy.h>
++#include <linux/delay.h>
+ 
+ #define DRIVER_AUTHOR	"WOOJUNG HUH <woojung.huh@microchip.com>"
+ #define DRIVER_DESC	"Microchip LAN88XX PHY driver"
+@@ -30,6 +31,16 @@ struct lan88xx_priv {
+ 	__u32	wolopts;
+ };
+ 
++static int lan88xx_read_page(struct phy_device *phydev)
++{
++	return __phy_read(phydev, LAN88XX_EXT_PAGE_ACCESS);
++}
++
++static int lan88xx_write_page(struct phy_device *phydev, int page)
++{
++	return __phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, page);
++}
++
+ static int lan88xx_phy_config_intr(struct phy_device *phydev)
+ {
+ 	int rc;
+@@ -66,6 +77,150 @@ static int lan88xx_suspend(struct phy_device *phydev)
+ 	return 0;
+ }
+ 
++static int lan88xx_TR_reg_set(struct phy_device *phydev, u16 regaddr,
++			      u32 data)
++{
++	int val, save_page, ret = 0;
++	u16 buf;
++
++	/* Save current page */
++	save_page = phy_save_page(phydev);
++	if (save_page < 0) {
++		pr_warn("Failed to get current page\n");
++		goto err;
++	}
++
++	/* Switch to TR page */
++	lan88xx_write_page(phydev, LAN88XX_EXT_PAGE_ACCESS_TR);
++
++	ret = __phy_write(phydev, LAN88XX_EXT_PAGE_TR_LOW_DATA,
++			  (data & 0xFFFF));
++	if (ret < 0) {
++		pr_warn("Failed to write TR low data\n");
++		goto err;
++	}
++
++	ret = __phy_write(phydev, LAN88XX_EXT_PAGE_TR_HIGH_DATA,
++			  (data & 0x00FF0000) >> 16);
++	if (ret < 0) {
++		pr_warn("Failed to write TR high data\n");
++		goto err;
++	}
++
++	/* Config control bits [15:13] of register */
++	buf = (regaddr & ~(0x3 << 13));/* Clr [14:13] to write data in reg */
++	buf |= 0x8000; /* Set [15] to Packet transmit */
++
++	ret = __phy_write(phydev, LAN88XX_EXT_PAGE_TR_CR, buf);
++	if (ret < 0) {
++		pr_warn("Failed to write data in reg\n");
++		goto err;
++	}
++
++	usleep_range(1000, 2000);/* Wait for Data to be written */
++	val = __phy_read(phydev, LAN88XX_EXT_PAGE_TR_CR);
++	if (!(val & 0x8000))
++		pr_warn("TR Register[0x%X] configuration failed\n", regaddr);
++err:
++	return phy_restore_page(phydev, save_page, ret);
++}
++
++static void lan88xx_config_TR_regs(struct phy_device *phydev)
++{
++	int err;
++
++	/* Get access to Channel 0x1, Node 0xF , Register 0x01.
++	 * Write 24-bit value 0x12B00A to register. Setting MrvlTrFix1000Kf,
++	 * MrvlTrFix1000Kp, MasterEnableTR bits.
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x0F82, 0x12B00A);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x0F82]\n");
++
++	/* Get access to Channel b'10, Node b'1101, Register 0x06.
++	 * Write 24-bit value 0xD2C46F to register. Setting SSTrKf1000Slv,
++	 * SSTrKp1000Mas bits.
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x168C, 0xD2C46F);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x168C]\n");
++
++	/* Get access to Channel b'10, Node b'1111, Register 0x11.
++	 * Write 24-bit value 0x620 to register. Setting rem_upd_done_thresh
++	 * bits
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x17A2, 0x620);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x17A2]\n");
++
++	/* Get access to Channel b'10, Node b'1101, Register 0x10.
++	 * Write 24-bit value 0xEEFFDD to register. Setting
++	 * eee_TrKp1Long_1000, eee_TrKp2Long_1000, eee_TrKp3Long_1000,
++	 * eee_TrKp1Short_1000,eee_TrKp2Short_1000, eee_TrKp3Short_1000 bits.
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x16A0, 0xEEFFDD);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x16A0]\n");
++
++	/* Get access to Channel b'10, Node b'1101, Register 0x13.
++	 * Write 24-bit value 0x071448 to register. Setting
++	 * slv_lpi_tr_tmr_val1, slv_lpi_tr_tmr_val2 bits.
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x16A6, 0x071448);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x16A6]\n");
++
++	/* Get access to Channel b'10, Node b'1101, Register 0x12.
++	 * Write 24-bit value 0x13132F to register. Setting
++	 * slv_sigdet_timer_val1, slv_sigdet_timer_val2 bits.
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x16A4, 0x13132F);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x16A4]\n");
++
++	/* Get access to Channel b'10, Node b'1101, Register 0x14.
++	 * Write 24-bit value 0x0 to register. Setting eee_3level_delay,
++	 * eee_TrKf_freeze_delay bits.
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x16A8, 0x0);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x16A8]\n");
++
++	/* Get access to Channel b'01, Node b'1111, Register 0x34.
++	 * Write 24-bit value 0x91B06C to register. Setting
++	 * FastMseSearchThreshLong1000, FastMseSearchThreshShort1000,
++	 * FastMseSearchUpdGain1000 bits.
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x0FE8, 0x91B06C);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x0FE8]\n");
++
++	/* Get access to Channel b'01, Node b'1111, Register 0x3E.
++	 * Write 24-bit value 0xC0A028 to register. Setting
++	 * FastMseKp2ThreshLong1000, FastMseKp2ThreshShort1000,
++	 * FastMseKp2UpdGain1000, FastMseKp2ExitEn1000 bits.
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x0FFC, 0xC0A028);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x0FFC]\n");
++
++	/* Get access to Channel b'01, Node b'1111, Register 0x35.
++	 * Write 24-bit value 0x041600 to register. Setting
++	 * FastMseSearchPhShNum1000, FastMseSearchClksPerPh1000,
++	 * FastMsePhChangeDelay1000 bits.
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x0FEA, 0x041600);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x0FEA]\n");
++
++	/* Get access to Channel b'10, Node b'1101, Register 0x03.
++	 * Write 24-bit value 0x000004 to register. Setting TrFreeze bits.
++	 */
++	err = lan88xx_TR_reg_set(phydev, 0x1686, 0x000004);
++	if (err < 0)
++		pr_warn("Failed to Set Register[0x1686]\n");
++}
++
+ static int lan88xx_probe(struct phy_device *phydev)
+ {
+ 	struct device *dev = &phydev->mdio.dev;
+@@ -132,6 +287,25 @@ static void lan88xx_set_mdix(struct phy_device *phydev)
+ 	phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, LAN88XX_EXT_PAGE_SPACE_0);
+ }
+ 
++static int lan88xx_config_init(struct phy_device *phydev)
++{
++	int val;
++
++	genphy_config_init(phydev);
++	/* Zero detect delay enable */
++	val = phy_read_mmd(phydev, MDIO_MMD_PCS,
++			   PHY_ARDENNES_MMD_DEV_3_PHY_CFG);
++	val |= PHY_ARDENNES_MMD_DEV_3_PHY_CFG_ZD_DLY_EN_;
++
++	phy_write_mmd(phydev, MDIO_MMD_PCS, PHY_ARDENNES_MMD_DEV_3_PHY_CFG,
++		      val);
++
++	/* Config DSP registers */
++	lan88xx_config_TR_regs(phydev);
++
++	return 0;
++}
++
+ static int lan88xx_config_aneg(struct phy_device *phydev)
+ {
+ 	lan88xx_set_mdix(phydev);
+@@ -151,7 +325,7 @@ static struct phy_driver microchip_phy_driver[] = {
+ 	.probe		= lan88xx_probe,
+ 	.remove		= lan88xx_remove,
+ 
+-	.config_init	= genphy_config_init,
++	.config_init	= lan88xx_config_init,
+ 	.config_aneg	= lan88xx_config_aneg,
+ 
+ 	.ack_interrupt	= lan88xx_phy_ack_interrupt,
+@@ -160,6 +334,8 @@ static struct phy_driver microchip_phy_driver[] = {
+ 	.suspend	= lan88xx_suspend,
+ 	.resume		= genphy_resume,
+ 	.set_wol	= lan88xx_set_wol,
++	.read_page	= lan88xx_read_page,
++	.write_page	= lan88xx_write_page,
+ } };
+ 
+ module_phy_driver(microchip_phy_driver);
+diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
+index b979cf3bce65..88a8b5916624 100644
+--- a/drivers/nvme/host/Kconfig
++++ b/drivers/nvme/host/Kconfig
+@@ -27,7 +27,7 @@ config NVME_FABRICS
+ 
+ config NVME_RDMA
+ 	tristate "NVM Express over Fabrics RDMA host driver"
+-	depends on INFINIBAND && BLOCK
++	depends on INFINIBAND && INFINIBAND_ADDR_TRANS && BLOCK
+ 	select NVME_CORE
+ 	select NVME_FABRICS
+ 	select SG_POOL
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index df3d5051539d..4ae5be34131c 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -99,6 +99,7 @@ static struct class *nvme_subsys_class;
+ 
+ static void nvme_ns_remove(struct nvme_ns *ns);
+ static int nvme_revalidate_disk(struct gendisk *disk);
++static void nvme_put_subsystem(struct nvme_subsystem *subsys);
+ 
+ static __le32 nvme_get_log_dw10(u8 lid, size_t size)
+ {
+@@ -353,6 +354,7 @@ static void nvme_free_ns_head(struct kref *ref)
+ 	ida_simple_remove(&head->subsys->ns_ida, head->instance);
+ 	list_del_init(&head->entry);
+ 	cleanup_srcu_struct(&head->srcu);
++	nvme_put_subsystem(head->subsys);
+ 	kfree(head);
+ }
+ 
+@@ -767,6 +769,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
+ 				ret = PTR_ERR(meta);
+ 				goto out_unmap;
+ 			}
++			req->cmd_flags |= REQ_INTEGRITY;
+ 		}
+ 	}
+ 
+@@ -2842,6 +2845,9 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
+ 		goto out_cleanup_srcu;
+ 
+ 	list_add_tail(&head->entry, &ctrl->subsys->nsheads);
++
++	kref_get(&ctrl->subsys->ref);
++
+ 	return head;
+ out_cleanup_srcu:
+ 	cleanup_srcu_struct(&head->srcu);
+@@ -2978,31 +2984,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+ 	if (nvme_init_ns_head(ns, nsid, id))
+ 		goto out_free_id;
+ 	nvme_setup_streams_ns(ctrl, ns);
+-	
+-#ifdef CONFIG_NVME_MULTIPATH
+-	/*
+-	 * If multipathing is enabled we need to always use the subsystem
+-	 * instance number for numbering our devices to avoid conflicts
+-	 * between subsystems that have multiple controllers and thus use
+-	 * the multipath-aware subsystem node and those that have a single
+-	 * controller and use the controller node directly.
+-	 */
+-	if (ns->head->disk) {
+-		sprintf(disk_name, "nvme%dc%dn%d", ctrl->subsys->instance,
+-				ctrl->cntlid, ns->head->instance);
+-		flags = GENHD_FL_HIDDEN;
+-	} else {
+-		sprintf(disk_name, "nvme%dn%d", ctrl->subsys->instance,
+-				ns->head->instance);
+-	}
+-#else
+-	/*
+-	 * But without the multipath code enabled, multiple controller per
+-	 * subsystems are visible as devices and thus we cannot use the
+-	 * subsystem instance.
+-	 */
+-	sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance);
+-#endif
++	nvme_set_disk_name(disk_name, ns, ctrl, &flags);
+ 
+ 	if ((ctrl->quirks & NVME_QUIRK_LIGHTNVM) && id->vs[0] == 0x1) {
+ 		if (nvme_nvm_register(ns, disk_name, node)) {
+diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
+index 124c458806df..7ae732a77fe8 100644
+--- a/drivers/nvme/host/fabrics.c
++++ b/drivers/nvme/host/fabrics.c
+@@ -668,6 +668,7 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
++			kfree(opts->transport);
+ 			opts->transport = p;
+ 			break;
+ 		case NVMF_OPT_NQN:
+@@ -676,6 +677,7 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
++			kfree(opts->subsysnqn);
+ 			opts->subsysnqn = p;
+ 			nqnlen = strlen(opts->subsysnqn);
+ 			if (nqnlen >= NVMF_NQN_SIZE) {
+@@ -698,6 +700,7 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
++			kfree(opts->traddr);
+ 			opts->traddr = p;
+ 			break;
+ 		case NVMF_OPT_TRSVCID:
+@@ -706,6 +709,7 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
++			kfree(opts->trsvcid);
+ 			opts->trsvcid = p;
+ 			break;
+ 		case NVMF_OPT_QUEUE_SIZE:
+@@ -792,6 +796,7 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
+ 				ret = -EINVAL;
+ 				goto out;
+ 			}
++			nvmf_host_put(opts->host);
+ 			opts->host = nvmf_host_add(p);
+ 			kfree(p);
+ 			if (!opts->host) {
+@@ -817,6 +822,7 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
++			kfree(opts->host_traddr);
+ 			opts->host_traddr = p;
+ 			break;
+ 		case NVMF_OPT_HOST_ID:
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 060f69e03427..0949633ac87c 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -15,10 +15,32 @@
+ #include "nvme.h"
+ 
+ static bool multipath = true;
+-module_param(multipath, bool, 0644);
++module_param(multipath, bool, 0444);
+ MODULE_PARM_DESC(multipath,
+ 	"turn on native support for multiple controllers per subsystem");
+ 
++/*
++ * If multipathing is enabled we need to always use the subsystem instance
++ * number for numbering our devices to avoid conflicts between subsystems that
++ * have multiple controllers and thus use the multipath-aware subsystem node
++ * and those that have a single controller and use the controller node
++ * directly.
++ */
++void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns,
++			struct nvme_ctrl *ctrl, int *flags)
++{
++	if (!multipath) {
++		sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance);
++	} else if (ns->head->disk) {
++		sprintf(disk_name, "nvme%dc%dn%d", ctrl->subsys->instance,
++				ctrl->cntlid, ns->head->instance);
++		*flags = GENHD_FL_HIDDEN;
++	} else {
++		sprintf(disk_name, "nvme%dn%d", ctrl->subsys->instance,
++				ns->head->instance);
++	}
++}
++
+ void nvme_failover_req(struct request *req)
+ {
+ 	struct nvme_ns *ns = req->q->queuedata;
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 0133f3d2ce94..011d67ba11d5 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -411,6 +411,8 @@ extern const struct attribute_group nvme_ns_id_attr_group;
+ extern const struct block_device_operations nvme_ns_head_ops;
+ 
+ #ifdef CONFIG_NVME_MULTIPATH
++void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns,
++			struct nvme_ctrl *ctrl, int *flags);
+ void nvme_failover_req(struct request *req);
+ bool nvme_req_needs_failover(struct request *req, blk_status_t error);
+ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl);
+@@ -436,6 +438,16 @@ static inline void nvme_mpath_check_last_path(struct nvme_ns *ns)
+ }
+ 
+ #else
++/*
++ * Without the multipath code enabled, multiple controller per subsystems are
++ * visible as devices and thus we cannot use the subsystem instance.
++ */
++static inline void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns,
++				      struct nvme_ctrl *ctrl, int *flags)
++{
++	sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance);
++}
++
+ static inline void nvme_failover_req(struct request *req)
+ {
+ }
+diff --git a/drivers/nvme/target/Kconfig b/drivers/nvme/target/Kconfig
+index 5f4f8b16685f..3c7b61ddb0d1 100644
+--- a/drivers/nvme/target/Kconfig
++++ b/drivers/nvme/target/Kconfig
+@@ -27,7 +27,7 @@ config NVME_TARGET_LOOP
+ 
+ config NVME_TARGET_RDMA
+ 	tristate "NVMe over Fabrics RDMA target support"
+-	depends on INFINIBAND
++	depends on INFINIBAND && INFINIBAND_ADDR_TRANS
+ 	depends on NVME_TARGET
+ 	select SGL_ALLOC
+ 	help
+diff --git a/drivers/pci/dwc/pcie-kirin.c b/drivers/pci/dwc/pcie-kirin.c
+index 13d839bd6160..c1b396a36a20 100644
+--- a/drivers/pci/dwc/pcie-kirin.c
++++ b/drivers/pci/dwc/pcie-kirin.c
+@@ -487,7 +487,7 @@ static int kirin_pcie_probe(struct platform_device *pdev)
+ 		return ret;
+ 
+ 	kirin_pcie->gpio_id_reset = of_get_named_gpio(dev->of_node,
+-						      "reset-gpio", 0);
++						      "reset-gpios", 0);
+ 	if (kirin_pcie->gpio_id_reset < 0)
+ 		return -ENODEV;
+ 
+diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
+index b1ae1618fefe..fee9225ca559 100644
+--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
++++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
+@@ -1622,22 +1622,30 @@ static int chv_gpio_probe(struct chv_pinctrl *pctrl, int irq)
+ 
+ 	if (!need_valid_mask) {
+ 		irq_base = devm_irq_alloc_descs(pctrl->dev, -1, 0,
+-						chip->ngpio, NUMA_NO_NODE);
++						community->npins, NUMA_NO_NODE);
+ 		if (irq_base < 0) {
+ 			dev_err(pctrl->dev, "Failed to allocate IRQ numbers\n");
+ 			return irq_base;
+ 		}
+-	} else {
+-		irq_base = 0;
+ 	}
+ 
+-	ret = gpiochip_irqchip_add(chip, &chv_gpio_irqchip, irq_base,
++	ret = gpiochip_irqchip_add(chip, &chv_gpio_irqchip, 0,
+ 				   handle_bad_irq, IRQ_TYPE_NONE);
+ 	if (ret) {
+ 		dev_err(pctrl->dev, "failed to add IRQ chip\n");
+ 		return ret;
+ 	}
+ 
++	if (!need_valid_mask) {
++		for (i = 0; i < community->ngpio_ranges; i++) {
++			range = &community->gpio_ranges[i];
++
++			irq_domain_associate_many(chip->irq.domain, irq_base,
++						  range->base, range->npins);
++			irq_base += range->npins;
++		}
++	}
++
+ 	gpiochip_set_chained_irqchip(chip, &chv_gpio_irqchip, irq,
+ 				     chv_gpio_irq_handler);
+ 	return 0;
+diff --git a/drivers/pinctrl/meson/pinctrl-meson-axg.c b/drivers/pinctrl/meson/pinctrl-meson-axg.c
+index 4b91ff74779b..99a6ceac8e53 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson-axg.c
++++ b/drivers/pinctrl/meson/pinctrl-meson-axg.c
+@@ -898,7 +898,7 @@ static struct meson_bank meson_axg_periphs_banks[] = {
+ 
+ static struct meson_bank meson_axg_aobus_banks[] = {
+ 	/*   name    first      last      irq	pullen  pull    dir     out     in  */
+-	BANK("AO",   GPIOAO_0,  GPIOAO_9, 0, 13, 0,  16,  0, 0,  0,  0,  0, 16,  1,  0),
++	BANK("AO",   GPIOAO_0,  GPIOAO_13, 0, 13, 0,  16,  0, 0,  0,  0,  0, 16,  1,  0),
+ };
+ 
+ static struct meson_pmx_bank meson_axg_periphs_pmx_banks[] = {
+diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig
+index 51a1b49760ea..6bfb47c18a15 100644
+--- a/drivers/platform/x86/Kconfig
++++ b/drivers/platform/x86/Kconfig
+@@ -168,8 +168,8 @@ config DELL_WMI
+ 	depends on DMI
+ 	depends on INPUT
+ 	depends on ACPI_VIDEO || ACPI_VIDEO = n
++	depends on DELL_SMBIOS
+ 	select DELL_WMI_DESCRIPTOR
+-	select DELL_SMBIOS
+ 	select INPUT_SPARSEKMAP
+ 	---help---
+ 	  Say Y here if you want to support WMI-based hotkeys on Dell laptops.
+diff --git a/drivers/remoteproc/qcom_q6v5_pil.c b/drivers/remoteproc/qcom_q6v5_pil.c
+index b4e5e725848d..5f5b57fcf792 100644
+--- a/drivers/remoteproc/qcom_q6v5_pil.c
++++ b/drivers/remoteproc/qcom_q6v5_pil.c
+@@ -1088,6 +1088,7 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ 		dev_err(qproc->dev, "unable to resolve mba region\n");
+ 		return ret;
+ 	}
++	of_node_put(node);
+ 
+ 	qproc->mba_phys = r.start;
+ 	qproc->mba_size = resource_size(&r);
+@@ -1105,6 +1106,7 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ 		dev_err(qproc->dev, "unable to resolve mpss region\n");
+ 		return ret;
+ 	}
++	of_node_put(node);
+ 
+ 	qproc->mpss_phys = qproc->mpss_reloc = r.start;
+ 	qproc->mpss_size = resource_size(&r);
+diff --git a/drivers/reset/reset-uniphier.c b/drivers/reset/reset-uniphier.c
+index e8bb023ff15e..3e3417c8bb9e 100644
+--- a/drivers/reset/reset-uniphier.c
++++ b/drivers/reset/reset-uniphier.c
+@@ -107,7 +107,7 @@ static const struct uniphier_reset_data uniphier_ld20_sys_reset_data[] = {
+ 	UNIPHIER_RESETX(4, 0x200c, 2),		/* eMMC */
+ 	UNIPHIER_RESETX(6, 0x200c, 6),		/* Ether */
+ 	UNIPHIER_RESETX(8, 0x200c, 8),		/* STDMAC (HSC) */
+-	UNIPHIER_RESETX(12, 0x200c, 5),		/* GIO (PCIe, USB3) */
++	UNIPHIER_RESETX(14, 0x200c, 5),		/* USB30 */
+ 	UNIPHIER_RESETX(16, 0x200c, 12),	/* USB30-PHY0 */
+ 	UNIPHIER_RESETX(17, 0x200c, 13),	/* USB30-PHY1 */
+ 	UNIPHIER_RESETX(18, 0x200c, 14),	/* USB30-PHY2 */
+@@ -122,8 +122,8 @@ static const struct uniphier_reset_data uniphier_pxs3_sys_reset_data[] = {
+ 	UNIPHIER_RESETX(2, 0x200c, 0),		/* NAND */
+ 	UNIPHIER_RESETX(4, 0x200c, 2),		/* eMMC */
+ 	UNIPHIER_RESETX(8, 0x200c, 12),		/* STDMAC */
+-	UNIPHIER_RESETX(12, 0x200c, 4),		/* USB30 link (GIO0) */
+-	UNIPHIER_RESETX(13, 0x200c, 5),		/* USB31 link (GIO1) */
++	UNIPHIER_RESETX(12, 0x200c, 4),		/* USB30 link */
++	UNIPHIER_RESETX(13, 0x200c, 5),		/* USB31 link */
+ 	UNIPHIER_RESETX(16, 0x200c, 16),	/* USB30-PHY0 */
+ 	UNIPHIER_RESETX(17, 0x200c, 18),	/* USB30-PHY1 */
+ 	UNIPHIER_RESETX(18, 0x200c, 20),	/* USB30-PHY2 */
+diff --git a/drivers/rpmsg/rpmsg_char.c b/drivers/rpmsg/rpmsg_char.c
+index 64b6de9763ee..1efdf9ff8679 100644
+--- a/drivers/rpmsg/rpmsg_char.c
++++ b/drivers/rpmsg/rpmsg_char.c
+@@ -581,4 +581,6 @@ static void rpmsg_chrdev_exit(void)
+ 	unregister_chrdev_region(rpmsg_major, RPMSG_DEV_MAX);
+ }
+ module_exit(rpmsg_chrdev_exit);
++
++MODULE_ALIAS("rpmsg:rpmsg_chrdev");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index c11a083cd956..086f172d404c 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -706,7 +706,6 @@ void qeth_clear_ipacmd_list(struct qeth_card *card)
+ 		qeth_put_reply(reply);
+ 	}
+ 	spin_unlock_irqrestore(&card->lock, flags);
+-	atomic_set(&card->write.irq_pending, 0);
+ }
+ EXPORT_SYMBOL_GPL(qeth_clear_ipacmd_list);
+ 
+@@ -1101,14 +1100,9 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm,
+ {
+ 	int rc;
+ 	int cstat, dstat;
++	struct qeth_cmd_buffer *iob = NULL;
+ 	struct qeth_channel *channel;
+ 	struct qeth_card *card;
+-	struct qeth_cmd_buffer *iob;
+-
+-	if (__qeth_check_irb_error(cdev, intparm, irb))
+-		return;
+-	cstat = irb->scsw.cmd.cstat;
+-	dstat = irb->scsw.cmd.dstat;
+ 
+ 	card = CARD_FROM_CDEV(cdev);
+ 	if (!card)
+@@ -1126,6 +1120,19 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm,
+ 		channel = &card->data;
+ 		QETH_CARD_TEXT(card, 5, "data");
+ 	}
++
++	if (qeth_intparm_is_iob(intparm))
++		iob = (struct qeth_cmd_buffer *) __va((addr_t)intparm);
++
++	if (__qeth_check_irb_error(cdev, intparm, irb)) {
++		/* IO was terminated, free its resources. */
++		if (iob)
++			qeth_release_buffer(iob->channel, iob);
++		atomic_set(&channel->irq_pending, 0);
++		wake_up(&card->wait_q);
++		return;
++	}
++
+ 	atomic_set(&channel->irq_pending, 0);
+ 
+ 	if (irb->scsw.cmd.fctl & (SCSW_FCTL_CLEAR_FUNC))
+@@ -1149,6 +1156,10 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm,
+ 		/* we don't have to handle this further */
+ 		intparm = 0;
+ 	}
++
++	cstat = irb->scsw.cmd.cstat;
++	dstat = irb->scsw.cmd.dstat;
++
+ 	if ((dstat & DEV_STAT_UNIT_EXCEP) ||
+ 	    (dstat & DEV_STAT_UNIT_CHECK) ||
+ 	    (cstat)) {
+@@ -1187,11 +1198,8 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm,
+ 	    channel->state == CH_STATE_UP)
+ 		__qeth_issue_next_read(card);
+ 
+-	if (intparm) {
+-		iob = (struct qeth_cmd_buffer *) __va((addr_t)intparm);
+-		if (iob->callback)
+-			iob->callback(iob->channel, iob);
+-	}
++	if (iob && iob->callback)
++		iob->callback(iob->channel, iob);
+ 
+ out:
+ 	wake_up(&card->wait_q);
+@@ -1862,8 +1870,8 @@ static int qeth_idx_activate_get_answer(struct qeth_channel *channel,
+ 		   atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0);
+ 	QETH_DBF_TEXT(SETUP, 6, "noirqpnd");
+ 	spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags);
+-	rc = ccw_device_start(channel->ccwdev,
+-			      &channel->ccw, (addr_t) iob, 0, 0);
++	rc = ccw_device_start_timeout(channel->ccwdev, &channel->ccw,
++				      (addr_t) iob, 0, 0, QETH_TIMEOUT);
+ 	spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
+ 
+ 	if (rc) {
+@@ -1880,7 +1888,6 @@ static int qeth_idx_activate_get_answer(struct qeth_channel *channel,
+ 	if (channel->state != CH_STATE_UP) {
+ 		rc = -ETIME;
+ 		QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc);
+-		qeth_clear_cmd_buffers(channel);
+ 	} else
+ 		rc = 0;
+ 	return rc;
+@@ -1934,8 +1941,8 @@ static int qeth_idx_activate_channel(struct qeth_channel *channel,
+ 		   atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0);
+ 	QETH_DBF_TEXT(SETUP, 6, "noirqpnd");
+ 	spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags);
+-	rc = ccw_device_start(channel->ccwdev,
+-			      &channel->ccw, (addr_t) iob, 0, 0);
++	rc = ccw_device_start_timeout(channel->ccwdev, &channel->ccw,
++				      (addr_t) iob, 0, 0, QETH_TIMEOUT);
+ 	spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
+ 
+ 	if (rc) {
+@@ -1956,7 +1963,6 @@ static int qeth_idx_activate_channel(struct qeth_channel *channel,
+ 		QETH_DBF_MESSAGE(2, "%s IDX activate timed out\n",
+ 			dev_name(&channel->ccwdev->dev));
+ 		QETH_DBF_TEXT_(SETUP, 2, "2err%d", -ETIME);
+-		qeth_clear_cmd_buffers(channel);
+ 		return -ETIME;
+ 	}
+ 	return qeth_idx_activate_get_answer(channel, idx_reply_cb);
+@@ -2158,8 +2164,8 @@ int qeth_send_control_data(struct qeth_card *card, int len,
+ 
+ 	QETH_CARD_TEXT(card, 6, "noirqpnd");
+ 	spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags);
+-	rc = ccw_device_start(card->write.ccwdev, &card->write.ccw,
+-			      (addr_t) iob, 0, 0);
++	rc = ccw_device_start_timeout(CARD_WDEV(card), &card->write.ccw,
++				      (addr_t) iob, 0, 0, event_timeout);
+ 	spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags);
+ 	if (rc) {
+ 		QETH_DBF_MESSAGE(2, "%s qeth_send_control_data: "
+@@ -2191,8 +2197,6 @@ int qeth_send_control_data(struct qeth_card *card, int len,
+ 		}
+ 	}
+ 
+-	if (reply->rc == -EIO)
+-		goto error;
+ 	rc = reply->rc;
+ 	qeth_put_reply(reply);
+ 	return rc;
+@@ -2203,9 +2207,6 @@ int qeth_send_control_data(struct qeth_card *card, int len,
+ 	list_del_init(&reply->list);
+ 	spin_unlock_irqrestore(&reply->card->lock, flags);
+ 	atomic_inc(&reply->received);
+-error:
+-	atomic_set(&card->write.irq_pending, 0);
+-	qeth_release_buffer(iob->channel, iob);
+ 	rc = reply->rc;
+ 	qeth_put_reply(reply);
+ 	return rc;
+diff --git a/drivers/s390/net/qeth_core_mpc.h b/drivers/s390/net/qeth_core_mpc.h
+index 619f897b4bb0..f4d1ec0b8f5a 100644
+--- a/drivers/s390/net/qeth_core_mpc.h
++++ b/drivers/s390/net/qeth_core_mpc.h
+@@ -35,6 +35,18 @@ extern unsigned char IPA_PDU_HEADER[];
+ #define QETH_HALT_CHANNEL_PARM	-11
+ #define QETH_RCD_PARM -12
+ 
++static inline bool qeth_intparm_is_iob(unsigned long intparm)
++{
++	switch (intparm) {
++	case QETH_CLEAR_CHANNEL_PARM:
++	case QETH_HALT_CHANNEL_PARM:
++	case QETH_RCD_PARM:
++	case 0:
++		return false;
++	}
++	return true;
++}
++
+ /*****************************************************************************/
+ /* IP Assist related definitions                                             */
+ /*****************************************************************************/
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index 5ef4c978ad19..eb5ca4701cec 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -1339,8 +1339,8 @@ static int qeth_osn_send_control_data(struct qeth_card *card, int len,
+ 	qeth_prepare_control_data(card, len, iob);
+ 	QETH_CARD_TEXT(card, 6, "osnoirqp");
+ 	spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags);
+-	rc = ccw_device_start(card->write.ccwdev, &card->write.ccw,
+-			      (addr_t) iob, 0, 0);
++	rc = ccw_device_start_timeout(CARD_WDEV(card), &card->write.ccw,
++				      (addr_t) iob, 0, 0, QETH_IPA_TIMEOUT);
+ 	spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags);
+ 	if (rc) {
+ 		QETH_DBF_MESSAGE(2, "qeth_osn_send_control_data: "
+diff --git a/drivers/s390/net/smsgiucv.c b/drivers/s390/net/smsgiucv.c
+index 3b0c8b8a7634..066b5c3aaae6 100644
+--- a/drivers/s390/net/smsgiucv.c
++++ b/drivers/s390/net/smsgiucv.c
+@@ -176,7 +176,7 @@ static struct device_driver smsg_driver = {
+ 
+ static void __exit smsg_exit(void)
+ {
+-	cpcmd("SET SMSG IUCV", NULL, 0, NULL);
++	cpcmd("SET SMSG OFF", NULL, 0, NULL);
+ 	device_unregister(smsg_dev);
+ 	iucv_unregister(&smsg_handler, 1);
+ 	driver_unregister(&smsg_driver);
+diff --git a/drivers/scsi/isci/port_config.c b/drivers/scsi/isci/port_config.c
+index edb7be786c65..9e8de1462593 100644
+--- a/drivers/scsi/isci/port_config.c
++++ b/drivers/scsi/isci/port_config.c
+@@ -291,7 +291,7 @@ sci_mpc_agent_validate_phy_configuration(struct isci_host *ihost,
+ 		 * Note: We have not moved the current phy_index so we will actually
+ 		 *       compare the startting phy with itself.
+ 		 *       This is expected and required to add the phy to the port. */
+-		while (phy_index < SCI_MAX_PHYS) {
++		for (; phy_index < SCI_MAX_PHYS; phy_index++) {
+ 			if ((phy_mask & (1 << phy_index)) == 0)
+ 				continue;
+ 			sci_phy_get_sas_address(&ihost->phys[phy_index],
+@@ -311,7 +311,6 @@ sci_mpc_agent_validate_phy_configuration(struct isci_host *ihost,
+ 					      &ihost->phys[phy_index]);
+ 
+ 			assigned_phy_mask |= (1 << phy_index);
+-			phy_index++;
+ 		}
+ 
+ 	}
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 5ec3b74e8aed..2834171b5012 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -1124,12 +1124,12 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
+ 		goto fail_fw_init;
+ 	}
+ 
+-	ret = 0;
++	return 0;
+ 
+ fail_fw_init:
+ 	dev_err(&instance->pdev->dev,
+-		"Init cmd return status %s for SCSI host %d\n",
+-		ret ? "FAILED" : "SUCCESS", instance->host->host_no);
++		"Init cmd return status FAILED for SCSI host %d\n",
++		instance->host->host_no);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index f4b52b44b966..65f6c94f2e9b 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2322,6 +2322,12 @@ iscsi_multicast_skb(struct sk_buff *skb, uint32_t group, gfp_t gfp)
+ 	return nlmsg_multicast(nls, skb, 0, group, gfp);
+ }
+ 
++static int
++iscsi_unicast_skb(struct sk_buff *skb, u32 portid)
++{
++	return nlmsg_unicast(nls, skb, portid);
++}
++
+ int iscsi_recv_pdu(struct iscsi_cls_conn *conn, struct iscsi_hdr *hdr,
+ 		   char *data, uint32_t data_size)
+ {
+@@ -2524,14 +2530,11 @@ void iscsi_ping_comp_event(uint32_t host_no, struct iscsi_transport *transport,
+ EXPORT_SYMBOL_GPL(iscsi_ping_comp_event);
+ 
+ static int
+-iscsi_if_send_reply(uint32_t group, int seq, int type, int done, int multi,
+-		    void *payload, int size)
++iscsi_if_send_reply(u32 portid, int type, void *payload, int size)
+ {
+ 	struct sk_buff	*skb;
+ 	struct nlmsghdr	*nlh;
+ 	int len = nlmsg_total_size(size);
+-	int flags = multi ? NLM_F_MULTI : 0;
+-	int t = done ? NLMSG_DONE : type;
+ 
+ 	skb = alloc_skb(len, GFP_ATOMIC);
+ 	if (!skb) {
+@@ -2539,10 +2542,9 @@ iscsi_if_send_reply(uint32_t group, int seq, int type, int done, int multi,
+ 		return -ENOMEM;
+ 	}
+ 
+-	nlh = __nlmsg_put(skb, 0, 0, t, (len - sizeof(*nlh)), 0);
+-	nlh->nlmsg_flags = flags;
++	nlh = __nlmsg_put(skb, 0, 0, type, (len - sizeof(*nlh)), 0);
+ 	memcpy(nlmsg_data(nlh), payload, size);
+-	return iscsi_multicast_skb(skb, group, GFP_ATOMIC);
++	return iscsi_unicast_skb(skb, portid);
+ }
+ 
+ static int
+@@ -3470,6 +3472,7 @@ static int
+ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ {
+ 	int err = 0;
++	u32 portid;
+ 	struct iscsi_uevent *ev = nlmsg_data(nlh);
+ 	struct iscsi_transport *transport = NULL;
+ 	struct iscsi_internal *priv;
+@@ -3490,10 +3493,12 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 	if (!try_module_get(transport->owner))
+ 		return -EINVAL;
+ 
++	portid = NETLINK_CB(skb).portid;
++
+ 	switch (nlh->nlmsg_type) {
+ 	case ISCSI_UEVENT_CREATE_SESSION:
+ 		err = iscsi_if_create_session(priv, ep, ev,
+-					      NETLINK_CB(skb).portid,
++					      portid,
+ 					      ev->u.c_session.initial_cmdsn,
+ 					      ev->u.c_session.cmds_max,
+ 					      ev->u.c_session.queue_depth);
+@@ -3506,7 +3511,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 		}
+ 
+ 		err = iscsi_if_create_session(priv, ep, ev,
+-					NETLINK_CB(skb).portid,
++					portid,
+ 					ev->u.c_bound_session.initial_cmdsn,
+ 					ev->u.c_bound_session.cmds_max,
+ 					ev->u.c_bound_session.queue_depth);
+@@ -3664,6 +3669,8 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ static void
+ iscsi_if_rx(struct sk_buff *skb)
+ {
++	u32 portid = NETLINK_CB(skb).portid;
++
+ 	mutex_lock(&rx_queue_mutex);
+ 	while (skb->len >= NLMSG_HDRLEN) {
+ 		int err;
+@@ -3699,8 +3706,8 @@ iscsi_if_rx(struct sk_buff *skb)
+ 				break;
+ 			if (ev->type == ISCSI_UEVENT_GET_CHAP && !err)
+ 				break;
+-			err = iscsi_if_send_reply(group, nlh->nlmsg_seq,
+-				nlh->nlmsg_type, 0, 0, ev, sizeof(*ev));
++			err = iscsi_if_send_reply(portid, nlh->nlmsg_type,
++						  ev, sizeof(*ev));
+ 		} while (err < 0 && err != -ECONNREFUSED && err != -ESRCH);
+ 		skb_pull(skb, rlen);
+ 	}
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 8c51d628b52e..a2ec0bc9e9fa 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1722,11 +1722,14 @@ static int storvsc_probe(struct hv_device *device,
+ 		max_targets = STORVSC_MAX_TARGETS;
+ 		max_channels = STORVSC_MAX_CHANNELS;
+ 		/*
+-		 * On Windows8 and above, we support sub-channels for storage.
++		 * On Windows8 and above, we support sub-channels for storage
++		 * on SCSI and FC controllers.
+ 		 * The number of sub-channels offerred is based on the number of
+ 		 * VCPUs in the guest.
+ 		 */
+-		max_sub_channels = (num_cpus / storvsc_vcpus_per_sub_channel);
++		if (!dev_is_ide)
++			max_sub_channels =
++				(num_cpus - 1) / storvsc_vcpus_per_sub_channel;
+ 	}
+ 
+ 	scsi_driver.can_queue = (max_outstanding_req_per_channel *
+diff --git a/drivers/scsi/vmw_pvscsi.c b/drivers/scsi/vmw_pvscsi.c
+index c374e3b5c678..777e5f1e52d1 100644
+--- a/drivers/scsi/vmw_pvscsi.c
++++ b/drivers/scsi/vmw_pvscsi.c
+@@ -609,7 +609,7 @@ static void pvscsi_complete_request(struct pvscsi_adapter *adapter,
+ 			break;
+ 
+ 		case BTSTAT_ABORTQUEUE:
+-			cmd->result = (DID_ABORT << 16);
++			cmd->result = (DID_BUS_BUSY << 16);
+ 			break;
+ 
+ 		case BTSTAT_SCSIPARITY:
+diff --git a/drivers/soc/bcm/raspberrypi-power.c b/drivers/soc/bcm/raspberrypi-power.c
+index fe96a8b956fb..f7ed1187518b 100644
+--- a/drivers/soc/bcm/raspberrypi-power.c
++++ b/drivers/soc/bcm/raspberrypi-power.c
+@@ -45,7 +45,7 @@ struct rpi_power_domains {
+ struct rpi_power_domain_packet {
+ 	u32 domain;
+ 	u32 on;
+-} __packet;
++};
+ 
+ /*
+  * Asks the firmware to enable or disable power on a specific power
+diff --git a/drivers/spi/spi-bcm2835aux.c b/drivers/spi/spi-bcm2835aux.c
+index 7428091d3f5b..bd00b7cc8b78 100644
+--- a/drivers/spi/spi-bcm2835aux.c
++++ b/drivers/spi/spi-bcm2835aux.c
+@@ -184,6 +184,11 @@ static irqreturn_t bcm2835aux_spi_interrupt(int irq, void *dev_id)
+ 	struct bcm2835aux_spi *bs = spi_master_get_devdata(master);
+ 	irqreturn_t ret = IRQ_NONE;
+ 
++	/* IRQ may be shared, so return if our interrupts are disabled */
++	if (!(bcm2835aux_rd(bs, BCM2835_AUX_SPI_CNTL1) &
++	      (BCM2835_AUX_SPI_CNTL1_TXEMPTY | BCM2835_AUX_SPI_CNTL1_IDLE)))
++		return ret;
++
+ 	/* check if we have data to read */
+ 	while (bs->rx_len &&
+ 	       (!(bcm2835aux_rd(bs, BCM2835_AUX_SPI_STAT) &
+diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
+index 5c9516ae4942..4a001634023e 100644
+--- a/drivers/spi/spi-cadence.c
++++ b/drivers/spi/spi-cadence.c
+@@ -313,6 +313,14 @@ static void cdns_spi_fill_tx_fifo(struct cdns_spi *xspi)
+ 
+ 	while ((trans_cnt < CDNS_SPI_FIFO_DEPTH) &&
+ 	       (xspi->tx_bytes > 0)) {
++
++		/* When xspi in busy condition, bytes may send failed,
++		 * then spi control did't work thoroughly, add one byte delay
++		 */
++		if (cdns_spi_read(xspi, CDNS_SPI_ISR) &
++		    CDNS_SPI_IXR_TXFULL)
++			usleep_range(10, 20);
++
+ 		if (xspi->txbuf)
+ 			cdns_spi_write(xspi, CDNS_SPI_TXD, *xspi->txbuf++);
+ 		else
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index c5dcfb434a49..584118ed12eb 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -283,6 +283,7 @@ static void sh_msiof_spi_set_clk_regs(struct sh_msiof_spi_priv *p,
+ 	}
+ 
+ 	k = min_t(int, k, ARRAY_SIZE(sh_msiof_spi_div_table) - 1);
++	brps = min_t(int, brps, 32);
+ 
+ 	scr = sh_msiof_spi_div_table[k].brdv | SCR_BRPS(brps);
+ 	sh_msiof_write(p, TSCR, scr);
+diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
+index 0d99b242e82e..6cb933ecc084 100644
+--- a/drivers/target/target_core_pscsi.c
++++ b/drivers/target/target_core_pscsi.c
+@@ -890,6 +890,7 @@ pscsi_map_sg(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 			bytes = min(bytes, data_len);
+ 
+ 			if (!bio) {
++new_bio:
+ 				nr_vecs = min_t(int, BIO_MAX_PAGES, nr_pages);
+ 				nr_pages -= nr_vecs;
+ 				/*
+@@ -931,6 +932,7 @@ pscsi_map_sg(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 				 * be allocated with pscsi_get_bio() above.
+ 				 */
+ 				bio = NULL;
++				goto new_bio;
+ 			}
+ 
+ 			data_len -= bytes;
+diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
+index 6c4b200a4560..9dbbb3c3bf35 100644
+--- a/drivers/tee/tee_core.c
++++ b/drivers/tee/tee_core.c
+@@ -238,6 +238,17 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
+ 			if (IS_ERR(shm))
+ 				return PTR_ERR(shm);
+ 
++			/*
++			 * Ensure offset + size does not overflow offset
++			 * and does not overflow the size of the referred
++			 * shared memory object.
++			 */
++			if ((ip.a + ip.b) < ip.a ||
++			    (ip.a + ip.b) > shm->size) {
++				tee_shm_put(shm);
++				return -EINVAL;
++			}
++
+ 			params[n].u.memref.shm_offs = ip.a;
+ 			params[n].u.memref.size = ip.b;
+ 			params[n].u.memref.shm = shm;
+diff --git a/drivers/thermal/int340x_thermal/int3403_thermal.c b/drivers/thermal/int340x_thermal/int3403_thermal.c
+index 8a7f24dd9315..0c19fcd56a0d 100644
+--- a/drivers/thermal/int340x_thermal/int3403_thermal.c
++++ b/drivers/thermal/int340x_thermal/int3403_thermal.c
+@@ -194,6 +194,7 @@ static int int3403_cdev_add(struct int3403_priv *priv)
+ 		return -EFAULT;
+ 	}
+ 
++	priv->priv = obj;
+ 	obj->max_state = p->package.count - 1;
+ 	obj->cdev =
+ 		thermal_cooling_device_register(acpi_device_bid(priv->adev),
+@@ -201,8 +202,6 @@ static int int3403_cdev_add(struct int3403_priv *priv)
+ 	if (IS_ERR(obj->cdev))
+ 		result = PTR_ERR(obj->cdev);
+ 
+-	priv->priv = obj;
+-
+ 	kfree(buf.pointer);
+ 	/* TODO: add ACPI notification support */
+ 
+diff --git a/drivers/usb/musb/musb_host.c b/drivers/usb/musb/musb_host.c
+index 0ee0c6d7f194..f4c42ac62789 100644
+--- a/drivers/usb/musb/musb_host.c
++++ b/drivers/usb/musb/musb_host.c
+@@ -2530,8 +2530,11 @@ static int musb_bus_suspend(struct usb_hcd *hcd)
+ {
+ 	struct musb	*musb = hcd_to_musb(hcd);
+ 	u8		devctl;
++	int		ret;
+ 
+-	musb_port_suspend(musb, true);
++	ret = musb_port_suspend(musb, true);
++	if (ret)
++		return ret;
+ 
+ 	if (!is_host_active(musb))
+ 		return 0;
+diff --git a/drivers/usb/musb/musb_host.h b/drivers/usb/musb/musb_host.h
+index 72392bbcd0a4..2999845632ce 100644
+--- a/drivers/usb/musb/musb_host.h
++++ b/drivers/usb/musb/musb_host.h
+@@ -67,7 +67,7 @@ extern void musb_host_rx(struct musb *, u8);
+ extern void musb_root_disconnect(struct musb *musb);
+ extern void musb_host_resume_root_hub(struct musb *musb);
+ extern void musb_host_poke_root_hub(struct musb *musb);
+-extern void musb_port_suspend(struct musb *musb, bool do_suspend);
++extern int musb_port_suspend(struct musb *musb, bool do_suspend);
+ extern void musb_port_reset(struct musb *musb, bool do_reset);
+ extern void musb_host_finish_resume(struct work_struct *work);
+ #else
+@@ -99,7 +99,10 @@ static inline void musb_root_disconnect(struct musb *musb)	{}
+ static inline void musb_host_resume_root_hub(struct musb *musb)	{}
+ static inline void musb_host_poll_rh_status(struct musb *musb)	{}
+ static inline void musb_host_poke_root_hub(struct musb *musb)	{}
+-static inline void musb_port_suspend(struct musb *musb, bool do_suspend) {}
++static inline int musb_port_suspend(struct musb *musb, bool do_suspend)
++{
++	return 0;
++}
+ static inline void musb_port_reset(struct musb *musb, bool do_reset) {}
+ static inline void musb_host_finish_resume(struct work_struct *work) {}
+ #endif
+diff --git a/drivers/usb/musb/musb_virthub.c b/drivers/usb/musb/musb_virthub.c
+index 5165d2b07ade..2f8dd9826e94 100644
+--- a/drivers/usb/musb/musb_virthub.c
++++ b/drivers/usb/musb/musb_virthub.c
+@@ -48,14 +48,14 @@ void musb_host_finish_resume(struct work_struct *work)
+ 	spin_unlock_irqrestore(&musb->lock, flags);
+ }
+ 
+-void musb_port_suspend(struct musb *musb, bool do_suspend)
++int musb_port_suspend(struct musb *musb, bool do_suspend)
+ {
+ 	struct usb_otg	*otg = musb->xceiv->otg;
+ 	u8		power;
+ 	void __iomem	*mbase = musb->mregs;
+ 
+ 	if (!is_host_active(musb))
+-		return;
++		return 0;
+ 
+ 	/* NOTE:  this doesn't necessarily put PHY into low power mode,
+ 	 * turning off its clock; that's a function of PHY integration and
+@@ -66,16 +66,20 @@ void musb_port_suspend(struct musb *musb, bool do_suspend)
+ 	if (do_suspend) {
+ 		int retries = 10000;
+ 
+-		power &= ~MUSB_POWER_RESUME;
+-		power |= MUSB_POWER_SUSPENDM;
+-		musb_writeb(mbase, MUSB_POWER, power);
++		if (power & MUSB_POWER_RESUME)
++			return -EBUSY;
+ 
+-		/* Needed for OPT A tests */
+-		power = musb_readb(mbase, MUSB_POWER);
+-		while (power & MUSB_POWER_SUSPENDM) {
++		if (!(power & MUSB_POWER_SUSPENDM)) {
++			power |= MUSB_POWER_SUSPENDM;
++			musb_writeb(mbase, MUSB_POWER, power);
++
++			/* Needed for OPT A tests */
+ 			power = musb_readb(mbase, MUSB_POWER);
+-			if (retries-- < 1)
+-				break;
++			while (power & MUSB_POWER_SUSPENDM) {
++				power = musb_readb(mbase, MUSB_POWER);
++				if (retries-- < 1)
++					break;
++			}
+ 		}
+ 
+ 		musb_dbg(musb, "Root port suspended, power %02x", power);
+@@ -111,6 +115,7 @@ void musb_port_suspend(struct musb *musb, bool do_suspend)
+ 		schedule_delayed_work(&musb->finish_resume_work,
+ 				      msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 	}
++	return 0;
+ }
+ 
+ void musb_port_reset(struct musb *musb, bool do_reset)
+diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
+index 2719f5d382f7..7b01648c85ca 100644
+--- a/drivers/usb/typec/tps6598x.c
++++ b/drivers/usb/typec/tps6598x.c
+@@ -73,6 +73,7 @@ struct tps6598x {
+ 	struct device *dev;
+ 	struct regmap *regmap;
+ 	struct mutex lock; /* device lock */
++	u8 i2c_protocol:1;
+ 
+ 	struct typec_port *port;
+ 	struct typec_partner *partner;
+@@ -80,19 +81,39 @@ struct tps6598x {
+ 	struct typec_capability typec_cap;
+ };
+ 
++static int
++tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)
++{
++	u8 data[len + 1];
++	int ret;
++
++	if (!tps->i2c_protocol)
++		return regmap_raw_read(tps->regmap, reg, val, len);
++
++	ret = regmap_raw_read(tps->regmap, reg, data, sizeof(data));
++	if (ret)
++		return ret;
++
++	if (data[0] < len)
++		return -EIO;
++
++	memcpy(val, &data[1], len);
++	return 0;
++}
++
+ static inline int tps6598x_read16(struct tps6598x *tps, u8 reg, u16 *val)
+ {
+-	return regmap_raw_read(tps->regmap, reg, val, sizeof(u16));
++	return tps6598x_block_read(tps, reg, val, sizeof(u16));
+ }
+ 
+ static inline int tps6598x_read32(struct tps6598x *tps, u8 reg, u32 *val)
+ {
+-	return regmap_raw_read(tps->regmap, reg, val, sizeof(u32));
++	return tps6598x_block_read(tps, reg, val, sizeof(u32));
+ }
+ 
+ static inline int tps6598x_read64(struct tps6598x *tps, u8 reg, u64 *val)
+ {
+-	return regmap_raw_read(tps->regmap, reg, val, sizeof(u64));
++	return tps6598x_block_read(tps, reg, val, sizeof(u64));
+ }
+ 
+ static inline int tps6598x_write16(struct tps6598x *tps, u8 reg, u16 val)
+@@ -121,8 +142,8 @@ static int tps6598x_read_partner_identity(struct tps6598x *tps)
+ 	struct tps6598x_rx_identity_reg id;
+ 	int ret;
+ 
+-	ret = regmap_raw_read(tps->regmap, TPS_REG_RX_IDENTITY_SOP,
+-			      &id, sizeof(id));
++	ret = tps6598x_block_read(tps, TPS_REG_RX_IDENTITY_SOP,
++				  &id, sizeof(id));
+ 	if (ret)
+ 		return ret;
+ 
+@@ -223,13 +244,13 @@ static int tps6598x_exec_cmd(struct tps6598x *tps, const char *cmd,
+ 	} while (val);
+ 
+ 	if (out_len) {
+-		ret = regmap_raw_read(tps->regmap, TPS_REG_DATA1,
+-				      out_data, out_len);
++		ret = tps6598x_block_read(tps, TPS_REG_DATA1,
++					  out_data, out_len);
+ 		if (ret)
+ 			return ret;
+ 		val = out_data[0];
+ 	} else {
+-		ret = regmap_read(tps->regmap, TPS_REG_DATA1, &val);
++		ret = tps6598x_block_read(tps, TPS_REG_DATA1, &val, sizeof(u8));
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -384,6 +405,16 @@ static int tps6598x_probe(struct i2c_client *client)
+ 	if (!vid)
+ 		return -ENODEV;
+ 
++	/*
++	 * Checking can the adapter handle SMBus protocol. If it can not, the
++	 * driver needs to take care of block reads separately.
++	 *
++	 * FIXME: Testing with I2C_FUNC_I2C. regmap-i2c uses I2C protocol
++	 * unconditionally if the adapter has I2C_FUNC_I2C set.
++	 */
++	if (i2c_check_functionality(client->adapter, I2C_FUNC_I2C))
++		tps->i2c_protocol = true;
++
+ 	ret = tps6598x_read32(tps, TPS_REG_STATUS, &status);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/usb/typec/ucsi/Makefile b/drivers/usb/typec/ucsi/Makefile
+index b57891c1fd31..7afbea512207 100644
+--- a/drivers/usb/typec/ucsi/Makefile
++++ b/drivers/usb/typec/ucsi/Makefile
+@@ -5,6 +5,6 @@ obj-$(CONFIG_TYPEC_UCSI)	+= typec_ucsi.o
+ 
+ typec_ucsi-y			:= ucsi.o
+ 
+-typec_ucsi-$(CONFIG_FTRACE)	+= trace.o
++typec_ucsi-$(CONFIG_TRACING)	+= trace.o
+ 
+ obj-$(CONFIG_UCSI_ACPI)		+= ucsi_acpi.o
+diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
+index 81a84b3c1c50..728870c9e6b4 100644
+--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
++++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
+@@ -403,7 +403,7 @@ static int xenbus_command_reply(struct xenbus_file_priv *u,
+ {
+ 	struct {
+ 		struct xsd_sockmsg hdr;
+-		const char body[16];
++		char body[16];
+ 	} msg;
+ 	int rc;
+ 
+@@ -412,6 +412,7 @@ static int xenbus_command_reply(struct xenbus_file_priv *u,
+ 	msg.hdr.len = strlen(reply) + 1;
+ 	if (msg.hdr.len > sizeof(msg.body))
+ 		return -E2BIG;
++	memcpy(&msg.body, reply, msg.hdr.len);
+ 
+ 	mutex_lock(&u->reply_mutex);
+ 	rc = queue_reply(&u->read_buffers, &msg, sizeof(msg.hdr) + msg.hdr.len);
+diff --git a/fs/afs/addr_list.c b/fs/afs/addr_list.c
+index fd9f28b8a933..88391c7a8462 100644
+--- a/fs/afs/addr_list.c
++++ b/fs/afs/addr_list.c
+@@ -121,7 +121,7 @@ struct afs_addr_list *afs_parse_text_addrs(const char *text, size_t len,
+ 	p = text;
+ 	do {
+ 		struct sockaddr_rxrpc *srx = &alist->addrs[alist->nr_addrs];
+-		char tdelim = delim;
++		const char *q, *stop;
+ 
+ 		if (*p == delim) {
+ 			p++;
+@@ -130,28 +130,33 @@ struct afs_addr_list *afs_parse_text_addrs(const char *text, size_t len,
+ 
+ 		if (*p == '[') {
+ 			p++;
+-			tdelim = ']';
++			q = memchr(p, ']', end - p);
++		} else {
++			for (q = p; q < end; q++)
++				if (*q == '+' || *q == delim)
++					break;
+ 		}
+ 
+-		if (in4_pton(p, end - p,
++		if (in4_pton(p, q - p,
+ 			     (u8 *)&srx->transport.sin6.sin6_addr.s6_addr32[3],
+-			     tdelim, &p)) {
++			     -1, &stop)) {
+ 			srx->transport.sin6.sin6_addr.s6_addr32[0] = 0;
+ 			srx->transport.sin6.sin6_addr.s6_addr32[1] = 0;
+ 			srx->transport.sin6.sin6_addr.s6_addr32[2] = htonl(0xffff);
+-		} else if (in6_pton(p, end - p,
++		} else if (in6_pton(p, q - p,
+ 				    srx->transport.sin6.sin6_addr.s6_addr,
+-				    tdelim, &p)) {
++				    -1, &stop)) {
+ 			/* Nothing to do */
+ 		} else {
+ 			goto bad_address;
+ 		}
+ 
+-		if (tdelim == ']') {
+-			if (p == end || *p != ']')
+-				goto bad_address;
++		if (stop != q)
++			goto bad_address;
++
++		p = q;
++		if (q < end && *q == ']')
+ 			p++;
+-		}
+ 
+ 		if (p < end) {
+ 			if (*p == '+') {
+diff --git a/fs/afs/callback.c b/fs/afs/callback.c
+index f4291b576054..96125c9e3e17 100644
+--- a/fs/afs/callback.c
++++ b/fs/afs/callback.c
+@@ -23,36 +23,55 @@
+ /*
+  * Set up an interest-in-callbacks record for a volume on a server and
+  * register it with the server.
+- * - Called with volume->server_sem held.
++ * - Called with vnode->io_lock held.
+  */
+ int afs_register_server_cb_interest(struct afs_vnode *vnode,
+-				    struct afs_server_entry *entry)
++				    struct afs_server_list *slist,
++				    unsigned int index)
+ {
+-	struct afs_cb_interest *cbi = entry->cb_interest, *vcbi, *new, *x;
++	struct afs_server_entry *entry = &slist->servers[index];
++	struct afs_cb_interest *cbi, *vcbi, *new, *old;
+ 	struct afs_server *server = entry->server;
+ 
+ again:
++	if (vnode->cb_interest &&
++	    likely(vnode->cb_interest == entry->cb_interest))
++		return 0;
++
++	read_lock(&slist->lock);
++	cbi = afs_get_cb_interest(entry->cb_interest);
++	read_unlock(&slist->lock);
++
+ 	vcbi = vnode->cb_interest;
+ 	if (vcbi) {
+-		if (vcbi == cbi)
++		if (vcbi == cbi) {
++			afs_put_cb_interest(afs_v2net(vnode), cbi);
+ 			return 0;
++		}
+ 
++		/* Use a new interest in the server list for the same server
++		 * rather than an old one that's still attached to a vnode.
++		 */
+ 		if (cbi && vcbi->server == cbi->server) {
+ 			write_seqlock(&vnode->cb_lock);
+-			vnode->cb_interest = afs_get_cb_interest(cbi);
++			old = vnode->cb_interest;
++			vnode->cb_interest = cbi;
+ 			write_sequnlock(&vnode->cb_lock);
+-			afs_put_cb_interest(afs_v2net(vnode), cbi);
++			afs_put_cb_interest(afs_v2net(vnode), old);
+ 			return 0;
+ 		}
+ 
++		/* Re-use the one attached to the vnode. */
+ 		if (!cbi && vcbi->server == server) {
+-			afs_get_cb_interest(vcbi);
+-			x = cmpxchg(&entry->cb_interest, cbi, vcbi);
+-			if (x != cbi) {
+-				cbi = x;
+-				afs_put_cb_interest(afs_v2net(vnode), vcbi);
++			write_lock(&slist->lock);
++			if (entry->cb_interest) {
++				write_unlock(&slist->lock);
++				afs_put_cb_interest(afs_v2net(vnode), cbi);
+ 				goto again;
+ 			}
++
++			entry->cb_interest = cbi;
++			write_unlock(&slist->lock);
+ 			return 0;
+ 		}
+ 	}
+@@ -72,13 +91,16 @@ int afs_register_server_cb_interest(struct afs_vnode *vnode,
+ 		list_add_tail(&new->cb_link, &server->cb_interests);
+ 		write_unlock(&server->cb_break_lock);
+ 
+-		x = cmpxchg(&entry->cb_interest, cbi, new);
+-		if (x == cbi) {
++		write_lock(&slist->lock);
++		if (!entry->cb_interest) {
++			entry->cb_interest = afs_get_cb_interest(new);
+ 			cbi = new;
++			new = NULL;
+ 		} else {
+-			cbi = x;
+-			afs_put_cb_interest(afs_v2net(vnode), new);
++			cbi = afs_get_cb_interest(entry->cb_interest);
+ 		}
++		write_unlock(&slist->lock);
++		afs_put_cb_interest(afs_v2net(vnode), new);
+ 	}
+ 
+ 	ASSERT(cbi);
+@@ -88,11 +110,13 @@ int afs_register_server_cb_interest(struct afs_vnode *vnode,
+ 	 */
+ 	write_seqlock(&vnode->cb_lock);
+ 
+-	vnode->cb_interest = afs_get_cb_interest(cbi);
++	old = vnode->cb_interest;
++	vnode->cb_interest = cbi;
+ 	vnode->cb_s_break = cbi->server->cb_s_break;
+ 	clear_bit(AFS_VNODE_CB_PROMISED, &vnode->flags);
+ 
+ 	write_sequnlock(&vnode->cb_lock);
++	afs_put_cb_interest(afs_v2net(vnode), old);
+ 	return 0;
+ }
+ 
+diff --git a/fs/afs/cmservice.c b/fs/afs/cmservice.c
+index 41e277f57b20..c0b53bfef490 100644
+--- a/fs/afs/cmservice.c
++++ b/fs/afs/cmservice.c
+@@ -341,7 +341,6 @@ static int afs_deliver_cb_init_call_back_state(struct afs_call *call)
+  */
+ static int afs_deliver_cb_init_call_back_state3(struct afs_call *call)
+ {
+-	struct sockaddr_rxrpc srx;
+ 	struct afs_server *server;
+ 	struct afs_uuid *r;
+ 	unsigned loop;
+@@ -398,8 +397,9 @@ static int afs_deliver_cb_init_call_back_state3(struct afs_call *call)
+ 
+ 	/* we'll need the file server record as that tells us which set of
+ 	 * vnodes to operate upon */
+-	rxrpc_kernel_get_peer(call->net->socket, call->rxcall, &srx);
+-	server = afs_find_server(call->net, &srx);
++	rcu_read_lock();
++	server = afs_find_server_by_uuid(call->net, call->request);
++	rcu_read_unlock();
+ 	if (!server)
+ 		return -ENOTCONN;
+ 	call->cm_server = server;
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index f38d6a561a84..0aac3b5eb2ac 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -399,6 +399,7 @@ struct afs_server_list {
+ 	unsigned short		index;		/* Server currently in use */
+ 	unsigned short		vnovol_mask;	/* Servers to be skipped due to VNOVOL */
+ 	unsigned int		seq;		/* Set to ->servers_seq when installed */
++	rwlock_t		lock;
+ 	struct afs_server_entry	servers[];
+ };
+ 
+@@ -605,13 +606,15 @@ extern void afs_init_callback_state(struct afs_server *);
+ extern void afs_break_callback(struct afs_vnode *);
+ extern void afs_break_callbacks(struct afs_server *, size_t,struct afs_callback[]);
+ 
+-extern int afs_register_server_cb_interest(struct afs_vnode *, struct afs_server_entry *);
++extern int afs_register_server_cb_interest(struct afs_vnode *,
++					   struct afs_server_list *, unsigned int);
+ extern void afs_put_cb_interest(struct afs_net *, struct afs_cb_interest *);
+ extern void afs_clear_callback_interests(struct afs_net *, struct afs_server_list *);
+ 
+ static inline struct afs_cb_interest *afs_get_cb_interest(struct afs_cb_interest *cbi)
+ {
+-	refcount_inc(&cbi->usage);
++	if (cbi)
++		refcount_inc(&cbi->usage);
+ 	return cbi;
+ }
+ 
+diff --git a/fs/afs/rotate.c b/fs/afs/rotate.c
+index ad1328d85526..9caf7410aff3 100644
+--- a/fs/afs/rotate.c
++++ b/fs/afs/rotate.c
+@@ -179,7 +179,7 @@ bool afs_select_fileserver(struct afs_fs_cursor *fc)
+ 			 */
+ 			if (fc->flags & AFS_FS_CURSOR_VNOVOL) {
+ 				fc->ac.error = -EREMOTEIO;
+-				goto failed;
++				goto next_server;
+ 			}
+ 
+ 			write_lock(&vnode->volume->servers_lock);
+@@ -201,7 +201,7 @@ bool afs_select_fileserver(struct afs_fs_cursor *fc)
+ 			 */
+ 			if (vnode->volume->servers == fc->server_list) {
+ 				fc->ac.error = -EREMOTEIO;
+-				goto failed;
++				goto next_server;
+ 			}
+ 
+ 			/* Try again */
+@@ -350,8 +350,8 @@ bool afs_select_fileserver(struct afs_fs_cursor *fc)
+ 	 * break request before we've finished decoding the reply and
+ 	 * installing the vnode.
+ 	 */
+-	fc->ac.error = afs_register_server_cb_interest(
+-		vnode, &fc->server_list->servers[fc->index]);
++	fc->ac.error = afs_register_server_cb_interest(vnode, fc->server_list,
++						       fc->index);
+ 	if (fc->ac.error < 0)
+ 		goto failed;
+ 
+@@ -369,8 +369,16 @@ bool afs_select_fileserver(struct afs_fs_cursor *fc)
+ 	if (!test_bit(AFS_SERVER_FL_PROBED, &server->flags)) {
+ 		fc->ac.alist = afs_get_addrlist(alist);
+ 
+-		if (!afs_probe_fileserver(fc))
+-			goto failed;
++		if (!afs_probe_fileserver(fc)) {
++			switch (fc->ac.error) {
++			case -ENOMEM:
++			case -ERESTARTSYS:
++			case -EINTR:
++				goto failed;
++			default:
++				goto next_server;
++			}
++		}
+ 	}
+ 
+ 	if (!fc->ac.alist)
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index e1126659f043..e294a420d9db 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -41,6 +41,7 @@ int afs_open_socket(struct afs_net *net)
+ {
+ 	struct sockaddr_rxrpc srx;
+ 	struct socket *socket;
++	unsigned int min_level;
+ 	int ret;
+ 
+ 	_enter("");
+@@ -60,6 +61,12 @@ int afs_open_socket(struct afs_net *net)
+ 	srx.transport.sin6.sin6_family	= AF_INET6;
+ 	srx.transport.sin6.sin6_port	= htons(AFS_CM_PORT);
+ 
++	min_level = RXRPC_SECURITY_ENCRYPT;
++	ret = kernel_setsockopt(socket, SOL_RXRPC, RXRPC_MIN_SECURITY_LEVEL,
++				(void *)&min_level, sizeof(min_level));
++	if (ret < 0)
++		goto error_2;
++
+ 	ret = kernel_bind(socket, (struct sockaddr *) &srx, sizeof(srx));
+ 	if (ret == -EADDRINUSE) {
+ 		srx.transport.sin6.sin6_port = 0;
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index 1880f1b6a9f1..90f1ae7c3a1f 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -66,12 +66,6 @@ struct afs_server *afs_find_server(struct afs_net *net,
+ 							      sizeof(struct in6_addr));
+ 					if (diff == 0)
+ 						goto found;
+-					if (diff < 0) {
+-						// TODO: Sort the list
+-						//if (i == alist->nr_ipv4)
+-						//	goto not_found;
+-						break;
+-					}
+ 				}
+ 			}
+ 		} else {
+@@ -85,17 +79,10 @@ struct afs_server *afs_find_server(struct afs_net *net,
+ 							(u32)b->sin6_addr.s6_addr32[3]);
+ 					if (diff == 0)
+ 						goto found;
+-					if (diff < 0) {
+-						// TODO: Sort the list
+-						//if (i == 0)
+-						//	goto not_found;
+-						break;
+-					}
+ 				}
+ 			}
+ 		}
+ 
+-	//not_found:
+ 		server = NULL;
+ 	found:
+ 		if (server && !atomic_inc_not_zero(&server->usage))
+@@ -426,8 +413,15 @@ static void afs_gc_servers(struct afs_net *net, struct afs_server *gc_list)
+ 		}
+ 		write_sequnlock(&net->fs_lock);
+ 
+-		if (deleted)
++		if (deleted) {
++			write_seqlock(&net->fs_addr_lock);
++			if (!hlist_unhashed(&server->addr4_link))
++				hlist_del_rcu(&server->addr4_link);
++			if (!hlist_unhashed(&server->addr6_link))
++				hlist_del_rcu(&server->addr6_link);
++			write_sequnlock(&net->fs_addr_lock);
+ 			afs_destroy_server(net, server);
++		}
+ 	}
+ }
+ 
+diff --git a/fs/afs/server_list.c b/fs/afs/server_list.c
+index 0f8dc4c8f07c..8a5760aa5832 100644
+--- a/fs/afs/server_list.c
++++ b/fs/afs/server_list.c
+@@ -49,6 +49,7 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell,
+ 		goto error;
+ 
+ 	refcount_set(&slist->usage, 1);
++	rwlock_init(&slist->lock);
+ 
+ 	/* Make sure a records exists for each server in the list. */
+ 	for (i = 0; i < vldb->nr_servers; i++) {
+@@ -64,9 +65,11 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell,
+ 			goto error_2;
+ 		}
+ 
+-		/* Insertion-sort by server pointer */
++		/* Insertion-sort by UUID */
+ 		for (j = 0; j < slist->nr_servers; j++)
+-			if (slist->servers[j].server >= server)
++			if (memcmp(&slist->servers[j].server->uuid,
++				   &server->uuid,
++				   sizeof(server->uuid)) >= 0)
+ 				break;
+ 		if (j < slist->nr_servers) {
+ 			if (slist->servers[j].server == server) {
+diff --git a/fs/cifs/Kconfig b/fs/cifs/Kconfig
+index e901ef6a4813..bda6175c2cbe 100644
+--- a/fs/cifs/Kconfig
++++ b/fs/cifs/Kconfig
+@@ -198,7 +198,7 @@ config CIFS_SMB311
+ 
+ config CIFS_SMB_DIRECT
+ 	bool "SMB Direct support (Experimental)"
+-	depends on CIFS=m && INFINIBAND || CIFS=y && INFINIBAND=y
++	depends on CIFS=m && INFINIBAND && INFINIBAND_ADDR_TRANS || CIFS=y && INFINIBAND=y && INFINIBAND_ADDR_TRANS=y
+ 	help
+ 	  Enables SMB Direct experimental support for SMB 3.0, 3.02 and 3.1.1.
+ 	  SMB Direct allows transferring SMB packets over RDMA. If unsure,
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 1c1940d90c96..097598543403 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -589,9 +589,15 @@ smb2_query_eas(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
+ 
++	/*
++	 * If ea_name is NULL (listxattr) and there are no EAs, return 0 as it's
++	 * not an error. Otherwise, the specified ea_name was not found.
++	 */
+ 	if (!rc)
+ 		rc = move_smb2_ea_to_cifs(ea_data, buf_size, smb2_data,
+ 					  SMB2_MAX_EA_BUF, ea_name);
++	else if (!ea_name && rc == -ENODATA)
++		rc = 0;
+ 
+ 	kfree(smb2_data);
+ 	return rc;
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 8ae6a089489c..93d3f4a14b32 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -621,8 +621,8 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
+ 
+ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ {
+-	int rc = 0;
+-	struct validate_negotiate_info_req vneg_inbuf;
++	int rc;
++	struct validate_negotiate_info_req *pneg_inbuf;
+ 	struct validate_negotiate_info_rsp *pneg_rsp = NULL;
+ 	u32 rsplen;
+ 	u32 inbuflen; /* max of 4 dialects */
+@@ -656,63 +656,69 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ 	if (tcon->ses->session_flags & SMB2_SESSION_FLAG_IS_NULL)
+ 		cifs_dbg(VFS, "Unexpected null user (anonymous) auth flag sent by server\n");
+ 
+-	vneg_inbuf.Capabilities =
++	pneg_inbuf = kmalloc(sizeof(*pneg_inbuf), GFP_NOFS);
++	if (!pneg_inbuf)
++		return -ENOMEM;
++
++	pneg_inbuf->Capabilities =
+ 			cpu_to_le32(tcon->ses->server->vals->req_capabilities);
+-	memcpy(vneg_inbuf.Guid, tcon->ses->server->client_guid,
++	memcpy(pneg_inbuf->Guid, tcon->ses->server->client_guid,
+ 					SMB2_CLIENT_GUID_SIZE);
+ 
+ 	if (tcon->ses->sign)
+-		vneg_inbuf.SecurityMode =
++		pneg_inbuf->SecurityMode =
+ 			cpu_to_le16(SMB2_NEGOTIATE_SIGNING_REQUIRED);
+ 	else if (global_secflags & CIFSSEC_MAY_SIGN)
+-		vneg_inbuf.SecurityMode =
++		pneg_inbuf->SecurityMode =
+ 			cpu_to_le16(SMB2_NEGOTIATE_SIGNING_ENABLED);
+ 	else
+-		vneg_inbuf.SecurityMode = 0;
++		pneg_inbuf->SecurityMode = 0;
+ 
+ 
+ 	if (strcmp(tcon->ses->server->vals->version_string,
+ 		SMB3ANY_VERSION_STRING) == 0) {
+-		vneg_inbuf.Dialects[0] = cpu_to_le16(SMB30_PROT_ID);
+-		vneg_inbuf.Dialects[1] = cpu_to_le16(SMB302_PROT_ID);
+-		vneg_inbuf.DialectCount = cpu_to_le16(2);
++		pneg_inbuf->Dialects[0] = cpu_to_le16(SMB30_PROT_ID);
++		pneg_inbuf->Dialects[1] = cpu_to_le16(SMB302_PROT_ID);
++		pneg_inbuf->DialectCount = cpu_to_le16(2);
+ 		/* structure is big enough for 3 dialects, sending only 2 */
+-		inbuflen = sizeof(struct validate_negotiate_info_req) - 2;
++		inbuflen = sizeof(*pneg_inbuf) -
++				sizeof(pneg_inbuf->Dialects[0]);
+ 	} else if (strcmp(tcon->ses->server->vals->version_string,
+ 		SMBDEFAULT_VERSION_STRING) == 0) {
+-		vneg_inbuf.Dialects[0] = cpu_to_le16(SMB21_PROT_ID);
+-		vneg_inbuf.Dialects[1] = cpu_to_le16(SMB30_PROT_ID);
+-		vneg_inbuf.Dialects[2] = cpu_to_le16(SMB302_PROT_ID);
+-		vneg_inbuf.DialectCount = cpu_to_le16(3);
++		pneg_inbuf->Dialects[0] = cpu_to_le16(SMB21_PROT_ID);
++		pneg_inbuf->Dialects[1] = cpu_to_le16(SMB30_PROT_ID);
++		pneg_inbuf->Dialects[2] = cpu_to_le16(SMB302_PROT_ID);
++		pneg_inbuf->DialectCount = cpu_to_le16(3);
+ 		/* structure is big enough for 3 dialects */
+-		inbuflen = sizeof(struct validate_negotiate_info_req);
++		inbuflen = sizeof(*pneg_inbuf);
+ 	} else {
+ 		/* otherwise specific dialect was requested */
+-		vneg_inbuf.Dialects[0] =
++		pneg_inbuf->Dialects[0] =
+ 			cpu_to_le16(tcon->ses->server->vals->protocol_id);
+-		vneg_inbuf.DialectCount = cpu_to_le16(1);
++		pneg_inbuf->DialectCount = cpu_to_le16(1);
+ 		/* structure is big enough for 3 dialects, sending only 1 */
+-		inbuflen = sizeof(struct validate_negotiate_info_req) - 4;
++		inbuflen = sizeof(*pneg_inbuf) -
++				sizeof(pneg_inbuf->Dialects[0]) * 2;
+ 	}
+ 
+ 	rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+ 		FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,
+-		(char *)&vneg_inbuf, sizeof(struct validate_negotiate_info_req),
+-		(char **)&pneg_rsp, &rsplen);
++		(char *)pneg_inbuf, inbuflen, (char **)&pneg_rsp, &rsplen);
+ 
+ 	if (rc != 0) {
+ 		cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc);
+-		return -EIO;
++		rc = -EIO;
++		goto out_free_inbuf;
+ 	}
+ 
+-	if (rsplen != sizeof(struct validate_negotiate_info_rsp)) {
++	rc = -EIO;
++	if (rsplen != sizeof(*pneg_rsp)) {
+ 		cifs_dbg(VFS, "invalid protocol negotiate response size: %d\n",
+ 			 rsplen);
+ 
+ 		/* relax check since Mac returns max bufsize allowed on ioctl */
+-		if ((rsplen > CIFSMaxBufSize)
+-		     || (rsplen < sizeof(struct validate_negotiate_info_rsp)))
+-			goto err_rsp_free;
++		if (rsplen > CIFSMaxBufSize || rsplen < sizeof(*pneg_rsp))
++			goto out_free_rsp;
+ 	}
+ 
+ 	/* check validate negotiate info response matches what we got earlier */
+@@ -729,15 +735,17 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ 		goto vneg_out;
+ 
+ 	/* validate negotiate successful */
++	rc = 0;
+ 	cifs_dbg(FYI, "validate negotiate info successful\n");
+-	kfree(pneg_rsp);
+-	return 0;
++	goto out_free_rsp;
+ 
+ vneg_out:
+ 	cifs_dbg(VFS, "protocol revalidation - security settings mismatch\n");
+-err_rsp_free:
++out_free_rsp:
+ 	kfree(pneg_rsp);
+-	return -EIO;
++out_free_inbuf:
++	kfree(pneg_inbuf);
++	return rc;
+ }
+ 
+ enum securityEnum
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 1b5cd3b8617c..a56abb46613e 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -833,8 +833,11 @@ SendReceive2(const unsigned int xid, struct cifs_ses *ses,
+ 	if (n_vec + 1 > CIFS_MAX_IOV_SIZE) {
+ 		new_iov = kmalloc(sizeof(struct kvec) * (n_vec + 1),
+ 				  GFP_KERNEL);
+-		if (!new_iov)
++		if (!new_iov) {
++			/* otherwise cifs_send_recv below sets resp_buf_type */
++			*resp_buf_type = CIFS_NO_BUFFER;
+ 			return -ENOMEM;
++		}
+ 	} else
+ 		new_iov = s_iov;
+ 
+diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c
+index 846ca150d52e..4dd842f72846 100644
+--- a/fs/ecryptfs/crypto.c
++++ b/fs/ecryptfs/crypto.c
+@@ -1997,6 +1997,16 @@ int ecryptfs_encrypt_and_encode_filename(
+ 	return rc;
+ }
+ 
++static bool is_dot_dotdot(const char *name, size_t name_size)
++{
++	if (name_size == 1 && name[0] == '.')
++		return true;
++	else if (name_size == 2 && name[0] == '.' && name[1] == '.')
++		return true;
++
++	return false;
++}
++
+ /**
+  * ecryptfs_decode_and_decrypt_filename - converts the encoded cipher text name to decoded plaintext
+  * @plaintext_name: The plaintext name
+@@ -2021,13 +2031,21 @@ int ecryptfs_decode_and_decrypt_filename(char **plaintext_name,
+ 	size_t packet_size;
+ 	int rc = 0;
+ 
+-	if ((mount_crypt_stat->flags & ECRYPTFS_GLOBAL_ENCRYPT_FILENAMES)
+-	    && !(mount_crypt_stat->flags & ECRYPTFS_ENCRYPTED_VIEW_ENABLED)
+-	    && (name_size > ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX_SIZE)
+-	    && (strncmp(name, ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX,
+-			ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX_SIZE) == 0)) {
+-		const char *orig_name = name;
+-		size_t orig_name_size = name_size;
++	if ((mount_crypt_stat->flags & ECRYPTFS_GLOBAL_ENCRYPT_FILENAMES) &&
++	    !(mount_crypt_stat->flags & ECRYPTFS_ENCRYPTED_VIEW_ENABLED)) {
++		if (is_dot_dotdot(name, name_size)) {
++			rc = ecryptfs_copy_filename(plaintext_name,
++						    plaintext_name_size,
++						    name, name_size);
++			goto out;
++		}
++
++		if (name_size <= ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX_SIZE ||
++		    strncmp(name, ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX,
++			    ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX_SIZE)) {
++			rc = -EINVAL;
++			goto out;
++		}
+ 
+ 		name += ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX_SIZE;
+ 		name_size -= ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX_SIZE;
+@@ -2047,12 +2065,9 @@ int ecryptfs_decode_and_decrypt_filename(char **plaintext_name,
+ 						  decoded_name,
+ 						  decoded_name_size);
+ 		if (rc) {
+-			printk(KERN_INFO "%s: Could not parse tag 70 packet "
+-			       "from filename; copying through filename "
+-			       "as-is\n", __func__);
+-			rc = ecryptfs_copy_filename(plaintext_name,
+-						    plaintext_name_size,
+-						    orig_name, orig_name_size);
++			ecryptfs_printk(KERN_DEBUG,
++					"%s: Could not parse tag 70 packet from filename\n",
++					__func__);
+ 			goto out_free;
+ 		}
+ 	} else {
+diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c
+index c74ed3ca3372..b76a9853325e 100644
+--- a/fs/ecryptfs/file.c
++++ b/fs/ecryptfs/file.c
+@@ -82,17 +82,28 @@ ecryptfs_filldir(struct dir_context *ctx, const char *lower_name,
+ 						  buf->sb, lower_name,
+ 						  lower_namelen);
+ 	if (rc) {
+-		printk(KERN_ERR "%s: Error attempting to decode and decrypt "
+-		       "filename [%s]; rc = [%d]\n", __func__, lower_name,
+-		       rc);
+-		goto out;
++		if (rc != -EINVAL) {
++			ecryptfs_printk(KERN_DEBUG,
++					"%s: Error attempting to decode and decrypt filename [%s]; rc = [%d]\n",
++					__func__, lower_name, rc);
++			return rc;
++		}
++
++		/* Mask -EINVAL errors as these are most likely due a plaintext
++		 * filename present in the lower filesystem despite filename
++		 * encryption being enabled. One unavoidable example would be
++		 * the "lost+found" dentry in the root directory of an Ext4
++		 * filesystem.
++		 */
++		return 0;
+ 	}
++
+ 	buf->caller->pos = buf->ctx.pos;
+ 	rc = !dir_emit(buf->caller, name, name_size, ino, d_type);
+ 	kfree(name);
+ 	if (!rc)
+ 		buf->entries_written++;
+-out:
++
+ 	return rc;
+ }
+ 
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index bc258a4402f6..ec3fba7d492f 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -394,7 +394,10 @@ static int parse_options(char *options, struct iso9660_options *popt)
+ 			break;
+ #ifdef CONFIG_JOLIET
+ 		case Opt_iocharset:
++			kfree(popt->iocharset);
+ 			popt->iocharset = match_strdup(&args[0]);
++			if (!popt->iocharset)
++				return 0;
+ 			break;
+ #endif
+ 		case Opt_map_a:
+diff --git a/fs/namespace.c b/fs/namespace.c
+index c3ed9dc78655..cb20c4ee97fc 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2810,7 +2810,7 @@ long do_mount(const char *dev_name, const char __user *dir_name,
+ 		mnt_flags |= MNT_NODIRATIME;
+ 	if (flags & MS_STRICTATIME)
+ 		mnt_flags &= ~(MNT_RELATIME | MNT_NOATIME);
+-	if (flags & SB_RDONLY)
++	if (flags & MS_RDONLY)
+ 		mnt_flags |= MNT_READONLY;
+ 
+ 	/* The default atime for remount is preservation */
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index 219b269c737e..613ec7e5a465 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -192,8 +192,9 @@ static int send_to_group(struct inode *to_tell,
+ 			 struct fsnotify_iter_info *iter_info)
+ {
+ 	struct fsnotify_group *group = NULL;
+-	__u32 inode_test_mask = 0;
+-	__u32 vfsmount_test_mask = 0;
++	__u32 test_mask = (mask & ~FS_EVENT_ON_CHILD);
++	__u32 marks_mask = 0;
++	__u32 marks_ignored_mask = 0;
+ 
+ 	if (unlikely(!inode_mark && !vfsmount_mark)) {
+ 		BUG();
+@@ -213,29 +214,25 @@ static int send_to_group(struct inode *to_tell,
+ 	/* does the inode mark tell us to do something? */
+ 	if (inode_mark) {
+ 		group = inode_mark->group;
+-		inode_test_mask = (mask & ~FS_EVENT_ON_CHILD);
+-		inode_test_mask &= inode_mark->mask;
+-		inode_test_mask &= ~inode_mark->ignored_mask;
++		marks_mask |= inode_mark->mask;
++		marks_ignored_mask |= inode_mark->ignored_mask;
+ 	}
+ 
+ 	/* does the vfsmount_mark tell us to do something? */
+ 	if (vfsmount_mark) {
+-		vfsmount_test_mask = (mask & ~FS_EVENT_ON_CHILD);
+ 		group = vfsmount_mark->group;
+-		vfsmount_test_mask &= vfsmount_mark->mask;
+-		vfsmount_test_mask &= ~vfsmount_mark->ignored_mask;
+-		if (inode_mark)
+-			vfsmount_test_mask &= ~inode_mark->ignored_mask;
++		marks_mask |= vfsmount_mark->mask;
++		marks_ignored_mask |= vfsmount_mark->ignored_mask;
+ 	}
+ 
+ 	pr_debug("%s: group=%p to_tell=%p mask=%x inode_mark=%p"
+-		 " inode_test_mask=%x vfsmount_mark=%p vfsmount_test_mask=%x"
++		 " vfsmount_mark=%p marks_mask=%x marks_ignored_mask=%x"
+ 		 " data=%p data_is=%d cookie=%d\n",
+-		 __func__, group, to_tell, mask, inode_mark,
+-		 inode_test_mask, vfsmount_mark, vfsmount_test_mask, data,
++		 __func__, group, to_tell, mask, inode_mark, vfsmount_mark,
++		 marks_mask, marks_ignored_mask, data,
+ 		 data_is, cookie);
+ 
+-	if (!inode_test_mask && !vfsmount_test_mask)
++	if (!(test_mask & marks_mask & ~marks_ignored_mask))
+ 		return 0;
+ 
+ 	return group->ops->handle_event(group, to_tell, inode_mark,
+diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
+index ab156e35ec00..1b1283f07941 100644
+--- a/fs/ocfs2/refcounttree.c
++++ b/fs/ocfs2/refcounttree.c
+@@ -4250,10 +4250,11 @@ static int __ocfs2_reflink(struct dentry *old_dentry,
+ static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir,
+ 			 struct dentry *new_dentry, bool preserve)
+ {
+-	int error;
++	int error, had_lock;
+ 	struct inode *inode = d_inode(old_dentry);
+ 	struct buffer_head *old_bh = NULL;
+ 	struct inode *new_orphan_inode = NULL;
++	struct ocfs2_lock_holder oh;
+ 
+ 	if (!ocfs2_refcount_tree(OCFS2_SB(inode->i_sb)))
+ 		return -EOPNOTSUPP;
+@@ -4295,6 +4296,14 @@ static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir,
+ 		goto out;
+ 	}
+ 
++	had_lock = ocfs2_inode_lock_tracker(new_orphan_inode, NULL, 1,
++					    &oh);
++	if (had_lock < 0) {
++		error = had_lock;
++		mlog_errno(error);
++		goto out;
++	}
++
+ 	/* If the security isn't preserved, we need to re-initialize them. */
+ 	if (!preserve) {
+ 		error = ocfs2_init_security_and_acl(dir, new_orphan_inode,
+@@ -4302,14 +4311,15 @@ static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir,
+ 		if (error)
+ 			mlog_errno(error);
+ 	}
+-out:
+ 	if (!error) {
+ 		error = ocfs2_mv_orphaned_inode_to_new(dir, new_orphan_inode,
+ 						       new_dentry);
+ 		if (error)
+ 			mlog_errno(error);
+ 	}
++	ocfs2_inode_unlock_tracker(new_orphan_inode, 1, &oh, had_lock);
+ 
++out:
+ 	if (new_orphan_inode) {
+ 		/*
+ 		 * We need to open_unlock the inode no matter whether we
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index f034eccd8616..d256b24f7d28 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -1693,6 +1693,12 @@ void task_dump_owner(struct task_struct *task, umode_t mode,
+ 	kuid_t uid;
+ 	kgid_t gid;
+ 
++	if (unlikely(task->flags & PF_KTHREAD)) {
++		*ruid = GLOBAL_ROOT_UID;
++		*rgid = GLOBAL_ROOT_GID;
++		return;
++	}
++
+ 	/* Default to the tasks effective ownership */
+ 	rcu_read_lock();
+ 	cred = __task_cred(task);
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index d1e82761de81..e64ecb9f2720 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -209,25 +209,34 @@ kclist_add_private(unsigned long pfn, unsigned long nr_pages, void *arg)
+ {
+ 	struct list_head *head = (struct list_head *)arg;
+ 	struct kcore_list *ent;
++	struct page *p;
++
++	if (!pfn_valid(pfn))
++		return 1;
++
++	p = pfn_to_page(pfn);
++	if (!memmap_valid_within(pfn, p, page_zone(p)))
++		return 1;
+ 
+ 	ent = kmalloc(sizeof(*ent), GFP_KERNEL);
+ 	if (!ent)
+ 		return -ENOMEM;
+-	ent->addr = (unsigned long)__va((pfn << PAGE_SHIFT));
++	ent->addr = (unsigned long)page_to_virt(p);
+ 	ent->size = nr_pages << PAGE_SHIFT;
+ 
+-	/* Sanity check: Can happen in 32bit arch...maybe */
+-	if (ent->addr < (unsigned long) __va(0))
++	if (!virt_addr_valid(ent->addr))
+ 		goto free_out;
+ 
+ 	/* cut not-mapped area. ....from ppc-32 code. */
+ 	if (ULONG_MAX - ent->addr < ent->size)
+ 		ent->size = ULONG_MAX - ent->addr;
+ 
+-	/* cut when vmalloc() area is higher than direct-map area */
+-	if (VMALLOC_START > (unsigned long)__va(0)) {
+-		if (ent->addr > VMALLOC_START)
+-			goto free_out;
++	/*
++	 * We've already checked virt_addr_valid so we know this address
++	 * is a valid pointer, therefore we can check against it to determine
++	 * if we need to trim
++	 */
++	if (VMALLOC_START > ent->addr) {
+ 		if (VMALLOC_START - ent->addr < ent->size)
+ 			ent->size = VMALLOC_START - ent->addr;
+ 	}
+diff --git a/fs/proc/loadavg.c b/fs/proc/loadavg.c
+index a000d7547479..b572cc865b92 100644
+--- a/fs/proc/loadavg.c
++++ b/fs/proc/loadavg.c
+@@ -24,7 +24,7 @@ static int loadavg_proc_show(struct seq_file *m, void *v)
+ 		LOAD_INT(avnrun[1]), LOAD_FRAC(avnrun[1]),
+ 		LOAD_INT(avnrun[2]), LOAD_FRAC(avnrun[2]),
+ 		nr_running(), nr_threads,
+-		idr_get_cursor(&task_active_pid_ns(current)->idr));
++		idr_get_cursor(&task_active_pid_ns(current)->idr) - 1);
+ 	return 0;
+ }
+ 
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index ec6d2983a5cb..dd1b2aeb01e8 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1329,9 +1329,11 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+ 		else if (is_swap_pmd(pmd)) {
+ 			swp_entry_t entry = pmd_to_swp_entry(pmd);
++			unsigned long offset = swp_offset(entry);
+ 
++			offset += (addr & ~PMD_MASK) >> PAGE_SHIFT;
+ 			frame = swp_type(entry) |
+-				(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
++				(offset << MAX_SWAPFILES_SHIFT);
+ 			flags |= PM_SWAP;
+ 			if (pmd_swp_soft_dirty(pmd))
+ 				flags |= PM_SOFT_DIRTY;
+@@ -1351,6 +1353,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 				break;
+ 			if (pm->show_pfn && (flags & PM_PRESENT))
+ 				frame++;
++			else if (flags & PM_SWAP)
++				frame += (1 << MAX_SWAPFILES_SHIFT);
+ 		}
+ 		spin_unlock(ptl);
+ 		return err;
+diff --git a/include/linux/brcmphy.h b/include/linux/brcmphy.h
+index d3339dd48b1a..b324e01ccf2d 100644
+--- a/include/linux/brcmphy.h
++++ b/include/linux/brcmphy.h
+@@ -25,6 +25,7 @@
+ #define PHY_ID_BCM54612E		0x03625e60
+ #define PHY_ID_BCM54616S		0x03625d10
+ #define PHY_ID_BCM57780			0x03625d90
++#define PHY_ID_BCM89610			0x03625cd0
+ 
+ #define PHY_ID_BCM7250			0xae025280
+ #define PHY_ID_BCM7260			0xae025190
+diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
+index f711be6e8c44..f3ae6ae7e786 100644
+--- a/include/linux/clk-provider.h
++++ b/include/linux/clk-provider.h
+@@ -755,6 +755,9 @@ int __clk_mux_determine_rate(struct clk_hw *hw,
+ int __clk_determine_rate(struct clk_hw *core, struct clk_rate_request *req);
+ int __clk_mux_determine_rate_closest(struct clk_hw *hw,
+ 				     struct clk_rate_request *req);
++int clk_mux_determine_rate_flags(struct clk_hw *hw,
++				 struct clk_rate_request *req,
++				 unsigned long flags);
+ void clk_hw_reparent(struct clk_hw *hw, struct clk_hw *new_parent);
+ void clk_hw_set_rate_range(struct clk_hw *hw, unsigned long min_rate,
+ 			   unsigned long max_rate);
+diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
+index 2ec41a7eb54f..35e5954a5a15 100644
+--- a/include/linux/ethtool.h
++++ b/include/linux/ethtool.h
+@@ -310,6 +310,8 @@ bool ethtool_convert_link_mode_to_legacy_u32(u32 *legacy_u32,
+  *	fields should be ignored (use %__ETHTOOL_LINK_MODE_MASK_NBITS
+  *	instead of the latter), any change to them will be overwritten
+  *	by kernel. Returns a negative error code or zero.
++ * @get_fecparam: Get the network device Forward Error Correction parameters.
++ * @set_fecparam: Set the network device Forward Error Correction parameters.
+  *
+  * All operations are optional (i.e. the function pointer may be set
+  * to %NULL) and callers must take this into account.  Callers must
+diff --git a/include/linux/genhd.h b/include/linux/genhd.h
+index c826b0b5232a..6cb8a5789668 100644
+--- a/include/linux/genhd.h
++++ b/include/linux/genhd.h
+@@ -368,7 +368,9 @@ static inline void free_part_stats(struct hd_struct *part)
+ 	part_stat_add(cpu, gendiskp, field, -subnd)
+ 
+ void part_in_flight(struct request_queue *q, struct hd_struct *part,
+-			unsigned int inflight[2]);
++		    unsigned int inflight[2]);
++void part_in_flight_rw(struct request_queue *q, struct hd_struct *part,
++		       unsigned int inflight[2]);
+ void part_dec_in_flight(struct request_queue *q, struct hd_struct *part,
+ 			int rw);
+ void part_inc_in_flight(struct request_queue *q, struct hd_struct *part,
+diff --git a/include/linux/kthread.h b/include/linux/kthread.h
+index c1961761311d..2803264c512f 100644
+--- a/include/linux/kthread.h
++++ b/include/linux/kthread.h
+@@ -62,6 +62,7 @@ void *kthread_probe_data(struct task_struct *k);
+ int kthread_park(struct task_struct *k);
+ void kthread_unpark(struct task_struct *k);
+ void kthread_parkme(void);
++void kthread_park_complete(struct task_struct *k);
+ 
+ int kthreadd(void *unused);
+ extern struct task_struct *kthreadd_task;
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 6930c63126c7..6d6e79c59e68 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
+ 
+ #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
+ 
+-#ifdef CONFIG_S390
+-#define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
+-#elif defined(CONFIG_ARM64)
+-#define KVM_MAX_IRQ_ROUTES 4096
+-#else
+-#define KVM_MAX_IRQ_ROUTES 1024
+-#endif
++#define KVM_MAX_IRQ_ROUTES 4096 /* might need extension/rework in the future */
+ 
+ bool kvm_arch_can_set_irq_routing(struct kvm *kvm);
+ int kvm_set_irq_routing(struct kvm *kvm,
+diff --git a/include/linux/microchipphy.h b/include/linux/microchipphy.h
+index eb492d47f717..8f9c90379732 100644
+--- a/include/linux/microchipphy.h
++++ b/include/linux/microchipphy.h
+@@ -70,4 +70,12 @@
+ #define	LAN88XX_MMD3_CHIP_ID			(32877)
+ #define	LAN88XX_MMD3_CHIP_REV			(32878)
+ 
++/* DSP registers */
++#define PHY_ARDENNES_MMD_DEV_3_PHY_CFG		(0x806A)
++#define PHY_ARDENNES_MMD_DEV_3_PHY_CFG_ZD_DLY_EN_	(0x2000)
++#define LAN88XX_EXT_PAGE_ACCESS_TR		(0x52B5)
++#define LAN88XX_EXT_PAGE_TR_CR			16
++#define LAN88XX_EXT_PAGE_TR_LOW_DATA		17
++#define LAN88XX_EXT_PAGE_TR_HIGH_DATA		18
++
+ #endif /* _MICROCHIPPHY_H */
+diff --git a/include/linux/mtd/map.h b/include/linux/mtd/map.h
+index b5b43f94f311..01b990e4b228 100644
+--- a/include/linux/mtd/map.h
++++ b/include/linux/mtd/map.h
+@@ -312,7 +312,7 @@ void map_destroy(struct mtd_info *mtd);
+ ({									\
+ 	int i, ret = 1;							\
+ 	for (i = 0; i < map_words(map); i++) {				\
+-		if (((val1).x[i] & (val2).x[i]) != (val2).x[i]) {	\
++		if (((val1).x[i] & (val2).x[i]) != (val3).x[i]) {	\
+ 			ret = 0;					\
+ 			break;						\
+ 		}							\
+diff --git a/include/linux/mtd/rawnand.h b/include/linux/mtd/rawnand.h
+index 56c5570aadbe..694f718d012f 100644
+--- a/include/linux/mtd/rawnand.h
++++ b/include/linux/mtd/rawnand.h
+@@ -824,12 +824,18 @@ struct nand_op_instr {
+  * tBERS (during an erase) which all of them are u64 values that cannot be
+  * divided by usual kernel macros and must be handled with the special
+  * DIV_ROUND_UP_ULL() macro.
++ *
++ * Cast to type of dividend is needed here to guarantee that the result won't
++ * be an unsigned long long when the dividend is an unsigned long (or smaller),
++ * which is what the compiler does when it sees ternary operator with 2
++ * different return types (picks the largest type to make sure there's no
++ * loss).
+  */
+-#define __DIVIDE(dividend, divisor) ({					\
+-	sizeof(dividend) == sizeof(u32) ?				\
+-		DIV_ROUND_UP(dividend, divisor) :			\
+-		DIV_ROUND_UP_ULL(dividend, divisor);			\
+-		})
++#define __DIVIDE(dividend, divisor) ({						\
++	(__typeof__(dividend))(sizeof(dividend) <= sizeof(unsigned long) ?	\
++			       DIV_ROUND_UP(dividend, divisor) :		\
++			       DIV_ROUND_UP_ULL(dividend, divisor)); 		\
++	})
+ #define PSEC_TO_NSEC(x) __DIVIDE(x, 1000)
+ #define PSEC_TO_MSEC(x) __DIVIDE(x, 1000000000)
+ 
+diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
+index b1f37a89e368..79b99d653e03 100644
+--- a/include/linux/percpu-rwsem.h
++++ b/include/linux/percpu-rwsem.h
+@@ -133,7 +133,7 @@ static inline void percpu_rwsem_release(struct percpu_rw_semaphore *sem,
+ 	lock_release(&sem->rw_sem.dep_map, 1, ip);
+ #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
+ 	if (!read)
+-		sem->rw_sem.owner = NULL;
++		sem->rw_sem.owner = RWSEM_OWNER_UNKNOWN;
+ #endif
+ }
+ 
+@@ -141,6 +141,10 @@ static inline void percpu_rwsem_acquire(struct percpu_rw_semaphore *sem,
+ 					bool read, unsigned long ip)
+ {
+ 	lock_acquire(&sem->rw_sem.dep_map, 0, 1, read, 1, NULL, ip);
++#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
++	if (!read)
++		sem->rw_sem.owner = current;
++#endif
+ }
+ 
+ #endif
+diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
+index 56707d5ff6ad..ab93b6eae696 100644
+--- a/include/linux/rwsem.h
++++ b/include/linux/rwsem.h
+@@ -44,6 +44,12 @@ struct rw_semaphore {
+ #endif
+ };
+ 
++/*
++ * Setting bit 0 of the owner field with other non-zero bits will indicate
++ * that the rwsem is writer-owned with an unknown owner.
++ */
++#define RWSEM_OWNER_UNKNOWN	((struct task_struct *)-1L)
++
+ extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);
+ extern struct rw_semaphore *rwsem_down_read_failed_killable(struct rw_semaphore *sem);
+ extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem);
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 710508af02c8..8145cb4ee838 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -113,17 +113,36 @@ struct task_group;
+ 
+ #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
+ 
++/*
++ * Special states are those that do not use the normal wait-loop pattern. See
++ * the comment with set_special_state().
++ */
++#define is_special_task_state(state)				\
++	((state) & (__TASK_STOPPED | __TASK_TRACED | TASK_DEAD))
++
+ #define __set_current_state(state_value)			\
+ 	do {							\
++		WARN_ON_ONCE(is_special_task_state(state_value));\
+ 		current->task_state_change = _THIS_IP_;		\
+ 		current->state = (state_value);			\
+ 	} while (0)
++
+ #define set_current_state(state_value)				\
+ 	do {							\
++		WARN_ON_ONCE(is_special_task_state(state_value));\
+ 		current->task_state_change = _THIS_IP_;		\
+ 		smp_store_mb(current->state, (state_value));	\
+ 	} while (0)
+ 
++#define set_special_state(state_value)					\
++	do {								\
++		unsigned long flags; /* may shadow */			\
++		WARN_ON_ONCE(!is_special_task_state(state_value));	\
++		raw_spin_lock_irqsave(&current->pi_lock, flags);	\
++		current->task_state_change = _THIS_IP_;			\
++		current->state = (state_value);				\
++		raw_spin_unlock_irqrestore(&current->pi_lock, flags);	\
++	} while (0)
+ #else
+ /*
+  * set_current_state() includes a barrier so that the write of current->state
+@@ -145,8 +164,8 @@ struct task_group;
+  *
+  * The above is typically ordered against the wakeup, which does:
+  *
+- *	need_sleep = false;
+- *	wake_up_state(p, TASK_UNINTERRUPTIBLE);
++ *   need_sleep = false;
++ *   wake_up_state(p, TASK_UNINTERRUPTIBLE);
+  *
+  * Where wake_up_state() (and all other wakeup primitives) imply enough
+  * barriers to order the store of the variable against wakeup.
+@@ -155,12 +174,33 @@ struct task_group;
+  * once it observes the TASK_UNINTERRUPTIBLE store the waking CPU can issue a
+  * TASK_RUNNING store which can collide with __set_current_state(TASK_RUNNING).
+  *
+- * This is obviously fine, since they both store the exact same value.
++ * However, with slightly different timing the wakeup TASK_RUNNING store can
++ * also collide with the TASK_UNINTERRUPTIBLE store. Losing that store is not
++ * a problem either because that will result in one extra go around the loop
++ * and our @cond test will save the day.
+  *
+  * Also see the comments of try_to_wake_up().
+  */
+-#define __set_current_state(state_value) do { current->state = (state_value); } while (0)
+-#define set_current_state(state_value)	 smp_store_mb(current->state, (state_value))
++#define __set_current_state(state_value)				\
++	current->state = (state_value)
++
++#define set_current_state(state_value)					\
++	smp_store_mb(current->state, (state_value))
++
++/*
++ * set_special_state() should be used for those states when the blocking task
++ * cannot use the regular condition-based wait-loop. In that case we must
++ * serialize against wakeups such that any possible in-flight TASK_RUNNING stores
++ * will not collide with our state change.
++ */
++#define set_special_state(state_value)					\
++	do {								\
++		unsigned long flags; /* may shadow */			\
++		raw_spin_lock_irqsave(&current->pi_lock, flags);	\
++		current->state = (state_value);				\
++		raw_spin_unlock_irqrestore(&current->pi_lock, flags);	\
++	} while (0)
++
+ #endif
+ 
+ /* Task command name length: */
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index 23b4f9cb82db..acf701e057af 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -280,7 +280,7 @@ static inline void kernel_signal_stop(void)
+ {
+ 	spin_lock_irq(&current->sighand->siglock);
+ 	if (current->jobctl & JOBCTL_STOP_DEQUEUED)
+-		__set_current_state(TASK_STOPPED);
++		set_special_state(TASK_STOPPED);
+ 	spin_unlock_irq(&current->sighand->siglock);
+ 
+ 	schedule();
+diff --git a/include/linux/stringhash.h b/include/linux/stringhash.h
+index e8f0f852968f..c0c5c5b73dc0 100644
+--- a/include/linux/stringhash.h
++++ b/include/linux/stringhash.h
+@@ -50,9 +50,9 @@ partial_name_hash(unsigned long c, unsigned long prevhash)
+  * losing bits).  This also has the property (wanted by the dcache)
+  * that the msbits make a good hash table index.
+  */
+-static inline unsigned long end_name_hash(unsigned long hash)
++static inline unsigned int end_name_hash(unsigned long hash)
+ {
+-	return __hash_32((unsigned int)hash);
++	return hash_long(hash, 32);
+ }
+ 
+ /*
+diff --git a/include/soc/bcm2835/raspberrypi-firmware.h b/include/soc/bcm2835/raspberrypi-firmware.h
+index cb979ad90401..b86c4c367004 100644
+--- a/include/soc/bcm2835/raspberrypi-firmware.h
++++ b/include/soc/bcm2835/raspberrypi-firmware.h
+@@ -125,13 +125,13 @@ struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node);
+ static inline int rpi_firmware_property(struct rpi_firmware *fw, u32 tag,
+ 					void *data, size_t len)
+ {
+-	return 0;
++	return -ENOSYS;
+ }
+ 
+ static inline int rpi_firmware_property_list(struct rpi_firmware *fw,
+ 					     void *data, size_t tag_size)
+ {
+-	return 0;
++	return -ENOSYS;
+ }
+ 
+ static inline struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
+diff --git a/init/main.c b/init/main.c
+index 21efbf6ace93..dacaf589226a 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -981,6 +981,13 @@ __setup("rodata=", set_debug_rodata);
+ static void mark_readonly(void)
+ {
+ 	if (rodata_enabled) {
++		/*
++		 * load_module() results in W+X mappings, which are cleaned up
++		 * with call_rcu_sched().  Let's make sure that queued work is
++		 * flushed so that we don't hit false positives looking for
++		 * insecure pages which are W+X.
++		 */
++		rcu_barrier_sched();
+ 		mark_rodata_ro();
+ 		rodata_test();
+ 	} else
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 43f95d190eea..d18c8bf4051b 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -26,6 +26,7 @@
+ #include <linux/cred.h>
+ #include <linux/timekeeping.h>
+ #include <linux/ctype.h>
++#include <linux/nospec.h>
+ 
+ #define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PROG_ARRAY || \
+ 			   (map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \
+@@ -102,12 +103,14 @@ const struct bpf_map_ops bpf_map_offload_ops = {
+ static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)
+ {
+ 	const struct bpf_map_ops *ops;
++	u32 type = attr->map_type;
+ 	struct bpf_map *map;
+ 	int err;
+ 
+-	if (attr->map_type >= ARRAY_SIZE(bpf_map_types))
++	if (type >= ARRAY_SIZE(bpf_map_types))
+ 		return ERR_PTR(-EINVAL);
+-	ops = bpf_map_types[attr->map_type];
++	type = array_index_nospec(type, ARRAY_SIZE(bpf_map_types));
++	ops = bpf_map_types[type];
+ 	if (!ops)
+ 		return ERR_PTR(-EINVAL);
+ 
+@@ -122,7 +125,7 @@ static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)
+ 	if (IS_ERR(map))
+ 		return map;
+ 	map->ops = ops;
+-	map->map_type = attr->map_type;
++	map->map_type = type;
+ 	return map;
+ }
+ 
+@@ -869,11 +872,17 @@ static const struct bpf_prog_ops * const bpf_prog_types[] = {
+ 
+ static int find_prog_type(enum bpf_prog_type type, struct bpf_prog *prog)
+ {
+-	if (type >= ARRAY_SIZE(bpf_prog_types) || !bpf_prog_types[type])
++	const struct bpf_prog_ops *ops;
++
++	if (type >= ARRAY_SIZE(bpf_prog_types))
++		return -EINVAL;
++	type = array_index_nospec(type, ARRAY_SIZE(bpf_prog_types));
++	ops = bpf_prog_types[type];
++	if (!ops)
+ 		return -EINVAL;
+ 
+ 	if (!bpf_prog_is_dev_bound(prog->aux))
+-		prog->aux->ops = bpf_prog_types[type];
++		prog->aux->ops = ops;
+ 	else
+ 		prog->aux->ops = &bpf_offload_prog_ops;
+ 	prog->type = type;
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index cd50e99202b0..2017a39ab490 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -55,7 +55,6 @@ enum KTHREAD_BITS {
+ 	KTHREAD_IS_PER_CPU = 0,
+ 	KTHREAD_SHOULD_STOP,
+ 	KTHREAD_SHOULD_PARK,
+-	KTHREAD_IS_PARKED,
+ };
+ 
+ static inline void set_kthread_struct(void *kthread)
+@@ -177,14 +176,12 @@ void *kthread_probe_data(struct task_struct *task)
+ 
+ static void __kthread_parkme(struct kthread *self)
+ {
+-	__set_current_state(TASK_PARKED);
+-	while (test_bit(KTHREAD_SHOULD_PARK, &self->flags)) {
+-		if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags))
+-			complete(&self->parked);
++	for (;;) {
++		set_current_state(TASK_PARKED);
++		if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags))
++			break;
+ 		schedule();
+-		__set_current_state(TASK_PARKED);
+ 	}
+-	clear_bit(KTHREAD_IS_PARKED, &self->flags);
+ 	__set_current_state(TASK_RUNNING);
+ }
+ 
+@@ -194,6 +191,11 @@ void kthread_parkme(void)
+ }
+ EXPORT_SYMBOL_GPL(kthread_parkme);
+ 
++void kthread_park_complete(struct task_struct *k)
++{
++	complete(&to_kthread(k)->parked);
++}
++
+ static int kthread(void *_create)
+ {
+ 	/* Copy data: it's on kthread's stack */
+@@ -450,22 +452,15 @@ void kthread_unpark(struct task_struct *k)
+ {
+ 	struct kthread *kthread = to_kthread(k);
+ 
+-	clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
+ 	/*
+-	 * We clear the IS_PARKED bit here as we don't wait
+-	 * until the task has left the park code. So if we'd
+-	 * park before that happens we'd see the IS_PARKED bit
+-	 * which might be about to be cleared.
++	 * Newly created kthread was parked when the CPU was offline.
++	 * The binding was lost and we need to set it again.
+ 	 */
+-	if (test_and_clear_bit(KTHREAD_IS_PARKED, &kthread->flags)) {
+-		/*
+-		 * Newly created kthread was parked when the CPU was offline.
+-		 * The binding was lost and we need to set it again.
+-		 */
+-		if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))
+-			__kthread_bind(k, kthread->cpu, TASK_PARKED);
+-		wake_up_state(k, TASK_PARKED);
+-	}
++	if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))
++		__kthread_bind(k, kthread->cpu, TASK_PARKED);
++
++	clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
++	wake_up_state(k, TASK_PARKED);
+ }
+ EXPORT_SYMBOL_GPL(kthread_unpark);
+ 
+@@ -488,12 +483,13 @@ int kthread_park(struct task_struct *k)
+ 	if (WARN_ON(k->flags & PF_EXITING))
+ 		return -ENOSYS;
+ 
+-	if (!test_bit(KTHREAD_IS_PARKED, &kthread->flags)) {
+-		set_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
+-		if (k != current) {
+-			wake_up_process(k);
+-			wait_for_completion(&kthread->parked);
+-		}
++	if (WARN_ON_ONCE(test_bit(KTHREAD_SHOULD_PARK, &kthread->flags)))
++		return -EBUSY;
++
++	set_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
++	if (k != current) {
++		wake_up_process(k);
++		wait_for_completion(&kthread->parked);
+ 	}
+ 
+ 	return 0;
+diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
+index e795908f3607..a90336779375 100644
+--- a/kernel/locking/rwsem-xadd.c
++++ b/kernel/locking/rwsem-xadd.c
+@@ -352,16 +352,15 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
+ 	struct task_struct *owner;
+ 	bool ret = true;
+ 
++	BUILD_BUG_ON(!rwsem_has_anonymous_owner(RWSEM_OWNER_UNKNOWN));
++
+ 	if (need_resched())
+ 		return false;
+ 
+ 	rcu_read_lock();
+ 	owner = READ_ONCE(sem->owner);
+-	if (!rwsem_owner_is_writer(owner)) {
+-		/*
+-		 * Don't spin if the rwsem is readers owned.
+-		 */
+-		ret = !rwsem_owner_is_reader(owner);
++	if (!owner || !is_rwsem_owner_spinnable(owner)) {
++		ret = !owner;	/* !owner is spinnable */
+ 		goto done;
+ 	}
+ 
+@@ -382,11 +381,11 @@ static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
+ {
+ 	struct task_struct *owner = READ_ONCE(sem->owner);
+ 
+-	if (!rwsem_owner_is_writer(owner))
+-		goto out;
++	if (!is_rwsem_owner_spinnable(owner))
++		return false;
+ 
+ 	rcu_read_lock();
+-	while (sem->owner == owner) {
++	while (owner && (READ_ONCE(sem->owner) == owner)) {
+ 		/*
+ 		 * Ensure we emit the owner->on_cpu, dereference _after_
+ 		 * checking sem->owner still matches owner, if that fails,
+@@ -408,12 +407,12 @@ static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
+ 		cpu_relax();
+ 	}
+ 	rcu_read_unlock();
+-out:
++
+ 	/*
+ 	 * If there is a new owner or the owner is not set, we continue
+ 	 * spinning.
+ 	 */
+-	return !rwsem_owner_is_reader(READ_ONCE(sem->owner));
++	return is_rwsem_owner_spinnable(READ_ONCE(sem->owner));
+ }
+ 
+ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
+diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
+index f549c552dbf1..abbf506b1c72 100644
+--- a/kernel/locking/rwsem.c
++++ b/kernel/locking/rwsem.c
+@@ -217,5 +217,3 @@ void up_read_non_owner(struct rw_semaphore *sem)
+ EXPORT_SYMBOL(up_read_non_owner);
+ 
+ #endif
+-
+-
+diff --git a/kernel/locking/rwsem.h b/kernel/locking/rwsem.h
+index a883b8f1fdc6..410ee7b9ac2c 100644
+--- a/kernel/locking/rwsem.h
++++ b/kernel/locking/rwsem.h
+@@ -1,20 +1,24 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /*
+  * The owner field of the rw_semaphore structure will be set to
+- * RWSEM_READ_OWNED when a reader grabs the lock. A writer will clear
++ * RWSEM_READER_OWNED when a reader grabs the lock. A writer will clear
+  * the owner field when it unlocks. A reader, on the other hand, will
+  * not touch the owner field when it unlocks.
+  *
+- * In essence, the owner field now has the following 3 states:
++ * In essence, the owner field now has the following 4 states:
+  *  1) 0
+  *     - lock is free or the owner hasn't set the field yet
+  *  2) RWSEM_READER_OWNED
+  *     - lock is currently or previously owned by readers (lock is free
+  *       or not set by owner yet)
+- *  3) Other non-zero value
+- *     - a writer owns the lock
++ *  3) RWSEM_ANONYMOUSLY_OWNED bit set with some other bits set as well
++ *     - lock is owned by an anonymous writer, so spinning on the lock
++ *       owner should be disabled.
++ *  4) Other non-zero value
++ *     - a writer owns the lock and other writers can spin on the lock owner.
+  */
+-#define RWSEM_READER_OWNED	((struct task_struct *)1UL)
++#define RWSEM_ANONYMOUSLY_OWNED	(1UL << 0)
++#define RWSEM_READER_OWNED	((struct task_struct *)RWSEM_ANONYMOUSLY_OWNED)
+ 
+ #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
+ /*
+@@ -45,14 +49,22 @@ static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
+ 		WRITE_ONCE(sem->owner, RWSEM_READER_OWNED);
+ }
+ 
+-static inline bool rwsem_owner_is_writer(struct task_struct *owner)
++/*
++ * Return true if a rwsem waiter can spin on the rwsem's owner
++ * and steal the lock, i.e. the lock is not anonymously owned.
++ * N.B. !owner is considered spinnable.
++ */
++static inline bool is_rwsem_owner_spinnable(struct task_struct *owner)
+ {
+-	return owner && owner != RWSEM_READER_OWNED;
++	return !((unsigned long)owner & RWSEM_ANONYMOUSLY_OWNED);
+ }
+ 
+-static inline bool rwsem_owner_is_reader(struct task_struct *owner)
++/*
++ * Return true if rwsem is owned by an anonymous writer or readers.
++ */
++static inline bool rwsem_has_anonymous_owner(struct task_struct *owner)
+ {
+-	return owner == RWSEM_READER_OWNED;
++	return (unsigned long)owner & RWSEM_ANONYMOUSLY_OWNED;
+ }
+ #else
+ static inline void rwsem_set_owner(struct rw_semaphore *sem)
+diff --git a/kernel/module.c b/kernel/module.c
+index bbb45c038321..c3cc1f8615e1 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -3521,6 +3521,11 @@ static noinline int do_init_module(struct module *mod)
+ 	 * walking this with preempt disabled.  In all the failure paths, we
+ 	 * call synchronize_sched(), but we don't want to slow down the success
+ 	 * path, so use actual RCU here.
++	 * Note that module_alloc() on most architectures creates W+X page
++	 * mappings which won't be cleaned up until do_free_init() runs.  Any
++	 * code such as mark_rodata_ro() which depends on those mappings to
++	 * be cleaned up needs to sync with the queued work - ie
++	 * rcu_barrier_sched()
+ 	 */
+ 	call_rcu_sched(&freeinit->rcu, do_free_init);
+ 	mutex_unlock(&module_mutex);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 5f37ef9f6cd5..ce2716bccc8e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -30,6 +30,8 @@
+ #include <linux/syscalls.h>
+ #include <linux/sched/isolation.h>
+ 
++#include <linux/kthread.h>
++
+ #include <asm/switch_to.h>
+ #include <asm/tlb.h>
+ #ifdef CONFIG_PARAVIRT
+@@ -2733,20 +2735,28 @@ static struct rq *finish_task_switch(struct task_struct *prev)
+ 		membarrier_mm_sync_core_before_usermode(mm);
+ 		mmdrop(mm);
+ 	}
+-	if (unlikely(prev_state == TASK_DEAD)) {
+-		if (prev->sched_class->task_dead)
+-			prev->sched_class->task_dead(prev);
++	if (unlikely(prev_state & (TASK_DEAD|TASK_PARKED))) {
++		switch (prev_state) {
++		case TASK_DEAD:
++			if (prev->sched_class->task_dead)
++				prev->sched_class->task_dead(prev);
+ 
+-		/*
+-		 * Remove function-return probe instances associated with this
+-		 * task and put them back on the free list.
+-		 */
+-		kprobe_flush_task(prev);
++			/*
++			 * Remove function-return probe instances associated with this
++			 * task and put them back on the free list.
++			 */
++			kprobe_flush_task(prev);
++
++			/* Task is done with its stack. */
++			put_task_stack(prev);
+ 
+-		/* Task is done with its stack. */
+-		put_task_stack(prev);
++			put_task_struct(prev);
++			break;
+ 
+-		put_task_struct(prev);
++		case TASK_PARKED:
++			kthread_park_complete(prev);
++			break;
++		}
+ 	}
+ 
+ 	tick_nohz_task_switch();
+@@ -3449,23 +3459,8 @@ static void __sched notrace __schedule(bool preempt)
+ 
+ void __noreturn do_task_dead(void)
+ {
+-	/*
+-	 * The setting of TASK_RUNNING by try_to_wake_up() may be delayed
+-	 * when the following two conditions become true.
+-	 *   - There is race condition of mmap_sem (It is acquired by
+-	 *     exit_mm()), and
+-	 *   - SMI occurs before setting TASK_RUNINNG.
+-	 *     (or hypervisor of virtual machine switches to other guest)
+-	 *  As a result, we may become TASK_RUNNING after becoming TASK_DEAD
+-	 *
+-	 * To avoid it, we have to wait for releasing tsk->pi_lock which
+-	 * is held by try_to_wake_up()
+-	 */
+-	raw_spin_lock_irq(&current->pi_lock);
+-	raw_spin_unlock_irq(&current->pi_lock);
+-
+ 	/* Causes final put_task_struct in finish_task_switch(): */
+-	__set_current_state(TASK_DEAD);
++	set_special_state(TASK_DEAD);
+ 
+ 	/* Tell freezer to ignore us: */
+ 	current->flags |= PF_NOFREEZE;
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 9df09782025c..a6b6b45a0c68 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1121,7 +1121,7 @@ extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq);
+  * should be larger than 2^(64 - 20 - 8), which is more than 64 seconds.
+  * So, overflow is not an issue here.
+  */
+-u64 grub_reclaim(u64 delta, struct rq *rq, struct sched_dl_entity *dl_se)
++static u64 grub_reclaim(u64 delta, struct rq *rq, struct sched_dl_entity *dl_se)
+ {
+ 	u64 u_inact = rq->dl.this_bw - rq->dl.running_bw; /* Utot - Uact */
+ 	u64 u_act;
+@@ -2723,8 +2723,6 @@ bool dl_cpu_busy(unsigned int cpu)
+ #endif
+ 
+ #ifdef CONFIG_SCHED_DEBUG
+-extern void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq);
+-
+ void print_dl_stats(struct seq_file *m, int cpu)
+ {
+ 	print_dl_rq(m, cpu, &cpu_rq(cpu)->dl);
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 84bf1a24a55a..cf52bf16aa7e 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2691,8 +2691,6 @@ int sched_rr_handler(struct ctl_table *table, int write,
+ }
+ 
+ #ifdef CONFIG_SCHED_DEBUG
+-extern void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq);
+-
+ void print_rt_stats(struct seq_file *m, int cpu)
+ {
+ 	rt_rq_iter_t iter;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index fb5fc458547f..b0c98ff56071 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1986,8 +1986,9 @@ extern bool sched_debug_enabled;
+ extern void print_cfs_stats(struct seq_file *m, int cpu);
+ extern void print_rt_stats(struct seq_file *m, int cpu);
+ extern void print_dl_stats(struct seq_file *m, int cpu);
+-extern void
+-print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq);
++extern void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq);
++extern void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq);
++extern void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq);
+ #ifdef CONFIG_NUMA_BALANCING
+ extern void
+ show_numa_stats(struct task_struct *p, struct seq_file *m);
+diff --git a/kernel/signal.c b/kernel/signal.c
+index c6e4c83dc090..365aacb46aa6 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1961,14 +1961,27 @@ static void ptrace_stop(int exit_code, int why, int clear_code, siginfo_t *info)
+ 			return;
+ 	}
+ 
++	set_special_state(TASK_TRACED);
++
+ 	/*
+ 	 * We're committing to trapping.  TRACED should be visible before
+ 	 * TRAPPING is cleared; otherwise, the tracer might fail do_wait().
+ 	 * Also, transition to TRACED and updates to ->jobctl should be
+ 	 * atomic with respect to siglock and should be done after the arch
+ 	 * hook as siglock is released and regrabbed across it.
++	 *
++	 *     TRACER				    TRACEE
++	 *
++	 *     ptrace_attach()
++	 * [L]   wait_on_bit(JOBCTL_TRAPPING)	[S] set_special_state(TRACED)
++	 *     do_wait()
++	 *       set_current_state()                smp_wmb();
++	 *       ptrace_do_wait()
++	 *         wait_task_stopped()
++	 *           task_stopped_code()
++	 * [L]         task_is_traced()		[S] task_clear_jobctl_trapping();
+ 	 */
+-	set_current_state(TASK_TRACED);
++	smp_wmb();
+ 
+ 	current->last_siginfo = info;
+ 	current->exit_code = exit_code;
+@@ -2176,7 +2189,7 @@ static bool do_signal_stop(int signr)
+ 		if (task_participate_group_stop(current))
+ 			notify = CLD_STOPPED;
+ 
+-		__set_current_state(TASK_STOPPED);
++		set_special_state(TASK_STOPPED);
+ 		spin_unlock_irq(&current->sighand->siglock);
+ 
+ 		/*
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index b7591261652d..64c0291b579c 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -21,6 +21,7 @@
+ #include <linux/smpboot.h>
+ #include <linux/atomic.h>
+ #include <linux/nmi.h>
++#include <linux/sched/wake_q.h>
+ 
+ /*
+  * Structure to determine completion condition and record errors.  May
+@@ -65,27 +66,31 @@ static void cpu_stop_signal_done(struct cpu_stop_done *done)
+ }
+ 
+ static void __cpu_stop_queue_work(struct cpu_stopper *stopper,
+-					struct cpu_stop_work *work)
++					struct cpu_stop_work *work,
++					struct wake_q_head *wakeq)
+ {
+ 	list_add_tail(&work->list, &stopper->works);
+-	wake_up_process(stopper->thread);
++	wake_q_add(wakeq, stopper->thread);
+ }
+ 
+ /* queue @work to @stopper.  if offline, @work is completed immediately */
+ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ {
+ 	struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
++	DEFINE_WAKE_Q(wakeq);
+ 	unsigned long flags;
+ 	bool enabled;
+ 
+ 	spin_lock_irqsave(&stopper->lock, flags);
+ 	enabled = stopper->enabled;
+ 	if (enabled)
+-		__cpu_stop_queue_work(stopper, work);
++		__cpu_stop_queue_work(stopper, work, &wakeq);
+ 	else if (work->done)
+ 		cpu_stop_signal_done(work->done);
+ 	spin_unlock_irqrestore(&stopper->lock, flags);
+ 
++	wake_up_q(&wakeq);
++
+ 	return enabled;
+ }
+ 
+@@ -229,6 +234,7 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ {
+ 	struct cpu_stopper *stopper1 = per_cpu_ptr(&cpu_stopper, cpu1);
+ 	struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);
++	DEFINE_WAKE_Q(wakeq);
+ 	int err;
+ retry:
+ 	spin_lock_irq(&stopper1->lock);
+@@ -252,8 +258,8 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ 			goto unlock;
+ 
+ 	err = 0;
+-	__cpu_stop_queue_work(stopper1, work1);
+-	__cpu_stop_queue_work(stopper2, work2);
++	__cpu_stop_queue_work(stopper1, work1, &wakeq);
++	__cpu_stop_queue_work(stopper2, work2, &wakeq);
+ unlock:
+ 	spin_unlock(&stopper2->lock);
+ 	spin_unlock_irq(&stopper1->lock);
+@@ -263,6 +269,9 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ 			cpu_relax();
+ 		goto retry;
+ 	}
++
++	wake_up_q(&wakeq);
++
+ 	return err;
+ }
+ /**
+diff --git a/lib/find_bit_benchmark.c b/lib/find_bit_benchmark.c
+index 5985a25e6cbc..5367ffa5c18f 100644
+--- a/lib/find_bit_benchmark.c
++++ b/lib/find_bit_benchmark.c
+@@ -132,7 +132,12 @@ static int __init find_bit_test(void)
+ 	test_find_next_bit(bitmap, BITMAP_LEN);
+ 	test_find_next_zero_bit(bitmap, BITMAP_LEN);
+ 	test_find_last_bit(bitmap, BITMAP_LEN);
+-	test_find_first_bit(bitmap, BITMAP_LEN);
++
++	/*
++	 * test_find_first_bit() may take some time, so
++	 * traverse only part of bitmap to avoid soft lockup.
++	 */
++	test_find_first_bit(bitmap, BITMAP_LEN / 10);
+ 	test_find_next_and_bit(bitmap, bitmap2, BITMAP_LEN);
+ 
+ 	pr_err("\nStart testing find_bit() with sparse bitmap\n");
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 88719f53ae3b..b1b13c214e95 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -2192,7 +2192,7 @@ static void __memcg_schedule_kmem_cache_create(struct mem_cgroup *memcg,
+ {
+ 	struct memcg_kmem_cache_create_work *cw;
+ 
+-	cw = kmalloc(sizeof(*cw), GFP_NOWAIT);
++	cw = kmalloc(sizeof(*cw), GFP_NOWAIT | __GFP_NOWARN);
+ 	if (!cw)
+ 		return;
+ 
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index 3726dc797847..a57788b0082e 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -669,7 +669,7 @@ static void vti6_link_config(struct ip6_tnl *t, bool keep_mtu)
+ 	else
+ 		mtu = ETH_DATA_LEN - LL_MAX_HEADER - sizeof(struct ipv6hdr);
+ 
+-	dev->mtu = max_t(int, mtu, IPV6_MIN_MTU);
++	dev->mtu = max_t(int, mtu, IPV4_MIN_MTU);
+ }
+ 
+ /**
+@@ -881,7 +881,7 @@ static void vti6_dev_setup(struct net_device *dev)
+ 	dev->priv_destructor = vti6_dev_free;
+ 
+ 	dev->type = ARPHRD_TUNNEL6;
+-	dev->min_mtu = IPV6_MIN_MTU;
++	dev->min_mtu = IPV4_MIN_MTU;
+ 	dev->max_mtu = IP_MAX_MTU - sizeof(struct ipv6hdr);
+ 	dev->flags |= IFF_NOARP;
+ 	dev->addr_len = sizeof(struct in6_addr);
+diff --git a/net/ipv6/netfilter/Kconfig b/net/ipv6/netfilter/Kconfig
+index d395d1590699..94b2d286a1b2 100644
+--- a/net/ipv6/netfilter/Kconfig
++++ b/net/ipv6/netfilter/Kconfig
+@@ -48,6 +48,34 @@ config NFT_CHAIN_ROUTE_IPV6
+ 	  fields such as the source, destination, flowlabel, hop-limit and
+ 	  the packet mark.
+ 
++if NF_NAT_IPV6
++
++config NFT_CHAIN_NAT_IPV6
++	tristate "IPv6 nf_tables nat chain support"
++	help
++	  This option enables the "nat" chain for IPv6 in nf_tables. This
++	  chain type is used to perform Network Address Translation (NAT)
++	  packet transformations such as the source, destination address and
++	  source and destination ports.
++
++config NFT_MASQ_IPV6
++	tristate "IPv6 masquerade support for nf_tables"
++	depends on NFT_MASQ
++	select NF_NAT_MASQUERADE_IPV6
++	help
++	  This is the expression that provides IPv6 masquerading support for
++	  nf_tables.
++
++config NFT_REDIR_IPV6
++	tristate "IPv6 redirect support for nf_tables"
++	depends on NFT_REDIR
++	select NF_NAT_REDIRECT
++	help
++	  This is the expression that provides IPv6 redirect support for
++	  nf_tables.
++
++endif # NF_NAT_IPV6
++
+ config NFT_REJECT_IPV6
+ 	select NF_REJECT_IPV6
+ 	default NFT_REJECT
+@@ -107,39 +135,12 @@ config NF_NAT_IPV6
+ 
+ if NF_NAT_IPV6
+ 
+-config NFT_CHAIN_NAT_IPV6
+-	depends on NF_TABLES_IPV6
+-	tristate "IPv6 nf_tables nat chain support"
+-	help
+-	  This option enables the "nat" chain for IPv6 in nf_tables. This
+-	  chain type is used to perform Network Address Translation (NAT)
+-	  packet transformations such as the source, destination address and
+-	  source and destination ports.
+-
+ config NF_NAT_MASQUERADE_IPV6
+ 	tristate "IPv6 masquerade support"
+ 	help
+ 	  This is the kernel functionality to provide NAT in the masquerade
+ 	  flavour (automatic source address selection) for IPv6.
+ 
+-config NFT_MASQ_IPV6
+-	tristate "IPv6 masquerade support for nf_tables"
+-	depends on NF_TABLES_IPV6
+-	depends on NFT_MASQ
+-	select NF_NAT_MASQUERADE_IPV6
+-	help
+-	  This is the expression that provides IPv4 masquerading support for
+-	  nf_tables.
+-
+-config NFT_REDIR_IPV6
+-	tristate "IPv6 redirect support for nf_tables"
+-	depends on NF_TABLES_IPV6
+-	depends on NFT_REDIR
+-	select NF_NAT_REDIRECT
+-	help
+-	  This is the expression that provides IPv4 redirect support for
+-	  nf_tables.
+-
+ endif # NF_NAT_IPV6
+ 
+ config IP6_NF_IPTABLES
+diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
+index 595c662a61e8..ac4295296514 100644
+--- a/net/mac80211/agg-tx.c
++++ b/net/mac80211/agg-tx.c
+@@ -8,6 +8,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2007-2010, Intel Corporation
+  * Copyright(c) 2015-2017 Intel Deutschland GmbH
++ * Copyright (C) 2018 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+@@ -970,6 +971,9 @@ void ieee80211_process_addba_resp(struct ieee80211_local *local,
+ 
+ 		sta->ampdu_mlme.addba_req_num[tid] = 0;
+ 
++		tid_tx->timeout =
++			le16_to_cpu(mgmt->u.action.u.addba_resp.timeout);
++
+ 		if (tid_tx->timeout) {
+ 			mod_timer(&tid_tx->session_timer,
+ 				  TU_TO_EXP_TIME(tid_tx->timeout));
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 5f303abac5ad..b2457d560e7a 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -35,6 +35,7 @@
+ #define IEEE80211_AUTH_TIMEOUT		(HZ / 5)
+ #define IEEE80211_AUTH_TIMEOUT_LONG	(HZ / 2)
+ #define IEEE80211_AUTH_TIMEOUT_SHORT	(HZ / 10)
++#define IEEE80211_AUTH_TIMEOUT_SAE	(HZ * 2)
+ #define IEEE80211_AUTH_MAX_TRIES	3
+ #define IEEE80211_AUTH_WAIT_ASSOC	(HZ * 5)
+ #define IEEE80211_ASSOC_TIMEOUT		(HZ / 5)
+@@ -3788,16 +3789,19 @@ static int ieee80211_auth(struct ieee80211_sub_if_data *sdata)
+ 			    tx_flags);
+ 
+ 	if (tx_flags == 0) {
+-		auth_data->timeout = jiffies + IEEE80211_AUTH_TIMEOUT;
+-		auth_data->timeout_started = true;
+-		run_again(sdata, auth_data->timeout);
++		if (auth_data->algorithm == WLAN_AUTH_SAE)
++			auth_data->timeout = jiffies +
++				IEEE80211_AUTH_TIMEOUT_SAE;
++		else
++			auth_data->timeout = jiffies + IEEE80211_AUTH_TIMEOUT;
+ 	} else {
+ 		auth_data->timeout =
+ 			round_jiffies_up(jiffies + IEEE80211_AUTH_TIMEOUT_LONG);
+-		auth_data->timeout_started = true;
+-		run_again(sdata, auth_data->timeout);
+ 	}
+ 
++	auth_data->timeout_started = true;
++	run_again(sdata, auth_data->timeout);
++
+ 	return 0;
+ }
+ 
+@@ -3868,8 +3872,15 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
+ 		ifmgd->status_received = false;
+ 		if (ifmgd->auth_data && ieee80211_is_auth(fc)) {
+ 			if (status_acked) {
+-				ifmgd->auth_data->timeout =
+-					jiffies + IEEE80211_AUTH_TIMEOUT_SHORT;
++				if (ifmgd->auth_data->algorithm ==
++				    WLAN_AUTH_SAE)
++					ifmgd->auth_data->timeout =
++						jiffies +
++						IEEE80211_AUTH_TIMEOUT_SAE;
++				else
++					ifmgd->auth_data->timeout =
++						jiffies +
++						IEEE80211_AUTH_TIMEOUT_SHORT;
+ 				run_again(sdata, ifmgd->auth_data->timeout);
+ 			} else {
+ 				ifmgd->auth_data->timeout = jiffies - 1;
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 69722504e3e1..516b63db8d5d 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -4,6 +4,7 @@
+  * Copyright 2006-2007	Jiri Benc <jbenc@suse.cz>
+  * Copyright 2007	Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
++ * Copyright (C) 2018 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+@@ -1138,7 +1139,7 @@ static bool ieee80211_tx_prep_agg(struct ieee80211_tx_data *tx,
+ 	}
+ 
+ 	/* reset session timer */
+-	if (reset_agg_timer && tid_tx->timeout)
++	if (reset_agg_timer)
+ 		tid_tx->last_tx = jiffies;
+ 
+ 	return queued;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index c853386b86ff..e6e6f4ce6322 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -5741,7 +5741,7 @@ static void nft_chain_commit_update(struct nft_trans *trans)
+ 	struct nft_base_chain *basechain;
+ 
+ 	if (nft_trans_chain_name(trans))
+-		strcpy(trans->ctx.chain->name, nft_trans_chain_name(trans));
++		swap(trans->ctx.chain->name, nft_trans_chain_name(trans));
+ 
+ 	if (!nft_is_base_chain(trans->ctx.chain))
+ 		return;
+diff --git a/net/rds/ib_cm.c b/net/rds/ib_cm.c
+index eea1d8611b20..13b38ad0fa4a 100644
+--- a/net/rds/ib_cm.c
++++ b/net/rds/ib_cm.c
+@@ -547,7 +547,7 @@ static int rds_ib_setup_qp(struct rds_connection *conn)
+ 	rdsdebug("conn %p pd %p cq %p %p\n", conn, ic->i_pd,
+ 		 ic->i_send_cq, ic->i_recv_cq);
+ 
+-	return ret;
++	goto out;
+ 
+ sends_out:
+ 	vfree(ic->i_sends);
+@@ -572,6 +572,7 @@ static int rds_ib_setup_qp(struct rds_connection *conn)
+ 		ic->i_send_cq = NULL;
+ rds_ibdev_out:
+ 	rds_ib_remove_conn(rds_ibdev, conn);
++out:
+ 	rds_ib_dev_put(rds_ibdev);
+ 
+ 	return ret;
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index 0c9c18aa7c77..cfcedfcccf10 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -310,7 +310,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
+ 	memset(&cp, 0, sizeof(cp));
+ 	cp.local		= rx->local;
+ 	cp.key			= key;
+-	cp.security_level	= 0;
++	cp.security_level	= rx->min_sec_level;
+ 	cp.exclusive		= false;
+ 	cp.upgrade		= upgrade;
+ 	cp.service_id		= srx->srx_service;
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 416688381eb7..0aa8c7ff1143 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -464,6 +464,7 @@ enum rxrpc_call_flag {
+ 	RXRPC_CALL_SEND_PING,		/* A ping will need to be sent */
+ 	RXRPC_CALL_PINGING,		/* Ping in process */
+ 	RXRPC_CALL_RETRANS_TIMEOUT,	/* Retransmission due to timeout occurred */
++	RXRPC_CALL_BEGAN_RX_TIMER,	/* We began the expect_rx_by timer */
+ };
+ 
+ /*
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 34db634594c4..c01a7fb280cc 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -971,7 +971,7 @@ static void rxrpc_input_call_packet(struct rxrpc_call *call,
+ 	if (timo) {
+ 		unsigned long now = jiffies, expect_rx_by;
+ 
+-		expect_rx_by = jiffies + timo;
++		expect_rx_by = now + timo;
+ 		WRITE_ONCE(call->expect_rx_by, expect_rx_by);
+ 		rxrpc_reduce_call_timer(call, expect_rx_by, now,
+ 					rxrpc_timer_set_for_normal);
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index 38b99db30e54..2af42c7d5b82 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -133,22 +133,49 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 		}
+ 	}
+ 
+-	/* we want to receive ICMP errors */
+-	opt = 1;
+-	ret = kernel_setsockopt(local->socket, SOL_IP, IP_RECVERR,
+-				(char *) &opt, sizeof(opt));
+-	if (ret < 0) {
+-		_debug("setsockopt failed");
+-		goto error;
+-	}
++	switch (local->srx.transport.family) {
++	case AF_INET:
++		/* we want to receive ICMP errors */
++		opt = 1;
++		ret = kernel_setsockopt(local->socket, SOL_IP, IP_RECVERR,
++					(char *) &opt, sizeof(opt));
++		if (ret < 0) {
++			_debug("setsockopt failed");
++			goto error;
++		}
+ 
+-	/* we want to set the don't fragment bit */
+-	opt = IP_PMTUDISC_DO;
+-	ret = kernel_setsockopt(local->socket, SOL_IP, IP_MTU_DISCOVER,
+-				(char *) &opt, sizeof(opt));
+-	if (ret < 0) {
+-		_debug("setsockopt failed");
+-		goto error;
++		/* we want to set the don't fragment bit */
++		opt = IP_PMTUDISC_DO;
++		ret = kernel_setsockopt(local->socket, SOL_IP, IP_MTU_DISCOVER,
++					(char *) &opt, sizeof(opt));
++		if (ret < 0) {
++			_debug("setsockopt failed");
++			goto error;
++		}
++		break;
++
++	case AF_INET6:
++		/* we want to receive ICMP errors */
++		opt = 1;
++		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_RECVERR,
++					(char *) &opt, sizeof(opt));
++		if (ret < 0) {
++			_debug("setsockopt failed");
++			goto error;
++		}
++
++		/* we want to set the don't fragment bit */
++		opt = IPV6_PMTUDISC_DO;
++		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
++					(char *) &opt, sizeof(opt));
++		if (ret < 0) {
++			_debug("setsockopt failed");
++			goto error;
++		}
++		break;
++
++	default:
++		BUG();
+ 	}
+ 
+ 	/* set the socket up */
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index cf73dc006c3b..8787ff39e4f8 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -407,6 +407,17 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ 							rxrpc_timer_set_for_lost_ack);
+ 			}
+ 		}
++
++		if (sp->hdr.seq == 1 &&
++		    !test_and_set_bit(RXRPC_CALL_BEGAN_RX_TIMER,
++				      &call->flags)) {
++			unsigned long nowj = jiffies, expect_rx_by;
++
++			expect_rx_by = nowj + call->next_rx_timo;
++			WRITE_ONCE(call->expect_rx_by, expect_rx_by);
++			rxrpc_reduce_call_timer(call, expect_rx_by, nowj,
++						rxrpc_timer_set_for_normal);
++		}
+ 	}
+ 
+ 	rxrpc_set_keepalive(call);
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 7a94ce92ffdc..28f9e1584ff3 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -223,6 +223,15 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call,
+ 
+ 	ret = rxrpc_send_data_packet(call, skb, false);
+ 	if (ret < 0) {
++		switch (ret) {
++		case -ENETUNREACH:
++		case -EHOSTUNREACH:
++		case -ECONNREFUSED:
++			rxrpc_set_call_completion(call,
++						  RXRPC_CALL_LOCAL_ERROR,
++						  0, ret);
++			goto out;
++		}
+ 		_debug("need instant resend %d", ret);
+ 		rxrpc_instant_resend(call, ix);
+ 	} else {
+@@ -241,6 +250,7 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call,
+ 					rxrpc_timer_set_for_send);
+ 	}
+ 
++out:
+ 	rxrpc_free_skb(skb, rxrpc_skb_tx_freed);
+ 	_leave("");
+ }
+diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
+index 5a3f691bb545..c8ba29535919 100644
+--- a/net/sched/act_skbedit.c
++++ b/net/sched/act_skbedit.c
+@@ -121,7 +121,8 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
+ 		return 0;
+ 
+ 	if (!flags) {
+-		tcf_idr_release(*a, bind);
++		if (exists)
++			tcf_idr_release(*a, bind);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 5a983c9bea53..0132c08b0680 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1313,8 +1313,11 @@ static ssize_t smc_sendpage(struct socket *sock, struct page *page,
+ 
+ 	smc = smc_sk(sk);
+ 	lock_sock(sk);
+-	if (sk->sk_state != SMC_ACTIVE)
++	if (sk->sk_state != SMC_ACTIVE) {
++		release_sock(sk);
+ 		goto out;
++	}
++	release_sock(sk);
+ 	if (smc->use_fallback)
+ 		rc = kernel_sendpage(smc->clcsock, page, offset,
+ 				     size, flags);
+@@ -1322,7 +1325,6 @@ static ssize_t smc_sendpage(struct socket *sock, struct page *page,
+ 		rc = sock_no_sendpage(sock, page, offset, size, flags);
+ 
+ out:
+-	release_sock(sk);
+ 	return rc;
+ }
+ 
+diff --git a/net/sunrpc/xprtrdma/fmr_ops.c b/net/sunrpc/xprtrdma/fmr_ops.c
+index d5f95bb39300..5679b5374dfb 100644
+--- a/net/sunrpc/xprtrdma/fmr_ops.c
++++ b/net/sunrpc/xprtrdma/fmr_ops.c
+@@ -72,6 +72,7 @@ fmr_op_init_mr(struct rpcrdma_ia *ia, struct rpcrdma_mr *mr)
+ 	if (IS_ERR(mr->fmr.fm_mr))
+ 		goto out_fmr_err;
+ 
++	INIT_LIST_HEAD(&mr->mr_list);
+ 	return 0;
+ 
+ out_fmr_err:
+@@ -102,10 +103,6 @@ fmr_op_release_mr(struct rpcrdma_mr *mr)
+ 	LIST_HEAD(unmap_list);
+ 	int rc;
+ 
+-	/* Ensure MW is not on any rl_registered list */
+-	if (!list_empty(&mr->mr_list))
+-		list_del(&mr->mr_list);
+-
+ 	kfree(mr->fmr.fm_physaddrs);
+ 	kfree(mr->mr_sg);
+ 
+diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
+index 90f688f19783..4d11dc5190b8 100644
+--- a/net/sunrpc/xprtrdma/frwr_ops.c
++++ b/net/sunrpc/xprtrdma/frwr_ops.c
+@@ -110,6 +110,7 @@ frwr_op_init_mr(struct rpcrdma_ia *ia, struct rpcrdma_mr *mr)
+ 	if (!mr->mr_sg)
+ 		goto out_list_err;
+ 
++	INIT_LIST_HEAD(&mr->mr_list);
+ 	sg_init_table(mr->mr_sg, depth);
+ 	init_completion(&frwr->fr_linv_done);
+ 	return 0;
+@@ -133,10 +134,6 @@ frwr_op_release_mr(struct rpcrdma_mr *mr)
+ {
+ 	int rc;
+ 
+-	/* Ensure MR is not on any rl_registered list */
+-	if (!list_empty(&mr->mr_list))
+-		list_del(&mr->mr_list);
+-
+ 	rc = ib_dereg_mr(mr->frwr.fr_mr);
+ 	if (rc)
+ 		pr_err("rpcrdma: final ib_dereg_mr for %p returned %i\n",
+@@ -195,7 +192,7 @@ frwr_op_recover_mr(struct rpcrdma_mr *mr)
+ 	return;
+ 
+ out_release:
+-	pr_err("rpcrdma: FRWR reset failed %d, %p release\n", rc, mr);
++	pr_err("rpcrdma: FRWR reset failed %d, %p released\n", rc, mr);
+ 	r_xprt->rx_stats.mrs_orphaned++;
+ 
+ 	spin_lock(&r_xprt->rx_buf.rb_mrlock);
+@@ -458,7 +455,7 @@ frwr_op_reminv(struct rpcrdma_rep *rep, struct list_head *mrs)
+ 
+ 	list_for_each_entry(mr, mrs, mr_list)
+ 		if (mr->mr_handle == rep->rr_inv_rkey) {
+-			list_del(&mr->mr_list);
++			list_del_init(&mr->mr_list);
+ 			trace_xprtrdma_remoteinv(mr);
+ 			mr->frwr.fr_state = FRWR_IS_INVALID;
+ 			rpcrdma_mr_unmap_and_put(mr);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 25b0ecbd37e2..20ad7bc1021c 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -1244,6 +1244,11 @@ rpcrdma_mrs_destroy(struct rpcrdma_buffer *buf)
+ 		list_del(&mr->mr_all);
+ 
+ 		spin_unlock(&buf->rb_mrlock);
++
++		/* Ensure MW is not on any rl_registered list */
++		if (!list_empty(&mr->mr_list))
++			list_del(&mr->mr_list);
++
+ 		ia->ri_ops->ro_release_mr(mr);
+ 		count++;
+ 		spin_lock(&buf->rb_mrlock);
+diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
+index 430a6de8300e..99c96bf33fce 100644
+--- a/net/sunrpc/xprtrdma/xprt_rdma.h
++++ b/net/sunrpc/xprtrdma/xprt_rdma.h
+@@ -381,7 +381,7 @@ rpcrdma_mr_pop(struct list_head *list)
+ 	struct rpcrdma_mr *mr;
+ 
+ 	mr = list_first_entry(list, struct rpcrdma_mr, mr_list);
+-	list_del(&mr->mr_list);
++	list_del_init(&mr->mr_list);
+ 	return mr;
+ }
+ 
+diff --git a/net/tipc/monitor.c b/net/tipc/monitor.c
+index 32dc33a94bc7..5453e564da82 100644
+--- a/net/tipc/monitor.c
++++ b/net/tipc/monitor.c
+@@ -777,7 +777,7 @@ int __tipc_nl_add_monitor(struct net *net, struct tipc_nl_msg *msg,
+ 
+ 	ret = tipc_bearer_get_name(net, bearer_name, bearer_id);
+ 	if (ret || !mon)
+-		return -EINVAL;
++		return 0;
+ 
+ 	hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_family,
+ 			  NLM_F_MULTI, TIPC_NL_MON_GET);
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index 9036d8756e73..63f621e13d63 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -1863,6 +1863,7 @@ int tipc_nl_node_set_link(struct sk_buff *skb, struct genl_info *info)
+ int tipc_nl_node_get_link(struct sk_buff *skb, struct genl_info *info)
+ {
+ 	struct net *net = genl_info_net(info);
++	struct nlattr *attrs[TIPC_NLA_LINK_MAX + 1];
+ 	struct tipc_nl_msg msg;
+ 	char *name;
+ 	int err;
+@@ -1870,9 +1871,19 @@ int tipc_nl_node_get_link(struct sk_buff *skb, struct genl_info *info)
+ 	msg.portid = info->snd_portid;
+ 	msg.seq = info->snd_seq;
+ 
+-	if (!info->attrs[TIPC_NLA_LINK_NAME])
++	if (!info->attrs[TIPC_NLA_LINK])
+ 		return -EINVAL;
+-	name = nla_data(info->attrs[TIPC_NLA_LINK_NAME]);
++
++	err = nla_parse_nested(attrs, TIPC_NLA_LINK_MAX,
++			       info->attrs[TIPC_NLA_LINK],
++			       tipc_nl_link_policy, info->extack);
++	if (err)
++		return err;
++
++	if (!attrs[TIPC_NLA_LINK_NAME])
++		return -EINVAL;
++
++	name = nla_data(attrs[TIPC_NLA_LINK_NAME]);
+ 
+ 	msg.skb = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
+ 	if (!msg.skb)
+@@ -2145,8 +2156,8 @@ int tipc_nl_node_dump_monitor(struct sk_buff *skb, struct netlink_callback *cb)
+ 	struct net *net = sock_net(skb->sk);
+ 	u32 prev_bearer = cb->args[0];
+ 	struct tipc_nl_msg msg;
++	int bearer_id;
+ 	int err;
+-	int i;
+ 
+ 	if (prev_bearer == MAX_BEARERS)
+ 		return 0;
+@@ -2156,16 +2167,13 @@ int tipc_nl_node_dump_monitor(struct sk_buff *skb, struct netlink_callback *cb)
+ 	msg.seq = cb->nlh->nlmsg_seq;
+ 
+ 	rtnl_lock();
+-	for (i = prev_bearer; i < MAX_BEARERS; i++) {
+-		prev_bearer = i;
+-		err = __tipc_nl_add_monitor(net, &msg, prev_bearer);
++	for (bearer_id = prev_bearer; bearer_id < MAX_BEARERS; bearer_id++) {
++		err = __tipc_nl_add_monitor(net, &msg, bearer_id);
+ 		if (err)
+-			goto out;
++			break;
+ 	}
+-
+-out:
+ 	rtnl_unlock();
+-	cb->args[0] = prev_bearer;
++	cb->args[0] = bearer_id;
+ 
+ 	return skb->len;
+ }
+diff --git a/scripts/Makefile.gcc-plugins b/scripts/Makefile.gcc-plugins
+index b2a95af7df18..7f5c86246138 100644
+--- a/scripts/Makefile.gcc-plugins
++++ b/scripts/Makefile.gcc-plugins
+@@ -14,7 +14,7 @@ ifdef CONFIG_GCC_PLUGINS
+   endif
+ 
+   ifdef CONFIG_GCC_PLUGIN_SANCOV
+-    ifeq ($(CFLAGS_KCOV),)
++    ifeq ($(strip $(CFLAGS_KCOV)),)
+       # It is needed because of the gcc-plugin.sh and gcc version checks.
+       gcc-plugin-$(CONFIG_GCC_PLUGIN_SANCOV)           += sancov_plugin.so
+ 
+diff --git a/sound/soc/codecs/msm8916-wcd-analog.c b/sound/soc/codecs/msm8916-wcd-analog.c
+index 44062bb7bf2f..f53922f4ee4e 100644
+--- a/sound/soc/codecs/msm8916-wcd-analog.c
++++ b/sound/soc/codecs/msm8916-wcd-analog.c
+@@ -1185,7 +1185,8 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 		return irq;
+ 	}
+ 
+-	ret = devm_request_irq(dev, irq, pm8916_mbhc_switch_irq_handler,
++	ret = devm_request_threaded_irq(dev, irq, NULL,
++			       pm8916_mbhc_switch_irq_handler,
+ 			       IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING |
+ 			       IRQF_ONESHOT,
+ 			       "mbhc switch irq", priv);
+@@ -1199,7 +1200,8 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 			return irq;
+ 		}
+ 
+-		ret = devm_request_irq(dev, irq, mbhc_btn_press_irq_handler,
++		ret = devm_request_threaded_irq(dev, irq, NULL,
++				       mbhc_btn_press_irq_handler,
+ 				       IRQF_TRIGGER_RISING |
+ 				       IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 				       "mbhc btn press irq", priv);
+@@ -1212,7 +1214,8 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ 			return irq;
+ 		}
+ 
+-		ret = devm_request_irq(dev, irq, mbhc_btn_release_irq_handler,
++		ret = devm_request_threaded_irq(dev, irq, NULL,
++				       mbhc_btn_release_irq_handler,
+ 				       IRQF_TRIGGER_RISING |
+ 				       IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 				       "mbhc btn release irq", priv);
+diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c
+index 198df016802f..74cb1d28e0f4 100644
+--- a/sound/soc/codecs/rt5514.c
++++ b/sound/soc/codecs/rt5514.c
+@@ -89,6 +89,7 @@ static const struct reg_default rt5514_reg[] = {
+ 	{RT5514_PLL3_CALIB_CTRL5,	0x40220012},
+ 	{RT5514_DELAY_BUF_CTRL1,	0x7fff006a},
+ 	{RT5514_DELAY_BUF_CTRL3,	0x00000000},
++	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+ 	{RT5514_DOWNFILTER0_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER0_CTRL2,	0x00020c2f},
+ 	{RT5514_DOWNFILTER0_CTRL3,	0x10000362},
+@@ -181,6 +182,7 @@ static bool rt5514_readable_register(struct device *dev, unsigned int reg)
+ 	case RT5514_PLL3_CALIB_CTRL5:
+ 	case RT5514_DELAY_BUF_CTRL1:
+ 	case RT5514_DELAY_BUF_CTRL3:
++	case RT5514_ASRC_IN_CTRL1:
+ 	case RT5514_DOWNFILTER0_CTRL1:
+ 	case RT5514_DOWNFILTER0_CTRL2:
+ 	case RT5514_DOWNFILTER0_CTRL3:
+@@ -238,6 +240,7 @@ static bool rt5514_i2c_readable_register(struct device *dev,
+ 	case RT5514_DSP_MAPPING | RT5514_PLL3_CALIB_CTRL5:
+ 	case RT5514_DSP_MAPPING | RT5514_DELAY_BUF_CTRL1:
+ 	case RT5514_DSP_MAPPING | RT5514_DELAY_BUF_CTRL3:
++	case RT5514_DSP_MAPPING | RT5514_ASRC_IN_CTRL1:
+ 	case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL1:
+ 	case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL2:
+ 	case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL3:
+diff --git a/sound/soc/intel/Kconfig b/sound/soc/intel/Kconfig
+index ceb105cbd461..addac2a8e52a 100644
+--- a/sound/soc/intel/Kconfig
++++ b/sound/soc/intel/Kconfig
+@@ -72,24 +72,28 @@ config SND_SOC_INTEL_BAYTRAIL
+ 	  for Baytrail Chromebooks but this option is now deprecated and is
+ 	  not recommended, use SND_SST_ATOM_HIFI2_PLATFORM instead.
+ 
++config SND_SST_ATOM_HIFI2_PLATFORM
++	tristate
++	select SND_SOC_COMPRESS
++
+ config SND_SST_ATOM_HIFI2_PLATFORM_PCI
+-	tristate "PCI HiFi2 (Medfield, Merrifield) Platforms"
++	tristate "PCI HiFi2 (Merrifield) Platforms"
+ 	depends on X86 && PCI
+ 	select SND_SST_IPC_PCI
+-	select SND_SOC_COMPRESS
++	select SND_SST_ATOM_HIFI2_PLATFORM
+ 	help
+-	  If you have a Intel Medfield or Merrifield/Edison platform, then
++	  If you have a Intel Merrifield/Edison platform, then
+ 	  enable this option by saying Y or m. Distros will typically not
+-	  enable this option: Medfield devices are not available to
+-	  developers and while Merrifield/Edison can run a mainline kernel with
+-	  limited functionality it will require a firmware file which
+-	  is not in the standard firmware tree
++	  enable this option: while Merrifield/Edison can run a mainline
++	  kernel with limited functionality it will require a firmware file
++	  which is not in the standard firmware tree
+ 
+-config SND_SST_ATOM_HIFI2_PLATFORM
++config SND_SST_ATOM_HIFI2_PLATFORM_ACPI
+ 	tristate "ACPI HiFi2 (Baytrail, Cherrytrail) Platforms"
++	default ACPI
+ 	depends on X86 && ACPI
+ 	select SND_SST_IPC_ACPI
+-	select SND_SOC_COMPRESS
++	select SND_SST_ATOM_HIFI2_PLATFORM
+ 	select SND_SOC_ACPI_INTEL_MATCH
+ 	select IOSF_MBI
+ 	help
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index e5049fbfc4f1..30cdad2eab7f 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -510,7 +510,7 @@ static void remove_widget(struct snd_soc_component *comp,
+ 	 */
+ 	if (dobj->widget.kcontrol_type == SND_SOC_TPLG_TYPE_ENUM) {
+ 		/* enumerated widget mixer */
+-		for (i = 0; i < w->num_kcontrols; i++) {
++		for (i = 0; w->kcontrols != NULL && i < w->num_kcontrols; i++) {
+ 			struct snd_kcontrol *kcontrol = w->kcontrols[i];
+ 			struct soc_enum *se =
+ 				(struct soc_enum *)kcontrol->private_value;
+@@ -528,7 +528,7 @@ static void remove_widget(struct snd_soc_component *comp,
+ 		kfree(w->kcontrol_news);
+ 	} else {
+ 		/* volume mixer or bytes controls */
+-		for (i = 0; i < w->num_kcontrols; i++) {
++		for (i = 0; w->kcontrols != NULL && i < w->num_kcontrols; i++) {
+ 			struct snd_kcontrol *kcontrol = w->kcontrols[i];
+ 
+ 			if (dobj->widget.kcontrol_type
+@@ -2571,7 +2571,7 @@ int snd_soc_tplg_component_remove(struct snd_soc_component *comp, u32 index)
+ 
+ 			/* match index */
+ 			if (dobj->index != index &&
+-				dobj->index != SND_SOC_TPLG_INDEX_ALL)
++				index != SND_SOC_TPLG_INDEX_ALL)
+ 				continue;
+ 
+ 			switch (dobj->type) {
+diff --git a/tools/bpf/bpf_dbg.c b/tools/bpf/bpf_dbg.c
+index 4f254bcc4423..61b9aa5d6415 100644
+--- a/tools/bpf/bpf_dbg.c
++++ b/tools/bpf/bpf_dbg.c
+@@ -1063,7 +1063,7 @@ static int cmd_load_pcap(char *file)
+ 
+ static int cmd_load(char *arg)
+ {
+-	char *subcmd, *cont, *tmp = strdup(arg);
++	char *subcmd, *cont = NULL, *tmp = strdup(arg);
+ 	int ret = CMD_OK;
+ 
+ 	subcmd = strtok_r(tmp, " ", &cont);
+@@ -1073,7 +1073,10 @@ static int cmd_load(char *arg)
+ 		bpf_reset();
+ 		bpf_reset_breakpoints();
+ 
+-		ret = cmd_load_bpf(cont);
++		if (!cont)
++			ret = CMD_ERR;
++		else
++			ret = cmd_load_bpf(cont);
+ 	} else if (matches(subcmd, "pcap") == 0) {
+ 		ret = cmd_load_pcap(cont);
+ 	} else {
+diff --git a/tools/objtool/arch/x86/include/asm/insn.h b/tools/objtool/arch/x86/include/asm/insn.h
+index b3e32b010ab1..c2c01f84df75 100644
+--- a/tools/objtool/arch/x86/include/asm/insn.h
++++ b/tools/objtool/arch/x86/include/asm/insn.h
+@@ -208,4 +208,22 @@ static inline int insn_offset_immediate(struct insn *insn)
+ 	return insn_offset_displacement(insn) + insn->displacement.nbytes;
+ }
+ 
++#define POP_SS_OPCODE 0x1f
++#define MOV_SREG_OPCODE 0x8e
++
++/*
++ * Intel SDM Vol.3A 6.8.3 states;
++ * "Any single-step trap that would be delivered following the MOV to SS
++ * instruction or POP to SS instruction (because EFLAGS.TF is 1) is
++ * suppressed."
++ * This function returns true if @insn is MOV SS or POP SS. On these
++ * instructions, single stepping is suppressed.
++ */
++static inline int insn_masking_exception(struct insn *insn)
++{
++	return insn->opcode.bytes[0] == POP_SS_OPCODE ||
++		(insn->opcode.bytes[0] == MOV_SREG_OPCODE &&
++		 X86_MODRM_REG(insn->modrm.bytes[0]) == 2);
++}
++
+ #endif /* _ASM_X86_INSN_H */
+diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
+index b9f0a53dfa65..409d9d524bf9 100644
+--- a/tools/perf/util/cs-etm.c
++++ b/tools/perf/util/cs-etm.c
+@@ -212,6 +212,7 @@ static void cs_etm__free(struct perf_session *session)
+ 	for (i = 0; i < aux->num_cpu; i++)
+ 		zfree(&aux->metadata[i]);
+ 
++	thread__zput(aux->unknown_thread);
+ 	zfree(&aux->metadata);
+ 	zfree(&aux);
+ }
+@@ -980,6 +981,23 @@ int cs_etm__process_auxtrace_info(union perf_event *event,
+ 	etm->auxtrace.free = cs_etm__free;
+ 	session->auxtrace = &etm->auxtrace;
+ 
++	etm->unknown_thread = thread__new(999999999, 999999999);
++	if (!etm->unknown_thread)
++		goto err_free_queues;
++
++	/*
++	 * Initialize list node so that at thread__zput() we can avoid
++	 * segmentation fault at list_del_init().
++	 */
++	INIT_LIST_HEAD(&etm->unknown_thread->node);
++
++	err = thread__set_comm(etm->unknown_thread, "unknown", 0);
++	if (err)
++		goto err_delete_thread;
++
++	if (thread__init_map_groups(etm->unknown_thread, etm->machine))
++		goto err_delete_thread;
++
+ 	if (dump_trace) {
+ 		cs_etm__print_auxtrace_info(auxtrace_info->priv, num_cpu);
+ 		return 0;
+@@ -994,16 +1012,18 @@ int cs_etm__process_auxtrace_info(union perf_event *event,
+ 
+ 	err = cs_etm__synth_events(etm, session);
+ 	if (err)
+-		goto err_free_queues;
++		goto err_delete_thread;
+ 
+ 	err = auxtrace_queues__process_index(&etm->queues, session);
+ 	if (err)
+-		goto err_free_queues;
++		goto err_delete_thread;
+ 
+ 	etm->data_queued = etm->queues.populated;
+ 
+ 	return 0;
+ 
++err_delete_thread:
++	thread__zput(etm->unknown_thread);
+ err_free_queues:
+ 	auxtrace_queues__free(&etm->queues);
+ 	session->auxtrace = NULL;
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 57e38fdf0b34..60d0419bd41e 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -539,9 +539,10 @@ static bool pmu_is_uncore(const char *name)
+ 
+ /*
+  *  PMU CORE devices have different name other than cpu in sysfs on some
+- *  platforms. looking for possible sysfs files to identify as core device.
++ *  platforms.
++ *  Looking for possible sysfs files to identify the arm core device.
+  */
+-static int is_pmu_core(const char *name)
++static int is_arm_pmu_core(const char *name)
+ {
+ 	struct stat st;
+ 	char path[PATH_MAX];
+@@ -550,12 +551,6 @@ static int is_pmu_core(const char *name)
+ 	if (!sysfs)
+ 		return 0;
+ 
+-	/* Look for cpu sysfs (x86 and others) */
+-	scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/cpu", sysfs);
+-	if ((stat(path, &st) == 0) &&
+-			(strncmp(name, "cpu", strlen("cpu")) == 0))
+-		return 1;
+-
+ 	/* Look for cpu sysfs (specific to arm) */
+ 	scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/%s/cpus",
+ 				sysfs, name);
+@@ -651,6 +646,7 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
+ 	struct pmu_events_map *map;
+ 	struct pmu_event *pe;
+ 	const char *name = pmu->name;
++	const char *pname;
+ 
+ 	map = perf_pmu__find_map(pmu);
+ 	if (!map)
+@@ -669,11 +665,9 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
+ 			break;
+ 		}
+ 
+-		if (!is_pmu_core(name)) {
+-			/* check for uncore devices */
+-			if (pe->pmu == NULL)
+-				continue;
+-			if (strncmp(pe->pmu, name, strlen(pe->pmu)))
++		if (!is_arm_pmu_core(name)) {
++			pname = pe->pmu ? pe->pmu : "cpu";
++			if (strncmp(pname, name, strlen(pname)))
+ 				continue;
+ 		}
+ 
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index cc065d4bfafc..902597b0e492 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -2094,16 +2094,14 @@ static bool symbol__read_kptr_restrict(void)
+ 
+ int symbol__annotation_init(void)
+ {
++	if (symbol_conf.init_annotation)
++		return 0;
++
+ 	if (symbol_conf.initialized) {
+ 		pr_err("Annotation needs to be init before symbol__init()\n");
+ 		return -1;
+ 	}
+ 
+-	if (symbol_conf.init_annotation) {
+-		pr_warning("Annotation being initialized multiple times\n");
+-		return 0;
+-	}
+-
+ 	symbol_conf.priv_size += sizeof(struct annotation);
+ 	symbol_conf.init_annotation = true;
+ 	return 0;
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-multi-actions-accept.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-multi-actions-accept.tc
+new file mode 100644
+index 000000000000..c193dce611a2
+--- /dev/null
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-multi-actions-accept.tc
+@@ -0,0 +1,44 @@
++#!/bin/sh
++# description: event trigger - test multiple actions on hist trigger
++
++
++do_reset() {
++    reset_trigger
++    echo > set_event
++    clear_trace
++}
++
++fail() { #msg
++    do_reset
++    echo $1
++    exit_fail
++}
++
++if [ ! -f set_event ]; then
++    echo "event tracing is not supported"
++    exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++    echo "synthetic event is not supported"
++    exit_unsupported
++fi
++
++clear_synthetic_events
++reset_tracer
++do_reset
++
++echo "Test multiple actions on hist trigger"
++echo 'wakeup_latency u64 lat; pid_t pid' >> synthetic_events
++TRIGGER1=events/sched/sched_wakeup/trigger
++TRIGGER2=events/sched/sched_switch/trigger
++
++echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="cyclictest"' > $TRIGGER1
++echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0 if next_comm=="cyclictest"' >> $TRIGGER2
++echo 'hist:keys=next_pid:onmatch(sched.sched_wakeup).wakeup_latency(sched.sched_switch.$wakeup_lat,next_pid) if next_comm=="cyclictest"' >> $TRIGGER2
++echo 'hist:keys=next_pid:onmatch(sched.sched_wakeup).wakeup_latency(sched.sched_switch.$wakeup_lat,prev_pid) if next_comm=="cyclictest"' >> $TRIGGER2
++echo 'hist:keys=next_pid if next_comm=="cyclictest"' >> $TRIGGER2
++
++do_reset
++
++exit 0
+diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
+index d744991c0f4f..39f66bc29b82 100644
+--- a/tools/testing/selftests/x86/Makefile
++++ b/tools/testing/selftests/x86/Makefile
+@@ -11,7 +11,7 @@ CAN_BUILD_X86_64 := $(shell ./check_cc.sh $(CC) trivial_64bit_program.c)
+ 
+ TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \
+ 			check_initial_reg_state sigreturn iopl mpx-mini-test ioperm \
+-			protection_keys test_vdso test_vsyscall
++			protection_keys test_vdso test_vsyscall mov_ss_trap
+ TARGETS_C_32BIT_ONLY := entry_from_vm86 syscall_arg_fault test_syscall_vdso unwind_vdso \
+ 			test_FCMOV test_FCOMI test_FISTTP \
+ 			vdso_restorer
+diff --git a/tools/testing/selftests/x86/mov_ss_trap.c b/tools/testing/selftests/x86/mov_ss_trap.c
+new file mode 100644
+index 000000000000..3c3a022654f3
+--- /dev/null
++++ b/tools/testing/selftests/x86/mov_ss_trap.c
+@@ -0,0 +1,285 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * mov_ss_trap.c: Exercise the bizarre side effects of a watchpoint on MOV SS
++ *
++ * This does MOV SS from a watchpointed address followed by various
++ * types of kernel entries.  A MOV SS that hits a watchpoint will queue
++ * up a #DB trap but will not actually deliver that trap.  The trap
++ * will be delivered after the next instruction instead.  The CPU's logic
++ * seems to be:
++ *
++ *  - Any fault: drop the pending #DB trap.
++ *  - INT $N, INT3, INTO, SYSCALL, SYSENTER: enter the kernel and then
++ *    deliver #DB.
++ *  - ICEBP: enter the kernel but do not deliver the watchpoint trap
++ *  - breakpoint: only one #DB is delivered (phew!)
++ *
++ * There are plenty of ways for a kernel to handle this incorrectly.  This
++ * test tries to exercise all the cases.
++ *
++ * This should mostly cover CVE-2018-1087 and CVE-2018-8897.
++ */
++#define _GNU_SOURCE
++
++#include <stdlib.h>
++#include <sys/ptrace.h>
++#include <sys/types.h>
++#include <sys/wait.h>
++#include <sys/user.h>
++#include <sys/syscall.h>
++#include <unistd.h>
++#include <errno.h>
++#include <stddef.h>
++#include <stdio.h>
++#include <err.h>
++#include <string.h>
++#include <setjmp.h>
++#include <sys/prctl.h>
++
++#define X86_EFLAGS_RF (1UL << 16)
++
++#if __x86_64__
++# define REG_IP REG_RIP
++#else
++# define REG_IP REG_EIP
++#endif
++
++unsigned short ss;
++extern unsigned char breakpoint_insn[];
++sigjmp_buf jmpbuf;
++static unsigned char altstack_data[SIGSTKSZ];
++
++static void enable_watchpoint(void)
++{
++	pid_t parent = getpid();
++	int status;
++
++	pid_t child = fork();
++	if (child < 0)
++		err(1, "fork");
++
++	if (child) {
++		if (waitpid(child, &status, 0) != child)
++			err(1, "waitpid for child");
++	} else {
++		unsigned long dr0, dr1, dr7;
++
++		dr0 = (unsigned long)&ss;
++		dr1 = (unsigned long)breakpoint_insn;
++		dr7 = ((1UL << 1) |	/* G0 */
++		       (3UL << 16) |	/* RW0 = read or write */
++		       (1UL << 18) |	/* LEN0 = 2 bytes */
++		       (1UL << 3));	/* G1, RW1 = insn */
++
++		if (ptrace(PTRACE_ATTACH, parent, NULL, NULL) != 0)
++			err(1, "PTRACE_ATTACH");
++
++		if (waitpid(parent, &status, 0) != parent)
++			err(1, "waitpid for child");
++
++		if (ptrace(PTRACE_POKEUSER, parent, (void *)offsetof(struct user, u_debugreg[0]), dr0) != 0)
++			err(1, "PTRACE_POKEUSER DR0");
++
++		if (ptrace(PTRACE_POKEUSER, parent, (void *)offsetof(struct user, u_debugreg[1]), dr1) != 0)
++			err(1, "PTRACE_POKEUSER DR1");
++
++		if (ptrace(PTRACE_POKEUSER, parent, (void *)offsetof(struct user, u_debugreg[7]), dr7) != 0)
++			err(1, "PTRACE_POKEUSER DR7");
++
++		printf("\tDR0 = %lx, DR1 = %lx, DR7 = %lx\n", dr0, dr1, dr7);
++
++		if (ptrace(PTRACE_DETACH, parent, NULL, NULL) != 0)
++			err(1, "PTRACE_DETACH");
++
++		exit(0);
++	}
++}
++
++static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *),
++		       int flags)
++{
++	struct sigaction sa;
++	memset(&sa, 0, sizeof(sa));
++	sa.sa_sigaction = handler;
++	sa.sa_flags = SA_SIGINFO | flags;
++	sigemptyset(&sa.sa_mask);
++	if (sigaction(sig, &sa, 0))
++		err(1, "sigaction");
++}
++
++static char const * const signames[] = {
++	[SIGSEGV] = "SIGSEGV",
++	[SIGBUS] = "SIBGUS",
++	[SIGTRAP] = "SIGTRAP",
++	[SIGILL] = "SIGILL",
++};
++
++static void sigtrap(int sig, siginfo_t *si, void *ctx_void)
++{
++	ucontext_t *ctx = ctx_void;
++
++	printf("\tGot SIGTRAP with RIP=%lx, EFLAGS.RF=%d\n",
++	       (unsigned long)ctx->uc_mcontext.gregs[REG_IP],
++	       !!(ctx->uc_mcontext.gregs[REG_EFL] & X86_EFLAGS_RF));
++}
++
++static void handle_and_return(int sig, siginfo_t *si, void *ctx_void)
++{
++	ucontext_t *ctx = ctx_void;
++
++	printf("\tGot %s with RIP=%lx\n", signames[sig],
++	       (unsigned long)ctx->uc_mcontext.gregs[REG_IP]);
++}
++
++static void handle_and_longjmp(int sig, siginfo_t *si, void *ctx_void)
++{
++	ucontext_t *ctx = ctx_void;
++
++	printf("\tGot %s with RIP=%lx\n", signames[sig],
++	       (unsigned long)ctx->uc_mcontext.gregs[REG_IP]);
++
++	siglongjmp(jmpbuf, 1);
++}
++
++int main()
++{
++	unsigned long nr;
++
++	asm volatile ("mov %%ss, %[ss]" : [ss] "=m" (ss));
++	printf("\tSS = 0x%hx, &SS = 0x%p\n", ss, &ss);
++
++	if (prctl(PR_SET_PTRACER, PR_SET_PTRACER_ANY, 0, 0, 0) == 0)
++		printf("\tPR_SET_PTRACER_ANY succeeded\n");
++
++	printf("\tSet up a watchpoint\n");
++	sethandler(SIGTRAP, sigtrap, 0);
++	enable_watchpoint();
++
++	printf("[RUN]\tRead from watched memory (should get SIGTRAP)\n");
++	asm volatile ("mov %[ss], %[tmp]" : [tmp] "=r" (nr) : [ss] "m" (ss));
++
++	printf("[RUN]\tMOV SS; INT3\n");
++	asm volatile ("mov %[ss], %%ss; int3" :: [ss] "m" (ss));
++
++	printf("[RUN]\tMOV SS; INT 3\n");
++	asm volatile ("mov %[ss], %%ss; .byte 0xcd, 0x3" :: [ss] "m" (ss));
++
++	printf("[RUN]\tMOV SS; CS CS INT3\n");
++	asm volatile ("mov %[ss], %%ss; .byte 0x2e, 0x2e; int3" :: [ss] "m" (ss));
++
++	printf("[RUN]\tMOV SS; CSx14 INT3\n");
++	asm volatile ("mov %[ss], %%ss; .fill 14,1,0x2e; int3" :: [ss] "m" (ss));
++
++	printf("[RUN]\tMOV SS; INT 4\n");
++	sethandler(SIGSEGV, handle_and_return, SA_RESETHAND);
++	asm volatile ("mov %[ss], %%ss; int $4" :: [ss] "m" (ss));
++
++#ifdef __i386__
++	printf("[RUN]\tMOV SS; INTO\n");
++	sethandler(SIGSEGV, handle_and_return, SA_RESETHAND);
++	nr = -1;
++	asm volatile ("add $1, %[tmp]; mov %[ss], %%ss; into"
++		      : [tmp] "+r" (nr) : [ss] "m" (ss));
++#endif
++
++	if (sigsetjmp(jmpbuf, 1) == 0) {
++		printf("[RUN]\tMOV SS; ICEBP\n");
++
++		/* Some emulators (e.g. QEMU TCG) don't emulate ICEBP. */
++		sethandler(SIGILL, handle_and_longjmp, SA_RESETHAND);
++
++		asm volatile ("mov %[ss], %%ss; .byte 0xf1" :: [ss] "m" (ss));
++	}
++
++	if (sigsetjmp(jmpbuf, 1) == 0) {
++		printf("[RUN]\tMOV SS; CLI\n");
++		sethandler(SIGSEGV, handle_and_longjmp, SA_RESETHAND);
++		asm volatile ("mov %[ss], %%ss; cli" :: [ss] "m" (ss));
++	}
++
++	if (sigsetjmp(jmpbuf, 1) == 0) {
++		printf("[RUN]\tMOV SS; #PF\n");
++		sethandler(SIGSEGV, handle_and_longjmp, SA_RESETHAND);
++		asm volatile ("mov %[ss], %%ss; mov (-1), %[tmp]"
++			      : [tmp] "=r" (nr) : [ss] "m" (ss));
++	}
++
++	/*
++	 * INT $1: if #DB has DPL=3 and there isn't special handling,
++	 * then the kernel will die.
++	 */
++	if (sigsetjmp(jmpbuf, 1) == 0) {
++		printf("[RUN]\tMOV SS; INT 1\n");
++		sethandler(SIGSEGV, handle_and_longjmp, SA_RESETHAND);
++		asm volatile ("mov %[ss], %%ss; int $1" :: [ss] "m" (ss));
++	}
++
++#ifdef __x86_64__
++	/*
++	 * In principle, we should test 32-bit SYSCALL as well, but
++	 * the calling convention is so unpredictable that it's
++	 * not obviously worth the effort.
++	 */
++	if (sigsetjmp(jmpbuf, 1) == 0) {
++		printf("[RUN]\tMOV SS; SYSCALL\n");
++		sethandler(SIGILL, handle_and_longjmp, SA_RESETHAND);
++		nr = SYS_getpid;
++		/*
++		 * Toggle the high bit of RSP to make it noncanonical to
++		 * strengthen this test on non-SMAP systems.
++		 */
++		asm volatile ("btc $63, %%rsp\n\t"
++			      "mov %[ss], %%ss; syscall\n\t"
++			      "btc $63, %%rsp"
++			      : "+a" (nr) : [ss] "m" (ss)
++			      : "rcx"
++#ifdef __x86_64__
++				, "r11"
++#endif
++			);
++	}
++#endif
++
++	printf("[RUN]\tMOV SS; breakpointed NOP\n");
++	asm volatile ("mov %[ss], %%ss; breakpoint_insn: nop" :: [ss] "m" (ss));
++
++	/*
++	 * Invoking SYSENTER directly breaks all the rules.  Just handle
++	 * the SIGSEGV.
++	 */
++	if (sigsetjmp(jmpbuf, 1) == 0) {
++		printf("[RUN]\tMOV SS; SYSENTER\n");
++		stack_t stack = {
++			.ss_sp = altstack_data,
++			.ss_size = SIGSTKSZ,
++		};
++		if (sigaltstack(&stack, NULL) != 0)
++			err(1, "sigaltstack");
++		sethandler(SIGSEGV, handle_and_longjmp, SA_RESETHAND | SA_ONSTACK);
++		nr = SYS_getpid;
++		asm volatile ("mov %[ss], %%ss; SYSENTER" : "+a" (nr)
++			      : [ss] "m" (ss) : "flags", "rcx"
++#ifdef __x86_64__
++				, "r11"
++#endif
++			);
++
++		/* We're unreachable here.  SYSENTER forgets RIP. */
++	}
++
++	if (sigsetjmp(jmpbuf, 1) == 0) {
++		printf("[RUN]\tMOV SS; INT $0x80\n");
++		sethandler(SIGSEGV, handle_and_longjmp, SA_RESETHAND);
++		nr = 20;	/* compat getpid */
++		asm volatile ("mov %[ss], %%ss; int $0x80"
++			      : "+a" (nr) : [ss] "m" (ss)
++			      : "flags"
++#ifdef __x86_64__
++				, "r8", "r9", "r10", "r11"
++#endif
++			);
++	}
++
++	printf("[OK]\tI aten't dead\n");
++	return 0;
++}
+diff --git a/tools/testing/selftests/x86/mpx-mini-test.c b/tools/testing/selftests/x86/mpx-mini-test.c
+index 9c0325e1ea68..50f7e9272481 100644
+--- a/tools/testing/selftests/x86/mpx-mini-test.c
++++ b/tools/testing/selftests/x86/mpx-mini-test.c
+@@ -368,6 +368,11 @@ static int expected_bnd_index = -1;
+ uint64_t shadow_plb[NR_MPX_BOUNDS_REGISTERS][2]; /* shadow MPX bound registers */
+ unsigned long shadow_map[NR_MPX_BOUNDS_REGISTERS];
+ 
++/* Failed address bound checks: */
++#ifndef SEGV_BNDERR
++# define SEGV_BNDERR	3
++#endif
++
+ /*
+  * The kernel is supposed to provide some information about the bounds
+  * exception in the siginfo.  It should match what we have in the bounds
+@@ -419,8 +424,6 @@ void handler(int signum, siginfo_t *si, void *vucontext)
+ 		br_count++;
+ 		dprintf1("#BR 0x%jx (total seen: %d)\n", status, br_count);
+ 
+-#define SEGV_BNDERR     3  /* failed address bound checks */
+-
+ 		dprintf2("Saw a #BR! status 0x%jx at %016lx br_reason: %jx\n",
+ 				status, ip, br_reason);
+ 		dprintf2("si_signo: %d\n", si->si_signo);
+diff --git a/tools/testing/selftests/x86/pkey-helpers.h b/tools/testing/selftests/x86/pkey-helpers.h
+index b3cb7670e026..254e5436bdd9 100644
+--- a/tools/testing/selftests/x86/pkey-helpers.h
++++ b/tools/testing/selftests/x86/pkey-helpers.h
+@@ -26,30 +26,26 @@ static inline void sigsafe_printf(const char *format, ...)
+ {
+ 	va_list ap;
+ 
+-	va_start(ap, format);
+ 	if (!dprint_in_signal) {
++		va_start(ap, format);
+ 		vprintf(format, ap);
++		va_end(ap);
+ 	} else {
+ 		int ret;
+-		int len = vsnprintf(dprint_in_signal_buffer,
+-				    DPRINT_IN_SIGNAL_BUF_SIZE,
+-				    format, ap);
+ 		/*
+-		 * len is amount that would have been printed,
+-		 * but actual write is truncated at BUF_SIZE.
++		 * No printf() functions are signal-safe.
++		 * They deadlock easily. Write the format
++		 * string to get some output, even if
++		 * incomplete.
+ 		 */
+-		if (len > DPRINT_IN_SIGNAL_BUF_SIZE)
+-			len = DPRINT_IN_SIGNAL_BUF_SIZE;
+-		ret = write(1, dprint_in_signal_buffer, len);
++		ret = write(1, format, strlen(format));
+ 		if (ret < 0)
+-			abort();
++			exit(1);
+ 	}
+-	va_end(ap);
+ }
+ #define dprintf_level(level, args...) do {	\
+ 	if (level <= DEBUG_LEVEL)		\
+ 		sigsafe_printf(args);		\
+-	fflush(NULL);				\
+ } while (0)
+ #define dprintf0(args...) dprintf_level(0, args)
+ #define dprintf1(args...) dprintf_level(1, args)
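The pkey-helpers change is about async-signal-safety: `printf()` and `vsnprintf()` may take internal locks and can deadlock if the signal arrived while another caller held them, whereas `write(2)` is on the POSIX async-signal-safe list. A minimal sketch of the in-handler fallback the patch adopts (the helper name is hypothetical):

```c
#include <string.h>
#include <unistd.h>

/*
 * Log a message in a way that is safe to call from a signal handler:
 * no formatting, no stdio buffering or locks, just a raw write(2) of
 * the string.  This mirrors the patched sigsafe_printf(), which writes
 * the format string itself when dprint_in_signal is set, trading
 * complete output for the guarantee of not deadlocking.
 */
static ssize_t sig_log(const char *msg)
{
	return write(STDOUT_FILENO, msg, strlen(msg));
}
```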
+diff --git a/tools/testing/selftests/x86/protection_keys.c b/tools/testing/selftests/x86/protection_keys.c
+index f15aa5a76fe3..460b4bdf4c1e 100644
+--- a/tools/testing/selftests/x86/protection_keys.c
++++ b/tools/testing/selftests/x86/protection_keys.c
+@@ -72,10 +72,9 @@ extern void abort_hooks(void);
+ 				test_nr, iteration_nr);	\
+ 		dprintf0("errno at assert: %d", errno);	\
+ 		abort_hooks();			\
+-		assert(condition);		\
++		exit(__LINE__);			\
+ 	}					\
+ } while (0)
+-#define raw_assert(cond) assert(cond)
+ 
+ void cat_into_file(char *str, char *file)
+ {
+@@ -87,12 +86,17 @@ void cat_into_file(char *str, char *file)
+ 	 * these need to be raw because they are called under
+ 	 * pkey_assert()
+ 	 */
+-	raw_assert(fd >= 0);
++	if (fd < 0) {
++		fprintf(stderr, "error opening '%s'\n", str);
++		perror("error: ");
++		exit(__LINE__);
++	}
++
+ 	ret = write(fd, str, strlen(str));
+ 	if (ret != strlen(str)) {
+ 		perror("write to file failed");
+ 		fprintf(stderr, "filename: '%s' str: '%s'\n", file, str);
+-		raw_assert(0);
++		exit(__LINE__);
+ 	}
+ 	close(fd);
+ }
+@@ -191,26 +195,30 @@ void lots_o_noops_around_write(int *write_to_me)
+ #ifdef __i386__
+ 
+ #ifndef SYS_mprotect_key
+-# define SYS_mprotect_key 380
++# define SYS_mprotect_key	380
+ #endif
++
+ #ifndef SYS_pkey_alloc
+-# define SYS_pkey_alloc	 381
+-# define SYS_pkey_free	 382
++# define SYS_pkey_alloc		381
++# define SYS_pkey_free		382
+ #endif
+-#define REG_IP_IDX REG_EIP
+-#define si_pkey_offset 0x14
++
++#define REG_IP_IDX		REG_EIP
++#define si_pkey_offset		0x14
+ 
+ #else
+ 
+ #ifndef SYS_mprotect_key
+-# define SYS_mprotect_key 329
++# define SYS_mprotect_key	329
+ #endif
++
+ #ifndef SYS_pkey_alloc
+-# define SYS_pkey_alloc	 330
+-# define SYS_pkey_free	 331
++# define SYS_pkey_alloc		330
++# define SYS_pkey_free		331
+ #endif
+-#define REG_IP_IDX REG_RIP
+-#define si_pkey_offset 0x20
++
++#define REG_IP_IDX		REG_RIP
++#define si_pkey_offset		0x20
+ 
+ #endif
+ 
+@@ -225,8 +233,14 @@ void dump_mem(void *dumpme, int len_bytes)
+ 	}
+ }
+ 
+-#define SEGV_BNDERR     3  /* failed address bound checks */
+-#define SEGV_PKUERR     4
++/* Failed address bound checks: */
++#ifndef SEGV_BNDERR
++# define SEGV_BNDERR		3
++#endif
++
++#ifndef SEGV_PKUERR
++# define SEGV_PKUERR		4
++#endif
+ 
+ static char *si_code_str(int si_code)
+ {
+@@ -289,13 +303,6 @@ void signal_handler(int signum, siginfo_t *si, void *vucontext)
+ 		dump_mem(pkru_ptr - 128, 256);
+ 	pkey_assert(*pkru_ptr);
+ 
+-	si_pkey_ptr = (u32 *)(((u8 *)si) + si_pkey_offset);
+-	dprintf1("si_pkey_ptr: %p\n", si_pkey_ptr);
+-	dump_mem(si_pkey_ptr - 8, 24);
+-	siginfo_pkey = *si_pkey_ptr;
+-	pkey_assert(siginfo_pkey < NR_PKEYS);
+-	last_si_pkey = siginfo_pkey;
+-
+ 	if ((si->si_code == SEGV_MAPERR) ||
+ 	    (si->si_code == SEGV_ACCERR) ||
+ 	    (si->si_code == SEGV_BNDERR)) {
+@@ -303,6 +310,13 @@ void signal_handler(int signum, siginfo_t *si, void *vucontext)
+ 		exit(4);
+ 	}
+ 
++	si_pkey_ptr = (u32 *)(((u8 *)si) + si_pkey_offset);
++	dprintf1("si_pkey_ptr: %p\n", si_pkey_ptr);
++	dump_mem((u8 *)si_pkey_ptr - 8, 24);
++	siginfo_pkey = *si_pkey_ptr;
++	pkey_assert(siginfo_pkey < NR_PKEYS);
++	last_si_pkey = siginfo_pkey;
++
+ 	dprintf1("signal pkru from xsave: %08x\n", *pkru_ptr);
+ 	/* need __rdpkru() version so we do not do shadow_pkru checking */
+ 	dprintf1("signal pkru from  pkru: %08x\n", __rdpkru());
+@@ -311,22 +325,6 @@ void signal_handler(int signum, siginfo_t *si, void *vucontext)
+ 	dprintf1("WARNING: set PRKU=0 to allow faulting instruction to continue\n");
+ 	pkru_faults++;
+ 	dprintf1("<<<<==================================================\n");
+-	return;
+-	if (trapno == 14) {
+-		fprintf(stderr,
+-			"ERROR: In signal handler, page fault, trapno = %d, ip = %016lx\n",
+-			trapno, ip);
+-		fprintf(stderr, "si_addr %p\n", si->si_addr);
+-		fprintf(stderr, "REG_ERR: %lx\n",
+-				(unsigned long)uctxt->uc_mcontext.gregs[REG_ERR]);
+-		exit(1);
+-	} else {
+-		fprintf(stderr, "unexpected trap %d! at 0x%lx\n", trapno, ip);
+-		fprintf(stderr, "si_addr %p\n", si->si_addr);
+-		fprintf(stderr, "REG_ERR: %lx\n",
+-				(unsigned long)uctxt->uc_mcontext.gregs[REG_ERR]);
+-		exit(2);
+-	}
+ 	dprint_in_signal = 0;
+ }
+ 
+@@ -393,10 +391,15 @@ pid_t fork_lazy_child(void)
+ 	return forkret;
+ }
+ 
+-#define PKEY_DISABLE_ACCESS    0x1
+-#define PKEY_DISABLE_WRITE     0x2
++#ifndef PKEY_DISABLE_ACCESS
++# define PKEY_DISABLE_ACCESS	0x1
++#endif
++
++#ifndef PKEY_DISABLE_WRITE
++# define PKEY_DISABLE_WRITE	0x2
++#endif
+ 
+-u32 pkey_get(int pkey, unsigned long flags)
++static u32 hw_pkey_get(int pkey, unsigned long flags)
+ {
+ 	u32 mask = (PKEY_DISABLE_ACCESS|PKEY_DISABLE_WRITE);
+ 	u32 pkru = __rdpkru();
+@@ -418,7 +421,7 @@ u32 pkey_get(int pkey, unsigned long flags)
+ 	return masked_pkru;
+ }
+ 
+-int pkey_set(int pkey, unsigned long rights, unsigned long flags)
++static int hw_pkey_set(int pkey, unsigned long rights, unsigned long flags)
+ {
+ 	u32 mask = (PKEY_DISABLE_ACCESS|PKEY_DISABLE_WRITE);
+ 	u32 old_pkru = __rdpkru();
+@@ -452,15 +455,15 @@ void pkey_disable_set(int pkey, int flags)
+ 		pkey, flags);
+ 	pkey_assert(flags & (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE));
+ 
+-	pkey_rights = pkey_get(pkey, syscall_flags);
++	pkey_rights = hw_pkey_get(pkey, syscall_flags);
+ 
+-	dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
++	dprintf1("%s(%d) hw_pkey_get(%d): %x\n", __func__,
+ 			pkey, pkey, pkey_rights);
+ 	pkey_assert(pkey_rights >= 0);
+ 
+ 	pkey_rights |= flags;
+ 
+-	ret = pkey_set(pkey, pkey_rights, syscall_flags);
++	ret = hw_pkey_set(pkey, pkey_rights, syscall_flags);
+ 	assert(!ret);
+ 	/*pkru and flags have the same format */
+ 	shadow_pkru |= flags << (pkey * 2);
+@@ -468,8 +471,8 @@ void pkey_disable_set(int pkey, int flags)
+ 
+ 	pkey_assert(ret >= 0);
+ 
+-	pkey_rights = pkey_get(pkey, syscall_flags);
+-	dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
++	pkey_rights = hw_pkey_get(pkey, syscall_flags);
++	dprintf1("%s(%d) hw_pkey_get(%d): %x\n", __func__,
+ 			pkey, pkey, pkey_rights);
+ 
+ 	dprintf1("%s(%d) pkru: 0x%x\n", __func__, pkey, rdpkru());
+@@ -483,24 +486,24 @@ void pkey_disable_clear(int pkey, int flags)
+ {
+ 	unsigned long syscall_flags = 0;
+ 	int ret;
+-	int pkey_rights = pkey_get(pkey, syscall_flags);
++	int pkey_rights = hw_pkey_get(pkey, syscall_flags);
+ 	u32 orig_pkru = rdpkru();
+ 
+ 	pkey_assert(flags & (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE));
+ 
+-	dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
++	dprintf1("%s(%d) hw_pkey_get(%d): %x\n", __func__,
+ 			pkey, pkey, pkey_rights);
+ 	pkey_assert(pkey_rights >= 0);
+ 
+ 	pkey_rights |= flags;
+ 
+-	ret = pkey_set(pkey, pkey_rights, 0);
++	ret = hw_pkey_set(pkey, pkey_rights, 0);
+ 	/* pkru and flags have the same format */
+ 	shadow_pkru &= ~(flags << (pkey * 2));
+ 	pkey_assert(ret >= 0);
+ 
+-	pkey_rights = pkey_get(pkey, syscall_flags);
+-	dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
++	pkey_rights = hw_pkey_get(pkey, syscall_flags);
++	dprintf1("%s(%d) hw_pkey_get(%d): %x\n", __func__,
+ 			pkey, pkey, pkey_rights);
+ 
+ 	dprintf1("%s(%d) pkru: 0x%x\n", __func__, pkey, rdpkru());
+@@ -674,10 +677,12 @@ int mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
+ struct pkey_malloc_record {
+ 	void *ptr;
+ 	long size;
++	int prot;
+ };
+ struct pkey_malloc_record *pkey_malloc_records;
++struct pkey_malloc_record *pkey_last_malloc_record;
+ long nr_pkey_malloc_records;
+-void record_pkey_malloc(void *ptr, long size)
++void record_pkey_malloc(void *ptr, long size, int prot)
+ {
+ 	long i;
+ 	struct pkey_malloc_record *rec = NULL;
+@@ -709,6 +714,8 @@ void record_pkey_malloc(void *ptr, long size)
+ 		(int)(rec - pkey_malloc_records), rec, ptr, size);
+ 	rec->ptr = ptr;
+ 	rec->size = size;
++	rec->prot = prot;
++	pkey_last_malloc_record = rec;
+ 	nr_pkey_malloc_records++;
+ }
+ 
+@@ -753,7 +760,7 @@ void *malloc_pkey_with_mprotect(long size, int prot, u16 pkey)
+ 	pkey_assert(ptr != (void *)-1);
+ 	ret = mprotect_pkey((void *)ptr, PAGE_SIZE, prot, pkey);
+ 	pkey_assert(!ret);
+-	record_pkey_malloc(ptr, size);
++	record_pkey_malloc(ptr, size, prot);
+ 	rdpkru();
+ 
+ 	dprintf1("%s() for pkey %d @ %p\n", __func__, pkey, ptr);
+@@ -774,7 +781,7 @@ void *malloc_pkey_anon_huge(long size, int prot, u16 pkey)
+ 	size = ALIGN_UP(size, HPAGE_SIZE * 2);
+ 	ptr = mmap(NULL, size, PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
+ 	pkey_assert(ptr != (void *)-1);
+-	record_pkey_malloc(ptr, size);
++	record_pkey_malloc(ptr, size, prot);
+ 	mprotect_pkey(ptr, size, prot, pkey);
+ 
+ 	dprintf1("unaligned ptr: %p\n", ptr);
+@@ -847,7 +854,7 @@ void *malloc_pkey_hugetlb(long size, int prot, u16 pkey)
+ 	pkey_assert(ptr != (void *)-1);
+ 	mprotect_pkey(ptr, size, prot, pkey);
+ 
+-	record_pkey_malloc(ptr, size);
++	record_pkey_malloc(ptr, size, prot);
+ 
+ 	dprintf1("mmap()'d hugetlbfs for pkey %d @ %p\n", pkey, ptr);
+ 	return ptr;
+@@ -869,7 +876,7 @@ void *malloc_pkey_mmap_dax(long size, int prot, u16 pkey)
+ 
+ 	mprotect_pkey(ptr, size, prot, pkey);
+ 
+-	record_pkey_malloc(ptr, size);
++	record_pkey_malloc(ptr, size, prot);
+ 
+ 	dprintf1("mmap()'d for pkey %d @ %p\n", pkey, ptr);
+ 	close(fd);
+@@ -918,13 +925,21 @@ void *malloc_pkey(long size, int prot, u16 pkey)
+ }
+ 
+ int last_pkru_faults;
++#define UNKNOWN_PKEY -2
+ void expected_pk_fault(int pkey)
+ {
+ 	dprintf2("%s(): last_pkru_faults: %d pkru_faults: %d\n",
+ 			__func__, last_pkru_faults, pkru_faults);
+ 	dprintf2("%s(%d): last_si_pkey: %d\n", __func__, pkey, last_si_pkey);
+ 	pkey_assert(last_pkru_faults + 1 == pkru_faults);
+-	pkey_assert(last_si_pkey == pkey);
++
++	/*
++	 * For exec-only memory, we do not know the pkey in
++	 * advance, so skip this check.
++	 */
++	if (pkey != UNKNOWN_PKEY)
++		pkey_assert(last_si_pkey == pkey);
++
+ 	/*
+ 	 * The signal handler should have cleared out PKRU to let the
+ 	 * test program continue.  We now have to restore it.
+@@ -939,10 +954,11 @@ void expected_pk_fault(int pkey)
+ 	last_si_pkey = -1;
+ }
+ 
+-void do_not_expect_pk_fault(void)
+-{
+-	pkey_assert(last_pkru_faults == pkru_faults);
+-}
++#define do_not_expect_pk_fault(msg)	do {			\
++	if (last_pkru_faults != pkru_faults)			\
++		dprintf0("unexpected PK fault: %s\n", msg);	\
++	pkey_assert(last_pkru_faults == pkru_faults);		\
++} while (0)
+ 
+ int test_fds[10] = { -1 };
+ int nr_test_fds;
+@@ -1151,12 +1167,15 @@ void test_pkey_alloc_exhaust(int *ptr, u16 pkey)
+ 	pkey_assert(i < NR_PKEYS*2);
+ 
+ 	/*
+-	 * There are 16 pkeys supported in hardware.  One is taken
+-	 * up for the default (0) and another can be taken up by
+-	 * an execute-only mapping.  Ensure that we can allocate
+-	 * at least 14 (16-2).
++	 * There are 16 pkeys supported in hardware.  Three are
++	 * allocated by the time we get here:
++	 *   1. The default key (0)
++	 *   2. One possibly consumed by an execute-only mapping.
++	 *   3. One allocated by the test code and passed in via
++	 *      'pkey' to this function.
++	 * Ensure that we can allocate at least another 13 (16-3).
+ 	 */
+-	pkey_assert(i >= NR_PKEYS-2);
++	pkey_assert(i >= NR_PKEYS-3);
+ 
+ 	for (i = 0; i < nr_allocated_pkeys; i++) {
+ 		err = sys_pkey_free(allocated_pkeys[i]);
+@@ -1165,6 +1184,35 @@ void test_pkey_alloc_exhaust(int *ptr, u16 pkey)
+ 	}
+ }
+ 
++/*
++ * pkey 0 is special.  It is allocated by default, so you do not
++ * have to call pkey_alloc() to use it first.  Make sure that it
++ * is usable.
++ */
++void test_mprotect_with_pkey_0(int *ptr, u16 pkey)
++{
++	long size;
++	int prot;
++
++	assert(pkey_last_malloc_record);
++	size = pkey_last_malloc_record->size;
++	/*
++	 * This is a bit of a hack.  But mprotect() requires
++	 * huge-page-aligned sizes when operating on hugetlbfs.
++	 * So, make sure that we use something that's a multiple
++	 * of a huge page when we can.
++	 */
++	if (size >= HPAGE_SIZE)
++		size = HPAGE_SIZE;
++	prot = pkey_last_malloc_record->prot;
++
++	/* Use pkey 0 */
++	mprotect_pkey(ptr, size, prot, 0);
++
++	/* Make sure that we can set it back to the original pkey. */
++	mprotect_pkey(ptr, size, prot, pkey);
++}
++
+ void test_ptrace_of_child(int *ptr, u16 pkey)
+ {
+ 	__attribute__((__unused__)) int peek_result;
+@@ -1228,7 +1276,7 @@ void test_ptrace_of_child(int *ptr, u16 pkey)
+ 	pkey_assert(ret != -1);
+ 	/* Now access from the current task, and expect NO exception: */
+ 	peek_result = read_ptr(plain_ptr);
+-	do_not_expect_pk_fault();
++	do_not_expect_pk_fault("read plain pointer after ptrace");
+ 
+ 	ret = ptrace(PTRACE_DETACH, child_pid, ignored, 0);
+ 	pkey_assert(ret != -1);
+@@ -1241,12 +1289,9 @@ void test_ptrace_of_child(int *ptr, u16 pkey)
+ 	free(plain_ptr_unaligned);
+ }
+ 
+-void test_executing_on_unreadable_memory(int *ptr, u16 pkey)
++void *get_pointer_to_instructions(void)
+ {
+ 	void *p1;
+-	int scratch;
+-	int ptr_contents;
+-	int ret;
+ 
+ 	p1 = ALIGN_PTR_UP(&lots_o_noops_around_write, PAGE_SIZE);
+ 	dprintf3("&lots_o_noops: %p\n", &lots_o_noops_around_write);
+@@ -1256,7 +1301,23 @@ void test_executing_on_unreadable_memory(int *ptr, u16 pkey)
+ 	/* Point 'p1' at the *second* page of the function: */
+ 	p1 += PAGE_SIZE;
+ 
++	/*
++	 * Try to ensure we fault this in on next touch to ensure
++	 * we get an instruction fault as opposed to a data one
++	 */
+ 	madvise(p1, PAGE_SIZE, MADV_DONTNEED);
++
++	return p1;
++}
++
++void test_executing_on_unreadable_memory(int *ptr, u16 pkey)
++{
++	void *p1;
++	int scratch;
++	int ptr_contents;
++	int ret;
++
++	p1 = get_pointer_to_instructions();
+ 	lots_o_noops_around_write(&scratch);
+ 	ptr_contents = read_ptr(p1);
+ 	dprintf2("ptr (%p) contents@%d: %x\n", p1, __LINE__, ptr_contents);
+@@ -1272,12 +1333,55 @@ void test_executing_on_unreadable_memory(int *ptr, u16 pkey)
+ 	 */
+ 	madvise(p1, PAGE_SIZE, MADV_DONTNEED);
+ 	lots_o_noops_around_write(&scratch);
+-	do_not_expect_pk_fault();
++	do_not_expect_pk_fault("executing on PROT_EXEC memory");
+ 	ptr_contents = read_ptr(p1);
+ 	dprintf2("ptr (%p) contents@%d: %x\n", p1, __LINE__, ptr_contents);
+ 	expected_pk_fault(pkey);
+ }
+ 
++void test_implicit_mprotect_exec_only_memory(int *ptr, u16 pkey)
++{
++	void *p1;
++	int scratch;
++	int ptr_contents;
++	int ret;
++
++	dprintf1("%s() start\n", __func__);
++
++	p1 = get_pointer_to_instructions();
++	lots_o_noops_around_write(&scratch);
++	ptr_contents = read_ptr(p1);
++	dprintf2("ptr (%p) contents@%d: %x\n", p1, __LINE__, ptr_contents);
++
++	/* Use a *normal* mprotect(), not mprotect_pkey(): */
++	ret = mprotect(p1, PAGE_SIZE, PROT_EXEC);
++	pkey_assert(!ret);
++
++	dprintf2("pkru: %x\n", rdpkru());
++
++	/* Make sure this is an *instruction* fault */
++	madvise(p1, PAGE_SIZE, MADV_DONTNEED);
++	lots_o_noops_around_write(&scratch);
++	do_not_expect_pk_fault("executing on PROT_EXEC memory");
++	ptr_contents = read_ptr(p1);
++	dprintf2("ptr (%p) contents@%d: %x\n", p1, __LINE__, ptr_contents);
++	expected_pk_fault(UNKNOWN_PKEY);
++
++	/*
++	 * Put the memory back to non-PROT_EXEC.  Should clear the
++	 * exec-only pkey off the VMA and allow it to be readable
++	 * again.  Go to PROT_NONE first to check for a kernel bug
++	 * that did not clear the pkey when doing PROT_NONE.
++	 */
++	ret = mprotect(p1, PAGE_SIZE, PROT_NONE);
++	pkey_assert(!ret);
++
++	ret = mprotect(p1, PAGE_SIZE, PROT_READ|PROT_EXEC);
++	pkey_assert(!ret);
++	ptr_contents = read_ptr(p1);
++	do_not_expect_pk_fault("plain read on recently PROT_EXEC area");
++}
++
+ void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey)
+ {
+ 	int size = PAGE_SIZE;
+@@ -1302,6 +1406,8 @@ void (*pkey_tests[])(int *ptr, u16 pkey) = {
+ 	test_kernel_gup_of_access_disabled_region,
+ 	test_kernel_gup_write_to_write_disabled_region,
+ 	test_executing_on_unreadable_memory,
++	test_implicit_mprotect_exec_only_memory,
++	test_mprotect_with_pkey_0,
+ 	test_ptrace_of_child,
+ 	test_pkey_syscalls_on_non_allocated_pkey,
+ 	test_pkey_syscalls_bad_args,
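Several protection_keys.c hunks shift rights by `pkey * 2` because PKRU packs two bits per protection key: `PKEY_DISABLE_ACCESS` in bit `2*pkey` and `PKEY_DISABLE_WRITE` in bit `2*pkey + 1`. A pure-arithmetic model of the masking that `hw_pkey_get()`/`hw_pkey_set()` wrap around RDPKRU/WRPKRU (helper names are illustrative, and the register access itself is omitted):

```c
#ifndef PKEY_DISABLE_ACCESS
# define PKEY_DISABLE_ACCESS	0x1
#endif

#ifndef PKEY_DISABLE_WRITE
# define PKEY_DISABLE_WRITE	0x2
#endif

/* Read the two rights bits for one pkey out of a PKRU value. */
static unsigned int pkru_extract_rights(unsigned int pkru, int pkey)
{
	return (pkru >> (pkey * 2)) &
	       (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE);
}

/* Replace the two rights bits for one pkey, leaving other keys alone. */
static unsigned int pkru_insert_rights(unsigned int pkru, int pkey,
				       unsigned int rights)
{
	unsigned int mask = (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)
			    << (pkey * 2);

	return (pkru & ~mask) | (rights << (pkey * 2));
}
```

This is the same layout the test tracks in `shadow_pkru` ("pkru and flags have the same format"), which is why disabling flags for a key is `shadow_pkru |= flags << (pkey * 2)`.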
+diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
+index e21e2f49b005..ffc587bf4742 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
++++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
+@@ -14,6 +14,8 @@
+ #include <linux/irqchip/arm-gic.h>
+ #include <linux/kvm.h>
+ #include <linux/kvm_host.h>
++#include <linux/nospec.h>
++
+ #include <kvm/iodev.h>
+ #include <kvm/arm_vgic.h>
+ 
+@@ -324,6 +326,9 @@ static unsigned long vgic_mmio_read_apr(struct kvm_vcpu *vcpu,
+ 
+ 		if (n > vgic_v3_max_apr_idx(vcpu))
+ 			return 0;
++
++		n = array_index_nospec(n, 4);
++
+ 		/* GICv3 only uses ICH_AP1Rn for memory mapped (GICv2) guests */
+ 		return vgicv3->vgic_ap1r[n];
+ 	}
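The vgic-mmio-v2 hunk clamps `n` with `array_index_nospec()` *after* the bounds check, so that a speculatively out-of-bounds `n` cannot be used to index `vgic_ap1r[]` under Spectre v1. A userspace model of the kernel's generic C fallback mask — arch-specific versions differ, and this sketch assumes arithmetic right shift on signed types, `size >= 1`, and values below 2^63:

```c
#include <stdint.h>

/*
 * Model of the kernel's generic array_index_mask_nospec(): all-ones
 * when index < size, zero otherwise, computed without a conditional
 * branch the CPU could mispredict during speculative execution.
 * When index >= size, (size - 1 - index) wraps to a value with the
 * top bit set, so the complement shifts down to a zero mask.
 */
static uint64_t index_mask_nospec(uint64_t index, uint64_t size)
{
	return (uint64_t)(~(int64_t)(index | (size - 1 - index)) >> 63);
}

/* Clamp: in-bounds indices pass through, out-of-bounds become 0. */
static uint64_t index_nospec(uint64_t index, uint64_t size)
{
	return index & index_mask_nospec(index, size);
}
```

Indexing `array[index_nospec(i, LEN)]` after the architectural `if (i >= LEN) return;` keeps even a misspeculated load inside the array.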
+diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
+index 4232c40b34f8..b38360c6c7d2 100644
+--- a/virt/kvm/arm/vgic/vgic.c
++++ b/virt/kvm/arm/vgic/vgic.c
+@@ -599,6 +599,7 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu)
+ 
+ 	list_for_each_entry_safe(irq, tmp, &vgic_cpu->ap_list_head, ap_list) {
+ 		struct kvm_vcpu *target_vcpu, *vcpuA, *vcpuB;
++		bool target_vcpu_needs_kick = false;
+ 
+ 		spin_lock(&irq->irq_lock);
+ 
+@@ -669,11 +670,18 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu)
+ 			list_del(&irq->ap_list);
+ 			irq->vcpu = target_vcpu;
+ 			list_add_tail(&irq->ap_list, &new_cpu->ap_list_head);
++			target_vcpu_needs_kick = true;
+ 		}
+ 
+ 		spin_unlock(&irq->irq_lock);
+ 		spin_unlock(&vcpuB->arch.vgic_cpu.ap_list_lock);
+ 		spin_unlock_irqrestore(&vcpuA->arch.vgic_cpu.ap_list_lock, flags);
++
++		if (target_vcpu_needs_kick) {
++			kvm_make_request(KVM_REQ_IRQ_PENDING, target_vcpu);
++			kvm_vcpu_kick(target_vcpu);
++		}
++
+ 		goto retry;
+ 	}
+ 
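The vgic.c hunk records that an interrupt migrated and, only after dropping the locks, makes a `KVM_REQ_IRQ_PENDING` request and kicks the target vCPU so it notices the new interrupt promptly. The underlying "publish the request, then wake the waiter" pattern can be modeled with pthreads — all names here are hypothetical illustrations, not KVM APIs:

```c
#include <pthread.h>
#include <stdbool.h>

/* Stand-in for a vCPU with a pending-request flag and a wakeup channel. */
struct vcpu_model {
	pthread_mutex_t lock;
	pthread_cond_t wake;
	bool irq_pending;	/* stands in for KVM_REQ_IRQ_PENDING */
};

/*
 * Sender side: set the flag first, then signal ("kick") under the same
 * lock, so the target can never go to sleep after the flag is set.
 */
static void make_request_and_kick(struct vcpu_model *v)
{
	pthread_mutex_lock(&v->lock);
	v->irq_pending = true;		/* 1. record the request */
	pthread_cond_signal(&v->wake);	/* 2. kick the target out of its wait */
	pthread_mutex_unlock(&v->lock);
}

/* Target side: sleep until a request is pending, then consume it. */
static void wait_for_request(struct vcpu_model *v)
{
	pthread_mutex_lock(&v->lock);
	while (!v->irq_pending)
		pthread_cond_wait(&v->wake, &v->lock);
	v->irq_pending = false;
	pthread_mutex_unlock(&v->lock);
}

/* Single-threaded self-check: request is visible, then consumed. */
static int vcpu_kick_selftest(void)
{
	struct vcpu_model v = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.wake = PTHREAD_COND_INITIALIZER,
		.irq_pending = false,
	};

	make_request_and_kick(&v);
	if (!v.irq_pending)
		return 0;
	wait_for_request(&v);	/* flag already set: returns without sleeping */
	return v.irq_pending == false;
}
```

Note the ordering the patch also respects: the request is made before the kick, and the kick happens only after the ap_list locks are released, avoiding a wakeup that the target could consume before the state it describes exists.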

