* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-06-03 22:19 Mike Pagano
From: Mike Pagano @ 2018-06-03 22:19 UTC
To: gentoo-commits
commit: 2dfff68d42f70e408cccc3aec773bd715db182e9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jun 3 22:18:10 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jun 3 22:18:10 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2dfff68d
Remove 2900_dev-root-proc-mount-fix.patch due to compilation errors.
Will take a look after this release.
0000_README | 4 ----
2900_dev-root-proc-mount-fix.patch | 38 --------------------------------------
2 files changed, 42 deletions(-)
diff --git a/0000_README b/0000_README
index 6546583..94eb66a 100644
--- a/0000_README
+++ b/0000_README
@@ -63,10 +63,6 @@ Patch: 2600_enable-key-swapping-for-apple-mac.patch
From: https://github.com/free5lot/hid-apple-patched
Desc: This hid-apple patch enables swapping of the FN and left Control keys and some additional keys on some Apple keyboards. See bug #622902
-Patch: 2900_dev-root-proc-mount-fix.patch
-From: https://bugs.gentoo.org/show_bug.cgi?id=438380
-Desc: Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
-
Patch: 4200_fbcondecor.patch
From: http://www.mepiscommunity.org/fbcondecor
Desc: Bootsplash ported by Conrad Kostecki. (Bug #637434)
diff --git a/2900_dev-root-proc-mount-fix.patch b/2900_dev-root-proc-mount-fix.patch
deleted file mode 100644
index 83f96d2..0000000
--- a/2900_dev-root-proc-mount-fix.patch
+++ /dev/null
@@ -1,38 +0,0 @@
---- a/init/do_mounts.c 2018-05-23 14:30:36.870899527 -0400
-+++ b/init/do_mounts.c 2018-05-23 14:35:54.398659105 -0400
-@@ -489,7 +489,11 @@ void __init change_floppy(char *fmt, ...
- va_start(args, fmt);
- vsprintf(buf, fmt, args);
- va_end(args);
-- fd = ksys_open("/dev/root", O_RDWR | O_NDELAY, 0);
-+ if (saved_root_name[0])
-+ fd = sys_open(saved_root_name, O_RDWR | O_NDELAY, 0);
-+ else
-+ fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
-+
- if (fd >= 0) {
- ksys_ioctl(fd, FDEJECT, 0);
- ksys_close(fd);
-@@ -533,11 +537,17 @@ void __init mount_root(void)
- #endif
- #ifdef CONFIG_BLOCK
- {
-- int err = create_dev("/dev/root", ROOT_DEV);
--
-- if (err < 0)
-- pr_emerg("Failed to create /dev/root: %d\n", err);
-- mount_block_root("/dev/root", root_mountflags);
-+ if (saved_root_name[0] == '/') {
-+ int err = create_dev(saved_root_name, ROOT_DEV);
-+ if (err < 0)
-+ pr_emerg("Failed to create %s: %d\n", saved_root_name, err);
-+ mount_block_root(saved_root_name, root_mountflags);
-+ } else {
-+ int err = create_dev("/dev/root", ROOT_DEV);
-+ if (err < 0)
-+ pr_emerg("Failed to create /dev/root: %d\n", err);
-+ mount_block_root("/dev/root", root_mountflags);
-+ }
- }
- #endif
- }
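A note on the compilation errors mentioned above: the dropped patch calls sys_open(), but 4.17 converted in-kernel users of the syscall wrappers to the ksys_*() helpers (the surrounding context still calls ksys_ioctl() and ksys_close()), so sys_open() presumably no longer resolves at build time. A minimal sketch of how the change_floppy() hunk would likely need to read once rebased -- assuming ksys_open() is the intended helper; this is an untested illustration, not the actual fix:

	/* sketch: probe the user-named root device if one was given,
	 * using the 4.17 ksys_open() helper instead of sys_open() */
	if (saved_root_name[0])
		fd = ksys_open(saved_root_name, O_RDWR | O_NDELAY, 0);
	else
		fd = ksys_open("/dev/root", O_RDWR | O_NDELAY, 0);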
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-08-24 11:45 Mike Pagano
From: Mike Pagano @ 2018-08-24 11:45 UTC
To: gentoo-commits
commit: c75c995c2729fd757a3c04492a7adac5e31b18c0
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 24 11:44:56 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug 24 11:44:56 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c75c995c
Linux patch 4.17.19
0000_README | 4 +
1018_linux-4.17.19.patch | 10954 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 10958 insertions(+)
diff --git a/0000_README b/0000_README
index 1887187..b1dd52b 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch: 1017_linux-4.17.18.patch
From: http://www.kernel.org
Desc: Linux 4.17.18
+Patch: 1018_linux-4.17.19.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.19
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
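For context on the 1500_XATTR_USER_PREFIX patch listed above: it lets PaX markings survive on tmpfs by storing them in user.pax.* extended attributes. A hypothetical usage sketch in C -- the path and the flag string "em" are illustrative assumptions, and the exact flags depend on the PaX userland in use:

	#include <stdio.h>
	#include <sys/xattr.h>

	int main(void)
	{
		/* hypothetical: store PaX markings as a user-namespace
		 * xattr on a tmpfs-backed binary */
		if (setxattr("/tmp/some-binary", "user.pax.flags", "em", 2, 0))
			perror("setxattr");
		return 0;
	}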
diff --git a/1018_linux-4.17.19.patch b/1018_linux-4.17.19.patch
new file mode 100644
index 0000000..aac0886
--- /dev/null
+++ b/1018_linux-4.17.19.patch
@@ -0,0 +1,10954 @@
+diff --git a/Makefile b/Makefile
+index 429a1fe0b40b..32c83a163544 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+@@ -356,9 +356,9 @@ CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
+ else if [ -x /bin/bash ]; then echo /bin/bash; \
+ else echo sh; fi ; fi)
+
+-HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS)
+-HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS)
+-HOST_LFS_LIBS := $(shell getconf LFS_LIBS)
++HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
++HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
++HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
+
+ HOSTCC = gcc
+ HOSTCXX = g++
+diff --git a/arch/arc/Makefile b/arch/arc/Makefile
+index d37f49d6a27f..6c1b20dd76ad 100644
+--- a/arch/arc/Makefile
++++ b/arch/arc/Makefile
+@@ -16,7 +16,7 @@ endif
+
+ KBUILD_DEFCONFIG := nsim_700_defconfig
+
+-cflags-y += -fno-common -pipe -fno-builtin -D__linux__
++cflags-y += -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__
+ cflags-$(CONFIG_ISA_ARCOMPACT) += -mA7
+ cflags-$(CONFIG_ISA_ARCV2) += -mcpu=archs
+
+@@ -140,16 +140,3 @@ dtbs: scripts
+
+ archclean:
+ $(Q)$(MAKE) $(clean)=$(boot)
+-
+-# Hacks to enable final link due to absence of link-time branch relaxation
+-# and gcc choosing optimal(shorter) branches at -O3
+-#
+-# vineetg Feb 2010: -mlong-calls switched off for overall kernel build
+-# However lib/decompress_inflate.o (.init.text) calls
+-# zlib_inflate_workspacesize (.text) causing relocation errors.
+-# Thus forcing all exten calls in this file to be long calls
+-export CFLAGS_decompress_inflate.o = -mmedium-calls
+-export CFLAGS_initramfs.o = -mmedium-calls
+-ifdef CONFIG_SMP
+-export CFLAGS_core.o = -mmedium-calls
+-endif
+diff --git a/arch/arc/include/asm/mach_desc.h b/arch/arc/include/asm/mach_desc.h
+index c28e6c347b49..871f3cb16af9 100644
+--- a/arch/arc/include/asm/mach_desc.h
++++ b/arch/arc/include/asm/mach_desc.h
+@@ -34,9 +34,7 @@ struct machine_desc {
+ const char *name;
+ const char **dt_compat;
+ void (*init_early)(void);
+-#ifdef CONFIG_SMP
+ void (*init_per_cpu)(unsigned int);
+-#endif
+ void (*init_machine)(void);
+ void (*init_late)(void);
+
+diff --git a/arch/arc/kernel/irq.c b/arch/arc/kernel/irq.c
+index 538b36afe89e..62b185057c04 100644
+--- a/arch/arc/kernel/irq.c
++++ b/arch/arc/kernel/irq.c
+@@ -31,10 +31,10 @@ void __init init_IRQ(void)
+ /* a SMP H/w block could do IPI IRQ request here */
+ if (plat_smp_ops.init_per_cpu)
+ plat_smp_ops.init_per_cpu(smp_processor_id());
++#endif
+
+ if (machine_desc->init_per_cpu)
+ machine_desc->init_per_cpu(smp_processor_id());
+-#endif
+ }
+
+ /*
+diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
+index 5ac3b547453f..4674541eba3f 100644
+--- a/arch/arc/kernel/process.c
++++ b/arch/arc/kernel/process.c
+@@ -47,7 +47,8 @@ SYSCALL_DEFINE0(arc_gettls)
+ SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
+ {
+ struct pt_regs *regs = current_pt_regs();
+- int uval = -EFAULT;
++ u32 uval;
++ int ret;
+
+ /*
+ * This is only for old cores lacking LLOCK/SCOND, which by definition
+@@ -60,23 +61,47 @@ SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
+ /* Z indicates to userspace if operation succeeded */
+ regs->status32 &= ~STATUS_Z_MASK;
+
+- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+- return -EFAULT;
++ ret = access_ok(VERIFY_WRITE, uaddr, sizeof(*uaddr));
++ if (!ret)
++ goto fail;
+
++again:
+ preempt_disable();
+
+- if (__get_user(uval, uaddr))
+- goto done;
++ ret = __get_user(uval, uaddr);
++ if (ret)
++ goto fault;
+
+- if (uval == expected) {
+- if (!__put_user(new, uaddr))
+- regs->status32 |= STATUS_Z_MASK;
+- }
++ if (uval != expected)
++ goto out;
+
+-done:
+- preempt_enable();
++ ret = __put_user(new, uaddr);
++ if (ret)
++ goto fault;
++
++ regs->status32 |= STATUS_Z_MASK;
+
++out:
++ preempt_enable();
+ return uval;
++
++fault:
++ preempt_enable();
++
++ if (unlikely(ret != -EFAULT))
++ goto fail;
++
++ down_read(&current->mm->mmap_sem);
++ ret = fixup_user_fault(current, current->mm, (unsigned long) uaddr,
++ FAULT_FLAG_WRITE, NULL);
++ up_read(&current->mm->mmap_sem);
++
++ if (likely(!ret))
++ goto again;
++
++fail:
++ force_sig(SIGSEGV, current);
++ return ret;
+ }
+
+ #ifdef CONFIG_ISA_ARCV2
+diff --git a/arch/arc/plat-hsdk/platform.c b/arch/arc/plat-hsdk/platform.c
+index 2958aedb649a..2588b842407c 100644
+--- a/arch/arc/plat-hsdk/platform.c
++++ b/arch/arc/plat-hsdk/platform.c
+@@ -42,6 +42,66 @@ static void __init hsdk_init_per_cpu(unsigned int cpu)
+ #define SDIO_UHS_REG_EXT (SDIO_BASE + 0x108)
+ #define SDIO_UHS_REG_EXT_DIV_2 (2 << 30)
+
++#define HSDK_GPIO_INTC (ARC_PERIPHERAL_BASE + 0x3000)
++
++static void __init hsdk_enable_gpio_intc_wire(void)
++{
++ /*
++ * Peripherals on CPU Card are wired to cpu intc via intermediate
++ * DW APB GPIO blocks (mainly for debouncing)
++ *
++ * ---------------------
++ * | snps,archs-intc |
++ * ---------------------
++ * |
++ * ----------------------
++ * | snps,archs-idu-intc |
++ * ----------------------
++ * | | | | |
++ * | [eth] [USB] [... other peripherals]
++ * |
++ * -------------------
++ * | snps,dw-apb-intc |
++ * -------------------
++ * | | | |
++ * [Bt] [HAPS] [... other peripherals]
++ *
++ * Current implementation of "irq-dw-apb-ictl" driver doesn't work well
++ * with stacked INTCs. In particular, problems happen if its master INTC
++ * is not yet instantiated. See discussion here -
++ * https://lkml.org/lkml/2015/3/4/755
++ *
++ * So setup the first gpio block as a passive pass thru and hide it from
++ * DT hardware topology - connect intc directly to cpu intc
++ * The GPIO "wire" needs to be init nevertheless (here)
++ *
++ * One side adv is that peripheral interrupt handling avoids one nested
++ * intc ISR hop
++ *
++ * According to HSDK User's Manual [1], "Table 2 Interrupt Mapping"
++ * we have the following GPIO input lines used as sources of interrupt:
++ * - GPIO[0] - Bluetooth interrupt of RS9113 module
++ * - GPIO[2] - HAPS interrupt (on HapsTrak 3 connector)
++ * - GPIO[3] - Audio codec (MAX9880A) interrupt
++ * - GPIO[8-23] - Available on Arduino and PMOD_x headers
++ * For now there's no use of Arduino and PMOD_x headers in Linux
++ * use-case so we only enable lines 0, 2 and 3.
++ *
++ * [1] https://github.com/foss-for-synopsys-dwc-arc-processors/ARC-Development-Systems-Forum/wiki/docs/ARC_HSDK_User_Guide.pdf
++ */
++#define GPIO_INTEN (HSDK_GPIO_INTC + 0x30)
++#define GPIO_INTMASK (HSDK_GPIO_INTC + 0x34)
++#define GPIO_INTTYPE_LEVEL (HSDK_GPIO_INTC + 0x38)
++#define GPIO_INT_POLARITY (HSDK_GPIO_INTC + 0x3c)
++#define GPIO_INT_CONNECTED_MASK 0x0d
++
++ iowrite32(0xffffffff, (void __iomem *) GPIO_INTMASK);
++ iowrite32(~GPIO_INT_CONNECTED_MASK, (void __iomem *) GPIO_INTMASK);
++ iowrite32(0x00000000, (void __iomem *) GPIO_INTTYPE_LEVEL);
++ iowrite32(0xffffffff, (void __iomem *) GPIO_INT_POLARITY);
++ iowrite32(GPIO_INT_CONNECTED_MASK, (void __iomem *) GPIO_INTEN);
++}
++
+ static void __init hsdk_init_early(void)
+ {
+ /*
+@@ -62,6 +122,8 @@ static void __init hsdk_init_early(void)
+ * minimum possible div-by-2.
+ */
+ iowrite32(SDIO_UHS_REG_EXT_DIV_2, (void __iomem *) SDIO_UHS_REG_EXT);
++
++ hsdk_enable_gpio_intc_wire();
+ }
+
+ static const char *hsdk_compat[] __initconst = {
+diff --git a/arch/arm/boot/dts/am3517.dtsi b/arch/arm/boot/dts/am3517.dtsi
+index 4b6062b631b1..23ea381d363f 100644
+--- a/arch/arm/boot/dts/am3517.dtsi
++++ b/arch/arm/boot/dts/am3517.dtsi
+@@ -91,6 +91,11 @@
+ };
+ };
+
+/* Table 5-79 of the TRM shows 480ab000 is reserved */
++&usb_otg_hs {
++ status = "disabled";
++};
++
+ &iva {
+ status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/am437x-sk-evm.dts b/arch/arm/boot/dts/am437x-sk-evm.dts
+index 4118802b7fea..f17ed89da06b 100644
+--- a/arch/arm/boot/dts/am437x-sk-evm.dts
++++ b/arch/arm/boot/dts/am437x-sk-evm.dts
+@@ -537,6 +537,8 @@
+
+ touchscreen-size-x = <480>;
+ touchscreen-size-y = <272>;
++
++ wakeup-source;
+ };
+
+ tlv320aic3106: tlv320aic3106@1b {
+diff --git a/arch/arm/boot/dts/armada-385-synology-ds116.dts b/arch/arm/boot/dts/armada-385-synology-ds116.dts
+index 6782ce481ac9..d8769956cbfc 100644
+--- a/arch/arm/boot/dts/armada-385-synology-ds116.dts
++++ b/arch/arm/boot/dts/armada-385-synology-ds116.dts
+@@ -139,7 +139,7 @@
+ 3700 5
+ 3900 6
+ 4000 7>;
+- cooling-cells = <2>;
++ #cooling-cells = <2>;
+ };
+
+ gpio-leds {
+diff --git a/arch/arm/boot/dts/bcm-cygnus.dtsi b/arch/arm/boot/dts/bcm-cygnus.dtsi
+index 9fe4f5a6379e..2c4df2d2d4a6 100644
+--- a/arch/arm/boot/dts/bcm-cygnus.dtsi
++++ b/arch/arm/boot/dts/bcm-cygnus.dtsi
+@@ -216,7 +216,7 @@
+ reg = <0x18008000 0x100>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+- interrupts = <GIC_SPI 85 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ clock-frequency = <100000>;
+ status = "disabled";
+ };
+@@ -245,7 +245,7 @@
+ reg = <0x1800b000 0x100>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+- interrupts = <GIC_SPI 86 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+ clock-frequency = <100000>;
+ status = "disabled";
+ };
+@@ -256,7 +256,7 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+- interrupt-map = <0 0 0 0 &gic GIC_SPI 100 IRQ_TYPE_NONE>;
++ interrupt-map = <0 0 0 0 &gic GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>;
+
+ linux,pci-domain = <0>;
+
+@@ -278,10 +278,10 @@
+ compatible = "brcm,iproc-msi";
+ msi-controller;
+ interrupt-parent = <&gic>;
+- interrupts = <GIC_SPI 96 IRQ_TYPE_NONE>,
+- <GIC_SPI 97 IRQ_TYPE_NONE>,
+- <GIC_SPI 98 IRQ_TYPE_NONE>,
+- <GIC_SPI 99 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>;
+ };
+ };
+
+@@ -291,7 +291,7 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+- interrupt-map = <0 0 0 0 &gic GIC_SPI 106 IRQ_TYPE_NONE>;
++ interrupt-map = <0 0 0 0 &gic GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>;
+
+ linux,pci-domain = <1>;
+
+@@ -313,10 +313,10 @@
+ compatible = "brcm,iproc-msi";
+ msi-controller;
+ interrupt-parent = <&gic>;
+- interrupts = <GIC_SPI 102 IRQ_TYPE_NONE>,
+- <GIC_SPI 103 IRQ_TYPE_NONE>,
+- <GIC_SPI 104 IRQ_TYPE_NONE>,
+- <GIC_SPI 105 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/bcm-hr2.dtsi b/arch/arm/boot/dts/bcm-hr2.dtsi
+index 3f9cedd8011f..3084a7c95733 100644
+--- a/arch/arm/boot/dts/bcm-hr2.dtsi
++++ b/arch/arm/boot/dts/bcm-hr2.dtsi
+@@ -264,7 +264,7 @@
+ reg = <0x38000 0x50>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+- interrupts = <GIC_SPI 95 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 95 IRQ_TYPE_LEVEL_HIGH>;
+ clock-frequency = <100000>;
+ };
+
+@@ -279,7 +279,7 @@
+ reg = <0x3b000 0x50>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+- interrupts = <GIC_SPI 96 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
+ clock-frequency = <100000>;
+ };
+ };
+@@ -300,7 +300,7 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+- interrupt-map = <0 0 0 0 &gic GIC_SPI 186 IRQ_TYPE_NONE>;
++ interrupt-map = <0 0 0 0 &gic GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
+
+ linux,pci-domain = <0>;
+
+@@ -322,10 +322,10 @@
+ compatible = "brcm,iproc-msi";
+ msi-controller;
+ interrupt-parent = <&gic>;
+- interrupts = <GIC_SPI 182 IRQ_TYPE_NONE>,
+- <GIC_SPI 183 IRQ_TYPE_NONE>,
+- <GIC_SPI 184 IRQ_TYPE_NONE>,
+- <GIC_SPI 185 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>;
+ brcm,pcie-msi-inten;
+ };
+ };
+@@ -336,7 +336,7 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+- interrupt-map = <0 0 0 0 &gic GIC_SPI 192 IRQ_TYPE_NONE>;
++ interrupt-map = <0 0 0 0 &gic GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>;
+
+ linux,pci-domain = <1>;
+
+@@ -358,10 +358,10 @@
+ compatible = "brcm,iproc-msi";
+ msi-controller;
+ interrupt-parent = <&gic>;
+- interrupts = <GIC_SPI 188 IRQ_TYPE_NONE>,
+- <GIC_SPI 189 IRQ_TYPE_NONE>,
+- <GIC_SPI 190 IRQ_TYPE_NONE>,
+- <GIC_SPI 191 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>;
+ brcm,pcie-msi-inten;
+ };
+ };
+diff --git a/arch/arm/boot/dts/bcm-nsp.dtsi b/arch/arm/boot/dts/bcm-nsp.dtsi
+index dcc55aa84583..09ba85046322 100644
+--- a/arch/arm/boot/dts/bcm-nsp.dtsi
++++ b/arch/arm/boot/dts/bcm-nsp.dtsi
+@@ -391,7 +391,7 @@
+ reg = <0x38000 0x50>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+- interrupts = <GIC_SPI 89 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>;
+ clock-frequency = <100000>;
+ dma-coherent;
+ status = "disabled";
+@@ -496,7 +496,7 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+- interrupt-map = <0 0 0 0 &gic GIC_SPI 131 IRQ_TYPE_NONE>;
++ interrupt-map = <0 0 0 0 &gic GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>;
+
+ linux,pci-domain = <0>;
+
+@@ -519,10 +519,10 @@
+ compatible = "brcm,iproc-msi";
+ msi-controller;
+ interrupt-parent = <&gic>;
+- interrupts = <GIC_SPI 127 IRQ_TYPE_NONE>,
+- <GIC_SPI 128 IRQ_TYPE_NONE>,
+- <GIC_SPI 129 IRQ_TYPE_NONE>,
+- <GIC_SPI 130 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 128 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 130 IRQ_TYPE_LEVEL_HIGH>;
+ brcm,pcie-msi-inten;
+ };
+ };
+@@ -533,7 +533,7 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+- interrupt-map = <0 0 0 0 &gic GIC_SPI 137 IRQ_TYPE_NONE>;
++ interrupt-map = <0 0 0 0 &gic GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>;
+
+ linux,pci-domain = <1>;
+
+@@ -556,10 +556,10 @@
+ compatible = "brcm,iproc-msi";
+ msi-controller;
+ interrupt-parent = <&gic>;
+- interrupts = <GIC_SPI 133 IRQ_TYPE_NONE>,
+- <GIC_SPI 134 IRQ_TYPE_NONE>,
+- <GIC_SPI 135 IRQ_TYPE_NONE>,
+- <GIC_SPI 136 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 134 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 135 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>;
+ brcm,pcie-msi-inten;
+ };
+ };
+@@ -570,7 +570,7 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+- interrupt-map = <0 0 0 0 &gic GIC_SPI 143 IRQ_TYPE_NONE>;
++ interrupt-map = <0 0 0 0 &gic GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
+
+ linux,pci-domain = <2>;
+
+@@ -593,10 +593,10 @@
+ compatible = "brcm,iproc-msi";
+ msi-controller;
+ interrupt-parent = <&gic>;
+- interrupts = <GIC_SPI 139 IRQ_TYPE_NONE>,
+- <GIC_SPI 140 IRQ_TYPE_NONE>,
+- <GIC_SPI 141 IRQ_TYPE_NONE>,
+- <GIC_SPI 142 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>;
+ brcm,pcie-msi-inten;
+ };
+ };
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 9a076c409f4e..ef995e50ee12 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -365,7 +365,7 @@
+ i2c0: i2c@18009000 {
+ compatible = "brcm,iproc-i2c";
+ reg = <0x18009000 0x50>;
+- interrupts = <GIC_SPI 121 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ clock-frequency = <100000>;
+diff --git a/arch/arm/boot/dts/da850.dtsi b/arch/arm/boot/dts/da850.dtsi
+index 12010002dbdb..a8500f062706 100644
+--- a/arch/arm/boot/dts/da850.dtsi
++++ b/arch/arm/boot/dts/da850.dtsi
+@@ -539,11 +539,7 @@
+ gpio-controller;
+ #gpio-cells = <2>;
+ reg = <0x226000 0x1000>;
+- interrupts = <42 IRQ_TYPE_EDGE_BOTH
+- 43 IRQ_TYPE_EDGE_BOTH 44 IRQ_TYPE_EDGE_BOTH
+- 45 IRQ_TYPE_EDGE_BOTH 46 IRQ_TYPE_EDGE_BOTH
+- 47 IRQ_TYPE_EDGE_BOTH 48 IRQ_TYPE_EDGE_BOTH
+- 49 IRQ_TYPE_EDGE_BOTH 50 IRQ_TYPE_EDGE_BOTH>;
++ interrupts = <42 43 44 45 46 47 48 49 50>;
+ ti,ngpio = <144>;
+ ti,davinci-gpio-unbanked = <0>;
+ status = "disabled";
+diff --git a/arch/arm/boot/dts/imx6qdl-zii-rdu2.dtsi b/arch/arm/boot/dts/imx6qdl-zii-rdu2.dtsi
+index 911f7f0e3cea..e000e27f595b 100644
+--- a/arch/arm/boot/dts/imx6qdl-zii-rdu2.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-zii-rdu2.dtsi
+@@ -672,7 +672,7 @@
+ dsa,member = <0 0>;
+ eeprom-length = <512>;
+ interrupt-parent = <&gpio6>;
+- interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
++ interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+diff --git a/arch/arm/boot/dts/omap4-droid4-xt894.dts b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+index bdf73cbcec3a..e7c3c563ff8f 100644
+--- a/arch/arm/boot/dts/omap4-droid4-xt894.dts
++++ b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+@@ -159,13 +159,7 @@
+
+ dais = <&mcbsp2_port>, <&mcbsp3_port>;
+ };
+-};
+-
+-&dss {
+- status = "okay";
+-};
+
+-&gpio6 {
+ pwm8: dmtimer-pwm-8 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&vibrator_direction_pin>;
+@@ -192,7 +186,10 @@
+ pwm-names = "enable", "direction";
+ direction-duty-cycle-ns = <10000000>;
+ };
++};
+
++&dss {
++ status = "okay";
+ };
+
+ &dsi1 {
+diff --git a/arch/arm/configs/imx_v4_v5_defconfig b/arch/arm/configs/imx_v4_v5_defconfig
+index 054591dc9a00..4cd2f4a2bff4 100644
+--- a/arch/arm/configs/imx_v4_v5_defconfig
++++ b/arch/arm/configs/imx_v4_v5_defconfig
+@@ -141,9 +141,11 @@ CONFIG_USB_STORAGE=y
+ CONFIG_USB_CHIPIDEA=y
+ CONFIG_USB_CHIPIDEA_UDC=y
+ CONFIG_USB_CHIPIDEA_HOST=y
++CONFIG_USB_CHIPIDEA_ULPI=y
+ CONFIG_NOP_USB_XCEIV=y
+ CONFIG_USB_GADGET=y
+ CONFIG_USB_ETH=m
++CONFIG_USB_ULPI_BUS=y
+ CONFIG_MMC=y
+ CONFIG_MMC_SDHCI=y
+ CONFIG_MMC_SDHCI_PLTFM=y
+diff --git a/arch/arm/configs/imx_v6_v7_defconfig b/arch/arm/configs/imx_v6_v7_defconfig
+index 3a308437b088..19c924de353b 100644
+--- a/arch/arm/configs/imx_v6_v7_defconfig
++++ b/arch/arm/configs/imx_v6_v7_defconfig
+@@ -294,6 +294,7 @@ CONFIG_USB_STORAGE=y
+ CONFIG_USB_CHIPIDEA=y
+ CONFIG_USB_CHIPIDEA_UDC=y
+ CONFIG_USB_CHIPIDEA_HOST=y
++CONFIG_USB_CHIPIDEA_ULPI=y
+ CONFIG_USB_SERIAL=m
+ CONFIG_USB_SERIAL_GENERIC=y
+ CONFIG_USB_SERIAL_FTDI_SIO=m
+@@ -330,6 +331,7 @@ CONFIG_USB_GADGETFS=m
+ CONFIG_USB_FUNCTIONFS=m
+ CONFIG_USB_MASS_STORAGE=m
+ CONFIG_USB_G_SERIAL=m
++CONFIG_USB_ULPI_BUS=y
+ CONFIG_MMC=y
+ CONFIG_MMC_SDHCI=y
+ CONFIG_MMC_SDHCI_PLTFM=y
+diff --git a/arch/arm/crypto/speck-neon-core.S b/arch/arm/crypto/speck-neon-core.S
+index 3c1e203e53b9..57caa742016e 100644
+--- a/arch/arm/crypto/speck-neon-core.S
++++ b/arch/arm/crypto/speck-neon-core.S
+@@ -272,9 +272,11 @@
+ * Allocate stack space to store 128 bytes worth of tweaks. For
+ * performance, this space is aligned to a 16-byte boundary so that we
+ * can use the load/store instructions that declare 16-byte alignment.
++ * For Thumb2 compatibility, don't do the 'bic' directly on 'sp'.
+ */
+- sub sp, #128
+- bic sp, #0xf
++ sub r12, sp, #128
++ bic r12, #0xf
++ mov sp, r12
+
+ .if \n == 64
+ // Load first tweak
+diff --git a/arch/arm/mach-davinci/board-da850-evm.c b/arch/arm/mach-davinci/board-da850-evm.c
+index 158ed9a1483f..826bd634d098 100644
+--- a/arch/arm/mach-davinci/board-da850-evm.c
++++ b/arch/arm/mach-davinci/board-da850-evm.c
+@@ -773,7 +773,7 @@ static struct gpiod_lookup_table mmc_gpios_table = {
+ GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_CD_PIN, "cd",
+ GPIO_ACTIVE_LOW),
+ GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_WP_PIN, "wp",
+- GPIO_ACTIVE_LOW),
++ GPIO_ACTIVE_HIGH),
+ },
+ };
+
+diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c
+index 69df3620eca5..1c73694c871a 100644
+--- a/arch/arm/mach-omap2/omap-smp.c
++++ b/arch/arm/mach-omap2/omap-smp.c
+@@ -109,6 +109,45 @@ void omap5_erratum_workaround_801819(void)
+ static inline void omap5_erratum_workaround_801819(void) { }
+ #endif
+
++#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
++/*
++ * Configure ACR and enable ACTLR[0] (Enable invalidates of BTB with
++ * ICIALLU) to activate the workaround for secondary Core.
++ * NOTE: it is assumed that the primary core's configuration is done
++ * by the boot loader (kernel will detect a misconfiguration and complain
++ * if this is not done).
++ *
++ * In General Purpose(GP) devices, ACR bit settings can only be done
++ * by ROM code in "secure world" using the smc call and there is no
++ * option to update the "firmware" on such devices. This also works for
++ * High security(HS) devices, as a backup option in case the
++ * "update" is not done in the "security firmware".
++ */
++static void omap5_secondary_harden_predictor(void)
++{
++ u32 acr, acr_mask;
++
++ asm volatile ("mrc p15, 0, %0, c1, c0, 1" : "=r" (acr));
++
++ /*
++ * ACTLR[0] (Enable invalidates of BTB with ICIALLU)
++ */
++ acr_mask = BIT(0);
++
++ /* Do we already have it done.. if yes, skip expensive smc */
++ if ((acr & acr_mask) == acr_mask)
++ return;
++
++ acr |= acr_mask;
++ omap_smc1(OMAP5_DRA7_MON_SET_ACR_INDEX, acr);
++
++ pr_debug("%s: ARM ACR setup for CVE_2017_5715 applied on CPU%d\n",
++ __func__, smp_processor_id());
++}
++#else
++static inline void omap5_secondary_harden_predictor(void) { }
++#endif
++
+ static void omap4_secondary_init(unsigned int cpu)
+ {
+ /*
+@@ -131,6 +170,8 @@ static void omap4_secondary_init(unsigned int cpu)
+ set_cntfreq();
+ /* Configure ACR to disable streaming WA for 801819 */
+ omap5_erratum_workaround_801819();
++ /* Enable ACR to allow for ICIALLU workaround */
++ omap5_secondary_harden_predictor();
+ }
+
+ /*
+diff --git a/arch/arm/mach-pxa/irq.c b/arch/arm/mach-pxa/irq.c
+index 9c10248fadcc..4e8c2116808e 100644
+--- a/arch/arm/mach-pxa/irq.c
++++ b/arch/arm/mach-pxa/irq.c
+@@ -185,7 +185,7 @@ static int pxa_irq_suspend(void)
+ {
+ int i;
+
+- for (i = 0; i < pxa_internal_irq_nr / 32; i++) {
++ for (i = 0; i < DIV_ROUND_UP(pxa_internal_irq_nr, 32); i++) {
+ void __iomem *base = irq_base(i);
+
+ saved_icmr[i] = __raw_readl(base + ICMR);
+@@ -204,7 +204,7 @@ static void pxa_irq_resume(void)
+ {
+ int i;
+
+- for (i = 0; i < pxa_internal_irq_nr / 32; i++) {
++ for (i = 0; i < DIV_ROUND_UP(pxa_internal_irq_nr, 32); i++) {
+ void __iomem *base = irq_base(i);
+
+ __raw_writel(saved_icmr[i], base + ICMR);
+diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
+index c186474422f3..0cc8e04295a4 100644
+--- a/arch/arm/mm/init.c
++++ b/arch/arm/mm/init.c
+@@ -736,20 +736,29 @@ static int __mark_rodata_ro(void *unused)
+ return 0;
+ }
+
++static int kernel_set_to_readonly __read_mostly;
++
+ void mark_rodata_ro(void)
+ {
++ kernel_set_to_readonly = 1;
+ stop_machine(__mark_rodata_ro, NULL, NULL);
+ debug_checkwx();
+ }
+
+ void set_kernel_text_rw(void)
+ {
++ if (!kernel_set_to_readonly)
++ return;
++
+ set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), false,
+ current->active_mm);
+ }
+
+ void set_kernel_text_ro(void)
+ {
++ if (!kernel_set_to_readonly)
++ return;
++
+ set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), true,
+ current->active_mm);
+ }
+diff --git a/arch/arm64/boot/dts/amlogic/meson-axg-s400.dts b/arch/arm64/boot/dts/amlogic/meson-axg-s400.dts
+index 57eedced5a51..f7300cb6970f 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-axg-s400.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-axg-s400.dts
+@@ -19,9 +19,22 @@
+
+ &ethmac {
+ status = "okay";
+- phy-mode = "rgmii";
+ pinctrl-0 = <&eth_rgmii_y_pins>;
+ pinctrl-names = "default";
++ phy-handle = <&eth_phy0>;
++ phy-mode = "rgmii";
++
++ mdio {
++ compatible = "snps,dwmac-mdio";
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ eth_phy0: ethernet-phy@0 {
++ /* Realtek RTL8211F (0x001cc916) */
++ reg = <0>;
++ eee-broken-1000t;
++ };
++ };
+ };
+
+ &uart_A {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi
+index eb327664a4d8..6aaafff674f9 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi
+@@ -6,7 +6,7 @@
+
+ &apb {
+ mali: gpu@c0000 {
+- compatible = "amlogic,meson-gxbb-mali", "arm,mali-450";
++ compatible = "amlogic,meson-gxl-mali", "arm,mali-450";
+ reg = <0x0 0xc0000 0x0 0x40000>;
+ interrupts = <GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+index 4a2a6af8e752..4057197048dc 100644
+--- a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
++++ b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+@@ -118,7 +118,7 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+- interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 281 IRQ_TYPE_NONE>;
++ interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 281 IRQ_TYPE_LEVEL_HIGH>;
+
+ linux,pci-domain = <0>;
+
+@@ -149,7 +149,7 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+- interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 305 IRQ_TYPE_NONE>;
++ interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>;
+
+ linux,pci-domain = <4>;
+
+@@ -566,7 +566,7 @@
+ reg = <0x66080000 0x100>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+- interrupts = <GIC_SPI 394 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 394 IRQ_TYPE_LEVEL_HIGH>;
+ clock-frequency = <100000>;
+ status = "disabled";
+ };
+@@ -594,7 +594,7 @@
+ reg = <0x660b0000 0x100>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+- interrupts = <GIC_SPI 395 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>;
+ clock-frequency = <100000>;
+ status = "disabled";
+ };
+diff --git a/arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts b/arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts
+index eb6f08cdbd79..77efa28c4dd5 100644
+--- a/arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts
++++ b/arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts
+@@ -43,6 +43,10 @@
+ enet-phy-lane-swap;
+ };
+
++&sdio0 {
++ mmc-ddr-1_8v;
++};
++
+ &uart2 {
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/broadcom/stingray/bcm958742t.dts b/arch/arm64/boot/dts/broadcom/stingray/bcm958742t.dts
+index 5084b037320f..55ba495ef56e 100644
+--- a/arch/arm64/boot/dts/broadcom/stingray/bcm958742t.dts
++++ b/arch/arm64/boot/dts/broadcom/stingray/bcm958742t.dts
+@@ -42,3 +42,7 @@
+ &gphy0 {
+ enet-phy-lane-swap;
+ };
++
++&sdio0 {
++ mmc-ddr-1_8v;
++};
+diff --git a/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi b/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
+index 99aaff0b6d72..b203152ad67c 100644
+--- a/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
++++ b/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
+@@ -409,7 +409,7 @@
+ reg = <0x000b0000 0x100>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+- interrupts = <GIC_SPI 177 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 177 IRQ_TYPE_LEVEL_HIGH>;
+ clock-frequency = <100000>;
+ status = "disabled";
+ };
+@@ -453,7 +453,7 @@
+ reg = <0x000e0000 0x100>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+- interrupts = <GIC_SPI 178 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 178 IRQ_TYPE_LEVEL_HIGH>;
+ clock-frequency = <100000>;
+ status = "disabled";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 66b318e1de80..aef814b2dc9e 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -1191,14 +1191,14 @@
+
+ port@0 {
+ reg = <0>;
+- etf_out: endpoint {
++ etf_in: endpoint {
+ slave-mode;
+ remote-endpoint = <&funnel0_out>;
+ };
+ };
+ port@1 {
+ reg = <0>;
+- etf_in: endpoint {
++ etf_out: endpoint {
+ remote-endpoint = <&replicator_in>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld11-global.dts b/arch/arm64/boot/dts/socionext/uniphier-ld11-global.dts
+index 9b4dc41703e3..ae3b5adf32df 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-ld11-global.dts
++++ b/arch/arm64/boot/dts/socionext/uniphier-ld11-global.dts
+@@ -54,7 +54,7 @@
+ sound {
+ compatible = "audio-graph-card";
+ label = "UniPhier LD11";
+- widgets = "Headphone", "Headphone Jack";
++ widgets = "Headphone", "Headphones";
+ dais = <&i2s_port2
+ &i2s_port3
+ &i2s_port4
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld20-global.dts b/arch/arm64/boot/dts/socionext/uniphier-ld20-global.dts
+index fe6608ea3277..7919233c9ce2 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-ld20-global.dts
++++ b/arch/arm64/boot/dts/socionext/uniphier-ld20-global.dts
+@@ -54,7 +54,7 @@
+ sound {
+ compatible = "audio-graph-card";
+ label = "UniPhier LD20";
+- widgets = "Headphone", "Headphone Jack";
++ widgets = "Headphone", "Headphones";
+ dais = <&i2s_port2
+ &i2s_port3
+ &i2s_port4
+diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
+index a91933b1e2e6..4b650ec1d7dd 100644
+--- a/arch/arm64/include/asm/alternative.h
++++ b/arch/arm64/include/asm/alternative.h
+@@ -28,7 +28,12 @@ typedef void (*alternative_cb_t)(struct alt_instr *alt,
+ __le32 *origptr, __le32 *updptr, int nr_inst);
+
+ void __init apply_alternatives_all(void);
+-void apply_alternatives(void *start, size_t length);
++
++#ifdef CONFIG_MODULES
++void apply_alternatives_module(void *start, size_t length);
++#else
++static inline void apply_alternatives_module(void *start, size_t length) { }
++#endif
+
+ #define ALTINSTR_ENTRY(feature,cb) \
+ " .word 661b - .\n" /* label */ \
+diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
+index 5c4bce4ac381..36fb069fd049 100644
+--- a/arch/arm64/kernel/alternative.c
++++ b/arch/arm64/kernel/alternative.c
+@@ -122,7 +122,30 @@ static void patch_alternative(struct alt_instr *alt,
+ }
+ }
+
+-static void __apply_alternatives(void *alt_region, bool use_linear_alias)
++/*
++ * We provide our own, private D-cache cleaning function so that we don't
++ * accidentally call into the cache.S code, which is patched by us at
++ * runtime.
++ */
++static void clean_dcache_range_nopatch(u64 start, u64 end)
++{
++ u64 cur, d_size, ctr_el0;
++
++ ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0);
++ d_size = 4 << cpuid_feature_extract_unsigned_field(ctr_el0,
++ CTR_DMINLINE_SHIFT);
++ cur = start & ~(d_size - 1);
++ do {
++ /*
++ * We must clean+invalidate to the PoC in order to avoid
++ * Cortex-A53 errata 826319, 827319, 824069 and 819472
++ * (this corresponds to ARM64_WORKAROUND_CLEAN_CACHE)
++ */
++ asm volatile("dc civac, %0" : : "r" (cur) : "memory");
++ } while (cur += d_size, cur < end);
++}
++
++static void __apply_alternatives(void *alt_region, bool is_module)
+ {
+ struct alt_instr *alt;
+ struct alt_region *region = alt_region;
+@@ -145,7 +168,7 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
+ pr_info_once("patching kernel code\n");
+
+ origptr = ALT_ORIG_PTR(alt);
+- updptr = use_linear_alias ? lm_alias(origptr) : origptr;
++ updptr = is_module ? origptr : lm_alias(origptr);
+ nr_inst = alt->orig_len / AARCH64_INSN_SIZE;
+
+ if (alt->cpufeature < ARM64_CB_PATCH)
+@@ -155,8 +178,20 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
+
+ alt_cb(alt, origptr, updptr, nr_inst);
+
+- flush_icache_range((uintptr_t)origptr,
+- (uintptr_t)(origptr + nr_inst));
++ if (!is_module) {
++ clean_dcache_range_nopatch((u64)origptr,
++ (u64)(origptr + nr_inst));
++ }
++ }
++
++ /*
++ * The core module code takes care of cache maintenance in
++ * flush_module_icache().
++ */
++ if (!is_module) {
++ dsb(ish);
++ __flush_icache_all();
++ isb();
+ }
+ }
+
+@@ -178,7 +213,7 @@ static int __apply_alternatives_multi_stop(void *unused)
+ isb();
+ } else {
+ BUG_ON(alternatives_applied);
+- __apply_alternatives(&region, true);
++ __apply_alternatives(&region, false);
+ /* Barriers provided by the cache flushing */
+ WRITE_ONCE(alternatives_applied, 1);
+ }
+@@ -192,12 +227,14 @@ void __init apply_alternatives_all(void)
+ stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
+ }
+
+-void apply_alternatives(void *start, size_t length)
++#ifdef CONFIG_MODULES
++void apply_alternatives_module(void *start, size_t length)
+ {
+ struct alt_region region = {
+ .begin = start,
+ .end = start + length,
+ };
+
+- __apply_alternatives(&region, false);
++ __apply_alternatives(&region, true);
+ }
++#endif
+diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
+index 155fd91e78f4..f0f27aeefb73 100644
+--- a/arch/arm64/kernel/module.c
++++ b/arch/arm64/kernel/module.c
+@@ -448,9 +448,8 @@ int module_finalize(const Elf_Ehdr *hdr,
+ const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+
+ for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
+- if (strcmp(".altinstructions", secstrs + s->sh_name) == 0) {
+- apply_alternatives((void *)s->sh_addr, s->sh_size);
+- }
++ if (strcmp(".altinstructions", secstrs + s->sh_name) == 0)
++ apply_alternatives_module((void *)s->sh_addr, s->sh_size);
+ #ifdef CONFIG_ARM64_MODULE_PLTS
+ if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) &&
+ !strcmp(".text.ftrace_trampoline", secstrs + s->sh_name))
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index f3e2e3aec0b0..2faa9863d2e5 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -179,7 +179,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
+ * This is the secondary CPU boot entry. We're using this CPUs
+ * idle thread stack, but a set of temporary page tables.
+ */
+-asmlinkage void secondary_start_kernel(void)
++asmlinkage notrace void secondary_start_kernel(void)
+ {
+ u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
+ struct mm_struct *mm = &init_mm;
+diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
+index a96ec0181818..4ed7ffa26d27 100644
+--- a/arch/arm64/mm/dma-mapping.c
++++ b/arch/arm64/mm/dma-mapping.c
+@@ -588,13 +588,14 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
+ size >> PAGE_SHIFT);
+ return NULL;
+ }
+- if (!coherent)
+- __dma_flush_area(page_to_virt(page), iosize);
+-
+ addr = dma_common_contiguous_remap(page, size, VM_USERMAP,
+ prot,
+ __builtin_return_address(0));
+- if (!addr) {
++ if (addr) {
++ memset(addr, 0, size);
++ if (!coherent)
++ __dma_flush_area(page_to_virt(page), iosize);
++ } else {
+ iommu_dma_unmap_page(dev, *handle, iosize, 0, attrs);
+ dma_release_from_contiguous(dev, page,
+ size >> PAGE_SHIFT);
+diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
+index 8fb280e33114..b483152875b5 100644
+--- a/arch/ia64/kernel/perfmon.c
++++ b/arch/ia64/kernel/perfmon.c
+@@ -2278,17 +2278,15 @@ pfm_smpl_buffer_alloc(struct task_struct *task, struct file *filp, pfm_context_t
+ DPRINT(("smpl_buf @%p\n", smpl_buf));
+
+ /* allocate vma */
+- vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
++ vma = vm_area_alloc(mm);
+ if (!vma) {
+ DPRINT(("Cannot allocate vma\n"));
+ goto error_kmem;
+ }
+- INIT_LIST_HEAD(&vma->anon_vma_chain);
+
+ /*
+ * partially initialize the vma for the sampling buffer
+ */
+- vma->vm_mm = mm;
+ vma->vm_file = get_file(filp);
+ vma->vm_flags = VM_READ|VM_MAYREAD|VM_DONTEXPAND|VM_DONTDUMP;
+ vma->vm_page_prot = PAGE_READONLY; /* XXX may need to change */
+@@ -2346,7 +2344,7 @@ pfm_smpl_buffer_alloc(struct task_struct *task, struct file *filp, pfm_context_t
+ return 0;
+
+ error:
+- kmem_cache_free(vm_area_cachep, vma);
++ vm_area_free(vma);
+ error_kmem:
+ pfm_rvfree(smpl_buf, size);
+
+diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
+index 18278b448530..bdb14a369137 100644
+--- a/arch/ia64/mm/init.c
++++ b/arch/ia64/mm/init.c
+@@ -114,10 +114,8 @@ ia64_init_addr_space (void)
+ * the problem. When the process attempts to write to the register backing store
+ * for the first time, it will get a SEGFAULT in this case.
+ */
+- vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
++ vma = vm_area_alloc(current->mm);
+ if (vma) {
+- INIT_LIST_HEAD(&vma->anon_vma_chain);
+- vma->vm_mm = current->mm;
+ vma->vm_start = current->thread.rbs_bot & PAGE_MASK;
+ vma->vm_end = vma->vm_start + PAGE_SIZE;
+ vma->vm_flags = VM_DATA_DEFAULT_FLAGS|VM_GROWSUP|VM_ACCOUNT;
+@@ -125,7 +123,7 @@ ia64_init_addr_space (void)
+ down_write(&current->mm->mmap_sem);
+ if (insert_vm_struct(current->mm, vma)) {
+ up_write(&current->mm->mmap_sem);
+- kmem_cache_free(vm_area_cachep, vma);
++ vm_area_free(vma);
+ return;
+ }
+ up_write(&current->mm->mmap_sem);
+@@ -133,10 +131,8 @@ ia64_init_addr_space (void)
+
+ /* map NaT-page at address zero to speed up speculative dereferencing of NULL: */
+ if (!(current->personality & MMAP_PAGE_ZERO)) {
+- vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
++ vma = vm_area_alloc(current->mm);
+ if (vma) {
+- INIT_LIST_HEAD(&vma->anon_vma_chain);
+- vma->vm_mm = current->mm;
+ vma->vm_end = PAGE_SIZE;
+ vma->vm_page_prot = __pgprot(pgprot_val(PAGE_READONLY) | _PAGE_MA_NAT);
+ vma->vm_flags = VM_READ | VM_MAYREAD | VM_IO |
+@@ -144,7 +140,7 @@ ia64_init_addr_space (void)
+ down_write(&current->mm->mmap_sem);
+ if (insert_vm_struct(current->mm, vma)) {
+ up_write(&current->mm->mmap_sem);
+- kmem_cache_free(vm_area_cachep, vma);
++ vm_area_free(vma);
+ return;
+ }
+ up_write(&current->mm->mmap_sem);
+diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h
+index 8b707c249026..12fe700632f4 100644
+--- a/arch/m68k/include/asm/mcf_pgalloc.h
++++ b/arch/m68k/include/asm/mcf_pgalloc.h
+@@ -44,6 +44,7 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
+ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t page,
+ unsigned long address)
+ {
++ pgtable_page_dtor(page);
+ __free_page(page);
+ }
+
+@@ -74,8 +75,9 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm,
+ return page;
+ }
+
+-extern inline void pte_free(struct mm_struct *mm, struct page *page)
++static inline void pte_free(struct mm_struct *mm, struct page *page)
+ {
++ pgtable_page_dtor(page);
+ __free_page(page);
+ }
+
+diff --git a/arch/nds32/kernel/setup.c b/arch/nds32/kernel/setup.c
+index 2f5b2ccebe47..63a1a5ef5219 100644
+--- a/arch/nds32/kernel/setup.c
++++ b/arch/nds32/kernel/setup.c
+@@ -278,7 +278,8 @@ static void __init setup_memory(void)
+
+ void __init setup_arch(char **cmdline_p)
+ {
+- early_init_devtree( __dtb_start);
++ early_init_devtree(__atags_pointer ? \
++ phys_to_virt(__atags_pointer) : __dtb_start);
+
+ setup_cpuinfo();
+
+diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
+index 690d55272ba6..0c826ad6e994 100644
+--- a/arch/openrisc/kernel/entry.S
++++ b/arch/openrisc/kernel/entry.S
+@@ -277,12 +277,6 @@ EXCEPTION_ENTRY(_data_page_fault_handler)
+ l.addi r3,r1,0 // pt_regs
+ /* r4 set be EXCEPTION_HANDLE */ // effective address of fault
+
+- /*
+- * __PHX__: TODO
+- *
+- * all this can be written much simpler. look at
+- * DTLB miss handler in the CONFIG_GUARD_PROTECTED_CORE part
+- */
+ #ifdef CONFIG_OPENRISC_NO_SPR_SR_DSX
+ l.lwz r6,PT_PC(r3) // address of an offending insn
+ l.lwz r6,0(r6) // instruction that caused pf
+@@ -314,7 +308,7 @@ EXCEPTION_ENTRY(_data_page_fault_handler)
+
+ #else
+
+- l.lwz r6,PT_SR(r3) // SR
++ l.mfspr r6,r0,SPR_SR // SR
+ l.andi r6,r6,SPR_SR_DSX // check for delay slot exception
+ l.sfne r6,r0 // exception happened in delay slot
+ l.bnf 7f
+diff --git a/arch/openrisc/kernel/head.S b/arch/openrisc/kernel/head.S
+index fb02b2a1d6f2..9fc6b60140f0 100644
+--- a/arch/openrisc/kernel/head.S
++++ b/arch/openrisc/kernel/head.S
+@@ -210,8 +210,7 @@
+ * r4 - EEAR exception EA
+ * r10 - current pointing to current_thread_info struct
+ * r12 - syscall 0, since we didn't come from syscall
+- * r13 - temp it actually contains new SR, not needed anymore
+- * r31 - handler address of the handler we'll jump to
++ * r30 - handler address of the handler we'll jump to
+ *
+ * handler has to save remaining registers to the exception
+ * ksp frame *before* tainting them!
+@@ -244,6 +243,7 @@
+ /* r1 is KSP, r30 is __pa(KSP) */ ;\
+ tophys (r30,r1) ;\
+ l.sw PT_GPR12(r30),r12 ;\
++ /* r4 use for tmp before EA */ ;\
+ l.mfspr r12,r0,SPR_EPCR_BASE ;\
+ l.sw PT_PC(r30),r12 ;\
+ l.mfspr r12,r0,SPR_ESR_BASE ;\
+@@ -263,7 +263,10 @@
+ /* r12 == 1 if we come from syscall */ ;\
+ CLEAR_GPR(r12) ;\
+ /* ----- turn on MMU ----- */ ;\
+- l.ori r30,r0,(EXCEPTION_SR) ;\
++ /* Carry DSX into exception SR */ ;\
++ l.mfspr r30,r0,SPR_SR ;\
++ l.andi r30,r30,SPR_SR_DSX ;\
++ l.ori r30,r30,(EXCEPTION_SR) ;\
+ l.mtspr r0,r30,SPR_ESR_BASE ;\
+ /* r30: EA address of handler */ ;\
+ LOAD_SYMBOL_2_GPR(r30,handler) ;\
+diff --git a/arch/openrisc/kernel/traps.c b/arch/openrisc/kernel/traps.c
+index 113c175fe469..f35b485555db 100644
+--- a/arch/openrisc/kernel/traps.c
++++ b/arch/openrisc/kernel/traps.c
+@@ -317,7 +317,7 @@ static inline int in_delay_slot(struct pt_regs *regs)
+ return 0;
+ }
+ #else
+- return regs->sr & SPR_SR_DSX;
++ return mfspr(SPR_SR) & SPR_SR_DSX;
+ #endif
+ }
+
+diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h
+index 6f84b6acc86e..8a63515f03bf 100644
+--- a/arch/parisc/include/asm/spinlock.h
++++ b/arch/parisc/include/asm/spinlock.h
+@@ -20,7 +20,6 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
+ {
+ volatile unsigned int *a;
+
+- mb();
+ a = __ldcw_align(x);
+ while (__ldcw(a) == 0)
+ while (*a == 0)
+@@ -30,17 +29,16 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
+ local_irq_disable();
+ } else
+ cpu_relax();
+- mb();
+ }
+ #define arch_spin_lock_flags arch_spin_lock_flags
+
+ static inline void arch_spin_unlock(arch_spinlock_t *x)
+ {
+ volatile unsigned int *a;
+- mb();
++
+ a = __ldcw_align(x);
+- *a = 1;
+ mb();
++ *a = 1;
+ }
+
+ static inline int arch_spin_trylock(arch_spinlock_t *x)
+@@ -48,10 +46,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *x)
+ volatile unsigned int *a;
+ int ret;
+
+- mb();
+ a = __ldcw_align(x);
+ ret = __ldcw(a) != 0;
+- mb();
+
+ return ret;
+ }
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index 4886a6db42e9..5f7e57fcaeef 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -629,12 +629,12 @@ cas_action:
+ stw %r1, 4(%sr2,%r20)
+ #endif
+ /* The load and store could fail */
+-1: ldw,ma 0(%r26), %r28
++1: ldw 0(%r26), %r28
+ sub,<> %r28, %r25, %r0
+-2: stw,ma %r24, 0(%r26)
++2: stw %r24, 0(%r26)
+ /* Free lock */
+ sync
+- stw,ma %r20, 0(%sr2,%r20)
++ stw %r20, 0(%sr2,%r20)
+ #if ENABLE_LWS_DEBUG
+ /* Clear thread register indicator */
+ stw %r0, 4(%sr2,%r20)
+@@ -798,30 +798,30 @@ cas2_action:
+ ldo 1(%r0),%r28
+
+ /* 8bit CAS */
+-13: ldb,ma 0(%r26), %r29
++13: ldb 0(%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+-14: stb,ma %r24, 0(%r26)
++14: stb %r24, 0(%r26)
+ b cas2_end
+ copy %r0, %r28
+ nop
+ nop
+
+ /* 16bit CAS */
+-15: ldh,ma 0(%r26), %r29
++15: ldh 0(%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+-16: sth,ma %r24, 0(%r26)
++16: sth %r24, 0(%r26)
+ b cas2_end
+ copy %r0, %r28
+ nop
+ nop
+
+ /* 32bit CAS */
+-17: ldw,ma 0(%r26), %r29
++17: ldw 0(%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+-18: stw,ma %r24, 0(%r26)
++18: stw %r24, 0(%r26)
+ b cas2_end
+ copy %r0, %r28
+ nop
+@@ -829,10 +829,10 @@ cas2_action:
+
+ /* 64bit CAS */
+ #ifdef CONFIG_64BIT
+-19: ldd,ma 0(%r26), %r29
++19: ldd 0(%r26), %r29
+ sub,*= %r29, %r25, %r0
+ b,n cas2_end
+-20: std,ma %r24, 0(%r26)
++20: std %r24, 0(%r26)
+ copy %r0, %r28
+ #else
+ /* Compare first word */
+@@ -851,7 +851,7 @@ cas2_action:
+ cas2_end:
+ /* Free lock */
+ sync
+- stw,ma %r20, 0(%sr2,%r20)
++ stw %r20, 0(%sr2,%r20)
+ /* Enable interrupts */
+ ssm PSW_SM_I, %r0
+ /* Return to userspace, set no error */
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 9ca7148b5881..6d6cf14009cf 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -579,9 +579,6 @@ static void nmi_stop_this_cpu(struct pt_regs *regs)
+ nmi_ipi_busy_count--;
+ nmi_ipi_unlock();
+
+- /* Remove this CPU */
+- set_cpu_online(smp_processor_id(), false);
+-
+ spin_begin();
+ while (1)
+ spin_cpu_relax();
+@@ -596,9 +593,6 @@ void smp_send_stop(void)
+
+ static void stop_this_cpu(void *dummy)
+ {
+- /* Remove this CPU */
+- set_cpu_online(smp_processor_id(), false);
+-
+ hard_irq_disable();
+ spin_begin();
+ while (1)
+diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c
+index b74cbfbce2d0..7bcdaed15703 100644
+--- a/arch/riscv/kernel/irq.c
++++ b/arch/riscv/kernel/irq.c
+@@ -16,10 +16,6 @@
+ #include <linux/irqchip.h>
+ #include <linux/irqdomain.h>
+
+-#ifdef CONFIG_RISCV_INTC
+-#include <linux/irqchip/irq-riscv-intc.h>
+-#endif
+-
+ void __init init_IRQ(void)
+ {
+ irqchip_init();
+diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
+index 5dddba301d0a..ac7600b8709a 100644
+--- a/arch/riscv/kernel/module.c
++++ b/arch/riscv/kernel/module.c
+@@ -252,14 +252,14 @@ static int apply_r_riscv_align_rela(struct module *me, u32 *location,
+ static int apply_r_riscv_add32_rela(struct module *me, u32 *location,
+ Elf_Addr v)
+ {
+- *(u32 *)location += (*(u32 *)v);
++ *(u32 *)location += (u32)v;
+ return 0;
+ }
+
+ static int apply_r_riscv_sub32_rela(struct module *me, u32 *location,
+ Elf_Addr v)
+ {
+- *(u32 *)location -= (*(u32 *)v);
++ *(u32 *)location -= (u32)v;
+ return 0;
+ }
+
+diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
+index ba3e80712797..9f82a7e34c64 100644
+--- a/arch/riscv/kernel/ptrace.c
++++ b/arch/riscv/kernel/ptrace.c
+@@ -50,7 +50,7 @@ static int riscv_gpr_set(struct task_struct *target,
+ struct pt_regs *regs;
+
+ regs = task_pt_regs(target);
+- ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0, -1);
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, regs, 0, -1);
+ return ret;
+ }
+
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index dd2bcf0e7d00..7b08b0ffa2c7 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -1391,6 +1391,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
+ goto free_addrs;
+ }
+ if (bpf_jit_prog(&jit, fp)) {
++ bpf_jit_binary_free(header);
+ fp = orig_fp;
+ goto free_addrs;
+ }
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index d7a9dea8563d..377d50509653 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -980,6 +980,7 @@ static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
+
+ extern unsigned long arch_align_stack(unsigned long sp);
+ extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
++extern void free_kernel_image_pages(void *begin, void *end);
+
+ void default_idle(void);
+ #ifdef CONFIG_XEN
+diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
+index bd090367236c..34cffcef7375 100644
+--- a/arch/x86/include/asm/set_memory.h
++++ b/arch/x86/include/asm/set_memory.h
+@@ -46,6 +46,7 @@ int set_memory_np(unsigned long addr, int numpages);
+ int set_memory_4k(unsigned long addr, int numpages);
+ int set_memory_encrypted(unsigned long addr, int numpages);
+ int set_memory_decrypted(unsigned long addr, int numpages);
++int set_memory_np_noalias(unsigned long addr, int numpages);
+
+ int set_memory_array_uc(unsigned long *addr, int addrinarray);
+ int set_memory_array_wc(unsigned long *addr, int addrinarray);
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index 1c2cfa0644aa..97ccf4c3b45b 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -190,8 +190,11 @@ static void save_microcode_patch(void *data, unsigned int size)
+ p = memdup_patch(data, size);
+ if (!p)
+ pr_err("Error allocating buffer %p\n", data);
+- else
++ else {
+ list_replace(&iter->plist, &p->plist);
++ kfree(iter->data);
++ kfree(iter);
++ }
+ }
+ }
+
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index d79a18b4cf9d..4c53d12ca933 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -138,6 +138,7 @@ static unsigned long kvm_get_tsc_khz(void)
+ src = &hv_clock[cpu].pvti;
+ tsc_khz = pvclock_tsc_khz(src);
+ put_cpu();
++ setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);
+ return tsc_khz;
+ }
+
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index f5d30c68fd09..f02ecaf97904 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -222,6 +222,11 @@ static void notrace start_secondary(void *unused)
+ #ifdef CONFIG_X86_32
+ /* switch away from the initial page table */
+ load_cr3(swapper_pg_dir);
++ /*
++ * Initialize the CR4 shadow before doing anything that could
++ * try to read it.
++ */
++ cr4_init_shadow();
+ __flush_tlb_all();
+ #endif
+ load_current_idt();
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 12cad70acc3b..71dd00a140d2 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -11791,7 +11791,6 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
+ {
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+- u32 msr_entry_idx;
+ u32 exit_qual;
+ int r;
+
+@@ -11813,10 +11812,10 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
+ nested_get_vmcs12_pages(vcpu, vmcs12);
+
+ r = EXIT_REASON_MSR_LOAD_FAIL;
+- msr_entry_idx = nested_vmx_load_msr(vcpu,
+- vmcs12->vm_entry_msr_load_addr,
+- vmcs12->vm_entry_msr_load_count);
+- if (msr_entry_idx)
++ exit_qual = nested_vmx_load_msr(vcpu,
++ vmcs12->vm_entry_msr_load_addr,
++ vmcs12->vm_entry_msr_load_count);
++ if (exit_qual)
+ goto fail;
+
+ /*
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index 83241eb71cd4..acfab322fbe0 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -775,13 +775,44 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
+ }
+ }
+
++/*
++ * begin/end can be in the direct map or the "high kernel mapping"
++ * used for the kernel image only. free_init_pages() will do the
++ * right thing for either kind of address.
++ */
++void free_kernel_image_pages(void *begin, void *end)
++{
++ unsigned long begin_ul = (unsigned long)begin;
++ unsigned long end_ul = (unsigned long)end;
++ unsigned long len_pages = (end_ul - begin_ul) >> PAGE_SHIFT;
++
++
++ free_init_pages("unused kernel image", begin_ul, end_ul);
++
++ /*
++ * PTI maps some of the kernel into userspace. For performance,
++ * this includes some kernel areas that do not contain secrets.
++ * Those areas might be adjacent to the parts of the kernel image
++ * being freed, which may contain secrets. Remove the "high kernel
++ * image mapping" for these freed areas, ensuring they are not even
++ * potentially vulnerable to Meltdown regardless of the specific
++ * optimizations PTI is currently using.
++ *
++ * The "noalias" prevents unmapping the direct map alias which is
++ * needed to access the freed pages.
++ *
++ * This is only valid for 64bit kernels. 32bit has only one mapping
++ * which can't be treated in this way for obvious reasons.
++ */
++ if (IS_ENABLED(CONFIG_X86_64) && cpu_feature_enabled(X86_FEATURE_PTI))
++ set_memory_np_noalias(begin_ul, len_pages);
++}
++
+ void __ref free_initmem(void)
+ {
+ e820__reallocate_tables();
+
+- free_init_pages("unused kernel",
+- (unsigned long)(&__init_begin),
+- (unsigned long)(&__init_end));
++ free_kernel_image_pages(&__init_begin, &__init_end);
+ }
+
+ #ifdef CONFIG_BLK_DEV_INITRD
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 20d8bf5fbceb..3060e1dda2ad 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -1283,12 +1283,8 @@ void mark_rodata_ro(void)
+ set_memory_ro(start, (end-start) >> PAGE_SHIFT);
+ #endif
+
+- free_init_pages("unused kernel",
+- (unsigned long) __va(__pa_symbol(text_end)),
+- (unsigned long) __va(__pa_symbol(rodata_start)));
+- free_init_pages("unused kernel",
+- (unsigned long) __va(__pa_symbol(rodata_end)),
+- (unsigned long) __va(__pa_symbol(_sdata)));
++ free_kernel_image_pages((void *)text_end, (void *)rodata_start);
++ free_kernel_image_pages((void *)rodata_end, (void *)_sdata);
+
+ debug_checkwx();
+
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 29505724202a..8d6c34fe49be 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -53,6 +53,7 @@ static DEFINE_SPINLOCK(cpa_lock);
+ #define CPA_FLUSHTLB 1
+ #define CPA_ARRAY 2
+ #define CPA_PAGES_ARRAY 4
++#define CPA_NO_CHECK_ALIAS 8 /* Do not search for aliases */
+
+ #ifdef CONFIG_PROC_FS
+ static unsigned long direct_pages_count[PG_LEVEL_NUM];
+@@ -1486,6 +1487,9 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages,
+
+ /* No alias checking for _NX bit modifications */
+ checkalias = (pgprot_val(mask_set) | pgprot_val(mask_clr)) != _PAGE_NX;
++ /* Has caller explicitly disabled alias checking? */
++ if (in_flag & CPA_NO_CHECK_ALIAS)
++ checkalias = 0;
+
+ ret = __change_page_attr_set_clr(&cpa, checkalias);
+
+@@ -1772,6 +1776,15 @@ int set_memory_np(unsigned long addr, int numpages)
+ return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_PRESENT), 0);
+ }
+
++int set_memory_np_noalias(unsigned long addr, int numpages)
++{
++ int cpa_flags = CPA_NO_CHECK_ALIAS;
++
++ return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
++ __pgprot(_PAGE_PRESENT), 0,
++ cpa_flags, NULL);
++}
++
+ int set_memory_4k(unsigned long addr, int numpages)
+ {
+ return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index 3080e18cb859..62b5f3f21b4b 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -357,7 +357,7 @@ static const char *const blk_mq_rq_state_name_array[] = {
+
+ static const char *blk_mq_rq_state_name(enum mq_rq_state rq_state)
+ {
+- if (WARN_ON_ONCE((unsigned int)rq_state >
++ if (WARN_ON_ONCE((unsigned int)rq_state >=
+ ARRAY_SIZE(blk_mq_rq_state_name_array)))
+ return "(?)";
+ return blk_mq_rq_state_name_array[rq_state];
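[Note on the blk-mq-debugfs hunk above: this one-character change is the classic ARRAY_SIZE off-by-one. An array of N names has valid indices 0..N-1, so the guard must reject index >= N; the same >= pattern recurs in the sed-opal, k3dma and nouveau hunks further down. A minimal sketch with hypothetical state names:]

    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static const char *const state_names[] = {
        "idle", "in_flight", "complete"
    };

    static const char *state_name(unsigned int state)
    {
        /* '>' would let state == 3 fall through to an OOB read */
        if (state >= ARRAY_SIZE(state_names))
            return "(?)";
        return state_names[state];
    }

    int main(void)
    {
        printf("%s %s\n", state_name(1), state_name(3)); /* in_flight (?) */
        return 0;
    }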
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 90ffd8151c57..ed9b11e6b997 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1174,6 +1174,9 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
+
+ #define BLK_MQ_RESOURCE_DELAY 3 /* ms units */
+
++/*
++ * Returns true if we did some work AND can potentially do more.
++ */
+ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
+ bool got_budget)
+ {
+@@ -1304,8 +1307,17 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
+ blk_mq_run_hw_queue(hctx, true);
+ else if (needs_restart && (ret == BLK_STS_RESOURCE))
+ blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY);
++
++ return false;
+ }
+
++ /*
++ * If the host/device is unable to accept more work, inform the
++ * caller of that.
++ */
++ if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
++ return false;
++
+ return (queued + errors) != 0;
+ }
+
+diff --git a/block/sed-opal.c b/block/sed-opal.c
+index 945f4b8610e0..e0de4dd448b3 100644
+--- a/block/sed-opal.c
++++ b/block/sed-opal.c
+@@ -877,7 +877,7 @@ static size_t response_get_string(const struct parsed_resp *resp, int n,
+ return 0;
+ }
+
+- if (n > resp->num) {
++ if (n >= resp->num) {
+ pr_debug("Response has %d tokens. Can't access %d\n",
+ resp->num, n);
+ return 0;
+@@ -916,7 +916,7 @@ static u64 response_get_u64(const struct parsed_resp *resp, int n)
+ return 0;
+ }
+
+- if (n > resp->num) {
++ if (n >= resp->num) {
+ pr_debug("Response has %d tokens. Can't access %d\n",
+ resp->num, n);
+ return 0;
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 30a572956557..70a0f8b2f6c1 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -2031,6 +2031,17 @@ static inline void acpi_ec_query_exit(void)
+ }
+ }
+
++static const struct dmi_system_id acpi_ec_no_wakeup[] = {
++ {
++ .ident = "Thinkpad X1 Carbon 6th",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_FAMILY, "Thinkpad X1 Carbon 6th"),
++ },
++ },
++ { },
++};
++
+ int __init acpi_ec_init(void)
+ {
+ int result;
+@@ -2041,6 +2052,15 @@ int __init acpi_ec_init(void)
+ if (result)
+ return result;
+
++ /*
++ * Disable EC wakeup on following systems to prevent periodic
++ * wakeup from EC GPE.
++ */
++ if (dmi_check_system(acpi_ec_no_wakeup)) {
++ ec_no_wakeup = true;
++ pr_debug("Disabling EC wakeup on suspend-to-idle\n");
++ }
++
+ /* Drivers must be started after acpi_ec_query_init() */
+ dsdt_fail = acpi_bus_register_driver(&acpi_ec_driver);
+ /*
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 964106d173bd..3b1651426e12 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -408,6 +408,8 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ const guid_t *guid;
+ int rc, i;
+
++ if (cmd_rc)
++ *cmd_rc = -EINVAL;
+ func = cmd;
+ if (cmd == ND_CMD_CALL) {
+ call_pkg = buf;
+@@ -518,6 +520,8 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ * If we return an error (like elsewhere) then caller wouldn't
+ * be able to rely upon data returned to make calculation.
+ */
++ if (cmd_rc)
++ *cmd_rc = 0;
+ return 0;
+ }
+
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index e5d90977caec..5e4befdd8562 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -35,6 +35,7 @@
+ #include <linux/kernel.h>
+ #include <linux/gfp.h>
+ #include <linux/module.h>
++#include <linux/nospec.h>
+ #include <linux/blkdev.h>
+ #include <linux/delay.h>
+ #include <linux/interrupt.h>
+@@ -1146,10 +1147,12 @@ static ssize_t ahci_led_store(struct ata_port *ap, const char *buf,
+
+ /* get the slot number from the message */
+ pmp = (state & EM_MSG_LED_PMP_SLOT) >> 8;
+- if (pmp < EM_MAX_SLOTS)
++ if (pmp < EM_MAX_SLOTS) {
++ pmp = array_index_nospec(pmp, EM_MAX_SLOTS);
+ emp = &pp->em_priv[pmp];
+- else
++ } else {
+ return -EINVAL;
++ }
+
+ /* mask off the activity bits if we are in sw_activity
+ * mode, user should turn off sw_activity before setting
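[Note on the libahci hunk above: this is the usual Spectre-v1 hardening, also applied in the amdgpu_pm and i915/gvt hunks below. The untrusted slot number is validated and then clamped with array_index_nospec() so a mispredicted bounds check cannot index out of range speculatively. array_index_nospec() is a kernel primitive; the userspace sketch below only reproduces the branch-free masking idea and assumes arithmetic right shift of signed values, as the kernel does:]

    #include <stdint.h>
    #include <stdio.h>

    /* All-ones when index < size, zero otherwise; valid while both
     * values stay below 2^63, which covers any real array bound. */
    static uint64_t index_mask_nospec(uint64_t index, uint64_t size)
    {
        return (uint64_t)(~(int64_t)(index | (size - 1 - index)) >> 63);
    }

    static unsigned int clamp_slot(unsigned int slot, unsigned int nr_slots)
    {
        /* the caller already rejected slot >= nr_slots; the mask
         * forces any speculatively out-of-range value to slot 0 */
        return slot & (unsigned int)index_mask_nospec(slot, nr_slots);
    }

    int main(void)
    {
        printf("%u %u\n", clamp_slot(3, 8), clamp_slot(9, 8)); /* 3 0 */
        return 0;
    }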
+diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
+index a500e738d929..f0ccb0a2b3bf 100644
+--- a/drivers/block/drbd/drbd_req.c
++++ b/drivers/block/drbd/drbd_req.c
+@@ -1244,8 +1244,8 @@ drbd_request_prepare(struct drbd_device *device, struct bio *bio, unsigned long
+ _drbd_start_io_acct(device, req);
+
+ /* process discards always from our submitter thread */
+- if ((bio_op(bio) & REQ_OP_WRITE_ZEROES) ||
+- (bio_op(bio) & REQ_OP_DISCARD))
++ if (bio_op(bio) == REQ_OP_WRITE_ZEROES ||
++ bio_op(bio) == REQ_OP_DISCARD)
+ goto queue_for_submitter_thread;
+
+ if (rw == WRITE && req->private_bio && req->i.size
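[Note on the drbd hunk above: the fix swaps bitwise tests for equality. REQ_OP_* values are small enumerated opcodes, not single-bit flags, so op & REQ_OP_DISCARD also matches unrelated opcodes that merely share bits. Sketch with illustrative opcode values mirroring the kernel's enum req_opf:]

    #include <stdbool.h>
    #include <stdio.h>

    enum req_op {
        REQ_OP_READ = 0, REQ_OP_WRITE = 1,
        REQ_OP_DISCARD = 3, REQ_OP_WRITE_ZEROES = 9
    };

    static bool is_discard_buggy(enum req_op op)
    {
        return op & REQ_OP_DISCARD;   /* true for WRITE too: 1 & 3 = 1 */
    }

    static bool is_discard_fixed(enum req_op op)
    {
        return op == REQ_OP_DISCARD;  /* exact opcode match */
    }

    int main(void)
    {
        printf("%d %d\n", is_discard_buggy(REQ_OP_WRITE),
               is_discard_fixed(REQ_OP_WRITE));   /* 1 0 */
        return 0;
    }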
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 64278f472efe..a1c0c1d1f264 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -76,6 +76,7 @@ struct link_dead_args {
+ #define NBD_HAS_CONFIG_REF 4
+ #define NBD_BOUND 5
+ #define NBD_DESTROY_ON_DISCONNECT 6
++#define NBD_DISCONNECT_ON_CLOSE 7
+
+ struct nbd_config {
+ u32 flags;
+@@ -138,6 +139,7 @@ static void nbd_config_put(struct nbd_device *nbd);
+ static void nbd_connect_reply(struct genl_info *info, int index);
+ static int nbd_genl_status(struct sk_buff *skb, struct genl_info *info);
+ static void nbd_dead_link_work(struct work_struct *work);
++static void nbd_disconnect_and_put(struct nbd_device *nbd);
+
+ static inline struct device *nbd_to_dev(struct nbd_device *nbd)
+ {
+@@ -1291,6 +1293,12 @@ out:
+ static void nbd_release(struct gendisk *disk, fmode_t mode)
+ {
+ struct nbd_device *nbd = disk->private_data;
++ struct block_device *bdev = bdget_disk(disk, 0);
++
++ if (test_bit(NBD_DISCONNECT_ON_CLOSE, &nbd->config->runtime_flags) &&
++ bdev->bd_openers == 0)
++ nbd_disconnect_and_put(nbd);
++
+ nbd_config_put(nbd);
+ nbd_put(nbd);
+ }
+@@ -1690,6 +1698,10 @@ again:
+ &config->runtime_flags);
+ put_dev = true;
+ }
++ if (flags & NBD_CFLAG_DISCONNECT_ON_CLOSE) {
++ set_bit(NBD_DISCONNECT_ON_CLOSE,
++ &config->runtime_flags);
++ }
+ }
+
+ if (info->attrs[NBD_ATTR_SOCKETS]) {
+@@ -1734,6 +1746,16 @@ out:
+ return ret;
+ }
+
++static void nbd_disconnect_and_put(struct nbd_device *nbd)
++{
++ mutex_lock(&nbd->config_lock);
++ nbd_disconnect(nbd);
++ mutex_unlock(&nbd->config_lock);
++ if (test_and_clear_bit(NBD_HAS_CONFIG_REF,
++ &nbd->config->runtime_flags))
++ nbd_config_put(nbd);
++}
++
+ static int nbd_genl_disconnect(struct sk_buff *skb, struct genl_info *info)
+ {
+ struct nbd_device *nbd;
+@@ -1766,12 +1788,7 @@ static int nbd_genl_disconnect(struct sk_buff *skb, struct genl_info *info)
+ nbd_put(nbd);
+ return 0;
+ }
+- mutex_lock(&nbd->config_lock);
+- nbd_disconnect(nbd);
+- mutex_unlock(&nbd->config_lock);
+- if (test_and_clear_bit(NBD_HAS_CONFIG_REF,
+- &nbd->config->runtime_flags))
+- nbd_config_put(nbd);
++ nbd_disconnect_and_put(nbd);
+ nbd_config_put(nbd);
+ nbd_put(nbd);
+ return 0;
+@@ -1782,7 +1799,7 @@ static int nbd_genl_reconfigure(struct sk_buff *skb, struct genl_info *info)
+ struct nbd_device *nbd = NULL;
+ struct nbd_config *config;
+ int index;
+- int ret = -EINVAL;
++ int ret = 0;
+ bool put_dev = false;
+
+ if (!netlink_capable(skb, CAP_SYS_ADMIN))
+@@ -1822,6 +1839,7 @@ static int nbd_genl_reconfigure(struct sk_buff *skb, struct genl_info *info)
+ !nbd->task_recv) {
+ dev_err(nbd_to_dev(nbd),
+ "not configured, cannot reconfigure\n");
++ ret = -EINVAL;
+ goto out;
+ }
+
+@@ -1846,6 +1864,14 @@ static int nbd_genl_reconfigure(struct sk_buff *skb, struct genl_info *info)
+ &config->runtime_flags))
+ refcount_inc(&nbd->refs);
+ }
++
++ if (flags & NBD_CFLAG_DISCONNECT_ON_CLOSE) {
++ set_bit(NBD_DISCONNECT_ON_CLOSE,
++ &config->runtime_flags);
++ } else {
++ clear_bit(NBD_DISCONNECT_ON_CLOSE,
++ &config->runtime_flags);
++ }
+ }
+
+ if (info->attrs[NBD_ATTR_SOCKETS]) {
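[Note on the nbd hunk above: reconfigure gains a set-or-clear pair rather than a bare set_bit(). A boolean carried in a netlink flags word must be cleared when absent, otherwise one reconfigure that set it would leave the behaviour sticky for the life of the device. A sketch with a hypothetical flag:]

    #include <stdio.h>

    #define OPT_DISCONNECT_ON_CLOSE (1u << 0)

    static void apply_flags(unsigned int requested, unsigned int *runtime)
    {
        if (requested & OPT_DISCONNECT_ON_CLOSE)
            *runtime |= OPT_DISCONNECT_ON_CLOSE;
        else
            *runtime &= ~OPT_DISCONNECT_ON_CLOSE;  /* explicit clear */
    }

    int main(void)
    {
        unsigned int runtime = OPT_DISCONNECT_ON_CLOSE;
        apply_flags(0, &runtime);   /* flag omitted on this call */
        printf("%u\n", runtime);    /* 0: the stale bit is cleared */
        return 0;
    }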
+diff --git a/drivers/char/ipmi/kcs_bmc.c b/drivers/char/ipmi/kcs_bmc.c
+index fbfc05e3f3d1..bb882ab161fe 100644
+--- a/drivers/char/ipmi/kcs_bmc.c
++++ b/drivers/char/ipmi/kcs_bmc.c
+@@ -210,34 +210,23 @@ static void kcs_bmc_handle_cmd(struct kcs_bmc *kcs_bmc)
+ int kcs_bmc_handle_event(struct kcs_bmc *kcs_bmc)
+ {
+ unsigned long flags;
+- int ret = 0;
++ int ret = -ENODATA;
+ u8 status;
+
+ spin_lock_irqsave(&kcs_bmc->lock, flags);
+
+- if (!kcs_bmc->running) {
+- kcs_force_abort(kcs_bmc);
+- ret = -ENODEV;
+- goto out_unlock;
+- }
+-
+- status = read_status(kcs_bmc) & (KCS_STATUS_IBF | KCS_STATUS_CMD_DAT);
+-
+- switch (status) {
+- case KCS_STATUS_IBF | KCS_STATUS_CMD_DAT:
+- kcs_bmc_handle_cmd(kcs_bmc);
+- break;
+-
+- case KCS_STATUS_IBF:
+- kcs_bmc_handle_data(kcs_bmc);
+- break;
++ status = read_status(kcs_bmc);
++ if (status & KCS_STATUS_IBF) {
++ if (!kcs_bmc->running)
++ kcs_force_abort(kcs_bmc);
++ else if (status & KCS_STATUS_CMD_DAT)
++ kcs_bmc_handle_cmd(kcs_bmc);
++ else
++ kcs_bmc_handle_data(kcs_bmc);
+
+- default:
+- ret = -ENODATA;
+- break;
++ ret = 0;
+ }
+
+-out_unlock:
+ spin_unlock_irqrestore(&kcs_bmc->lock, flags);
+
+ return ret;
+diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile
+index de6d06ac790b..23a7fdcfc4e0 100644
+--- a/drivers/clk/Makefile
++++ b/drivers/clk/Makefile
+@@ -94,7 +94,7 @@ obj-$(CONFIG_ARCH_SPRD) += sprd/
+ obj-$(CONFIG_ARCH_STI) += st/
+ obj-$(CONFIG_ARCH_STRATIX10) += socfpga/
+ obj-$(CONFIG_ARCH_SUNXI) += sunxi/
+-obj-$(CONFIG_ARCH_SUNXI) += sunxi-ng/
++obj-$(CONFIG_SUNXI_CCU) += sunxi-ng/
+ obj-$(CONFIG_ARCH_TEGRA) += tegra/
+ obj-y += ti/
+ obj-$(CONFIG_CLK_UNIPHIER) += uniphier/
+diff --git a/drivers/clk/davinci/da8xx-cfgchip.c b/drivers/clk/davinci/da8xx-cfgchip.c
+index c971111d2601..20a120aa147e 100644
+--- a/drivers/clk/davinci/da8xx-cfgchip.c
++++ b/drivers/clk/davinci/da8xx-cfgchip.c
+@@ -672,7 +672,7 @@ static int of_da8xx_usb_phy_clk_init(struct device *dev, struct regmap *regmap)
+
+ usb1 = da8xx_cfgchip_register_usb1_clk48(dev, regmap);
+ if (IS_ERR(usb1)) {
+- if (PTR_ERR(usb0) == -EPROBE_DEFER)
++ if (PTR_ERR(usb1) == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+
+ dev_warn(dev, "Failed to register usb1_clk48 (%ld)\n",
+diff --git a/drivers/clk/sunxi-ng/Makefile b/drivers/clk/sunxi-ng/Makefile
+index 128a40ee5c5e..9ac0fb948101 100644
+--- a/drivers/clk/sunxi-ng/Makefile
++++ b/drivers/clk/sunxi-ng/Makefile
+@@ -1,24 +1,24 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Common objects
+-lib-$(CONFIG_SUNXI_CCU) += ccu_common.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_mmc_timing.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_reset.o
++obj-y += ccu_common.o
++obj-y += ccu_mmc_timing.o
++obj-y += ccu_reset.o
+
+ # Base clock types
+-lib-$(CONFIG_SUNXI_CCU) += ccu_div.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_frac.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_gate.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_mux.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_mult.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_phase.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_sdm.o
++obj-y += ccu_div.o
++obj-y += ccu_frac.o
++obj-y += ccu_gate.o
++obj-y += ccu_mux.o
++obj-y += ccu_mult.o
++obj-y += ccu_phase.o
++obj-y += ccu_sdm.o
+
+ # Multi-factor clocks
+-lib-$(CONFIG_SUNXI_CCU) += ccu_nk.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_nkm.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_nkmp.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_nm.o
+-lib-$(CONFIG_SUNXI_CCU) += ccu_mp.o
++obj-y += ccu_nk.o
++obj-y += ccu_nkm.o
++obj-y += ccu_nkmp.o
++obj-y += ccu_nm.o
++obj-y += ccu_mp.o
+
+ # SoC support
+ obj-$(CONFIG_SUN50I_A64_CCU) += ccu-sun50i-a64.o
+@@ -37,12 +37,3 @@ obj-$(CONFIG_SUN8I_R40_CCU) += ccu-sun8i-r40.o
+ obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80.o
+ obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80-de.o
+ obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80-usb.o
+-
+-# The lib-y file goals is supposed to work only in arch/*/lib or lib/. In our
+-# case, we want to use that goal, but even though lib.a will be properly
+-# generated, it will not be linked in, eventually resulting in a linker error
+-# for missing symbols.
+-#
+-# We can work around that by explicitly adding lib.a to the obj-y goal. This is
+-# an undocumented behaviour, but works well for now.
+-obj-$(CONFIG_SUNXI_CCU) += lib.a
+diff --git a/drivers/clocksource/timer-stm32.c b/drivers/clocksource/timer-stm32.c
+index e5cdc3af684c..2717f88c7904 100644
+--- a/drivers/clocksource/timer-stm32.c
++++ b/drivers/clocksource/timer-stm32.c
+@@ -304,8 +304,10 @@ static int __init stm32_timer_init(struct device_node *node)
+
+ to->private_data = kzalloc(sizeof(struct stm32_timer_private),
+ GFP_KERNEL);
+- if (!to->private_data)
++ if (!to->private_data) {
++ ret = -ENOMEM;
+ goto deinit;
++ }
+
+ rstc = of_reset_control_get(node, NULL);
+ if (!IS_ERR(rstc)) {
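[Note on the timer-stm32 hunk above: this is the standard goto-unwind rule. Every failure site must assign the error code before jumping, otherwise the function can return 0 (or a stale ret) for a failed allocation. A compact sketch:]

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int init_thing(void)
    {
        int ret = 0;
        void *priv = malloc(64);

        if (!priv) {
            ret = -ENOMEM;   /* without this we would return 0 on failure */
            goto deinit;
        }

        /* ... rest of setup ... */
        free(priv);
        return 0;

    deinit:
        /* ... undo partial setup ... */
        return ret;
    }

    int main(void)
    {
        printf("%d\n", init_thing());
        return 0;
    }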
+diff --git a/drivers/dax/device.c b/drivers/dax/device.c
+index aff2c1594220..a26b7016367a 100644
+--- a/drivers/dax/device.c
++++ b/drivers/dax/device.c
+@@ -189,14 +189,16 @@ static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma,
+
+ /* prevent private mappings from being established */
+ if ((vma->vm_flags & VM_MAYSHARE) != VM_MAYSHARE) {
+- dev_info(dev, "%s: %s: fail, attempted private mapping\n",
++ dev_info_ratelimited(dev,
++ "%s: %s: fail, attempted private mapping\n",
+ current->comm, func);
+ return -EINVAL;
+ }
+
+ mask = dax_region->align - 1;
+ if (vma->vm_start & mask || vma->vm_end & mask) {
+- dev_info(dev, "%s: %s: fail, unaligned vma (%#lx - %#lx, %#lx)\n",
++ dev_info_ratelimited(dev,
++ "%s: %s: fail, unaligned vma (%#lx - %#lx, %#lx)\n",
+ current->comm, func, vma->vm_start, vma->vm_end,
+ mask);
+ return -EINVAL;
+@@ -204,13 +206,15 @@ static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma,
+
+ if ((dax_region->pfn_flags & (PFN_DEV|PFN_MAP)) == PFN_DEV
+ && (vma->vm_flags & VM_DONTCOPY) == 0) {
+- dev_info(dev, "%s: %s: fail, dax range requires MADV_DONTFORK\n",
++ dev_info_ratelimited(dev,
++ "%s: %s: fail, dax range requires MADV_DONTFORK\n",
+ current->comm, func);
+ return -EINVAL;
+ }
+
+ if (!vma_is_dax(vma)) {
+- dev_info(dev, "%s: %s: fail, vma is not DAX capable\n",
++ dev_info_ratelimited(dev,
++ "%s: %s: fail, vma is not DAX capable\n",
+ current->comm, func);
+ return -EINVAL;
+ }
+diff --git a/drivers/dma/k3dma.c b/drivers/dma/k3dma.c
+index 26b67455208f..e27adc4ab59d 100644
+--- a/drivers/dma/k3dma.c
++++ b/drivers/dma/k3dma.c
+@@ -794,7 +794,7 @@ static struct dma_chan *k3_of_dma_simple_xlate(struct of_phandle_args *dma_spec,
+ struct k3_dma_dev *d = ofdma->of_dma_data;
+ unsigned int request = dma_spec->args[0];
+
+- if (request > d->dma_requests)
++ if (request >= d->dma_requests)
+ return NULL;
+
+ return dma_get_slave_channel(&(d->chans[request].vc.chan));
+diff --git a/drivers/dma/omap-dma.c b/drivers/dma/omap-dma.c
+index d21c19822feb..56399bd45179 100644
+--- a/drivers/dma/omap-dma.c
++++ b/drivers/dma/omap-dma.c
+@@ -1485,7 +1485,11 @@ static int omap_dma_probe(struct platform_device *pdev)
+ od->ddev.src_addr_widths = OMAP_DMA_BUSWIDTHS;
+ od->ddev.dst_addr_widths = OMAP_DMA_BUSWIDTHS;
+ od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+- od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
++ if (__dma_omap15xx(od->plat->dma_attr))
++ od->ddev.residue_granularity =
++ DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
++ else
++ od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+ od->ddev.max_burst = SZ_16M - 1; /* CCEN: 24bit unsigned */
+ od->ddev.dev = &pdev->dev;
+ INIT_LIST_HEAD(&od->ddev.channels);
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index de1fd59fe136..96a8ab3cec27 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -2924,7 +2924,7 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
+ pd->src_addr_widths = PL330_DMA_BUSWIDTHS;
+ pd->dst_addr_widths = PL330_DMA_BUSWIDTHS;
+ pd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+- pd->residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
++ pd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+ pd->max_burst = ((pl330->quirks & PL330_QUIRK_BROKEN_NO_FLUSHP) ?
+ 1 : PL330_MAX_BURST);
+
+diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
+index 3bb82e511eca..7d3edd713932 100644
+--- a/drivers/edac/edac_mc.c
++++ b/drivers/edac/edac_mc.c
+@@ -215,6 +215,7 @@ const char * const edac_mem_types[] = {
+ [MEM_LRDDR3] = "Load-Reduced-DDR3-RAM",
+ [MEM_DDR4] = "Unbuffered-DDR4",
+ [MEM_RDDR4] = "Registered-DDR4",
++ [MEM_LRDDR4] = "Load-Reduced-DDR4-RAM",
+ [MEM_NVDIMM] = "Non-volatile-RAM",
+ };
+ EXPORT_SYMBOL_GPL(edac_mem_types);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index 361975cf45a9..11e7eadf1166 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -31,7 +31,7 @@
+ #include <linux/power_supply.h>
+ #include <linux/hwmon.h>
+ #include <linux/hwmon-sysfs.h>
+-
++#include <linux/nospec.h>
+
+ static int amdgpu_debugfs_pm_init(struct amdgpu_device *adev);
+
+@@ -309,6 +309,7 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
+ count = -EINVAL;
+ goto fail;
+ }
++ idx = array_index_nospec(idx, ARRAY_SIZE(data.states));
+
+ amdgpu_dpm_get_pp_num_states(adev, &data);
+ state = data.states[idx];
+diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
+index ac9617269a2f..085f0ba564df 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
+@@ -899,7 +899,7 @@ static const struct amdgpu_ring_funcs vce_v3_0_ring_phys_funcs = {
+ .emit_frame_size =
+ 4 + /* vce_v3_0_emit_pipeline_sync */
+ 6, /* amdgpu_vce_ring_emit_fence x1 no user fence */
+- .emit_ib_size = 5, /* vce_v3_0_ring_emit_ib */
++ .emit_ib_size = 4, /* amdgpu_vce_ring_emit_ib */
+ .emit_ib = amdgpu_vce_ring_emit_ib,
+ .emit_fence = amdgpu_vce_ring_emit_fence,
+ .test_ring = amdgpu_vce_ring_test_ring,
+@@ -923,7 +923,7 @@ static const struct amdgpu_ring_funcs vce_v3_0_ring_vm_funcs = {
+ 6 + /* vce_v3_0_emit_vm_flush */
+ 4 + /* vce_v3_0_emit_pipeline_sync */
+ 6 + 6, /* amdgpu_vce_ring_emit_fence x2 vm fence */
+- .emit_ib_size = 4, /* amdgpu_vce_ring_emit_ib */
++ .emit_ib_size = 5, /* vce_v3_0_ring_emit_ib */
+ .emit_ib = vce_v3_0_ring_emit_ib,
+ .emit_vm_flush = vce_v3_0_emit_vm_flush,
+ .emit_pipeline_sync = vce_v3_0_emit_pipeline_sync,
+diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+index 3092f76bdb75..6531ee7f3af4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+@@ -678,9 +678,22 @@ bool dce100_validate_bandwidth(
+ struct dc *dc,
+ struct dc_state *context)
+ {
+- /* TODO implement when needed but for now hardcode max value*/
+- context->bw.dce.dispclk_khz = 681000;
+- context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER;
++ int i;
++ bool at_least_one_pipe = false;
++
++ for (i = 0; i < dc->res_pool->pipe_count; i++) {
++ if (context->res_ctx.pipe_ctx[i].stream)
++ at_least_one_pipe = true;
++ }
++
++ if (at_least_one_pipe) {
++ /* TODO implement when needed but for now hardcode max value*/
++ context->bw.dce.dispclk_khz = 681000;
++ context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER;
++ } else {
++ context->bw.dce.dispclk_khz = 0;
++ context->bw.dce.yclk_khz = 0;
++ }
+
+ return true;
+ }
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+index 200de46bd06b..0d497d0f6056 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+@@ -82,6 +82,7 @@ static void vega12_set_default_registry_data(struct pp_hwmgr *hwmgr)
+
+ data->registry_data.disallowed_features = 0x0;
+ data->registry_data.od_state_in_dc_support = 0;
++ data->registry_data.thermal_support = 1;
+ data->registry_data.skip_baco_hardware = 0;
+
+ data->registry_data.log_avfs_param = 0;
+diff --git a/drivers/gpu/drm/arm/malidp_drv.c b/drivers/gpu/drm/arm/malidp_drv.c
+index 8d20faa198cf..0a788d76ed5f 100644
+--- a/drivers/gpu/drm/arm/malidp_drv.c
++++ b/drivers/gpu/drm/arm/malidp_drv.c
+@@ -278,7 +278,6 @@ static int malidp_init(struct drm_device *drm)
+
+ static void malidp_fini(struct drm_device *drm)
+ {
+- drm_atomic_helper_shutdown(drm);
+ drm_mode_config_cleanup(drm);
+ }
+
+@@ -646,6 +645,7 @@ vblank_fail:
+ malidp_de_irq_fini(drm);
+ drm->irq_enabled = false;
+ irq_init_fail:
++ drm_atomic_helper_shutdown(drm);
+ component_unbind_all(dev, drm);
+ bind_fail:
+ of_node_put(malidp->crtc.port);
+@@ -681,6 +681,7 @@ static void malidp_unbind(struct device *dev)
+ malidp_se_irq_fini(drm);
+ malidp_de_irq_fini(drm);
+ drm->irq_enabled = false;
++ drm_atomic_helper_shutdown(drm);
+ component_unbind_all(dev, drm);
+ of_node_put(malidp->crtc.port);
+ malidp->crtc.port = NULL;
+diff --git a/drivers/gpu/drm/arm/malidp_hw.c b/drivers/gpu/drm/arm/malidp_hw.c
+index d789b46dc817..069783e715f1 100644
+--- a/drivers/gpu/drm/arm/malidp_hw.c
++++ b/drivers/gpu/drm/arm/malidp_hw.c
+@@ -634,7 +634,8 @@ const struct malidp_hw malidp_device[MALIDP_MAX_DEVICES] = {
+ .vsync_irq = MALIDP500_DE_IRQ_VSYNC,
+ },
+ .se_irq_map = {
+- .irq_mask = MALIDP500_SE_IRQ_CONF_MODE,
++ .irq_mask = MALIDP500_SE_IRQ_CONF_MODE |
++ MALIDP500_SE_IRQ_GLOBAL,
+ .vsync_irq = 0,
+ },
+ .dc_irq_map = {
+diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c
+index 7a44897c50fe..29409a65d864 100644
+--- a/drivers/gpu/drm/arm/malidp_planes.c
++++ b/drivers/gpu/drm/arm/malidp_planes.c
+@@ -23,6 +23,7 @@
+
+ /* Layer specific register offsets */
+ #define MALIDP_LAYER_FORMAT 0x000
++#define LAYER_FORMAT_MASK 0x3f
+ #define MALIDP_LAYER_CONTROL 0x004
+ #define LAYER_ENABLE (1 << 0)
+ #define LAYER_FLOWCFG_MASK 7
+@@ -235,8 +236,8 @@ static int malidp_de_plane_check(struct drm_plane *plane,
+ if (state->rotation & MALIDP_ROTATED_MASK) {
+ int val;
+
+- val = mp->hwdev->hw->rotmem_required(mp->hwdev, state->crtc_h,
+- state->crtc_w,
++ val = mp->hwdev->hw->rotmem_required(mp->hwdev, state->crtc_w,
++ state->crtc_h,
+ fb->format->format);
+ if (val < 0)
+ return val;
+@@ -337,7 +338,9 @@ static void malidp_de_plane_update(struct drm_plane *plane,
+ dest_w = plane->state->crtc_w;
+ dest_h = plane->state->crtc_h;
+
+- malidp_hw_write(mp->hwdev, ms->format, mp->layer->base);
++ val = malidp_hw_read(mp->hwdev, mp->layer->base);
++ val = (val & ~LAYER_FORMAT_MASK) | ms->format;
++ malidp_hw_write(mp->hwdev, val, mp->layer->base);
+
+ for (i = 0; i < ms->n_planes; i++) {
+ /* calculate the offset for the layer's plane registers */
+diff --git a/drivers/gpu/drm/armada/armada_crtc.c b/drivers/gpu/drm/armada/armada_crtc.c
+index 03eeee11dd5b..42a40daff132 100644
+--- a/drivers/gpu/drm/armada/armada_crtc.c
++++ b/drivers/gpu/drm/armada/armada_crtc.c
+@@ -519,8 +519,9 @@ static irqreturn_t armada_drm_irq(int irq, void *arg)
+ u32 v, stat = readl_relaxed(dcrtc->base + LCD_SPU_IRQ_ISR);
+
+ /*
+- * This is rediculous - rather than writing bits to clear, we
+- * have to set the actual status register value. This is racy.
++ * Reading the ISR appears to clear bits provided CLEAN_SPU_IRQ_ISR
++ * is set. Writing has some other effect to acknowledge the IRQ -
++ * without this, we only get a single IRQ.
+ */
+ writel_relaxed(0, dcrtc->base + LCD_SPU_IRQ_ISR);
+
+@@ -1116,16 +1117,22 @@ armada_drm_crtc_set_property(struct drm_crtc *crtc,
+ static int armada_drm_crtc_enable_vblank(struct drm_crtc *crtc)
+ {
+ struct armada_crtc *dcrtc = drm_to_armada_crtc(crtc);
++ unsigned long flags;
+
++ spin_lock_irqsave(&dcrtc->irq_lock, flags);
+ armada_drm_crtc_enable_irq(dcrtc, VSYNC_IRQ_ENA);
++ spin_unlock_irqrestore(&dcrtc->irq_lock, flags);
+ return 0;
+ }
+
+ static void armada_drm_crtc_disable_vblank(struct drm_crtc *crtc)
+ {
+ struct armada_crtc *dcrtc = drm_to_armada_crtc(crtc);
++ unsigned long flags;
+
++ spin_lock_irqsave(&dcrtc->irq_lock, flags);
+ armada_drm_crtc_disable_irq(dcrtc, VSYNC_IRQ_ENA);
++ spin_unlock_irqrestore(&dcrtc->irq_lock, flags);
+ }
+
+ static const struct drm_crtc_funcs armada_crtc_funcs = {
+@@ -1415,6 +1422,7 @@ static int armada_drm_crtc_create(struct drm_device *drm, struct device *dev,
+ CFG_PDWN64x66, dcrtc->base + LCD_SPU_SRAM_PARA1);
+ writel_relaxed(0x2032ff81, dcrtc->base + LCD_SPU_DMA_CTRL1);
+ writel_relaxed(dcrtc->irq_ena, dcrtc->base + LCD_SPU_IRQ_ENA);
++ readl_relaxed(dcrtc->base + LCD_SPU_IRQ_ISR);
+ writel_relaxed(0, dcrtc->base + LCD_SPU_IRQ_ISR);
+
+ ret = devm_request_irq(dev, irq, armada_drm_irq, 0, "armada_drm_crtc",
+diff --git a/drivers/gpu/drm/armada/armada_hw.h b/drivers/gpu/drm/armada/armada_hw.h
+index 27319a8335e2..345dc4d0851e 100644
+--- a/drivers/gpu/drm/armada/armada_hw.h
++++ b/drivers/gpu/drm/armada/armada_hw.h
+@@ -160,6 +160,7 @@ enum {
+ CFG_ALPHAM_GRA = 0x1 << 16,
+ CFG_ALPHAM_CFG = 0x2 << 16,
+ CFG_ALPHA_MASK = 0xff << 8,
++#define CFG_ALPHA(x) ((x) << 8)
+ CFG_PIXCMD_MASK = 0xff,
+ };
+
+diff --git a/drivers/gpu/drm/armada/armada_overlay.c b/drivers/gpu/drm/armada/armada_overlay.c
+index c391955009d6..afa7ded3ae31 100644
+--- a/drivers/gpu/drm/armada/armada_overlay.c
++++ b/drivers/gpu/drm/armada/armada_overlay.c
+@@ -28,6 +28,7 @@ struct armada_ovl_plane_properties {
+ uint16_t contrast;
+ uint16_t saturation;
+ uint32_t colorkey_mode;
++ uint32_t colorkey_enable;
+ };
+
+ struct armada_ovl_plane {
+@@ -54,11 +55,13 @@ armada_ovl_update_attr(struct armada_ovl_plane_properties *prop,
+ writel_relaxed(0x00002000, dcrtc->base + LCD_SPU_CBSH_HUE);
+
+ spin_lock_irq(&dcrtc->irq_lock);
+- armada_updatel(prop->colorkey_mode | CFG_ALPHAM_GRA,
+- CFG_CKMODE_MASK | CFG_ALPHAM_MASK | CFG_ALPHA_MASK,
+- dcrtc->base + LCD_SPU_DMA_CTRL1);
+-
+- armada_updatel(ADV_GRACOLORKEY, 0, dcrtc->base + LCD_SPU_ADV_REG);
++ armada_updatel(prop->colorkey_mode,
++ CFG_CKMODE_MASK | CFG_ALPHAM_MASK | CFG_ALPHA_MASK,
++ dcrtc->base + LCD_SPU_DMA_CTRL1);
++ if (dcrtc->variant->has_spu_adv_reg)
++ armada_updatel(prop->colorkey_enable,
++ ADV_GRACOLORKEY | ADV_VIDCOLORKEY,
++ dcrtc->base + LCD_SPU_ADV_REG);
+ spin_unlock_irq(&dcrtc->irq_lock);
+ }
+
+@@ -321,8 +324,17 @@ static int armada_ovl_plane_set_property(struct drm_plane *plane,
+ dplane->prop.colorkey_vb |= K2B(val);
+ update_attr = true;
+ } else if (property == priv->colorkey_mode_prop) {
+- dplane->prop.colorkey_mode &= ~CFG_CKMODE_MASK;
+- dplane->prop.colorkey_mode |= CFG_CKMODE(val);
++ if (val == CKMODE_DISABLE) {
++ dplane->prop.colorkey_mode =
++ CFG_CKMODE(CKMODE_DISABLE) |
++ CFG_ALPHAM_CFG | CFG_ALPHA(255);
++ dplane->prop.colorkey_enable = 0;
++ } else {
++ dplane->prop.colorkey_mode =
++ CFG_CKMODE(val) |
++ CFG_ALPHAM_GRA | CFG_ALPHA(0);
++ dplane->prop.colorkey_enable = ADV_GRACOLORKEY;
++ }
+ update_attr = true;
+ } else if (property == priv->brightness_prop) {
+ dplane->prop.brightness = val - 256;
+@@ -453,7 +465,9 @@ int armada_overlay_plane_create(struct drm_device *dev, unsigned long crtcs)
+ dplane->prop.colorkey_yr = 0xfefefe00;
+ dplane->prop.colorkey_ug = 0x01010100;
+ dplane->prop.colorkey_vb = 0x01010100;
+- dplane->prop.colorkey_mode = CFG_CKMODE(CKMODE_RGB);
++ dplane->prop.colorkey_mode = CFG_CKMODE(CKMODE_RGB) |
++ CFG_ALPHAM_GRA | CFG_ALPHA(0);
++ dplane->prop.colorkey_enable = ADV_GRACOLORKEY;
+ dplane->prop.brightness = 0;
+ dplane->prop.contrast = 0x4000;
+ dplane->prop.saturation = 0x4000;
+diff --git a/drivers/gpu/drm/bridge/sil-sii8620.c b/drivers/gpu/drm/bridge/sil-sii8620.c
+index 7ab36042a822..f1b33fc79fbb 100644
+--- a/drivers/gpu/drm/bridge/sil-sii8620.c
++++ b/drivers/gpu/drm/bridge/sil-sii8620.c
+@@ -36,8 +36,11 @@
+
+ #define SII8620_BURST_BUF_LEN 288
+ #define VAL_RX_HDMI_CTRL2_DEFVAL VAL_RX_HDMI_CTRL2_IDLE_CNT(3)
+-#define MHL1_MAX_LCLK 225000
+-#define MHL3_MAX_LCLK 600000
++
++#define MHL1_MAX_PCLK 75000
++#define MHL1_MAX_PCLK_PP_MODE 150000
++#define MHL3_MAX_PCLK 200000
++#define MHL3_MAX_PCLK_PP_MODE 300000
+
+ enum sii8620_mode {
+ CM_DISCONNECTED,
+@@ -807,6 +810,7 @@ static void sii8620_burst_rx_all(struct sii8620 *ctx)
+ static void sii8620_fetch_edid(struct sii8620 *ctx)
+ {
+ u8 lm_ddc, ddc_cmd, int3, cbus;
++ unsigned long timeout;
+ int fetched, i;
+ int edid_len = EDID_LENGTH;
+ u8 *edid;
+@@ -856,23 +860,31 @@ static void sii8620_fetch_edid(struct sii8620 *ctx)
+ REG_DDC_CMD, ddc_cmd | VAL_DDC_CMD_ENH_DDC_READ_NO_ACK
+ );
+
+- do {
+- int3 = sii8620_readb(ctx, REG_INTR3);
++ int3 = 0;
++ timeout = jiffies + msecs_to_jiffies(200);
++ for (;;) {
+ cbus = sii8620_readb(ctx, REG_CBUS_STATUS);
+-
+- if (int3 & BIT_DDC_CMD_DONE)
+- break;
+-
+- if (!(cbus & BIT_CBUS_STATUS_CBUS_CONNECTED)) {
++ if (~cbus & BIT_CBUS_STATUS_CBUS_CONNECTED) {
++ kfree(edid);
++ edid = NULL;
++ goto end;
++ }
++ if (int3 & BIT_DDC_CMD_DONE) {
++ if (sii8620_readb(ctx, REG_DDC_DOUT_CNT)
++ >= FETCH_SIZE)
++ break;
++ } else {
++ int3 = sii8620_readb(ctx, REG_INTR3);
++ }
++ if (time_is_before_jiffies(timeout)) {
++ ctx->error = -ETIMEDOUT;
++ dev_err(ctx->dev, "timeout during EDID read\n");
+ kfree(edid);
+ edid = NULL;
+ goto end;
+ }
+- } while (1);
+-
+- sii8620_readb(ctx, REG_DDC_STATUS);
+- while (sii8620_readb(ctx, REG_DDC_DOUT_CNT) < FETCH_SIZE)
+ usleep_range(10, 20);
++ }
+
+ sii8620_read_buf(ctx, REG_DDC_DATA, edid + fetched, FETCH_SIZE);
+ if (fetched + FETCH_SIZE == EDID_LENGTH) {
+@@ -1055,23 +1067,23 @@ static void sii8620_set_format(struct sii8620 *ctx)
+ BIT_M3_P0CTRL_MHL3_P0_PIXEL_MODE_PACKED,
+ ctx->use_packed_pixel ? ~0 : 0);
+ } else {
+- if (ctx->use_packed_pixel)
++ if (ctx->use_packed_pixel) {
+ sii8620_write_seq_static(ctx,
+ REG_VID_MODE, BIT_VID_MODE_M1080P,
+ REG_MHL_TOP_CTL, BIT_MHL_TOP_CTL_MHL_PP_SEL | 1,
+ REG_MHLTX_CTL6, 0x60
+ );
+- else
++ } else {
+ sii8620_write_seq_static(ctx,
+ REG_VID_MODE, 0,
+ REG_MHL_TOP_CTL, 1,
+ REG_MHLTX_CTL6, 0xa0
+ );
++ }
+ }
+
+ if (ctx->use_packed_pixel)
+- out_fmt = VAL_TPI_FORMAT(YCBCR422, FULL) |
+- BIT_TPI_OUTPUT_CSCMODE709;
++ out_fmt = VAL_TPI_FORMAT(YCBCR422, FULL);
+ else
+ out_fmt = VAL_TPI_FORMAT(RGB, FULL);
+
+@@ -1216,7 +1228,7 @@ static void sii8620_start_video(struct sii8620 *ctx)
+ int clk = ctx->pixel_clock * (ctx->use_packed_pixel ? 2 : 3);
+ int i;
+
+- for (i = 0; i < ARRAY_SIZE(clk_spec); ++i)
++ for (i = 0; i < ARRAY_SIZE(clk_spec) - 1; ++i)
+ if (clk < clk_spec[i].max_clk)
+ break;
+
+@@ -2268,17 +2280,43 @@ static void sii8620_detach(struct drm_bridge *bridge)
+ rc_unregister_device(ctx->rc_dev);
+ }
+
++static int sii8620_is_packing_required(struct sii8620 *ctx,
++ const struct drm_display_mode *mode)
++{
++ int max_pclk, max_pclk_pp_mode;
++
++ if (sii8620_is_mhl3(ctx)) {
++ max_pclk = MHL3_MAX_PCLK;
++ max_pclk_pp_mode = MHL3_MAX_PCLK_PP_MODE;
++ } else {
++ max_pclk = MHL1_MAX_PCLK;
++ max_pclk_pp_mode = MHL1_MAX_PCLK_PP_MODE;
++ }
++
++ if (mode->clock < max_pclk)
++ return 0;
++ else if (mode->clock < max_pclk_pp_mode)
++ return 1;
++ else
++ return -1;
++}
++
+ static enum drm_mode_status sii8620_mode_valid(struct drm_bridge *bridge,
+ const struct drm_display_mode *mode)
+ {
+ struct sii8620 *ctx = bridge_to_sii8620(bridge);
++ int pack_required = sii8620_is_packing_required(ctx, mode);
+ bool can_pack = ctx->devcap[MHL_DCAP_VID_LINK_MODE] &
+ MHL_DCAP_VID_LINK_PPIXEL;
+- unsigned int max_pclk = sii8620_is_mhl3(ctx) ? MHL3_MAX_LCLK :
+- MHL1_MAX_LCLK;
+- max_pclk /= can_pack ? 2 : 3;
+
+- return (mode->clock > max_pclk) ? MODE_CLOCK_HIGH : MODE_OK;
++ switch (pack_required) {
++ case 0:
++ return MODE_OK;
++ case 1:
++ return (can_pack) ? MODE_OK : MODE_CLOCK_HIGH;
++ default:
++ return MODE_CLOCK_HIGH;
++ }
+ }
+
+ static bool sii8620_mode_fixup(struct drm_bridge *bridge,
+@@ -2286,43 +2324,16 @@ static bool sii8620_mode_fixup(struct drm_bridge *bridge,
+ struct drm_display_mode *adjusted_mode)
+ {
+ struct sii8620 *ctx = bridge_to_sii8620(bridge);
+- int max_lclk;
+- bool ret = true;
+
+ mutex_lock(&ctx->lock);
+
+- max_lclk = sii8620_is_mhl3(ctx) ? MHL3_MAX_LCLK : MHL1_MAX_LCLK;
+- if (max_lclk > 3 * adjusted_mode->clock) {
+- ctx->use_packed_pixel = 0;
+- goto end;
+- }
+- if ((ctx->devcap[MHL_DCAP_VID_LINK_MODE] & MHL_DCAP_VID_LINK_PPIXEL) &&
+- max_lclk > 2 * adjusted_mode->clock) {
+- ctx->use_packed_pixel = 1;
+- goto end;
+- }
+- ret = false;
+-end:
+- if (ret) {
+- u8 vic = drm_match_cea_mode(adjusted_mode);
+-
+- if (!vic) {
+- union hdmi_infoframe frm;
+- u8 mhl_vic[] = { 0, 95, 94, 93, 98 };
+-
+- /* FIXME: We need the connector here */
+- drm_hdmi_vendor_infoframe_from_display_mode(
+- &frm.vendor.hdmi, NULL, adjusted_mode);
+- vic = frm.vendor.hdmi.vic;
+- if (vic >= ARRAY_SIZE(mhl_vic))
+- vic = 0;
+- vic = mhl_vic[vic];
+- }
+- ctx->video_code = vic;
+- ctx->pixel_clock = adjusted_mode->clock;
+- }
++ ctx->use_packed_pixel = sii8620_is_packing_required(ctx, adjusted_mode);
++ ctx->video_code = drm_match_cea_mode(adjusted_mode);
++ ctx->pixel_clock = adjusted_mode->clock;
++
+ mutex_unlock(&ctx->lock);
+- return ret;
++
++ return true;
+ }
+
+ static const struct drm_bridge_funcs sii8620_bridge_funcs = {
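[Note on the sil-sii8620 hunks above: the rework replaces the single link-clock bound with a three-way classification shared by mode_valid and mode_fixup. A mode either fits unpacked, fits only in packed-pixel mode, or does not fit at all. A sketch using the MHL1 limits from the hunk (values in kHz):]

    #include <stdio.h>

    #define MAX_PCLK          75000   /* illustrative MHL1 limit */
    #define MAX_PCLK_PP_MODE 150000   /* limit with pixel packing */

    static int packing_required(int clock_khz)
    {
        if (clock_khz < MAX_PCLK)
            return 0;    /* fits unpacked */
        if (clock_khz < MAX_PCLK_PP_MODE)
            return 1;    /* fits only with pixel packing */
        return -1;       /* too fast for this link */
    }

    int main(void)
    {
        printf("%d %d %d\n", packing_required(74250),
               packing_required(148500), packing_required(297000));
        /* 0 1 -1 */
        return 0;
    }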
+diff --git a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
+index 1c330f2a7a5d..7acfd0ed79cb 100644
+--- a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
++++ b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
+@@ -260,7 +260,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
+ unsigned long val;
+
+ val = readl(ctx->addr + DECON_WINCONx(win));
+- val &= ~WINCONx_BPPMODE_MASK;
++ val &= WINCONx_ENWIN_F;
+
+ switch (fb->format->format) {
+ case DRM_FORMAT_XRGB1555:
+@@ -351,8 +351,8 @@ static void decon_update_plane(struct exynos_drm_crtc *crtc,
+ writel(val, ctx->addr + DECON_VIDOSDxB(win));
+ }
+
+- val = VIDOSD_Wx_ALPHA_R_F(0x0) | VIDOSD_Wx_ALPHA_G_F(0x0) |
+- VIDOSD_Wx_ALPHA_B_F(0x0);
++ val = VIDOSD_Wx_ALPHA_R_F(0xff) | VIDOSD_Wx_ALPHA_G_F(0xff) |
++ VIDOSD_Wx_ALPHA_B_F(0xff);
+ writel(val, ctx->addr + DECON_VIDOSDxC(win));
+
+ val = VIDOSD_Wx_ALPHA_R_F(0x0) | VIDOSD_Wx_ALPHA_G_F(0x0) |
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_gsc.c b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+index 0506b2b17ac1..48f913d8208c 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_gsc.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+@@ -532,21 +532,25 @@ static int gsc_src_set_fmt(struct device *dev, u32 fmt)
+ GSC_IN_CHROMA_ORDER_CRCB);
+ break;
+ case DRM_FORMAT_NV21:
++ cfg |= (GSC_IN_CHROMA_ORDER_CRCB | GSC_IN_YUV420_2P);
++ break;
+ case DRM_FORMAT_NV61:
+- cfg |= (GSC_IN_CHROMA_ORDER_CRCB |
+- GSC_IN_YUV420_2P);
++ cfg |= (GSC_IN_CHROMA_ORDER_CRCB | GSC_IN_YUV422_2P);
+ break;
+ case DRM_FORMAT_YUV422:
+ cfg |= GSC_IN_YUV422_3P;
+ break;
+ case DRM_FORMAT_YUV420:
++ cfg |= (GSC_IN_CHROMA_ORDER_CBCR | GSC_IN_YUV420_3P);
++ break;
+ case DRM_FORMAT_YVU420:
+- cfg |= GSC_IN_YUV420_3P;
++ cfg |= (GSC_IN_CHROMA_ORDER_CRCB | GSC_IN_YUV420_3P);
+ break;
+ case DRM_FORMAT_NV12:
++ cfg |= (GSC_IN_CHROMA_ORDER_CBCR | GSC_IN_YUV420_2P);
++ break;
+ case DRM_FORMAT_NV16:
+- cfg |= (GSC_IN_CHROMA_ORDER_CBCR |
+- GSC_IN_YUV420_2P);
++ cfg |= (GSC_IN_CHROMA_ORDER_CBCR | GSC_IN_YUV422_2P);
+ break;
+ default:
+ dev_err(ippdrv->dev, "invalid target yuv order 0x%x.\n", fmt);
+@@ -806,18 +810,25 @@ static int gsc_dst_set_fmt(struct device *dev, u32 fmt)
+ GSC_OUT_CHROMA_ORDER_CRCB);
+ break;
+ case DRM_FORMAT_NV21:
+- case DRM_FORMAT_NV61:
+ cfg |= (GSC_OUT_CHROMA_ORDER_CRCB | GSC_OUT_YUV420_2P);
+ break;
++ case DRM_FORMAT_NV61:
++ cfg |= (GSC_OUT_CHROMA_ORDER_CRCB | GSC_OUT_YUV422_2P);
++ break;
+ case DRM_FORMAT_YUV422:
++ cfg |= GSC_OUT_YUV422_3P;
++ break;
+ case DRM_FORMAT_YUV420:
++ cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | GSC_OUT_YUV420_3P);
++ break;
+ case DRM_FORMAT_YVU420:
+- cfg |= GSC_OUT_YUV420_3P;
++ cfg |= (GSC_OUT_CHROMA_ORDER_CRCB | GSC_OUT_YUV420_3P);
+ break;
+ case DRM_FORMAT_NV12:
++ cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | GSC_OUT_YUV420_2P);
++ break;
+ case DRM_FORMAT_NV16:
+- cfg |= (GSC_OUT_CHROMA_ORDER_CBCR |
+- GSC_OUT_YUV420_2P);
++ cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | GSC_OUT_YUV422_2P);
+ break;
+ default:
+ dev_err(ippdrv->dev, "invalid target yuv order 0x%x.\n", fmt);
+diff --git a/drivers/gpu/drm/exynos/regs-gsc.h b/drivers/gpu/drm/exynos/regs-gsc.h
+index 4704a993cbb7..16b39734115c 100644
+--- a/drivers/gpu/drm/exynos/regs-gsc.h
++++ b/drivers/gpu/drm/exynos/regs-gsc.h
+@@ -138,6 +138,7 @@
+ #define GSC_OUT_YUV420_3P (3 << 4)
+ #define GSC_OUT_YUV422_1P (4 << 4)
+ #define GSC_OUT_YUV422_2P (5 << 4)
++#define GSC_OUT_YUV422_3P (6 << 4)
+ #define GSC_OUT_YUV444 (7 << 4)
+ #define GSC_OUT_TILE_TYPE_MASK (1 << 2)
+ #define GSC_OUT_TILE_C_16x8 (0 << 2)
+diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
+index 1466d8769ec9..857a647fabf2 100644
+--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
++++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
+@@ -43,6 +43,8 @@
+ #include <linux/mdev.h>
+ #include <linux/debugfs.h>
+
++#include <linux/nospec.h>
++
+ #include "i915_drv.h"
+ #include "gvt.h"
+
+@@ -1064,7 +1066,8 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ } else if (cmd == VFIO_DEVICE_GET_REGION_INFO) {
+ struct vfio_region_info info;
+ struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+- int i, ret;
++ unsigned int i;
++ int ret;
+ struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
+ size_t size;
+ int nr_areas = 1;
+@@ -1149,6 +1152,10 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ if (info.index >= VFIO_PCI_NUM_REGIONS +
+ vgpu->vdev.num_regions)
+ return -EINVAL;
++ info.index =
++ array_index_nospec(info.index,
++ VFIO_PCI_NUM_REGIONS +
++ vgpu->vdev.num_regions);
+
+ i = info.index - VFIO_PCI_NUM_REGIONS;
+
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index 707e02c80f18..95dfd169ef57 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -617,7 +617,7 @@ nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli,
+ struct nouveau_bo *nvbo;
+ uint32_t data;
+
+- if (unlikely(r->bo_index > req->nr_buffers)) {
++ if (unlikely(r->bo_index >= req->nr_buffers)) {
+ NV_PRINTK(err, cli, "reloc bo index invalid\n");
+ ret = -EINVAL;
+ break;
+@@ -627,7 +627,7 @@ nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli,
+ if (b->presumed.valid)
+ continue;
+
+- if (unlikely(r->reloc_bo_index > req->nr_buffers)) {
++ if (unlikely(r->reloc_bo_index >= req->nr_buffers)) {
+ NV_PRINTK(err, cli, "reloc container bo index invalid\n");
+ ret = -EINVAL;
+ break;
+diff --git a/drivers/gpu/drm/sun4i/Makefile b/drivers/gpu/drm/sun4i/Makefile
+index 330843ce4280..a27ade6cf2bf 100644
+--- a/drivers/gpu/drm/sun4i/Makefile
++++ b/drivers/gpu/drm/sun4i/Makefile
+@@ -29,7 +29,10 @@ obj-$(CONFIG_DRM_SUN4I) += sun4i-tcon.o
+ obj-$(CONFIG_DRM_SUN4I) += sun4i_tv.o
+ obj-$(CONFIG_DRM_SUN4I) += sun6i_drc.o
+
+-obj-$(CONFIG_DRM_SUN4I_BACKEND) += sun4i-backend.o sun4i-frontend.o
++obj-$(CONFIG_DRM_SUN4I_BACKEND) += sun4i-backend.o
++ifdef CONFIG_DRM_SUN4I_BACKEND
++obj-$(CONFIG_DRM_SUN4I) += sun4i-frontend.o
++endif
+ obj-$(CONFIG_DRM_SUN4I_HDMI) += sun4i-drm-hdmi.o
+ obj-$(CONFIG_DRM_SUN8I_DW_HDMI) += sun8i-drm-hdmi.o
+ obj-$(CONFIG_DRM_SUN8I_MIXER) += sun8i-mixer.o
+diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
+index 7afe2f635f74..500b7c5b6672 100644
+--- a/drivers/gpu/drm/tegra/drm.c
++++ b/drivers/gpu/drm/tegra/drm.c
+@@ -436,7 +436,7 @@ int tegra_drm_submit(struct tegra_drm_context *context,
+ * unaligned offset is malformed and cause commands stream
+ * corruption on the buffer address relocation.
+ */
+- if (offset & 3 || offset >= obj->gem.size) {
++ if (offset & 3 || offset > obj->gem.size) {
+ err = -EINVAL;
+ goto fail;
+ }
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index f1d5f76e9c33..d88073e7d22d 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -218,6 +218,9 @@ static int host1x_probe(struct platform_device *pdev)
+ return err;
+ }
+
++ if (IS_ENABLED(CONFIG_TEGRA_HOST1X_FIREWALL))
++ goto skip_iommu;
++
+ host->group = iommu_group_get(&pdev->dev);
+ if (host->group) {
+ struct iommu_domain_geometry *geometry;
+diff --git a/drivers/gpu/host1x/job.c b/drivers/gpu/host1x/job.c
+index db509ab8874e..acd99783bbca 100644
+--- a/drivers/gpu/host1x/job.c
++++ b/drivers/gpu/host1x/job.c
+@@ -686,7 +686,8 @@ void host1x_job_unpin(struct host1x_job *job)
+ for (i = 0; i < job->num_unpins; i++) {
+ struct host1x_job_unpin_data *unpin = &job->unpins[i];
+
+- if (!IS_ENABLED(CONFIG_TEGRA_HOST1X_FIREWALL) && host->domain) {
++ if (!IS_ENABLED(CONFIG_TEGRA_HOST1X_FIREWALL) &&
++ unpin->size && host->domain) {
+ iommu_unmap(host->domain, job->addr_phys[i],
+ unpin->size);
+ free_iova(&host->iova,
+diff --git a/drivers/hid/hid-google-hammer.c b/drivers/hid/hid-google-hammer.c
+index 7b8e17b03cb8..6bf4da7ad63a 100644
+--- a/drivers/hid/hid-google-hammer.c
++++ b/drivers/hid/hid-google-hammer.c
+@@ -124,6 +124,8 @@ static const struct hid_device_id hammer_devices[] = {
+ USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_STAFF) },
+ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_WAND) },
++ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++ USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_WHISKERS) },
+ { }
+ };
+ MODULE_DEVICE_TABLE(hid, hammer_devices);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 46f5ecd11bf7..5a8b3362cf65 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -452,6 +452,7 @@
+ #define USB_DEVICE_ID_GOOGLE_TOUCH_ROSE 0x5028
+ #define USB_DEVICE_ID_GOOGLE_STAFF 0x502b
+ #define USB_DEVICE_ID_GOOGLE_WAND 0x502d
++#define USB_DEVICE_ID_GOOGLE_WHISKERS 0x5030
+
+ #define USB_VENDOR_ID_GOTOP 0x08f2
+ #define USB_DEVICE_ID_SUPER_Q2 0x007f
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 5f947ec20dcb..815a7b0b88cd 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -3361,8 +3361,14 @@ void wacom_setup_device_quirks(struct wacom *wacom)
+ if (features->type >= INTUOSHT && features->type <= BAMBOO_PT)
+ features->device_type |= WACOM_DEVICETYPE_PAD;
+
+- features->x_max = 4096;
+- features->y_max = 4096;
++ if (features->type == INTUOSHT2) {
++ features->x_max = features->x_max / 10;
++ features->y_max = features->y_max / 10;
++ }
++ else {
++ features->x_max = 4096;
++ features->y_max = 4096;
++ }
+ }
+ else if (features->pktlen == WACOM_PKGLEN_BBTOUCH) {
+ features->device_type |= WACOM_DEVICETYPE_PAD;
+diff --git a/drivers/hwmon/dell-smm-hwmon.c b/drivers/hwmon/dell-smm-hwmon.c
+index bf3bb7e1adab..9d3ef879dc51 100644
+--- a/drivers/hwmon/dell-smm-hwmon.c
++++ b/drivers/hwmon/dell-smm-hwmon.c
+@@ -1074,6 +1074,13 @@ static struct dmi_system_id i8k_blacklist_fan_support_dmi_table[] __initdata = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Vostro 3360"),
+ },
+ },
++ {
++ .ident = "Dell XPS13 9333",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS13 9333"),
++ },
++ },
+ { }
+ };
+
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index aebce560bfaf..b14eb73bc3c9 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -4175,7 +4175,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ * The temperature is already monitored if the respective bit in <mask>
+ * is set.
+ */
+- for (i = 0; i < 32; i++) {
++ for (i = 0; i < 31; i++) {
+ if (!(data->temp_mask & BIT(i + 1)))
+ continue;
+ if (!reg_temp_alternate[i])
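[Note on the nct6775 hunk above: the tighter loop bound keeps BIT(i + 1) within a 32-bit mask. With i reaching 31 the macro would shift by 32, which is past the top bit of the 32-bit temp_mask and undefined behaviour where long is 32 bits. Userspace illustration:]

    #include <stdio.h>

    #define BIT(n) (1UL << (n))

    int main(void)
    {
        unsigned long mask = 0;
        int i;

        /* i < 31 caps the shift at 31; with i < 32 the last pass
         * would compute 1UL << 32 (undefined on 32-bit long) */
        for (i = 0; i < 31; i++)
            mask |= BIT(i + 1);

        printf("%#lx\n", mask);   /* 0xfffffffe: bits 1..31 */
        return 0;
    }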
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 6fca5e64cffb..f83405d3e8c2 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -677,9 +677,6 @@ static int i2c_imx_dma_read(struct imx_i2c_struct *i2c_imx,
+ struct imx_i2c_dma *dma = i2c_imx->dma;
+ struct device *dev = &i2c_imx->adapter.dev;
+
+- temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR);
+- temp |= I2CR_DMAEN;
+- imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR);
+
+ dma->chan_using = dma->chan_rx;
+ dma->dma_transfer_dir = DMA_DEV_TO_MEM;
+@@ -792,6 +789,7 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs, bo
+ int i, result;
+ unsigned int temp;
+ int block_data = msgs->flags & I2C_M_RECV_LEN;
++ int use_dma = i2c_imx->dma && msgs->len >= DMA_THRESHOLD && !block_data;
+
+ dev_dbg(&i2c_imx->adapter.dev,
+ "<%s> write slave address: addr=0x%x\n",
+@@ -818,12 +816,14 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs, bo
+ */
+ if ((msgs->len - 1) || block_data)
+ temp &= ~I2CR_TXAK;
++ if (use_dma)
++ temp |= I2CR_DMAEN;
+ imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR);
+ imx_i2c_read_reg(i2c_imx, IMX_I2C_I2DR); /* dummy read */
+
+ dev_dbg(&i2c_imx->adapter.dev, "<%s> read data\n", __func__);
+
+- if (i2c_imx->dma && msgs->len >= DMA_THRESHOLD && !block_data)
++ if (use_dma)
+ return i2c_imx_dma_read(i2c_imx, msgs, is_lastmsg);
+
+ /* read data */
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index 7c3b4740b94b..b8f303dea305 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -482,11 +482,16 @@ static int acpi_gsb_i2c_write_bytes(struct i2c_client *client,
+ msgs[0].buf = buffer;
+
+ ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs));
+- if (ret < 0)
+- dev_err(&client->adapter->dev, "i2c write failed\n");
+
+ kfree(buffer);
+- return ret;
++
++ if (ret < 0) {
++ dev_err(&client->adapter->dev, "i2c write failed: %d\n", ret);
++ return ret;
++ }
++
++ /* 1 transfer must have completed successfully */
++ return (ret == 1) ? 0 : -EIO;
+ }
+
+ static acpi_status
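[Note on the i2c-core-acpi hunk above: i2c_transfer() returns the number of messages completed, or a negative errno, so a one-message write succeeded only when it returns exactly 1; any other non-negative count is a partial transfer, which the hunk maps to -EIO. Sketch of that mapping:]

    #include <errno.h>
    #include <stdio.h>

    static int map_xfer_ret(int ret, int nmsgs)
    {
        if (ret < 0)
            return ret;                       /* propagate the errno */
        return (ret == nmsgs) ? 0 : -EIO;     /* partial -> -EIO */
    }

    int main(void)
    {
        printf("%d %d %d\n", map_xfer_ret(1, 1), map_xfer_ret(0, 1),
               map_xfer_ret(-ETIMEDOUT, 1));  /* 0 -5 -110 */
        return 0;
    }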
+diff --git a/drivers/iio/pressure/bmp280-core.c b/drivers/iio/pressure/bmp280-core.c
+index 5ec3e41b65f2..fe87d27779d9 100644
+--- a/drivers/iio/pressure/bmp280-core.c
++++ b/drivers/iio/pressure/bmp280-core.c
+@@ -415,10 +415,9 @@ static int bmp280_read_humid(struct bmp280_data *data, int *val, int *val2)
+ }
+ comp_humidity = bmp280_compensate_humidity(data, adc_humidity);
+
+- *val = comp_humidity;
+- *val2 = 1024;
++ *val = comp_humidity * 1000 / 1024;
+
+- return IIO_VAL_FRACTIONAL;
++ return IIO_VAL_INT;
+ }
+
+ static int bmp280_read_raw(struct iio_dev *indio_dev,
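[Note on the bmp280 hunk above: the driver now converts the compensated value itself instead of handing the 1/1024 fraction to the IIO core. The sensor reports relative humidity in 1024ths of a percent, and the sysfs ABI expects an integer in milli-percent, hence the * 1000 / 1024. Sketch of the fixed-point conversion:]

    #include <stdio.h>

    /* Q22.10 percent -> integer milli-percent, float-free */
    static int q10_percent_to_millipercent(long comp_humidity)
    {
        return (int)(comp_humidity * 1000 / 1024);
    }

    int main(void)
    {
        /* 47.5 %RH == 48640 in 1024ths -> 47500 milli-percent */
        printf("%d\n", q10_percent_to_millipercent(48640));
        return 0;
    }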
+diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
+index ed1f253faf97..c7c85c22e4e3 100644
+--- a/drivers/infiniband/hw/mlx4/mr.c
++++ b/drivers/infiniband/hw/mlx4/mr.c
+@@ -486,8 +486,11 @@ int mlx4_ib_rereg_user_mr(struct ib_mr *mr, int flags,
+ }
+
+ if (flags & IB_MR_REREG_ACCESS) {
+- if (ib_access_writable(mr_access_flags) && !mmr->umem->writable)
+- return -EPERM;
++ if (ib_access_writable(mr_access_flags) &&
++ !mmr->umem->writable) {
++ err = -EPERM;
++ goto release_mpt_entry;
++ }
+
+ err = mlx4_mr_hw_change_access(dev->dev, *pmpt_entry,
+ convert_access(mr_access_flags));
+diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
+index 3c7522d025f2..93d67d97c279 100644
+--- a/drivers/infiniband/hw/mlx5/srq.c
++++ b/drivers/infiniband/hw/mlx5/srq.c
+@@ -266,18 +266,24 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
+
+ desc_size = sizeof(struct mlx5_wqe_srq_next_seg) +
+ srq->msrq.max_gs * sizeof(struct mlx5_wqe_data_seg);
+- if (desc_size == 0 || srq->msrq.max_gs > desc_size)
+- return ERR_PTR(-EINVAL);
++ if (desc_size == 0 || srq->msrq.max_gs > desc_size) {
++ err = -EINVAL;
++ goto err_srq;
++ }
+ desc_size = roundup_pow_of_two(desc_size);
+ desc_size = max_t(size_t, 32, desc_size);
+- if (desc_size < sizeof(struct mlx5_wqe_srq_next_seg))
+- return ERR_PTR(-EINVAL);
++ if (desc_size < sizeof(struct mlx5_wqe_srq_next_seg)) {
++ err = -EINVAL;
++ goto err_srq;
++ }
+ srq->msrq.max_avail_gather = (desc_size - sizeof(struct mlx5_wqe_srq_next_seg)) /
+ sizeof(struct mlx5_wqe_data_seg);
+ srq->msrq.wqe_shift = ilog2(desc_size);
+ buf_size = srq->msrq.max * desc_size;
+- if (buf_size < desc_size)
+- return ERR_PTR(-EINVAL);
++ if (buf_size < desc_size) {
++ err = -EINVAL;
++ goto err_srq;
++ }
+ in.type = init_attr->srq_type;
+
+ if (pd->uobject)
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index 3f9afc02d166..f86223aca7b8 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -1957,6 +1957,9 @@ int qedr_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ }
+
+ if (attr_mask & (IB_QP_AV | IB_QP_PATH_MTU)) {
++ if (rdma_protocol_iwarp(&dev->ibdev, 1))
++ return -EINVAL;
++
+ if (attr_mask & IB_QP_PATH_MTU) {
+ if (attr->path_mtu < IB_MTU_256 ||
+ attr->path_mtu > IB_MTU_4096) {
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 785199990457..d048ac13e65b 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -645,6 +645,9 @@ next_wqe:
+ } else {
+ goto exit;
+ }
++ if ((wqe->wr.send_flags & IB_SEND_SIGNALED) ||
++ qp->sq_sig_type == IB_SIGNAL_ALL_WR)
++ rxe_run_task(&qp->comp.task, 1);
+ qp->req.wqe_index = next_index(qp->sq.queue,
+ qp->req.wqe_index);
+ goto next_wqe;
+@@ -709,6 +712,7 @@ next_wqe:
+
+ if (fill_packet(qp, wqe, &pkt, skb, payload)) {
+ pr_debug("qp#%d Error during fill packet\n", qp_num(qp));
++ kfree_skb(skb);
+ goto err;
+ }
+
+@@ -740,7 +744,6 @@ next_wqe:
+ goto next_wqe;
+
+ err:
+- kfree_skb(skb);
+ wqe->status = IB_WC_LOC_PROT_ERR;
+ wqe->state = wqe_state_error;
+ __rxe_do_task(&qp->comp.task);
+diff --git a/drivers/input/rmi4/rmi_2d_sensor.c b/drivers/input/rmi4/rmi_2d_sensor.c
+index 8bb866c7b985..8eeffa066022 100644
+--- a/drivers/input/rmi4/rmi_2d_sensor.c
++++ b/drivers/input/rmi4/rmi_2d_sensor.c
+@@ -32,15 +32,15 @@ void rmi_2d_sensor_abs_process(struct rmi_2d_sensor *sensor,
+ if (obj->type == RMI_2D_OBJECT_NONE)
+ return;
+
+- if (axis_align->swap_axes)
+- swap(obj->x, obj->y);
+-
+ if (axis_align->flip_x)
+ obj->x = sensor->max_x - obj->x;
+
+ if (axis_align->flip_y)
+ obj->y = sensor->max_y - obj->y;
+
++ if (axis_align->swap_axes)
++ swap(obj->x, obj->y);
++
+ /*
+ * Here checking if X offset or y offset are specified is
+ * redundant. We just add the offsets or clip the values.
+@@ -120,15 +120,15 @@ void rmi_2d_sensor_rel_report(struct rmi_2d_sensor *sensor, int x, int y)
+ x = min(RMI_2D_REL_POS_MAX, max(RMI_2D_REL_POS_MIN, (int)x));
+ y = min(RMI_2D_REL_POS_MAX, max(RMI_2D_REL_POS_MIN, (int)y));
+
+- if (axis_align->swap_axes)
+- swap(x, y);
+-
+ if (axis_align->flip_x)
+ x = min(RMI_2D_REL_POS_MAX, -x);
+
+ if (axis_align->flip_y)
+ y = min(RMI_2D_REL_POS_MAX, -y);
+
++ if (axis_align->swap_axes)
++ swap(x, y);
++
+ if (x || y) {
+ input_report_rel(sensor->input, REL_X, x);
+ input_report_rel(sensor->input, REL_Y, y);
+@@ -141,17 +141,10 @@ static void rmi_2d_sensor_set_input_params(struct rmi_2d_sensor *sensor)
+ struct input_dev *input = sensor->input;
+ int res_x;
+ int res_y;
++ int max_x, max_y;
+ int input_flags = 0;
+
+ if (sensor->report_abs) {
+- if (sensor->axis_align.swap_axes) {
+- swap(sensor->max_x, sensor->max_y);
+- swap(sensor->axis_align.clip_x_low,
+- sensor->axis_align.clip_y_low);
+- swap(sensor->axis_align.clip_x_high,
+- sensor->axis_align.clip_y_high);
+- }
+-
+ sensor->min_x = sensor->axis_align.clip_x_low;
+ if (sensor->axis_align.clip_x_high)
+ sensor->max_x = min(sensor->max_x,
+@@ -163,14 +156,19 @@ static void rmi_2d_sensor_set_input_params(struct rmi_2d_sensor *sensor)
+ sensor->axis_align.clip_y_high);
+
+ set_bit(EV_ABS, input->evbit);
+- input_set_abs_params(input, ABS_MT_POSITION_X, 0, sensor->max_x,
+- 0, 0);
+- input_set_abs_params(input, ABS_MT_POSITION_Y, 0, sensor->max_y,
+- 0, 0);
++
++ max_x = sensor->max_x;
++ max_y = sensor->max_y;
++ if (sensor->axis_align.swap_axes)
++ swap(max_x, max_y);
++ input_set_abs_params(input, ABS_MT_POSITION_X, 0, max_x, 0, 0);
++ input_set_abs_params(input, ABS_MT_POSITION_Y, 0, max_y, 0, 0);
+
+ if (sensor->x_mm && sensor->y_mm) {
+ res_x = (sensor->max_x - sensor->min_x) / sensor->x_mm;
+ res_y = (sensor->max_y - sensor->min_y) / sensor->y_mm;
++ if (sensor->axis_align.swap_axes)
++ swap(res_x, res_y);
+
+ input_abs_set_res(input, ABS_X, res_x);
+ input_abs_set_res(input, ABS_Y, res_y);
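[Note on the rmi_2d_sensor hunks above: the flips are defined in the sensor's native frame, so they must be applied before the axis swap; swapping first would flip against the wrong axis maximum. A sketch of the corrected transform order:]

    #include <stdio.h>

    struct pt { int x, y; };

    static struct pt transform(struct pt p, int max_x, int max_y,
                               int flip_x, int flip_y, int swap_axes)
    {
        if (flip_x)
            p.x = max_x - p.x;        /* flip in the native frame */
        if (flip_y)
            p.y = max_y - p.y;
        if (swap_axes) {              /* swap last */
            int t = p.x;
            p.x = p.y;
            p.y = t;
        }
        return p;
    }

    int main(void)
    {
        struct pt p = transform((struct pt){10, 20}, 100, 200, 1, 0, 1);
        printf("%d %d\n", p.x, p.y);  /* 20 90 */
        return 0;
    }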
+diff --git a/drivers/irqchip/irq-gic-v2m.c b/drivers/irqchip/irq-gic-v2m.c
+index 1ff38aff9f29..29dd8a9939b1 100644
+--- a/drivers/irqchip/irq-gic-v2m.c
++++ b/drivers/irqchip/irq-gic-v2m.c
+@@ -199,7 +199,7 @@ static int gicv2m_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+
+ fail:
+ irq_domain_free_irqs_parent(domain, virq, nr_irqs);
+- gicv2m_unalloc_msi(v2m, hwirq, get_count_order(nr_irqs));
++ gicv2m_unalloc_msi(v2m, hwirq, nr_irqs);
+ return err;
+ }
+
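[Note on the gic-v2m hunk above: this appears to be a unit mix-up. The free routine takes a count of interrupts, but the error path handed it get_count_order(nr_irqs), the log2 of the count, leaving part of the allocation behind. Illustration of the mismatch, with a userspace stand-in for the kernel helper:]

    #include <stdio.h>

    /* ceil(log2(count)), matching the kernel's get_count_order() */
    static int get_count_order(unsigned int count)
    {
        int order = 0;

        while ((1u << order) < count)
            order++;
        return order;                 /* e.g. 16 -> 4 */
    }

    int main(void)
    {
        unsigned int nr_irqs = 16;

        printf("freed %d instead of %u\n",
               get_count_order(nr_irqs), nr_irqs); /* 4 instead of 16 */
        return 0;
    }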
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index ab16968fced8..bb1580077054 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -3405,6 +3405,16 @@ static int redist_disable_lpis(void)
+ u64 timeout = USEC_PER_SEC;
+ u64 val;
+
++ /*
++ * If coming via a CPU hotplug event, we don't need to disable
++ * LPIs before trying to re-enable them. They are already
++ * configured and all is well in the world. Detect this case
++ * by checking the allocation of the pending table for the
++ * current CPU.
++ */
++ if (gic_data_rdist()->pend_page)
++ return 0;
++
+ if (!gic_rdists_supports_plpis()) {
+ pr_info("CPU%d: LPIs not supported\n", smp_processor_id());
+ return -ENXIO;
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 3c60774c8430..61dffc7bf6bf 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -3892,6 +3892,13 @@ static int raid10_run(struct mddev *mddev)
+ disk->rdev->saved_raid_disk < 0)
+ conf->fullsync = 1;
+ }
++
++ if (disk->replacement &&
++ !test_bit(In_sync, &disk->replacement->flags) &&
++ disk->replacement->saved_raid_disk < 0) {
++ conf->fullsync = 1;
++ }
++
+ disk->recovery_disabled = mddev->recovery_disabled - 1;
+ }
+
+diff --git a/drivers/mtd/devices/mtd_dataflash.c b/drivers/mtd/devices/mtd_dataflash.c
+index aaaeaae01e1d..eeff2285fb8b 100644
+--- a/drivers/mtd/devices/mtd_dataflash.c
++++ b/drivers/mtd/devices/mtd_dataflash.c
+@@ -733,8 +733,8 @@ static struct flash_info dataflash_data[] = {
+ { "AT45DB642x", 0x1f2800, 8192, 1056, 11, SUP_POW2PS},
+ { "at45db642d", 0x1f2800, 8192, 1024, 10, SUP_POW2PS | IS_POW2PS},
+
+- { "AT45DB641E", 0x1f28000100, 32768, 264, 9, SUP_EXTID | SUP_POW2PS},
+- { "at45db641e", 0x1f28000100, 32768, 256, 8, SUP_EXTID | SUP_POW2PS | IS_POW2PS},
++ { "AT45DB641E", 0x1f28000100ULL, 32768, 264, 9, SUP_EXTID | SUP_POW2PS},
++ { "at45db641e", 0x1f28000100ULL, 32768, 256, 8, SUP_EXTID | SUP_POW2PS | IS_POW2PS},
+ };
+
+ static struct flash_info *jedec_lookup(struct spi_device *spi,
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+index d847e1b9c37b..be1506169076 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+@@ -1533,6 +1533,7 @@ struct bnx2x {
+ struct link_vars link_vars;
+ u32 link_cnt;
+ struct bnx2x_link_report_data last_reported_link;
++ bool force_link_down;
+
+ struct mdio_if_info mdio;
+
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 95871576ab92..e7b305efa3fe 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -1261,6 +1261,11 @@ void __bnx2x_link_report(struct bnx2x *bp)
+ {
+ struct bnx2x_link_report_data cur_data;
+
++ if (bp->force_link_down) {
++ bp->link_vars.link_up = 0;
++ return;
++ }
++
+ /* reread mf_cfg */
+ if (IS_PF(bp) && !CHIP_IS_E1(bp))
+ bnx2x_read_mf_cfg(bp);
+@@ -2817,6 +2822,7 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
+ bp->pending_max = 0;
+ }
+
++ bp->force_link_down = false;
+ if (bp->port.pmf) {
+ rc = bnx2x_initial_phy_init(bp, load_mode);
+ if (rc)
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index c766ae23bc74..89484efbaba4 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -10279,6 +10279,12 @@ static void bnx2x_sp_rtnl_task(struct work_struct *work)
+ bp->sp_rtnl_state = 0;
+ smp_mb();
+
++ /* Immediately indicate link as down */
++ bp->link_vars.link_up = 0;
++ bp->force_link_down = true;
++ netif_carrier_off(bp->dev);
++ BNX2X_ERR("Indicating link is down due to Tx-timeout\n");
++
+ bnx2x_nic_unload(bp, UNLOAD_NORMAL, true);
+ /* When ret value shows failure of allocation failure,
+ * the nic is rebooted again. If open still fails, a error
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 401e58939795..cb026e500127 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -5915,7 +5915,7 @@ unsigned int bnxt_get_max_func_irqs(struct bnxt *bp)
+ return min_t(unsigned int, hw_resc->max_irqs, hw_resc->max_cp_rings);
+ }
+
+-void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max_irqs)
++static void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max_irqs)
+ {
+ bp->hw_resc.max_irqs = max_irqs;
+ }
+@@ -6875,7 +6875,7 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ rc = bnxt_request_irq(bp);
+ if (rc) {
+ netdev_err(bp->dev, "bnxt_request_irq err: %x\n", rc);
+- goto open_err;
++ goto open_err_irq;
+ }
+ }
+
+@@ -6913,6 +6913,8 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+
+ open_err:
+ bnxt_disable_napi(bp);
++
++open_err_irq:
+ bnxt_del_napi(bp);
+
+ open_err_free_mem:
+@@ -8467,11 +8469,11 @@ int bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx, bool shared)
+ int rx, tx, cp;
+
+ _bnxt_get_max_rings(bp, &rx, &tx, &cp);
++ *max_rx = rx;
++ *max_tx = tx;
+ if (!rx || !tx || !cp)
+ return -ENOMEM;
+
+- *max_rx = rx;
+- *max_tx = tx;
+ return bnxt_trim_rings(bp, max_rx, max_tx, cp, shared);
+ }
+
+@@ -8485,8 +8487,11 @@ static int bnxt_get_dflt_rings(struct bnxt *bp, int *max_rx, int *max_tx,
+ /* Not enough rings, try disabling agg rings. */
+ bp->flags &= ~BNXT_FLAG_AGG_RINGS;
+ rc = bnxt_get_max_rings(bp, max_rx, max_tx, shared);
+- if (rc)
++ if (rc) {
++ /* set BNXT_FLAG_AGG_RINGS back for consistency */
++ bp->flags |= BNXT_FLAG_AGG_RINGS;
+ return rc;
++ }
+ bp->flags |= BNXT_FLAG_NO_AGG_RINGS;
+ bp->dev->hw_features &= ~(NETIF_F_LRO | NETIF_F_GRO_HW);
+ bp->dev->features &= ~(NETIF_F_LRO | NETIF_F_GRO_HW);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 3d55d3b56865..79bce5dcf7fe 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1455,7 +1455,6 @@ void bnxt_set_max_func_stat_ctxs(struct bnxt *bp, unsigned int max);
+ unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp);
+ void bnxt_set_max_func_cp_rings(struct bnxt *bp, unsigned int max);
+ unsigned int bnxt_get_max_func_irqs(struct bnxt *bp);
+-void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max);
+ int bnxt_get_avail_msix(struct bnxt *bp, int num);
+ int bnxt_reserve_rings(struct bnxt *bp);
+ void bnxt_tx_disable(struct bnxt *bp);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index 795f45024c20..491bd40a254d 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -27,6 +27,15 @@
+ #define BNXT_FID_INVALID 0xffff
+ #define VLAN_TCI(vid, prio) ((vid) | ((prio) << VLAN_PRIO_SHIFT))
+
++#define is_vlan_pcp_wildcarded(vlan_tci_mask) \
++ ((ntohs(vlan_tci_mask) & VLAN_PRIO_MASK) == 0x0000)
++#define is_vlan_pcp_exactmatch(vlan_tci_mask) \
++ ((ntohs(vlan_tci_mask) & VLAN_PRIO_MASK) == VLAN_PRIO_MASK)
++#define is_vlan_pcp_zero(vlan_tci) \
++ ((ntohs(vlan_tci) & VLAN_PRIO_MASK) == 0x0000)
++#define is_vid_exactmatch(vlan_tci_mask) \
++ ((ntohs(vlan_tci_mask) & VLAN_VID_MASK) == VLAN_VID_MASK)
++
+ /* Return the dst fid of the func for flow forwarding
+ * For PFs: src_fid is the fid of the PF
+ * For VF-reps: src_fid the fid of the VF
+@@ -389,6 +398,21 @@ static bool is_exactmatch(void *mask, int len)
+ return true;
+ }
+
++static bool is_vlan_tci_allowed(__be16 vlan_tci_mask,
++ __be16 vlan_tci)
++{
++ /* VLAN priority must be either exactly zero or fully wildcarded and
++ * VLAN id must be exact match.
++ */
++ if (is_vid_exactmatch(vlan_tci_mask) &&
++ ((is_vlan_pcp_exactmatch(vlan_tci_mask) &&
++ is_vlan_pcp_zero(vlan_tci)) ||
++ is_vlan_pcp_wildcarded(vlan_tci_mask)))
++ return true;
++
++ return false;
++}
++
+ static bool bits_set(void *key, int len)
+ {
+ const u8 *p = key;
+@@ -803,9 +827,9 @@ static bool bnxt_tc_can_offload(struct bnxt *bp, struct bnxt_tc_flow *flow)
+ /* Currently VLAN fields cannot be partial wildcard */
+ if (bits_set(&flow->l2_key.inner_vlan_tci,
+ sizeof(flow->l2_key.inner_vlan_tci)) &&
+- !is_exactmatch(&flow->l2_mask.inner_vlan_tci,
+- sizeof(flow->l2_mask.inner_vlan_tci))) {
+- netdev_info(bp->dev, "Wildcard match unsupported for VLAN TCI\n");
++ !is_vlan_tci_allowed(flow->l2_mask.inner_vlan_tci,
++ flow->l2_key.inner_vlan_tci)) {
++ netdev_info(bp->dev, "Unsupported VLAN TCI\n");
+ return false;
+ }
+ if (bits_set(&flow->l2_key.inner_vlan_tpid,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index 347e4f946eb2..840f6e505f73 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -169,7 +169,6 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id,
+ edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
+ }
+ bnxt_fill_msix_vecs(bp, ent);
+- bnxt_set_max_func_irqs(bp, bnxt_get_max_func_irqs(bp) - avail_msix);
+ bnxt_set_max_func_cp_rings(bp, max_cp_rings - avail_msix);
+ edev->flags |= BNXT_EN_FLAG_MSIX_REQUESTED;
+ return avail_msix;
+@@ -192,7 +191,6 @@ static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, int ulp_id)
+ msix_requested = edev->ulp_tbl[ulp_id].msix_requested;
+ bnxt_set_max_func_cp_rings(bp, max_cp_rings + msix_requested);
+ edev->ulp_tbl[ulp_id].msix_requested = 0;
+- bnxt_set_max_func_irqs(bp, bnxt_get_max_func_irqs(bp) + msix_requested);
+ edev->flags &= ~BNXT_EN_FLAG_MSIX_REQUESTED;
+ if (netif_running(dev)) {
+ bnxt_close_nic(bp, true, false);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 068f991395dc..01032f37a308 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1818,13 +1818,7 @@ static void macb_free_consistent(struct macb *bp)
+ struct macb_queue *queue;
+ unsigned int q;
+
+- queue = &bp->queues[0];
+ bp->macbgem_ops.mog_free_rx_buffers(bp);
+- if (queue->rx_ring) {
+- dma_free_coherent(&bp->pdev->dev, RX_RING_BYTES(bp),
+- queue->rx_ring, queue->rx_ring_dma);
+- queue->rx_ring = NULL;
+- }
+
+ for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+ kfree(queue->tx_skb);
+@@ -1834,6 +1828,11 @@ static void macb_free_consistent(struct macb *bp)
+ queue->tx_ring, queue->tx_ring_dma);
+ queue->tx_ring = NULL;
+ }
++ if (queue->rx_ring) {
++ dma_free_coherent(&bp->pdev->dev, RX_RING_BYTES(bp),
++ queue->rx_ring, queue->rx_ring_dma);
++ queue->rx_ring = NULL;
++ }
+ }
+ }
+
+diff --git a/drivers/net/ethernet/cavium/Kconfig b/drivers/net/ethernet/cavium/Kconfig
+index 043e3c11c42b..92d88c5f76fb 100644
+--- a/drivers/net/ethernet/cavium/Kconfig
++++ b/drivers/net/ethernet/cavium/Kconfig
+@@ -15,7 +15,7 @@ if NET_VENDOR_CAVIUM
+
+ config THUNDER_NIC_PF
+ tristate "Thunder Physical function driver"
+- depends on 64BIT
++ depends on 64BIT && PCI
+ select THUNDER_NIC_BGX
+ ---help---
+ This driver supports Thunder's NIC physical function.
+@@ -28,13 +28,13 @@ config THUNDER_NIC_PF
+ config THUNDER_NIC_VF
+ tristate "Thunder Virtual function driver"
+ imply CAVIUM_PTP
+- depends on 64BIT
++ depends on 64BIT && PCI
+ ---help---
+ This driver supports Thunder's NIC virtual function
+
+ config THUNDER_NIC_BGX
+ tristate "Thunder MAC interface driver (BGX)"
+- depends on 64BIT
++ depends on 64BIT && PCI
+ select PHYLIB
+ select MDIO_THUNDER
+ select THUNDER_NIC_RGX
+@@ -44,7 +44,7 @@ config THUNDER_NIC_BGX
+
+ config THUNDER_NIC_RGX
+ tristate "Thunder MAC interface driver (RGX)"
+- depends on 64BIT
++ depends on 64BIT && PCI
+ select PHYLIB
+ select MDIO_THUNDER
+ ---help---
+@@ -53,7 +53,7 @@ config THUNDER_NIC_RGX
+
+ config CAVIUM_PTP
+ tristate "Cavium PTP coprocessor as PTP clock"
+- depends on 64BIT
++ depends on 64BIT && PCI
+ imply PTP_1588_CLOCK
+ default y
+ ---help---
+@@ -65,7 +65,7 @@ config CAVIUM_PTP
+
+ config LIQUIDIO
+ tristate "Cavium LiquidIO support"
+- depends on 64BIT
++ depends on 64BIT && PCI
+ depends on MAY_USE_DEVLINK
+ imply PTP_1588_CLOCK
+ select FW_LOADER
+diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+index 3f6afb54a5eb..bb43ddb7539e 100644
+--- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
++++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+@@ -643,13 +643,21 @@ static int octeon_mgmt_set_mac_address(struct net_device *netdev, void *addr)
+ static int octeon_mgmt_change_mtu(struct net_device *netdev, int new_mtu)
+ {
+ struct octeon_mgmt *p = netdev_priv(netdev);
+- int size_without_fcs = new_mtu + OCTEON_MGMT_RX_HEADROOM;
++ int max_packet = new_mtu + ETH_HLEN + ETH_FCS_LEN;
+
+ netdev->mtu = new_mtu;
+
+- cvmx_write_csr(p->agl + AGL_GMX_RX_FRM_MAX, size_without_fcs);
++ /* HW lifts the limit if the frame is VLAN tagged
++ * (+4 bytes per each tag, up to two tags)
++ */
++ cvmx_write_csr(p->agl + AGL_GMX_RX_FRM_MAX, max_packet);
++ /* Set the hardware to truncate packets larger than the MTU. The jabber
++ * register must be set to a multiple of 8 bytes, so round up. JABBER is
++ * an unconditional limit, so we need to account for two possible VLAN
++ * tags.
++ */
+ cvmx_write_csr(p->agl + AGL_GMX_RX_JABBER,
+- (size_without_fcs + 7) & 0xfff8);
++ (max_packet + 7 + VLAN_HLEN * 2) & 0xfff8);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 72c83496e01f..da73bf702e15 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -263,7 +263,7 @@ static void dcb_tx_queue_prio_enable(struct net_device *dev, int enable)
+ "Can't %s DCB Priority on port %d, TX Queue %d: err=%d\n",
+ enable ? "set" : "unset", pi->port_id, i, -err);
+ else
+- txq->dcb_prio = value;
++ txq->dcb_prio = enable ? value : 0;
+ }
+ }
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index 7cb3ef466cc7..c7a94aacc664 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -8526,7 +8526,7 @@ static int t4_get_flash_params(struct adapter *adap)
+ };
+
+ unsigned int part, manufacturer;
+- unsigned int density, size;
++ unsigned int density, size = 0;
+ u32 flashid = 0;
+ int ret;
+
+@@ -8596,11 +8596,6 @@ static int t4_get_flash_params(struct adapter *adap)
+ case 0x22: /* 256MB */
+ size = 1 << 28;
+ break;
+-
+- default:
+- dev_err(adap->pdev_dev, "Micron Flash Part has bad size, ID = %#x, Density code = %#x\n",
+- flashid, density);
+- return -EINVAL;
+ }
+ break;
+ }
+@@ -8616,10 +8611,6 @@ static int t4_get_flash_params(struct adapter *adap)
+ case 0x17: /* 64MB */
+ size = 1 << 26;
+ break;
+- default:
+- dev_err(adap->pdev_dev, "ISSI Flash Part has bad size, ID = %#x, Density code = %#x\n",
+- flashid, density);
+- return -EINVAL;
+ }
+ break;
+ }
+@@ -8635,10 +8626,6 @@ static int t4_get_flash_params(struct adapter *adap)
+ case 0x18: /* 16MB */
+ size = 1 << 24;
+ break;
+- default:
+- dev_err(adap->pdev_dev, "Macronix Flash Part has bad size, ID = %#x, Density code = %#x\n",
+- flashid, density);
+- return -EINVAL;
+ }
+ break;
+ }
+@@ -8654,17 +8641,21 @@ static int t4_get_flash_params(struct adapter *adap)
+ case 0x18: /* 16MB */
+ size = 1 << 24;
+ break;
+- default:
+- dev_err(adap->pdev_dev, "Winbond Flash Part has bad size, ID = %#x, Density code = %#x\n",
+- flashid, density);
+- return -EINVAL;
+ }
+ break;
+ }
+- default:
+- dev_err(adap->pdev_dev, "Unsupported Flash Part, ID = %#x\n",
+- flashid);
+- return -EINVAL;
++ }
++
++ /* If we didn't recognize the FLASH part, that's no real issue: the
++ * Hardware/Software contract says that Hardware will _*ALWAYS*_
++ * use a FLASH part which is at least 4MB in size and has 64KB
++ * sectors. The unrecognized FLASH part is likely to be much larger
++ * than 4MB, but that's all we really need.
++ */
++ if (size == 0) {
++ dev_warn(adap->pdev_dev, "Unknown Flash Part, ID = %#x, assuming 4MB\n",
++ flashid);
++ size = 1 << 22;
+ }
+
+ /* Store decoded Flash size and fall through into vetting code. */
+diff --git a/drivers/net/ethernet/cisco/enic/enic_clsf.c b/drivers/net/ethernet/cisco/enic/enic_clsf.c
+index 973c1fb70d09..99038dfc7fbe 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_clsf.c
++++ b/drivers/net/ethernet/cisco/enic/enic_clsf.c
+@@ -79,7 +79,6 @@ void enic_rfs_flw_tbl_init(struct enic *enic)
+ enic->rfs_h.max = enic->config.num_arfs;
+ enic->rfs_h.free = enic->rfs_h.max;
+ enic->rfs_h.toclean = 0;
+- enic_rfs_timer_start(enic);
+ }
+
+ void enic_rfs_flw_tbl_free(struct enic *enic)
+@@ -88,7 +87,6 @@ void enic_rfs_flw_tbl_free(struct enic *enic)
+
+ enic_rfs_timer_stop(enic);
+ spin_lock_bh(&enic->rfs_h.lock);
+- enic->rfs_h.free = 0;
+ for (i = 0; i < (1 << ENIC_RFS_FLW_BITSHIFT); i++) {
+ struct hlist_head *hhead;
+ struct hlist_node *tmp;
+@@ -99,6 +97,7 @@ void enic_rfs_flw_tbl_free(struct enic *enic)
+ enic_delfltr(enic, n->fltr_id);
+ hlist_del(&n->node);
+ kfree(n);
++ enic->rfs_h.free++;
+ }
+ }
+ spin_unlock_bh(&enic->rfs_h.lock);
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index 454e57ef047a..84eac2b0837d 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -1971,7 +1971,7 @@ static int enic_open(struct net_device *netdev)
+ vnic_intr_unmask(&enic->intr[i]);
+
+ enic_notify_timer_start(enic);
+- enic_rfs_flw_tbl_init(enic);
++ enic_rfs_timer_start(enic);
+
+ return 0;
+
+@@ -2895,6 +2895,7 @@ static int enic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ timer_setup(&enic->notify_timer, enic_notify_timer, 0);
+
++ enic_rfs_flw_tbl_init(enic);
+ enic_set_rx_coal_setting(enic);
+ INIT_WORK(&enic->reset, enic_reset);
+ INIT_WORK(&enic->tx_hang_reset, enic_tx_hang_reset);
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index fd43f98ddbe7..38498cfa405e 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -125,6 +125,9 @@ MODULE_PARM_DESC(tx_timeout, "The Tx timeout in ms");
+ /* Default alignment for start of data in an Rx FD */
+ #define DPAA_FD_DATA_ALIGNMENT 16
+
++/* The DPAA requires 256 bytes reserved and mapped for the SGT */
++#define DPAA_SGT_SIZE 256
++
+ /* Values for the L3R field of the FM Parse Results
+ */
+ /* L3 Type field: First IP Present IPv4 */
+@@ -1617,8 +1620,8 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
+
+ if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) {
+ nr_frags = skb_shinfo(skb)->nr_frags;
+- dma_unmap_single(dev, addr, qm_fd_get_offset(fd) +
+- sizeof(struct qm_sg_entry) * (1 + nr_frags),
++ dma_unmap_single(dev, addr,
++ qm_fd_get_offset(fd) + DPAA_SGT_SIZE,
+ dma_dir);
+
+ /* The sgt buffer has been allocated with netdev_alloc_frag(),
+@@ -1903,8 +1906,7 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
+ void *sgt_buf;
+
+ /* get a page frag to store the SGTable */
+- sz = SKB_DATA_ALIGN(priv->tx_headroom +
+- sizeof(struct qm_sg_entry) * (1 + nr_frags));
++ sz = SKB_DATA_ALIGN(priv->tx_headroom + DPAA_SGT_SIZE);
+ sgt_buf = netdev_alloc_frag(sz);
+ if (unlikely(!sgt_buf)) {
+ netdev_err(net_dev, "netdev_alloc_frag() failed for size %d\n",
+@@ -1972,9 +1974,8 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
+ skbh = (struct sk_buff **)buffer_start;
+ *skbh = skb;
+
+- addr = dma_map_single(dev, buffer_start, priv->tx_headroom +
+- sizeof(struct qm_sg_entry) * (1 + nr_frags),
+- dma_dir);
++ addr = dma_map_single(dev, buffer_start,
++ priv->tx_headroom + DPAA_SGT_SIZE, dma_dir);
+ if (unlikely(dma_mapping_error(dev, addr))) {
+ dev_err(dev, "DMA mapping failed");
+ err = -EINVAL;
+diff --git a/drivers/net/ethernet/freescale/fman/fman_port.c b/drivers/net/ethernet/freescale/fman/fman_port.c
+index 6552d68ea6e1..4aa47bc8b02f 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_port.c
++++ b/drivers/net/ethernet/freescale/fman/fman_port.c
+@@ -324,6 +324,10 @@ struct fman_port_qmi_regs {
+ #define HWP_HXS_PHE_REPORT 0x00000800
+ #define HWP_HXS_PCAC_PSTAT 0x00000100
+ #define HWP_HXS_PCAC_PSTOP 0x00000001
++#define HWP_HXS_TCP_OFFSET 0xA
++#define HWP_HXS_UDP_OFFSET 0xB
++#define HWP_HXS_SH_PAD_REM 0x80000000
++
+ struct fman_port_hwp_regs {
+ struct {
+ u32 ssa; /* Soft Sequence Attachment */
+@@ -728,6 +732,10 @@ static void init_hwp(struct fman_port *port)
+ iowrite32be(0xffffffff, &regs->pmda[i].lcv);
+ }
+
++ /* Short packet padding removal from checksum calculation */
++ iowrite32be(HWP_HXS_SH_PAD_REM, &regs->pmda[HWP_HXS_TCP_OFFSET].ssa);
++ iowrite32be(HWP_HXS_SH_PAD_REM, &regs->pmda[HWP_HXS_UDP_OFFSET].ssa);
++
+ start_port_hwp(port);
+ }
+
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 5ec1185808e5..65aa7a6e33a2 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -319,7 +319,8 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
+ return;
+
+ failure:
+- dev_info(dev, "replenish pools failure\n");
++ if (lpar_rc != H_PARAMETER && lpar_rc != H_CLOSED)
++ dev_err_ratelimited(dev, "rx: replenish packet buffer failed\n");
+ pool->free_map[pool->next_free] = index;
+ pool->rx_buff[index].skb = NULL;
+
+@@ -1594,7 +1595,8 @@ static int ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ &tx_crq);
+ }
+ if (lpar_rc != H_SUCCESS) {
+- dev_err(dev, "tx failed with code %ld\n", lpar_rc);
++ if (lpar_rc != H_CLOSED && lpar_rc != H_PARAMETER)
++ dev_err_ratelimited(dev, "tx: send failed\n");
+ dev_kfree_skb_any(skb);
+ tx_buff->skb = NULL;
+
+@@ -1799,8 +1801,8 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+
+ rc = ibmvnic_login(netdev);
+ if (rc) {
+- adapter->state = VNIC_PROBED;
+- return 0;
++ adapter->state = reset_state;
++ return rc;
+ }
+
+ if (adapter->reset_reason == VNIC_RESET_CHANGE_PARAM ||
+@@ -3085,6 +3087,25 @@ static union ibmvnic_crq *ibmvnic_next_crq(struct ibmvnic_adapter *adapter)
+ return crq;
+ }
+
++static void print_subcrq_error(struct device *dev, int rc, const char *func)
++{
++ switch (rc) {
++ case H_PARAMETER:
++ dev_warn_ratelimited(dev,
++ "%s failed: Send request is malformed or adapter failover pending. (rc=%d)\n",
++ func, rc);
++ break;
++ case H_CLOSED:
++ dev_warn_ratelimited(dev,
++ "%s failed: Backing queue closed. Adapter is down or failover pending. (rc=%d)\n",
++ func, rc);
++ break;
++ default:
++ dev_err_ratelimited(dev, "%s failed: (rc=%d)\n", func, rc);
++ break;
++ }
++}
++
+ static int send_subcrq(struct ibmvnic_adapter *adapter, u64 remote_handle,
+ union sub_crq *sub_crq)
+ {
+@@ -3111,11 +3132,8 @@ static int send_subcrq(struct ibmvnic_adapter *adapter, u64 remote_handle,
+ cpu_to_be64(u64_crq[2]),
+ cpu_to_be64(u64_crq[3]));
+
+- if (rc) {
+- if (rc == H_CLOSED)
+- dev_warn(dev, "CRQ Queue closed\n");
+- dev_err(dev, "Send error (rc=%d)\n", rc);
+- }
++ if (rc)
++ print_subcrq_error(dev, rc, __func__);
+
+ return rc;
+ }
+@@ -3133,11 +3151,8 @@ static int send_subcrq_indirect(struct ibmvnic_adapter *adapter,
+ cpu_to_be64(remote_handle),
+ ioba, num_entries);
+
+- if (rc) {
+- if (rc == H_CLOSED)
+- dev_warn(dev, "CRQ Queue closed\n");
+- dev_err(dev, "Send (indirect) error (rc=%d)\n", rc);
+- }
++ if (rc)
++ print_subcrq_error(dev, rc, __func__);
+
+ return rc;
+ }
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+index 633be93f3dbb..b8f1f904e5c2 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+@@ -1897,7 +1897,12 @@ s32 ixgbe_set_rar_generic(struct ixgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+ if (enable_addr != 0)
+ rar_high |= IXGBE_RAH_AV;
+
++ /* Record lower 32 bits of MAC address and then make
++ * sure that write is flushed to hardware before writing
++ * the upper 16 bits and setting the valid bit.
++ */
+ IXGBE_WRITE_REG(hw, IXGBE_RAL(index), rar_low);
++ IXGBE_WRITE_FLUSH(hw);
+ IXGBE_WRITE_REG(hw, IXGBE_RAH(index), rar_high);
+
+ return 0;
+@@ -1929,8 +1934,13 @@ s32 ixgbe_clear_rar_generic(struct ixgbe_hw *hw, u32 index)
+ rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(index));
+ rar_high &= ~(0x0000FFFF | IXGBE_RAH_AV);
+
+- IXGBE_WRITE_REG(hw, IXGBE_RAL(index), 0);
++ /* Clear the address valid bit and upper 16 bits of the address
++ * before clearing the lower bits. This way we aren't updating
++ * a live filter.
++ */
+ IXGBE_WRITE_REG(hw, IXGBE_RAH(index), rar_high);
++ IXGBE_WRITE_FLUSH(hw);
++ IXGBE_WRITE_REG(hw, IXGBE_RAL(index), 0);
+
+ /* clear VMDq pool/queue selection for this RAR */
+ hw->mac.ops.clear_vmdq(hw, index, IXGBE_CLEAR_VMDQ_ALL);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index cead23e3db0c..eea4b6f0efe5 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -759,7 +759,7 @@ int ixgbe_ipsec_tx(struct ixgbe_ring *tx_ring,
+ }
+
+ itd->sa_idx = xs->xso.offload_handle - IXGBE_IPSEC_BASE_TX_INDEX;
+- if (unlikely(itd->sa_idx > IXGBE_IPSEC_MAX_SA_COUNT)) {
++ if (unlikely(itd->sa_idx >= IXGBE_IPSEC_MAX_SA_COUNT)) {
+ netdev_err(tx_ring->netdev, "%s: bad sa_idx=%d handle=%lu\n",
+ __func__, itd->sa_idx, xs->xso.offload_handle);
+ return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 4f52f87cf210..b6624e218962 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1594,17 +1594,15 @@ static void esw_disable_vport(struct mlx5_eswitch *esw, int vport_num)
+ }
+
+ /* Public E-Switch API */
+-#define ESW_ALLOWED(esw) ((esw) && MLX5_VPORT_MANAGER((esw)->dev))
++#define ESW_ALLOWED(esw) ((esw) && MLX5_ESWITCH_MANAGER((esw)->dev))
++
+
+ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode)
+ {
+ int err;
+ int i, enabled_events;
+
+- if (!ESW_ALLOWED(esw))
+- return 0;
+-
+- if (!MLX5_ESWITCH_MANAGER(esw->dev) ||
++ if (!ESW_ALLOWED(esw) ||
+ !MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ft_support)) {
+ esw_warn(esw->dev, "E-Switch FDB is not supported, aborting ...\n");
+ return -EOPNOTSUPP;
+@@ -1806,7 +1804,7 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
+ u64 node_guid;
+ int err = 0;
+
+- if (!ESW_ALLOWED(esw))
++ if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
+ return -EPERM;
+ if (!LEGAL_VPORT(esw, vport) || is_multicast_ether_addr(mac))
+ return -EINVAL;
+@@ -1883,7 +1881,7 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
+ {
+ struct mlx5_vport *evport;
+
+- if (!ESW_ALLOWED(esw))
++ if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
+ return -EPERM;
+ if (!LEGAL_VPORT(esw, vport))
+ return -EINVAL;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vport.c b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+index 177e076b8d17..1f3ccb435b06 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/vport.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+@@ -549,8 +549,6 @@ int mlx5_modify_nic_vport_node_guid(struct mlx5_core_dev *mdev,
+ return -EINVAL;
+ if (!MLX5_CAP_GEN(mdev, vport_group_manager))
+ return -EACCES;
+- if (!MLX5_CAP_ESW(mdev, nic_vport_node_guid_modify))
+- return -EOPNOTSUPP;
+
+ in = kvzalloc(inlen, GFP_KERNEL);
+ if (!in)
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.c b/drivers/net/ethernet/netronome/nfp/bpf/main.c
+index 1a781281c57a..a86ae1318043 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/main.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/main.c
+@@ -73,10 +73,10 @@ nfp_bpf_xdp_offload(struct nfp_app *app, struct nfp_net *nn,
+
+ ret = nfp_net_bpf_offload(nn, prog, running, extack);
+ /* Stop offload if replace not possible */
+- if (ret && prog)
+- nfp_bpf_xdp_offload(app, nn, NULL, extack);
++ if (ret)
++ return ret;
+
+- nn->dp.bpf_offload_xdp = prog && !ret;
++ nn->dp.bpf_offload_xdp = !!prog;
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nffw.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nffw.c
+index cd34097b79f1..37a6d7822a38 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nffw.c
++++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nffw.c
+@@ -232,7 +232,7 @@ struct nfp_nffw_info *nfp_nffw_info_open(struct nfp_cpp *cpp)
+ err = nfp_cpp_read(cpp, nfp_resource_cpp_id(state->res),
+ nfp_resource_address(state->res),
+ fwinf, sizeof(*fwinf));
+- if (err < sizeof(*fwinf))
++ if (err < (int)sizeof(*fwinf))
+ goto err_release;
+
+ if (!nffw_res_flg_init_get(fwinf))
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+index e82986df9b8e..6292c38ef597 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+@@ -255,9 +255,8 @@ qed_dcbx_get_app_protocol_type(struct qed_hwfn *p_hwfn,
+ *type = DCBX_PROTOCOL_ROCE_V2;
+ } else {
+ *type = DCBX_MAX_PROTOCOL_TYPE;
+- DP_ERR(p_hwfn,
+- "No action required, App TLV id = 0x%x app_prio_bitmap = 0x%x\n",
+- id, app_prio_bitmap);
++ DP_ERR(p_hwfn, "No action required, App TLV entry = 0x%x\n",
++ app_prio_bitmap);
+ return false;
+ }
+
+@@ -1469,8 +1468,8 @@ static u8 qed_dcbnl_getcap(struct qed_dev *cdev, int capid, u8 *cap)
+ *cap = 0x80;
+ break;
+ case DCB_CAP_ATTR_DCBX:
+- *cap = (DCB_CAP_DCBX_LLD_MANAGED | DCB_CAP_DCBX_VER_CEE |
+- DCB_CAP_DCBX_VER_IEEE | DCB_CAP_DCBX_STATIC);
++ *cap = (DCB_CAP_DCBX_VER_CEE | DCB_CAP_DCBX_VER_IEEE |
++ DCB_CAP_DCBX_STATIC);
+ break;
+ default:
+ *cap = false;
+@@ -1538,8 +1537,6 @@ static u8 qed_dcbnl_getdcbx(struct qed_dev *cdev)
+ if (!dcbx_info)
+ return 0;
+
+- if (dcbx_info->operational.enabled)
+- mode |= DCB_CAP_DCBX_LLD_MANAGED;
+ if (dcbx_info->operational.ieee)
+ mode |= DCB_CAP_DCBX_VER_IEEE;
+ if (dcbx_info->operational.cee)
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+index 4926c5532fba..13641096a002 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_debug.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+@@ -6663,7 +6663,7 @@ static enum dbg_status qed_parse_mcp_trace_buf(u8 *trace_buf,
+ format_idx = header & MFW_TRACE_EVENTID_MASK;
+
+ /* Skip message if its index doesn't exist in the meta data */
+- if (format_idx > s_mcp_trace_meta.formats_num) {
++ if (format_idx >= s_mcp_trace_meta.formats_num) {
+ u8 format_size =
+ (u8)((header & MFW_TRACE_PRM_SIZE_MASK) >>
+ MFW_TRACE_PRM_SIZE_SHIFT);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+index 468c59d2e491..fa0598cf0ad6 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+@@ -201,8 +201,9 @@ void qed_ll2b_complete_rx_packet(void *cxt, struct qed_ll2_comp_rx_data *data)
+
+ skb = build_skb(buffer->data, 0);
+ if (!skb) {
+- rc = -ENOMEM;
+- goto out_post;
++ DP_INFO(cdev, "Failed to build SKB\n");
++ kfree(buffer->data);
++ goto out_post1;
+ }
+
+ data->u.placement_offset += NET_SKB_PAD;
+@@ -224,8 +225,14 @@ void qed_ll2b_complete_rx_packet(void *cxt, struct qed_ll2_comp_rx_data *data)
+ cdev->ll2->cbs->rx_cb(cdev->ll2->cb_cookie, skb,
+ data->opaque_data_0,
+ data->opaque_data_1);
++ } else {
++ DP_VERBOSE(p_hwfn, (NETIF_MSG_RX_STATUS | NETIF_MSG_PKTDATA |
++ QED_MSG_LL2 | QED_MSG_STORAGE),
++ "Dropping the packet\n");
++ kfree(buffer->data);
+ }
+
++out_post1:
+ /* Update Buffer information and update FW producer */
+ buffer->data = new_data;
+ buffer->phys_addr = new_phys_addr;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
+index 261f21d6b0b0..30e9718fefbb 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
+@@ -566,8 +566,16 @@ static irqreturn_t qed_single_int(int irq, void *dev_instance)
+ /* Fastpath interrupts */
+ for (j = 0; j < 64; j++) {
+ if ((0x2ULL << j) & status) {
+- hwfn->simd_proto_handler[j].func(
+- hwfn->simd_proto_handler[j].token);
++ struct qed_simd_fp_handler *p_handler =
++ &hwfn->simd_proto_handler[j];
++
++ if (p_handler->func)
++ p_handler->func(p_handler->token);
++ else
++ DP_NOTICE(hwfn,
++ "Not calling fastpath handler as it is NULL [handler #%d, status 0x%llx]\n",
++ j, status);
++
+ status &= ~(0x2ULL << j);
+ rc = IRQ_HANDLED;
+ }
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sysfs.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sysfs.c
+index 891f03a7a33d..8d7b9bb910f2 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sysfs.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sysfs.c
+@@ -1128,6 +1128,8 @@ static ssize_t qlcnic_83xx_sysfs_flash_write_handler(struct file *filp,
+ struct qlcnic_adapter *adapter = dev_get_drvdata(dev);
+
+ ret = kstrtoul(buf, 16, &data);
++ if (ret)
++ return ret;
+
+ switch (data) {
+ case QLC_83XX_FLASH_SECTOR_ERASE_CMD:
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index 5803cd6db406..206f0266463e 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -658,7 +658,7 @@ qcaspi_netdev_open(struct net_device *dev)
+ return ret;
+ }
+
+- netif_start_queue(qca->net_dev);
++ /* SPI thread takes care of TX queue */
+
+ return 0;
+ }
+@@ -760,6 +760,9 @@ qcaspi_netdev_tx_timeout(struct net_device *dev)
+ qca->net_dev->stats.tx_errors++;
+ /* Trigger tx queue flush and QCA7000 reset */
+ qca->sync = QCASPI_SYNC_UNKNOWN;
++
++ if (qca->spi_thread)
++ wake_up_process(qca->spi_thread);
+ }
+
+ static int
+@@ -878,22 +881,22 @@ qca_spi_probe(struct spi_device *spi)
+
+ if ((qcaspi_clkspeed < QCASPI_CLK_SPEED_MIN) ||
+ (qcaspi_clkspeed > QCASPI_CLK_SPEED_MAX)) {
+- dev_info(&spi->dev, "Invalid clkspeed: %d\n",
+- qcaspi_clkspeed);
++ dev_err(&spi->dev, "Invalid clkspeed: %d\n",
++ qcaspi_clkspeed);
+ return -EINVAL;
+ }
+
+ if ((qcaspi_burst_len < QCASPI_BURST_LEN_MIN) ||
+ (qcaspi_burst_len > QCASPI_BURST_LEN_MAX)) {
+- dev_info(&spi->dev, "Invalid burst len: %d\n",
+- qcaspi_burst_len);
++ dev_err(&spi->dev, "Invalid burst len: %d\n",
++ qcaspi_burst_len);
+ return -EINVAL;
+ }
+
+ if ((qcaspi_pluggable < QCASPI_PLUGGABLE_MIN) ||
+ (qcaspi_pluggable > QCASPI_PLUGGABLE_MAX)) {
+- dev_info(&spi->dev, "Invalid pluggable: %d\n",
+- qcaspi_pluggable);
++ dev_err(&spi->dev, "Invalid pluggable: %d\n",
++ qcaspi_pluggable);
+ return -EINVAL;
+ }
+
+@@ -955,8 +958,8 @@ qca_spi_probe(struct spi_device *spi)
+ }
+
+ if (register_netdev(qcaspi_devs)) {
+- dev_info(&spi->dev, "Unable to register net device %s\n",
+- qcaspi_devs->name);
++ dev_err(&spi->dev, "Unable to register net device %s\n",
++ qcaspi_devs->name);
+ free_netdev(qcaspi_devs);
+ return -EFAULT;
+ }
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index d3c6ce074571..07cc71cc9b76 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -8345,6 +8345,7 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ NETIF_F_HW_VLAN_CTAG_RX;
+ dev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO |
+ NETIF_F_HIGHDMA;
++ dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+
+ tp->cp_cmd |= RxChkSum | RxVlan;
+
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 68f122140966..40266fe01186 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -980,6 +980,13 @@ static void ravb_adjust_link(struct net_device *ndev)
+ struct ravb_private *priv = netdev_priv(ndev);
+ struct phy_device *phydev = ndev->phydev;
+ bool new_state = false;
++ unsigned long flags;
++
++ spin_lock_irqsave(&priv->lock, flags);
++
++ /* Disable TX and RX right over here, if E-MAC change is ignored */
++ if (priv->no_avb_link)
++ ravb_rcv_snd_disable(ndev);
+
+ if (phydev->link) {
+ if (phydev->duplex != priv->duplex) {
+@@ -997,18 +1004,21 @@ static void ravb_adjust_link(struct net_device *ndev)
+ ravb_modify(ndev, ECMR, ECMR_TXF, 0);
+ new_state = true;
+ priv->link = phydev->link;
+- if (priv->no_avb_link)
+- ravb_rcv_snd_enable(ndev);
+ }
+ } else if (priv->link) {
+ new_state = true;
+ priv->link = 0;
+ priv->speed = 0;
+ priv->duplex = -1;
+- if (priv->no_avb_link)
+- ravb_rcv_snd_disable(ndev);
+ }
+
++ /* Enable TX and RX right over here, if E-MAC change is ignored */
++ if (priv->no_avb_link && phydev->link)
++ ravb_rcv_snd_enable(ndev);
++
++ mmiowb();
++ spin_unlock_irqrestore(&priv->lock, flags);
++
+ if (new_state && netif_msg_link(priv))
+ phy_print_status(phydev);
+ }
+@@ -1115,52 +1125,18 @@ static int ravb_get_link_ksettings(struct net_device *ndev,
+ static int ravb_set_link_ksettings(struct net_device *ndev,
+ const struct ethtool_link_ksettings *cmd)
+ {
+- struct ravb_private *priv = netdev_priv(ndev);
+- unsigned long flags;
+- int error;
+-
+ if (!ndev->phydev)
+ return -ENODEV;
+
+- spin_lock_irqsave(&priv->lock, flags);
+-
+- /* Disable TX and RX */
+- ravb_rcv_snd_disable(ndev);
+-
+- error = phy_ethtool_ksettings_set(ndev->phydev, cmd);
+- if (error)
+- goto error_exit;
+-
+- if (cmd->base.duplex == DUPLEX_FULL)
+- priv->duplex = 1;
+- else
+- priv->duplex = 0;
+-
+- ravb_set_duplex(ndev);
+-
+-error_exit:
+- mdelay(1);
+-
+- /* Enable TX and RX */
+- ravb_rcv_snd_enable(ndev);
+-
+- mmiowb();
+- spin_unlock_irqrestore(&priv->lock, flags);
+-
+- return error;
++ return phy_ethtool_ksettings_set(ndev->phydev, cmd);
+ }
+
+ static int ravb_nway_reset(struct net_device *ndev)
+ {
+- struct ravb_private *priv = netdev_priv(ndev);
+ int error = -ENODEV;
+- unsigned long flags;
+
+- if (ndev->phydev) {
+- spin_lock_irqsave(&priv->lock, flags);
++ if (ndev->phydev)
+ error = phy_start_aneg(ndev->phydev);
+- spin_unlock_irqrestore(&priv->lock, flags);
+- }
+
+ return error;
+ }
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index b6b90a6314e3..d14914495a65 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -1855,8 +1855,15 @@ static void sh_eth_adjust_link(struct net_device *ndev)
+ {
+ struct sh_eth_private *mdp = netdev_priv(ndev);
+ struct phy_device *phydev = ndev->phydev;
++ unsigned long flags;
+ int new_state = 0;
+
++ spin_lock_irqsave(&mdp->lock, flags);
++
++ /* Disable TX and RX right over here, if E-MAC change is ignored */
++ if (mdp->cd->no_psr || mdp->no_ether_link)
++ sh_eth_rcv_snd_disable(ndev);
++
+ if (phydev->link) {
+ if (phydev->duplex != mdp->duplex) {
+ new_state = 1;
+@@ -1875,18 +1882,21 @@ static void sh_eth_adjust_link(struct net_device *ndev)
+ sh_eth_modify(ndev, ECMR, ECMR_TXF, 0);
+ new_state = 1;
+ mdp->link = phydev->link;
+- if (mdp->cd->no_psr || mdp->no_ether_link)
+- sh_eth_rcv_snd_enable(ndev);
+ }
+ } else if (mdp->link) {
+ new_state = 1;
+ mdp->link = 0;
+ mdp->speed = 0;
+ mdp->duplex = -1;
+- if (mdp->cd->no_psr || mdp->no_ether_link)
+- sh_eth_rcv_snd_disable(ndev);
+ }
+
++ /* Enable TX and RX right over here, if E-MAC change is ignored */
++ if ((mdp->cd->no_psr || mdp->no_ether_link) && phydev->link)
++ sh_eth_rcv_snd_enable(ndev);
++
++ mmiowb();
++ spin_unlock_irqrestore(&mdp->lock, flags);
++
+ if (new_state && netif_msg_link(mdp))
+ phy_print_status(phydev);
+ }
+@@ -1977,39 +1987,10 @@ static int sh_eth_get_link_ksettings(struct net_device *ndev,
+ static int sh_eth_set_link_ksettings(struct net_device *ndev,
+ const struct ethtool_link_ksettings *cmd)
+ {
+- struct sh_eth_private *mdp = netdev_priv(ndev);
+- unsigned long flags;
+- int ret;
+-
+ if (!ndev->phydev)
+ return -ENODEV;
+
+- spin_lock_irqsave(&mdp->lock, flags);
+-
+- /* disable tx and rx */
+- sh_eth_rcv_snd_disable(ndev);
+-
+- ret = phy_ethtool_ksettings_set(ndev->phydev, cmd);
+- if (ret)
+- goto error_exit;
+-
+- if (cmd->base.duplex == DUPLEX_FULL)
+- mdp->duplex = 1;
+- else
+- mdp->duplex = 0;
+-
+- if (mdp->cd->set_duplex)
+- mdp->cd->set_duplex(ndev);
+-
+-error_exit:
+- mdelay(1);
+-
+- /* enable tx and rx */
+- sh_eth_rcv_snd_enable(ndev);
+-
+- spin_unlock_irqrestore(&mdp->lock, flags);
+-
+- return ret;
++ return phy_ethtool_ksettings_set(ndev->phydev, cmd);
+ }
+
+ /* If it is ever necessary to increase SH_ETH_REG_DUMP_MAX_REGS, the
+@@ -2193,18 +2174,10 @@ static void sh_eth_get_regs(struct net_device *ndev, struct ethtool_regs *regs,
+
+ static int sh_eth_nway_reset(struct net_device *ndev)
+ {
+- struct sh_eth_private *mdp = netdev_priv(ndev);
+- unsigned long flags;
+- int ret;
+-
+ if (!ndev->phydev)
+ return -ENODEV;
+
+- spin_lock_irqsave(&mdp->lock, flags);
+- ret = phy_start_aneg(ndev->phydev);
+- spin_unlock_irqrestore(&mdp->lock, flags);
+-
+- return ret;
++ return phy_start_aneg(ndev->phydev);
+ }
+
+ static u32 sh_eth_get_msglevel(struct net_device *ndev)
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index d90a7b1f4088..56ff390e6795 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -4288,9 +4288,9 @@ static int efx_ef10_filter_pri(struct efx_ef10_filter_table *table,
+ return -EPROTONOSUPPORT;
+ }
+
+-static s32 efx_ef10_filter_insert(struct efx_nic *efx,
+- struct efx_filter_spec *spec,
+- bool replace_equal)
++static s32 efx_ef10_filter_insert_locked(struct efx_nic *efx,
++ struct efx_filter_spec *spec,
++ bool replace_equal)
+ {
+ DECLARE_BITMAP(mc_rem_map, EFX_EF10_FILTER_SEARCH_LIMIT);
+ struct efx_ef10_nic_data *nic_data = efx->nic_data;
+@@ -4307,7 +4307,7 @@ static s32 efx_ef10_filter_insert(struct efx_nic *efx,
+ bool is_mc_recip;
+ s32 rc;
+
+- down_read(&efx->filter_sem);
++ WARN_ON(!rwsem_is_locked(&efx->filter_sem));
+ table = efx->filter_state;
+ down_write(&table->lock);
+
+@@ -4498,10 +4498,22 @@ out_unlock:
+ if (rss_locked)
+ mutex_unlock(&efx->rss_lock);
+ up_write(&table->lock);
+- up_read(&efx->filter_sem);
+ return rc;
+ }
+
++static s32 efx_ef10_filter_insert(struct efx_nic *efx,
++ struct efx_filter_spec *spec,
++ bool replace_equal)
++{
++ s32 ret;
++
++ down_read(&efx->filter_sem);
++ ret = efx_ef10_filter_insert_locked(efx, spec, replace_equal);
++ up_read(&efx->filter_sem);
++
++ return ret;
++}
++
+ static void efx_ef10_filter_update_rx_scatter(struct efx_nic *efx)
+ {
+ /* no need to do anything here on EF10 */
+@@ -5284,7 +5296,7 @@ static int efx_ef10_filter_insert_addr_list(struct efx_nic *efx,
+ EFX_WARN_ON_PARANOID(ids[i] != EFX_EF10_FILTER_ID_INVALID);
+ efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO, filter_flags, 0);
+ efx_filter_set_eth_local(&spec, vlan->vid, addr_list[i].addr);
+- rc = efx_ef10_filter_insert(efx, &spec, true);
++ rc = efx_ef10_filter_insert_locked(efx, &spec, true);
+ if (rc < 0) {
+ if (rollback) {
+ netif_info(efx, drv, efx->net_dev,
+@@ -5313,7 +5325,7 @@ static int efx_ef10_filter_insert_addr_list(struct efx_nic *efx,
+ efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO, filter_flags, 0);
+ eth_broadcast_addr(baddr);
+ efx_filter_set_eth_local(&spec, vlan->vid, baddr);
+- rc = efx_ef10_filter_insert(efx, &spec, true);
++ rc = efx_ef10_filter_insert_locked(efx, &spec, true);
+ if (rc < 0) {
+ netif_warn(efx, drv, efx->net_dev,
+ "Broadcast filter insert failed rc=%d\n", rc);
+@@ -5369,7 +5381,7 @@ static int efx_ef10_filter_insert_def(struct efx_nic *efx,
+ if (vlan->vid != EFX_FILTER_VID_UNSPEC)
+ efx_filter_set_eth_local(&spec, vlan->vid, NULL);
+
+- rc = efx_ef10_filter_insert(efx, &spec, true);
++ rc = efx_ef10_filter_insert_locked(efx, &spec, true);
+ if (rc < 0) {
+ const char *um = multicast ? "Multicast" : "Unicast";
+ const char *encap_name = "";
+@@ -5429,7 +5441,7 @@ static int efx_ef10_filter_insert_def(struct efx_nic *efx,
+ filter_flags, 0);
+ eth_broadcast_addr(baddr);
+ efx_filter_set_eth_local(&spec, vlan->vid, baddr);
+- rc = efx_ef10_filter_insert(efx, &spec, true);
++ rc = efx_ef10_filter_insert_locked(efx, &spec, true);
+ if (rc < 0) {
+ netif_warn(efx, drv, efx->net_dev,
+ "Broadcast filter insert failed rc=%d\n",
+diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
+index a4ebd8715494..bf41157cc712 100644
+--- a/drivers/net/ethernet/sfc/efx.c
++++ b/drivers/net/ethernet/sfc/efx.c
+@@ -1840,12 +1840,6 @@ static void efx_remove_filters(struct efx_nic *efx)
+ up_write(&efx->filter_sem);
+ }
+
+-static void efx_restore_filters(struct efx_nic *efx)
+-{
+- down_read(&efx->filter_sem);
+- efx->type->filter_table_restore(efx);
+- up_read(&efx->filter_sem);
+-}
+
+ /**************************************************************************
+ *
+@@ -2657,6 +2651,7 @@ void efx_reset_down(struct efx_nic *efx, enum reset_type method)
+ efx_disable_interrupts(efx);
+
+ mutex_lock(&efx->mac_lock);
++ down_write(&efx->filter_sem);
+ mutex_lock(&efx->rss_lock);
+ if (efx->port_initialized && method != RESET_TYPE_INVISIBLE &&
+ method != RESET_TYPE_DATAPATH)
+@@ -2714,9 +2709,8 @@ int efx_reset_up(struct efx_nic *efx, enum reset_type method, bool ok)
+ if (efx->type->rx_restore_rss_contexts)
+ efx->type->rx_restore_rss_contexts(efx);
+ mutex_unlock(&efx->rss_lock);
+- down_read(&efx->filter_sem);
+- efx_restore_filters(efx);
+- up_read(&efx->filter_sem);
++ efx->type->filter_table_restore(efx);
++ up_write(&efx->filter_sem);
+ if (efx->type->sriov_reset)
+ efx->type->sriov_reset(efx);
+
+@@ -2733,6 +2727,7 @@ fail:
+ efx->port_initialized = false;
+
+ mutex_unlock(&efx->rss_lock);
++ up_write(&efx->filter_sem);
+ mutex_unlock(&efx->mac_lock);
+
+ return rc;
+@@ -3440,7 +3435,9 @@ static int efx_pci_probe_main(struct efx_nic *efx)
+
+ efx_init_napi(efx);
+
++ down_write(&efx->filter_sem);
+ rc = efx->type->init(efx);
++ up_write(&efx->filter_sem);
+ if (rc) {
+ netif_err(efx, probe, efx->net_dev,
+ "failed to initialise NIC\n");
+@@ -3729,7 +3726,9 @@ static int efx_pm_resume(struct device *dev)
+ rc = efx->type->reset(efx, RESET_TYPE_ALL);
+ if (rc)
+ return rc;
++ down_write(&efx->filter_sem);
+ rc = efx->type->init(efx);
++ up_write(&efx->filter_sem);
+ if (rc)
+ return rc;
+ rc = efx_pm_thaw(dev);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/Kconfig b/drivers/net/ethernet/stmicro/stmmac/Kconfig
+index e28c0d2c58e9..bf4acebb6bcd 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/Kconfig
++++ b/drivers/net/ethernet/stmicro/stmmac/Kconfig
+@@ -111,7 +111,7 @@ config DWMAC_ROCKCHIP
+ config DWMAC_SOCFPGA
+ tristate "SOCFPGA dwmac support"
+ default ARCH_SOCFPGA
+- depends on OF && (ARCH_SOCFPGA || COMPILE_TEST)
++ depends on OF && (ARCH_SOCFPGA || ARCH_STRATIX10 || COMPILE_TEST)
+ select MFD_SYSCON
+ help
+ Support for ethernet controller on Altera SOCFPGA
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index 6e359572b9f0..5b3b06a0a3bf 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -55,6 +55,7 @@ struct socfpga_dwmac {
+ struct device *dev;
+ struct regmap *sys_mgr_base_addr;
+ struct reset_control *stmmac_rst;
++ struct reset_control *stmmac_ocp_rst;
+ void __iomem *splitter_base;
+ bool f2h_ptp_ref_clk;
+ struct tse_pcs pcs;
+@@ -262,8 +263,8 @@ static int socfpga_dwmac_set_phy_mode(struct socfpga_dwmac *dwmac)
+ val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII;
+
+ /* Assert reset to the enet controller before changing the phy mode */
+- if (dwmac->stmmac_rst)
+- reset_control_assert(dwmac->stmmac_rst);
++ reset_control_assert(dwmac->stmmac_ocp_rst);
++ reset_control_assert(dwmac->stmmac_rst);
+
+ regmap_read(sys_mgr_base_addr, reg_offset, &ctrl);
+ ctrl &= ~(SYSMGR_EMACGRP_CTRL_PHYSEL_MASK << reg_shift);
+@@ -288,8 +289,8 @@ static int socfpga_dwmac_set_phy_mode(struct socfpga_dwmac *dwmac)
+ /* Deassert reset for the phy configuration to be sampled by
+ * the enet controller, and operation to start in requested mode
+ */
+- if (dwmac->stmmac_rst)
+- reset_control_deassert(dwmac->stmmac_rst);
++ reset_control_deassert(dwmac->stmmac_ocp_rst);
++ reset_control_deassert(dwmac->stmmac_rst);
+ if (phymode == PHY_INTERFACE_MODE_SGMII) {
+ if (tse_pcs_init(dwmac->pcs.tse_pcs_base, &dwmac->pcs) != 0) {
+ dev_err(dwmac->dev, "Unable to initialize TSE PCS");
+@@ -324,6 +325,15 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
+ goto err_remove_config_dt;
+ }
+
++ dwmac->stmmac_ocp_rst = devm_reset_control_get_optional(dev, "stmmaceth-ocp");
++ if (IS_ERR(dwmac->stmmac_ocp_rst)) {
++ ret = PTR_ERR(dwmac->stmmac_ocp_rst);
++ dev_err(dev, "error getting reset control of ocp %d\n", ret);
++ goto err_remove_config_dt;
++ }
++
++ reset_control_deassert(dwmac->stmmac_ocp_rst);
++
+ ret = socfpga_dwmac_parse_data(dwmac, dev);
+ if (ret) {
+ dev_err(dev, "Unable to parse OF data\n");
+diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
+index 1480c094b57d..a817fad26a7b 100644
+--- a/drivers/net/ethernet/ti/davinci_emac.c
++++ b/drivers/net/ethernet/ti/davinci_emac.c
+@@ -1387,6 +1387,10 @@ static int emac_devioctl(struct net_device *ndev, struct ifreq *ifrq, int cmd)
+
+ static int match_first_device(struct device *dev, void *data)
+ {
++ if (dev->parent && dev->parent->of_node)
++ return of_device_is_compatible(dev->parent->of_node,
++ "ti,davinci_mdio");
++
+ return !strncmp(dev_name(dev), "davinci_mdio", 12);
+ }
+
+diff --git a/drivers/net/hamradio/bpqether.c b/drivers/net/hamradio/bpqether.c
+index dfabbae72efd..4a22e169b5a5 100644
+--- a/drivers/net/hamradio/bpqether.c
++++ b/drivers/net/hamradio/bpqether.c
+@@ -89,10 +89,6 @@
+ static const char banner[] __initconst = KERN_INFO \
+ "AX.25: bpqether driver version 004\n";
+
+-static char bcast_addr[6]={0xFF,0xFF,0xFF,0xFF,0xFF,0xFF};
+-
+-static char bpq_eth_addr[6];
+-
+ static int bpq_rcv(struct sk_buff *, struct net_device *, struct packet_type *, struct net_device *);
+ static int bpq_device_event(struct notifier_block *, unsigned long, void *);
+
+@@ -515,8 +511,8 @@ static int bpq_new_device(struct net_device *edev)
+ bpq->ethdev = edev;
+ bpq->axdev = ndev;
+
+- memcpy(bpq->dest_addr, bcast_addr, sizeof(bpq_eth_addr));
+- memcpy(bpq->acpt_addr, bcast_addr, sizeof(bpq_eth_addr));
++ eth_broadcast_addr(bpq->dest_addr);
++ eth_broadcast_addr(bpq->acpt_addr);
+
+ err = register_netdevice(ndev);
+ if (err)
+diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
+index f362cda85425..fde0cddac71a 100644
+--- a/drivers/net/hyperv/rndis_filter.c
++++ b/drivers/net/hyperv/rndis_filter.c
+@@ -1307,6 +1307,7 @@ out:
+ /* setting up multiple channels failed */
+ net_device->max_chn = 1;
+ net_device->num_chn = 1;
++ return 0;
+
+ err_dev_remv:
+ rndis_filter_device_remove(dev, net_device);
+diff --git a/drivers/net/ieee802154/adf7242.c b/drivers/net/ieee802154/adf7242.c
+index 64f1b1e77bc0..23a52b9293f3 100644
+--- a/drivers/net/ieee802154/adf7242.c
++++ b/drivers/net/ieee802154/adf7242.c
+@@ -275,6 +275,8 @@ struct adf7242_local {
+ struct spi_message stat_msg;
+ struct spi_transfer stat_xfer;
+ struct dentry *debugfs_root;
++ struct delayed_work work;
++ struct workqueue_struct *wqueue;
+ unsigned long flags;
+ int tx_stat;
+ bool promiscuous;
+@@ -575,10 +577,26 @@ static int adf7242_cmd_rx(struct adf7242_local *lp)
+ /* Wait until the ACK is sent */
+ adf7242_wait_status(lp, RC_STATUS_PHY_RDY, RC_STATUS_MASK, __LINE__);
+ adf7242_clear_irqstat(lp);
++ mod_delayed_work(lp->wqueue, &lp->work, msecs_to_jiffies(400));
+
+ return adf7242_cmd(lp, CMD_RC_RX);
+ }
+
++static void adf7242_rx_cal_work(struct work_struct *work)
++{
++ struct adf7242_local *lp =
++ container_of(work, struct adf7242_local, work.work);
++
++ /* Reissuing RC_RX every 400ms - to adjust for offset
++ * drift in receiver (datasheet page 61, OCL section)
++ */
++
++ if (!test_bit(FLAG_XMIT, &lp->flags)) {
++ adf7242_cmd(lp, CMD_RC_PHY_RDY);
++ adf7242_cmd_rx(lp);
++ }
++}
++
+ static int adf7242_set_txpower(struct ieee802154_hw *hw, int mbm)
+ {
+ struct adf7242_local *lp = hw->priv;
+@@ -686,7 +704,7 @@ static int adf7242_start(struct ieee802154_hw *hw)
+ enable_irq(lp->spi->irq);
+ set_bit(FLAG_START, &lp->flags);
+
+- return adf7242_cmd(lp, CMD_RC_RX);
++ return adf7242_cmd_rx(lp);
+ }
+
+ static void adf7242_stop(struct ieee802154_hw *hw)
+@@ -694,6 +712,7 @@ static void adf7242_stop(struct ieee802154_hw *hw)
+ struct adf7242_local *lp = hw->priv;
+
+ disable_irq(lp->spi->irq);
++ cancel_delayed_work_sync(&lp->work);
+ adf7242_cmd(lp, CMD_RC_IDLE);
+ clear_bit(FLAG_START, &lp->flags);
+ adf7242_clear_irqstat(lp);
+@@ -719,7 +738,10 @@ static int adf7242_channel(struct ieee802154_hw *hw, u8 page, u8 channel)
+ adf7242_write_reg(lp, REG_CH_FREQ1, freq >> 8);
+ adf7242_write_reg(lp, REG_CH_FREQ2, freq >> 16);
+
+- return adf7242_cmd(lp, CMD_RC_RX);
++ if (test_bit(FLAG_START, &lp->flags))
++ return adf7242_cmd_rx(lp);
++ else
++ return adf7242_cmd(lp, CMD_RC_PHY_RDY);
+ }
+
+ static int adf7242_set_hw_addr_filt(struct ieee802154_hw *hw,
+@@ -814,6 +836,7 @@ static int adf7242_xmit(struct ieee802154_hw *hw, struct sk_buff *skb)
+ /* ensure existing instances of the IRQ handler have completed */
+ disable_irq(lp->spi->irq);
+ set_bit(FLAG_XMIT, &lp->flags);
++ cancel_delayed_work_sync(&lp->work);
+ reinit_completion(&lp->tx_complete);
+ adf7242_cmd(lp, CMD_RC_PHY_RDY);
+ adf7242_clear_irqstat(lp);
+@@ -952,6 +975,7 @@ static irqreturn_t adf7242_isr(int irq, void *data)
+ unsigned int xmit;
+ u8 irq1;
+
++ mod_delayed_work(lp->wqueue, &lp->work, msecs_to_jiffies(400));
+ adf7242_read_reg(lp, REG_IRQ1_SRC1, &irq1);
+
+ if (!(irq1 & (IRQ_RX_PKT_RCVD | IRQ_CSMA_CA)))
+@@ -1241,6 +1265,9 @@ static int adf7242_probe(struct spi_device *spi)
+ spi_message_add_tail(&lp->stat_xfer, &lp->stat_msg);
+
+ spi_set_drvdata(spi, lp);
++ INIT_DELAYED_WORK(&lp->work, adf7242_rx_cal_work);
++ lp->wqueue = alloc_ordered_workqueue(dev_name(&spi->dev),
++ WQ_MEM_RECLAIM);
+
+ ret = adf7242_hw_init(lp);
+ if (ret)
+@@ -1284,6 +1311,9 @@ static int adf7242_remove(struct spi_device *spi)
+ if (!IS_ERR_OR_NULL(lp->debugfs_root))
+ debugfs_remove_recursive(lp->debugfs_root);
+
++ cancel_delayed_work_sync(&lp->work);
++ destroy_workqueue(lp->wqueue);
++
+ ieee802154_unregister_hw(lp->hw);
+ mutex_destroy(&lp->bmux);
+ ieee802154_free_hw(lp->hw);
+diff --git a/drivers/net/ieee802154/at86rf230.c b/drivers/net/ieee802154/at86rf230.c
+index 77abedf0b524..3d9e91579866 100644
+--- a/drivers/net/ieee802154/at86rf230.c
++++ b/drivers/net/ieee802154/at86rf230.c
+@@ -940,7 +940,7 @@ at86rf230_xmit(struct ieee802154_hw *hw, struct sk_buff *skb)
+ static int
+ at86rf230_ed(struct ieee802154_hw *hw, u8 *level)
+ {
+- BUG_ON(!level);
++ WARN_ON(!level);
+ *level = 0xbe;
+ return 0;
+ }
+@@ -1121,8 +1121,7 @@ at86rf230_set_hw_addr_filt(struct ieee802154_hw *hw,
+ if (changed & IEEE802154_AFILT_SADDR_CHANGED) {
+ u16 addr = le16_to_cpu(filt->short_addr);
+
+- dev_vdbg(&lp->spi->dev,
+- "at86rf230_set_hw_addr_filt called for saddr\n");
++ dev_vdbg(&lp->spi->dev, "%s called for saddr\n", __func__);
+ __at86rf230_write(lp, RG_SHORT_ADDR_0, addr);
+ __at86rf230_write(lp, RG_SHORT_ADDR_1, addr >> 8);
+ }
+@@ -1130,8 +1129,7 @@ at86rf230_set_hw_addr_filt(struct ieee802154_hw *hw,
+ if (changed & IEEE802154_AFILT_PANID_CHANGED) {
+ u16 pan = le16_to_cpu(filt->pan_id);
+
+- dev_vdbg(&lp->spi->dev,
+- "at86rf230_set_hw_addr_filt called for pan id\n");
++ dev_vdbg(&lp->spi->dev, "%s called for pan id\n", __func__);
+ __at86rf230_write(lp, RG_PAN_ID_0, pan);
+ __at86rf230_write(lp, RG_PAN_ID_1, pan >> 8);
+ }
+@@ -1140,15 +1138,13 @@ at86rf230_set_hw_addr_filt(struct ieee802154_hw *hw,
+ u8 i, addr[8];
+
+ memcpy(addr, &filt->ieee_addr, 8);
+- dev_vdbg(&lp->spi->dev,
+- "at86rf230_set_hw_addr_filt called for IEEE addr\n");
++ dev_vdbg(&lp->spi->dev, "%s called for IEEE addr\n", __func__);
+ for (i = 0; i < 8; i++)
+ __at86rf230_write(lp, RG_IEEE_ADDR_0 + i, addr[i]);
+ }
+
+ if (changed & IEEE802154_AFILT_PANC_CHANGED) {
+- dev_vdbg(&lp->spi->dev,
+- "at86rf230_set_hw_addr_filt called for panc change\n");
++ dev_vdbg(&lp->spi->dev, "%s called for panc change\n", __func__);
+ if (filt->pan_coord)
+ at86rf230_write_subreg(lp, SR_AACK_I_AM_COORD, 1);
+ else
+@@ -1252,7 +1248,6 @@ at86rf230_set_cca_mode(struct ieee802154_hw *hw,
+ return at86rf230_write_subreg(lp, SR_CCA_MODE, val);
+ }
+
+-
+ static int
+ at86rf230_set_cca_ed_level(struct ieee802154_hw *hw, s32 mbm)
+ {
+diff --git a/drivers/net/ieee802154/fakelb.c b/drivers/net/ieee802154/fakelb.c
+index 0d673f7682ee..176395e4b7bb 100644
+--- a/drivers/net/ieee802154/fakelb.c
++++ b/drivers/net/ieee802154/fakelb.c
+@@ -49,7 +49,7 @@ struct fakelb_phy {
+
+ static int fakelb_hw_ed(struct ieee802154_hw *hw, u8 *level)
+ {
+- BUG_ON(!level);
++ WARN_ON(!level);
+ *level = 0xbe;
+
+ return 0;
+diff --git a/drivers/net/ieee802154/mcr20a.c b/drivers/net/ieee802154/mcr20a.c
+index de0d7f28a181..e428277781ac 100644
+--- a/drivers/net/ieee802154/mcr20a.c
++++ b/drivers/net/ieee802154/mcr20a.c
+@@ -15,10 +15,11 @@
+ */
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/spi/spi.h>
+ #include <linux/workqueue.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/skbuff.h>
+ #include <linux/of_gpio.h>
+ #include <linux/regmap.h>
+diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c
+index 6641fd5355e0..6511b1309940 100644
+--- a/drivers/net/ipvlan/ipvlan_main.c
++++ b/drivers/net/ipvlan/ipvlan_main.c
+@@ -75,10 +75,23 @@ static int ipvlan_set_port_mode(struct ipvl_port *port, u16 nval)
+ {
+ struct ipvl_dev *ipvlan;
+ struct net_device *mdev = port->dev;
+- int err = 0;
++ unsigned int flags;
++ int err;
+
+ ASSERT_RTNL();
+ if (port->mode != nval) {
++ list_for_each_entry(ipvlan, &port->ipvlans, pnode) {
++ flags = ipvlan->dev->flags;
++ if (nval == IPVLAN_MODE_L3 || nval == IPVLAN_MODE_L3S) {
++ err = dev_change_flags(ipvlan->dev,
++ flags | IFF_NOARP);
++ } else {
++ err = dev_change_flags(ipvlan->dev,
++ flags & ~IFF_NOARP);
++ }
++ if (unlikely(err))
++ goto fail;
++ }
+ if (nval == IPVLAN_MODE_L3S) {
+ /* New mode is L3S */
+ err = ipvlan_register_nf_hook(read_pnet(&port->pnet));
+@@ -86,21 +99,28 @@ static int ipvlan_set_port_mode(struct ipvl_port *port, u16 nval)
+ mdev->l3mdev_ops = &ipvl_l3mdev_ops;
+ mdev->priv_flags |= IFF_L3MDEV_MASTER;
+ } else
+- return err;
++ goto fail;
+ } else if (port->mode == IPVLAN_MODE_L3S) {
+ /* Old mode was L3S */
+ mdev->priv_flags &= ~IFF_L3MDEV_MASTER;
+ ipvlan_unregister_nf_hook(read_pnet(&port->pnet));
+ mdev->l3mdev_ops = NULL;
+ }
+- list_for_each_entry(ipvlan, &port->ipvlans, pnode) {
+- if (nval == IPVLAN_MODE_L3 || nval == IPVLAN_MODE_L3S)
+- ipvlan->dev->flags |= IFF_NOARP;
+- else
+- ipvlan->dev->flags &= ~IFF_NOARP;
+- }
+ port->mode = nval;
+ }
++ return 0;
++
++fail:
++ /* Undo the flags changes that have been done so far. */
++ list_for_each_entry_continue_reverse(ipvlan, &port->ipvlans, pnode) {
++ flags = ipvlan->dev->flags;
++ if (port->mode == IPVLAN_MODE_L3 ||
++ port->mode == IPVLAN_MODE_L3S)
++ dev_change_flags(ipvlan->dev, flags | IFF_NOARP);
++ else
++ dev_change_flags(ipvlan->dev, flags & ~IFF_NOARP);
++ }
++
+ return err;
+ }
+
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 25e2a099b71c..adb2ec74ffc0 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -130,8 +130,9 @@
+ #define MII_88E1318S_PHY_WOL_CTRL_CLEAR_WOL_STATUS BIT(12)
+ #define MII_88E1318S_PHY_WOL_CTRL_MAGIC_PACKET_MATCH_ENABLE BIT(14)
+
+-#define MII_88E1121_PHY_LED_CTRL 16
++#define MII_PHY_LED_CTRL 16
+ #define MII_88E1121_PHY_LED_DEF 0x0030
++#define MII_88E1510_PHY_LED_DEF 0x1177
+
+ #define MII_M1011_PHY_STATUS 0x11
+ #define MII_M1011_PHY_STATUS_1000 0x8000
+@@ -632,8 +633,40 @@ error:
+ return err;
+ }
+
++static void marvell_config_led(struct phy_device *phydev)
++{
++ u16 def_config;
++ int err;
++
++ switch (MARVELL_PHY_FAMILY_ID(phydev->phy_id)) {
++ /* Default PHY LED config: LED[0] .. Link, LED[1] .. Activity */
++ case MARVELL_PHY_FAMILY_ID(MARVELL_PHY_ID_88E1121R):
++ case MARVELL_PHY_FAMILY_ID(MARVELL_PHY_ID_88E1318S):
++ def_config = MII_88E1121_PHY_LED_DEF;
++ break;
++ /* Default PHY LED config:
++ * LED[0] .. 1000Mbps Link
++ * LED[1] .. 100Mbps Link
++ * LED[2] .. Blink, Activity
++ */
++ case MARVELL_PHY_FAMILY_ID(MARVELL_PHY_ID_88E1510):
++ def_config = MII_88E1510_PHY_LED_DEF;
++ break;
++ default:
++ return;
++ }
++
++ err = phy_write_paged(phydev, MII_MARVELL_LED_PAGE, MII_PHY_LED_CTRL,
++ def_config);
++ if (err < 0)
++ pr_warn("Fail to config marvell phy LED.\n");
++}
++
+ static int marvell_config_init(struct phy_device *phydev)
+ {
++ /* Set default LED */
++ marvell_config_led(phydev);
++
+ /* Set registers from marvell,reg-init DT property */
+ return marvell_of_reg_init(phydev);
+ }
+@@ -813,21 +846,6 @@ static int m88e1111_config_init(struct phy_device *phydev)
+ return genphy_soft_reset(phydev);
+ }
+
+-static int m88e1121_config_init(struct phy_device *phydev)
+-{
+- int err;
+-
+- /* Default PHY LED config: LED[0] .. Link, LED[1] .. Activity */
+- err = phy_write_paged(phydev, MII_MARVELL_LED_PAGE,
+- MII_88E1121_PHY_LED_CTRL,
+- MII_88E1121_PHY_LED_DEF);
+- if (err < 0)
+- return err;
+-
+- /* Set marvell,reg-init configuration from device tree */
+- return marvell_config_init(phydev);
+-}
+-
+ static int m88e1318_config_init(struct phy_device *phydev)
+ {
+ if (phy_interrupt_is_valid(phydev)) {
+@@ -841,7 +859,7 @@ static int m88e1318_config_init(struct phy_device *phydev)
+ return err;
+ }
+
+- return m88e1121_config_init(phydev);
++ return marvell_config_init(phydev);
+ }
+
+ static int m88e1510_config_init(struct phy_device *phydev)
+@@ -2090,7 +2108,7 @@ static struct phy_driver marvell_drivers[] = {
+ .features = PHY_GBIT_FEATURES,
+ .flags = PHY_HAS_INTERRUPT,
+ .probe = &m88e1121_probe,
+- .config_init = &m88e1121_config_init,
++ .config_init = &marvell_config_init,
+ .config_aneg = &m88e1121_config_aneg,
+ .read_status = &marvell_read_status,
+ .ack_interrupt = &marvell_ack_interrupt,
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index d437f4f5ed52..740655261e5b 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -349,7 +349,6 @@ static int sfp_register_bus(struct sfp_bus *bus)
+ }
+ if (bus->started)
+ bus->socket_ops->start(bus->sfp);
+- bus->netdev->sfp_bus = bus;
+ bus->registered = true;
+ return 0;
+ }
+@@ -364,7 +363,6 @@ static void sfp_unregister_bus(struct sfp_bus *bus)
+ if (bus->phydev && ops && ops->disconnect_phy)
+ ops->disconnect_phy(bus->upstream);
+ }
+- bus->netdev->sfp_bus = NULL;
+ bus->registered = false;
+ }
+
+@@ -436,6 +434,14 @@ void sfp_upstream_stop(struct sfp_bus *bus)
+ }
+ EXPORT_SYMBOL_GPL(sfp_upstream_stop);
+
++static void sfp_upstream_clear(struct sfp_bus *bus)
++{
++ bus->upstream_ops = NULL;
++ bus->upstream = NULL;
++ bus->netdev->sfp_bus = NULL;
++ bus->netdev = NULL;
++}
++
+ /**
+ * sfp_register_upstream() - Register the neighbouring device
+ * @fwnode: firmware node for the SFP bus
+@@ -461,9 +467,13 @@ struct sfp_bus *sfp_register_upstream(struct fwnode_handle *fwnode,
+ bus->upstream_ops = ops;
+ bus->upstream = upstream;
+ bus->netdev = ndev;
++ ndev->sfp_bus = bus;
+
+- if (bus->sfp)
++ if (bus->sfp) {
+ ret = sfp_register_bus(bus);
++ if (ret)
++ sfp_upstream_clear(bus);
++ }
+ rtnl_unlock();
+ }
+
+@@ -488,8 +498,7 @@ void sfp_unregister_upstream(struct sfp_bus *bus)
+ rtnl_lock();
+ if (bus->sfp)
+ sfp_unregister_bus(bus);
+- bus->upstream = NULL;
+- bus->netdev = NULL;
++ sfp_upstream_clear(bus);
+ rtnl_unlock();
+
+ sfp_bus_put(bus);
+@@ -561,6 +570,13 @@ void sfp_module_remove(struct sfp_bus *bus)
+ }
+ EXPORT_SYMBOL_GPL(sfp_module_remove);
+
++static void sfp_socket_clear(struct sfp_bus *bus)
++{
++ bus->sfp_dev = NULL;
++ bus->sfp = NULL;
++ bus->socket_ops = NULL;
++}
++
+ struct sfp_bus *sfp_register_socket(struct device *dev, struct sfp *sfp,
+ const struct sfp_socket_ops *ops)
+ {
+@@ -573,8 +589,11 @@ struct sfp_bus *sfp_register_socket(struct device *dev, struct sfp *sfp,
+ bus->sfp = sfp;
+ bus->socket_ops = ops;
+
+- if (bus->netdev)
++ if (bus->netdev) {
+ ret = sfp_register_bus(bus);
++ if (ret)
++ sfp_socket_clear(bus);
++ }
+ rtnl_unlock();
+ }
+
+@@ -592,9 +611,7 @@ void sfp_unregister_socket(struct sfp_bus *bus)
+ rtnl_lock();
+ if (bus->netdev)
+ sfp_unregister_bus(bus);
+- bus->sfp_dev = NULL;
+- bus->sfp = NULL;
+- bus->socket_ops = NULL;
++ sfp_socket_clear(bus);
+ rtnl_unlock();
+
+ sfp_bus_put(bus);
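
Both sfp_upstream_clear() and sfp_socket_clear() above exist so that the failure leg of registration and the normal unregister leg share a single teardown routine, leaving no path that can forget to drop a pointer. The shape of that idiom, reduced to a self-contained userspace sketch with illustrative names:

    #include <stddef.h>

    struct bus {
            void *upstream;
            void *netdev;
            int registered;
    };

    static void bus_clear(struct bus *b)
    {
            b->upstream = NULL;
            b->netdev = NULL;
    }

    static int bus_register(struct bus *b)
    {
            return b->netdev ? 0 : -1;      /* pretend this can fail */
    }

    int bus_attach(struct bus *b, void *up, void *ndev)
    {
            int ret;

            b->upstream = up;
            b->netdev = ndev;
            ret = bus_register(b);
            if (ret)
                    bus_clear(b);   /* failure path uses the same helper */
            else
                    b->registered = 1;
            return ret;
    }

    void bus_detach(struct bus *b)
    {
            b->registered = 0;
            bus_clear(b);           /* as does the normal teardown path */
    }
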
+diff --git a/drivers/net/usb/rtl8150.c b/drivers/net/usb/rtl8150.c
+index 5f565bd574da..48ba80a8ca5c 100644
+--- a/drivers/net/usb/rtl8150.c
++++ b/drivers/net/usb/rtl8150.c
+@@ -681,7 +681,7 @@ static void rtl8150_set_multicast(struct net_device *netdev)
+ (netdev->flags & IFF_ALLMULTI)) {
+ rx_creg &= 0xfffe;
+ rx_creg |= 0x0002;
+- dev_info(&netdev->dev, "%s: allmulti set\n", netdev->name);
++ dev_dbg(&netdev->dev, "%s: allmulti set\n", netdev->name);
+ } else {
+ /* ~RX_MULTICAST, ~RX_PROMISCUOUS */
+ rx_creg &= 0x00fc;
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index 7a6a1fe79309..05553d252446 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -82,6 +82,9 @@ static bool turbo_mode = true;
+ module_param(turbo_mode, bool, 0644);
+ MODULE_PARM_DESC(turbo_mode, "Enable multiple frames per Rx transaction");
+
++static int smsc75xx_link_ok_nopm(struct usbnet *dev);
++static int smsc75xx_phy_gig_workaround(struct usbnet *dev);
++
+ static int __must_check __smsc75xx_read_reg(struct usbnet *dev, u32 index,
+ u32 *data, int in_pm)
+ {
+@@ -852,6 +855,9 @@ static int smsc75xx_phy_initialize(struct usbnet *dev)
+ return -EIO;
+ }
+
++ /* phy workaround for gig link */
++ smsc75xx_phy_gig_workaround(dev);
++
+ smsc75xx_mdio_write(dev->net, dev->mii.phy_id, MII_ADVERTISE,
+ ADVERTISE_ALL | ADVERTISE_CSMA | ADVERTISE_PAUSE_CAP |
+ ADVERTISE_PAUSE_ASYM);
+@@ -987,6 +993,62 @@ static int smsc75xx_wait_ready(struct usbnet *dev, int in_pm)
+ return -EIO;
+ }
+
++static int smsc75xx_phy_gig_workaround(struct usbnet *dev)
++{
++ struct mii_if_info *mii = &dev->mii;
++ int ret = 0, timeout = 0;
++ u32 buf, link_up = 0;
++
++ /* Set the phy in Gig loopback */
++ smsc75xx_mdio_write(dev->net, mii->phy_id, MII_BMCR, 0x4040);
++
++ /* Wait for the link up */
++ do {
++ link_up = smsc75xx_link_ok_nopm(dev);
++ usleep_range(10000, 20000);
++ timeout++;
++ } while ((!link_up) && (timeout < 1000));
++
++ if (timeout >= 1000) {
++ netdev_warn(dev->net, "Timeout waiting for PHY link up\n");
++ return -EIO;
++ }
++
++ /* phy reset */
++ ret = smsc75xx_read_reg(dev, PMT_CTL, &buf);
++ if (ret < 0) {
++ netdev_warn(dev->net, "Failed to read PMT_CTL: %d\n", ret);
++ return ret;
++ }
++
++ buf |= PMT_CTL_PHY_RST;
++
++ ret = smsc75xx_write_reg(dev, PMT_CTL, buf);
++ if (ret < 0) {
++ netdev_warn(dev->net, "Failed to write PMT_CTL: %d\n", ret);
++ return ret;
++ }
++
++ timeout = 0;
++ do {
++ usleep_range(10000, 20000);
++ ret = smsc75xx_read_reg(dev, PMT_CTL, &buf);
++ if (ret < 0) {
++ netdev_warn(dev->net, "Failed to read PMT_CTL: %d\n",
++ ret);
++ return ret;
++ }
++ timeout++;
++ } while ((buf & PMT_CTL_PHY_RST) && (timeout < 100));
++
++ if (timeout >= 100) {
++ netdev_warn(dev->net, "timeout waiting for PHY Reset\n");
++ return -EIO;
++ }
++
++ return 0;
++}
++
+ static int smsc75xx_reset(struct usbnet *dev)
+ {
+ struct smsc75xx_priv *pdata = (struct smsc75xx_priv *)(dev->data[0]);
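
smsc75xx_phy_gig_workaround() above leans twice on the bounded-poll idiom: sleep, re-read a status, and give up with -EIO after a fixed iteration count. A generic userspace rendering of that loop (a sketch only, with ready() standing in for the register read):

    #include <errno.h>
    #include <unistd.h>

    static int poll_ready(int (*ready)(void *), void *ctx,
                          unsigned int max_tries, useconds_t delay_us)
    {
            unsigned int tries;

            for (tries = 0; tries < max_tries; tries++) {
                    if (ready(ctx))
                            return 0;
                    usleep(delay_us);       /* stands in for usleep_range() */
            }
            return -EIO;                    /* condition never came true */
    }
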
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index bf05a3689558..9fb89f3b8c59 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -6018,8 +6018,19 @@ static void ath10k_sta_rc_update_wk(struct work_struct *wk)
+ ath10k_mac_max_vht_nss(vht_mcs_mask)));
+
+ if (changed & IEEE80211_RC_BW_CHANGED) {
+- ath10k_dbg(ar, ATH10K_DBG_MAC, "mac update sta %pM peer bw %d\n",
+- sta->addr, bw);
++ enum wmi_phy_mode mode;
++
++ mode = chan_to_phymode(&def);
++ ath10k_dbg(ar, ATH10K_DBG_MAC, "mac update sta %pM peer bw %d phymode %d\n",
++ sta->addr, bw, mode);
++
++ err = ath10k_wmi_peer_set_param(ar, arvif->vdev_id, sta->addr,
++ WMI_PEER_PHYMODE, mode);
++ if (err) {
++ ath10k_warn(ar, "failed to update STA %pM peer phymode %d: %d\n",
++ sta->addr, mode, err);
++ goto exit;
++ }
+
+ err = ath10k_wmi_peer_set_param(ar, arvif->vdev_id, sta->addr,
+ WMI_PEER_CHAN_WIDTH, bw);
+@@ -6060,6 +6071,7 @@ static void ath10k_sta_rc_update_wk(struct work_struct *wk)
+ sta->addr);
+ }
+
++exit:
+ mutex_unlock(&ar->conf_mutex);
+ }
+
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 7fde22ea2ffa..d0a380d81d74 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -6098,6 +6098,7 @@ enum wmi_peer_param {
+ WMI_PEER_NSS = 0x5,
+ WMI_PEER_USE_4ADDR = 0x6,
+ WMI_PEER_DEBUG = 0xa,
++ WMI_PEER_PHYMODE = 0xd,
+ WMI_PEER_DUMMY_VAR = 0xff, /* dummy parameter for STA PS workaround */
+ };
+
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 1037df7297bb..32f2f8b63970 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -4294,6 +4294,13 @@ void brcmf_sdio_remove(struct brcmf_sdio *bus)
+ brcmf_dbg(TRACE, "Enter\n");
+
+ if (bus) {
++ /* Stop watchdog task */
++ if (bus->watchdog_tsk) {
++ send_sig(SIGTERM, bus->watchdog_tsk, 1);
++ kthread_stop(bus->watchdog_tsk);
++ bus->watchdog_tsk = NULL;
++ }
++
+ /* De-register interrupt handler */
+ brcmf_sdiod_intr_unregister(bus->sdiodev);
+
+diff --git a/drivers/nfc/pn533/usb.c b/drivers/nfc/pn533/usb.c
+index d5553c47014f..5d823e965883 100644
+--- a/drivers/nfc/pn533/usb.c
++++ b/drivers/nfc/pn533/usb.c
+@@ -74,7 +74,7 @@ static void pn533_recv_response(struct urb *urb)
+ struct sk_buff *skb = NULL;
+
+ if (!urb->status) {
+- skb = alloc_skb(urb->actual_length, GFP_KERNEL);
++ skb = alloc_skb(urb->actual_length, GFP_ATOMIC);
+ if (!skb) {
+ nfc_err(&phy->udev->dev, "failed to alloc memory\n");
+ } else {
+@@ -186,7 +186,7 @@ static int pn533_usb_send_frame(struct pn533 *dev,
+
+ if (dev->protocol_type == PN533_PROTO_REQ_RESP) {
+ /* request for response for sent packet directly */
+- rc = pn533_submit_urb_for_response(phy, GFP_ATOMIC);
++ rc = pn533_submit_urb_for_response(phy, GFP_KERNEL);
+ if (rc)
+ goto error;
+ } else if (dev->protocol_type == PN533_PROTO_REQ_ACK_RESP) {
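
The two pn533 hunks swap allocation flags to match calling context: pn533_recv_response() runs in URB-completion (atomic) context and must not sleep, so it needs GFP_ATOMIC, while pn533_usb_send_frame() runs in process context and can use the more reliable, sleeping GFP_KERNEL. A toy userspace model of that rule, offered as illustration only:

    #include <stdlib.h>

    enum ctx { PROCESS_CTX, ATOMIC_CTX };

    /* a sleeping allocation is only legal in process context */
    static void *alloc_buf(size_t len, int may_sleep, enum ctx ctx)
    {
            if (may_sleep && ctx == ATOMIC_CTX)
                    return NULL;    /* the kernel would warn here:
                                     * "sleeping function called from
                                     * invalid context" */
            return malloc(len);
    }
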
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index b9ca782fe82d..620f837c1234 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -100,6 +100,22 @@ static struct class *nvme_subsys_class;
+ static void nvme_ns_remove(struct nvme_ns *ns);
+ static int nvme_revalidate_disk(struct gendisk *disk);
+ static void nvme_put_subsystem(struct nvme_subsystem *subsys);
++static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
++ unsigned nsid);
++
++static void nvme_set_queue_dying(struct nvme_ns *ns)
++{
++ /*
++ * Revalidating a dead namespace sets capacity to 0. This will end
++ * buffered writers dirtying pages that can't be synced.
++ */
++ if (!ns->disk || test_and_set_bit(NVME_NS_DEAD, &ns->flags))
++ return;
++ revalidate_disk(ns->disk);
++ blk_set_queue_dying(ns->queue);
++ /* Forcibly unquiesce queues to avoid blocking dispatch */
++ blk_mq_unquiesce_queue(ns->queue);
++}
+
+ int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
+ {
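
nvme_set_queue_dying() above is made idempotent by test_and_set_bit(): however many of the later call sites race to kill the same namespace, the revalidate/dying/unquiesce sequence runs at most once. A compact userspace model of that guard, using a GCC builtin atomic in place of the kernel bitop:

    static int ns_dead;

    static void set_queue_dying(void)
    {
            /* first caller sees 0 and proceeds; later callers see 1 */
            if (__atomic_exchange_n(&ns_dead, 1, __ATOMIC_SEQ_CST))
                    return;
            /* ... revalidate disk, mark queue dying, unquiesce ... */
    }
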
+@@ -1130,19 +1146,15 @@ static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+
+ static void nvme_update_formats(struct nvme_ctrl *ctrl)
+ {
+- struct nvme_ns *ns, *next;
+- LIST_HEAD(rm_list);
++ struct nvme_ns *ns;
+
+- down_write(&ctrl->namespaces_rwsem);
+- list_for_each_entry(ns, &ctrl->namespaces, list) {
+- if (ns->disk && nvme_revalidate_disk(ns->disk)) {
+- list_move_tail(&ns->list, &rm_list);
+- }
+- }
+- up_write(&ctrl->namespaces_rwsem);
++ down_read(&ctrl->namespaces_rwsem);
++ list_for_each_entry(ns, &ctrl->namespaces, list)
++ if (ns->disk && nvme_revalidate_disk(ns->disk))
++ nvme_set_queue_dying(ns);
++ up_read(&ctrl->namespaces_rwsem);
+
+- list_for_each_entry_safe(ns, next, &rm_list, list)
+- nvme_ns_remove(ns);
++ nvme_remove_invalid_namespaces(ctrl, NVME_NSID_ALL);
+ }
+
+ static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
+@@ -1197,7 +1209,7 @@ static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ effects = nvme_passthru_start(ctrl, ns, cmd.opcode);
+ status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
+ (void __user *)(uintptr_t)cmd.addr, cmd.data_len,
+- (void __user *)(uintptr_t)cmd.metadata, cmd.metadata,
++ (void __user *)(uintptr_t)cmd.metadata, cmd.metadata_len,
+ 0, &cmd.result, timeout);
+ nvme_passthru_end(ctrl, effects);
+
+@@ -3110,7 +3122,7 @@ static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
+
+ down_write(&ctrl->namespaces_rwsem);
+ list_for_each_entry_safe(ns, next, &ctrl->namespaces, list) {
+- if (ns->head->ns_id > nsid)
++ if (ns->head->ns_id > nsid || test_bit(NVME_NS_DEAD, &ns->flags))
+ list_move_tail(&ns->list, &rm_list);
+ }
+ up_write(&ctrl->namespaces_rwsem);
+@@ -3488,19 +3500,9 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
+ if (ctrl->admin_q)
+ blk_mq_unquiesce_queue(ctrl->admin_q);
+
+- list_for_each_entry(ns, &ctrl->namespaces, list) {
+- /*
+- * Revalidating a dead namespace sets capacity to 0. This will
+- * end buffered writers dirtying pages that can't be synced.
+- */
+- if (!ns->disk || test_and_set_bit(NVME_NS_DEAD, &ns->flags))
+- continue;
+- revalidate_disk(ns->disk);
+- blk_set_queue_dying(ns->queue);
++ list_for_each_entry(ns, &ctrl->namespaces, list)
++ nvme_set_queue_dying(ns);
+
+- /* Forcibly unquiesce queues to avoid blocking dispatch */
+- blk_mq_unquiesce_queue(ns->queue);
+- }
+ up_read(&ctrl->namespaces_rwsem);
+ }
+ EXPORT_SYMBOL_GPL(nvme_kill_queues);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 0483c33a3567..de9c3762a994 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2291,6 +2291,7 @@ static void nvme_remove_dead_ctrl(struct nvme_dev *dev, int status)
+
+ nvme_get_ctrl(&dev->ctrl);
+ nvme_dev_disable(dev, false);
++ nvme_kill_queues(&dev->ctrl);
+ if (!queue_work(nvme_wq, &dev->remove_work))
+ nvme_put_ctrl(&dev->ctrl);
+ }
+@@ -2407,7 +2408,6 @@ static void nvme_remove_dead_ctrl_work(struct work_struct *work)
+ struct nvme_dev *dev = container_of(work, struct nvme_dev, remove_work);
+ struct pci_dev *pdev = to_pci_dev(dev->dev);
+
+- nvme_kill_queues(&dev->ctrl);
+ if (pci_get_drvdata(pdev))
+ device_release_driver(&pdev->dev);
+ nvme_put_ctrl(&dev->ctrl);
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 2181299ce8f5..d1e8aa04d313 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -734,7 +734,6 @@ out:
+ static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ bool remove)
+ {
+- nvme_rdma_stop_queue(&ctrl->queues[0]);
+ if (remove) {
+ blk_cleanup_queue(ctrl->ctrl.admin_q);
+ nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
+@@ -819,7 +818,6 @@ out_free_queue:
+ static void nvme_rdma_destroy_io_queues(struct nvme_rdma_ctrl *ctrl,
+ bool remove)
+ {
+- nvme_rdma_stop_io_queues(ctrl);
+ if (remove) {
+ blk_cleanup_queue(ctrl->ctrl.connect_q);
+ nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.tagset);
+@@ -888,9 +886,9 @@ static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl)
+ list_del(&ctrl->list);
+ mutex_unlock(&nvme_rdma_ctrl_mutex);
+
+- kfree(ctrl->queues);
+ nvmf_free_options(nctrl->opts);
+ free_ctrl:
++ kfree(ctrl->queues);
+ kfree(ctrl);
+ }
+
+@@ -949,6 +947,7 @@ static void nvme_rdma_reconnect_ctrl_work(struct work_struct *work)
+ return;
+
+ destroy_admin:
++ nvme_rdma_stop_queue(&ctrl->queues[0]);
+ nvme_rdma_destroy_admin_queue(ctrl, false);
+ requeue:
+ dev_info(ctrl->ctrl.device, "Failed reconnect attempt %d\n",
+@@ -965,12 +964,14 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
+
+ if (ctrl->ctrl.queue_count > 1) {
+ nvme_stop_queues(&ctrl->ctrl);
++ nvme_rdma_stop_io_queues(ctrl);
+ blk_mq_tagset_busy_iter(&ctrl->tag_set,
+ nvme_cancel_request, &ctrl->ctrl);
+ nvme_rdma_destroy_io_queues(ctrl, false);
+ }
+
+ blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
++ nvme_rdma_stop_queue(&ctrl->queues[0]);
+ blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
+ nvme_cancel_request, &ctrl->ctrl);
+ nvme_rdma_destroy_admin_queue(ctrl, false);
+@@ -1720,6 +1721,7 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
+ {
+ if (ctrl->ctrl.queue_count > 1) {
+ nvme_stop_queues(&ctrl->ctrl);
++ nvme_rdma_stop_io_queues(ctrl);
+ blk_mq_tagset_busy_iter(&ctrl->tag_set,
+ nvme_cancel_request, &ctrl->ctrl);
+ nvme_rdma_destroy_io_queues(ctrl, shutdown);
+@@ -1731,6 +1733,7 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
+ nvme_disable_ctrl(&ctrl->ctrl, ctrl->ctrl.cap);
+
+ blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
++ nvme_rdma_stop_queue(&ctrl->queues[0]);
+ blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
+ nvme_cancel_request, &ctrl->ctrl);
+ blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+@@ -1916,11 +1919,6 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
+ goto out_free_ctrl;
+ }
+
+- ret = nvme_init_ctrl(&ctrl->ctrl, dev, &nvme_rdma_ctrl_ops,
+- 0 /* no quirks, we're perfect! */);
+- if (ret)
+- goto out_free_ctrl;
+-
+ INIT_DELAYED_WORK(&ctrl->reconnect_work,
+ nvme_rdma_reconnect_ctrl_work);
+ INIT_WORK(&ctrl->err_work, nvme_rdma_error_recovery_work);
+@@ -1934,14 +1932,19 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
+ ctrl->queues = kcalloc(ctrl->ctrl.queue_count, sizeof(*ctrl->queues),
+ GFP_KERNEL);
+ if (!ctrl->queues)
+- goto out_uninit_ctrl;
++ goto out_free_ctrl;
++
++ ret = nvme_init_ctrl(&ctrl->ctrl, dev, &nvme_rdma_ctrl_ops,
++ 0 /* no quirks, we're perfect! */);
++ if (ret)
++ goto out_kfree_queues;
+
+ changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING);
+ WARN_ON_ONCE(!changed);
+
+ ret = nvme_rdma_configure_admin_queue(ctrl, true);
+ if (ret)
+- goto out_kfree_queues;
++ goto out_uninit_ctrl;
+
+ /* sanity check icdoff */
+ if (ctrl->ctrl.icdoff) {
+@@ -1996,15 +1999,16 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
+ return &ctrl->ctrl;
+
+ out_remove_admin_queue:
++ nvme_rdma_stop_queue(&ctrl->queues[0]);
+ nvme_rdma_destroy_admin_queue(ctrl, true);
+-out_kfree_queues:
+- kfree(ctrl->queues);
+ out_uninit_ctrl:
+ nvme_uninit_ctrl(&ctrl->ctrl);
+ nvme_put_ctrl(&ctrl->ctrl);
+ if (ret > 0)
+ ret = -EIO;
+ return ERR_PTR(ret);
++out_kfree_queues:
++ kfree(ctrl->queues);
+ out_free_ctrl:
+ kfree(ctrl);
+ return ERR_PTR(ret);
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index e95424f172fd..0547d5f7d3ba 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -624,6 +624,14 @@ static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
+ }
+
+ ctrl->csts = NVME_CSTS_RDY;
++
++ /*
++ * Controllers that are not yet enabled should not really enforce the
++ * keep alive timeout, but we still want to track a timeout and cleanup
++ * in case a host died before it enabled the controller. Hence, simply
++ * reset the keep alive timer when the controller is enabled.
++ */
++ mod_delayed_work(system_wq, &ctrl->ka_work, ctrl->kato * HZ);
+ }
+
+ static void nvmet_clear_ctrl(struct nvmet_ctrl *ctrl)
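
The mod_delayed_work() call added above is the watchdog-kick pattern: re-arm the already-pending keep-alive work rather than queueing a second instance, so enabling the controller simply pushes the expiry out. A userspace model of the same idea, with a deadline variable in place of the delayed work:

    #include <time.h>

    static time_t ka_deadline;

    static void keepalive_kick(unsigned int kato_secs)
    {
            /* like mod_delayed_work(): replace the pending expiry */
            ka_deadline = time(NULL) + kato_secs;
    }

    static int keepalive_expired(void)
    {
            return time(NULL) > ka_deadline;
    }
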
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 1e28597138c8..2d23479b9053 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -924,6 +924,10 @@ struct nvmem_cell *nvmem_cell_get(struct device *dev, const char *cell_id)
+ return cell;
+ }
+
++ /* NULL cell_id only allowed for device tree; invalid otherwise */
++ if (!cell_id)
++ return ERR_PTR(-EINVAL);
++
+ return nvmem_cell_get_from_list(cell_id);
+ }
+ EXPORT_SYMBOL_GPL(nvmem_cell_get);
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 848f549164cd..466e3c8582f0 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -102,7 +102,7 @@ static u32 phandle_cache_mask;
+ * - the phandle lookup overhead reduction provided by the cache
+ * will likely be less
+ */
+-static void of_populate_phandle_cache(void)
++void of_populate_phandle_cache(void)
+ {
+ unsigned long flags;
+ u32 cache_entries;
+@@ -134,8 +134,7 @@ out:
+ raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ }
+
+-#ifndef CONFIG_MODULES
+-static int __init of_free_phandle_cache(void)
++int of_free_phandle_cache(void)
+ {
+ unsigned long flags;
+
+@@ -148,6 +147,7 @@ static int __init of_free_phandle_cache(void)
+
+ return 0;
+ }
++#if !defined(CONFIG_MODULES)
+ late_initcall_sync(of_free_phandle_cache);
+ #endif
+
+diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
+index 891d780c076a..216175d11d3d 100644
+--- a/drivers/of/of_private.h
++++ b/drivers/of/of_private.h
+@@ -79,6 +79,8 @@ int of_resolve_phandles(struct device_node *tree);
+ #if defined(CONFIG_OF_OVERLAY)
+ void of_overlay_mutex_lock(void);
+ void of_overlay_mutex_unlock(void);
++int of_free_phandle_cache(void);
++void of_populate_phandle_cache(void);
+ #else
+ static inline void of_overlay_mutex_lock(void) {};
+ static inline void of_overlay_mutex_unlock(void) {};
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index 7baa53e5b1d7..eda57ef12fd0 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -804,6 +804,8 @@ static int of_overlay_apply(const void *fdt, struct device_node *tree,
+ goto err_free_overlay_changeset;
+ }
+
++ of_populate_phandle_cache();
++
+ ret = __of_changeset_apply_notify(&ovcs->cset);
+ if (ret)
+ pr_err("overlay changeset entry notify error %d\n", ret);
+@@ -1046,8 +1048,17 @@ int of_overlay_remove(int *ovcs_id)
+
+ list_del(&ovcs->ovcs_list);
+
++ /*
++ * Disable phandle cache. Avoids race condition that would arise
++ * from removing cache entry when the associated node is deleted.
++ */
++ of_free_phandle_cache();
++
+ ret_apply = 0;
+ ret = __of_changeset_revert_entries(&ovcs->cset, &ret_apply);
++
++ of_populate_phandle_cache();
++
+ if (ret) {
+ if (ret_apply)
+ devicetree_state_flags |= DTSF_REVERT_FAIL;
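
The overlay fix sidesteps a use-after-free by dropping the entire phandle cache before the changeset that deletes nodes is reverted, then rebuilding it from the now-consistent tree, instead of trying to evict individual entries under race. The pattern in isolation, as a sketch with illustrative names:

    struct cache { void **slots; unsigned int n; };

    void cache_rebuild_around(struct cache *c,
                              void (*mutate_tree)(void *), void *arg,
                              void (*free_cache)(struct cache *),
                              void (*populate_cache)(struct cache *))
    {
            free_cache(c);          /* no stale pointers can be seen ... */
            mutate_tree(arg);       /* ... while nodes are being removed */
            populate_cache(c);      /* rebuild from the consistent tree */
    }
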
+diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
+index 6c409079d514..35a2df4ddf20 100644
+--- a/drivers/pci/dwc/pcie-designware-host.c
++++ b/drivers/pci/dwc/pcie-designware-host.c
+@@ -355,7 +355,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ resource_list_for_each_entry_safe(win, tmp, &bridge->windows) {
+ switch (resource_type(win->res)) {
+ case IORESOURCE_IO:
+- ret = pci_remap_iospace(win->res, pp->io_base);
++ ret = devm_pci_remap_iospace(dev, win->res,
++ pp->io_base);
+ if (ret) {
+ dev_warn(dev, "error %d: failed to map resource %pR\n",
+ ret, win->res);
+diff --git a/drivers/pci/host/pci-aardvark.c b/drivers/pci/host/pci-aardvark.c
+index 9abf549631b4..d0867a311f42 100644
+--- a/drivers/pci/host/pci-aardvark.c
++++ b/drivers/pci/host/pci-aardvark.c
+@@ -848,7 +848,7 @@ static int advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie)
+ 0, 0xF8000000, 0,
+ lower_32_bits(res->start),
+ OB_PCIE_IO);
+- err = pci_remap_iospace(res, iobase);
++ err = devm_pci_remap_iospace(dev, res, iobase);
+ if (err) {
+ dev_warn(dev, "error %d: failed to map resource %pR\n",
+ err, res);
+diff --git a/drivers/pci/host/pci-ftpci100.c b/drivers/pci/host/pci-ftpci100.c
+index 5008fd87956a..0e966219d66d 100644
+--- a/drivers/pci/host/pci-ftpci100.c
++++ b/drivers/pci/host/pci-ftpci100.c
+@@ -353,11 +353,13 @@ static int faraday_pci_setup_cascaded_irq(struct faraday_pci *p)
+ irq = of_irq_get(intc, 0);
+ if (irq <= 0) {
+ dev_err(p->dev, "failed to get parent IRQ\n");
++ of_node_put(intc);
+ return irq ?: -EINVAL;
+ }
+
+ p->irqdomain = irq_domain_add_linear(intc, PCI_NUM_INTX,
+ &faraday_pci_irqdomain_ops, p);
++ of_node_put(intc);
+ if (!p->irqdomain) {
+ dev_err(p->dev, "failed to create Gemini PCI IRQ domain\n");
+ return -EINVAL;
+@@ -499,7 +501,7 @@ static int faraday_pci_probe(struct platform_device *pdev)
+ dev_err(dev, "illegal IO mem size\n");
+ return -EINVAL;
+ }
+- ret = pci_remap_iospace(io, io_base);
++ ret = devm_pci_remap_iospace(dev, io, io_base);
+ if (ret) {
+ dev_warn(dev, "error %d: failed to map resource %pR\n",
+ ret, io);
+diff --git a/drivers/pci/host/pci-v3-semi.c b/drivers/pci/host/pci-v3-semi.c
+index 0a4dea796663..3381bf29b59f 100644
+--- a/drivers/pci/host/pci-v3-semi.c
++++ b/drivers/pci/host/pci-v3-semi.c
+@@ -535,7 +535,7 @@ static int v3_pci_setup_resource(struct v3_pci *v3,
+ v3->io_bus_addr = io->start - win->offset;
+ dev_dbg(dev, "I/O window %pR, bus addr %pap\n",
+ io, &v3->io_bus_addr);
+- ret = pci_remap_iospace(io, io_base);
++ ret = devm_pci_remap_iospace(dev, io, io_base);
+ if (ret) {
+ dev_warn(dev,
+ "error %d: failed to map resource %pR\n",
+diff --git a/drivers/pci/host/pci-versatile.c b/drivers/pci/host/pci-versatile.c
+index 5b3876f5312b..df9408c4873a 100644
+--- a/drivers/pci/host/pci-versatile.c
++++ b/drivers/pci/host/pci-versatile.c
+@@ -81,7 +81,7 @@ static int versatile_pci_parse_request_of_pci_ranges(struct device *dev,
+
+ switch (resource_type(res)) {
+ case IORESOURCE_IO:
+- err = pci_remap_iospace(res, iobase);
++ err = devm_pci_remap_iospace(dev, res, iobase);
+ if (err) {
+ dev_warn(dev, "error %d: failed to map resource %pR\n",
+ err, res);
+diff --git a/drivers/pci/host/pci-xgene.c b/drivers/pci/host/pci-xgene.c
+index 0a0d7ee6d3c9..e256d94cafb3 100644
+--- a/drivers/pci/host/pci-xgene.c
++++ b/drivers/pci/host/pci-xgene.c
+@@ -421,7 +421,7 @@ static int xgene_pcie_map_ranges(struct xgene_pcie_port *port,
+ case IORESOURCE_IO:
+ xgene_pcie_setup_ob_reg(port, res, OMR3BARL, io_base,
+ res->start - window->offset);
+- ret = pci_remap_iospace(res, io_base);
++ ret = devm_pci_remap_iospace(dev, res, io_base);
+ if (ret < 0)
+ return ret;
+ break;
+diff --git a/drivers/pci/host/pcie-mediatek.c b/drivers/pci/host/pcie-mediatek.c
+index a8b20c5012a9..35e9fd028da4 100644
+--- a/drivers/pci/host/pcie-mediatek.c
++++ b/drivers/pci/host/pcie-mediatek.c
+@@ -1063,7 +1063,7 @@ static int mtk_pcie_request_resources(struct mtk_pcie *pcie)
+ if (err < 0)
+ return err;
+
+- pci_remap_iospace(&pcie->pio, pcie->io.start);
++ devm_pci_remap_iospace(dev, &pcie->pio, pcie->io.start);
+
+ return 0;
+ }
+diff --git a/drivers/pci/host/pcie-xilinx-nwl.c b/drivers/pci/host/pcie-xilinx-nwl.c
+index 4839ae578711..62545e510389 100644
+--- a/drivers/pci/host/pcie-xilinx-nwl.c
++++ b/drivers/pci/host/pcie-xilinx-nwl.c
+@@ -557,7 +557,7 @@ static int nwl_pcie_init_irq_domain(struct nwl_pcie *pcie)
+ PCI_NUM_INTX,
+ &legacy_domain_ops,
+ pcie);
+-
++ of_node_put(legacy_intc_node);
+ if (!pcie->legacy_irq_domain) {
+ dev_err(dev, "failed to create IRQ domain\n");
+ return -ENOMEM;
+diff --git a/drivers/pci/host/pcie-xilinx.c b/drivers/pci/host/pcie-xilinx.c
+index 0ad188effc09..fd9c73dd347b 100644
+--- a/drivers/pci/host/pcie-xilinx.c
++++ b/drivers/pci/host/pcie-xilinx.c
+@@ -507,6 +507,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
+ port->leg_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
+ &intx_domain_ops,
+ port);
++ of_node_put(pcie_intc_node);
+ if (!port->leg_domain) {
+ dev_err(dev, "Failed to get a INTx IRQ domain\n");
+ return -ENODEV;
+diff --git a/drivers/pci/hotplug/pci_hotplug_core.c b/drivers/pci/hotplug/pci_hotplug_core.c
+index af92fed46ab7..fd93783a87b0 100644
+--- a/drivers/pci/hotplug/pci_hotplug_core.c
++++ b/drivers/pci/hotplug/pci_hotplug_core.c
+@@ -438,8 +438,17 @@ int __pci_hp_register(struct hotplug_slot *slot, struct pci_bus *bus,
+ list_add(&slot->slot_list, &pci_hotplug_slot_list);
+
+ result = fs_add_slot(pci_slot);
++ if (result)
++ goto err_list_del;
++
+ kobject_uevent(&pci_slot->kobj, KOBJ_ADD);
+ dbg("Added slot %s to the list\n", name);
++ goto out;
++
++err_list_del:
++ list_del(&slot->slot_list);
++ pci_slot->hotplug = NULL;
++ pci_destroy_slot(pci_slot);
+ out:
+ mutex_unlock(&pci_hp_mutex);
+ return result;
+diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
+index 5f892065585e..fca87a1a2b22 100644
+--- a/drivers/pci/hotplug/pciehp.h
++++ b/drivers/pci/hotplug/pciehp.h
+@@ -119,6 +119,7 @@ int pciehp_unconfigure_device(struct slot *p_slot);
+ void pciehp_queue_pushbutton_work(struct work_struct *work);
+ struct controller *pcie_init(struct pcie_device *dev);
+ int pcie_init_notification(struct controller *ctrl);
++void pcie_shutdown_notification(struct controller *ctrl);
+ int pciehp_enable_slot(struct slot *p_slot);
+ int pciehp_disable_slot(struct slot *p_slot);
+ void pcie_reenable_notification(struct controller *ctrl);
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index 44a6a63802d5..2ba59fc94827 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -62,6 +62,12 @@ static int reset_slot(struct hotplug_slot *slot, int probe);
+ */
+ static void release_slot(struct hotplug_slot *hotplug_slot)
+ {
++ struct slot *slot = hotplug_slot->private;
++
++ /* queued work needs hotplug_slot name */
++ cancel_delayed_work(&slot->work);
++ drain_workqueue(slot->wq);
++
+ kfree(hotplug_slot->ops);
+ kfree(hotplug_slot->info);
+ kfree(hotplug_slot);
+@@ -264,6 +270,7 @@ static void pciehp_remove(struct pcie_device *dev)
+ {
+ struct controller *ctrl = get_service_data(dev);
+
++ pcie_shutdown_notification(ctrl);
+ cleanup_slot(ctrl);
+ pciehp_release_ctrl(ctrl);
+ }
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 98ea75aa32c7..6635ae13962f 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -545,8 +545,6 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
+ {
+ struct controller *ctrl = (struct controller *)dev_id;
+ struct pci_dev *pdev = ctrl_dev(ctrl);
+- struct pci_bus *subordinate = pdev->subordinate;
+- struct pci_dev *dev;
+ struct slot *slot = ctrl->slot;
+ u16 status, events;
+ u8 present;
+@@ -594,14 +592,9 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
+ wake_up(&ctrl->queue);
+ }
+
+- if (subordinate) {
+- list_for_each_entry(dev, &subordinate->devices, bus_list) {
+- if (dev->ignore_hotplug) {
+- ctrl_dbg(ctrl, "ignoring hotplug event %#06x (%s requested no hotplug)\n",
+- events, pci_name(dev));
+- return IRQ_HANDLED;
+- }
+- }
++ if (pdev->ignore_hotplug) {
++ ctrl_dbg(ctrl, "ignoring hotplug event %#06x\n", events);
++ return IRQ_HANDLED;
+ }
+
+ /* Check Attention Button Pressed */
+@@ -771,7 +764,7 @@ int pcie_init_notification(struct controller *ctrl)
+ return 0;
+ }
+
+-static void pcie_shutdown_notification(struct controller *ctrl)
++void pcie_shutdown_notification(struct controller *ctrl)
+ {
+ if (ctrl->notification_enabled) {
+ pcie_disable_notification(ctrl);
+@@ -806,7 +799,7 @@ abort:
+ static void pcie_cleanup_slot(struct controller *ctrl)
+ {
+ struct slot *slot = ctrl->slot;
+- cancel_delayed_work(&slot->work);
++
+ destroy_workqueue(slot->wq);
+ kfree(slot);
+ }
+@@ -898,7 +891,6 @@ abort:
+
+ void pciehp_release_ctrl(struct controller *ctrl)
+ {
+- pcie_shutdown_notification(ctrl);
+ pcie_cleanup_slot(ctrl);
+ kfree(ctrl);
+ }
+diff --git a/drivers/pci/of.c b/drivers/pci/of.c
+index a28355c273ae..f8bcfe209464 100644
+--- a/drivers/pci/of.c
++++ b/drivers/pci/of.c
+@@ -617,7 +617,7 @@ int pci_parse_request_of_pci_ranges(struct device *dev,
+
+ switch (resource_type(res)) {
+ case IORESOURCE_IO:
+- err = pci_remap_iospace(res, iobase);
++ err = devm_pci_remap_iospace(dev, res, iobase);
+ if (err) {
+ dev_warn(dev, "error %d: failed to map resource %pR\n",
+ err, res);
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 054974055ea4..41d24b273d6f 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -601,13 +601,11 @@ static bool acpi_pci_need_resume(struct pci_dev *dev)
+ /*
+ * In some cases (eg. Samsung 305V4A) leaving a bridge in suspend over
+ * system-wide suspend/resume confuses the platform firmware, so avoid
+- * doing that, unless the bridge has a driver that should take care of
+- * the PM handling. According to Section 16.1.6 of ACPI 6.2, endpoint
++ * doing that. According to Section 16.1.6 of ACPI 6.2, endpoint
+ * devices are expected to be in D3 before invoking the S3 entry path
+ * from the firmware, so they should not be affected by this issue.
+ */
+- if (pci_is_bridge(dev) && !dev->driver &&
+- acpi_target_system_state() != ACPI_STATE_S0)
++ if (pci_is_bridge(dev) && acpi_target_system_state() != ACPI_STATE_S0)
+ return true;
+
+ if (!adev || !acpi_device_power_manageable(adev))
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index dbfe7c4f3776..04ce05f9c2cb 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1163,6 +1163,33 @@ static void pci_restore_config_space(struct pci_dev *pdev)
+ }
+ }
+
++static void pci_restore_rebar_state(struct pci_dev *pdev)
++{
++ unsigned int pos, nbars, i;
++ u32 ctrl;
++
++ pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_REBAR);
++ if (!pos)
++ return;
++
++ pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
++ nbars = (ctrl & PCI_REBAR_CTRL_NBAR_MASK) >>
++ PCI_REBAR_CTRL_NBAR_SHIFT;
++
++ for (i = 0; i < nbars; i++, pos += 8) {
++ struct resource *res;
++ int bar_idx, size;
++
++ pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
++ bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX;
++ res = pdev->resource + bar_idx;
++ size = order_base_2((resource_size(res) >> 20) | 1) - 1;
++ ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
++ ctrl |= size << 8;
++ pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl);
++ }
++}
++
+ /**
+ * pci_restore_state - Restore the saved state of a PCI device
+ * @dev: - PCI device that we're dealing with
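
The size computation in pci_restore_rebar_state() packs the BAR size into the Resizable BAR control field, which encodes log2(size in MB) (0 = 1 MB, 1 = 2 MB, ...). The "| 1" nudges exact powers of two up one, so the round-up log2 minus one lands on the correct encoding. A quick userspace check of the arithmetic:

    #include <stdio.h>
    #include <stdint.h>

    static unsigned int order_base_2(uint64_t n)    /* ceil(log2(n)) */
    {
            unsigned int o = 0;

            while ((1ULL << o) < n)
                    o++;
            return o;
    }

    int main(void)
    {
            uint64_t bar_bytes = 256ULL << 20;      /* a 256 MB BAR */
            unsigned int size = order_base_2((bar_bytes >> 20) | 1) - 1;

            /* prints "encoding 8 -> 256 MB" */
            printf("encoding %u -> %llu MB\n", size,
                   (unsigned long long)(1ULL << size));
            return 0;
    }
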
+@@ -1178,6 +1205,7 @@ void pci_restore_state(struct pci_dev *dev)
+ pci_restore_pri_state(dev);
+ pci_restore_ats_state(dev);
+ pci_restore_vc_state(dev);
++ pci_restore_rebar_state(dev);
+
+ pci_cleanup_aer_error_status_regs(dev);
+
+@@ -3573,6 +3601,44 @@ void pci_unmap_iospace(struct resource *res)
+ }
+ EXPORT_SYMBOL(pci_unmap_iospace);
+
++static void devm_pci_unmap_iospace(struct device *dev, void *ptr)
++{
++ struct resource **res = ptr;
++
++ pci_unmap_iospace(*res);
++}
++
++/**
++ * devm_pci_remap_iospace - Managed pci_remap_iospace()
++ * @dev: Generic device to remap IO address for
++ * @res: Resource describing the I/O space
++ * @phys_addr: physical address of range to be mapped
++ *
++ * Managed pci_remap_iospace(). Map is automatically unmapped on driver
++ * detach.
++ */
++int devm_pci_remap_iospace(struct device *dev, const struct resource *res,
++ phys_addr_t phys_addr)
++{
++ const struct resource **ptr;
++ int error;
++
++ ptr = devres_alloc(devm_pci_unmap_iospace, sizeof(*ptr), GFP_KERNEL);
++ if (!ptr)
++ return -ENOMEM;
++
++ error = pci_remap_iospace(res, phys_addr);
++ if (error) {
++ devres_free(ptr);
++ } else {
++ *ptr = res;
++ devres_add(dev, ptr);
++ }
++
++ return error;
++}
++EXPORT_SYMBOL(devm_pci_remap_iospace);
++
+ /**
+ * devm_pci_remap_cfgspace - Managed pci_remap_cfgspace()
+ * @dev: Generic device to remap IO address for
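
devm_pci_remap_iospace() above follows the standard devres recipe: devres_alloc() a small record, attempt the unmanaged operation, then devres_free() the record on failure or devres_add() it so the release callback fires automatically at driver detach. The same ownership idea in a freestanding userspace sketch, with an undo list standing in for devres:

    #include <stdlib.h>

    struct undo {
            void (*fn)(void *);
            void *arg;
            struct undo *next;
    };

    static struct undo *undo_list;

    static int register_undo(void (*fn)(void *), void *arg)
    {
            struct undo *u = malloc(sizeof(*u));

            if (!u)
                    return -1;
            u->fn = fn;
            u->arg = arg;
            u->next = undo_list;
            undo_list = u;
            return 0;
    }

    static void run_undos(void)     /* the "driver detach" moment */
    {
            while (undo_list) {
                    struct undo *u = undo_list;

                    undo_list = u->next;
                    u->fn(u->arg);
                    free(u);
            }
    }
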
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index d21686ad3ce5..979fd599fc66 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1677,6 +1677,10 @@ static void pci_configure_mps(struct pci_dev *dev)
+ if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge))
+ return;
+
++ /* MPS and MRRS fields are of type 'RsvdP' for VFs, short-circuit out */
++ if (dev->is_virtfn)
++ return;
++
+ mps = pcie_get_mps(dev);
+ p_mps = pcie_get_mps(bridge);
+
+diff --git a/drivers/perf/xgene_pmu.c b/drivers/perf/xgene_pmu.c
+index 6bdb1dad805f..0e31f1392a53 100644
+--- a/drivers/perf/xgene_pmu.c
++++ b/drivers/perf/xgene_pmu.c
+@@ -1463,7 +1463,7 @@ static char *xgene_pmu_dev_name(struct device *dev, u32 type, int id)
+ case PMU_TYPE_IOB:
+ return devm_kasprintf(dev, GFP_KERNEL, "iob%d", id);
+ case PMU_TYPE_IOB_SLOW:
+- return devm_kasprintf(dev, GFP_KERNEL, "iob-slow%d", id);
++ return devm_kasprintf(dev, GFP_KERNEL, "iob_slow%d", id);
+ case PMU_TYPE_MCB:
+ return devm_kasprintf(dev, GFP_KERNEL, "mcb%d", id);
+ case PMU_TYPE_MC:
+diff --git a/drivers/pinctrl/bcm/pinctrl-nsp-mux.c b/drivers/pinctrl/bcm/pinctrl-nsp-mux.c
+index 35c17653c694..87618a4e90e4 100644
+--- a/drivers/pinctrl/bcm/pinctrl-nsp-mux.c
++++ b/drivers/pinctrl/bcm/pinctrl-nsp-mux.c
+@@ -460,8 +460,8 @@ static int nsp_pinmux_enable(struct pinctrl_dev *pctrl_dev,
+ const struct nsp_pin_function *func;
+ const struct nsp_pin_group *grp;
+
+- if (grp_select > pinctrl->num_groups ||
+- func_select > pinctrl->num_functions)
++ if (grp_select >= pinctrl->num_groups ||
++ func_select >= pinctrl->num_functions)
+ return -EINVAL;
+
+ func = &pinctrl->functions[func_select];
+@@ -577,6 +577,8 @@ static int nsp_pinmux_probe(struct platform_device *pdev)
+ return PTR_ERR(pinctrl->base0);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
++ if (!res)
++ return -EINVAL;
+ pinctrl->base1 = devm_ioremap_nocache(&pdev->dev, res->start,
+ resource_size(res));
+ if (!pinctrl->base1) {
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index ac38a3f9f86b..4699b55a0990 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -536,7 +536,7 @@ static int ingenic_pinmux_gpio_set_direction(struct pinctrl_dev *pctldev,
+ ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PAT1, input);
+ } else {
+ ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, false);
+- ingenic_config_pin(jzpc, pin, JZ4740_GPIO_DIR, input);
++ ingenic_config_pin(jzpc, pin, JZ4740_GPIO_DIR, !input);
+ ingenic_config_pin(jzpc, pin, JZ4740_GPIO_FUNC, false);
+ }
+
+diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
+index c52c6723374b..d3f4b6e91f49 100644
+--- a/drivers/platform/x86/dell-laptop.c
++++ b/drivers/platform/x86/dell-laptop.c
+@@ -2170,7 +2170,7 @@ static int __init dell_init(void)
+ dell_fill_request(&buffer, token->location, 0, 0, 0);
+ ret = dell_send_request(&buffer,
+ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
+- if (ret)
++ if (ret == 0)
+ max_intensity = buffer.output[3];
+ }
+
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index 6d4012dd6922..bac1eeb3d312 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -265,8 +265,10 @@ int __rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+ return err;
+
+ /* full-function RTCs won't have such missing fields */
+- if (rtc_valid_tm(&alarm->time) == 0)
++ if (rtc_valid_tm(&alarm->time) == 0) {
++ rtc_add_offset(rtc, &alarm->time);
+ return 0;
++ }
+
+ /* get the "after" timestamp, to detect wrapped fields */
+ err = rtc_read_time(rtc, &now);
+@@ -409,7 +411,6 @@ static int __rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+ if (err)
+ return err;
+
+- rtc_subtract_offset(rtc, &alarm->time);
+ scheduled = rtc_tm_to_time64(&alarm->time);
+
+ /* Make sure we're not setting alarms in the past */
+@@ -426,6 +427,8 @@ static int __rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+ * over right here, before we set the alarm.
+ */
+
++ rtc_subtract_offset(rtc, &alarm->time);
++
+ if (!rtc->ops)
+ err = -ENODEV;
+ else if (!rtc->ops->set_alarm)
+@@ -467,7 +470,6 @@ int rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+
+ mutex_unlock(&rtc->ops_lock);
+
+- rtc_add_offset(rtc, &alarm->time);
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(rtc_set_alarm);
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index ea6a2d0b2894..770fa9cfc310 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -177,6 +177,7 @@ static int vfio_ccw_sch_event(struct subchannel *sch, int process)
+ {
+ struct vfio_ccw_private *private = dev_get_drvdata(&sch->dev);
+ unsigned long flags;
++ int rc = -EAGAIN;
+
+ spin_lock_irqsave(sch->lock, flags);
+ if (!device_is_registered(&sch->dev))
+@@ -187,6 +188,7 @@ static int vfio_ccw_sch_event(struct subchannel *sch, int process)
+
+ if (cio_update_schib(sch)) {
+ vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
++ rc = 0;
+ goto out_unlock;
+ }
+
+@@ -195,11 +197,12 @@ static int vfio_ccw_sch_event(struct subchannel *sch, int process)
+ private->state = private->mdev ? VFIO_CCW_STATE_IDLE :
+ VFIO_CCW_STATE_STANDBY;
+ }
++ rc = 0;
+
+ out_unlock:
+ spin_unlock_irqrestore(sch->lock, flags);
+
+- return 0;
++ return rc;
+ }
+
+ static struct css_device_id vfio_ccw_sch_ids[] = {
+diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
+index b7f75339683e..a05c53a44973 100644
+--- a/drivers/s390/net/qeth_core.h
++++ b/drivers/s390/net/qeth_core.h
+@@ -1003,7 +1003,7 @@ struct qeth_cmd_buffer *qeth_get_setassparms_cmd(struct qeth_card *,
+ __u16, __u16,
+ enum qeth_prot_versions);
+ int qeth_set_features(struct net_device *, netdev_features_t);
+-void qeth_recover_features(struct net_device *dev);
++void qeth_enable_hw_features(struct net_device *dev);
+ netdev_features_t qeth_fix_features(struct net_device *, netdev_features_t);
+ netdev_features_t qeth_features_check(struct sk_buff *skb,
+ struct net_device *dev,
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index b2eebcffd502..32075de2f1e5 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -6432,28 +6432,27 @@ static int qeth_set_ipa_tso(struct qeth_card *card, int on)
+ #define QETH_HW_FEATURES (NETIF_F_RXCSUM | NETIF_F_IP_CSUM | NETIF_F_TSO)
+
+ /**
+- * qeth_recover_features() - Restore device features after recovery
+- * @dev: the recovering net_device
+- *
+- * Caller must hold rtnl lock.
++ * qeth_enable_hw_features() - (Re-)Enable HW functions for device features
++ * @dev: a net_device
+ */
+-void qeth_recover_features(struct net_device *dev)
++void qeth_enable_hw_features(struct net_device *dev)
+ {
+- netdev_features_t features = dev->features;
+ struct qeth_card *card = dev->ml_priv;
++ netdev_features_t features;
+
++ rtnl_lock();
++ features = dev->features;
+ /* force-off any feature that needs an IPA sequence.
+ * netdev_update_features() will restart them.
+ */
+ dev->features &= ~QETH_HW_FEATURES;
+ netdev_update_features(dev);
+-
+- if (features == dev->features)
+- return;
+- dev_warn(&card->gdev->dev,
+- "Device recovery failed to restore all offload features\n");
++ if (features != dev->features)
++ dev_warn(&card->gdev->dev,
++ "Device recovery failed to restore all offload features\n");
++ rtnl_unlock();
+ }
+-EXPORT_SYMBOL_GPL(qeth_recover_features);
++EXPORT_SYMBOL_GPL(qeth_enable_hw_features);
+
+ int qeth_set_features(struct net_device *dev, netdev_features_t features)
+ {
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index 16dc8b83ca6f..525c82ba923c 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -1130,6 +1130,8 @@ static int __qeth_l2_set_online(struct ccwgroup_device *gdev, int recovery_mode)
+ netif_carrier_off(card->dev);
+
+ qeth_set_allowed_threads(card, 0xffffffff, 0);
++
++ qeth_enable_hw_features(card->dev);
+ if (recover_flag == CARD_STATE_RECOVER) {
+ if (recovery_mode &&
+ card->info.type != QETH_CARD_TYPE_OSN) {
+@@ -1141,9 +1143,6 @@ static int __qeth_l2_set_online(struct ccwgroup_device *gdev, int recovery_mode)
+ }
+ /* this also sets saved unicast addresses */
+ qeth_l2_set_rx_mode(card->dev);
+- rtnl_lock();
+- qeth_recover_features(card->dev);
+- rtnl_unlock();
+ }
+ /* let user_space know that device is online */
+ kobject_uevent(&gdev->dev.kobj, KOBJ_CHANGE);
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index c1a16a74aa83..8de498befde2 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -2792,6 +2792,8 @@ static int __qeth_l3_set_online(struct ccwgroup_device *gdev, int recovery_mode)
+ netif_carrier_on(card->dev);
+ else
+ netif_carrier_off(card->dev);
++
++ qeth_enable_hw_features(card->dev);
+ if (recover_flag == CARD_STATE_RECOVER) {
+ rtnl_lock();
+ if (recovery_mode)
+@@ -2799,7 +2801,6 @@ static int __qeth_l3_set_online(struct ccwgroup_device *gdev, int recovery_mode)
+ else
+ dev_open(card->dev);
+ qeth_l3_set_rx_mode(card->dev);
+- qeth_recover_features(card->dev);
+ rtnl_unlock();
+ }
+ qeth_trace_features(card);
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index b92f86acb8bb..d37e8dd538f2 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -3438,11 +3438,11 @@ static void hpsa_get_enclosure_info(struct ctlr_info *h,
+ struct ext_report_lun_entry *rle = &rlep->LUN[rle_index];
+ u16 bmic_device_index = 0;
+
+- bmic_device_index = GET_BMIC_DRIVE_NUMBER(&rle->lunid[0]);
+-
+- encl_dev->sas_address =
++ encl_dev->eli =
+ hpsa_get_enclosure_logical_identifier(h, scsi3addr);
+
++ bmic_device_index = GET_BMIC_DRIVE_NUMBER(&rle->lunid[0]);
++
+ if (encl_dev->target == -1 || encl_dev->lun == -1) {
+ rc = IO_OK;
+ goto out;
+@@ -9695,7 +9695,24 @@ hpsa_sas_get_linkerrors(struct sas_phy *phy)
+ static int
+ hpsa_sas_get_enclosure_identifier(struct sas_rphy *rphy, u64 *identifier)
+ {
+- *identifier = rphy->identify.sas_address;
++ struct Scsi_Host *shost = phy_to_shost(rphy);
++ struct ctlr_info *h;
++ struct hpsa_scsi_dev_t *sd;
++
++ if (!shost)
++ return -ENXIO;
++
++ h = shost_to_hba(shost);
++
++ if (!h)
++ return -ENXIO;
++
++ sd = hpsa_find_device_by_sas_rphy(h, rphy);
++ if (!sd)
++ return -ENXIO;
++
++ *identifier = sd->eli;
++
+ return 0;
+ }
+
+diff --git a/drivers/scsi/hpsa.h b/drivers/scsi/hpsa.h
+index fb9f5e7f8209..59e023696fff 100644
+--- a/drivers/scsi/hpsa.h
++++ b/drivers/scsi/hpsa.h
+@@ -68,6 +68,7 @@ struct hpsa_scsi_dev_t {
+ #define RAID_CTLR_LUNID "\0\0\0\0\0\0\0\0"
+ unsigned char device_id[16]; /* from inquiry pg. 0x83 */
+ u64 sas_address;
++ u64 eli; /* from report diags. */
+ unsigned char vendor[8]; /* bytes 8-15 of inquiry data */
+ unsigned char model[16]; /* bytes 16-31 of inquiry data */
+ unsigned char rev; /* byte 2 of inquiry data */
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 5015b8fbbfc5..b62239b548c4 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3241,6 +3241,11 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
+
+ init_completion(&qedf->flogi_compl);
+
++ status = qed_ops->common->update_drv_state(qedf->cdev, true);
++ if (status)
++ QEDF_ERR(&(qedf->dbg_ctx),
++ "Failed to send drv state to MFW.\n");
++
+ memset(&link_params, 0, sizeof(struct qed_link_params));
+ link_params.link_up = true;
+ status = qed_ops->common->set_link(qedf->cdev, &link_params);
+@@ -3289,6 +3294,7 @@ static int qedf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ static void __qedf_remove(struct pci_dev *pdev, int mode)
+ {
+ struct qedf_ctx *qedf;
++ int rc;
+
+ if (!pdev) {
+ QEDF_ERR(NULL, "pdev is NULL.\n");
+@@ -3383,6 +3389,12 @@ static void __qedf_remove(struct pci_dev *pdev, int mode)
+ qed_ops->common->set_power_state(qedf->cdev, PCI_D0);
+ pci_set_drvdata(pdev, NULL);
+ }
++
++ rc = qed_ops->common->update_drv_state(qedf->cdev, false);
++ if (rc)
++ QEDF_ERR(&(qedf->dbg_ctx),
++ "Failed to send drv state to MFW.\n");
++
+ qed_ops->common->slowpath_stop(qedf->cdev);
+ qed_ops->common->remove(qedf->cdev);
+
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 4da3592aec0f..1e674eb6dd17 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -2075,6 +2075,7 @@ kset_free:
+ static void __qedi_remove(struct pci_dev *pdev, int mode)
+ {
+ struct qedi_ctx *qedi = pci_get_drvdata(pdev);
++ int rval;
+
+ if (qedi->tmf_thread) {
+ flush_workqueue(qedi->tmf_thread);
+@@ -2104,6 +2105,10 @@ static void __qedi_remove(struct pci_dev *pdev, int mode)
+ if (mode == QEDI_MODE_NORMAL)
+ qedi_free_iscsi_pf_param(qedi);
+
++ rval = qedi_ops->common->update_drv_state(qedi->cdev, false);
++ if (rval)
++ QEDI_ERR(&qedi->dbg_ctx, "Failed to send drv state to MFW\n");
++
+ if (!test_bit(QEDI_IN_OFFLINE, &qedi->flags)) {
+ qedi_ops->common->slowpath_stop(qedi->cdev);
+ qedi_ops->common->remove(qedi->cdev);
+@@ -2378,6 +2383,12 @@ static int __qedi_probe(struct pci_dev *pdev, int mode)
+ if (qedi_setup_boot_info(qedi))
+ QEDI_ERR(&qedi->dbg_ctx,
+ "No iSCSI boot target configured\n");
++
++ rc = qedi_ops->common->update_drv_state(qedi->cdev, true);
++ if (rc)
++ QEDI_ERR(&qedi->dbg_ctx,
++ "Failed to send drv state to MFW\n");
++
+ }
+
+ return 0;
+diff --git a/drivers/scsi/xen-scsifront.c b/drivers/scsi/xen-scsifront.c
+index 36f59a1be7e9..61389bdc7926 100644
+--- a/drivers/scsi/xen-scsifront.c
++++ b/drivers/scsi/xen-scsifront.c
+@@ -654,10 +654,17 @@ static int scsifront_dev_reset_handler(struct scsi_cmnd *sc)
+ static int scsifront_sdev_configure(struct scsi_device *sdev)
+ {
+ struct vscsifrnt_info *info = shost_priv(sdev->host);
++ int err;
+
+- if (info && current == info->curr)
+- xenbus_printf(XBT_NIL, info->dev->nodename,
++ if (info && current == info->curr) {
++ err = xenbus_printf(XBT_NIL, info->dev->nodename,
+ info->dev_state_path, "%d", XenbusStateConnected);
++ if (err) {
++ xenbus_dev_error(info->dev, err,
++ "%s: writing dev_state_path", __func__);
++ return err;
++ }
++ }
+
+ return 0;
+ }
+@@ -665,10 +672,15 @@ static int scsifront_sdev_configure(struct scsi_device *sdev)
+ static void scsifront_sdev_destroy(struct scsi_device *sdev)
+ {
+ struct vscsifrnt_info *info = shost_priv(sdev->host);
++ int err;
+
+- if (info && current == info->curr)
+- xenbus_printf(XBT_NIL, info->dev->nodename,
++ if (info && current == info->curr) {
++ err = xenbus_printf(XBT_NIL, info->dev->nodename,
+ info->dev_state_path, "%d", XenbusStateClosed);
++ if (err)
++ xenbus_dev_error(info->dev, err,
++ "%s: writing dev_state_path", __func__);
++ }
+ }
+
+ static struct scsi_host_template scsifront_sht = {
+@@ -1003,9 +1015,12 @@ static void scsifront_do_lun_hotplug(struct vscsifrnt_info *info, int op)
+
+ if (scsi_add_device(info->host, chn, tgt, lun)) {
+ dev_err(&dev->dev, "scsi_add_device\n");
+- xenbus_printf(XBT_NIL, dev->nodename,
++ err = xenbus_printf(XBT_NIL, dev->nodename,
+ info->dev_state_path,
+ "%d", XenbusStateClosed);
++ if (err)
++ xenbus_dev_error(dev, err,
++ "%s: writing dev_state_path", __func__);
+ }
+ break;
+ case VSCSIFRONT_OP_DEL_LUN:
+@@ -1019,10 +1034,14 @@ static void scsifront_do_lun_hotplug(struct vscsifrnt_info *info, int op)
+ }
+ break;
+ case VSCSIFRONT_OP_READD_LUN:
+- if (device_state == XenbusStateConnected)
+- xenbus_printf(XBT_NIL, dev->nodename,
++ if (device_state == XenbusStateConnected) {
++ err = xenbus_printf(XBT_NIL, dev->nodename,
+ info->dev_state_path,
+ "%d", XenbusStateConnected);
++ if (err)
++ xenbus_dev_error(dev, err,
++ "%s: writing dev_state_path", __func__);
++ }
+ break;
+ default:
+ break;
+diff --git a/drivers/soc/imx/gpc.c b/drivers/soc/imx/gpc.c
+index c4d35f32af8d..f86f0ebab06a 100644
+--- a/drivers/soc/imx/gpc.c
++++ b/drivers/soc/imx/gpc.c
+@@ -27,9 +27,16 @@
+ #define GPC_PGC_SW2ISO_SHIFT 0x8
+ #define GPC_PGC_SW_SHIFT 0x0
+
++#define GPC_PGC_PCI_PDN 0x200
++#define GPC_PGC_PCI_SR 0x20c
++
+ #define GPC_PGC_GPU_PDN 0x260
+ #define GPC_PGC_GPU_PUPSCR 0x264
+ #define GPC_PGC_GPU_PDNSCR 0x268
++#define GPC_PGC_GPU_SR 0x26c
++
++#define GPC_PGC_DISP_PDN 0x240
++#define GPC_PGC_DISP_SR 0x24c
+
+ #define GPU_VPU_PUP_REQ BIT(1)
+ #define GPU_VPU_PDN_REQ BIT(0)
+@@ -318,10 +325,24 @@ static const struct of_device_id imx_gpc_dt_ids[] = {
+ { }
+ };
+
++static const struct regmap_range yes_ranges[] = {
++ regmap_reg_range(GPC_CNTR, GPC_CNTR),
++ regmap_reg_range(GPC_PGC_PCI_PDN, GPC_PGC_PCI_SR),
++ regmap_reg_range(GPC_PGC_GPU_PDN, GPC_PGC_GPU_SR),
++ regmap_reg_range(GPC_PGC_DISP_PDN, GPC_PGC_DISP_SR),
++};
++
++static const struct regmap_access_table access_table = {
++ .yes_ranges = yes_ranges,
++ .n_yes_ranges = ARRAY_SIZE(yes_ranges),
++};
++
+ static const struct regmap_config imx_gpc_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
++ .rd_table = &access_table,
++ .wr_table = &access_table,
+ .max_register = 0x2ac,
+ };
+
+diff --git a/drivers/soc/imx/gpcv2.c b/drivers/soc/imx/gpcv2.c
+index f4e3bd40c72e..6ef18cf8f243 100644
+--- a/drivers/soc/imx/gpcv2.c
++++ b/drivers/soc/imx/gpcv2.c
+@@ -39,10 +39,15 @@
+
+ #define GPC_M4_PU_PDN_FLG 0x1bc
+
+-
+-#define PGC_MIPI 4
+-#define PGC_PCIE 5
+-#define PGC_USB_HSIC 8
++/*
++ * The PGC offset values in the Reference Manual
++ * (Rev. 1, 01/2018 and older) GPC chapter's GPC_PGC
++ * memory map are incorrect; the offset values below
++ * are taken from the design RTL.
++ */
++#define PGC_MIPI 16
++#define PGC_PCIE 17
++#define PGC_USB_HSIC 20
+ #define GPC_PGC_CTRL(n) (0x800 + (n) * 0x40)
+ #define GPC_PGC_SR(n) (GPC_PGC_CTRL(n) + 0xc)
+
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index b0e2c4847a5d..678406e0948b 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -625,7 +625,7 @@ int ptm_open_peer(struct file *master, struct tty_struct *tty, int flags)
+ if (tty->driver != ptm_driver)
+ return -EIO;
+
+- fd = get_unused_fd_flags(0);
++ fd = get_unused_fd_flags(flags);
+ if (fd < 0) {
+ retval = fd;
+ goto err;
+diff --git a/drivers/usb/chipidea/host.c b/drivers/usb/chipidea/host.c
+index af45aa3222b5..4638d9b066be 100644
+--- a/drivers/usb/chipidea/host.c
++++ b/drivers/usb/chipidea/host.c
+@@ -124,8 +124,11 @@ static int host_start(struct ci_hdrc *ci)
+
+ hcd->power_budget = ci->platdata->power_budget;
+ hcd->tpl_support = ci->platdata->tpl_support;
+- if (ci->phy || ci->usb_phy)
++ if (ci->phy || ci->usb_phy) {
+ hcd->skip_phy_initialization = 1;
++ if (ci->usb_phy)
++ hcd->usb_phy = ci->usb_phy;
++ }
+
+ ehci = hcd_to_ehci(hcd);
+ ehci->caps = ci->hw_bank.cap;
+diff --git a/drivers/usb/dwc2/core.h b/drivers/usb/dwc2/core.h
+index a666e0758a99..143309341e11 100644
+--- a/drivers/usb/dwc2/core.h
++++ b/drivers/usb/dwc2/core.h
+@@ -915,6 +915,7 @@ struct dwc2_hregs_backup {
+ * @frame_list_sz: Frame list size
+ * @desc_gen_cache: Kmem cache for generic descriptors
+ * @desc_hsisoc_cache: Kmem cache for hs isochronous descriptors
++ * @unaligned_cache: Kmem cache for DMA mode to handle non-aligned buf
+ *
+ * These are for peripheral mode:
+ *
+@@ -1061,6 +1062,8 @@ struct dwc2_hsotg {
+ u32 frame_list_sz;
+ struct kmem_cache *desc_gen_cache;
+ struct kmem_cache *desc_hsisoc_cache;
++ struct kmem_cache *unaligned_cache;
++#define DWC2_KMEM_UNALIGNED_BUF_SIZE 1024
+
+ #endif /* CONFIG_USB_DWC2_HOST || CONFIG_USB_DWC2_DUAL_ROLE */
+
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 83cb5577a52f..22240f8fe4ad 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -808,6 +808,7 @@ static int dwc2_gadget_fill_isoc_desc(struct dwc2_hsotg_ep *hs_ep,
+ u32 index;
+ u32 maxsize = 0;
+ u32 mask = 0;
++ u8 pid = 0;
+
+ maxsize = dwc2_gadget_get_desc_params(hs_ep, &mask);
+ if (len > maxsize) {
+@@ -853,7 +854,11 @@ static int dwc2_gadget_fill_isoc_desc(struct dwc2_hsotg_ep *hs_ep,
+ ((len << DEV_DMA_NBYTES_SHIFT) & mask));
+
+ if (hs_ep->dir_in) {
+- desc->status |= ((hs_ep->mc << DEV_DMA_ISOC_PID_SHIFT) &
++ if (len)
++ pid = DIV_ROUND_UP(len, hs_ep->ep.maxpacket);
++ else
++ pid = 1;
++ desc->status |= ((pid << DEV_DMA_ISOC_PID_SHIFT) &
+ DEV_DMA_ISOC_PID_MASK) |
+ ((len % hs_ep->ep.maxpacket) ?
+ DEV_DMA_SHORT : 0) |
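
The fix above derives the isoc IN descriptor's PID field from the actual request length rather than the endpoint's mc value: DIV_ROUND_UP(len, maxpacket) packets in the microframe, with a zero-length request still carrying PID 1. Spot-checking the arithmetic in userspace:

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
            unsigned int maxpacket = 1024;
            unsigned int lens[] = { 0, 512, 1024, 3000 };
            unsigned int i;

            for (i = 0; i < 4; i++) {
                    unsigned int len = lens[i];
                    unsigned int pid = len ? DIV_ROUND_UP(len, maxpacket) : 1;

                    /* 0->1, 512->1, 1024->1, 3000->3 */
                    printf("len %u -> pid %u\n", len, pid);
            }
            return 0;
    }
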
+@@ -892,6 +897,7 @@ static void dwc2_gadget_start_isoc_ddma(struct dwc2_hsotg_ep *hs_ep)
+ u32 ctrl;
+
+ if (list_empty(&hs_ep->queue)) {
++ hs_ep->target_frame = TARGET_FRAME_INITIAL;
+ dev_dbg(hsotg->dev, "%s: No requests in queue\n", __func__);
+ return;
+ }
+@@ -4720,9 +4726,11 @@ int dwc2_gadget_init(struct dwc2_hsotg *hsotg)
+ }
+
+ ret = usb_add_gadget_udc(dev, &hsotg->gadget);
+- if (ret)
++ if (ret) {
++ dwc2_hsotg_ep_free_request(&hsotg->eps_out[0]->ep,
++ hsotg->ctrl_req);
+ return ret;
+-
++ }
+ dwc2_hsotg_dump(hsotg);
+
+ return 0;
+@@ -4735,6 +4743,7 @@ int dwc2_gadget_init(struct dwc2_hsotg *hsotg)
+ int dwc2_hsotg_remove(struct dwc2_hsotg *hsotg)
+ {
+ usb_del_gadget_udc(&hsotg->gadget);
++ dwc2_hsotg_ep_free_request(&hsotg->eps_out[0]->ep, hsotg->ctrl_req);
+
+ return 0;
+ }
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 3a5f0005fae5..0d66ec3f59a2 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -1567,11 +1567,20 @@ static void dwc2_hc_start_transfer(struct dwc2_hsotg *hsotg,
+ }
+
+ if (hsotg->params.host_dma) {
+- dwc2_writel((u32)chan->xfer_dma,
+- hsotg->regs + HCDMA(chan->hc_num));
++ dma_addr_t dma_addr;
++
++ if (chan->align_buf) {
++ if (dbg_hc(chan))
++ dev_vdbg(hsotg->dev, "align_buf\n");
++ dma_addr = chan->align_buf;
++ } else {
++ dma_addr = chan->xfer_dma;
++ }
++ dwc2_writel((u32)dma_addr, hsotg->regs + HCDMA(chan->hc_num));
++
+ if (dbg_hc(chan))
+ dev_vdbg(hsotg->dev, "Wrote %08lx to HCDMA(%d)\n",
+- (unsigned long)chan->xfer_dma, chan->hc_num);
++ (unsigned long)dma_addr, chan->hc_num);
+ }
+
+ /* Start the split */
+@@ -2625,6 +2634,35 @@ static void dwc2_hc_init_xfer(struct dwc2_hsotg *hsotg,
+ }
+ }
+
++static int dwc2_alloc_split_dma_aligned_buf(struct dwc2_hsotg *hsotg,
++ struct dwc2_qh *qh,
++ struct dwc2_host_chan *chan)
++{
++ if (!hsotg->unaligned_cache ||
++ chan->max_packet > DWC2_KMEM_UNALIGNED_BUF_SIZE)
++ return -ENOMEM;
++
++ if (!qh->dw_align_buf) {
++ qh->dw_align_buf = kmem_cache_alloc(hsotg->unaligned_cache,
++ GFP_ATOMIC | GFP_DMA);
++ if (!qh->dw_align_buf)
++ return -ENOMEM;
++ }
++
++ qh->dw_align_buf_dma = dma_map_single(hsotg->dev, qh->dw_align_buf,
++ DWC2_KMEM_UNALIGNED_BUF_SIZE,
++ DMA_FROM_DEVICE);
++
++ if (dma_mapping_error(hsotg->dev, qh->dw_align_buf_dma)) {
++ dev_err(hsotg->dev, "can't map align_buf\n");
++ chan->align_buf = 0;
++ return -EINVAL;
++ }
++
++ chan->align_buf = qh->dw_align_buf_dma;
++ return 0;
++}
++
+ #define DWC2_USB_DMA_ALIGN 4
+
+ static void dwc2_free_dma_aligned_buffer(struct urb *urb)
+@@ -2804,6 +2842,32 @@ static int dwc2_assign_and_init_hc(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh)
+ /* Set the transfer attributes */
+ dwc2_hc_init_xfer(hsotg, chan, qtd);
+
++ /* For non-dword aligned buffers */
++ if (hsotg->params.host_dma && qh->do_split &&
++ chan->ep_is_in && (chan->xfer_dma & 0x3)) {
++ dev_vdbg(hsotg->dev, "Non-aligned buffer\n");
++ if (dwc2_alloc_split_dma_aligned_buf(hsotg, qh, chan)) {
++ dev_err(hsotg->dev,
++ "Failed to allocate memory to handle non-aligned buffer\n");
++ /* Add channel back to free list */
++ chan->align_buf = 0;
++ chan->multi_count = 0;
++ list_add_tail(&chan->hc_list_entry,
++ &hsotg->free_hc_list);
++ qtd->in_process = 0;
++ qh->channel = NULL;
++ return -ENOMEM;
++ }
++ } else {
++ /*
++ * We assume that DMA is always aligned in non-split
++ * case or split out case. Warn if not.
++ */
++ WARN_ON_ONCE(hsotg->params.host_dma &&
++ (chan->xfer_dma & 0x3));
++ chan->align_buf = 0;
++ }
++
+ if (chan->ep_type == USB_ENDPOINT_XFER_INT ||
+ chan->ep_type == USB_ENDPOINT_XFER_ISOC)
+ /*
+@@ -5248,6 +5312,19 @@ int dwc2_hcd_init(struct dwc2_hsotg *hsotg)
+ }
+ }
+
++ if (hsotg->params.host_dma) {
++ /*
++ * Create kmem caches to handle non-aligned buffer
++ * in Buffer DMA mode.
++ */
++ hsotg->unaligned_cache = kmem_cache_create("dwc2-unaligned-dma",
++ DWC2_KMEM_UNALIGNED_BUF_SIZE, 4,
++ SLAB_CACHE_DMA, NULL);
++ if (!hsotg->unaligned_cache)
++ dev_err(hsotg->dev,
++ "unable to create dwc2 unaligned cache\n");
++ }
++
+ hsotg->otg_port = 1;
+ hsotg->frame_list = NULL;
+ hsotg->frame_list_dma = 0;
+@@ -5282,8 +5359,9 @@ int dwc2_hcd_init(struct dwc2_hsotg *hsotg)
+ return 0;
+
+ error4:
+- kmem_cache_destroy(hsotg->desc_gen_cache);
++ kmem_cache_destroy(hsotg->unaligned_cache);
+ kmem_cache_destroy(hsotg->desc_hsisoc_cache);
++ kmem_cache_destroy(hsotg->desc_gen_cache);
+ error3:
+ dwc2_hcd_release(hsotg);
+ error2:
+@@ -5324,8 +5402,9 @@ void dwc2_hcd_remove(struct dwc2_hsotg *hsotg)
+ usb_remove_hcd(hcd);
+ hsotg->priv = NULL;
+
+- kmem_cache_destroy(hsotg->desc_gen_cache);
++ kmem_cache_destroy(hsotg->unaligned_cache);
+ kmem_cache_destroy(hsotg->desc_hsisoc_cache);
++ kmem_cache_destroy(hsotg->desc_gen_cache);
+
+ dwc2_hcd_release(hsotg);
+ usb_put_hcd(hcd);
+@@ -5437,7 +5516,7 @@ int dwc2_host_enter_hibernation(struct dwc2_hsotg *hsotg)
+ dwc2_writel(hprt0, hsotg->regs + HPRT0);
+
+ /* Wait for the HPRT0.PrtSusp register field to be set */
+- if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 300))
++ if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 3000))
+ dev_warn(hsotg->dev, "Suspend wasn't generated\n");
+
+ /*
+@@ -5618,6 +5697,8 @@ int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
+ return ret;
+ }
+
++ dwc2_hcd_rem_wakeup(hsotg);
++
+ hsotg->hibernated = 0;
+ hsotg->bus_suspended = 0;
+ hsotg->lx_state = DWC2_L0;
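
The dwc2 hunks above implement a bounce-buffer scheme: Buffer DMA needs
DWORD-aligned addresses, so split-IN transfers with a non-aligned
destination are redirected into a DMA-able buffer from a dedicated kmem
cache and copied back on completion (see the hcd_intr.c hunk below). A
minimal userspace sketch of the same pattern, with memcpy() standing in
for the device DMA and posix_memalign() for the kmem cache; names are
illustrative:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define DMA_ALIGN   4
    #define BOUNCE_SIZE 1024   /* mirrors DWC2_KMEM_UNALIGNED_BUF_SIZE */

    /* "Receive" len bytes into dst, bouncing if dst is not aligned. */
    static int rx_into(uint8_t *dst, const uint8_t *src, size_t len)
    {
        if (((uintptr_t)dst & (DMA_ALIGN - 1)) == 0) {
            memcpy(dst, src, len);      /* aligned: DMA straight to dst */
            return 0;
        }
        if (len > BOUNCE_SIZE)
            return -1;                  /* too big to bounce, as in the driver */

        void *bounce;
        /* sizeof(void *) >= DMA_ALIGN, satisfying posix_memalign's rules */
        if (posix_memalign(&bounce, sizeof(void *), BOUNCE_SIZE))
            return -1;
        memcpy(bounce, src, len);       /* DMA lands in the aligned buffer */
        memcpy(dst, bounce, len);       /* copy back on completion */
        free(bounce);
        return 0;
    }
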
+diff --git a/drivers/usb/dwc2/hcd.h b/drivers/usb/dwc2/hcd.h
+index 96a9da5fb202..5b5b9e6f2feb 100644
+--- a/drivers/usb/dwc2/hcd.h
++++ b/drivers/usb/dwc2/hcd.h
+@@ -76,6 +76,8 @@ struct dwc2_qh;
+ * (micro)frame
+ * @xfer_buf: Pointer to current transfer buffer position
+ * @xfer_dma: DMA address of xfer_buf
++ * @align_buf: In Buffer DMA mode this will be used if xfer_buf is not
++ * DWORD aligned
+ * @xfer_len: Total number of bytes to transfer
+ * @xfer_count: Number of bytes transferred so far
+ * @start_pkt_count: Packet count at start of transfer
+@@ -133,6 +135,7 @@ struct dwc2_host_chan {
+
+ u8 *xfer_buf;
+ dma_addr_t xfer_dma;
++ dma_addr_t align_buf;
+ u32 xfer_len;
+ u32 xfer_count;
+ u16 start_pkt_count;
+@@ -303,6 +306,9 @@ struct dwc2_hs_transfer_time {
+ * is tightly packed.
+ * @ls_duration_us: Duration on the low speed bus schedule.
+ * @ntd: Actual number of transfer descriptors in a list
++ * @dw_align_buf: Used instead of original buffer if its physical address
++ * is not dword-aligned
++ * @dw_align_buf_dma: DMA address for dw_align_buf
+ * @qtd_list: List of QTDs for this QH
+ * @channel: Host channel currently processing transfers for this QH
+ * @qh_list_entry: Entry for QH in either the periodic or non-periodic
+@@ -350,6 +356,8 @@ struct dwc2_qh {
+ struct dwc2_hs_transfer_time hs_transfers[DWC2_HS_SCHEDULE_UFRAMES];
+ u32 ls_start_schedule_slice;
+ u16 ntd;
++ u8 *dw_align_buf;
++ dma_addr_t dw_align_buf_dma;
+ struct list_head qtd_list;
+ struct dwc2_host_chan *channel;
+ struct list_head qh_list_entry;
+diff --git a/drivers/usb/dwc2/hcd_intr.c b/drivers/usb/dwc2/hcd_intr.c
+index a5dfd9d8bd9a..9751785ec561 100644
+--- a/drivers/usb/dwc2/hcd_intr.c
++++ b/drivers/usb/dwc2/hcd_intr.c
+@@ -930,14 +930,21 @@ static int dwc2_xfercomp_isoc_split_in(struct dwc2_hsotg *hsotg,
+ frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
+ len = dwc2_get_actual_xfer_length(hsotg, chan, chnum, qtd,
+ DWC2_HC_XFER_COMPLETE, NULL);
+- if (!len) {
++ if (!len && !qtd->isoc_split_offset) {
+ qtd->complete_split = 0;
+- qtd->isoc_split_offset = 0;
+ return 0;
+ }
+
+ frame_desc->actual_length += len;
+
++ if (chan->align_buf) {
++ dev_vdbg(hsotg->dev, "non-aligned buffer\n");
++ dma_unmap_single(hsotg->dev, chan->qh->dw_align_buf_dma,
++ DWC2_KMEM_UNALIGNED_BUF_SIZE, DMA_FROM_DEVICE);
++ memcpy(qtd->urb->buf + (chan->xfer_dma - qtd->urb->dma),
++ chan->qh->dw_align_buf, len);
++ }
++
+ qtd->isoc_split_offset += len;
+
+ hctsiz = dwc2_readl(hsotg->regs + HCTSIZ(chnum));
+diff --git a/drivers/usb/dwc2/hcd_queue.c b/drivers/usb/dwc2/hcd_queue.c
+index 6baa75da7907..7995106c98e8 100644
+--- a/drivers/usb/dwc2/hcd_queue.c
++++ b/drivers/usb/dwc2/hcd_queue.c
+@@ -1695,6 +1695,9 @@ void dwc2_hcd_qh_free(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh)
+
+ if (qh->desc_list)
+ dwc2_hcd_qh_free_ddma(hsotg, qh);
++ else if (hsotg->unaligned_cache && qh->dw_align_buf)
++ kmem_cache_free(hsotg->unaligned_cache, qh->dw_align_buf);
++
+ kfree(qh);
+ }
+
+diff --git a/drivers/usb/dwc3/dwc3-of-simple.c b/drivers/usb/dwc3/dwc3-of-simple.c
+index cb2ee96fd3e8..048922d549dd 100644
+--- a/drivers/usb/dwc3/dwc3-of-simple.c
++++ b/drivers/usb/dwc3/dwc3-of-simple.c
+@@ -165,8 +165,9 @@ static int dwc3_of_simple_remove(struct platform_device *pdev)
+
+ reset_control_put(simple->resets);
+
+- pm_runtime_put_sync(dev);
+ pm_runtime_disable(dev);
++ pm_runtime_put_noidle(dev);
++ pm_runtime_set_suspended(dev);
+
+ return 0;
+ }
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index c961a94d136b..f57e7c94b8e5 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -34,6 +34,7 @@
+ #define PCI_DEVICE_ID_INTEL_GLK 0x31aa
+ #define PCI_DEVICE_ID_INTEL_CNPLP 0x9dee
+ #define PCI_DEVICE_ID_INTEL_CNPH 0xa36e
++#define PCI_DEVICE_ID_INTEL_ICLLP 0x34ee
+
+ #define PCI_INTEL_BXT_DSM_GUID "732b85d5-b7a7-4a1b-9ba0-4bbd00ffd511"
+ #define PCI_INTEL_BXT_FUNC_PMU_PWR 4
+@@ -289,6 +290,7 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_GLK), },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CNPLP), },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CNPH), },
++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICLLP), },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB), },
+ { } /* Terminating Entry */
+ };
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 330c591fd7d6..afd6c977beec 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -1719,6 +1719,8 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ */
+ if (w_value && !f->get_alt)
+ break;
++
++ spin_lock(&cdev->lock);
+ value = f->set_alt(f, w_index, w_value);
+ if (value == USB_GADGET_DELAYED_STATUS) {
+ DBG(cdev,
+@@ -1728,6 +1730,7 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ DBG(cdev, "delayed_status count %d\n",
+ cdev->delayed_status);
+ }
++ spin_unlock(&cdev->lock);
+ break;
+ case USB_REQ_GET_INTERFACE:
+ if (ctrl->bRequestType != (USB_DIR_IN|USB_RECIP_INTERFACE))
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 7e57439ac282..32fdc79cd73b 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -215,6 +215,7 @@ struct ffs_io_data {
+
+ struct mm_struct *mm;
+ struct work_struct work;
++ struct work_struct cancellation_work;
+
+ struct usb_ep *ep;
+ struct usb_request *req;
+@@ -1072,22 +1073,31 @@ ffs_epfile_open(struct inode *inode, struct file *file)
+ return 0;
+ }
+
++static void ffs_aio_cancel_worker(struct work_struct *work)
++{
++ struct ffs_io_data *io_data = container_of(work, struct ffs_io_data,
++ cancellation_work);
++
++ ENTER();
++
++ usb_ep_dequeue(io_data->ep, io_data->req);
++}
++
+ static int ffs_aio_cancel(struct kiocb *kiocb)
+ {
+ struct ffs_io_data *io_data = kiocb->private;
+- struct ffs_epfile *epfile = kiocb->ki_filp->private_data;
++ struct ffs_data *ffs = io_data->ffs;
+ int value;
+
+ ENTER();
+
+- spin_lock_irq(&epfile->ffs->eps_lock);
+-
+- if (likely(io_data && io_data->ep && io_data->req))
+- value = usb_ep_dequeue(io_data->ep, io_data->req);
+- else
++ if (likely(io_data && io_data->ep && io_data->req)) {
++ INIT_WORK(&io_data->cancellation_work, ffs_aio_cancel_worker);
++ queue_work(ffs->io_completion_wq, &io_data->cancellation_work);
++ value = -EINPROGRESS;
++ } else {
+ value = -EINVAL;
+-
+- spin_unlock_irq(&epfile->ffs->eps_lock);
++ }
+
+ return value;
+ }
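
The f_fs hunk above moves usb_ep_dequeue() out of the aio cancel path into
a work item: the cancel callback can run in a context where the possibly
sleeping dequeue must not, so the heavy part is queued on the function's
io_completion_wq and the caller just learns the cancel is underway. A loose
userspace analogue of "defer the slow part, report in-progress", using a
plain thread in place of the workqueue:

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Stands in for usb_ep_dequeue(): may sleep, so it must not run
     * directly in the cancel callback. */
    static void *cancel_worker(void *arg)
    {
        int *req = arg;
        usleep(1000);                  /* pretend to talk to hardware */
        printf("dequeued request %d\n", *req);
        return NULL;
    }

    /* Shaped like ffs_aio_cancel(): queue the work, say "in progress". */
    static int aio_cancel(pthread_t *worker, int *req)
    {
        if (pthread_create(worker, NULL, cancel_worker, req))
            return -EINVAL;
        return -EINPROGRESS;
    }

    int main(void)
    {
        pthread_t worker;
        int req = 42;
        printf("cancel -> %d\n", aio_cancel(&worker, &req));
        pthread_join(worker, NULL);
        return 0;
    }
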
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index c359bae7b754..18925c0dde59 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -507,16 +507,18 @@ static int xhci_do_dbc_start(struct xhci_hcd *xhci)
+ return 0;
+ }
+
+-static void xhci_do_dbc_stop(struct xhci_hcd *xhci)
++static int xhci_do_dbc_stop(struct xhci_hcd *xhci)
+ {
+ struct xhci_dbc *dbc = xhci->dbc;
+
+ if (dbc->state == DS_DISABLED)
+- return;
++ return -1;
+
+ writel(0, &dbc->regs->control);
+ xhci_dbc_mem_cleanup(xhci);
+ dbc->state = DS_DISABLED;
++
++ return 0;
+ }
+
+ static int xhci_dbc_start(struct xhci_hcd *xhci)
+@@ -543,6 +545,7 @@ static int xhci_dbc_start(struct xhci_hcd *xhci)
+
+ static void xhci_dbc_stop(struct xhci_hcd *xhci)
+ {
++ int ret;
+ unsigned long flags;
+ struct xhci_dbc *dbc = xhci->dbc;
+ struct dbc_port *port = &dbc->port;
+@@ -555,10 +558,11 @@ static void xhci_dbc_stop(struct xhci_hcd *xhci)
+ xhci_dbc_tty_unregister_device(xhci);
+
+ spin_lock_irqsave(&dbc->lock, flags);
+- xhci_do_dbc_stop(xhci);
++ ret = xhci_do_dbc_stop(xhci);
+ spin_unlock_irqrestore(&dbc->lock, flags);
+
+- pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller);
++ if (!ret)
++ pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller);
+ }
+
+ static void
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 2c076ea80522..1ed87cee8d21 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -479,7 +479,7 @@ static void tegra_xusb_mbox_handle(struct tegra_xusb *tegra,
+ unsigned long mask;
+ unsigned int port;
+ bool idle, enable;
+- int err;
++ int err = 0;
+
+ memset(&rsp, 0, sizeof(rsp));
+
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 5fb4319d7fd1..ae56eac34fc7 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1014,8 +1014,13 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ command = readl(&xhci->op_regs->command);
+ command |= CMD_CRS;
+ writel(command, &xhci->op_regs->command);
++ /*
++	 * Some controllers take up to 55+ ms to complete the controller
++	 * restore, so set the timeout to 100 ms. The xHCI specification
++	 * doesn't mention any timeout value.
++ */
+ if (xhci_handshake(&xhci->op_regs->status,
+- STS_RESTORE, 0, 10 * 1000)) {
++ STS_RESTORE, 0, 100 * 1000)) {
+ xhci_warn(xhci, "WARN: xHC restore state timeout\n");
+ spin_unlock_irq(&xhci->lock);
+ return -ETIMEDOUT;
+diff --git a/drivers/usb/typec/tcpm.c b/drivers/usb/typec/tcpm.c
+index 9b29b67191bc..58b6598f8e10 100644
+--- a/drivers/usb/typec/tcpm.c
++++ b/drivers/usb/typec/tcpm.c
+@@ -2543,7 +2543,8 @@ static void run_state_machine(struct tcpm_port *port)
+ tcpm_port_is_sink(port) &&
+ time_is_after_jiffies(port->delayed_runtime)) {
+ tcpm_set_state(port, SNK_DISCOVERY,
+- port->delayed_runtime - jiffies);
++ jiffies_to_msecs(port->delayed_runtime -
++ jiffies));
+ break;
+ }
+ tcpm_set_state(port, unattached_state(port), 0);
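
The tcpm hunk above is a unit fix: tcpm_set_state() takes its delay in
milliseconds, but port->delayed_runtime - jiffies is a tick count, so on
anything but HZ=1000 the rewait was scheduled far too early. A tiny sketch
of the conversion, assuming an illustrative HZ; the kernel's
jiffies_to_msecs() additionally handles rounding and overflow:

    #include <stdio.h>

    #define HZ 250                 /* illustrative CONFIG_HZ */

    static unsigned int jiffies_to_msecs_sketch(unsigned long j)
    {
        return j * 1000 / HZ;
    }

    int main(void)
    {
        unsigned long remaining = 50;   /* jiffies until delayed_runtime */
        /* Passing 50 through unconverted waits 50 ms; 50 jiffies at
         * HZ=250 is really 200 ms - the bug the hunk fixes. */
        printf("%u ms\n", jiffies_to_msecs_sketch(remaining));
        return 0;
    }
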
+diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
+index 8835065029d3..c93d8ef8df34 100644
+--- a/drivers/xen/manage.c
++++ b/drivers/xen/manage.c
+@@ -289,8 +289,15 @@ static void sysrq_handler(struct xenbus_watch *watch, const char *path,
+ return;
+ }
+
+- if (sysrq_key != '\0')
+- xenbus_printf(xbt, "control", "sysrq", "%c", '\0');
++ if (sysrq_key != '\0') {
++ err = xenbus_printf(xbt, "control", "sysrq", "%c", '\0');
++ if (err) {
++ pr_err("%s: Error %d writing sysrq in control/sysrq\n",
++ __func__, err);
++ xenbus_transaction_end(xbt, 1);
++ return;
++ }
++ }
+
+ err = xenbus_transaction_end(xbt, 0);
+ if (err == -EAGAIN)
+@@ -342,7 +349,12 @@ static int setup_shutdown_watcher(void)
+ continue;
+ snprintf(node, FEATURE_PATH_SIZE, "feature-%s",
+ shutdown_handlers[idx].command);
+- xenbus_printf(XBT_NIL, "control", node, "%u", 1);
++ err = xenbus_printf(XBT_NIL, "control", node, "%u", 1);
++ if (err) {
++ pr_err("%s: Error %d writing %s\n", __func__,
++ err, node);
++ return err;
++ }
+ }
+
+ return 0;
+diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
+index 7bc88fd43cfc..e2f3e8b0fba9 100644
+--- a/drivers/xen/xen-scsiback.c
++++ b/drivers/xen/xen-scsiback.c
+@@ -1012,6 +1012,7 @@ static void scsiback_do_add_lun(struct vscsibk_info *info, const char *state,
+ {
+ struct v2p_entry *entry;
+ unsigned long flags;
++ int err;
+
+ if (try) {
+ spin_lock_irqsave(&info->v2p_lock, flags);
+@@ -1027,8 +1028,11 @@ static void scsiback_do_add_lun(struct vscsibk_info *info, const char *state,
+ scsiback_del_translation_entry(info, vir);
+ }
+ } else if (!try) {
+- xenbus_printf(XBT_NIL, info->dev->nodename, state,
++ err = xenbus_printf(XBT_NIL, info->dev->nodename, state,
+ "%d", XenbusStateClosed);
++ if (err)
++ xenbus_dev_error(info->dev, err,
++ "%s: writing %s", __func__, state);
+ }
+ }
+
+@@ -1067,8 +1071,11 @@ static void scsiback_do_1lun_hotplug(struct vscsibk_info *info, int op,
+ snprintf(str, sizeof(str), "vscsi-devs/%s/p-dev", ent);
+ val = xenbus_read(XBT_NIL, dev->nodename, str, NULL);
+ if (IS_ERR(val)) {
+- xenbus_printf(XBT_NIL, dev->nodename, state,
++ err = xenbus_printf(XBT_NIL, dev->nodename, state,
+ "%d", XenbusStateClosed);
++ if (err)
++ xenbus_dev_error(info->dev, err,
++ "%s: writing %s", __func__, state);
+ return;
+ }
+ strlcpy(phy, val, VSCSI_NAMELEN);
+@@ -1079,8 +1086,11 @@ static void scsiback_do_1lun_hotplug(struct vscsibk_info *info, int op,
+ err = xenbus_scanf(XBT_NIL, dev->nodename, str, "%u:%u:%u:%u",
+ &vir.hst, &vir.chn, &vir.tgt, &vir.lun);
+ if (XENBUS_EXIST_ERR(err)) {
+- xenbus_printf(XBT_NIL, dev->nodename, state,
++ err = xenbus_printf(XBT_NIL, dev->nodename, state,
+ "%d", XenbusStateClosed);
++ if (err)
++ xenbus_dev_error(info->dev, err,
++ "%s: writing %s", __func__, state);
+ return;
+ }
+
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index ad8a69ba7f13..7391b123a17a 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -1151,11 +1151,6 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
+ return ret;
+ }
+
+- if (sctx->is_dev_replace && !is_metadata && !have_csum) {
+- sblocks_for_recheck = NULL;
+- goto nodatasum_case;
+- }
+-
+ /*
+ * read all mirrors one after the other. This includes to
+ * re-read the extent or metadata block that failed (that was
+@@ -1268,13 +1263,19 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
+ goto out;
+ }
+
+- if (!is_metadata && !have_csum) {
++ /*
++	 * NOTE: Even for the nodatasum case, it's still possible that this
++	 * is a compressed data extent, thus scrub_fixup_nodatasum(), which
++	 * writes inode page cache onto disk, could cause serious data
++	 * corruption.
++	 *
++	 * So here we can only read from disk, and hope our recovery reaches
++	 * disk before the newer write.
++ */
++ if (0 && !is_metadata && !have_csum) {
+ struct scrub_fixup_nodatasum *fixup_nodatasum;
+
+ WARN_ON(sctx->is_dev_replace);
+
+-nodatasum_case:
+-
+ /*
+ * !is_metadata and !have_csum, this means that the data
+ * might not be COWed, that it might be modified
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index ae056927080d..6b6868d0cdcc 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -1123,6 +1123,7 @@ static struct dentry *splice_dentry(struct dentry *dn, struct inode *in)
+ if (IS_ERR(realdn)) {
+ pr_err("splice_dentry error %ld %p inode %p ino %llx.%llx\n",
+ PTR_ERR(realdn), dn, in, ceph_vinop(in));
++ dput(dn);
+ dn = realdn; /* note realdn contains the error */
+ goto out;
+ } else if (realdn) {
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 71013c5268b9..9c8d5f12546b 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -923,8 +923,9 @@ SMB2_sess_alloc_buffer(struct SMB2_sess_data *sess_data)
+ req->PreviousSessionId = sess_data->previous_session;
+
+ req->Flags = 0; /* MBZ */
+- /* to enable echos and oplocks */
+- req->sync_hdr.CreditRequest = cpu_to_le16(3);
++
++ /* enough to enable echos and oplocks and one max size write */
++ req->sync_hdr.CreditRequest = cpu_to_le16(130);
+
+ /* only one of SMB2 signing flags may be set in SMB2 request */
+ if (server->sign)
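
For the credit bump above: SMB2 servers charge one credit per 64 KiB of
payload, so, assuming the common 8 MiB maximum write size, a single maximal
write costs 128 credits, leaving two more for an echo and an oplock break -
which is plausibly where 130 comes from. The arithmetic:

    #include <stdio.h>

    int main(void)
    {
        unsigned int max_write   = 8 * 1024 * 1024; /* assumed max write */
        unsigned int credit_unit = 65536;           /* 64 KiB per credit */
        unsigned int write_cost  = max_write / credit_unit;   /* 128 */
        printf("credits: %u\n", write_cost + 2);    /* echo + oplock: 130 */
        return 0;
    }
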
+diff --git a/fs/exec.c b/fs/exec.c
+index 183059c427b9..7d00f8ceba3f 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -290,7 +290,7 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
+ struct vm_area_struct *vma = NULL;
+ struct mm_struct *mm = bprm->mm;
+
+- bprm->vma = vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
++ bprm->vma = vma = vm_area_alloc(mm);
+ if (!vma)
+ return -ENOMEM;
+
+@@ -298,7 +298,6 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
+ err = -EINTR;
+ goto err_free;
+ }
+- vma->vm_mm = mm;
+
+ /*
+ * Place the stack at the largest stack address the architecture
+@@ -311,7 +310,6 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
+ vma->vm_start = vma->vm_end - PAGE_SIZE;
+ vma->vm_flags = VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP;
+ vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+- INIT_LIST_HEAD(&vma->anon_vma_chain);
+
+ err = insert_vm_struct(mm, vma);
+ if (err)
+@@ -326,7 +324,7 @@ err:
+ up_write(&mm->mmap_sem);
+ err_free:
+ bprm->vma = NULL;
+- kmem_cache_free(vm_area_cachep, vma);
++ vm_area_free(vma);
+ return err;
+ }
+
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 39187e7b3748..050838f03328 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -14,6 +14,7 @@
+ #include <linux/log2.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
++#include <linux/nospec.h>
+ #include <linux/backing-dev.h>
+ #include <trace/events/ext4.h>
+
+@@ -2140,7 +2141,8 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
+ * This should tell if fe_len is exactly power of 2
+ */
+ if ((ac->ac_g_ex.fe_len & (~(1 << (i - 1)))) == 0)
+- ac->ac_2order = i - 1;
++ ac->ac_2order = array_index_nospec(i - 1,
++ sb->s_blocksize_bits + 2);
+ }
+
+ /* if stream allocation is enabled, use global goal */
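
The array_index_nospec() call above is a Spectre-v1 mitigation: i - 1
derives from on-disk-influenced state, and without clamping, a mispredicted
bounds check could let a speculative load index past s_blocksize_bits + 2
entries. A simplified userspace flavor of the clamp; the kernel version is
built on a per-architecture barrier and mask, this only illustrates the
branch-free idea:

    #include <stddef.h>
    #include <stdio.h>

    /* Yields idx when idx < size, 0 otherwise, via a mask rather than
     * a data-dependent branch. */
    static size_t index_nospec_sketch(size_t idx, size_t size)
    {
        size_t mask = (size_t)-(size_t)(idx < size); /* all-ones or 0 */
        return idx & mask;
    }

    int main(void)
    {
        size_t bound = 12 + 2;   /* like sb->s_blocksize_bits + 2 */
        printf("%zu %zu\n",
               index_nospec_sketch(9, bound),    /* 9: in range  */
               index_nospec_sketch(99, bound));  /* 0: clamped   */
        return 0;
    }
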
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 2d94eb9cd386..0f6291675123 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -8620,6 +8620,8 @@ nfs4_layoutget_handle_exception(struct rpc_task *task,
+
+ dprintk("--> %s tk_status => %d\n", __func__, -task->tk_status);
+
++ nfs4_sequence_free_slot(&lgp->res.seq_res);
++
+ switch (nfs4err) {
+ case 0:
+ goto out;
+@@ -8684,7 +8686,6 @@ nfs4_layoutget_handle_exception(struct rpc_task *task,
+ goto out;
+ }
+
+- nfs4_sequence_free_slot(&lgp->res.seq_res);
+ err = nfs4_handle_exception(server, nfs4err, exception);
+ if (!status) {
+ if (exception->retry)
+@@ -8810,20 +8811,22 @@ nfs4_proc_layoutget(struct nfs4_layoutget *lgp, long *timeout, gfp_t gfp_flags)
+ if (IS_ERR(task))
+ return ERR_CAST(task);
+ status = rpc_wait_for_completion_task(task);
+- if (status == 0) {
++ if (status != 0)
++ goto out;
++
++ /* if layoutp->len is 0, nfs4_layoutget_prepare called rpc_exit */
++ if (task->tk_status < 0 || lgp->res.layoutp->len == 0) {
+ status = nfs4_layoutget_handle_exception(task, lgp, &exception);
+ *timeout = exception.timeout;
+- }
+-
++ } else
++ lseg = pnfs_layout_process(lgp);
++out:
+ trace_nfs4_layoutget(lgp->args.ctx,
+ &lgp->args.range,
+ &lgp->res.range,
+ &lgp->res.stateid,
+ status);
+
+- /* if layoutp->len is 0, nfs4_layoutget_prepare called rpc_exit */
+- if (status == 0 && lgp->res.layoutp->len)
+- lseg = pnfs_layout_process(lgp);
+ rpc_put_task(task);
+ dprintk("<-- %s status=%d\n", __func__, status);
+ if (status)
+diff --git a/fs/reiserfs/xattr.c b/fs/reiserfs/xattr.c
+index 5dbf5324bdda..8fb367c5bdd7 100644
+--- a/fs/reiserfs/xattr.c
++++ b/fs/reiserfs/xattr.c
+@@ -792,8 +792,10 @@ static int listxattr_filler(struct dir_context *ctx, const char *name,
+ return 0;
+ size = namelen + 1;
+ if (b->buf) {
+- if (size > b->size)
++ if (b->pos + size > b->size) {
++ b->pos = -ERANGE;
+ return -ERANGE;
++ }
+ memcpy(b->buf + b->pos, name, namelen);
+ b->buf[b->pos + namelen] = 0;
+ }
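
The listxattr fix above turns a per-entry size check into a cumulative one:
each name individually fitting in the buffer is not enough once b->pos has
advanced, and the old test let the memcpy() run past the end. A minimal
sketch of the corrected pattern (struct and names are illustrative, not the
reiserfs ones):

    #include <errno.h>
    #include <string.h>

    struct listbuf { char *buf; size_t size; size_t pos; };

    /* Append one NUL-terminated name, refusing once the cumulative
     * output would overflow the caller's buffer. */
    static int append_name(struct listbuf *b, const char *name,
                           size_t namelen)
    {
        size_t entry = namelen + 1;
        if (b->pos + entry > b->size)
            return -ERANGE;
        memcpy(b->buf + b->pos, name, namelen);
        b->buf[b->pos + namelen] = 0;
        b->pos += entry;
        return 0;
    }
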
+diff --git a/include/linux/fsl/guts.h b/include/linux/fsl/guts.h
+index 3efa3b861d44..941b11811f85 100644
+--- a/include/linux/fsl/guts.h
++++ b/include/linux/fsl/guts.h
+@@ -16,6 +16,7 @@
+ #define __FSL_GUTS_H__
+
+ #include <linux/types.h>
++#include <linux/io.h>
+
+ /**
+ * Global Utility Registers.
+diff --git a/include/linux/kthread.h b/include/linux/kthread.h
+index 2803264c512f..c1961761311d 100644
+--- a/include/linux/kthread.h
++++ b/include/linux/kthread.h
+@@ -62,7 +62,6 @@ void *kthread_probe_data(struct task_struct *k);
+ int kthread_park(struct task_struct *k);
+ void kthread_unpark(struct task_struct *k);
+ void kthread_parkme(void);
+-void kthread_park_complete(struct task_struct *k);
+
+ int kthreadd(void *unused);
+ extern struct task_struct *kthreadd_task;
+diff --git a/include/linux/marvell_phy.h b/include/linux/marvell_phy.h
+index 4f5f8c21e283..1eb6f244588d 100644
+--- a/include/linux/marvell_phy.h
++++ b/include/linux/marvell_phy.h
+@@ -27,6 +27,8 @@
+ */
+ #define MARVELL_PHY_ID_88E6390 0x01410f90
+
++#define MARVELL_PHY_FAMILY_ID(id) ((id) >> 4)
++
+ /* struct phy_device dev_flags definitions */
+ #define MARVELL_PHY_M1145_FLAGS_RESISTANCE 0x00000001
+ #define MARVELL_PHY_M1118_DNS323_LEDS 0x00000002
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index edab43d2bec8..28477ff9cf04 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -154,7 +154,9 @@ extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
+ * mmap() functions).
+ */
+
+-extern struct kmem_cache *vm_area_cachep;
++struct vm_area_struct *vm_area_alloc(struct mm_struct *);
++struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
++void vm_area_free(struct vm_area_struct *);
+
+ #ifndef CONFIG_MMU
+ extern struct rb_root nommu_region_tree;
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 73178a2fcee0..27f9bdd7e46d 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1236,6 +1236,8 @@ int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr,
+ unsigned long pci_address_to_pio(phys_addr_t addr);
+ phys_addr_t pci_pio_to_address(unsigned long pio);
+ int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr);
++int devm_pci_remap_iospace(struct device *dev, const struct resource *res,
++ phys_addr_t phys_addr);
+ void pci_unmap_iospace(struct resource *res);
+ void __iomem *devm_pci_remap_cfgspace(struct device *dev,
+ resource_size_t offset,
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index ca3f3eae8980..5c32faa4af99 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -117,7 +117,7 @@ struct task_group;
+ * the comment with set_special_state().
+ */
+ #define is_special_task_state(state) \
+- ((state) & (__TASK_STOPPED | __TASK_TRACED | TASK_DEAD))
++ ((state) & (__TASK_STOPPED | __TASK_TRACED | TASK_PARKED | TASK_DEAD))
+
+ #define __set_current_state(state_value) \
+ do { \
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index aeebbbb9e0bd..62b6bfcce152 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -354,14 +354,7 @@ struct ipv6_txoptions *ipv6_dup_options(struct sock *sk,
+ struct ipv6_txoptions *ipv6_renew_options(struct sock *sk,
+ struct ipv6_txoptions *opt,
+ int newtype,
+- struct ipv6_opt_hdr __user *newopt,
+- int newoptlen);
+-struct ipv6_txoptions *
+-ipv6_renew_options_kern(struct sock *sk,
+- struct ipv6_txoptions *opt,
+- int newtype,
+- struct ipv6_opt_hdr *newopt,
+- int newoptlen);
++ struct ipv6_opt_hdr *newopt);
+ struct ipv6_txoptions *ipv6_fixup_options(struct ipv6_txoptions *opt_space,
+ struct ipv6_txoptions *opt);
+
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index 47e35cce3b64..a71264d75d7f 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -128,6 +128,7 @@ struct net {
+ #endif
+ #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
+ struct netns_nf_frag nf_frag;
++ struct ctl_table_header *nf_frag_frags_hdr;
+ #endif
+ struct sock *nfnl;
+ struct sock *nfnl_stash;
+diff --git a/include/net/netns/ipv6.h b/include/net/netns/ipv6.h
+index c29f09cfc9d7..5ce433f9e88a 100644
+--- a/include/net/netns/ipv6.h
++++ b/include/net/netns/ipv6.h
+@@ -107,7 +107,6 @@ struct netns_ipv6 {
+
+ #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
+ struct netns_nf_frag {
+- struct netns_sysctl_ipv6 sysctl;
+ struct netns_frags frags;
+ };
+ #endif
+diff --git a/include/net/tc_act/tc_csum.h b/include/net/tc_act/tc_csum.h
+index 9470fd7e4350..32d2454c0479 100644
+--- a/include/net/tc_act/tc_csum.h
++++ b/include/net/tc_act/tc_csum.h
+@@ -7,7 +7,6 @@
+ #include <linux/tc_act/tc_csum.h>
+
+ struct tcf_csum_params {
+- int action;
+ u32 update_flags;
+ struct rcu_head rcu;
+ };
+diff --git a/include/net/tc_act/tc_tunnel_key.h b/include/net/tc_act/tc_tunnel_key.h
+index efef0b4b1b2b..46b8c7f1c8d5 100644
+--- a/include/net/tc_act/tc_tunnel_key.h
++++ b/include/net/tc_act/tc_tunnel_key.h
+@@ -18,7 +18,6 @@
+ struct tcf_tunnel_key_params {
+ struct rcu_head rcu;
+ int tcft_action;
+- int action;
+ struct metadata_dst *tcft_enc_metadata;
+ };
+
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 5ccc4ec646cb..b0639f336976 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -907,8 +907,6 @@ enum tcp_ca_event {
+ CA_EVENT_LOSS, /* loss timeout */
+ CA_EVENT_ECN_NO_CE, /* ECT set, but not CE marked */
+ CA_EVENT_ECN_IS_CE, /* received CE marked IP packet */
+- CA_EVENT_DELAYED_ACK, /* Delayed ack is sent */
+- CA_EVENT_NON_DELAYED_ACK,
+ };
+
+ /* Information about inbound ACK, passed to cong_ops->in_ack_event() */
+diff --git a/include/uapi/linux/nbd.h b/include/uapi/linux/nbd.h
+index 85a3fb65e40a..20d6cc91435d 100644
+--- a/include/uapi/linux/nbd.h
++++ b/include/uapi/linux/nbd.h
+@@ -53,6 +53,9 @@ enum {
+ /* These are client behavior specific flags. */
+ #define NBD_CFLAG_DESTROY_ON_DISCONNECT (1 << 0) /* delete the nbd device on
+ disconnect. */
++#define NBD_CFLAG_DISCONNECT_ON_CLOSE (1 << 1) /* disconnect the nbd device on
++ * close by last opener.
++ */
+
+ /* userspace doesn't need the nbd_device structure */
+
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index b76828f23b49..3bd61b004418 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -743,13 +743,15 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
+ * old element will be freed immediately.
+ * Otherwise return an error
+ */
+- atomic_dec(&htab->count);
+- return ERR_PTR(-E2BIG);
++ l_new = ERR_PTR(-E2BIG);
++ goto dec_count;
+ }
+ l_new = kmalloc_node(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN,
+ htab->map.numa_node);
+- if (!l_new)
+- return ERR_PTR(-ENOMEM);
++ if (!l_new) {
++ l_new = ERR_PTR(-ENOMEM);
++ goto dec_count;
++ }
+ }
+
+ memcpy(l_new->key, key, key_size);
+@@ -762,7 +764,8 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
+ GFP_ATOMIC | __GFP_NOWARN);
+ if (!pptr) {
+ kfree(l_new);
+- return ERR_PTR(-ENOMEM);
++ l_new = ERR_PTR(-ENOMEM);
++ goto dec_count;
+ }
+ }
+
+@@ -776,6 +779,9 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
+
+ l_new->hash = hash;
+ return l_new;
++dec_count:
++ atomic_dec(&htab->count);
++ return l_new;
+ }
+
+ static int check_flags(struct bpf_htab *htab, struct htab_elem *l_old,
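
The alloc_htab_elem() rework above is the usual goto-cleanup shape: once
htab->count has been bumped, every failure path must undo it, and funneling
them all through one dec_count label keeps the counter balanced no matter
which allocation fails. A stripped-down sketch of the shape (userspace,
non-atomic, illustrative names):

    #include <stdlib.h>

    static int count;                   /* stands in for htab->count */

    static void *alloc_elem(size_t size, int max_entries)
    {
        void *p = NULL;

        count++;                        /* atomic_inc() in the kernel */
        if (count > max_entries)
            goto dec_count;             /* ERR_PTR(-E2BIG) there */

        p = malloc(size);
        if (!p)
            goto dec_count;             /* ERR_PTR(-ENOMEM) */
        return p;

    dec_count:
        count--;                        /* single place to undo */
        return p;                       /* NULL on any failure */
    }
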
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 5ad558e6f8fe..b9d9b39d4afc 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -303,11 +303,38 @@ struct kmem_cache *files_cachep;
+ struct kmem_cache *fs_cachep;
+
+ /* SLAB cache for vm_area_struct structures */
+-struct kmem_cache *vm_area_cachep;
++static struct kmem_cache *vm_area_cachep;
+
+ /* SLAB cache for mm_struct structures (tsk->mm) */
+ static struct kmem_cache *mm_cachep;
+
++struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
++{
++ struct vm_area_struct *vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
++
++ if (vma) {
++ vma->vm_mm = mm;
++ INIT_LIST_HEAD(&vma->anon_vma_chain);
++ }
++ return vma;
++}
++
++struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
++{
++ struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
++
++ if (new) {
++ *new = *orig;
++ INIT_LIST_HEAD(&new->anon_vma_chain);
++ }
++ return new;
++}
++
++void vm_area_free(struct vm_area_struct *vma)
++{
++ kmem_cache_free(vm_area_cachep, vma);
++}
++
+ static void account_kernel_stack(struct task_struct *tsk, int account)
+ {
+ void *stack = task_stack_page(tsk);
+@@ -455,11 +482,9 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ goto fail_nomem;
+ charge = len;
+ }
+- tmp = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
++ tmp = vm_area_dup(mpnt);
+ if (!tmp)
+ goto fail_nomem;
+- *tmp = *mpnt;
+- INIT_LIST_HEAD(&tmp->anon_vma_chain);
+ retval = vma_dup_policy(mpnt, tmp);
+ if (retval)
+ goto fail_nomem_policy;
+@@ -539,7 +564,7 @@ fail_uprobe_end:
+ fail_nomem_anon_vma_fork:
+ mpol_put(vma_policy(tmp));
+ fail_nomem_policy:
+- kmem_cache_free(vm_area_cachep, tmp);
++ vm_area_free(tmp);
+ fail_nomem:
+ retval = -ENOMEM;
+ vm_unacct_memory(charge);
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 1a481ae12dec..486dedbd9af5 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -177,9 +177,20 @@ void *kthread_probe_data(struct task_struct *task)
+ static void __kthread_parkme(struct kthread *self)
+ {
+ for (;;) {
+- set_current_state(TASK_PARKED);
++ /*
++ * TASK_PARKED is a special state; we must serialize against
++ * possible pending wakeups to avoid store-store collisions on
++ * task->state.
++ *
++ * Such a collision might possibly result in the task state
++		 * changing from TASK_PARKED and us failing the
++ * wait_task_inactive() in kthread_park().
++ */
++ set_special_state(TASK_PARKED);
+ if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags))
+ break;
++
++ complete_all(&self->parked);
+ schedule();
+ }
+ __set_current_state(TASK_RUNNING);
+@@ -191,11 +202,6 @@ void kthread_parkme(void)
+ }
+ EXPORT_SYMBOL_GPL(kthread_parkme);
+
+-void kthread_park_complete(struct task_struct *k)
+-{
+- complete_all(&to_kthread(k)->parked);
+-}
+-
+ static int kthread(void *_create)
+ {
+ /* Copy data: it's on kthread's stack */
+@@ -467,6 +473,9 @@ void kthread_unpark(struct task_struct *k)
+
+ reinit_completion(&kthread->parked);
+ clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
++ /*
++ * __kthread_parkme() will either see !SHOULD_PARK or get the wakeup.
++ */
+ wake_up_state(k, TASK_PARKED);
+ }
+ EXPORT_SYMBOL_GPL(kthread_unpark);
+@@ -493,7 +502,16 @@ int kthread_park(struct task_struct *k)
+ set_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
+ if (k != current) {
+ wake_up_process(k);
++ /*
++		 * Wait for __kthread_parkme() to complete(); this means we
++ * _will_ have TASK_PARKED and are about to call schedule().
++ */
+ wait_for_completion(&kthread->parked);
++ /*
++ * Now wait for that schedule() to complete and the task to
++ * get scheduled out.
++ */
++ WARN_ON_ONCE(!wait_task_inactive(k, TASK_PARKED));
+ }
+
+ return 0;
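
The kthread changes above restore a strict park handshake: the parker sets
KTHREAD_SHOULD_PARK and waits on a completion that the kthread fires only
after committing to TASK_PARKED, and wait_task_inactive() then confirms it
has really scheduled out. A loose pthread analogue of the
flag/complete/wait choreography - an analogy only, since set_special_state()
has no direct userspace equivalent:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int should_park;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
    static int is_parked;

    static void *worker(void *arg)
    {
        while (!atomic_load(&should_park))
            ;                           /* stands in for real work */
        pthread_mutex_lock(&lock);
        is_parked = 1;                  /* like complete_all(&parked) */
        pthread_cond_broadcast(&cv);
        while (atomic_load(&should_park))
            pthread_cond_wait(&cv, &lock);  /* like schedule() */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);

        atomic_store(&should_park, 1);  /* kthread_park() */
        pthread_mutex_lock(&lock);
        while (!is_parked)
            pthread_cond_wait(&cv, &lock);  /* wait_for_completion() */
        pthread_mutex_unlock(&lock);
        printf("worker parked\n");

        pthread_mutex_lock(&lock);      /* kthread_unpark() */
        atomic_store(&should_park, 0);
        pthread_cond_broadcast(&cv);
        pthread_mutex_unlock(&lock);
        pthread_join(t, NULL);
        return 0;
    }
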
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 023386338269..7184cea3ca10 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -1261,11 +1261,11 @@ unsigned long lockdep_count_forward_deps(struct lock_class *class)
+ this.parent = NULL;
+ this.class = class;
+
+- local_irq_save(flags);
++ raw_local_irq_save(flags);
+ arch_spin_lock(&lockdep_lock);
+ ret = __lockdep_count_forward_deps(&this);
+ arch_spin_unlock(&lockdep_lock);
+- local_irq_restore(flags);
++ raw_local_irq_restore(flags);
+
+ return ret;
+ }
+@@ -1288,11 +1288,11 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
+ this.parent = NULL;
+ this.class = class;
+
+- local_irq_save(flags);
++ raw_local_irq_save(flags);
+ arch_spin_lock(&lockdep_lock);
+ ret = __lockdep_count_backward_deps(&this);
+ arch_spin_unlock(&lockdep_lock);
+- local_irq_restore(flags);
++ raw_local_irq_restore(flags);
+
+ return ret;
+ }
+@@ -4407,7 +4407,7 @@ void debug_check_no_locks_freed(const void *mem_from, unsigned long mem_len)
+ if (unlikely(!debug_locks))
+ return;
+
+- local_irq_save(flags);
++ raw_local_irq_save(flags);
+ for (i = 0; i < curr->lockdep_depth; i++) {
+ hlock = curr->held_locks + i;
+
+@@ -4418,7 +4418,7 @@ void debug_check_no_locks_freed(const void *mem_from, unsigned long mem_len)
+ print_freed_lock_bug(curr, mem_from, mem_from + mem_len, hlock);
+ break;
+ }
+- local_irq_restore(flags);
++ raw_local_irq_restore(flags);
+ }
+ EXPORT_SYMBOL_GPL(debug_check_no_locks_freed);
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index ec945451b9ef..0b817812f17f 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -7,7 +7,6 @@
+ */
+ #include "sched.h"
+
+-#include <linux/kthread.h>
+ #include <linux/nospec.h>
+
+ #include <asm/switch_to.h>
+@@ -2738,28 +2737,20 @@ static struct rq *finish_task_switch(struct task_struct *prev)
+ membarrier_mm_sync_core_before_usermode(mm);
+ mmdrop(mm);
+ }
+- if (unlikely(prev_state & (TASK_DEAD|TASK_PARKED))) {
+- switch (prev_state) {
+- case TASK_DEAD:
+- if (prev->sched_class->task_dead)
+- prev->sched_class->task_dead(prev);
++ if (unlikely(prev_state == TASK_DEAD)) {
++ if (prev->sched_class->task_dead)
++ prev->sched_class->task_dead(prev);
+
+- /*
+- * Remove function-return probe instances associated with this
+- * task and put them back on the free list.
+- */
+- kprobe_flush_task(prev);
+-
+- /* Task is done with its stack. */
+- put_task_stack(prev);
++ /*
++ * Remove function-return probe instances associated with this
++ * task and put them back on the free list.
++ */
++ kprobe_flush_task(prev);
+
+- put_task_struct(prev);
+- break;
++ /* Task is done with its stack. */
++ put_task_stack(prev);
+
+- case TASK_PARKED:
+- kthread_park_complete(prev);
+- break;
+- }
++ put_task_struct(prev);
+ }
+
+ tick_nohz_task_switch();
+@@ -3127,7 +3118,9 @@ static void sched_tick_remote(struct work_struct *work)
+ struct tick_work *twork = container_of(dwork, struct tick_work, work);
+ int cpu = twork->cpu;
+ struct rq *rq = cpu_rq(cpu);
++ struct task_struct *curr;
+ struct rq_flags rf;
++ u64 delta;
+
+ /*
+ * Handle the tick only if it appears the remote CPU is running in full
+@@ -3136,24 +3129,28 @@ static void sched_tick_remote(struct work_struct *work)
+ * statistics and checks timeslices in a time-independent way, regardless
+ * of when exactly it is running.
+ */
+- if (!idle_cpu(cpu) && tick_nohz_tick_stopped_cpu(cpu)) {
+- struct task_struct *curr;
+- u64 delta;
++ if (idle_cpu(cpu) || !tick_nohz_tick_stopped_cpu(cpu))
++ goto out_requeue;
+
+- rq_lock_irq(rq, &rf);
+- update_rq_clock(rq);
+- curr = rq->curr;
+- delta = rq_clock_task(rq) - curr->se.exec_start;
++ rq_lock_irq(rq, &rf);
++ curr = rq->curr;
++ if (is_idle_task(curr))
++ goto out_unlock;
+
+- /*
+- * Make sure the next tick runs within a reasonable
+- * amount of time.
+- */
+- WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
+- curr->sched_class->task_tick(rq, curr, 0);
+- rq_unlock_irq(rq, &rf);
+- }
++ update_rq_clock(rq);
++ delta = rq_clock_task(rq) - curr->se.exec_start;
++
++ /*
++ * Make sure the next tick runs within a reasonable
++ * amount of time.
++ */
++ WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++ curr->sched_class->task_tick(rq, curr, 0);
++
++out_unlock:
++ rq_unlock_irq(rq, &rf);
+
++out_requeue:
+ /*
+ * Run the remote tick once per second (1Hz). This arbitrary
+ * frequency is large enough to avoid overload but short enough
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 8b50eea4b607..b5fbdde6afa9 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2296,8 +2296,17 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
+ if (task_on_rq_queued(p) && p->dl.dl_runtime)
+ task_non_contending(p);
+
+- if (!task_on_rq_queued(p))
++ if (!task_on_rq_queued(p)) {
++ /*
++ * Inactive timer is armed. However, p is leaving DEADLINE and
++ * might migrate away from this rq while continuing to run on
++ * some other class. We need to remove its contribution from
++ * this rq running_bw now, or sub_rq_bw (below) will complain.
++ */
++ if (p->dl.dl_non_contending)
++ sub_running_bw(&p->dl, &rq->dl);
+ sub_rq_bw(&p->dl, &rq->dl);
++ }
+
+ /*
+ * We cannot use inactive_task_timer() to invoke sub_running_bw()
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 183068d22849..9b0c02ec7517 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3941,18 +3941,10 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep)
+ if (!sched_feat(UTIL_EST))
+ return;
+
+- /*
+- * Update root cfs_rq's estimated utilization
+- *
+- * If *p is the last task then the root cfs_rq's estimated utilization
+- * of a CPU is 0 by definition.
+- */
+- ue.enqueued = 0;
+- if (cfs_rq->nr_running) {
+- ue.enqueued = cfs_rq->avg.util_est.enqueued;
+- ue.enqueued -= min_t(unsigned int, ue.enqueued,
+- (_task_util_est(p) | UTIL_AVG_UNCHANGED));
+- }
++ /* Update root cfs_rq's estimated utilization */
++ ue.enqueued = cfs_rq->avg.util_est.enqueued;
++ ue.enqueued -= min_t(unsigned int, ue.enqueued,
++ (_task_util_est(p) | UTIL_AVG_UNCHANGED));
+ WRITE_ONCE(cfs_rq->avg.util_est.enqueued, ue.enqueued);
+
+ /*
+@@ -4549,6 +4541,7 @@ void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b)
+ now = sched_clock_cpu(smp_processor_id());
+ cfs_b->runtime = cfs_b->quota;
+ cfs_b->runtime_expires = now + ktime_to_ns(cfs_b->period);
++ cfs_b->expires_seq++;
+ }
+
+ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
+@@ -4571,6 +4564,7 @@ static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+ struct task_group *tg = cfs_rq->tg;
+ struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(tg);
+ u64 amount = 0, min_amount, expires;
++ int expires_seq;
+
+ /* note: this is a positive sum as runtime_remaining <= 0 */
+ min_amount = sched_cfs_bandwidth_slice() - cfs_rq->runtime_remaining;
+@@ -4587,6 +4581,7 @@ static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+ cfs_b->idle = 0;
+ }
+ }
++ expires_seq = cfs_b->expires_seq;
+ expires = cfs_b->runtime_expires;
+ raw_spin_unlock(&cfs_b->lock);
+
+@@ -4596,8 +4591,10 @@ static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+ * spread between our sched_clock and the one on which runtime was
+ * issued.
+ */
+- if ((s64)(expires - cfs_rq->runtime_expires) > 0)
++ if (cfs_rq->expires_seq != expires_seq) {
++ cfs_rq->expires_seq = expires_seq;
+ cfs_rq->runtime_expires = expires;
++ }
+
+ return cfs_rq->runtime_remaining > 0;
+ }
+@@ -4623,12 +4620,9 @@ static void expire_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+ * has not truly expired.
+ *
+ 	 * Fortunately we can determine whether this is the case by checking
+- * whether the global deadline has advanced. It is valid to compare
+- * cfs_b->runtime_expires without any locks since we only care about
+- * exact equality, so a partial write will still work.
++	 * whether the global deadline (cfs_b->expires_seq) has advanced.
+ */
+-
+- if (cfs_rq->runtime_expires != cfs_b->runtime_expires) {
++ if (cfs_rq->expires_seq == cfs_b->expires_seq) {
+ /* extend local deadline, drift is bounded above by 2 ticks */
+ cfs_rq->runtime_expires += TICK_NSEC;
+ } else {
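
The expires_seq change above swaps a fragile comparison for an epoch
counter: two u64 deadlines read without locks can only be compared for
exact equality, which breaks when clocks drift between CPUs, whereas a
sequence number bumped once per refill makes "a new period started"
unambiguous. A minimal sketch of the pattern with illustrative names:

    #include <stdio.h>

    struct pool  { int expires_seq; unsigned long long expires; };
    struct local { int expires_seq; unsigned long long expires; };

    static void refill(struct pool *p, unsigned long long now)
    {
        p->expires = now + 100;     /* deadline for the new period */
        p->expires_seq++;           /* epoch marker readers compare */
    }

    static void sync_deadline(struct local *l, const struct pool *p)
    {
        if (l->expires_seq != p->expires_seq) {  /* new period began */
            l->expires_seq = p->expires_seq;
            l->expires = p->expires;
        }
    }

    int main(void)
    {
        struct pool p = { 0, 0 };
        struct local l = { 0, 0 };

        refill(&p, 1000);
        sync_deadline(&l, &p);
        printf("seq=%d expires=%llu\n", l.expires_seq, l.expires);
        return 0;
    }
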
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index cb467c221b15..7548b373d1c5 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -334,9 +334,10 @@ struct cfs_bandwidth {
+ u64 runtime;
+ s64 hierarchical_quota;
+ u64 runtime_expires;
++ int expires_seq;
+
+- int idle;
+- int period_active;
++ short idle;
++ short period_active;
+ struct hrtimer period_timer;
+ struct hrtimer slack_timer;
+ struct list_head throttled_cfs_rq;
+@@ -551,6 +552,7 @@ struct cfs_rq {
+
+ #ifdef CONFIG_CFS_BANDWIDTH
+ int runtime_enabled;
++ int expires_seq;
+ u64 runtime_expires;
+ s64 runtime_remaining;
+
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index a583b6494b95..214820a14edf 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2962,6 +2962,7 @@ out_nobuffer:
+ }
+ EXPORT_SYMBOL_GPL(trace_vbprintk);
+
++__printf(3, 0)
+ static int
+ __trace_array_vprintk(struct ring_buffer *buffer,
+ unsigned long ip, const char *fmt, va_list args)
+@@ -3016,12 +3017,14 @@ out_nobuffer:
+ return len;
+ }
+
++__printf(3, 0)
+ int trace_array_vprintk(struct trace_array *tr,
+ unsigned long ip, const char *fmt, va_list args)
+ {
+ return __trace_array_vprintk(tr->trace_buffer.buffer, ip, fmt, args);
+ }
+
++__printf(3, 0)
+ int trace_array_printk(struct trace_array *tr,
+ unsigned long ip, const char *fmt, ...)
+ {
+@@ -3037,6 +3040,7 @@ int trace_array_printk(struct trace_array *tr,
+ return ret;
+ }
+
++__printf(3, 4)
+ int trace_array_printk_buf(struct ring_buffer *buffer,
+ unsigned long ip, const char *fmt, ...)
+ {
+@@ -3052,6 +3056,7 @@ int trace_array_printk_buf(struct ring_buffer *buffer,
+ return ret;
+ }
+
++__printf(2, 0)
+ int trace_vprintk(unsigned long ip, const char *fmt, va_list args)
+ {
+ return trace_array_vprintk(&global_trace, ip, fmt, args);
+diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
+index f185455b3406..c3bd5209da38 100644
+--- a/mm/kasan/kasan.c
++++ b/mm/kasan/kasan.c
+@@ -619,12 +619,13 @@ void kasan_kfree_large(void *ptr, unsigned long ip)
+ int kasan_module_alloc(void *addr, size_t size)
+ {
+ void *ret;
++ size_t scaled_size;
+ size_t shadow_size;
+ unsigned long shadow_start;
+
+ shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
+- shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
+- PAGE_SIZE);
++ scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT;
++ shadow_size = round_up(scaled_size, PAGE_SIZE);
+
+ if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
+ return -EINVAL;
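
The kasan_module_alloc() fix above changes the order of rounding: the old
code shifted first, silently dropping the low KASAN_SHADOW_MASK bits of
size, so a size just past a granule boundary could leave the shadow mapping
one page short. Rounding up to the granule before shifting closes that. A
worked example, assuming scale shift 3 (8 bytes of memory per shadow byte)
and 4 KiB pages:

    #include <stdio.h>

    #define SHIFT 3                      /* KASAN_SHADOW_SCALE_SHIFT */
    #define MASK  ((1UL << SHIFT) - 1)   /* KASAN_SHADOW_MASK */
    #define PAGE  4096UL

    static unsigned long round_up_pg(unsigned long x)
    {
        return (x + PAGE - 1) & ~(PAGE - 1);
    }

    int main(void)
    {
        unsigned long size = 8 * PAGE + 1;  /* needs 4097 shadow bytes */
        unsigned long old_sz = round_up_pg(size >> SHIFT);
        unsigned long new_sz = round_up_pg((size + MASK) >> SHIFT);
        /* old: 4096 (one page, short); new: 8192 (two pages, correct) */
        printf("old=%lu new=%lu\n", old_sz, new_sz);
        return 0;
    }
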
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 540cfab8c2c4..55d68c24e742 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -182,7 +182,7 @@ static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
+ if (vma->vm_file)
+ fput(vma->vm_file);
+ mpol_put(vma_policy(vma));
+- kmem_cache_free(vm_area_cachep, vma);
++ vm_area_free(vma);
+ return next;
+ }
+
+@@ -911,7 +911,7 @@ again:
+ anon_vma_merge(vma, next);
+ mm->map_count--;
+ mpol_put(vma_policy(next));
+- kmem_cache_free(vm_area_cachep, next);
++ vm_area_free(next);
+ /*
+ * In mprotect's case 6 (see comments on vma_merge),
+ * we must remove another next too. It would clutter
+@@ -1729,19 +1729,17 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
+ * specific mapper. the address has already been validated, but
+ * not unmapped, but the maps are removed from the list.
+ */
+- vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
++ vma = vm_area_alloc(mm);
+ if (!vma) {
+ error = -ENOMEM;
+ goto unacct_error;
+ }
+
+- vma->vm_mm = mm;
+ vma->vm_start = addr;
+ vma->vm_end = addr + len;
+ vma->vm_flags = vm_flags;
+ vma->vm_page_prot = vm_get_page_prot(vm_flags);
+ vma->vm_pgoff = pgoff;
+- INIT_LIST_HEAD(&vma->anon_vma_chain);
+
+ if (file) {
+ if (vm_flags & VM_DENYWRITE) {
+@@ -1832,7 +1830,7 @@ allow_write_and_free_vma:
+ if (vm_flags & VM_DENYWRITE)
+ allow_write_access(file);
+ free_vma:
+- kmem_cache_free(vm_area_cachep, vma);
++ vm_area_free(vma);
+ unacct_error:
+ if (charged)
+ vm_unacct_memory(charged);
+@@ -2620,15 +2618,10 @@ int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
+ return err;
+ }
+
+- new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
++ new = vm_area_dup(vma);
+ if (!new)
+ return -ENOMEM;
+
+- /* most fields are the same, copy all, and then fixup */
+- *new = *vma;
+-
+- INIT_LIST_HEAD(&new->anon_vma_chain);
+-
+ if (new_below)
+ new->vm_end = addr;
+ else {
+@@ -2669,7 +2662,7 @@ int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
+ out_free_mpol:
+ mpol_put(vma_policy(new));
+ out_free_vma:
+- kmem_cache_free(vm_area_cachep, new);
++ vm_area_free(new);
+ return err;
+ }
+
+@@ -2984,14 +2977,12 @@ static int do_brk_flags(unsigned long addr, unsigned long len, unsigned long fla
+ /*
+ * create a vma struct for an anonymous mapping
+ */
+- vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
++ vma = vm_area_alloc(mm);
+ if (!vma) {
+ vm_unacct_memory(len >> PAGE_SHIFT);
+ return -ENOMEM;
+ }
+
+- INIT_LIST_HEAD(&vma->anon_vma_chain);
+- vma->vm_mm = mm;
+ vma->vm_start = addr;
+ vma->vm_end = addr + len;
+ vma->vm_pgoff = pgoff;
+@@ -3202,16 +3193,14 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
+ }
+ *need_rmap_locks = (new_vma->vm_pgoff <= vma->vm_pgoff);
+ } else {
+- new_vma = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
++ new_vma = vm_area_dup(vma);
+ if (!new_vma)
+ goto out;
+- *new_vma = *vma;
+ new_vma->vm_start = addr;
+ new_vma->vm_end = addr + len;
+ new_vma->vm_pgoff = pgoff;
+ if (vma_dup_policy(vma, new_vma))
+ goto out_free_vma;
+- INIT_LIST_HEAD(&new_vma->anon_vma_chain);
+ if (anon_vma_clone(new_vma, vma))
+ goto out_free_mempol;
+ if (new_vma->vm_file)
+@@ -3226,7 +3215,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
+ out_free_mempol:
+ mpol_put(vma_policy(new_vma));
+ out_free_vma:
+- kmem_cache_free(vm_area_cachep, new_vma);
++ vm_area_free(new_vma);
+ out:
+ return NULL;
+ }
+@@ -3350,12 +3339,10 @@ static struct vm_area_struct *__install_special_mapping(
+ int ret;
+ struct vm_area_struct *vma;
+
+- vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
++ vma = vm_area_alloc(mm);
+ if (unlikely(vma == NULL))
+ return ERR_PTR(-ENOMEM);
+
+- INIT_LIST_HEAD(&vma->anon_vma_chain);
+- vma->vm_mm = mm;
+ vma->vm_start = addr;
+ vma->vm_end = addr + len;
+
+@@ -3376,7 +3363,7 @@ static struct vm_area_struct *__install_special_mapping(
+ return vma;
+
+ out:
+- kmem_cache_free(vm_area_cachep, vma);
++ vm_area_free(vma);
+ return ERR_PTR(ret);
+ }
+
+diff --git a/mm/nommu.c b/mm/nommu.c
+index 13723736d38f..b7a2aa7f7c0f 100644
+--- a/mm/nommu.c
++++ b/mm/nommu.c
+@@ -769,7 +769,7 @@ static void delete_vma(struct mm_struct *mm, struct vm_area_struct *vma)
+ if (vma->vm_file)
+ fput(vma->vm_file);
+ put_nommu_region(vma->vm_region);
+- kmem_cache_free(vm_area_cachep, vma);
++ vm_area_free(vma);
+ }
+
+ /*
+@@ -1204,7 +1204,7 @@ unsigned long do_mmap(struct file *file,
+ if (!region)
+ goto error_getting_region;
+
+- vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
++ vma = vm_area_alloc(current->mm);
+ if (!vma)
+ goto error_getting_vma;
+
+@@ -1212,7 +1212,6 @@ unsigned long do_mmap(struct file *file,
+ region->vm_flags = vm_flags;
+ region->vm_pgoff = pgoff;
+
+- INIT_LIST_HEAD(&vma->anon_vma_chain);
+ vma->vm_flags = vm_flags;
+ vma->vm_pgoff = pgoff;
+
+@@ -1368,7 +1367,7 @@ error:
+ kmem_cache_free(vm_region_jar, region);
+ if (vma->vm_file)
+ fput(vma->vm_file);
+- kmem_cache_free(vm_area_cachep, vma);
++ vm_area_free(vma);
+ return ret;
+
+ sharing_violation:
+@@ -1469,14 +1468,13 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
+ if (!region)
+ return -ENOMEM;
+
+- new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
++ new = vm_area_dup(vma);
+ if (!new) {
+ kmem_cache_free(vm_region_jar, region);
+ return -ENOMEM;
+ }
+
+ /* most fields are the same, copy all, and then fixup */
+- *new = *vma;
+ *region = *vma->vm_region;
+ new->vm_region = region;
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 7b841a764dd0..e27bc10467c4 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6933,9 +6933,21 @@ unsigned long free_reserved_area(void *start, void *end, int poison, char *s)
+ start = (void *)PAGE_ALIGN((unsigned long)start);
+ end = (void *)((unsigned long)end & PAGE_MASK);
+ for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
++ struct page *page = virt_to_page(pos);
++ void *direct_map_addr;
++
++ /*
++ * 'direct_map_addr' might be different from 'pos'
++ * because some architectures' virt_to_page()
++		 * because virt_to_page() on some architectures
++		 * works with aliases. Getting the direct map
++ * alias for the memset().
++ */
++ direct_map_addr = page_address(page);
+ if ((unsigned int)poison <= 0xFF)
+- memset(pos, poison, PAGE_SIZE);
+- free_reserved_page(virt_to_page(pos));
++ memset(direct_map_addr, poison, PAGE_SIZE);
++
++ free_reserved_page(page);
+ }
+
+ if (pages && s)
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 21e6df1cc70f..0ad6993aaa9b 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -228,7 +228,8 @@ static int parse_opts(char *opts, struct p9_client *clnt)
+ }
+
+ free_and_return:
+- v9fs_put_trans(clnt->trans_mod);
++ if (ret)
++ v9fs_put_trans(clnt->trans_mod);
+ kfree(tmp_options);
+ return ret;
+ }
+diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c
+index be09a9883825..73bf6a93a3cf 100644
+--- a/net/batman-adv/bat_iv_ogm.c
++++ b/net/batman-adv/bat_iv_ogm.c
+@@ -2732,7 +2732,7 @@ static int batadv_iv_gw_dump_entry(struct sk_buff *msg, u32 portid, u32 seq,
+ {
+ struct batadv_neigh_ifinfo *router_ifinfo = NULL;
+ struct batadv_neigh_node *router;
+- struct batadv_gw_node *curr_gw;
++ struct batadv_gw_node *curr_gw = NULL;
+ int ret = 0;
+ void *hdr;
+
+@@ -2780,6 +2780,8 @@ static int batadv_iv_gw_dump_entry(struct sk_buff *msg, u32 portid, u32 seq,
+ ret = 0;
+
+ out:
++ if (curr_gw)
++ batadv_gw_node_put(curr_gw);
+ if (router_ifinfo)
+ batadv_neigh_ifinfo_put(router_ifinfo);
+ if (router)
+diff --git a/net/batman-adv/bat_v.c b/net/batman-adv/bat_v.c
+index ec93337ee259..6baec4e68898 100644
+--- a/net/batman-adv/bat_v.c
++++ b/net/batman-adv/bat_v.c
+@@ -927,7 +927,7 @@ static int batadv_v_gw_dump_entry(struct sk_buff *msg, u32 portid, u32 seq,
+ {
+ struct batadv_neigh_ifinfo *router_ifinfo = NULL;
+ struct batadv_neigh_node *router;
+- struct batadv_gw_node *curr_gw;
++ struct batadv_gw_node *curr_gw = NULL;
+ int ret = 0;
+ void *hdr;
+
+@@ -995,6 +995,8 @@ static int batadv_v_gw_dump_entry(struct sk_buff *msg, u32 portid, u32 seq,
+ ret = 0;
+
+ out:
++ if (curr_gw)
++ batadv_gw_node_put(curr_gw);
+ if (router_ifinfo)
+ batadv_neigh_ifinfo_put(router_ifinfo);
+ if (router)
+diff --git a/net/batman-adv/debugfs.c b/net/batman-adv/debugfs.c
+index 4229b01ac7b5..87479c60670e 100644
+--- a/net/batman-adv/debugfs.c
++++ b/net/batman-adv/debugfs.c
+@@ -19,6 +19,7 @@
+ #include "debugfs.h"
+ #include "main.h"
+
++#include <linux/dcache.h>
+ #include <linux/debugfs.h>
+ #include <linux/err.h>
+ #include <linux/errno.h>
+@@ -343,6 +344,25 @@ out:
+ return -ENOMEM;
+ }
+
++/**
++ * batadv_debugfs_rename_hardif() - Fix debugfs path for renamed hardif
++ * @hard_iface: hard interface which was renamed
++ */
++void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface)
++{
++ const char *name = hard_iface->net_dev->name;
++ struct dentry *dir;
++ struct dentry *d;
++
++ dir = hard_iface->debug_dir;
++ if (!dir)
++ return;
++
++ d = debugfs_rename(dir->d_parent, dir, dir->d_parent, name);
++ if (!d)
++ pr_err("Can't rename debugfs dir to %s\n", name);
++}
++
+ /**
+ * batadv_debugfs_del_hardif() - delete the base directory for a hard interface
+ * in debugfs.
+@@ -413,6 +433,26 @@ out:
+ return -ENOMEM;
+ }
+
++/**
++ * batadv_debugfs_rename_meshif() - Fix debugfs path for renamed softif
++ * @dev: net_device which was renamed
++ */
++void batadv_debugfs_rename_meshif(struct net_device *dev)
++{
++ struct batadv_priv *bat_priv = netdev_priv(dev);
++ const char *name = dev->name;
++ struct dentry *dir;
++ struct dentry *d;
++
++ dir = bat_priv->debug_dir;
++ if (!dir)
++ return;
++
++ d = debugfs_rename(dir->d_parent, dir, dir->d_parent, name);
++ if (!d)
++ pr_err("Can't rename debugfs dir to %s\n", name);
++}
++
+ /**
+ * batadv_debugfs_del_meshif() - Remove interface dependent debugfs entries
+ * @dev: netdev struct of the soft interface
+diff --git a/net/batman-adv/debugfs.h b/net/batman-adv/debugfs.h
+index 37b069698b04..08a592ffbee5 100644
+--- a/net/batman-adv/debugfs.h
++++ b/net/batman-adv/debugfs.h
+@@ -30,8 +30,10 @@ struct net_device;
+ void batadv_debugfs_init(void);
+ void batadv_debugfs_destroy(void);
+ int batadv_debugfs_add_meshif(struct net_device *dev);
++void batadv_debugfs_rename_meshif(struct net_device *dev);
+ void batadv_debugfs_del_meshif(struct net_device *dev);
+ int batadv_debugfs_add_hardif(struct batadv_hard_iface *hard_iface);
++void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface);
+ void batadv_debugfs_del_hardif(struct batadv_hard_iface *hard_iface);
+
+ #else
+@@ -49,6 +51,10 @@ static inline int batadv_debugfs_add_meshif(struct net_device *dev)
+ return 0;
+ }
+
++static inline void batadv_debugfs_rename_meshif(struct net_device *dev)
++{
++}
++
+ static inline void batadv_debugfs_del_meshif(struct net_device *dev)
+ {
+ }
+@@ -59,6 +65,11 @@ int batadv_debugfs_add_hardif(struct batadv_hard_iface *hard_iface)
+ return 0;
+ }
+
++static inline
++void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface)
++{
++}
++
+ static inline
+ void batadv_debugfs_del_hardif(struct batadv_hard_iface *hard_iface)
+ {
+diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c
+index c405d15befd6..2f0d42f2f913 100644
+--- a/net/batman-adv/hard-interface.c
++++ b/net/batman-adv/hard-interface.c
+@@ -989,6 +989,32 @@ void batadv_hardif_remove_interfaces(void)
+ rtnl_unlock();
+ }
+
++/**
++ * batadv_hard_if_event_softif() - Handle events for soft interfaces
++ * @event: NETDEV_* event to handle
++ * @net_dev: net_device which generated an event
++ *
++ * Return: NOTIFY_* result
++ */
++static int batadv_hard_if_event_softif(unsigned long event,
++ struct net_device *net_dev)
++{
++ struct batadv_priv *bat_priv;
++
++ switch (event) {
++ case NETDEV_REGISTER:
++ batadv_sysfs_add_meshif(net_dev);
++ bat_priv = netdev_priv(net_dev);
++ batadv_softif_create_vlan(bat_priv, BATADV_NO_FLAGS);
++ break;
++ case NETDEV_CHANGENAME:
++ batadv_debugfs_rename_meshif(net_dev);
++ break;
++ }
++
++ return NOTIFY_DONE;
++}
++
+ static int batadv_hard_if_event(struct notifier_block *this,
+ unsigned long event, void *ptr)
+ {
+@@ -997,12 +1023,8 @@ static int batadv_hard_if_event(struct notifier_block *this,
+ struct batadv_hard_iface *primary_if = NULL;
+ struct batadv_priv *bat_priv;
+
+- if (batadv_softif_is_valid(net_dev) && event == NETDEV_REGISTER) {
+- batadv_sysfs_add_meshif(net_dev);
+- bat_priv = netdev_priv(net_dev);
+- batadv_softif_create_vlan(bat_priv, BATADV_NO_FLAGS);
+- return NOTIFY_DONE;
+- }
++ if (batadv_softif_is_valid(net_dev))
++ return batadv_hard_if_event_softif(event, net_dev);
+
+ hard_iface = batadv_hardif_get_by_netdev(net_dev);
+ if (!hard_iface && (event == NETDEV_REGISTER ||
+@@ -1051,6 +1073,9 @@ static int batadv_hard_if_event(struct notifier_block *this,
+ if (batadv_is_wifi_hardif(hard_iface))
+ hard_iface->num_bcasts = BATADV_NUM_BCASTS_WIRELESS;
+ break;
++ case NETDEV_CHANGENAME:
++ batadv_debugfs_rename_hardif(hard_iface);
++ break;
+ default:
+ break;
+ }
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 3986551397ca..12a2b7d21376 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -1705,7 +1705,9 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
+ ether_addr_copy(common->addr, tt_addr);
+ common->vid = vid;
+
+- common->flags = flags;
++ if (!is_multicast_ether_addr(common->addr))
++ common->flags = flags & (~BATADV_TT_SYNC_MASK);
++
+ tt_global_entry->roam_at = 0;
+ /* node must store current time in case of roaming. This is
+ * needed to purge this entry out on timeout (if nobody claims
+@@ -1768,7 +1770,8 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
+ * TT_CLIENT_TEMP, therefore they have to be copied in the
+ * client entry
+ */
+- common->flags |= flags & (~BATADV_TT_SYNC_MASK);
++ if (!is_multicast_ether_addr(common->addr))
++ common->flags |= flags & (~BATADV_TT_SYNC_MASK);
+
+ /* If there is the BATADV_TT_CLIENT_ROAM flag set, there is only
+ * one originator left in the list and we previously received a
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 1ccc2a2ac2e9..2d6b23e39833 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -8608,7 +8608,8 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
+ /* We get here if we can't use the current device name */
+ if (!pat)
+ goto out;
+- if (dev_get_valid_name(net, dev, pat) < 0)
++ err = dev_get_valid_name(net, dev, pat);
++ if (err < 0)
+ goto out;
+ }
+
+@@ -8620,7 +8621,6 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
+ dev_close(dev);
+
+ /* And unlink it from device chain */
+- err = -ENODEV;
+ unlist_netdevice(dev);
+
+ synchronize_net();
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 201ff36b17a8..a491dbb0a955 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2516,7 +2516,8 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 len_diff)
+
+ static u32 __bpf_skb_max_len(const struct sk_buff *skb)
+ {
+- return skb->dev->mtu + skb->dev->hard_header_len;
++ return skb->dev ? skb->dev->mtu + skb->dev->hard_header_len :
++ SKB_MAX_ALLOC;
+ }
+
+ static int bpf_skb_adjust_net(struct sk_buff *skb, s32 len_diff)
+diff --git a/net/ieee802154/6lowpan/core.c b/net/ieee802154/6lowpan/core.c
+index 275449b0d633..3297e7fa9945 100644
+--- a/net/ieee802154/6lowpan/core.c
++++ b/net/ieee802154/6lowpan/core.c
+@@ -90,12 +90,18 @@ static int lowpan_neigh_construct(struct net_device *dev, struct neighbour *n)
+ return 0;
+ }
+
++static int lowpan_get_iflink(const struct net_device *dev)
++{
++ return lowpan_802154_dev(dev)->wdev->ifindex;
++}
++
+ static const struct net_device_ops lowpan_netdev_ops = {
+ .ndo_init = lowpan_dev_init,
+ .ndo_start_xmit = lowpan_xmit,
+ .ndo_open = lowpan_open,
+ .ndo_stop = lowpan_stop,
+ .ndo_neigh_construct = lowpan_neigh_construct,
++ .ndo_get_iflink = lowpan_get_iflink,
+ };
+
+ static void lowpan_setup(struct net_device *ldev)
+diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
+index eeb6646aa892..0d70608cc2e1 100644
+--- a/net/ipv4/inet_fragment.c
++++ b/net/ipv4/inet_fragment.c
+@@ -90,7 +90,7 @@ static void inet_frags_free_cb(void *ptr, void *arg)
+
+ void inet_frags_exit_net(struct netns_frags *nf)
+ {
+- nf->low_thresh = 0; /* prevent creation of new frags */
++ nf->high_thresh = 0; /* prevent creation of new frags */
+
+ rhashtable_free_and_destroy(&nf->rhashtable, inet_frags_free_cb, NULL);
+ }
+diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
+index f6130704f052..1bf71e36f545 100644
+--- a/net/ipv4/netfilter/ip_tables.c
++++ b/net/ipv4/netfilter/ip_tables.c
+@@ -1895,6 +1895,7 @@ static struct xt_match ipt_builtin_mt[] __read_mostly = {
+ .checkentry = icmp_checkentry,
+ .proto = IPPROTO_ICMP,
+ .family = NFPROTO_IPV4,
++ .me = THIS_MODULE,
+ },
+ };
+
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 58e316cf6607..14c26a747e50 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1845,7 +1845,7 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
+ * shouldn't happen.
+ */
+ if (WARN(before(*seq, TCP_SKB_CB(skb)->seq),
+- "recvmsg bug: copied %X seq %X rcvnxt %X fl %X\n",
++ "TCP recvmsg seq # bug: copied %X, seq %X, rcvnxt %X, fl %X\n",
+ *seq, TCP_SKB_CB(skb)->seq, tp->rcv_nxt,
+ flags))
+ break;
+@@ -1860,7 +1860,7 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
+ goto found_fin_ok;
+ WARN(!(flags & MSG_PEEK),
+- "recvmsg bug 2: copied %X seq %X rcvnxt %X fl %X\n",
++ "TCP recvmsg seq # bug 2: copied %X, seq %X, rcvnxt %X, fl %X\n",
+ *seq, TCP_SKB_CB(skb)->seq, tp->rcv_nxt, flags);
+ }
+
+diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
+index 1a9b88c8cf72..8b637f9f23a2 100644
+--- a/net/ipv4/tcp_dctcp.c
++++ b/net/ipv4/tcp_dctcp.c
+@@ -55,7 +55,6 @@ struct dctcp {
+ u32 dctcp_alpha;
+ u32 next_seq;
+ u32 ce_state;
+- u32 delayed_ack_reserved;
+ u32 loss_cwnd;
+ };
+
+@@ -96,7 +95,6 @@ static void dctcp_init(struct sock *sk)
+
+ ca->dctcp_alpha = min(dctcp_alpha_on_init, DCTCP_MAX_ALPHA);
+
+- ca->delayed_ack_reserved = 0;
+ ca->loss_cwnd = 0;
+ ca->ce_state = 0;
+
+@@ -230,25 +228,6 @@ static void dctcp_state(struct sock *sk, u8 new_state)
+ }
+ }
+
+-static void dctcp_update_ack_reserved(struct sock *sk, enum tcp_ca_event ev)
+-{
+- struct dctcp *ca = inet_csk_ca(sk);
+-
+- switch (ev) {
+- case CA_EVENT_DELAYED_ACK:
+- if (!ca->delayed_ack_reserved)
+- ca->delayed_ack_reserved = 1;
+- break;
+- case CA_EVENT_NON_DELAYED_ACK:
+- if (ca->delayed_ack_reserved)
+- ca->delayed_ack_reserved = 0;
+- break;
+- default:
+- /* Don't care for the rest. */
+- break;
+- }
+-}
+-
+ static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev)
+ {
+ switch (ev) {
+@@ -258,10 +237,6 @@ static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev)
+ case CA_EVENT_ECN_NO_CE:
+ dctcp_ce_state_1_to_0(sk);
+ break;
+- case CA_EVENT_DELAYED_ACK:
+- case CA_EVENT_NON_DELAYED_ACK:
+- dctcp_update_ack_reserved(sk, ev);
+- break;
+ default:
+ /* Don't care for the rest. */
+ break;
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 3049d10a1476..5a689b07bad7 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -3545,8 +3545,6 @@ void tcp_send_delayed_ack(struct sock *sk)
+ int ato = icsk->icsk_ack.ato;
+ unsigned long timeout;
+
+- tcp_ca_event(sk, CA_EVENT_DELAYED_ACK);
+-
+ if (ato > TCP_DELACK_MIN) {
+ const struct tcp_sock *tp = tcp_sk(sk);
+ int max_ato = HZ / 2;
+@@ -3603,8 +3601,6 @@ void __tcp_send_ack(struct sock *sk, u32 rcv_nxt)
+ if (sk->sk_state == TCP_CLOSE)
+ return;
+
+- tcp_ca_event(sk, CA_EVENT_NON_DELAYED_ACK);
+-
+ /* We are not putting this on the write queue, so
+ * tcp_transmit_skb() will set the ownership to this
+ * sock.
+diff --git a/net/ipv6/calipso.c b/net/ipv6/calipso.c
+index 1323b9679cf7..1c0bb9fb76e6 100644
+--- a/net/ipv6/calipso.c
++++ b/net/ipv6/calipso.c
+@@ -799,8 +799,7 @@ static int calipso_opt_update(struct sock *sk, struct ipv6_opt_hdr *hop)
+ {
+ struct ipv6_txoptions *old = txopt_get(inet6_sk(sk)), *txopts;
+
+- txopts = ipv6_renew_options_kern(sk, old, IPV6_HOPOPTS,
+- hop, hop ? ipv6_optlen(hop) : 0);
++ txopts = ipv6_renew_options(sk, old, IPV6_HOPOPTS, hop);
+ txopt_put(old);
+ if (IS_ERR(txopts))
+ return PTR_ERR(txopts);
+@@ -1222,8 +1221,7 @@ static int calipso_req_setattr(struct request_sock *req,
+ if (IS_ERR(new))
+ return PTR_ERR(new);
+
+- txopts = ipv6_renew_options_kern(sk, req_inet->ipv6_opt, IPV6_HOPOPTS,
+- new, new ? ipv6_optlen(new) : 0);
++ txopts = ipv6_renew_options(sk, req_inet->ipv6_opt, IPV6_HOPOPTS, new);
+
+ kfree(new);
+
+@@ -1260,8 +1258,7 @@ static void calipso_req_delattr(struct request_sock *req)
+ if (calipso_opt_del(req_inet->ipv6_opt->hopopt, &new))
+ return; /* Nothing to do */
+
+- txopts = ipv6_renew_options_kern(sk, req_inet->ipv6_opt, IPV6_HOPOPTS,
+- new, new ? ipv6_optlen(new) : 0);
++ txopts = ipv6_renew_options(sk, req_inet->ipv6_opt, IPV6_HOPOPTS, new);
+
+ if (!IS_ERR(txopts)) {
+ txopts = xchg(&req_inet->ipv6_opt, txopts);
+diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
+index bc68eb661970..07a4d4232231 100644
+--- a/net/ipv6/exthdrs.c
++++ b/net/ipv6/exthdrs.c
+@@ -1028,29 +1028,21 @@ ipv6_dup_options(struct sock *sk, struct ipv6_txoptions *opt)
+ }
+ EXPORT_SYMBOL_GPL(ipv6_dup_options);
+
+-static int ipv6_renew_option(void *ohdr,
+- struct ipv6_opt_hdr __user *newopt, int newoptlen,
+- int inherit,
+- struct ipv6_opt_hdr **hdr,
+- char **p)
++static void ipv6_renew_option(int renewtype,
++ struct ipv6_opt_hdr **dest,
++ struct ipv6_opt_hdr *old,
++ struct ipv6_opt_hdr *new,
++ int newtype, char **p)
+ {
+- if (inherit) {
+- if (ohdr) {
+- memcpy(*p, ohdr, ipv6_optlen((struct ipv6_opt_hdr *)ohdr));
+- *hdr = (struct ipv6_opt_hdr *)*p;
+- *p += CMSG_ALIGN(ipv6_optlen(*hdr));
+- }
+- } else {
+- if (newopt) {
+- if (copy_from_user(*p, newopt, newoptlen))
+- return -EFAULT;
+- *hdr = (struct ipv6_opt_hdr *)*p;
+- if (ipv6_optlen(*hdr) > newoptlen)
+- return -EINVAL;
+- *p += CMSG_ALIGN(newoptlen);
+- }
+- }
+- return 0;
++ struct ipv6_opt_hdr *src;
++
++ src = (renewtype == newtype ? new : old);
++ if (!src)
++ return;
++
++ memcpy(*p, src, ipv6_optlen(src));
++ *dest = (struct ipv6_opt_hdr *)*p;
++ *p += CMSG_ALIGN(ipv6_optlen(*dest));
+ }
+
+ /**
+@@ -1076,13 +1068,11 @@ static int ipv6_renew_option(void *ohdr,
+ */
+ struct ipv6_txoptions *
+ ipv6_renew_options(struct sock *sk, struct ipv6_txoptions *opt,
+- int newtype,
+- struct ipv6_opt_hdr __user *newopt, int newoptlen)
++ int newtype, struct ipv6_opt_hdr *newopt)
+ {
+ int tot_len = 0;
+ char *p;
+ struct ipv6_txoptions *opt2;
+- int err;
+
+ if (opt) {
+ if (newtype != IPV6_HOPOPTS && opt->hopopt)
+@@ -1095,8 +1085,8 @@ ipv6_renew_options(struct sock *sk, struct ipv6_txoptions *opt,
+ tot_len += CMSG_ALIGN(ipv6_optlen(opt->dst1opt));
+ }
+
+- if (newopt && newoptlen)
+- tot_len += CMSG_ALIGN(newoptlen);
++ if (newopt)
++ tot_len += CMSG_ALIGN(ipv6_optlen(newopt));
+
+ if (!tot_len)
+ return NULL;
+@@ -1111,29 +1101,19 @@ ipv6_renew_options(struct sock *sk, struct ipv6_txoptions *opt,
+ opt2->tot_len = tot_len;
+ p = (char *)(opt2 + 1);
+
+- err = ipv6_renew_option(opt ? opt->hopopt : NULL, newopt, newoptlen,
+- newtype != IPV6_HOPOPTS,
+- &opt2->hopopt, &p);
+- if (err)
+- goto out;
+-
+- err = ipv6_renew_option(opt ? opt->dst0opt : NULL, newopt, newoptlen,
+- newtype != IPV6_RTHDRDSTOPTS,
+- &opt2->dst0opt, &p);
+- if (err)
+- goto out;
+-
+- err = ipv6_renew_option(opt ? opt->srcrt : NULL, newopt, newoptlen,
+- newtype != IPV6_RTHDR,
+- (struct ipv6_opt_hdr **)&opt2->srcrt, &p);
+- if (err)
+- goto out;
+-
+- err = ipv6_renew_option(opt ? opt->dst1opt : NULL, newopt, newoptlen,
+- newtype != IPV6_DSTOPTS,
+- &opt2->dst1opt, &p);
+- if (err)
+- goto out;
++ ipv6_renew_option(IPV6_HOPOPTS, &opt2->hopopt,
++ (opt ? opt->hopopt : NULL),
++ newopt, newtype, &p);
++ ipv6_renew_option(IPV6_RTHDRDSTOPTS, &opt2->dst0opt,
++ (opt ? opt->dst0opt : NULL),
++ newopt, newtype, &p);
++ ipv6_renew_option(IPV6_RTHDR,
++ (struct ipv6_opt_hdr **)&opt2->srcrt,
++ (opt ? (struct ipv6_opt_hdr *)opt->srcrt : NULL),
++ newopt, newtype, &p);
++ ipv6_renew_option(IPV6_DSTOPTS, &opt2->dst1opt,
++ (opt ? opt->dst1opt : NULL),
++ newopt, newtype, &p);
+
+ opt2->opt_nflen = (opt2->hopopt ? ipv6_optlen(opt2->hopopt) : 0) +
+ (opt2->dst0opt ? ipv6_optlen(opt2->dst0opt) : 0) +
+@@ -1141,37 +1121,6 @@ ipv6_renew_options(struct sock *sk, struct ipv6_txoptions *opt,
+ opt2->opt_flen = (opt2->dst1opt ? ipv6_optlen(opt2->dst1opt) : 0);
+
+ return opt2;
+-out:
+- sock_kfree_s(sk, opt2, opt2->tot_len);
+- return ERR_PTR(err);
+-}
+-
+-/**
+- * ipv6_renew_options_kern - replace a specific ext hdr with a new one.
+- *
+- * @sk: sock from which to allocate memory
+- * @opt: original options
+- * @newtype: option type to replace in @opt
+- * @newopt: new option of type @newtype to replace (kernel-mem)
+- * @newoptlen: length of @newopt
+- *
+- * See ipv6_renew_options(). The difference is that @newopt is
+- * kernel memory, rather than user memory.
+- */
+-struct ipv6_txoptions *
+-ipv6_renew_options_kern(struct sock *sk, struct ipv6_txoptions *opt,
+- int newtype, struct ipv6_opt_hdr *newopt,
+- int newoptlen)
+-{
+- struct ipv6_txoptions *ret_val;
+- const mm_segment_t old_fs = get_fs();
+-
+- set_fs(KERNEL_DS);
+- ret_val = ipv6_renew_options(sk, opt, newtype,
+- (struct ipv6_opt_hdr __user *)newopt,
+- newoptlen);
+- set_fs(old_fs);
+- return ret_val;
+ }
+
+ struct ipv6_txoptions *ipv6_fixup_options(struct ipv6_txoptions *opt_space,
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 4d780c7f0130..c95c3486d904 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -398,6 +398,12 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ case IPV6_DSTOPTS:
+ {
+ struct ipv6_txoptions *opt;
++ struct ipv6_opt_hdr *new = NULL;
++
++ /* hop-by-hop / destination options are privileged options */
++ retv = -EPERM;
++ if (optname != IPV6_RTHDR && !ns_capable(net->user_ns, CAP_NET_RAW))
++ break;
+
+ /* remove any sticky options header with a zero option
+ * length, per RFC3542.
+@@ -409,17 +415,22 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ else if (optlen < sizeof(struct ipv6_opt_hdr) ||
+ optlen & 0x7 || optlen > 8 * 255)
+ goto e_inval;
+-
+- /* hop-by-hop / destination options are privileged option */
+- retv = -EPERM;
+- if (optname != IPV6_RTHDR && !ns_capable(net->user_ns, CAP_NET_RAW))
+- break;
++ else {
++ new = memdup_user(optval, optlen);
++ if (IS_ERR(new)) {
++ retv = PTR_ERR(new);
++ break;
++ }
++ if (unlikely(ipv6_optlen(new) > optlen)) {
++ kfree(new);
++ goto e_inval;
++ }
++ }
+
+ opt = rcu_dereference_protected(np->opt,
+ lockdep_sock_is_held(sk));
+- opt = ipv6_renew_options(sk, opt, optname,
+- (struct ipv6_opt_hdr __user *)optval,
+- optlen);
++ opt = ipv6_renew_options(sk, opt, optname, new);
++ kfree(new);
+ if (IS_ERR(opt)) {
+ retv = PTR_ERR(opt);
+ break;
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index 0604a737eecf..a23cfd922509 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -2081,7 +2081,8 @@ void ipv6_mc_dad_complete(struct inet6_dev *idev)
+ mld_send_initial_cr(idev);
+ idev->mc_dad_count--;
+ if (idev->mc_dad_count)
+- mld_dad_start_timer(idev, idev->mc_maxdelay);
++ mld_dad_start_timer(idev,
++ unsolicited_report_interval(idev));
+ }
+ }
+
+@@ -2093,7 +2094,8 @@ static void mld_dad_timer_expire(struct timer_list *t)
+ if (idev->mc_dad_count) {
+ idev->mc_dad_count--;
+ if (idev->mc_dad_count)
+- mld_dad_start_timer(idev, idev->mc_maxdelay);
++ mld_dad_start_timer(idev,
++ unsolicited_report_interval(idev));
+ }
+ in6_dev_put(idev);
+ }
+@@ -2451,7 +2453,8 @@ static void mld_ifc_timer_expire(struct timer_list *t)
+ if (idev->mc_ifc_count) {
+ idev->mc_ifc_count--;
+ if (idev->mc_ifc_count)
+- mld_ifc_start_timer(idev, idev->mc_maxdelay);
++ mld_ifc_start_timer(idev,
++ unsolicited_report_interval(idev));
+ }
+ in6_dev_put(idev);
+ }
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index 685c2168f524..1e518cedffea 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -1907,6 +1907,7 @@ static struct xt_match ip6t_builtin_mt[] __read_mostly = {
+ .checkentry = icmp6_checkentry,
+ .proto = IPPROTO_ICMPV6,
+ .family = NFPROTO_IPV6,
++ .me = THIS_MODULE,
+ },
+ };
+
+diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
+index eeb4d3098ff4..e4d9e6976d3c 100644
+--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
+@@ -107,7 +107,7 @@ static int nf_ct_frag6_sysctl_register(struct net *net)
+ if (hdr == NULL)
+ goto err_reg;
+
+- net->nf_frag.sysctl.frags_hdr = hdr;
++ net->nf_frag_frags_hdr = hdr;
+ return 0;
+
+ err_reg:
+@@ -121,8 +121,8 @@ static void __net_exit nf_ct_frags6_sysctl_unregister(struct net *net)
+ {
+ struct ctl_table *table;
+
+- table = net->nf_frag.sysctl.frags_hdr->ctl_table_arg;
+- unregister_net_sysctl_table(net->nf_frag.sysctl.frags_hdr);
++ table = net->nf_frag_frags_hdr->ctl_table_arg;
++ unregister_net_sysctl_table(net->nf_frag_frags_hdr);
+ if (!net_eq(net, &init_net))
+ kfree(table);
+ }
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 05a265cd573d..7404a5114597 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -4800,7 +4800,9 @@ int ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev,
+ skb_reset_network_header(skb);
+ skb_reset_mac_header(skb);
+
++ local_bh_disable();
+ __ieee80211_subif_start_xmit(skb, skb->dev, flags);
++ local_bh_enable();
+
+ return 0;
+ }
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 41ff04ee2554..6b2fa4870237 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -1972,7 +1972,7 @@ int nf_conntrack_set_hashsize(const char *val, const struct kernel_param *kp)
+ return -EOPNOTSUPP;
+
+ /* On boot, we can set this without any fancy locking. */
+- if (!nf_conntrack_htable_size)
++ if (!nf_conntrack_hash)
+ return param_set_uint(val, kp);
+
+ rc = kstrtouint(val, 0, &hashsize);
+diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
+index 551a1eddf0fa..a75b11c39312 100644
+--- a/net/netfilter/nf_conntrack_helper.c
++++ b/net/netfilter/nf_conntrack_helper.c
+@@ -465,6 +465,11 @@ void nf_conntrack_helper_unregister(struct nf_conntrack_helper *me)
+
+ nf_ct_expect_iterate_destroy(expect_iter_me, NULL);
+ nf_ct_iterate_destroy(unhelp, me);
++
++ /* Maybe someone has gotten the helper already when unhelp above.
++ * So need to wait it.
++ */
++ synchronize_rcu();
+ }
+ EXPORT_SYMBOL_GPL(nf_conntrack_helper_unregister);
+
+diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c
+index abe647d5b8c6..9ce6336d1e55 100644
+--- a/net/netfilter/nf_conntrack_proto_dccp.c
++++ b/net/netfilter/nf_conntrack_proto_dccp.c
+@@ -243,14 +243,14 @@ dccp_state_table[CT_DCCP_ROLE_MAX + 1][DCCP_PKT_SYNCACK + 1][CT_DCCP_MAX + 1] =
+ * We currently ignore Sync packets
+ *
+ * sNO, sRQ, sRS, sPO, sOP, sCR, sCG, sTW */
+- sIG, sIG, sIG, sIG, sIG, sIG, sIG, sIG,
++ sIV, sIG, sIG, sIG, sIG, sIG, sIG, sIG,
+ },
+ [DCCP_PKT_SYNCACK] = {
+ /*
+ * We currently ignore SyncAck packets
+ *
+ * sNO, sRQ, sRS, sPO, sOP, sCR, sCG, sTW */
+- sIG, sIG, sIG, sIG, sIG, sIG, sIG, sIG,
++ sIV, sIG, sIG, sIG, sIG, sIG, sIG, sIG,
+ },
+ },
+ [CT_DCCP_ROLE_SERVER] = {
+@@ -371,14 +371,14 @@ dccp_state_table[CT_DCCP_ROLE_MAX + 1][DCCP_PKT_SYNCACK + 1][CT_DCCP_MAX + 1] =
+ * We currently ignore Sync packets
+ *
+ * sNO, sRQ, sRS, sPO, sOP, sCR, sCG, sTW */
+- sIG, sIG, sIG, sIG, sIG, sIG, sIG, sIG,
++ sIV, sIG, sIG, sIG, sIG, sIG, sIG, sIG,
+ },
+ [DCCP_PKT_SYNCACK] = {
+ /*
+ * We currently ignore SyncAck packets
+ *
+ * sNO, sRQ, sRS, sPO, sOP, sCR, sCG, sTW */
+- sIG, sIG, sIG, sIG, sIG, sIG, sIG, sIG,
++ sIV, sIG, sIG, sIG, sIG, sIG, sIG, sIG,
+ },
+ },
+ };
+diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c
+index a82dfb8f8790..d60589747afb 100644
+--- a/net/netfilter/nf_log.c
++++ b/net/netfilter/nf_log.c
+@@ -439,6 +439,10 @@ static int nf_log_proc_dostring(struct ctl_table *table, int write,
+ if (write) {
+ struct ctl_table tmp = *table;
+
++ /* proc_dostring() can append to existing strings, so we need to
++ * initialize it as an empty string.
++ */
++ buf[0] = '\0';
+ tmp.data = buf;
+ r = proc_dostring(&tmp, write, buffer, lenp, ppos);
+ if (r)
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 1d99a1efdafc..0210b40a529a 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -825,10 +825,18 @@ nft_target_select_ops(const struct nft_ctx *ctx,
+ rev = ntohl(nla_get_be32(tb[NFTA_TARGET_REV]));
+ family = ctx->family;
+
++ if (strcmp(tg_name, XT_ERROR_TARGET) == 0 ||
++ strcmp(tg_name, XT_STANDARD_TARGET) == 0 ||
++ strcmp(tg_name, "standard") == 0)
++ return ERR_PTR(-EINVAL);
++
+ /* Re-use the existing target if it's already loaded. */
+ list_for_each_entry(nft_target, &nft_target_list, head) {
+ struct xt_target *target = nft_target->ops.data;
+
++ if (!target->target)
++ continue;
++
+ if (nft_target_cmp(target, tg_name, rev, family))
+ return &nft_target->ops;
+ }
+@@ -837,6 +845,11 @@ nft_target_select_ops(const struct nft_ctx *ctx,
+ if (IS_ERR(target))
+ return ERR_PTR(-ENOENT);
+
++ if (!target->target) {
++ err = -EINVAL;
++ goto err;
++ }
++
+ if (target->targetsize > nla_len(tb[NFTA_TARGET_INFO])) {
+ err = -EINVAL;
+ goto err;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index cb0f02785749..af663a27c191 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2910,6 +2910,8 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ goto out_free;
+ } else if (reserve) {
+ skb_reserve(skb, -reserve);
++ if (len < reserve)
++ skb_reset_network_header(skb);
+ }
+
+ /* Returns -EFAULT on error */
+@@ -4256,6 +4258,8 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ }
+
+ if (req->tp_block_nr) {
++ unsigned int min_frame_size;
++
+ /* Sanity tests and some calculations */
+ err = -EBUSY;
+ if (unlikely(rb->pg_vec))
+@@ -4278,12 +4282,12 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ goto out;
+ if (unlikely(!PAGE_ALIGNED(req->tp_block_size)))
+ goto out;
++ min_frame_size = po->tp_hdrlen + po->tp_reserve;
+ if (po->tp_version >= TPACKET_V3 &&
+- req->tp_block_size <=
+- BLK_PLUS_PRIV((u64)req_u->req3.tp_sizeof_priv) + sizeof(struct tpacket3_hdr))
++ req->tp_block_size <
++ BLK_PLUS_PRIV((u64)req_u->req3.tp_sizeof_priv) + min_frame_size)
+ goto out;
+- if (unlikely(req->tp_frame_size < po->tp_hdrlen +
+- po->tp_reserve))
++ if (unlikely(req->tp_frame_size < min_frame_size))
+ goto out;
+ if (unlikely(req->tp_frame_size & (TPACKET_ALIGNMENT - 1)))
+ goto out;
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 2aa07b547b16..86e1e37eb4e8 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -191,8 +191,13 @@ static int qrtr_node_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+ hdr->type = cpu_to_le32(type);
+ hdr->src_node_id = cpu_to_le32(from->sq_node);
+ hdr->src_port_id = cpu_to_le32(from->sq_port);
+- hdr->dst_node_id = cpu_to_le32(to->sq_node);
+- hdr->dst_port_id = cpu_to_le32(to->sq_port);
++ if (to->sq_port == QRTR_PORT_CTRL) {
++ hdr->dst_node_id = cpu_to_le32(node->nid);
++ hdr->dst_port_id = cpu_to_le32(QRTR_NODE_BCAST);
++ } else {
++ hdr->dst_node_id = cpu_to_le32(to->sq_node);
++ hdr->dst_port_id = cpu_to_le32(to->sq_port);
++ }
+
+ hdr->size = cpu_to_le32(len);
+ hdr->confirm_rx = 0;
+@@ -764,6 +769,10 @@ static int qrtr_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ node = NULL;
+ if (addr->sq_node == QRTR_NODE_BCAST) {
+ enqueue_fn = qrtr_bcast_enqueue;
++ if (addr->sq_port != QRTR_PORT_CTRL) {
++ release_sock(sk);
++ return -ENOTCONN;
++ }
+ } else if (addr->sq_node == ipc->us.sq_node) {
+ enqueue_fn = qrtr_local_enqueue;
+ } else {
+diff --git a/net/rds/connection.c b/net/rds/connection.c
+index abef75da89a7..cfb05953b0e5 100644
+--- a/net/rds/connection.c
++++ b/net/rds/connection.c
+@@ -659,11 +659,19 @@ static void rds_conn_info(struct socket *sock, unsigned int len,
+
+ int rds_conn_init(void)
+ {
++ int ret;
++
++ ret = rds_loop_net_init(); /* register pernet callback */
++ if (ret)
++ return ret;
++
+ rds_conn_slab = kmem_cache_create("rds_connection",
+ sizeof(struct rds_connection),
+ 0, 0, NULL);
+- if (!rds_conn_slab)
++ if (!rds_conn_slab) {
++ rds_loop_net_exit();
+ return -ENOMEM;
++ }
+
+ rds_info_register_func(RDS_INFO_CONNECTIONS, rds_conn_info);
+ rds_info_register_func(RDS_INFO_SEND_MESSAGES,
+@@ -676,6 +684,7 @@ int rds_conn_init(void)
+
+ void rds_conn_exit(void)
+ {
++ rds_loop_net_exit(); /* unregister pernet callback */
+ rds_loop_exit();
+
+ WARN_ON(!hlist_empty(rds_conn_hash));
+diff --git a/net/rds/loop.c b/net/rds/loop.c
+index dac6218a460e..feea1f96ee2a 100644
+--- a/net/rds/loop.c
++++ b/net/rds/loop.c
+@@ -33,6 +33,8 @@
+ #include <linux/kernel.h>
+ #include <linux/slab.h>
+ #include <linux/in.h>
++#include <net/net_namespace.h>
++#include <net/netns/generic.h>
+
+ #include "rds_single_path.h"
+ #include "rds.h"
+@@ -40,6 +42,17 @@
+
+ static DEFINE_SPINLOCK(loop_conns_lock);
+ static LIST_HEAD(loop_conns);
++static atomic_t rds_loop_unloading = ATOMIC_INIT(0);
++
++static void rds_loop_set_unloading(void)
++{
++ atomic_set(&rds_loop_unloading, 1);
++}
++
++static bool rds_loop_is_unloading(struct rds_connection *conn)
++{
++ return atomic_read(&rds_loop_unloading) != 0;
++}
+
+ /*
+ * This 'loopback' transport is a special case for flows that originate
+@@ -165,6 +178,8 @@ void rds_loop_exit(void)
+ struct rds_loop_connection *lc, *_lc;
+ LIST_HEAD(tmp_list);
+
++ rds_loop_set_unloading();
++ synchronize_rcu();
+ /* avoid calling conn_destroy with irqs off */
+ spin_lock_irq(&loop_conns_lock);
+ list_splice(&loop_conns, &tmp_list);
+@@ -177,6 +192,46 @@ void rds_loop_exit(void)
+ }
+ }
+
++static void rds_loop_kill_conns(struct net *net)
++{
++ struct rds_loop_connection *lc, *_lc;
++ LIST_HEAD(tmp_list);
++
++ spin_lock_irq(&loop_conns_lock);
++ list_for_each_entry_safe(lc, _lc, &loop_conns, loop_node) {
++ struct net *c_net = read_pnet(&lc->conn->c_net);
++
++ if (net != c_net)
++ continue;
++ list_move_tail(&lc->loop_node, &tmp_list);
++ }
++ spin_unlock_irq(&loop_conns_lock);
++
++ list_for_each_entry_safe(lc, _lc, &tmp_list, loop_node) {
++ WARN_ON(lc->conn->c_passive);
++ rds_conn_destroy(lc->conn);
++ }
++}
++
++static void __net_exit rds_loop_exit_net(struct net *net)
++{
++ rds_loop_kill_conns(net);
++}
++
++static struct pernet_operations rds_loop_net_ops = {
++ .exit = rds_loop_exit_net,
++};
++
++int rds_loop_net_init(void)
++{
++ return register_pernet_device(&rds_loop_net_ops);
++}
++
++void rds_loop_net_exit(void)
++{
++ unregister_pernet_device(&rds_loop_net_ops);
++}
++
+ /*
+ * This is missing .xmit_* because loop doesn't go through generic
+ * rds_send_xmit() and doesn't call rds_recv_incoming(). .listen_stop and
+@@ -194,4 +249,5 @@ struct rds_transport rds_loop_transport = {
+ .inc_free = rds_loop_inc_free,
+ .t_name = "loopback",
+ .t_type = RDS_TRANS_LOOP,
++ .t_unloading = rds_loop_is_unloading,
+ };
+diff --git a/net/rds/loop.h b/net/rds/loop.h
+index 469fa4b2da4f..bbc8cdd030df 100644
+--- a/net/rds/loop.h
++++ b/net/rds/loop.h
+@@ -5,6 +5,8 @@
+ /* loop.c */
+ extern struct rds_transport rds_loop_transport;
+
++int rds_loop_net_init(void);
++void rds_loop_net_exit(void);
+ void rds_loop_exit(void);
+
+ #endif
+diff --git a/net/sched/act_csum.c b/net/sched/act_csum.c
+index 7e28b2ce1437..e65cc4d35cee 100644
+--- a/net/sched/act_csum.c
++++ b/net/sched/act_csum.c
+@@ -91,7 +91,7 @@ static int tcf_csum_init(struct net *net, struct nlattr *nla,
+ }
+ params_old = rtnl_dereference(p->params);
+
+- params_new->action = parm->action;
++ p->tcf_action = parm->action;
+ params_new->update_flags = parm->update_flags;
+ rcu_assign_pointer(p->params, params_new);
+ if (params_old)
+@@ -561,7 +561,7 @@ static int tcf_csum(struct sk_buff *skb, const struct tc_action *a,
+ tcf_lastuse_update(&p->tcf_tm);
+ bstats_cpu_update(this_cpu_ptr(p->common.cpu_bstats), skb);
+
+- action = params->action;
++ action = READ_ONCE(p->tcf_action);
+ if (unlikely(action == TC_ACT_SHOT))
+ goto drop_stats;
+
+@@ -599,11 +599,11 @@ static int tcf_csum_dump(struct sk_buff *skb, struct tc_action *a, int bind,
+ .index = p->tcf_index,
+ .refcnt = p->tcf_refcnt - ref,
+ .bindcnt = p->tcf_bindcnt - bind,
++ .action = p->tcf_action,
+ };
+ struct tcf_t t;
+
+ params = rtnl_dereference(p->params);
+- opt.action = params->action;
+ opt.update_flags = params->update_flags;
+
+ if (nla_put(skb, TCA_CSUM_PARMS, sizeof(opt), &opt))
+diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c
+index 626dac81a48a..9bc6c2ae98a5 100644
+--- a/net/sched/act_tunnel_key.c
++++ b/net/sched/act_tunnel_key.c
+@@ -36,7 +36,7 @@ static int tunnel_key_act(struct sk_buff *skb, const struct tc_action *a,
+
+ tcf_lastuse_update(&t->tcf_tm);
+ bstats_cpu_update(this_cpu_ptr(t->common.cpu_bstats), skb);
+- action = params->action;
++ action = READ_ONCE(t->tcf_action);
+
+ switch (params->tcft_action) {
+ case TCA_TUNNEL_KEY_ACT_RELEASE:
+@@ -182,7 +182,7 @@ static int tunnel_key_init(struct net *net, struct nlattr *nla,
+
+ params_old = rtnl_dereference(t->params);
+
+- params_new->action = parm->action;
++ t->tcf_action = parm->action;
+ params_new->tcft_action = parm->t_action;
+ params_new->tcft_enc_metadata = metadata;
+
+@@ -254,13 +254,13 @@ static int tunnel_key_dump(struct sk_buff *skb, struct tc_action *a,
+ .index = t->tcf_index,
+ .refcnt = t->tcf_refcnt - ref,
+ .bindcnt = t->tcf_bindcnt - bind,
++ .action = t->tcf_action,
+ };
+ struct tcf_t tm;
+
+ params = rtnl_dereference(t->params);
+
+ opt.t_action = params->tcft_action;
+- opt.action = params->action;
+
+ if (nla_put(skb, TCA_TUNNEL_KEY_PARMS, sizeof(opt), &opt))
+ goto nla_put_failure;
+diff --git a/net/sctp/chunk.c b/net/sctp/chunk.c
+index be296d633e95..ded86c449c49 100644
+--- a/net/sctp/chunk.c
++++ b/net/sctp/chunk.c
+@@ -247,7 +247,9 @@ struct sctp_datamsg *sctp_datamsg_from_user(struct sctp_association *asoc,
+ /* Account for a different sized first fragment */
+ if (msg_len >= first_len) {
+ msg->can_delay = 0;
+- SCTP_INC_STATS(sock_net(asoc->base.sk), SCTP_MIB_FRAGUSRMSGS);
++ if (msg_len > first_len)
++ SCTP_INC_STATS(sock_net(asoc->base.sk),
++ SCTP_MIB_FRAGUSRMSGS);
+ } else {
+ /* Which may be the only one... */
+ first_len = msg_len;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 544bab42f925..9c5f447fa366 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1231,8 +1231,7 @@ static int smc_shutdown(struct socket *sock, int how)
+ lock_sock(sk);
+
+ rc = -ENOTCONN;
+- if ((sk->sk_state != SMC_LISTEN) &&
+- (sk->sk_state != SMC_ACTIVE) &&
++ if ((sk->sk_state != SMC_ACTIVE) &&
+ (sk->sk_state != SMC_PEERCLOSEWAIT1) &&
+ (sk->sk_state != SMC_PEERCLOSEWAIT2) &&
+ (sk->sk_state != SMC_APPCLOSEWAIT1) &&
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index 3a988c22f627..49062e752cbf 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -250,6 +250,7 @@ out:
+ int smc_clc_wait_msg(struct smc_sock *smc, void *buf, int buflen,
+ u8 expected_type)
+ {
++ long rcvtimeo = smc->clcsock->sk->sk_rcvtimeo;
+ struct sock *clc_sk = smc->clcsock->sk;
+ struct smc_clc_msg_hdr *clcm = buf;
+ struct msghdr msg = {NULL, 0};
+@@ -306,7 +307,6 @@ int smc_clc_wait_msg(struct smc_sock *smc, void *buf, int buflen,
+ memset(&msg, 0, sizeof(struct msghdr));
+ iov_iter_kvec(&msg.msg_iter, READ | ITER_KVEC, &vec, 1, datlen);
+ krflags = MSG_WAITALL;
+- smc->clcsock->sk->sk_rcvtimeo = CLC_WAIT_TIME;
+ len = sock_recvmsg(smc->clcsock, &msg, krflags);
+ if (len < datlen || !smc_clc_msg_hdr_valid(clcm)) {
+ smc->sk.sk_err = EPROTO;
+@@ -322,6 +322,7 @@ int smc_clc_wait_msg(struct smc_sock *smc, void *buf, int buflen,
+ }
+
+ out:
++ smc->clcsock->sk->sk_rcvtimeo = rcvtimeo;
+ return reason_code;
+ }
+
+diff --git a/net/tipc/discover.c b/net/tipc/discover.c
+index 9f666e0650e2..2830709957bd 100644
+--- a/net/tipc/discover.c
++++ b/net/tipc/discover.c
+@@ -133,6 +133,8 @@ static void disc_dupl_alert(struct tipc_bearer *b, u32 node_addr,
+ }
+
+ /* tipc_disc_addr_trial(): - handle an address uniqueness trial from peer
++ * Returns true if the message should be dropped by the caller, i.e., if it
++ * is a trial message or we are inside the trial period. Otherwise false.
+ */
+ static bool tipc_disc_addr_trial_msg(struct tipc_discoverer *d,
+ struct tipc_media_addr *maddr,
+@@ -168,8 +170,9 @@ static bool tipc_disc_addr_trial_msg(struct tipc_discoverer *d,
+ msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
+ }
+
++ /* Accept regular link requests/responses only after trial period */
+ if (mtyp != DSC_TRIAL_MSG)
+- return false;
++ return trial;
+
+ sugg_addr = tipc_node_try_addr(net, peer_id, src);
+ if (sugg_addr)
+@@ -284,7 +287,6 @@ static void tipc_disc_timeout(struct timer_list *t)
+ {
+ struct tipc_discoverer *d = from_timer(d, t, timer);
+ struct tipc_net *tn = tipc_net(d->net);
+- u32 self = tipc_own_addr(d->net);
+ struct tipc_media_addr maddr;
+ struct sk_buff *skb = NULL;
+ struct net *net = d->net;
+@@ -298,12 +300,14 @@ static void tipc_disc_timeout(struct timer_list *t)
+ goto exit;
+ }
+
+- /* Did we just leave the address trial period ? */
+- if (!self && !time_before(jiffies, tn->addr_trial_end)) {
+- self = tn->trial_addr;
+- tipc_net_finalize(net, self);
+- msg_set_prevnode(buf_msg(d->skb), self);
++ /* Trial period over? */
++ if (!time_before(jiffies, tn->addr_trial_end)) {
++ /* Did we just leave it? */
++ if (!tipc_own_addr(net))
++ tipc_net_finalize(net, tn->trial_addr);
++
+ msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
++ msg_set_prevnode(buf_msg(d->skb), tipc_own_addr(net));
+ }
+
+ /* Adjust timeout interval according to discovery phase */
+diff --git a/net/tipc/net.c b/net/tipc/net.c
+index 4fbaa0464405..a7f6964c3a4b 100644
+--- a/net/tipc/net.c
++++ b/net/tipc/net.c
+@@ -121,12 +121,17 @@ int tipc_net_init(struct net *net, u8 *node_id, u32 addr)
+
+ void tipc_net_finalize(struct net *net, u32 addr)
+ {
+- tipc_set_node_addr(net, addr);
+- smp_mb();
+- tipc_named_reinit(net);
+- tipc_sk_reinit(net);
+- tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,
+- TIPC_CLUSTER_SCOPE, 0, addr);
++ struct tipc_net *tn = tipc_net(net);
++
++ spin_lock_bh(&tn->node_list_lock);
++ if (!tipc_own_addr(net)) {
++ tipc_set_node_addr(net, addr);
++ tipc_named_reinit(net);
++ tipc_sk_reinit(net);
++ tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,
++ TIPC_CLUSTER_SCOPE, 0, addr);
++ }
++ spin_unlock_bh(&tn->node_list_lock);
+ }
+
+ void tipc_net_stop(struct net *net)
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index f29549de9245..aa09c515775f 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -776,6 +776,7 @@ static u32 tipc_node_suggest_addr(struct net *net, u32 addr)
+ }
+
+ /* tipc_node_try_addr(): Check if addr can be used by peer, suggest other if not
++ * Returns suggested address if any, otherwise 0
+ */
+ u32 tipc_node_try_addr(struct net *net, u8 *id, u32 addr)
+ {
+@@ -798,12 +799,14 @@ u32 tipc_node_try_addr(struct net *net, u8 *id, u32 addr)
+ if (n) {
+ addr = n->addr;
+ tipc_node_put(n);
++ return addr;
+ }
+- /* Even this node may be in trial phase */
++
++ /* Even this node may be in conflict */
+ if (tn->trial_addr == addr)
+ return tipc_node_suggest_addr(net, addr);
+
+- return addr;
++ return 0;
+ }
+
+ void tipc_node_check_dest(struct net *net, u32 addr,
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 60708a4ebed4..237e227c9707 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -705,6 +705,10 @@ static int decrypt_skb(struct sock *sk, struct sk_buff *skb,
+ nsg = skb_to_sgvec(skb, &sgin[1],
+ rxm->offset + tls_ctx->rx.prepend_size,
+ rxm->full_len - tls_ctx->rx.prepend_size);
++ if (nsg < 0) {
++ ret = nsg;
++ goto out;
++ }
+
+ tls_make_aad(ctx->rx_aad_ciphertext,
+ rxm->full_len - tls_ctx->rx.overhead_size,
+@@ -716,6 +720,7 @@ static int decrypt_skb(struct sock *sk, struct sk_buff *skb,
+ rxm->full_len - tls_ctx->rx.overhead_size,
+ skb, sk->sk_allocation);
+
++out:
+ if (sgin != &sgin_arr[0])
+ kfree(sgin);
+
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 7c5135a92d76..f8e4371a1129 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -6061,7 +6061,7 @@ do { \
+ nl80211_check_s32);
+ /*
+ * Check HT operation mode based on
+- * IEEE 802.11 2012 8.4.2.59 HT Operation element.
++ * IEEE 802.11-2016 9.4.2.57 HT Operation element.
+ */
+ if (tb[NL80211_MESHCONF_HT_OPMODE]) {
+ ht_opmode = nla_get_u16(tb[NL80211_MESHCONF_HT_OPMODE]);
+@@ -6071,22 +6071,9 @@ do { \
+ IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT))
+ return -EINVAL;
+
+- if ((ht_opmode & IEEE80211_HT_OP_MODE_NON_GF_STA_PRSNT) &&
+- (ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT))
+- return -EINVAL;
++ /* NON_HT_STA bit is reserved, but some programs set it */
++ ht_opmode &= ~IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT;
+
+- switch (ht_opmode & IEEE80211_HT_OP_MODE_PROTECTION) {
+- case IEEE80211_HT_OP_MODE_PROTECTION_NONE:
+- case IEEE80211_HT_OP_MODE_PROTECTION_20MHZ:
+- if (ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT)
+- return -EINVAL;
+- break;
+- case IEEE80211_HT_OP_MODE_PROTECTION_NONMEMBER:
+- case IEEE80211_HT_OP_MODE_PROTECTION_NONHT_MIXED:
+- if (!(ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT))
+- return -EINVAL;
+- break;
+- }
+ cfg->ht_opmode = ht_opmode;
+ mask |= (1 << (NL80211_MESHCONF_HT_OPMODE - 1));
+ }
+@@ -10716,9 +10703,12 @@ static int nl80211_set_wowlan(struct sk_buff *skb, struct genl_info *info)
+ rem) {
+ u8 *mask_pat;
+
+- nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat,
+- nl80211_packet_pattern_policy,
+- info->extack);
++ err = nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat,
++ nl80211_packet_pattern_policy,
++ info->extack);
++ if (err)
++ goto error;
++
+ err = -EINVAL;
+ if (!pat_tb[NL80211_PKTPAT_MASK] ||
+ !pat_tb[NL80211_PKTPAT_PATTERN])
+@@ -10967,8 +10957,11 @@ static int nl80211_parse_coalesce_rule(struct cfg80211_registered_device *rdev,
+ rem) {
+ u8 *mask_pat;
+
+- nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat,
+- nl80211_packet_pattern_policy, NULL);
++ err = nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat,
++ nl80211_packet_pattern_policy, NULL);
++ if (err)
++ return err;
++
+ if (!pat_tb[NL80211_PKTPAT_MASK] ||
+ !pat_tb[NL80211_PKTPAT_PATTERN])
+ return -EINVAL;
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 080035f056d9..1e50b70ad668 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -1671,9 +1671,11 @@ static inline unsigned int userpolicy_type_attrsize(void)
+ #ifdef CONFIG_XFRM_SUB_POLICY
+ static int copy_to_user_policy_type(u8 type, struct sk_buff *skb)
+ {
+- struct xfrm_userpolicy_type upt = {
+- .type = type,
+- };
++ struct xfrm_userpolicy_type upt;
++
++ /* Sadly there are two holes in struct xfrm_userpolicy_type */
++ memset(&upt, 0, sizeof(upt));
++ upt.type = type;
+
+ return nla_put(skb, XFRMA_POLICY_TYPE, sizeof(upt), &upt);
+ }
+diff --git a/samples/bpf/parse_varlen.c b/samples/bpf/parse_varlen.c
+index 95c16324760c..0b6f22feb2c9 100644
+--- a/samples/bpf/parse_varlen.c
++++ b/samples/bpf/parse_varlen.c
+@@ -6,6 +6,7 @@
+ */
+ #define KBUILD_MODNAME "foo"
+ #include <linux/if_ether.h>
++#include <linux/if_vlan.h>
+ #include <linux/ip.h>
+ #include <linux/ipv6.h>
+ #include <linux/in.h>
+@@ -108,11 +109,6 @@ static int parse_ipv6(void *data, uint64_t nh_off, void *data_end)
+ return 0;
+ }
+
+-struct vlan_hdr {
+- uint16_t h_vlan_TCI;
+- uint16_t h_vlan_encapsulated_proto;
+-};
+-
+ SEC("varlen")
+ int handle_ingress(struct __sk_buff *skb)
+ {
+diff --git a/samples/bpf/test_overhead_user.c b/samples/bpf/test_overhead_user.c
+index e1d35e07a10e..da71dcc21634 100644
+--- a/samples/bpf/test_overhead_user.c
++++ b/samples/bpf/test_overhead_user.c
+@@ -6,6 +6,7 @@
+ */
+ #define _GNU_SOURCE
+ #include <sched.h>
++#include <errno.h>
+ #include <stdio.h>
+ #include <sys/types.h>
+ #include <asm/unistd.h>
+@@ -44,8 +45,13 @@ static void test_task_rename(int cpu)
+ exit(1);
+ }
+ start_time = time_get_ns();
+- for (i = 0; i < MAX_CNT; i++)
+- write(fd, buf, sizeof(buf));
++ for (i = 0; i < MAX_CNT; i++) {
++ if (write(fd, buf, sizeof(buf)) < 0) {
++ printf("task rename failed: %s\n", strerror(errno));
++ close(fd);
++ return;
++ }
++ }
+ printf("task_rename:%d: %lld events per sec\n",
+ cpu, MAX_CNT * 1000000000ll / (time_get_ns() - start_time));
+ close(fd);
+@@ -63,8 +69,13 @@ static void test_urandom_read(int cpu)
+ exit(1);
+ }
+ start_time = time_get_ns();
+- for (i = 0; i < MAX_CNT; i++)
+- read(fd, buf, sizeof(buf));
++ for (i = 0; i < MAX_CNT; i++) {
++ if (read(fd, buf, sizeof(buf)) < 0) {
++ printf("failed to read from /dev/urandom: %s\n", strerror(errno));
++ close(fd);
++ return;
++ }
++ }
+ printf("urandom_read:%d: %lld events per sec\n",
+ cpu, MAX_CNT * 1000000000ll / (time_get_ns() - start_time));
+ close(fd);
+diff --git a/samples/bpf/trace_event_user.c b/samples/bpf/trace_event_user.c
+index 56f7a259a7c9..ff2b8dae25ec 100644
+--- a/samples/bpf/trace_event_user.c
++++ b/samples/bpf/trace_event_user.c
+@@ -121,6 +121,16 @@ static void print_stacks(void)
+ }
+ }
+
++static inline int generate_load(void)
++{
++ if (system("dd if=/dev/zero of=/dev/null count=5000k status=none") < 0) {
++ printf("failed to generate some load with dd: %s\n", strerror(errno));
++ return -1;
++ }
++
++ return 0;
++}
++
+ static void test_perf_event_all_cpu(struct perf_event_attr *attr)
+ {
+ int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
+@@ -141,7 +151,11 @@ static void test_perf_event_all_cpu(struct perf_event_attr *attr)
+ assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_SET_BPF, prog_fd[0]) == 0);
+ assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE) == 0);
+ }
+- system("dd if=/dev/zero of=/dev/null count=5000k status=none");
++
++ if (generate_load() < 0) {
++ error = 1;
++ goto all_cpu_err;
++ }
+ print_stacks();
+ all_cpu_err:
+ for (i--; i >= 0; i--) {
+@@ -155,7 +169,7 @@ all_cpu_err:
+
+ static void test_perf_event_task(struct perf_event_attr *attr)
+ {
+- int pmu_fd;
++ int pmu_fd, error = 0;
+
+ /* per task perf event, enable inherit so the "dd ..." command can be traced properly.
+ * Enabling inherit will cause bpf_perf_prog_read_time helper failure.
+@@ -170,10 +184,17 @@ static void test_perf_event_task(struct perf_event_attr *attr)
+ }
+ assert(ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd[0]) == 0);
+ assert(ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE) == 0);
+- system("dd if=/dev/zero of=/dev/null count=5000k status=none");
++
++ if (generate_load() < 0) {
++ error = 1;
++ goto err;
++ }
+ print_stacks();
++err:
+ ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
+ close(pmu_fd);
++ if (error)
++ int_exit(0);
+ }
+
+ static void test_bpf_perf_event(void)
+diff --git a/samples/bpf/xdp2skb_meta.sh b/samples/bpf/xdp2skb_meta.sh
+index b9c9549c4c27..4bde9d066c46 100755
+--- a/samples/bpf/xdp2skb_meta.sh
++++ b/samples/bpf/xdp2skb_meta.sh
+@@ -16,8 +16,8 @@
+ BPF_FILE=xdp2skb_meta_kern.o
+ DIR=$(dirname $0)
+
+-export TC=/usr/sbin/tc
+-export IP=/usr/sbin/ip
++[ -z "$TC" ] && TC=tc
++[ -z "$IP" ] && IP=ip
+
+ function usage() {
+ echo ""
+@@ -53,7 +53,7 @@ function _call_cmd() {
+ local allow_fail="$2"
+ shift 2
+ if [[ -n "$VERBOSE" ]]; then
+- echo "$(basename $cmd) $@"
++ echo "$cmd $@"
+ fi
+ if [[ -n "$DRYRUN" ]]; then
+ return
+diff --git a/scripts/kconfig/zconf.y b/scripts/kconfig/zconf.y
+index ad6305b0f40c..f8f7cea86d7a 100644
+--- a/scripts/kconfig/zconf.y
++++ b/scripts/kconfig/zconf.y
+@@ -31,7 +31,7 @@ struct symbol *symbol_hash[SYMBOL_HASHSIZE];
+ static struct menu *current_menu, *current_entry;
+
+ %}
+-%expect 32
++%expect 31
+
+ %union
+ {
+@@ -345,7 +345,7 @@ choice_block:
+
+ /* if entry */
+
+-if_entry: T_IF expr nl
++if_entry: T_IF expr T_EOL
+ {
+ printd(DEBUG_PARSE, "%s:%d:if\n", zconf_curname(), zconf_lineno());
+ menu_add_entry(NULL);
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 0b414836bebd..60419cc2b7b8 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -2296,6 +2296,7 @@ static void smack_task_to_inode(struct task_struct *p, struct inode *inode)
+ struct smack_known *skp = smk_of_task_struct(p);
+
+ isp->smk_inode = skp;
++ isp->smk_flags |= SMK_INODE_INSTANT;
+ }
+
+ /*
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index ee8d0d86f0df..6fd4b074b206 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -2004,7 +2004,8 @@ static int snd_seq_ioctl_query_next_client(struct snd_seq_client *client,
+ struct snd_seq_client *cptr = NULL;
+
+ /* search for next client */
+- info->client++;
++ if (info->client < INT_MAX)
++ info->client++;
+ if (info->client < 0)
+ info->client = 0;
+ for (; info->client < SNDRV_SEQ_MAX_CLIENTS; info->client++) {
+diff --git a/tools/build/Build.include b/tools/build/Build.include
+index d9048f145f97..950c1504ca37 100644
+--- a/tools/build/Build.include
++++ b/tools/build/Build.include
+@@ -98,4 +98,4 @@ cxx_flags = -Wp,-MD,$(depfile) -Wp,-MT,$@ $(CXXFLAGS) -D"BUILD_STR(s)=\#s" $(CXX
+ ###
+ ## HOSTCC C flags
+
+-host_c_flags = -Wp,-MD,$(depfile) -Wp,-MT,$@ $(CHOSTFLAGS) -D"BUILD_STR(s)=\#s" $(CHOSTFLAGS_$(basetarget).o) $(CHOSTFLAGS_$(obj))
++host_c_flags = -Wp,-MD,$(depfile) -Wp,-MT,$@ $(HOSTCFLAGS) -D"BUILD_STR(s)=\#s" $(HOSTCFLAGS_$(basetarget).o) $(HOSTCFLAGS_$(obj))
+diff --git a/tools/build/Makefile b/tools/build/Makefile
+index 5eb4b5ad79cb..5edf65e684ab 100644
+--- a/tools/build/Makefile
++++ b/tools/build/Makefile
+@@ -43,7 +43,7 @@ $(OUTPUT)fixdep-in.o: FORCE
+ $(Q)$(MAKE) $(build)=fixdep
+
+ $(OUTPUT)fixdep: $(OUTPUT)fixdep-in.o
+- $(QUIET_LINK)$(HOSTCC) $(LDFLAGS) -o $@ $<
++ $(QUIET_LINK)$(HOSTCC) $(HOSTLDFLAGS) -o $@ $<
+
+ FORCE:
+
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index 4e60e105583e..0d1acb704f64 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -302,19 +302,34 @@ static int read_symbols(struct elf *elf)
+ continue;
+ sym->pfunc = sym->cfunc = sym;
+ coldstr = strstr(sym->name, ".cold.");
+- if (coldstr) {
+- coldstr[0] = '\0';
+- pfunc = find_symbol_by_name(elf, sym->name);
+- coldstr[0] = '.';
+-
+- if (!pfunc) {
+- WARN("%s(): can't find parent function",
+- sym->name);
+- goto err;
+- }
+-
+- sym->pfunc = pfunc;
+- pfunc->cfunc = sym;
++ if (!coldstr)
++ continue;
++
++ coldstr[0] = '\0';
++ pfunc = find_symbol_by_name(elf, sym->name);
++ coldstr[0] = '.';
++
++ if (!pfunc) {
++ WARN("%s(): can't find parent function",
++ sym->name);
++ goto err;
++ }
++
++ sym->pfunc = pfunc;
++ pfunc->cfunc = sym;
++
++ /*
++ * Unfortunately, -fno-reorder-functions puts the child
++ * inside the parent. Remove the overlap so we can
++ * have sane assumptions.
++ *
++ * Note that pfunc->len now no longer matches
++ * pfunc->sym.st_size.
++ */
++ if (sym->sec == pfunc->sec &&
++ sym->offset >= pfunc->offset &&
++ sym->offset + sym->len == pfunc->offset + pfunc->len) {
++ pfunc->len -= sym->len;
+ }
+ }
+ }
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index ae7dc46e8f8a..46d69c5d2ec3 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -207,8 +207,7 @@ ifdef PYTHON_CONFIG
+ PYTHON_EMBED_LDOPTS := $(shell $(PYTHON_CONFIG_SQ) --ldflags 2>/dev/null)
+ PYTHON_EMBED_LDFLAGS := $(call strip-libs,$(PYTHON_EMBED_LDOPTS))
+ PYTHON_EMBED_LIBADD := $(call grep-libs,$(PYTHON_EMBED_LDOPTS)) -lutil
+- PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --cflags 2>/dev/null)
+- PYTHON_EMBED_CCOPTS := $(filter-out -specs=%,$(PYTHON_EMBED_CCOPTS))
++ PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --includes 2>/dev/null)
+ FLAGS_PYTHON_EMBED := $(PYTHON_EMBED_CCOPTS) $(PYTHON_EMBED_LDOPTS)
+ endif
+
+diff --git a/tools/perf/arch/powerpc/util/skip-callchain-idx.c b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+index 0c370f81e002..bd630c222e65 100644
+--- a/tools/perf/arch/powerpc/util/skip-callchain-idx.c
++++ b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+@@ -243,7 +243,7 @@ int arch_skip_callchain_idx(struct thread *thread, struct ip_callchain *chain)
+ u64 ip;
+ u64 skip_slot = -1;
+
+- if (chain->nr < 3)
++ if (!chain || chain->nr < 3)
+ return skip_slot;
+
+ ip = chain->ips[2];
+diff --git a/tools/perf/arch/x86/util/perf_regs.c b/tools/perf/arch/x86/util/perf_regs.c
+index 4b2caf6d48e7..fead6b3b4206 100644
+--- a/tools/perf/arch/x86/util/perf_regs.c
++++ b/tools/perf/arch/x86/util/perf_regs.c
+@@ -226,7 +226,7 @@ int arch_sdt_arg_parse_op(char *old_op, char **new_op)
+ else if (rm[2].rm_so != rm[2].rm_eo)
+ prefix[0] = '+';
+ else
+- strncpy(prefix, "+0", 2);
++ scnprintf(prefix, sizeof(prefix), "+0");
+ }
+
+ /* Rename register */
+diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c
+index 63eb49082774..44195514b19e 100644
+--- a/tools/perf/bench/numa.c
++++ b/tools/perf/bench/numa.c
+@@ -1098,7 +1098,7 @@ static void *worker_thread(void *__tdata)
+ u8 *global_data;
+ u8 *process_data;
+ u8 *thread_data;
+- u64 bytes_done;
++ u64 bytes_done, secs;
+ long work_done;
+ u32 l;
+ struct rusage rusage;
+@@ -1254,7 +1254,8 @@ static void *worker_thread(void *__tdata)
+ timersub(&stop, &start0, &diff);
+ td->runtime_ns = diff.tv_sec * NSEC_PER_SEC;
+ td->runtime_ns += diff.tv_usec * NSEC_PER_USEC;
+- td->speed_gbs = bytes_done / (td->runtime_ns / NSEC_PER_SEC) / 1e9;
++ secs = td->runtime_ns / NSEC_PER_SEC;
++ td->speed_gbs = secs ? bytes_done / secs / 1e9 : 0;
+
+ getrusage(RUSAGE_THREAD, &rusage);
+ td->system_time_ns = rusage.ru_stime.tv_sec * NSEC_PER_SEC;
+diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
+index 51709a961496..3eeb6420ceea 100644
+--- a/tools/perf/builtin-annotate.c
++++ b/tools/perf/builtin-annotate.c
+@@ -283,6 +283,15 @@ out_put:
+ return ret;
+ }
+
++static int process_feature_event(struct perf_tool *tool,
++ union perf_event *event,
++ struct perf_session *session)
++{
++ if (event->feat.feat_id < HEADER_LAST_FEATURE)
++ return perf_event__process_feature(tool, event, session);
++ return 0;
++}
++
+ static int hist_entry__tty_annotate(struct hist_entry *he,
+ struct perf_evsel *evsel,
+ struct perf_annotate *ann)
+@@ -471,7 +480,7 @@ int cmd_annotate(int argc, const char **argv)
+ .attr = perf_event__process_attr,
+ .build_id = perf_event__process_build_id,
+ .tracing_data = perf_event__process_tracing_data,
+- .feature = perf_event__process_feature,
++ .feature = process_feature_event,
+ .ordered_events = true,
+ .ordering_requires_timestamps = true,
+ },
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 0f198f6d9b77..e5f0782b225d 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -226,7 +226,8 @@ static int process_feature_event(struct perf_tool *tool,
+ }
+
+ /*
+- * All features are received, we can force the
++ * (feat_id = HEADER_LAST_FEATURE) is the end marker which
++ * means all features are received, now we can force the
+ * group if needed.
+ */
+ setup_forced_leader(rep, session->evlist);
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index e0a9845b6cbc..553715ac8320 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -1832,6 +1832,7 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ struct perf_evlist *evlist;
+ struct perf_evsel *evsel, *pos;
+ int err;
++ static struct perf_evsel_script *es;
+
+ err = perf_event__process_attr(tool, event, pevlist);
+ if (err)
+@@ -1840,6 +1841,19 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ evlist = *pevlist;
+ evsel = perf_evlist__last(*pevlist);
+
++ if (!evsel->priv) {
++ if (scr->per_event_dump) {
++ evsel->priv = perf_evsel_script__new(evsel,
++ scr->session->data);
++ } else {
++ es = zalloc(sizeof(*es));
++ if (!es)
++ return -ENOMEM;
++ es->fp = stdout;
++ evsel->priv = es;
++ }
++ }
++
+ if (evsel->attr.type >= PERF_TYPE_MAX &&
+ evsel->attr.type != PERF_TYPE_SYNTH)
+ return 0;
+@@ -3028,6 +3042,15 @@ int process_cpu_map_event(struct perf_tool *tool __maybe_unused,
+ return set_maps(script);
+ }
+
++static int process_feature_event(struct perf_tool *tool,
++ union perf_event *event,
++ struct perf_session *session)
++{
++ if (event->feat.feat_id < HEADER_LAST_FEATURE)
++ return perf_event__process_feature(tool, event, session);
++ return 0;
++}
++
+ #ifdef HAVE_AUXTRACE_SUPPORT
+ static int perf_script__process_auxtrace_info(struct perf_tool *tool,
+ union perf_event *event,
+@@ -3072,7 +3095,7 @@ int cmd_script(int argc, const char **argv)
+ .attr = process_attr,
+ .event_update = perf_event__process_event_update,
+ .tracing_data = perf_event__process_tracing_data,
+- .feature = perf_event__process_feature,
++ .feature = process_feature_event,
+ .build_id = perf_event__process_build_id,
+ .id_index = perf_event__process_id_index,
+ .auxtrace_info = perf_script__process_auxtrace_info,
+diff --git a/tools/perf/jvmti/jvmti_agent.c b/tools/perf/jvmti/jvmti_agent.c
+index 0c6d1002b524..ac1bcdc17dae 100644
+--- a/tools/perf/jvmti/jvmti_agent.c
++++ b/tools/perf/jvmti/jvmti_agent.c
+@@ -35,6 +35,7 @@
+ #include <sys/mman.h>
+ #include <syscall.h> /* for gettid() */
+ #include <err.h>
++#include <linux/kernel.h>
+
+ #include "jvmti_agent.h"
+ #include "../util/jitdump.h"
+@@ -249,7 +250,7 @@ void *jvmti_open(void)
+ /*
+ * jitdump file name
+ */
+- snprintf(dump_path, PATH_MAX, "%s/jit-%i.dump", jit_path, getpid());
++ scnprintf(dump_path, PATH_MAX, "%s/jit-%i.dump", jit_path, getpid());
+
+ fd = open(dump_path, O_CREAT|O_TRUNC|O_RDWR, 0666);
+ if (fd == -1)
+diff --git a/tools/perf/pmu-events/Build b/tools/perf/pmu-events/Build
+index 17783913d330..215ba30b8534 100644
+--- a/tools/perf/pmu-events/Build
++++ b/tools/perf/pmu-events/Build
+@@ -1,7 +1,7 @@
+ hostprogs := jevents
+
+ jevents-y += json.o jsmn.o jevents.o
+-CHOSTFLAGS_jevents.o = -I$(srctree)/tools/include
++HOSTCFLAGS_jevents.o = -I$(srctree)/tools/include
+ pmu-events-y += pmu-events.o
+ JDIR = pmu-events/arch/$(SRCARCH)
+ JSON = $(shell [ -d $(JDIR) ] && \
+diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
+index cac8f8889bc3..6a858355091d 100644
+--- a/tools/perf/tests/builtin-test.c
++++ b/tools/perf/tests/builtin-test.c
+@@ -422,7 +422,7 @@ static const char *shell_test__description(char *description, size_t size,
+
+ #define for_each_shell_test(dir, base, ent) \
+ while ((ent = readdir(dir)) != NULL) \
+- if (!is_directory(base, ent))
++ if (!is_directory(base, ent) && ent->d_name[0] != '.')
+
+ static const char *shell_tests__dir(char *path, size_t size)
+ {
+diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c
+index 18b06444f230..1509ef2c10c8 100644
+--- a/tools/perf/tests/parse-events.c
++++ b/tools/perf/tests/parse-events.c
+@@ -1672,6 +1672,7 @@ static struct terms_test test__terms[] = {
+
+ static int test_event(struct evlist_test *e)
+ {
++ struct parse_events_error err = { .idx = 0, };
+ struct perf_evlist *evlist;
+ int ret;
+
+@@ -1679,10 +1680,11 @@ static int test_event(struct evlist_test *e)
+ if (evlist == NULL)
+ return -ENOMEM;
+
+- ret = parse_events(evlist, e->name, NULL);
++ ret = parse_events(evlist, e->name, &err);
+ if (ret) {
+- pr_debug("failed to parse event '%s', err %d\n",
+- e->name, ret);
++ pr_debug("failed to parse event '%s', err %d, str '%s'\n",
++ e->name, ret, err.str);
++ parse_events_print_error(&err, e->name);
+ } else {
+ ret = e->check(evlist);
+ }
+diff --git a/tools/perf/tests/topology.c b/tools/perf/tests/topology.c
+index 40e30a26b23c..9497d02f69e6 100644
+--- a/tools/perf/tests/topology.c
++++ b/tools/perf/tests/topology.c
+@@ -45,6 +45,7 @@ static int session_write_header(char *path)
+
+ perf_header__set_feat(&session->header, HEADER_CPU_TOPOLOGY);
+ perf_header__set_feat(&session->header, HEADER_NRCPUS);
++ perf_header__set_feat(&session->header, HEADER_ARCH);
+
+ session->header.data_size += DATA_SIZE;
+
+diff --git a/tools/perf/util/c++/clang.cpp b/tools/perf/util/c++/clang.cpp
+index bf31ceab33bd..89512504551b 100644
+--- a/tools/perf/util/c++/clang.cpp
++++ b/tools/perf/util/c++/clang.cpp
+@@ -146,8 +146,15 @@ getBPFObjectFromModule(llvm::Module *Module)
+ raw_svector_ostream ostream(*Buffer);
+
+ legacy::PassManager PM;
+- if (TargetMachine->addPassesToEmitFile(PM, ostream,
+- TargetMachine::CGFT_ObjectFile)) {
++ bool NotAdded;
++#if CLANG_VERSION_MAJOR < 7
++ NotAdded = TargetMachine->addPassesToEmitFile(PM, ostream,
++ TargetMachine::CGFT_ObjectFile);
++#else
++ NotAdded = TargetMachine->addPassesToEmitFile(PM, ostream, nullptr,
++ TargetMachine::CGFT_ObjectFile);
++#endif
++ if (NotAdded) {
+ llvm::errs() << "TargetMachine can't emit a file of this type\n";
+ return std::unique_ptr<llvm::SmallVectorImpl<char>>(nullptr);;
+ }
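The #if above tracks an API change: starting with LLVM/Clang 7, TargetMachine::addPassesToEmitFile() takes an extra raw_pwrite_stream * for the split-DWARF output, passed as nullptr here. Note the inverted sense of the return value, which is why the temporary is named NotAdded: the method returns true when it cannot emit the requested file type. A sketch of the two signatures as this code sees them (abbreviated; the matching LLVM headers are authoritative):

// Before LLVM 7 (abbreviated):
bool addPassesToEmitFile(PassManagerBase &PM, raw_pwrite_stream &Out,
                         CodeGenFileType FileType);
// LLVM 7 and later (abbreviated): extra DWO stream, may be nullptr.
bool addPassesToEmitFile(PassManagerBase &PM, raw_pwrite_stream &Out,
                         raw_pwrite_stream *DwoOut, CodeGenFileType FileType);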
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index a8bff2178fbc..9d3907db0802 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -2113,6 +2113,7 @@ static int process_cpu_topology(struct feat_fd *ff, void *data __maybe_unused)
+ int cpu_nr = ff->ph->env.nr_cpus_avail;
+ u64 size = 0;
+ struct perf_header *ph = ff->ph;
++ bool do_core_id_test = true;
+
+ ph->env.cpu = calloc(cpu_nr, sizeof(*ph->env.cpu));
+ if (!ph->env.cpu)
+@@ -2167,6 +2168,13 @@ static int process_cpu_topology(struct feat_fd *ff, void *data __maybe_unused)
+ return 0;
+ }
+
++ /* On s390 the socket_id number is not related to the numbers of cpus.
++ * The socket_id number might be higher than the numbers of cpus.
++ * This depends on the configuration.
++ */
++ if (ph->env.arch && !strncmp(ph->env.arch, "s390", 4))
++ do_core_id_test = false;
++
+ for (i = 0; i < (u32)cpu_nr; i++) {
+ if (do_read_u32(ff, &nr))
+ goto free_cpu;
+@@ -2176,7 +2184,7 @@ static int process_cpu_topology(struct feat_fd *ff, void *data __maybe_unused)
+ if (do_read_u32(ff, &nr))
+ goto free_cpu;
+
+- if (nr != (u32)-1 && nr > (u32)cpu_nr) {
++ if (do_core_id_test && nr != (u32)-1 && nr > (u32)cpu_nr) {
+ pr_debug("socket_id number is too big."
+ "You may need to upgrade the perf tool.\n");
+ goto free_cpu;
+@@ -3442,7 +3450,7 @@ int perf_event__process_feature(struct perf_tool *tool,
+ pr_warning("invalid record type %d in pipe-mode\n", type);
+ return 0;
+ }
+- if (feat == HEADER_RESERVED || feat > HEADER_LAST_FEATURE) {
++ if (feat == HEADER_RESERVED || feat >= HEADER_LAST_FEATURE) {
+ pr_warning("invalid record type %d in pipe-mode\n", type);
+ return -1;
+ }
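The one-character change above (> to >=) closes an off-by-one: HEADER_LAST_FEATURE is the sentinel one past the last valid feature, so a value equal to it must be rejected along with everything above it. A self-contained sketch of the idiom (enum values illustrative, not perf's real feature list):

#include <stdio.h>

enum feat { FEAT_A, FEAT_B, FEAT_LAST };        /* FEAT_LAST is a sentinel */

static int feat_valid(int feat)
{
        return feat >= 0 && feat < FEAT_LAST;   /* feat == FEAT_LAST is invalid */
}

int main(void)
{
        printf("%d %d\n", feat_valid(FEAT_B), feat_valid(FEAT_LAST));   /* 1 0 */
        return 0;
}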
+diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
+index 1cca0a2fa641..c3c0ce8cdc55 100644
+--- a/tools/perf/util/llvm-utils.c
++++ b/tools/perf/util/llvm-utils.c
+@@ -265,16 +265,16 @@ static const char *kinc_fetch_script =
+ "#!/usr/bin/env sh\n"
+ "if ! test -d \"$KBUILD_DIR\"\n"
+ "then\n"
+-" exit -1\n"
++" exit 1\n"
+ "fi\n"
+ "if ! test -f \"$KBUILD_DIR/include/generated/autoconf.h\"\n"
+ "then\n"
+-" exit -1\n"
++" exit 1\n"
+ "fi\n"
+ "TMPDIR=`mktemp -d`\n"
+ "if test -z \"$TMPDIR\"\n"
+ "then\n"
+-" exit -1\n"
++" exit 1\n"
+ "fi\n"
+ "cat << EOF > $TMPDIR/Makefile\n"
+ "obj-y := dummy.o\n"
+diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
+index 155d2570274f..da8fe57691b8 100644
+--- a/tools/perf/util/parse-events.y
++++ b/tools/perf/util/parse-events.y
+@@ -227,11 +227,16 @@ event_def: event_pmu |
+ event_pmu:
+ PE_NAME opt_pmu_config
+ {
++ struct parse_events_state *parse_state = _parse_state;
++ struct parse_events_error *error = parse_state->error;
+ struct list_head *list, *orig_terms, *terms;
+
+ if (parse_events_copy_term_list($2, &orig_terms))
+ YYABORT;
+
++ if (error)
++ error->idx = @1.first_column;
++
+ ALLOC_LIST(list);
+ if (parse_events_add_pmu(_parse_state, list, $1, $2, false, false)) {
+ struct perf_pmu *pmu = NULL;
+diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
+index 7f8afacd08ee..39894b96b5d6 100644
+--- a/tools/perf/util/scripting-engines/trace-event-python.c
++++ b/tools/perf/util/scripting-engines/trace-event-python.c
+@@ -676,14 +676,11 @@ static void python_process_tracepoint(struct perf_sample *sample,
+ if (_PyTuple_Resize(&t, n) == -1)
+ Py_FatalError("error resizing Python tuple");
+
+- if (!dict) {
++ if (!dict)
+ call_object(handler, t, handler_name);
+- } else {
++ else
+ call_object(handler, t, default_handler_name);
+- Py_DECREF(dict);
+- }
+
+- Py_XDECREF(all_entries_dict);
+ Py_DECREF(t);
+ }
+
+@@ -1003,7 +1000,6 @@ static void python_process_general_event(struct perf_sample *sample,
+
+ call_object(handler, t, handler_name);
+
+- Py_DECREF(dict);
+ Py_DECREF(t);
+ }
+
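The deletions above remove over-decrefs rather than add anything: once the dict has been placed into the argument tuple, the tuple owns that reference, so decrementing it again after the call drops the count below its true value and can free an object that is still reachable. A sketch of the ownership rule involved (illustrative; the CPython C API's reference-stealing conventions are the authority here):

#include <Python.h>

int main(void)
{
        Py_Initialize();
        PyObject *t = PyTuple_New(1);
        PyObject *dict = PyDict_New();

        PyTuple_SetItem(t, 0, dict);    /* reference stolen by the tuple */
        /* ... a handler would be called with t here ... */
        Py_DECREF(t);                   /* releases the dict as well */
        Py_Finalize();
        return 0;
}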
+diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
+index 4ea385be528f..51b7ce7b9ad3 100644
+--- a/tools/testing/nvdimm/test/nfit.c
++++ b/tools/testing/nvdimm/test/nfit.c
+@@ -1989,8 +1989,7 @@ static void nfit_test0_setup(struct nfit_test *t)
+ pcap->header.type = ACPI_NFIT_TYPE_CAPABILITIES;
+ pcap->header.length = sizeof(*pcap);
+ pcap->highest_capability = 1;
+- pcap->capabilities = ACPI_NFIT_CAPABILITY_CACHE_FLUSH |
+- ACPI_NFIT_CAPABILITY_MEM_FLUSH;
++ pcap->capabilities = ACPI_NFIT_CAPABILITY_MEM_FLUSH;
+ offset += pcap->header.length;
+
+ if (t->setup_hotplug) {
+diff --git a/tools/testing/selftests/bpf/test_kmod.sh b/tools/testing/selftests/bpf/test_kmod.sh
+index 35669ccd4d23..9df0d2ac45f8 100755
+--- a/tools/testing/selftests/bpf/test_kmod.sh
++++ b/tools/testing/selftests/bpf/test_kmod.sh
+@@ -1,6 +1,15 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
++msg="skip all tests:"
++if [ "$(id -u)" != "0" ]; then
++ echo $msg please run this as root >&2
++ exit $ksft_skip
++fi
++
+ SRC_TREE=../../../../
+
+ test_run()
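This hunk and several like it later in the patch converge on one convention: the kselftest harness reserves exit status 4 for "skipped", so tests whose prerequisites are missing (here, not running as root) now exit 4 instead of failing or silently passing. The C-side selftests in this patch use the matching KSFT_SKIP constant from ../kselftest.h; a minimal sketch of the same check in C (the constant's value follows the framework, the check itself is illustrative):

#include <unistd.h>

#define KSFT_SKIP 4     /* kselftest: "test was skipped" */

int main(void)
{
        if (geteuid() != 0)
                return KSFT_SKIP;       /* missing prerequisite: skip, don't fail */
        /* ... the actual test would run here ... */
        return 0;
}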
+diff --git a/tools/testing/selftests/bpf/test_offload.py b/tools/testing/selftests/bpf/test_offload.py
+index e78aad0a68bb..be800d0e7a84 100755
+--- a/tools/testing/selftests/bpf/test_offload.py
++++ b/tools/testing/selftests/bpf/test_offload.py
+@@ -163,6 +163,10 @@ def bpftool(args, JSON=True, ns="", fail=True):
+
+ def bpftool_prog_list(expected=None, ns=""):
+ _, progs = bpftool("prog show", JSON=True, ns=ns, fail=True)
++ # Remove the base progs
++ for p in base_progs:
++ if p in progs:
++ progs.remove(p)
+ if expected is not None:
+ if len(progs) != expected:
+ fail(True, "%d BPF programs loaded, expected %d" %
+@@ -171,6 +175,10 @@ def bpftool_prog_list(expected=None, ns=""):
+
+ def bpftool_map_list(expected=None, ns=""):
+ _, maps = bpftool("map show", JSON=True, ns=ns, fail=True)
++ # Remove the base maps
++ for m in base_maps:
++ if m in maps:
++ maps.remove(m)
+ if expected is not None:
+ if len(maps) != expected:
+ fail(True, "%d BPF maps loaded, expected %d" %
+@@ -585,8 +593,8 @@ skip(os.getuid() != 0, "test must be run as root")
+ # Check tools
+ ret, progs = bpftool("prog", fail=False)
+ skip(ret != 0, "bpftool not installed")
+-# Check no BPF programs are loaded
+-skip(len(progs) != 0, "BPF programs already loaded on the system")
++base_progs = progs
++_, base_maps = bpftool("map")
+
+ # Check netdevsim
+ ret, out = cmd("modprobe netdevsim", fail=False)
+diff --git a/tools/testing/selftests/net/config b/tools/testing/selftests/net/config
+index 7ba089b33e8b..cd3a2f1545b5 100644
+--- a/tools/testing/selftests/net/config
++++ b/tools/testing/selftests/net/config
+@@ -12,3 +12,5 @@ CONFIG_NET_IPVTI=y
+ CONFIG_INET6_XFRM_MODE_TUNNEL=y
+ CONFIG_IPV6_VTI=y
+ CONFIG_DUMMY=y
++CONFIG_BRIDGE=y
++CONFIG_VLAN_8021Q=y
+diff --git a/tools/testing/selftests/pstore/pstore_post_reboot_tests b/tools/testing/selftests/pstore/pstore_post_reboot_tests
+index 6ccb154cb4aa..22f8df1ad7d4 100755
+--- a/tools/testing/selftests/pstore/pstore_post_reboot_tests
++++ b/tools/testing/selftests/pstore/pstore_post_reboot_tests
+@@ -7,13 +7,16 @@
+ #
+ # Released under the terms of the GPL v2.
+
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
+ . ./common_tests
+
+ if [ -e $REBOOT_FLAG ]; then
+ rm $REBOOT_FLAG
+ else
+ prlog "pstore_crash_test has not been executed yet. we skip further tests."
+- exit 0
++ exit $ksft_skip
+ fi
+
+ prlog -n "Mounting pstore filesystem ... "
+diff --git a/tools/testing/selftests/static_keys/test_static_keys.sh b/tools/testing/selftests/static_keys/test_static_keys.sh
+index 24cff498b31a..fc9f8cde7d42 100755
+--- a/tools/testing/selftests/static_keys/test_static_keys.sh
++++ b/tools/testing/selftests/static_keys/test_static_keys.sh
+@@ -2,6 +2,19 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Runs static keys kernel module tests
+
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
++if ! /sbin/modprobe -q -n test_static_key_base; then
++ echo "static_key: module test_static_key_base is not found [SKIP]"
++ exit $ksft_skip
++fi
++
++if ! /sbin/modprobe -q -n test_static_keys; then
++ echo "static_key: module test_static_keys is not found [SKIP]"
++ exit $ksft_skip
++fi
++
+ if /sbin/modprobe -q test_static_key_base; then
+ if /sbin/modprobe -q test_static_keys; then
+ echo "static_key: ok"
+diff --git a/tools/testing/selftests/sync/config b/tools/testing/selftests/sync/config
+new file mode 100644
+index 000000000000..1ab7e8130db2
+--- /dev/null
++++ b/tools/testing/selftests/sync/config
+@@ -0,0 +1,4 @@
++CONFIG_STAGING=y
++CONFIG_ANDROID=y
++CONFIG_SYNC=y
++CONFIG_SW_SYNC=y
+diff --git a/tools/testing/selftests/sysctl/sysctl.sh b/tools/testing/selftests/sysctl/sysctl.sh
+index ec232c3cfcaa..584eb8ea780a 100755
+--- a/tools/testing/selftests/sysctl/sysctl.sh
++++ b/tools/testing/selftests/sysctl/sysctl.sh
+@@ -14,6 +14,9 @@
+
+ # This performs a series tests against the proc sysctl interface.
+
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
+ TEST_NAME="sysctl"
+ TEST_DRIVER="test_${TEST_NAME}"
+ TEST_DIR=$(dirname $0)
+@@ -41,7 +44,7 @@ test_modprobe()
+ echo "$0: $DIR not present" >&2
+ echo "You must have the following enabled in your kernel:" >&2
+ cat $TEST_DIR/config >&2
+- exit 1
++ exit $ksft_skip
+ fi
+ }
+
+@@ -98,28 +101,30 @@ test_reqs()
+ uid=$(id -u)
+ if [ $uid -ne 0 ]; then
+ echo $msg must be run as root >&2
+- exit 0
++ exit $ksft_skip
+ fi
+
+ if ! which perl 2> /dev/null > /dev/null; then
+ echo "$0: You need perl installed"
+- exit 1
++ exit $ksft_skip
+ fi
+ if ! which getconf 2> /dev/null > /dev/null; then
+ echo "$0: You need getconf installed"
+- exit 1
++ exit $ksft_skip
+ fi
+ if ! which diff 2> /dev/null > /dev/null; then
+ echo "$0: You need diff installed"
+- exit 1
++ exit $ksft_skip
+ fi
+ }
+
+ function load_req_mod()
+ {
+- trap "test_modprobe" EXIT
+-
+ if [ ! -d $DIR ]; then
++ if ! modprobe -q -n $TEST_DRIVER; then
++ echo "$0: module $TEST_DRIVER not found [SKIP]"
++ exit $ksft_skip
++ fi
+ modprobe $TEST_DRIVER
+ if [ $? -ne 0 ]; then
+ exit
+@@ -765,6 +770,7 @@ function parse_args()
+ test_reqs
+ allow_user_defaults
+ check_production_sysctl_writes_strict
++test_modprobe
+ load_req_mod
+
+ trap "test_finish" EXIT
+diff --git a/tools/testing/selftests/user/test_user_copy.sh b/tools/testing/selftests/user/test_user_copy.sh
+index d60506fc77f8..f9b31a57439b 100755
+--- a/tools/testing/selftests/user/test_user_copy.sh
++++ b/tools/testing/selftests/user/test_user_copy.sh
+@@ -2,6 +2,13 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Runs copy_to/from_user infrastructure using test_user_copy kernel module
+
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
++if ! /sbin/modprobe -q -n test_user_copy; then
++ echo "user: module test_user_copy is not found [SKIP]"
++ exit $ksft_skip
++fi
+ if /sbin/modprobe -q test_user_copy; then
+ /sbin/modprobe -q -r test_user_copy
+ echo "user_copy: ok"
+diff --git a/tools/testing/selftests/vm/compaction_test.c b/tools/testing/selftests/vm/compaction_test.c
+index 1097f04e4d80..bcec71250873 100644
+--- a/tools/testing/selftests/vm/compaction_test.c
++++ b/tools/testing/selftests/vm/compaction_test.c
+@@ -16,6 +16,8 @@
+ #include <unistd.h>
+ #include <string.h>
+
++#include "../kselftest.h"
++
+ #define MAP_SIZE 1048576
+
+ struct map_list {
+@@ -169,7 +171,7 @@ int main(int argc, char **argv)
+ printf("Either the sysctl compact_unevictable_allowed is not\n"
+ "set to 1 or couldn't read the proc file.\n"
+ "Skipping the test\n");
+- return 0;
++ return KSFT_SKIP;
+ }
+
+ lim.rlim_cur = RLIM_INFINITY;
+diff --git a/tools/testing/selftests/vm/mlock2-tests.c b/tools/testing/selftests/vm/mlock2-tests.c
+index 4997b9222cfa..637b6d0ac0d0 100644
+--- a/tools/testing/selftests/vm/mlock2-tests.c
++++ b/tools/testing/selftests/vm/mlock2-tests.c
+@@ -9,6 +9,8 @@
+ #include <stdbool.h>
+ #include "mlock2.h"
+
++#include "../kselftest.h"
++
+ struct vm_boundaries {
+ unsigned long start;
+ unsigned long end;
+@@ -303,7 +305,7 @@ static int test_mlock_lock()
+ if (mlock2_(map, 2 * page_size, 0)) {
+ if (errno == ENOSYS) {
+ printf("Cannot call new mlock family, skipping test\n");
+- _exit(0);
++ _exit(KSFT_SKIP);
+ }
+ perror("mlock2(0)");
+ goto unmap;
+@@ -412,7 +414,7 @@ static int test_mlock_onfault()
+ if (mlock2_(map, 2 * page_size, MLOCK_ONFAULT)) {
+ if (errno == ENOSYS) {
+ printf("Cannot call new mlock family, skipping test\n");
+- _exit(0);
++ _exit(KSFT_SKIP);
+ }
+ perror("mlock2(MLOCK_ONFAULT)");
+ goto unmap;
+@@ -425,7 +427,7 @@ static int test_mlock_onfault()
+ if (munlock(map, 2 * page_size)) {
+ if (errno == ENOSYS) {
+ printf("Cannot call new mlock family, skipping test\n");
+- _exit(0);
++ _exit(KSFT_SKIP);
+ }
+ perror("munlock()");
+ goto unmap;
+@@ -457,7 +459,7 @@ static int test_lock_onfault_of_present()
+ if (mlock2_(map, 2 * page_size, MLOCK_ONFAULT)) {
+ if (errno == ENOSYS) {
+ printf("Cannot call new mlock family, skipping test\n");
+- _exit(0);
++ _exit(KSFT_SKIP);
+ }
+ perror("mlock2(MLOCK_ONFAULT)");
+ goto unmap;
+@@ -583,7 +585,7 @@ static int test_vma_management(bool call_mlock)
+ if (call_mlock && mlock2_(map, 3 * page_size, MLOCK_ONFAULT)) {
+ if (errno == ENOSYS) {
+ printf("Cannot call new mlock family, skipping test\n");
+- _exit(0);
++ _exit(KSFT_SKIP);
+ }
+ perror("mlock(ONFAULT)\n");
+ goto out;
+diff --git a/tools/testing/selftests/vm/run_vmtests b/tools/testing/selftests/vm/run_vmtests
+index 22d564673830..88cbe5575f0c 100755
+--- a/tools/testing/selftests/vm/run_vmtests
++++ b/tools/testing/selftests/vm/run_vmtests
+@@ -2,6 +2,9 @@
+ # SPDX-License-Identifier: GPL-2.0
+ #please run as root
+
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
+ mnt=./huge
+ exitcode=0
+
+@@ -36,7 +39,7 @@ if [ -n "$freepgs" ] && [ -n "$hpgsize_KB" ]; then
+ echo $(( $lackpgs + $nr_hugepgs )) > /proc/sys/vm/nr_hugepages
+ if [ $? -ne 0 ]; then
+ echo "Please run this test as root"
+- exit 1
++ exit $ksft_skip
+ fi
+ while read name size unit; do
+ if [ "$name" = "HugePages_Free:" ]; then
+diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
+index de2f9ec8a87f..7b8171e3128a 100644
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -69,6 +69,8 @@
+ #include <setjmp.h>
+ #include <stdbool.h>
+
++#include "../kselftest.h"
++
+ #ifdef __NR_userfaultfd
+
+ static unsigned long nr_cpus, nr_pages, nr_pages_per_cpu, page_size;
+@@ -1322,7 +1324,7 @@ int main(int argc, char **argv)
+ int main(void)
+ {
+ printf("skip: Skipping userfaultfd test (missing __NR_userfaultfd)\n");
+- return 0;
++ return KSFT_SKIP;
+ }
+
+ #endif /* __NR_userfaultfd */
+diff --git a/tools/testing/selftests/x86/sigreturn.c b/tools/testing/selftests/x86/sigreturn.c
+index 246145b84a12..4d9dc3f2fd70 100644
+--- a/tools/testing/selftests/x86/sigreturn.c
++++ b/tools/testing/selftests/x86/sigreturn.c
+@@ -610,21 +610,41 @@ static int test_valid_sigreturn(int cs_bits, bool use_16bit_ss, int force_ss)
+ */
+ for (int i = 0; i < NGREG; i++) {
+ greg_t req = requested_regs[i], res = resulting_regs[i];
++
+ if (i == REG_TRAPNO || i == REG_IP)
+ continue; /* don't care */
+- if (i == REG_SP) {
+- printf("\tSP: %llx -> %llx\n", (unsigned long long)req,
+- (unsigned long long)res);
+
++ if (i == REG_SP) {
+ /*
+- * In many circumstances, the high 32 bits of rsp
+- * are zeroed. For example, we could be a real
+- * 32-bit program, or we could hit any of a number
+- * of poorly-documented IRET or segmented ESP
+- * oddities. If this happens, it's okay.
++ * If we were using a 16-bit stack segment, then
++ * the kernel is a bit stuck: IRET only restores
++ * the low 16 bits of ESP/RSP if SS is 16-bit.
++ * The kernel uses a hack to restore bits 31:16,
++ * but that hack doesn't help with bits 63:32.
++ * On Intel CPUs, bits 63:32 end up zeroed, and, on
++ * AMD CPUs, they leak the high bits of the kernel
++ * espfix64 stack pointer. There's very little that
++ * the kernel can do about it.
++ *
++ * Similarly, if we are returning to a 32-bit context,
++ * the CPU will often lose the high 32 bits of RSP.
+ */
+- if (res == (req & 0xFFFFFFFF))
+- continue; /* OK; not expected to work */
++
++ if (res == req)
++ continue;
++
++ if (cs_bits != 64 && ((res ^ req) & 0xFFFFFFFF) == 0) {
++ printf("[NOTE]\tSP: %llx -> %llx\n",
++ (unsigned long long)req,
++ (unsigned long long)res);
++ continue;
++ }
++
++ printf("[FAIL]\tSP mismatch: requested 0x%llx; got 0x%llx\n",
++ (unsigned long long)requested_regs[i],
++ (unsigned long long)resulting_regs[i]);
++ nerrs++;
++ continue;
+ }
+
+ bool ignore_reg = false;
+@@ -654,25 +674,18 @@ static int test_valid_sigreturn(int cs_bits, bool use_16bit_ss, int force_ss)
+ #endif
+
+ /* Sanity check on the kernel */
+- if (i == REG_CX && requested_regs[i] != resulting_regs[i]) {
++ if (i == REG_CX && req != res) {
+ printf("[FAIL]\tCX (saved SP) mismatch: requested 0x%llx; got 0x%llx\n",
+- (unsigned long long)requested_regs[i],
+- (unsigned long long)resulting_regs[i]);
++ (unsigned long long)req,
++ (unsigned long long)res);
+ nerrs++;
+ continue;
+ }
+
+- if (requested_regs[i] != resulting_regs[i] && !ignore_reg) {
+- /*
+- * SP is particularly interesting here. The
+- * usual cause of failures is that we hit the
+- * nasty IRET case of returning to a 16-bit SS,
+- * in which case bits 16:31 of the *kernel*
+- * stack pointer persist in ESP.
+- */
++ if (req != res && !ignore_reg) {
+ printf("[FAIL]\tReg %d mismatch: requested 0x%llx; got 0x%llx\n",
+- i, (unsigned long long)requested_regs[i],
+- (unsigned long long)resulting_regs[i]);
++ i, (unsigned long long)req,
++ (unsigned long long)res);
+ nerrs++;
+ }
+ }
+diff --git a/tools/testing/selftests/zram/zram.sh b/tools/testing/selftests/zram/zram.sh
+index 754de7da426a..232e958ec454 100755
+--- a/tools/testing/selftests/zram/zram.sh
++++ b/tools/testing/selftests/zram/zram.sh
+@@ -2,6 +2,9 @@
+ # SPDX-License-Identifier: GPL-2.0
+ TCID="zram.sh"
+
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
+ . ./zram_lib.sh
+
+ run_zram () {
+@@ -24,5 +27,5 @@ elif [ -b /dev/zram0 ]; then
+ else
+ echo "$TCID : No zram.ko module or /dev/zram0 device file not found"
+ echo "$TCID : CONFIG_ZRAM is not set"
+- exit 1
++ exit $ksft_skip
+ fi
+diff --git a/tools/testing/selftests/zram/zram_lib.sh b/tools/testing/selftests/zram/zram_lib.sh
+index f6a9c73e7a44..9e73a4fb9b0a 100755
+--- a/tools/testing/selftests/zram/zram_lib.sh
++++ b/tools/testing/selftests/zram/zram_lib.sh
+@@ -18,6 +18,9 @@ MODULE=0
+ dev_makeswap=-1
+ dev_mounted=-1
+
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
+ trap INT
+
+ check_prereqs()
+@@ -27,7 +30,7 @@ check_prereqs()
+
+ if [ $uid -ne 0 ]; then
+ echo $msg must be run as root >&2
+- exit 0
++ exit $ksft_skip
+ fi
+ }
+
+diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
+index bdcf8e7a6161..72fc688c3e9d 100644
+--- a/virt/kvm/arm/vgic/vgic-v3.c
++++ b/virt/kvm/arm/vgic/vgic-v3.c
+@@ -552,11 +552,6 @@ int vgic_v3_probe(const struct gic_kvm_info *info)
+ pr_warn("GICV physical address 0x%llx not page aligned\n",
+ (unsigned long long)info->vcpu.start);
+ kvm_vgic_global_state.vcpu_base = 0;
+- } else if (!PAGE_ALIGNED(resource_size(&info->vcpu))) {
+- pr_warn("GICV size 0x%llx not a multiple of page size 0x%lx\n",
+- (unsigned long long)resource_size(&info->vcpu),
+- PAGE_SIZE);
+- kvm_vgic_global_state.vcpu_base = 0;
+ } else {
+ kvm_vgic_global_state.vcpu_base = info->vcpu.start;
+ kvm_vgic_global_state.can_emulate_gicv2 = true;
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-08-22 9:56 Alice Ferrazzi
0 siblings, 0 replies; 30+ messages in thread
From: Alice Ferrazzi @ 2018-08-22 9:56 UTC (permalink / raw
To: gentoo-commits
commit: 517a45fb3e1f8dfc3e9881a2b3818b06261d4e25
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 22 09:56:22 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Aug 22 09:56:27 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=517a45fb
Linux kernel 4.17.18
0000_README | 4 +
1017_linux-4.17.18.patch | 1470 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1474 insertions(+)
diff --git a/0000_README b/0000_README
index e0ea866..1887187 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch: 1016_linux-4.17.17.patch
From: http://www.kernel.org
Desc: Linux 4.17.17
+Patch: 1017_linux-4.17.18.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.18
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1017_linux-4.17.18.patch b/1017_linux-4.17.18.patch
new file mode 100644
index 0000000..efddc43
--- /dev/null
+++ b/1017_linux-4.17.18.patch
@@ -0,0 +1,1470 @@
+diff --git a/Makefile b/Makefile
+index 5ff2040cf3ee..429a1fe0b40b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 974e58457697..af54d7bbb173 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -338,6 +338,14 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "K54HR"),
+ },
+ },
++ {
++ .callback = init_nvs_save_s3,
++ .ident = "Asus 1025C",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "1025C"),
++ },
++ },
+ /*
+ * https://bugzilla.kernel.org/show_bug.cgi?id=189431
+ * Lenovo G50-45 is a platform later than 2012, but needs nvs memory
+diff --git a/drivers/isdn/i4l/isdn_common.c b/drivers/isdn/i4l/isdn_common.c
+index 7c6f3f5d9d9a..66ac7fa6e034 100644
+--- a/drivers/isdn/i4l/isdn_common.c
++++ b/drivers/isdn/i4l/isdn_common.c
+@@ -1640,13 +1640,7 @@ isdn_ioctl(struct file *file, uint cmd, ulong arg)
+ } else
+ return -EINVAL;
+ case IIOCDBGVAR:
+- if (arg) {
+- if (copy_to_user(argp, &dev, sizeof(ulong)))
+- return -EFAULT;
+- return 0;
+- } else
+- return -EINVAL;
+- break;
++ return -EINVAL;
+ default:
+ if ((cmd & IIOCDRVCTL) == IIOCDRVCTL)
+ cmd = ((cmd >> _IOC_NRSHIFT) & _IOC_NRMASK) & ISDN_DRVIOCTL_MASK;
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index fc0415771c00..4dd0d868ff88 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -407,13 +407,20 @@ static int sram_probe(struct platform_device *pdev)
+ if (init_func) {
+ ret = init_func();
+ if (ret)
+- return ret;
++ goto err_disable_clk;
+ }
+
+ dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n",
+ gen_pool_size(sram->pool) / 1024, sram->virt_base);
+
+ return 0;
++
++err_disable_clk:
++ if (sram->clk)
++ clk_disable_unprepare(sram->clk);
++ sram_free_partitions(sram);
++
++ return ret;
+ }
+
+ static int sram_remove(struct platform_device *pdev)
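The sram_probe() fix above is the usual kernel unwind shape: once the clock has been prepared, every later failure path must release it (and, here, free the partitions), so the early return is redirected to a cleanup label. A self-contained user-space sketch of the pattern (resource names hypothetical):

#include <stdio.h>

static int acquire_a(void) { return 0; }
static int acquire_b(void) { return -1; }       /* simulate a late failure */
static void release_a(void) { puts("a released"); }

static int probe(void)
{
        int ret;

        ret = acquire_a();
        if (ret)
                return ret;             /* nothing to unwind yet */

        ret = acquire_b();
        if (ret)
                goto err_release_a;     /* unwind in reverse order */
        return 0;

err_release_a:
        release_a();
        return ret;
}

int main(void)
{
        return probe() ? 1 : 0;
}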
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+index 956860a69797..3bdab972420b 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+@@ -762,7 +762,7 @@ static int hw_atl_b0_hw_packet_filter_set(struct aq_hw_s *self,
+
+ hw_atl_rpfl2promiscuous_mode_en_set(self, IS_FILTER_ENABLED(IFF_PROMISC));
+ hw_atl_rpfl2multicast_flr_en_set(self,
+- IS_FILTER_ENABLED(IFF_MULTICAST), 0);
++ IS_FILTER_ENABLED(IFF_ALLMULTI), 0);
+
+ hw_atl_rpfl2_accept_all_mc_packets_set(self,
+ IS_FILTER_ENABLED(IFF_ALLMULTI));
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 0ad2f3f7da85..82ac1d10f239 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -1901,10 +1901,10 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
+ }
+
+ /* Main rx processing when using software buffer management */
+-static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
++static int mvneta_rx_swbm(struct napi_struct *napi,
++ struct mvneta_port *pp, int rx_todo,
+ struct mvneta_rx_queue *rxq)
+ {
+- struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
+ struct net_device *dev = pp->dev;
+ int rx_done;
+ u32 rcvd_pkts = 0;
+@@ -1959,7 +1959,7 @@ err_drop_frame:
+
+ skb->protocol = eth_type_trans(skb, dev);
+ mvneta_rx_csum(pp, rx_status, skb);
+- napi_gro_receive(&port->napi, skb);
++ napi_gro_receive(napi, skb);
+
+ rcvd_pkts++;
+ rcvd_bytes += rx_bytes;
+@@ -2001,7 +2001,7 @@ err_drop_frame:
+
+ mvneta_rx_csum(pp, rx_status, skb);
+
+- napi_gro_receive(&port->napi, skb);
++ napi_gro_receive(napi, skb);
+ }
+
+ if (rcvd_pkts) {
+@@ -2020,10 +2020,10 @@ err_drop_frame:
+ }
+
+ /* Main rx processing when using hardware buffer management */
+-static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
++static int mvneta_rx_hwbm(struct napi_struct *napi,
++ struct mvneta_port *pp, int rx_todo,
+ struct mvneta_rx_queue *rxq)
+ {
+- struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
+ struct net_device *dev = pp->dev;
+ int rx_done;
+ u32 rcvd_pkts = 0;
+@@ -2085,7 +2085,7 @@ err_drop_frame:
+
+ skb->protocol = eth_type_trans(skb, dev);
+ mvneta_rx_csum(pp, rx_status, skb);
+- napi_gro_receive(&port->napi, skb);
++ napi_gro_receive(napi, skb);
+
+ rcvd_pkts++;
+ rcvd_bytes += rx_bytes;
+@@ -2129,7 +2129,7 @@ err_drop_frame:
+
+ mvneta_rx_csum(pp, rx_status, skb);
+
+- napi_gro_receive(&port->napi, skb);
++ napi_gro_receive(napi, skb);
+ }
+
+ if (rcvd_pkts) {
+@@ -2722,9 +2722,11 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
+ if (rx_queue) {
+ rx_queue = rx_queue - 1;
+ if (pp->bm_priv)
+- rx_done = mvneta_rx_hwbm(pp, budget, &pp->rxqs[rx_queue]);
++ rx_done = mvneta_rx_hwbm(napi, pp, budget,
++ &pp->rxqs[rx_queue]);
+ else
+- rx_done = mvneta_rx_swbm(pp, budget, &pp->rxqs[rx_queue]);
++ rx_done = mvneta_rx_swbm(napi, pp, budget,
++ &pp->rxqs[rx_queue]);
+ }
+
+ if (rx_done < budget) {
+@@ -4018,13 +4020,18 @@ static int mvneta_config_rss(struct mvneta_port *pp)
+
+ on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
+
+- /* We have to synchronise on the napi of each CPU */
+- for_each_online_cpu(cpu) {
+- struct mvneta_pcpu_port *pcpu_port =
+- per_cpu_ptr(pp->ports, cpu);
++ if (!pp->neta_armada3700) {
++ /* We have to synchronise on the napi of each CPU */
++ for_each_online_cpu(cpu) {
++ struct mvneta_pcpu_port *pcpu_port =
++ per_cpu_ptr(pp->ports, cpu);
+
+- napi_synchronize(&pcpu_port->napi);
+- napi_disable(&pcpu_port->napi);
++ napi_synchronize(&pcpu_port->napi);
++ napi_disable(&pcpu_port->napi);
++ }
++ } else {
++ napi_synchronize(&pp->napi);
++ napi_disable(&pp->napi);
+ }
+
+ pp->rxq_def = pp->indir[0];
+@@ -4041,12 +4048,16 @@ static int mvneta_config_rss(struct mvneta_port *pp)
+ mvneta_percpu_elect(pp);
+ spin_unlock(&pp->lock);
+
+- /* We have to synchronise on the napi of each CPU */
+- for_each_online_cpu(cpu) {
+- struct mvneta_pcpu_port *pcpu_port =
+- per_cpu_ptr(pp->ports, cpu);
++ if (!pp->neta_armada3700) {
++ /* We have to synchronise on the napi of each CPU */
++ for_each_online_cpu(cpu) {
++ struct mvneta_pcpu_port *pcpu_port =
++ per_cpu_ptr(pp->ports, cpu);
+
+- napi_enable(&pcpu_port->napi);
++ napi_enable(&pcpu_port->napi);
++ }
++ } else {
++ napi_enable(&pp->napi);
+ }
+
+ netif_tx_start_all_queues(pp->dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index a0ba6cfc9092..290fc6f9afc1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1907,15 +1907,15 @@ static bool actions_match_supported(struct mlx5e_priv *priv,
+ static bool same_hw_devs(struct mlx5e_priv *priv, struct mlx5e_priv *peer_priv)
+ {
+ struct mlx5_core_dev *fmdev, *pmdev;
+- u16 func_id, peer_id;
++ u64 fsystem_guid, psystem_guid;
+
+ fmdev = priv->mdev;
+ pmdev = peer_priv->mdev;
+
+- func_id = (u16)((fmdev->pdev->bus->number << 8) | PCI_SLOT(fmdev->pdev->devfn));
+- peer_id = (u16)((pmdev->pdev->bus->number << 8) | PCI_SLOT(pmdev->pdev->devfn));
++ mlx5_query_nic_vport_system_image_guid(fmdev, &fsystem_guid);
++ mlx5_query_nic_vport_system_image_guid(pmdev, &psystem_guid);
+
+- return (func_id == peer_id);
++ return (fsystem_guid == psystem_guid);
+ }
+
+ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c
+index 3c0d882ba183..f6f6a568d66a 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c
+@@ -327,12 +327,16 @@ static void mlxsw_afa_resource_add(struct mlxsw_afa_block *block,
+ list_add(&resource->list, &block->resource_list);
+ }
+
++static void mlxsw_afa_resource_del(struct mlxsw_afa_resource *resource)
++{
++ list_del(&resource->list);
++}
++
+ static void mlxsw_afa_resources_destroy(struct mlxsw_afa_block *block)
+ {
+ struct mlxsw_afa_resource *resource, *tmp;
+
+ list_for_each_entry_safe(resource, tmp, &block->resource_list, list) {
+- list_del(&resource->list);
+ resource->destructor(block, resource);
+ }
+ }
+@@ -530,6 +534,7 @@ static void
+ mlxsw_afa_fwd_entry_ref_destroy(struct mlxsw_afa_block *block,
+ struct mlxsw_afa_fwd_entry_ref *fwd_entry_ref)
+ {
++ mlxsw_afa_resource_del(&fwd_entry_ref->resource);
+ mlxsw_afa_fwd_entry_put(block->afa, fwd_entry_ref->fwd_entry);
+ kfree(fwd_entry_ref);
+ }
+@@ -579,6 +584,7 @@ static void
+ mlxsw_afa_counter_destroy(struct mlxsw_afa_block *block,
+ struct mlxsw_afa_counter *counter)
+ {
++ mlxsw_afa_resource_del(&counter->resource);
+ block->afa->ops->counter_index_put(block->afa->ops_priv,
+ counter->counter_index);
+ kfree(counter);
+@@ -626,8 +632,8 @@ static char *mlxsw_afa_block_append_action(struct mlxsw_afa_block *block,
+ char *oneact;
+ char *actions;
+
+- if (WARN_ON(block->finished))
+- return NULL;
++ if (block->finished)
++ return ERR_PTR(-EINVAL);
+ if (block->cur_act_index + action_size >
+ block->afa->max_acts_per_set) {
+ struct mlxsw_afa_set *set;
+@@ -637,7 +643,7 @@ static char *mlxsw_afa_block_append_action(struct mlxsw_afa_block *block,
+ */
+ set = mlxsw_afa_set_create(false);
+ if (!set)
+- return NULL;
++ return ERR_PTR(-ENOBUFS);
+ set->prev = block->cur_set;
+ block->cur_act_index = 0;
+ block->cur_set->next = set;
+@@ -724,8 +730,8 @@ int mlxsw_afa_block_append_vlan_modify(struct mlxsw_afa_block *block,
+ MLXSW_AFA_VLAN_CODE,
+ MLXSW_AFA_VLAN_SIZE);
+
+- if (!act)
+- return -ENOBUFS;
++ if (IS_ERR(act))
++ return PTR_ERR(act);
+ mlxsw_afa_vlan_pack(act, MLXSW_AFA_VLAN_VLAN_TAG_CMD_NOP,
+ MLXSW_AFA_VLAN_CMD_SET_OUTER, vid,
+ MLXSW_AFA_VLAN_CMD_SET_OUTER, pcp,
+@@ -806,8 +812,8 @@ int mlxsw_afa_block_append_drop(struct mlxsw_afa_block *block)
+ MLXSW_AFA_TRAPDISC_CODE,
+ MLXSW_AFA_TRAPDISC_SIZE);
+
+- if (!act)
+- return -ENOBUFS;
++ if (IS_ERR(act))
++ return PTR_ERR(act);
+ mlxsw_afa_trapdisc_pack(act, MLXSW_AFA_TRAPDISC_TRAP_ACTION_NOP,
+ MLXSW_AFA_TRAPDISC_FORWARD_ACTION_DISCARD, 0);
+ return 0;
+@@ -820,8 +826,8 @@ int mlxsw_afa_block_append_trap(struct mlxsw_afa_block *block, u16 trap_id)
+ MLXSW_AFA_TRAPDISC_CODE,
+ MLXSW_AFA_TRAPDISC_SIZE);
+
+- if (!act)
+- return -ENOBUFS;
++ if (IS_ERR(act))
++ return PTR_ERR(act);
+ mlxsw_afa_trapdisc_pack(act, MLXSW_AFA_TRAPDISC_TRAP_ACTION_TRAP,
+ MLXSW_AFA_TRAPDISC_FORWARD_ACTION_DISCARD,
+ trap_id);
+@@ -836,8 +842,8 @@ int mlxsw_afa_block_append_trap_and_forward(struct mlxsw_afa_block *block,
+ MLXSW_AFA_TRAPDISC_CODE,
+ MLXSW_AFA_TRAPDISC_SIZE);
+
+- if (!act)
+- return -ENOBUFS;
++ if (IS_ERR(act))
++ return PTR_ERR(act);
+ mlxsw_afa_trapdisc_pack(act, MLXSW_AFA_TRAPDISC_TRAP_ACTION_TRAP,
+ MLXSW_AFA_TRAPDISC_FORWARD_ACTION_FORWARD,
+ trap_id);
+@@ -856,6 +862,7 @@ static void
+ mlxsw_afa_mirror_destroy(struct mlxsw_afa_block *block,
+ struct mlxsw_afa_mirror *mirror)
+ {
++ mlxsw_afa_resource_del(&mirror->resource);
+ block->afa->ops->mirror_del(block->afa->ops_priv,
+ mirror->local_in_port,
+ mirror->span_id,
+@@ -908,8 +915,8 @@ mlxsw_afa_block_append_allocated_mirror(struct mlxsw_afa_block *block,
+ char *act = mlxsw_afa_block_append_action(block,
+ MLXSW_AFA_TRAPDISC_CODE,
+ MLXSW_AFA_TRAPDISC_SIZE);
+- if (!act)
+- return -ENOBUFS;
++ if (IS_ERR(act))
++ return PTR_ERR(act);
+ mlxsw_afa_trapdisc_pack(act, MLXSW_AFA_TRAPDISC_TRAP_ACTION_NOP,
+ MLXSW_AFA_TRAPDISC_FORWARD_ACTION_FORWARD, 0);
+ mlxsw_afa_trapdisc_mirror_pack(act, true, mirror_agent);
+@@ -996,8 +1003,8 @@ int mlxsw_afa_block_append_fwd(struct mlxsw_afa_block *block,
+
+ act = mlxsw_afa_block_append_action(block, MLXSW_AFA_FORWARD_CODE,
+ MLXSW_AFA_FORWARD_SIZE);
+- if (!act) {
+- err = -ENOBUFS;
++ if (IS_ERR(act)) {
++ err = PTR_ERR(act);
+ goto err_append_action;
+ }
+ mlxsw_afa_forward_pack(act, MLXSW_AFA_FORWARD_TYPE_PBS,
+@@ -1052,8 +1059,8 @@ int mlxsw_afa_block_append_allocated_counter(struct mlxsw_afa_block *block,
+ {
+ char *act = mlxsw_afa_block_append_action(block, MLXSW_AFA_POLCNT_CODE,
+ MLXSW_AFA_POLCNT_SIZE);
+- if (!act)
+- return -ENOBUFS;
++ if (IS_ERR(act))
++ return PTR_ERR(act);
+ mlxsw_afa_polcnt_pack(act, MLXSW_AFA_POLCNT_COUNTER_SET_TYPE_PACKETS_BYTES,
+ counter_index);
+ return 0;
+@@ -1123,8 +1130,8 @@ int mlxsw_afa_block_append_fid_set(struct mlxsw_afa_block *block, u16 fid)
+ char *act = mlxsw_afa_block_append_action(block,
+ MLXSW_AFA_VIRFWD_CODE,
+ MLXSW_AFA_VIRFWD_SIZE);
+- if (!act)
+- return -ENOBUFS;
++ if (IS_ERR(act))
++ return PTR_ERR(act);
+ mlxsw_afa_virfwd_pack(act, MLXSW_AFA_VIRFWD_FID_CMD_SET, fid);
+ return 0;
+ }
+@@ -1193,8 +1200,8 @@ int mlxsw_afa_block_append_mcrouter(struct mlxsw_afa_block *block,
+ char *act = mlxsw_afa_block_append_action(block,
+ MLXSW_AFA_MCROUTER_CODE,
+ MLXSW_AFA_MCROUTER_SIZE);
+- if (!act)
+- return -ENOBUFS;
++ if (IS_ERR(act))
++ return PTR_ERR(act);
+ mlxsw_afa_mcrouter_pack(act, MLXSW_AFA_MCROUTER_RPF_ACTION_TRAP,
+ expected_irif, min_mtu, rmid_valid, kvdl_index);
+ return 0;
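The mlxsw changes above move mlxsw_afa_block_append_action() from returning NULL on any failure to returning an ERR_PTR(), so callers can tell "block already finished" (-EINVAL) from "no room for another set" (-ENOBUFS) and propagate the right errno. A self-contained sketch of the encoding, simplified from the kernel's <linux/err.h>: small negative errno values map into the top page of the address space, which no valid pointer occupies.

#include <stdio.h>

#define MAX_ERRNO 4095
#define ERR_PTR(err)    ((void *)(long)(err))
#define PTR_ERR(ptr)    ((long)(ptr))
#define IS_ERR(ptr)     ((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

static void *append_action(int finished)
{
        if (finished)
                return ERR_PTR(-22);    /* -EINVAL */
        return "act";                   /* any ordinary pointer */
}

int main(void)
{
        void *act = append_action(1);

        if (IS_ERR(act))
                printf("err %ld\n", PTR_ERR(act));      /* err -22 */
        return 0;
}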
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 764b25fa470c..d3c6ce074571 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -8061,12 +8061,20 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
+ {
+ unsigned int flags;
+
+- if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
++ switch (tp->mac_version) {
++ case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
+ RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
+ RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
+ RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ flags = PCI_IRQ_LEGACY;
+- } else {
++ break;
++ case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40:
++ /* This version was reported to have issues with resume
++ * from suspend when using MSI-X
++ */
++ flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
++ break;
++ default:
+ flags = PCI_IRQ_ALL_TYPES;
+ }
+
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index 6fcdb90f616a..2826052a7e70 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -274,7 +274,7 @@ static void dw8250_set_termios(struct uart_port *p, struct ktermios *termios,
+ long rate;
+ int ret;
+
+- if (IS_ERR(d->clk) || !old)
++ if (IS_ERR(d->clk))
+ goto out;
+
+ clk_disable_unprepare(d->clk);
+@@ -680,6 +680,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
+ { "APMC0D08", 0},
+ { "AMD0020", 0 },
+ { "AMDI0020", 0 },
++ { "BRCM2032", 0 },
+ { "HISI0031", 0 },
+ { },
+ };
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 38af306ca0e8..a951511f04cf 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -433,7 +433,11 @@ static irqreturn_t exar_misc_handler(int irq, void *data)
+ struct exar8250 *priv = data;
+
+ /* Clear all PCI interrupts by reading INT0. No effect on IIR */
+- ioread8(priv->virt + UART_EXAR_INT0);
++ readb(priv->virt + UART_EXAR_INT0);
++
++ /* Clear INT0 for Expansion Interface slave ports, too */
++ if (priv->board->num_ports > 8)
++ readb(priv->virt + 0x2000 + UART_EXAR_INT0);
+
+ return IRQ_HANDLED;
+ }
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 95833cbc4338..8d981168279c 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -90,8 +90,7 @@ static const struct serial8250_config uart_config[] = {
+ .name = "16550A",
+ .fifo_size = 16,
+ .tx_loadsz = 16,
+- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10 |
+- UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT,
++ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .rxtrig_bytes = {1, 4, 8, 14},
+ .flags = UART_CAP_FIFO,
+ },
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 2058852a87fa..2fca8060c90b 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -196,6 +196,8 @@ static void option_instat_callback(struct urb *urb);
+ #define DELL_PRODUCT_5800_V2_MINICARD_VZW 0x8196 /* Novatel E362 */
+ #define DELL_PRODUCT_5804_MINICARD_ATT 0x819b /* Novatel E371 */
+
++#define DELL_PRODUCT_5821E 0x81d7
++
+ #define KYOCERA_VENDOR_ID 0x0c88
+ #define KYOCERA_PRODUCT_KPC650 0x17da
+ #define KYOCERA_PRODUCT_KPC680 0x180a
+@@ -1030,6 +1032,8 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5800_MINICARD_VZW, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5800_V2_MINICARD_VZW, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5804_MINICARD_ATT, 0xff, 0xff, 0xff) },
++ { USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5821E),
++ .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) }, /* ADU-E100, ADU-310 */
+ { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) },
+ { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) },
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 46dd09da2434..3bc2d6c28aa3 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -52,6 +52,8 @@ static const struct usb_device_id id_table[] = {
+ .driver_info = PL2303_QUIRK_ENDPOINT_HACK },
+ { USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_UC485),
+ .driver_info = PL2303_QUIRK_ENDPOINT_HACK },
++ { USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_UC232B),
++ .driver_info = PL2303_QUIRK_ENDPOINT_HACK },
+ { USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID2) },
+ { USB_DEVICE(ATEN_VENDOR_ID2, ATEN_PRODUCT_ID) },
+ { USB_DEVICE(ELCOM_VENDOR_ID, ELCOM_PRODUCT_ID) },
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index fcd72396a7b6..26965cc23c17 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -24,6 +24,7 @@
+ #define ATEN_VENDOR_ID2 0x0547
+ #define ATEN_PRODUCT_ID 0x2008
+ #define ATEN_PRODUCT_UC485 0x2021
++#define ATEN_PRODUCT_UC232B 0x2022
+ #define ATEN_PRODUCT_ID2 0x2118
+
+ #define IODATA_VENDOR_ID 0x04bb
+diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
+index d189f953c891..55956a638f5b 100644
+--- a/drivers/usb/serial/sierra.c
++++ b/drivers/usb/serial/sierra.c
+@@ -770,9 +770,9 @@ static void sierra_close(struct usb_serial_port *port)
+ kfree(urb->transfer_buffer);
+ usb_free_urb(urb);
+ usb_autopm_put_interface_async(serial->interface);
+- spin_lock(&portdata->lock);
++ spin_lock_irq(&portdata->lock);
+ portdata->outstanding_urbs--;
+- spin_unlock(&portdata->lock);
++ spin_unlock_irq(&portdata->lock);
+ }
+
+ sierra_stop_rx_urbs(port);
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 9beefa6ed1ce..d1de2cb13fd6 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -1556,9 +1556,12 @@ int vhost_init_device_iotlb(struct vhost_dev *d, bool enabled)
+ d->iotlb = niotlb;
+
+ for (i = 0; i < d->nvqs; ++i) {
+- mutex_lock(&d->vqs[i]->mutex);
+- d->vqs[i]->iotlb = niotlb;
+- mutex_unlock(&d->vqs[i]->mutex);
++ struct vhost_virtqueue *vq = d->vqs[i];
++
++ mutex_lock(&vq->mutex);
++ vq->iotlb = niotlb;
++ __vhost_vq_meta_reset(vq);
++ mutex_unlock(&vq->mutex);
+ }
+
+ vhost_umem_clean(oiotlb);
+diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
+index 9324ac2d9ff2..43913ae79f64 100644
+--- a/include/net/af_vsock.h
++++ b/include/net/af_vsock.h
+@@ -64,7 +64,8 @@ struct vsock_sock {
+ struct list_head pending_links;
+ struct list_head accept_queue;
+ bool rejected;
+- struct delayed_work dwork;
++ struct delayed_work connect_work;
++ struct delayed_work pending_work;
+ struct delayed_work close_work;
+ bool close_work_scheduled;
+ u32 peer_shutdown;
+@@ -77,7 +78,6 @@ struct vsock_sock {
+
+ s64 vsock_stream_has_data(struct vsock_sock *vsk);
+ s64 vsock_stream_has_space(struct vsock_sock *vsk);
+-void vsock_pending_work(struct work_struct *work);
+ struct sock *__vsock_create(struct net *net,
+ struct socket *sock,
+ struct sock *parent,
+diff --git a/include/net/llc.h b/include/net/llc.h
+index dc35f25eb679..890a87318014 100644
+--- a/include/net/llc.h
++++ b/include/net/llc.h
+@@ -116,6 +116,11 @@ static inline void llc_sap_hold(struct llc_sap *sap)
+ refcount_inc(&sap->refcnt);
+ }
+
++static inline bool llc_sap_hold_safe(struct llc_sap *sap)
++{
++ return refcount_inc_not_zero(&sap->refcnt);
++}
++
+ void llc_sap_close(struct llc_sap *sap);
+
+ static inline void llc_sap_put(struct llc_sap *sap)
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 413b8ee49fec..8f0f9279eac9 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -393,7 +393,8 @@ static void sco_sock_cleanup_listen(struct sock *parent)
+ */
+ static void sco_sock_kill(struct sock *sk)
+ {
+- if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket)
++ if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket ||
++ sock_flag(sk, SOCK_DEAD))
+ return;
+
+ BT_DBG("sk %p state %d", sk, sk->sk_state);
+diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c
+index c37b5be7c5e4..3312a5849a97 100644
+--- a/net/core/sock_diag.c
++++ b/net/core/sock_diag.c
+@@ -10,6 +10,7 @@
+ #include <linux/kernel.h>
+ #include <linux/tcp.h>
+ #include <linux/workqueue.h>
++#include <linux/nospec.h>
+
+ #include <linux/inet_diag.h>
+ #include <linux/sock_diag.h>
+@@ -218,6 +219,7 @@ static int __sock_diag_cmd(struct sk_buff *skb, struct nlmsghdr *nlh)
+
+ if (req->sdiag_family >= AF_MAX)
+ return -EINVAL;
++ req->sdiag_family = array_index_nospec(req->sdiag_family, AF_MAX);
+
+ if (sock_diag_handlers[req->sdiag_family] == NULL)
+ sock_load_diag_module(req->sdiag_family, 0);
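The two added lines above are Spectre-v1 hardening: the preceding sdiag_family >= AF_MAX check bounds the index architecturally, but a mispredicted branch can still index sock_diag_handlers[] speculatively out of bounds, so the value is clamped with array_index_nospec() after the check. A sketch of the clamp, mirroring the kernel's generic fallback in <linux/nospec.h> (architectures may substitute a branch-free cmp/sbb sequence):

#include <stdio.h>

/* All-ones when index < size, zero otherwise - computed without
 * a branch for the CPU to mispredict. */
static unsigned long index_mask(unsigned long index, unsigned long size)
{
        return ~(long)(index | (size - 1UL - index)) >> (sizeof(long) * 8 - 1);
}

int main(void)
{
        unsigned long family = 200, af_max = 45;        /* illustrative values */

        printf("%lu\n", family & index_mask(family, af_max));   /* 0: clamped */
        printf("%lu\n", 7UL & index_mask(7, af_max));           /* 7: in range */
        return 0;
}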
+diff --git a/net/dccp/ccids/ccid2.c b/net/dccp/ccids/ccid2.c
+index 385f153fe031..33c5b1c88be2 100644
+--- a/net/dccp/ccids/ccid2.c
++++ b/net/dccp/ccids/ccid2.c
+@@ -228,14 +228,16 @@ static void ccid2_cwnd_restart(struct sock *sk, const u32 now)
+ struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);
+ u32 cwnd = hc->tx_cwnd, restart_cwnd,
+ iwnd = rfc3390_bytes_to_packets(dccp_sk(sk)->dccps_mss_cache);
++ s32 delta = now - hc->tx_lsndtime;
+
+ hc->tx_ssthresh = max(hc->tx_ssthresh, (cwnd >> 1) + (cwnd >> 2));
+
+ /* don't reduce cwnd below the initial window (IW) */
+ restart_cwnd = min(cwnd, iwnd);
+- cwnd >>= (now - hc->tx_lsndtime) / hc->tx_rto;
+- hc->tx_cwnd = max(cwnd, restart_cwnd);
+
++ while ((delta -= hc->tx_rto) >= 0 && cwnd > restart_cwnd)
++ cwnd >>= 1;
++ hc->tx_cwnd = max(cwnd, restart_cwnd);
+ hc->tx_cwnd_stamp = now;
+ hc->tx_cwnd_used = 0;
+
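The ccid2 change replaces cwnd >>= (now - tx_lsndtime) / tx_rto with an explicit loop: shifting a 32-bit value by 32 or more positions is undefined behaviour, and a long idle period easily produces such a shift count. The loop instead halves once per elapsed RTO and stops once cwnd reaches restart_cwnd. A worked user-space sketch (values illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t cwnd = 80, restart_cwnd = 4;
        int32_t rto = 100;
        int32_t delta = 40 * rto;       /* idle 40 RTOs: old code did cwnd >>= 40 */

        while ((delta -= rto) >= 0 && cwnd > restart_cwnd)
                cwnd >>= 1;             /* halve once per elapsed RTO */

        if (cwnd < restart_cwnd)
                cwnd = restart_cwnd;    /* the max(cwnd, restart_cwnd) clamp */
        printf("%u\n", cwnd);           /* 4, with no undefined shift */
        return 0;
}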
+diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
+index 3f091ccad9af..f38cb21d773d 100644
+--- a/net/ipv4/ip_vti.c
++++ b/net/ipv4/ip_vti.c
+@@ -438,7 +438,8 @@ static int __net_init vti_init_net(struct net *net)
+ if (err)
+ return err;
+ itn = net_generic(net, vti_net_id);
+- vti_fb_tunnel_init(itn->fb_tunnel_dev);
++ if (itn->fb_tunnel_dev)
++ vti_fb_tunnel_init(itn->fb_tunnel_dev);
+ return 0;
+ }
+
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 00e138a44cbb..1cc9650af9fb 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1133,12 +1133,8 @@ route_lookup:
+ max_headroom += 8;
+ mtu -= 8;
+ }
+- if (skb->protocol == htons(ETH_P_IPV6)) {
+- if (mtu < IPV6_MIN_MTU)
+- mtu = IPV6_MIN_MTU;
+- } else if (mtu < 576) {
+- mtu = 576;
+- }
++ mtu = max(mtu, skb->protocol == htons(ETH_P_IPV6) ?
++ IPV6_MIN_MTU : IPV4_MIN_MTU);
+
+ skb_dst_update_pmtu(skb, mtu);
+ if (skb->len - t->tun_hlen - eth_hlen > mtu && !skb_is_gso(skb)) {
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 40261cb68e83..8aaf8157da2b 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1110,7 +1110,7 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len
+
+ /* Get routing info from the tunnel socket */
+ skb_dst_drop(skb);
+- skb_dst_set(skb, dst_clone(__sk_dst_check(sk, 0)));
++ skb_dst_set(skb, sk_dst_check(sk, 0));
+
+ inet = inet_sk(sk);
+ fl = &inet->cork.fl;
+diff --git a/net/llc/llc_core.c b/net/llc/llc_core.c
+index 89041260784c..260b3dc1b4a2 100644
+--- a/net/llc/llc_core.c
++++ b/net/llc/llc_core.c
+@@ -73,8 +73,8 @@ struct llc_sap *llc_sap_find(unsigned char sap_value)
+
+ rcu_read_lock_bh();
+ sap = __llc_sap_find(sap_value);
+- if (sap)
+- llc_sap_hold(sap);
++ if (!sap || !llc_sap_hold_safe(sap))
++ sap = NULL;
+ rcu_read_unlock_bh();
+ return sap;
+ }
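In llc_sap_find() above, the lookup runs under rcu_read_lock_bh(), so a SAP can still be found while its final reference is concurrently being dropped; taking a plain reference at that point resurrects an object already headed for free. llc_sap_hold_safe() wraps refcount_inc_not_zero(), which fails once the count has reached zero, and the lookup then reports a miss. A self-contained sketch of the inc-not-zero idiom using C11 atomics (types simplified):

#include <stdatomic.h>
#include <stddef.h>

struct obj {
        atomic_int ref;
};

/* Mirrors refcount_inc_not_zero(): succeed only while ref > 0. */
static int hold_if_live(struct obj *o)
{
        int old = atomic_load(&o->ref);

        while (old > 0)
                if (atomic_compare_exchange_weak(&o->ref, &old, old + 1))
                        return 1;
        return 0;       /* last reference already gone: treat as not found */
}

static struct obj *find_and_hold(struct obj *candidate)
{
        if (!candidate || !hold_if_live(candidate))
                return NULL;
        return candidate;
}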
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 19975d2ca9a2..5da2d3379a57 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -104,9 +104,9 @@ struct rxrpc_net {
+
+ #define RXRPC_KEEPALIVE_TIME 20 /* NAT keepalive time in seconds */
+ u8 peer_keepalive_cursor;
+- ktime_t peer_keepalive_base;
+- struct hlist_head peer_keepalive[RXRPC_KEEPALIVE_TIME + 1];
+- struct hlist_head peer_keepalive_new;
++ time64_t peer_keepalive_base;
++ struct list_head peer_keepalive[32];
++ struct list_head peer_keepalive_new;
+ struct timer_list peer_keepalive_timer;
+ struct work_struct peer_keepalive_work;
+ };
+@@ -295,7 +295,7 @@ struct rxrpc_peer {
+ struct hlist_head error_targets; /* targets for net error distribution */
+ struct work_struct error_distributor;
+ struct rb_root service_conns; /* Service connections */
+- struct hlist_node keepalive_link; /* Link in net->peer_keepalive[] */
++ struct list_head keepalive_link; /* Link in net->peer_keepalive[] */
+ time64_t last_tx_at; /* Last time packet sent here */
+ seqlock_t service_conn_lock;
+ spinlock_t lock; /* access lock */
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 8229a52c2acd..3fde001fcc39 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -136,7 +136,7 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+ }
+
+ ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, ioc, len);
+- conn->params.peer->last_tx_at = ktime_get_real();
++ conn->params.peer->last_tx_at = ktime_get_seconds();
+ if (ret < 0)
+ trace_rxrpc_tx_fail(conn->debug_id, serial, ret,
+ rxrpc_tx_fail_call_final_resend);
+@@ -245,7 +245,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
+ return -EAGAIN;
+ }
+
+- conn->params.peer->last_tx_at = ktime_get_real();
++ conn->params.peer->last_tx_at = ktime_get_seconds();
+
+ _leave(" = 0");
+ return 0;
+diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c
+index c7a023fb22d0..48fb8754c387 100644
+--- a/net/rxrpc/net_ns.c
++++ b/net/rxrpc/net_ns.c
+@@ -85,12 +85,12 @@ static __net_init int rxrpc_init_net(struct net *net)
+ hash_init(rxnet->peer_hash);
+ spin_lock_init(&rxnet->peer_hash_lock);
+ for (i = 0; i < ARRAY_SIZE(rxnet->peer_keepalive); i++)
+- INIT_HLIST_HEAD(&rxnet->peer_keepalive[i]);
+- INIT_HLIST_HEAD(&rxnet->peer_keepalive_new);
++ INIT_LIST_HEAD(&rxnet->peer_keepalive[i]);
++ INIT_LIST_HEAD(&rxnet->peer_keepalive_new);
+ timer_setup(&rxnet->peer_keepalive_timer,
+ rxrpc_peer_keepalive_timeout, 0);
+ INIT_WORK(&rxnet->peer_keepalive_work, rxrpc_peer_keepalive_worker);
+- rxnet->peer_keepalive_base = ktime_add(ktime_get_real(), NSEC_PER_SEC);
++ rxnet->peer_keepalive_base = ktime_get_seconds();
+
+ ret = -ENOMEM;
+ rxnet->proc_net = proc_net_mkdir(net, "rxrpc", net->proc_net);
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index f03de1c59ba3..4774c8f5634d 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -209,7 +209,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ now = ktime_get_real();
+ if (ping)
+ call->ping_time = now;
+- conn->params.peer->last_tx_at = ktime_get_real();
++ conn->params.peer->last_tx_at = ktime_get_seconds();
+ if (ret < 0)
+ trace_rxrpc_tx_fail(call->debug_id, serial, ret,
+ rxrpc_tx_fail_call_ack);
+@@ -296,7 +296,7 @@ int rxrpc_send_abort_packet(struct rxrpc_call *call)
+
+ ret = kernel_sendmsg(conn->params.local->socket,
+ &msg, iov, 1, sizeof(pkt));
+- conn->params.peer->last_tx_at = ktime_get_real();
++ conn->params.peer->last_tx_at = ktime_get_seconds();
+ if (ret < 0)
+ trace_rxrpc_tx_fail(call->debug_id, serial, ret,
+ rxrpc_tx_fail_call_abort);
+@@ -391,7 +391,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ * message and update the peer record
+ */
+ ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+- conn->params.peer->last_tx_at = ktime_get_real();
++ conn->params.peer->last_tx_at = ktime_get_seconds();
+
+ up_read(&conn->params.local->defrag_sem);
+ if (ret < 0)
+@@ -457,7 +457,7 @@ send_fragmentable:
+ if (ret == 0) {
+ ret = kernel_sendmsg(conn->params.local->socket, &msg,
+ iov, 2, len);
+- conn->params.peer->last_tx_at = ktime_get_real();
++ conn->params.peer->last_tx_at = ktime_get_seconds();
+
+ opt = IP_PMTUDISC_DO;
+ kernel_setsockopt(conn->params.local->socket, SOL_IP,
+@@ -475,7 +475,7 @@ send_fragmentable:
+ if (ret == 0) {
+ ret = kernel_sendmsg(conn->params.local->socket, &msg,
+ iov, 2, len);
+- conn->params.peer->last_tx_at = ktime_get_real();
++ conn->params.peer->last_tx_at = ktime_get_seconds();
+
+ opt = IPV6_PMTUDISC_DO;
+ kernel_setsockopt(conn->params.local->socket,
+@@ -599,6 +599,6 @@ void rxrpc_send_keepalive(struct rxrpc_peer *peer)
+ trace_rxrpc_tx_fail(peer->debug_id, 0, ret,
+ rxrpc_tx_fail_version_keepalive);
+
+- peer->last_tx_at = ktime_get_real();
++ peer->last_tx_at = ktime_get_seconds();
+ _leave("");
+ }
+diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
+index 0ed8b651cec2..4f9da2f51c69 100644
+--- a/net/rxrpc/peer_event.c
++++ b/net/rxrpc/peer_event.c
+@@ -350,97 +350,117 @@ void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
+ }
+
+ /*
+- * Perform keep-alive pings with VERSION packets to keep any NAT alive.
++ * Perform keep-alive pings.
+ */
+-void rxrpc_peer_keepalive_worker(struct work_struct *work)
++static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
++ struct list_head *collector,
++ time64_t base,
++ u8 cursor)
+ {
+- struct rxrpc_net *rxnet =
+- container_of(work, struct rxrpc_net, peer_keepalive_work);
+ struct rxrpc_peer *peer;
+- unsigned long delay;
+- ktime_t base, now = ktime_get_real();
+- s64 diff;
+- u8 cursor, slot;
++ const u8 mask = ARRAY_SIZE(rxnet->peer_keepalive) - 1;
++ time64_t keepalive_at;
++ int slot;
+
+- base = rxnet->peer_keepalive_base;
+- cursor = rxnet->peer_keepalive_cursor;
++ spin_lock_bh(&rxnet->peer_hash_lock);
+
+- _enter("%u,%lld", cursor, ktime_sub(now, base));
++ while (!list_empty(collector)) {
++ peer = list_entry(collector->next,
++ struct rxrpc_peer, keepalive_link);
+
+-next_bucket:
+- diff = ktime_to_ns(ktime_sub(now, base));
+- if (diff < 0)
+- goto resched;
++ list_del_init(&peer->keepalive_link);
++ if (!rxrpc_get_peer_maybe(peer))
++ continue;
+
+- _debug("at %u", cursor);
+- spin_lock_bh(&rxnet->peer_hash_lock);
+-next_peer:
+- if (!rxnet->live) {
+ spin_unlock_bh(&rxnet->peer_hash_lock);
+- goto out;
+- }
+
+- /* Everything in the bucket at the cursor is processed this second; the
+- * bucket at cursor + 1 goes now + 1s and so on...
+- */
+- if (hlist_empty(&rxnet->peer_keepalive[cursor])) {
+- if (hlist_empty(&rxnet->peer_keepalive_new)) {
+- spin_unlock_bh(&rxnet->peer_hash_lock);
+- goto emptied_bucket;
++ keepalive_at = peer->last_tx_at + RXRPC_KEEPALIVE_TIME;
++ slot = keepalive_at - base;
++ _debug("%02x peer %u t=%d {%pISp}",
++ cursor, peer->debug_id, slot, &peer->srx.transport);
++
++ if (keepalive_at <= base ||
++ keepalive_at > base + RXRPC_KEEPALIVE_TIME) {
++ rxrpc_send_keepalive(peer);
++ slot = RXRPC_KEEPALIVE_TIME;
+ }
+
+- hlist_move_list(&rxnet->peer_keepalive_new,
+- &rxnet->peer_keepalive[cursor]);
++ /* A transmission to this peer occurred since last we examined
++ * it so put it into the appropriate future bucket.
++ */
++ slot += cursor;
++ slot &= mask;
++ spin_lock_bh(&rxnet->peer_hash_lock);
++ list_add_tail(&peer->keepalive_link,
++ &rxnet->peer_keepalive[slot & mask]);
++ rxrpc_put_peer(peer);
+ }
+
+- peer = hlist_entry(rxnet->peer_keepalive[cursor].first,
+- struct rxrpc_peer, keepalive_link);
+- hlist_del_init(&peer->keepalive_link);
+- if (!rxrpc_get_peer_maybe(peer))
+- goto next_peer;
+-
+ spin_unlock_bh(&rxnet->peer_hash_lock);
++}
+
+- _debug("peer %u {%pISp}", peer->debug_id, &peer->srx.transport);
++/*
++ * Perform keep-alive pings with VERSION packets to keep any NAT alive.
++ */
++void rxrpc_peer_keepalive_worker(struct work_struct *work)
++{
++ struct rxrpc_net *rxnet =
++ container_of(work, struct rxrpc_net, peer_keepalive_work);
++ const u8 mask = ARRAY_SIZE(rxnet->peer_keepalive) - 1;
++ time64_t base, now, delay;
++ u8 cursor, stop;
++ LIST_HEAD(collector);
+
+-recalc:
+- diff = ktime_divns(ktime_sub(peer->last_tx_at, base), NSEC_PER_SEC);
+- if (diff < -30 || diff > 30)
+- goto send; /* LSW of 64-bit time probably wrapped on 32-bit */
+- diff += RXRPC_KEEPALIVE_TIME - 1;
+- if (diff < 0)
+- goto send;
++ now = ktime_get_seconds();
++ base = rxnet->peer_keepalive_base;
++ cursor = rxnet->peer_keepalive_cursor;
++ _enter("%lld,%u", base - now, cursor);
+
+- slot = (diff > RXRPC_KEEPALIVE_TIME - 1) ? RXRPC_KEEPALIVE_TIME - 1 : diff;
+- if (slot == 0)
+- goto send;
++ if (!rxnet->live)
++ return;
+
+- /* A transmission to this peer occurred since last we examined it so
+- * put it into the appropriate future bucket.
++ /* Remove to a temporary list all the peers that are currently lodged
++ * in expired buckets plus all new peers.
++ *
++ * Everything in the bucket at the cursor is processed this
++ * second; the bucket at cursor + 1 goes at now + 1s and so
++ * on...
+ */
+- slot = (slot + cursor) % ARRAY_SIZE(rxnet->peer_keepalive);
+ spin_lock_bh(&rxnet->peer_hash_lock);
+- hlist_add_head(&peer->keepalive_link, &rxnet->peer_keepalive[slot]);
+- rxrpc_put_peer(peer);
+- goto next_peer;
+-
+-send:
+- rxrpc_send_keepalive(peer);
+- now = ktime_get_real();
+- goto recalc;
++ list_splice_init(&rxnet->peer_keepalive_new, &collector);
++
++ stop = cursor + ARRAY_SIZE(rxnet->peer_keepalive);
++ while (base <= now && (s8)(cursor - stop) < 0) {
++ list_splice_tail_init(&rxnet->peer_keepalive[cursor & mask],
++ &collector);
++ base++;
++ cursor++;
++ }
+
+-emptied_bucket:
+- cursor++;
+- if (cursor >= ARRAY_SIZE(rxnet->peer_keepalive))
+- cursor = 0;
+- base = ktime_add_ns(base, NSEC_PER_SEC);
+- goto next_bucket;
++ base = now;
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+
+-resched:
+ rxnet->peer_keepalive_base = base;
+ rxnet->peer_keepalive_cursor = cursor;
+- delay = nsecs_to_jiffies(-diff) + 1;
+- timer_reduce(&rxnet->peer_keepalive_timer, jiffies + delay);
+-out:
++ rxrpc_peer_keepalive_dispatch(rxnet, &collector, base, cursor);
++ ASSERT(list_empty(&collector));
++
++ /* Schedule the timer for the next occupied timeslot. */
++ cursor = rxnet->peer_keepalive_cursor;
++ stop = cursor + RXRPC_KEEPALIVE_TIME - 1;
++ for (; (s8)(cursor - stop) < 0; cursor++) {
++ if (!list_empty(&rxnet->peer_keepalive[cursor & mask]))
++ break;
++ base++;
++ }
++
++ now = ktime_get_seconds();
++ delay = base - now;
++ if (delay < 1)
++ delay = 1;
++ delay *= HZ;
++ if (rxnet->live)
++ timer_reduce(&rxnet->peer_keepalive_timer, jiffies + delay);
++
+ _leave("");
+ }
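
An illustrative aside on the bucket arithmetic in the rewritten keepalive worker above (a minimal userspace sketch with hypothetical names, not kernel code): the per-second peer lists form a timer wheel whose size is a power of two, `cursor` marks the bucket due this second, and a peer is refiled `slot` seconds into the future by masked addition, exactly the `slot += cursor; slot &= mask` step in the hunk.

#include <stdio.h>

#define WHEEL_SIZE 8                  /* must be a power of two */
#define WHEEL_MASK (WHEEL_SIZE - 1)

/* File an event 'delay' seconds past the bucket currently at 'cursor'. */
static unsigned int wheel_slot(unsigned int cursor, unsigned int delay)
{
        return (cursor + delay) & WHEEL_MASK;
}

int main(void)
{
        unsigned int cursor = 6;
        unsigned int delay;

        for (delay = 0; delay <= 4; delay++)
                printf("delay %us -> bucket %u\n",
                       delay, wheel_slot(cursor, delay));
        return 0;
}
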
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 1b7e8107b3ae..24ec7cdcf332 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -322,7 +322,7 @@ struct rxrpc_peer *rxrpc_lookup_incoming_peer(struct rxrpc_local *local,
+ if (!peer) {
+ peer = prealloc;
+ hash_add_rcu(rxnet->peer_hash, &peer->hash_link, hash_key);
+- hlist_add_head(&peer->keepalive_link, &rxnet->peer_keepalive_new);
++ list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive_new);
+ }
+
+ spin_unlock(&rxnet->peer_hash_lock);
+@@ -367,8 +367,8 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
+ if (!peer) {
+ hash_add_rcu(rxnet->peer_hash,
+ &candidate->hash_link, hash_key);
+- hlist_add_head(&candidate->keepalive_link,
+- &rxnet->peer_keepalive_new);
++ list_add_tail(&candidate->keepalive_link,
++ &rxnet->peer_keepalive_new);
+ }
+
+ spin_unlock_bh(&rxnet->peer_hash_lock);
+@@ -441,7 +441,7 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer)
+
+ spin_lock_bh(&rxnet->peer_hash_lock);
+ hash_del_rcu(&peer->hash_link);
+- hlist_del_init(&peer->keepalive_link);
++ list_del_init(&peer->keepalive_link);
+ spin_unlock_bh(&rxnet->peer_hash_lock);
+
+ kfree_rcu(peer, rcu);
+diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
+index 6c0ae27fff84..94262c3ead88 100644
+--- a/net/rxrpc/rxkad.c
++++ b/net/rxrpc/rxkad.c
+@@ -669,7 +669,7 @@ static int rxkad_issue_challenge(struct rxrpc_connection *conn)
+ return -EAGAIN;
+ }
+
+- conn->params.peer->last_tx_at = ktime_get_real();
++ conn->params.peer->last_tx_at = ktime_get_seconds();
+ _leave(" = 0");
+ return 0;
+ }
+@@ -725,7 +725,7 @@ static int rxkad_send_response(struct rxrpc_connection *conn,
+ return -EAGAIN;
+ }
+
+- conn->params.peer->last_tx_at = ktime_get_real();
++ conn->params.peer->last_tx_at = ktime_get_seconds();
+ _leave(" = 0");
+ return 0;
+ }
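
A small aside on the time-base switch in the rxkad hunks above (a userspace analogue, not the rxrpc code): the old code stamped last_tx_at with wall-clock nanoseconds and had to defend against wrap and clock steps, as seen in the removed "LSW of 64-bit time probably wrapped" check earlier in this patch; coarse monotonic seconds match the one-second bucket wheel directly.

#include <stdio.h>
#include <time.h>

/* Userspace analogue of ktime_get_seconds(): coarse monotonic seconds. */
static long long mono_seconds(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec;
}

int main(void)
{
        long long last_tx = mono_seconds();

        /* ... transmit ... */
        long long idle = mono_seconds() - last_tx; /* immune to clock steps */
        printf("idle for %llds\n", idle);
        return 0;
}
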
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index 2ba721a590a7..a74b4d6ee186 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -122,6 +122,8 @@ static void mall_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
+ if (!head)
+ return;
+
++ tcf_unbind_filter(tp, &head->res);
++
+ if (!tc_skip_hw(head->flags))
+ mall_destroy_hw_filter(tp, head, (unsigned long) head, extack);
+
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index b49cc990a000..9cb37c63c3e5 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -468,11 +468,6 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ tcf_bind_filter(tp, &cr.res, base);
+ }
+
+- if (old_r)
+- tcf_exts_change(&r->exts, &e);
+- else
+- tcf_exts_change(&cr.exts, &e);
+-
+ if (old_r && old_r != r) {
+ err = tcindex_filter_result_init(old_r);
+ if (err < 0) {
+@@ -483,12 +478,15 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+
+ oldp = p;
+ r->res = cr.res;
++ tcf_exts_change(&r->exts, &e);
++
+ rcu_assign_pointer(tp->root, cp);
+
+ if (r == &new_filter_result) {
+ struct tcindex_filter *nfp;
+ struct tcindex_filter __rcu **fp;
+
++ f->result.res = r->res;
+ tcf_exts_change(&f->result.exts, &r->exts);
+
+ fp = cp->h + (handle % cp->hash);
+diff --git a/net/socket.c b/net/socket.c
+index 6a6aa84b64c1..0316b380389e 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2694,8 +2694,7 @@ EXPORT_SYMBOL(sock_unregister);
+
+ bool sock_is_registered(int family)
+ {
+- return family < NPROTO &&
+- rcu_access_pointer(net_families[array_index_nospec(family, NPROTO)]);
++ return family < NPROTO && rcu_access_pointer(net_families[family]);
+ }
+
+ static int __init sock_init(void)
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index c1076c19b858..ab27a2872935 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -451,14 +451,14 @@ static int vsock_send_shutdown(struct sock *sk, int mode)
+ return transport->shutdown(vsock_sk(sk), mode);
+ }
+
+-void vsock_pending_work(struct work_struct *work)
++static void vsock_pending_work(struct work_struct *work)
+ {
+ struct sock *sk;
+ struct sock *listener;
+ struct vsock_sock *vsk;
+ bool cleanup;
+
+- vsk = container_of(work, struct vsock_sock, dwork.work);
++ vsk = container_of(work, struct vsock_sock, pending_work.work);
+ sk = sk_vsock(vsk);
+ listener = vsk->listener;
+ cleanup = true;
+@@ -498,7 +498,6 @@ out:
+ sock_put(sk);
+ sock_put(listener);
+ }
+-EXPORT_SYMBOL_GPL(vsock_pending_work);
+
+ /**** SOCKET OPERATIONS ****/
+
+@@ -597,6 +596,8 @@ static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr)
+ return retval;
+ }
+
++static void vsock_connect_timeout(struct work_struct *work);
++
+ struct sock *__vsock_create(struct net *net,
+ struct socket *sock,
+ struct sock *parent,
+@@ -638,6 +639,8 @@ struct sock *__vsock_create(struct net *net,
+ vsk->sent_request = false;
+ vsk->ignore_connecting_rst = false;
+ vsk->peer_shutdown = 0;
++ INIT_DELAYED_WORK(&vsk->connect_work, vsock_connect_timeout);
++ INIT_DELAYED_WORK(&vsk->pending_work, vsock_pending_work);
+
+ psk = parent ? vsock_sk(parent) : NULL;
+ if (parent) {
+@@ -1117,7 +1120,7 @@ static void vsock_connect_timeout(struct work_struct *work)
+ struct vsock_sock *vsk;
+ int cancel = 0;
+
+- vsk = container_of(work, struct vsock_sock, dwork.work);
++ vsk = container_of(work, struct vsock_sock, connect_work.work);
+ sk = sk_vsock(vsk);
+
+ lock_sock(sk);
+@@ -1221,9 +1224,7 @@ static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
+ * timeout fires.
+ */
+ sock_hold(sk);
+- INIT_DELAYED_WORK(&vsk->dwork,
+- vsock_connect_timeout);
+- schedule_delayed_work(&vsk->dwork, timeout);
++ schedule_delayed_work(&vsk->connect_work, timeout);
+
+ /* Skip ahead to preserve error code set above. */
+ goto out_wait;
+diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
+index a7a73ffe675b..cb332adb84cd 100644
+--- a/net/vmw_vsock/vmci_transport.c
++++ b/net/vmw_vsock/vmci_transport.c
+@@ -1094,8 +1094,7 @@ static int vmci_transport_recv_listen(struct sock *sk,
+ vpending->listener = sk;
+ sock_hold(sk);
+ sock_hold(pending);
+- INIT_DELAYED_WORK(&vpending->dwork, vsock_pending_work);
+- schedule_delayed_work(&vpending->dwork, HZ);
++ schedule_delayed_work(&vpending->pending_work, HZ);
+
+ out:
+ return err;
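
For background on the vsock changes above (a generic sketch with an invented struct, not the vsock code): container_of() recovers the enclosing object from a pointer to one of its members, which is how both work handlers find their socket; giving the connect and pending paths separate delayed_work members, initialized once at creation, means each can be scheduled independently instead of racing over one shared dwork.

#include <stdio.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct work { int pending; };

struct toy_sock {
        int id;
        struct work connect_work;   /* one work item per purpose... */
        struct work pending_work;   /* ...so the two never share state */
};

static void connect_handler(struct work *w)
{
        struct toy_sock *s = container_of(w, struct toy_sock, connect_work);

        printf("connect timeout for socket %d\n", s->id);
}

static void pending_handler(struct work *w)
{
        struct toy_sock *s = container_of(w, struct toy_sock, pending_work);

        printf("pending work for socket %d\n", s->id);
}

int main(void)
{
        struct toy_sock s = { .id = 42 };

        connect_handler(&s.connect_work);
        pending_handler(&s.pending_work);
        return 0;
}
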
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 7f89d3c79a4b..753d5fc4b284 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -242,16 +242,12 @@ int snd_dma_alloc_pages_fallback(int type, struct device *device, size_t size,
+ int err;
+
+ while ((err = snd_dma_alloc_pages(type, device, size, dmab)) < 0) {
+- size_t aligned_size;
+ if (err != -ENOMEM)
+ return err;
+ if (size <= PAGE_SIZE)
+ return -ENOMEM;
+- aligned_size = PAGE_SIZE << get_order(size);
+- if (size != aligned_size)
+- size = aligned_size;
+- else
+- size >>= 1;
++ size >>= 1;
++ size = PAGE_SIZE << get_order(size);
+ }
+ if (! dmab->area)
+ return -ENOMEM;
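
For context on the retry loop just above (a hedged userspace model, not the ALSA code): after each failed allocation the requested size is halved and then rounded back up to a whole power-of-two number of pages, so every retry is a valid page-order size strictly smaller than the last. A sketch of the resulting sequence:

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE (1UL << PAGE_SHIFT)

/* Smallest order such that (PAGE_SIZE << order) >= size. */
static int get_order(unsigned long size)
{
        unsigned long n = PAGE_SIZE;
        int order = 0;

        while (n < size) {
                n <<= 1;
                order++;
        }
        return order;
}

int main(void)
{
        unsigned long size = 3 * 1024 * 1024 + 123; /* arbitrary request */

        while (size > PAGE_SIZE) {
                size >>= 1;
                size = PAGE_SIZE << get_order(size);
                printf("retry with %lu bytes (order %d)\n",
                       size, get_order(size));
        }
        return 0;
}
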
+diff --git a/sound/core/seq/oss/seq_oss.c b/sound/core/seq/oss/seq_oss.c
+index 5f64d0d88320..e1f44fc86885 100644
+--- a/sound/core/seq/oss/seq_oss.c
++++ b/sound/core/seq/oss/seq_oss.c
+@@ -203,7 +203,7 @@ odev_poll(struct file *file, poll_table * wait)
+ struct seq_oss_devinfo *dp;
+ dp = file->private_data;
+ if (snd_BUG_ON(!dp))
+- return -ENXIO;
++ return EPOLLERR;
+ return snd_seq_oss_poll(dp, file, wait);
+ }
+
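
Worth noting about the two poll fixes in this patch (a sketch with made-up names, not the ALSA code): a ->poll() handler returns a __poll_t event mask, not a negative errno, so -ENXIO would be misread as a mask with nearly every event bit set; EPOLLERR is the correct way to report failure.

#include <stdio.h>
#include <sys/epoll.h>
#include <errno.h>

typedef unsigned int poll_mask_t;   /* stand-in for the kernel's __poll_t */

/* Hypothetical poll handler: report errors as EPOLLERR, never as -errno. */
static poll_mask_t toy_poll(int device_ok)
{
        if (!device_ok)
                return EPOLLERR;            /* correct: an event mask */
        return EPOLLIN | EPOLLOUT;
}

int main(void)
{
        poll_mask_t bad = (poll_mask_t)-ENXIO; /* what the old code returned */

        printf("-ENXIO as a mask: %#x (almost every event bit set)\n", bad);
        printf("EPOLLERR mask:    %#x\n", toy_poll(0));
        return 0;
}
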
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 61a07fe34cd2..ee8d0d86f0df 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1101,7 +1101,7 @@ static __poll_t snd_seq_poll(struct file *file, poll_table * wait)
+
+ /* check client structures are in place */
+ if (snd_BUG_ON(!client))
+- return -ENXIO;
++ return EPOLLERR;
+
+ if ((snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_INPUT) &&
+ client->data.user.fifo) {
+diff --git a/sound/core/seq/seq_virmidi.c b/sound/core/seq/seq_virmidi.c
+index 289ae6bb81d9..8ebbca554e99 100644
+--- a/sound/core/seq/seq_virmidi.c
++++ b/sound/core/seq/seq_virmidi.c
+@@ -163,6 +163,7 @@ static void snd_virmidi_output_trigger(struct snd_rawmidi_substream *substream,
+ int count, res;
+ unsigned char buf[32], *pbuf;
+ unsigned long flags;
++ bool check_resched = !in_atomic();
+
+ if (up) {
+ vmidi->trigger = 1;
+@@ -200,6 +201,15 @@ static void snd_virmidi_output_trigger(struct snd_rawmidi_substream *substream,
+ vmidi->event.type = SNDRV_SEQ_EVENT_NONE;
+ }
+ }
++ if (!check_resched)
++ continue;
++ /* do temporary unlock & cond_resched() for avoiding
++ * CPU soft lockup, which may happen via a write from
++ * a huge rawmidi buffer
++ */
++ spin_unlock_irqrestore(&substream->runtime->lock, flags);
++ cond_resched();
++ spin_lock_irqsave(&substream->runtime->lock, flags);
+ }
+ out:
+ spin_unlock_irqrestore(&substream->runtime->lock, flags);
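
The pattern added above — drop the lock, yield, retake the lock inside a long loop — is a standard way to avoid soft lockups when a loop can run for a long time under a spinlock. A rough userspace analogue using a pthread mutex (illustrative only, with invented names; build with -lpthread):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void drain_buffer(int items)
{
        int i;

        pthread_mutex_lock(&lock);
        for (i = 0; i < items; i++) {
                /* ... process one unit of work under the lock ... */

                /* Periodically let other threads run so nobody starves. */
                if ((i & 0xff) == 0xff) {
                        pthread_mutex_unlock(&lock);
                        sched_yield();  /* stand-in for cond_resched() */
                        pthread_mutex_lock(&lock);
                }
        }
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        drain_buffer(1024);
        puts("done");
        return 0;
}
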
+diff --git a/sound/pci/cs5535audio/cs5535audio.h b/sound/pci/cs5535audio/cs5535audio.h
+index f4fcdf93f3c8..d84620a0c26c 100644
+--- a/sound/pci/cs5535audio/cs5535audio.h
++++ b/sound/pci/cs5535audio/cs5535audio.h
+@@ -67,9 +67,9 @@ struct cs5535audio_dma_ops {
+ };
+
+ struct cs5535audio_dma_desc {
+- u32 addr;
+- u16 size;
+- u16 ctlreserved;
++ __le32 addr;
++ __le16 size;
++ __le16 ctlreserved;
+ };
+
+ struct cs5535audio_dma {
+diff --git a/sound/pci/cs5535audio/cs5535audio_pcm.c b/sound/pci/cs5535audio/cs5535audio_pcm.c
+index ee7065f6e162..326caec854e1 100644
+--- a/sound/pci/cs5535audio/cs5535audio_pcm.c
++++ b/sound/pci/cs5535audio/cs5535audio_pcm.c
+@@ -158,8 +158,8 @@ static int cs5535audio_build_dma_packets(struct cs5535audio *cs5535au,
+ lastdesc->addr = cpu_to_le32((u32) dma->desc_buf.addr);
+ lastdesc->size = 0;
+ lastdesc->ctlreserved = cpu_to_le16(PRD_JMP);
+- jmpprd_addr = cpu_to_le32(lastdesc->addr +
+- (sizeof(struct cs5535audio_dma_desc)*periods));
++ jmpprd_addr = (u32)dma->desc_buf.addr +
++ sizeof(struct cs5535audio_dma_desc) * periods;
+
+ dma->substream = substream;
+ dma->period_bytes = period_bytes;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index a0c93b9c9a28..c8e6d0d08c8f 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2207,7 +2207,7 @@ out_free:
+ */
+ static struct snd_pci_quirk power_save_blacklist[] = {
+ /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+- SND_PCI_QUIRK(0x1849, 0x0c0c, "Asrock B85M-ITX", 0),
++ SND_PCI_QUIRK(0x1849, 0xc892, "Asrock B85M-ITX", 0),
+ /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+ SND_PCI_QUIRK(0x1043, 0x8733, "Asus Prime X370-Pro", 0),
+ /* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 88ce2f1022e1..16197ad4512a 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -211,6 +211,7 @@ static void cx_auto_reboot_notify(struct hda_codec *codec)
+ struct conexant_spec *spec = codec->spec;
+
+ switch (codec->core.vendor_id) {
++ case 0x14f12008: /* CX8200 */
+ case 0x14f150f2: /* CX20722 */
+ case 0x14f150f4: /* CX20724 */
+ break;
+@@ -218,13 +219,14 @@ static void cx_auto_reboot_notify(struct hda_codec *codec)
+ return;
+ }
+
+- /* Turn the CX20722 codec into D3 to avoid spurious noises
++ /* Turn the problematic codec into D3 to avoid spurious noises
+ from the internal speaker during (and after) reboot */
+ cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, false);
+
+ snd_hda_codec_set_power_to_all(codec, codec->core.afg, AC_PWRST_D3);
+ snd_hda_codec_write(codec, codec->core.afg, 0,
+ AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
++ msleep(10);
+ }
+
+ static void cx_auto_free(struct hda_codec *codec)
+diff --git a/sound/pci/vx222/vx222_ops.c b/sound/pci/vx222/vx222_ops.c
+index d4298af6d3ee..c0d0bf44f365 100644
+--- a/sound/pci/vx222/vx222_ops.c
++++ b/sound/pci/vx222/vx222_ops.c
+@@ -275,7 +275,7 @@ static void vx2_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ length >>= 2; /* in 32bit words */
+ /* Transfer using pseudo-dma. */
+ for (; length > 0; length--) {
+- outl(cpu_to_le32(*addr), port);
++ outl(*addr, port);
+ addr++;
+ }
+ addr = (u32 *)runtime->dma_area;
+@@ -285,7 +285,7 @@ static void vx2_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ count >>= 2; /* in 32bit words */
+ /* Transfer using pseudo-dma. */
+ for (; count > 0; count--) {
+- outl(cpu_to_le32(*addr), port);
++ outl(*addr, port);
+ addr++;
+ }
+
+@@ -313,7 +313,7 @@ static void vx2_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ length >>= 2; /* in 32bit words */
+ /* Transfer using pseudo-dma. */
+ for (; length > 0; length--)
+- *addr++ = le32_to_cpu(inl(port));
++ *addr++ = inl(port);
+ addr = (u32 *)runtime->dma_area;
+ pipe->hw_ptr = 0;
+ }
+@@ -321,7 +321,7 @@ static void vx2_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ count >>= 2; /* in 32bit words */
+ /* Transfer using pseudo-dma. */
+ for (; count > 0; count--)
+- *addr++ = le32_to_cpu(inl(port));
++ *addr++ = inl(port);
+
+ vx2_release_pseudo_dma(chip);
+ }
+diff --git a/sound/pcmcia/vx/vxp_ops.c b/sound/pcmcia/vx/vxp_ops.c
+index 8cde40226355..4c4ef1fec69f 100644
+--- a/sound/pcmcia/vx/vxp_ops.c
++++ b/sound/pcmcia/vx/vxp_ops.c
+@@ -375,7 +375,7 @@ static void vxp_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ length >>= 1; /* in 16bit words */
+ /* Transfer using pseudo-dma. */
+ for (; length > 0; length--) {
+- outw(cpu_to_le16(*addr), port);
++ outw(*addr, port);
+ addr++;
+ }
+ addr = (unsigned short *)runtime->dma_area;
+@@ -385,7 +385,7 @@ static void vxp_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ count >>= 1; /* in 16bit words */
+ /* Transfer using pseudo-dma. */
+ for (; count > 0; count--) {
+- outw(cpu_to_le16(*addr), port);
++ outw(*addr, port);
+ addr++;
+ }
+ vx_release_pseudo_dma(chip);
+@@ -417,7 +417,7 @@ static void vxp_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ length >>= 1; /* in 16bit words */
+ /* Transfer using pseudo-dma. */
+ for (; length > 0; length--)
+- *addr++ = le16_to_cpu(inw(port));
++ *addr++ = inw(port);
+ addr = (unsigned short *)runtime->dma_area;
+ pipe->hw_ptr = 0;
+ }
+@@ -425,12 +425,12 @@ static void vxp_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ count >>= 1; /* in 16bit words */
+ /* Transfer using pseudo-dma. */
+ for (; count > 1; count--)
+- *addr++ = le16_to_cpu(inw(port));
++ *addr++ = inw(port);
+ /* Disable DMA */
+ pchip->regDIALOG &= ~VXP_DLG_DMAREAD_SEL_MASK;
+ vx_outb(chip, DIALOG, pchip->regDIALOG);
+ /* Read the last word (16 bits) */
+- *addr = le16_to_cpu(inw(port));
++ *addr = inw(port);
+ /* Disable 16-bit accesses */
+ pchip->regDIALOG &= ~VXP_DLG_DMA16_SEL_MASK;
+ vx_outb(chip, DIALOG, pchip->regDIALOG);
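
One more aside on the vx222/vxpocket hunks above (a sketch using simulated swaps so the effect is visible on any host): outl()/inl() already convert between CPU order and little-endian bus order, so wrapping their arguments in cpu_to_le32()/le32_to_cpu() swaps twice on big-endian machines and corrupts the data; on little-endian x86 both conversions are no-ops, which is presumably why the bug went unnoticed.

#include <stdio.h>
#include <stdint.h>

static uint32_t bswap32(uint32_t v)
{
        return (v >> 24) | ((v >> 8) & 0xff00) |
               ((v & 0xff00) << 8) | (v << 24);
}

/* Model of outl() on a big-endian machine: it swaps to LE itself. */
static uint32_t outl_wire(uint32_t cpu_val)
{
        return bswap32(cpu_val);
}

int main(void)
{
        uint32_t sample = 0x12345678;

        /* Correct: pass the CPU-native value straight to outl(). */
        printf("outl(v)              -> wire %08x\n", outl_wire(sample));

        /* Buggy: cpu_to_le32() already swapped, so outl() swaps back. */
        printf("outl(cpu_to_le32(v)) -> wire %08x\n",
               outl_wire(bswap32(sample)));
        return 0;
}
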
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-08-18 18:10 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-08-18 18:10 UTC (permalink / raw
To: gentoo-commits
commit: bb094983a895bf834a2e28e3001333227f669392
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 18 18:09:59 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Aug 18 18:09:59 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bb094983
Linux patch 4.17.17
0000_README | 4 ++++
1016_linux-4.17.17.patch | 37 +++++++++++++++++++++++++++++++++++++
2 files changed, 41 insertions(+)
diff --git a/0000_README b/0000_README
index 377ddec..e0ea866 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1015_linux-4.17.16.patch
From: http://www.kernel.org
Desc: Linux 4.17.16
+Patch: 1016_linux-4.17.17.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.17
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1016_linux-4.17.17.patch b/1016_linux-4.17.17.patch
new file mode 100644
index 0000000..d43dfd4
--- /dev/null
+++ b/1016_linux-4.17.17.patch
@@ -0,0 +1,37 @@
+diff --git a/Makefile b/Makefile
+index 8ca595f045c4..5ff2040cf3ee 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/x86/include/asm/pgtable-invert.h b/arch/x86/include/asm/pgtable-invert.h
+index 44b1203ece12..a0c1525f1b6f 100644
+--- a/arch/x86/include/asm/pgtable-invert.h
++++ b/arch/x86/include/asm/pgtable-invert.h
+@@ -4,9 +4,18 @@
+
+ #ifndef __ASSEMBLY__
+
++/*
++ * A clear pte value is special, and doesn't get inverted.
++ *
++ * Note that even users that only pass a pgprot_t (rather
++ * than a full pte) won't trigger the special zero case,
++ * because even PAGE_NONE has _PAGE_PROTNONE | _PAGE_ACCESSED
++ * set. So the all zero case really is limited to just the
++ * cleared page table entry case.
++ */
+ static inline bool __pte_needs_invert(u64 val)
+ {
+- return !(val & _PAGE_PRESENT);
++ return val && !(val & _PAGE_PRESENT);
+ }
+
+ /* Get a mask to xor with the page table entry to get the correct pfn. */
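
As a quick illustration of the predicate above (toy flags, not the real x86 bit layout): the L1TF mitigation inverts the physical-address bits of not-present PTEs, but an all-zero entry must stay all-zero, or an empty page-table slot would suddenly point at real memory.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define TOY_PAGE_PRESENT 0x1ULL   /* hypothetical stand-in for _PAGE_PRESENT */

static bool pte_needs_invert(uint64_t val)
{
        return val && !(val & TOY_PAGE_PRESENT); /* non-zero and not present */
}

int main(void)
{
        printf("cleared pte 0x0   -> invert? %d\n", pte_needs_invert(0));
        printf("prot-none   0x100 -> invert? %d\n", pte_needs_invert(0x100));
        printf("present     0x101 -> invert? %d\n", pte_needs_invert(0x101));
        return 0;
}
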
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-08-17 19:40 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-08-17 19:40 UTC (permalink / raw
To: gentoo-commits
commit: 319bd276ebc00102a846d13de238ce8fec14ae9c
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 17 19:40:19 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug 17 19:40:19 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=319bd276
Removal of redundant patch:
x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
0000_README | 4 ---
1700_x86-l1tf-config-kvm-build-error-fix.patch | 40 --------------------------
2 files changed, 44 deletions(-)
diff --git a/0000_README b/0000_README
index 83fccd5..377ddec 100644
--- a/0000_README
+++ b/0000_README
@@ -115,10 +115,6 @@ Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
-Patch: 1700_x86-l1tf-config-kvm-build-error-fix.patch
-From: http://www.kernel.org
-Desc: x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
-
Patch: 2300_enable-poweroff-on-Mac-Pro-11.patch
From: http://kernel.ubuntu.com/git/ubuntu/ubuntu-xenial.git/patch/drivers/pci/quirks.c?id=5080ff61a438f3dd80b88b423e1a20791d8a774c
Desc: Workaround to enable poweroff on Mac Pro 11. See bug #601964.
diff --git a/1700_x86-l1tf-config-kvm-build-error-fix.patch b/1700_x86-l1tf-config-kvm-build-error-fix.patch
deleted file mode 100644
index 88c2ec6..0000000
--- a/1700_x86-l1tf-config-kvm-build-error-fix.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 Mon Sep 17 00:00:00 2001
-From: Guenter Roeck <linux@roeck-us.net>
-Date: Wed, 15 Aug 2018 08:38:33 -0700
-Subject: x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
-
-From: Guenter Roeck <linux@roeck-us.net>
-
-commit 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 upstream.
-
-allmodconfig+CONFIG_INTEL_KVM=n results in the following build error.
-
- ERROR: "l1tf_vmx_mitigation" [arch/x86/kvm/kvm.ko] undefined!
-
-Fixes: 5b76a3cff011 ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry")
-Reported-by: Meelis Roos <mroos@linux.ee>
-Cc: Meelis Roos <mroos@linux.ee>
-Cc: Paolo Bonzini <pbonzini@redhat.com>
-Cc: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Guenter Roeck <linux@roeck-us.net>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
----
- arch/x86/kernel/cpu/bugs.c | 3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
-
---- a/arch/x86/kernel/cpu/bugs.c
-+++ b/arch/x86/kernel/cpu/bugs.c
-@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
- enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
- #if IS_ENABLED(CONFIG_KVM_INTEL)
- EXPORT_SYMBOL_GPL(l1tf_mitigation);
--
-+#endif
- enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
- EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
--#endif
-
- static void __init l1tf_select_mitigation(void)
- {
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-08-17 19:27 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-08-17 19:27 UTC (permalink / raw
To: gentoo-commits
commit: 2c2807aea9a2fb02322e1f70bedc5d2836585229
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 17 19:27:36 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug 17 19:27:36 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2c2807ae
Linux patch 4.17.16
0000_README | 4 +
1015_linux-4.17.16.patch | 1663 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1667 insertions(+)
diff --git a/0000_README b/0000_README
index a489588..83fccd5 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch: 1014_linux-4.17.15.patch
From: http://www.kernel.org
Desc: Linux 4.17.15
+Patch: 1015_linux-4.17.16.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.16
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1015_linux-4.17.16.patch b/1015_linux-4.17.16.patch
new file mode 100644
index 0000000..edbea4b
--- /dev/null
+++ b/1015_linux-4.17.16.patch
@@ -0,0 +1,1663 @@
+diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
+index ddc029734b25..005d8842a503 100644
+--- a/Documentation/process/changes.rst
++++ b/Documentation/process/changes.rst
+@@ -35,7 +35,7 @@ binutils 2.20 ld -v
+ flex 2.5.35 flex --version
+ bison 2.0 bison --version
+ util-linux 2.10o fdformat --version
+-module-init-tools 0.9.10 depmod -V
++kmod 13 depmod -V
+ e2fsprogs 1.41.4 e2fsck -V
+ jfsutils 1.1.3 fsck.jfs -V
+ reiserfsprogs 3.6.3 reiserfsck -V
+@@ -156,12 +156,6 @@ is not build with ``CONFIG_KALLSYMS`` and you have no way to rebuild and
+ reproduce the Oops with that option, then you can still decode that Oops
+ with ksymoops.
+
+-Module-Init-Tools
+------------------
+-
+-A new module loader is now in the kernel that requires ``module-init-tools``
+-to use. It is backward compatible with the 2.4.x series kernels.
+-
+ Mkinitrd
+ --------
+
+@@ -371,16 +365,17 @@ Util-linux
+
+ - <https://www.kernel.org/pub/linux/utils/util-linux/>
+
++Kmod
++----
++
++- <https://www.kernel.org/pub/linux/utils/kernel/kmod/>
++- <https://git.kernel.org/pub/scm/utils/kernel/kmod/kmod.git>
++
+ Ksymoops
+ --------
+
+ - <https://www.kernel.org/pub/linux/utils/kernel/ksymoops/v2.4/>
+
+-Module-Init-Tools
+------------------
+-
+-- <https://www.kernel.org/pub/linux/utils/kernel/module-init-tools/>
+-
+ Mkinitrd
+ --------
+
+diff --git a/Makefile b/Makefile
+index e8cbf2dd3069..8ca595f045c4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 493ff75670ff..8ae5d7ae4af3 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -977,12 +977,12 @@ int pmd_clear_huge(pmd_t *pmdp)
+ return 1;
+ }
+
+-int pud_free_pmd_page(pud_t *pud)
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+ return pud_none(*pud);
+ }
+
+-int pmd_free_pte_page(pmd_t *pmd)
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ return pmd_none(*pmd);
+ }
+diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+index 16c4ccb1f154..d2364c55bbde 100644
+--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
++++ b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+@@ -265,7 +265,7 @@ ENTRY(sha256_mb_mgr_get_comp_job_avx2)
+ vpinsrd $1, _args_digest+1*32(state, idx, 4), %xmm0, %xmm0
+ vpinsrd $2, _args_digest+2*32(state, idx, 4), %xmm0, %xmm0
+ vpinsrd $3, _args_digest+3*32(state, idx, 4), %xmm0, %xmm0
+- vmovd _args_digest(state , idx, 4) , %xmm0
++ vmovd _args_digest+4*32(state, idx, 4), %xmm1
+ vpinsrd $1, _args_digest+5*32(state, idx, 4), %xmm1, %xmm1
+ vpinsrd $2, _args_digest+6*32(state, idx, 4), %xmm1, %xmm1
+ vpinsrd $3, _args_digest+7*32(state, idx, 4), %xmm1, %xmm1
+diff --git a/arch/x86/include/asm/i8259.h b/arch/x86/include/asm/i8259.h
+index 5cdcdbd4d892..89789e8c80f6 100644
+--- a/arch/x86/include/asm/i8259.h
++++ b/arch/x86/include/asm/i8259.h
+@@ -3,6 +3,7 @@
+ #define _ASM_X86_I8259_H
+
+ #include <linux/delay.h>
++#include <asm/io.h>
+
+ extern unsigned int cached_irq_mask;
+
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index d492752f79e1..391f358ebb4c 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -394,10 +394,10 @@ extern int uv_hub_info_version(void)
+ EXPORT_SYMBOL(uv_hub_info_version);
+
+ /* Default UV memory block size is 2GB */
+-static unsigned long mem_block_size = (2UL << 30);
++static unsigned long mem_block_size __initdata = (2UL << 30);
+
+ /* Kernel parameter to specify UV mem block size */
+-static int parse_mem_block_size(char *ptr)
++static int __init parse_mem_block_size(char *ptr)
+ {
+ unsigned long size = memparse(ptr, NULL);
+
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index edfc64a8a154..d07addb99b71 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
+ enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+-
++#endif
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+-#endif
+
+ static void __init l1tf_select_mitigation(void)
+ {
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 6c54d8b0e5dc..ae4f5afb9095 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -883,7 +883,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+ apply_forced_caps(c);
+ }
+
+-static void get_cpu_address_sizes(struct cpuinfo_x86 *c)
++void get_cpu_address_sizes(struct cpuinfo_x86 *c)
+ {
+ u32 eax, ebx, ecx, edx;
+
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index cca588407dca..d0a17a438dd0 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -46,6 +46,7 @@ extern const struct cpu_dev *const __x86_cpu_dev_start[],
+ *const __x86_cpu_dev_end[];
+
+ extern void get_cpu_cap(struct cpuinfo_x86 *c);
++extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
+ extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
+ extern int detect_extended_topology_early(struct cpuinfo_x86 *c);
+ extern int detect_ht_early(struct cpuinfo_x86 *c);
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 7bb6f65c79de..29505724202a 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -1784,6 +1784,12 @@ int set_memory_nonglobal(unsigned long addr, int numpages)
+ __pgprot(_PAGE_GLOBAL), 0);
+ }
+
++int set_memory_global(unsigned long addr, int numpages)
++{
++ return change_page_attr_set(&addr, numpages,
++ __pgprot(_PAGE_GLOBAL), 0);
++}
++
+ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
+ {
+ struct cpa_data cpa;
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index ffc8c13c50e4..64df979a97e2 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -715,28 +715,50 @@ int pmd_clear_huge(pmd_t *pmd)
+ return 0;
+ }
+
++#ifdef CONFIG_X86_64
+ /**
+ * pud_free_pmd_page - Clear pud entry and free pmd page.
+ * @pud: Pointer to a PUD.
++ * @addr: Virtual address associated with pud.
+ *
+- * Context: The pud range has been unmaped and TLB purged.
++ * Context: The pud range has been unmapped and TLB purged.
+ * Return: 1 if clearing the entry succeeded. 0 otherwise.
++ *
++ * NOTE: Callers must allow a single page allocation.
+ */
+-int pud_free_pmd_page(pud_t *pud)
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+- pmd_t *pmd;
++ pmd_t *pmd, *pmd_sv;
++ pte_t *pte;
+ int i;
+
+ if (pud_none(*pud))
+ return 1;
+
+ pmd = (pmd_t *)pud_page_vaddr(*pud);
++ pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
++ if (!pmd_sv)
++ return 0;
+
+- for (i = 0; i < PTRS_PER_PMD; i++)
+- if (!pmd_free_pte_page(&pmd[i]))
+- return 0;
++ for (i = 0; i < PTRS_PER_PMD; i++) {
++ pmd_sv[i] = pmd[i];
++ if (!pmd_none(pmd[i]))
++ pmd_clear(&pmd[i]);
++ }
+
+ pud_clear(pud);
++
++ /* INVLPG to clear all paging-structure caches */
++ flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
++
++ for (i = 0; i < PTRS_PER_PMD; i++) {
++ if (!pmd_none(pmd_sv[i])) {
++ pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
++ free_page((unsigned long)pte);
++ }
++ }
++
++ free_page((unsigned long)pmd_sv);
+ free_page((unsigned long)pmd);
+
+ return 1;
+@@ -745,11 +767,12 @@ int pud_free_pmd_page(pud_t *pud)
+ /**
+ * pmd_free_pte_page - Clear pmd entry and free pte page.
+ * @pmd: Pointer to a PMD.
++ * @addr: Virtual address associated with pmd.
+ *
+- * Context: The pmd range has been unmaped and TLB purged.
++ * Context: The pmd range has been unmapped and TLB purged.
+ * Return: 1 if clearing the entry succeeded. 0 otherwise.
+ */
+-int pmd_free_pte_page(pmd_t *pmd)
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ pte_t *pte;
+
+@@ -758,8 +781,30 @@ int pmd_free_pte_page(pmd_t *pmd)
+
+ pte = (pte_t *)pmd_page_vaddr(*pmd);
+ pmd_clear(pmd);
++
++ /* INVLPG to clear all paging-structure caches */
++ flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
++
+ free_page((unsigned long)pte);
+
+ return 1;
+ }
++
++#else /* !CONFIG_X86_64 */
++
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
++{
++ return pud_none(*pud);
++}
++
++/*
++ * Disable free page handling on x86-PAE. This assures that ioremap()
++ * does not update sync'd pmd entries. See vmalloc_sync_one().
++ */
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
++{
++ return pmd_none(*pmd);
++}
++
++#endif /* CONFIG_X86_64 */
+ #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
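
The x86-64 teardown above follows a strict clear -> flush -> free order, modelled very loosely below with plain heap allocations and a stub flush: the saved copy exists because the entries are cleared before the TLB flush, and only after the flush — when the paging-structure caches can no longer reference them — is it safe to walk the saved copy and free the lower-level pages.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ENTRIES 4

/* Stub: in the kernel this would be flush_tlb_kernel_range(). */
static void flush_tlb_stub(void) { puts("flush paging-structure caches"); }

int main(void)
{
        /* Model: a top-level table holding pointers to lower-level tables. */
        void *pmd[ENTRIES];
        void *saved[ENTRIES];
        int i;

        for (i = 0; i < ENTRIES; i++)
                pmd[i] = malloc(64);

        /* 1. Save the entries, then clear them so no new walk sees them. */
        memcpy(saved, pmd, sizeof(saved));
        memset(pmd, 0, sizeof(pmd));

        /* 2. Flush before freeing: caches may still hold old entries. */
        flush_tlb_stub();

        /* 3. Only now is it safe to free the lower-level tables. */
        for (i = 0; i < ENTRIES; i++)
                free(saved[i]);

        puts("teardown complete");
        return 0;
}
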
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index fb752d9a3ce9..946455e9cfef 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -435,6 +435,13 @@ static inline bool pti_kernel_image_global_ok(void)
+ return true;
+ }
+
++/*
++ * This is the only user for these and it is not arch-generic
++ * like the other set_memory.h functions. Just extern them.
++ */
++extern int set_memory_nonglobal(unsigned long addr, int numpages);
++extern int set_memory_global(unsigned long addr, int numpages);
++
+ /*
+ * For some configurations, map all of kernel text into the user page
+ * tables. This reduces TLB misses, especially on non-PCID systems.
+@@ -447,7 +454,8 @@ void pti_clone_kernel_text(void)
+ * clone the areas past rodata, they might contain secrets.
+ */
+ unsigned long start = PFN_ALIGN(_text);
+- unsigned long end = (unsigned long)__end_rodata_hpage_align;
++ unsigned long end_clone = (unsigned long)__end_rodata_hpage_align;
++ unsigned long end_global = PFN_ALIGN((unsigned long)__stop___ex_table);
+
+ if (!pti_kernel_image_global_ok())
+ return;
+@@ -459,14 +467,18 @@ void pti_clone_kernel_text(void)
+ * pti_set_kernel_image_nonglobal() did to clear the
+ * global bit.
+ */
+- pti_clone_pmds(start, end, _PAGE_RW);
++ pti_clone_pmds(start, end_clone, _PAGE_RW);
++
++ /*
++ * pti_clone_pmds() will set the global bit in any PMDs
++ * that it clones, but we also need to get any PTEs in
++ * the last level for areas that are not huge-page-aligned.
++ */
++
++ /* Set the global bit for normal non-__init kernel text: */
++ set_memory_global(start, (end_global - start) >> PAGE_SHIFT);
+ }
+
+-/*
+- * This is the only user for it and it is not arch-generic like
+- * the other set_memory.h functions. Just extern it.
+- */
+-extern int set_memory_nonglobal(unsigned long addr, int numpages);
+ void pti_set_kernel_image_nonglobal(void)
+ {
+ /*
+@@ -478,9 +490,11 @@ void pti_set_kernel_image_nonglobal(void)
+ unsigned long start = PFN_ALIGN(_text);
+ unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);
+
+- if (pti_kernel_image_global_ok())
+- return;
+-
++ /*
++ * This clears _PAGE_GLOBAL from the entire kernel image.
++ * pti_clone_kernel_text() map put _PAGE_GLOBAL back for
++ * areas that are mapped to userspace.
++ */
+ set_memory_nonglobal(start, (end - start) >> PAGE_SHIFT);
+ }
+
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index ce8f6a8d4ac9..ca0fce74814b 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1258,6 +1258,9 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ get_cpu_cap(&boot_cpu_data);
+ x86_configure_nx();
+
++ /* Determine virtual and physical address sizes */
++ get_cpu_address_sizes(&boot_cpu_data);
++
+ /* Let's presume PV guests always boot on vCPU with id 0. */
+ per_cpu(xen_vcpu_id, 0) = 0;
+
+diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
+index d880a4897159..4ee7c041bb82 100644
+--- a/crypto/ablkcipher.c
++++ b/crypto/ablkcipher.c
+@@ -71,11 +71,9 @@ static inline u8 *ablkcipher_get_spot(u8 *start, unsigned int len)
+ return max(start, end_page);
+ }
+
+-static inline unsigned int ablkcipher_done_slow(struct ablkcipher_walk *walk,
+- unsigned int bsize)
++static inline void ablkcipher_done_slow(struct ablkcipher_walk *walk,
++ unsigned int n)
+ {
+- unsigned int n = bsize;
+-
+ for (;;) {
+ unsigned int len_this_page = scatterwalk_pagelen(&walk->out);
+
+@@ -87,17 +85,13 @@ static inline unsigned int ablkcipher_done_slow(struct ablkcipher_walk *walk,
+ n -= len_this_page;
+ scatterwalk_start(&walk->out, sg_next(walk->out.sg));
+ }
+-
+- return bsize;
+ }
+
+-static inline unsigned int ablkcipher_done_fast(struct ablkcipher_walk *walk,
+- unsigned int n)
++static inline void ablkcipher_done_fast(struct ablkcipher_walk *walk,
++ unsigned int n)
+ {
+ scatterwalk_advance(&walk->in, n);
+ scatterwalk_advance(&walk->out, n);
+-
+- return n;
+ }
+
+ static int ablkcipher_walk_next(struct ablkcipher_request *req,
+@@ -107,39 +101,40 @@ int ablkcipher_walk_done(struct ablkcipher_request *req,
+ struct ablkcipher_walk *walk, int err)
+ {
+ struct crypto_tfm *tfm = req->base.tfm;
+- unsigned int nbytes = 0;
++ unsigned int n; /* bytes processed */
++ bool more;
+
+- if (likely(err >= 0)) {
+- unsigned int n = walk->nbytes - err;
++ if (unlikely(err < 0))
++ goto finish;
+
+- if (likely(!(walk->flags & ABLKCIPHER_WALK_SLOW)))
+- n = ablkcipher_done_fast(walk, n);
+- else if (WARN_ON(err)) {
+- err = -EINVAL;
+- goto err;
+- } else
+- n = ablkcipher_done_slow(walk, n);
++ n = walk->nbytes - err;
++ walk->total -= n;
++ more = (walk->total != 0);
+
+- nbytes = walk->total - n;
+- err = 0;
++ if (likely(!(walk->flags & ABLKCIPHER_WALK_SLOW))) {
++ ablkcipher_done_fast(walk, n);
++ } else {
++ if (WARN_ON(err)) {
++ /* unexpected case; didn't process all bytes */
++ err = -EINVAL;
++ goto finish;
++ }
++ ablkcipher_done_slow(walk, n);
+ }
+
+- scatterwalk_done(&walk->in, 0, nbytes);
+- scatterwalk_done(&walk->out, 1, nbytes);
+-
+-err:
+- walk->total = nbytes;
+- walk->nbytes = nbytes;
++ scatterwalk_done(&walk->in, 0, more);
++ scatterwalk_done(&walk->out, 1, more);
+
+- if (nbytes) {
++ if (more) {
+ crypto_yield(req->base.flags);
+ return ablkcipher_walk_next(req, walk);
+ }
+-
++ err = 0;
++finish:
++ walk->nbytes = 0;
+ if (walk->iv != req->info)
+ memcpy(req->info, walk->iv, tfm->crt_ablkcipher.ivsize);
+ kfree(walk->iv_buffer);
+-
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(ablkcipher_walk_done);
+diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
+index 01c0d4aa2563..77b5fa293f66 100644
+--- a/crypto/blkcipher.c
++++ b/crypto/blkcipher.c
+@@ -70,19 +70,18 @@ static inline u8 *blkcipher_get_spot(u8 *start, unsigned int len)
+ return max(start, end_page);
+ }
+
+-static inline unsigned int blkcipher_done_slow(struct blkcipher_walk *walk,
+- unsigned int bsize)
++static inline void blkcipher_done_slow(struct blkcipher_walk *walk,
++ unsigned int bsize)
+ {
+ u8 *addr;
+
+ addr = (u8 *)ALIGN((unsigned long)walk->buffer, walk->alignmask + 1);
+ addr = blkcipher_get_spot(addr, bsize);
+ scatterwalk_copychunks(addr, &walk->out, bsize, 1);
+- return bsize;
+ }
+
+-static inline unsigned int blkcipher_done_fast(struct blkcipher_walk *walk,
+- unsigned int n)
++static inline void blkcipher_done_fast(struct blkcipher_walk *walk,
++ unsigned int n)
+ {
+ if (walk->flags & BLKCIPHER_WALK_COPY) {
+ blkcipher_map_dst(walk);
+@@ -96,49 +95,48 @@ static inline unsigned int blkcipher_done_fast(struct blkcipher_walk *walk,
+
+ scatterwalk_advance(&walk->in, n);
+ scatterwalk_advance(&walk->out, n);
+-
+- return n;
+ }
+
+ int blkcipher_walk_done(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk, int err)
+ {
+- unsigned int nbytes = 0;
++ unsigned int n; /* bytes processed */
++ bool more;
+
+- if (likely(err >= 0)) {
+- unsigned int n = walk->nbytes - err;
++ if (unlikely(err < 0))
++ goto finish;
+
+- if (likely(!(walk->flags & BLKCIPHER_WALK_SLOW)))
+- n = blkcipher_done_fast(walk, n);
+- else if (WARN_ON(err)) {
+- err = -EINVAL;
+- goto err;
+- } else
+- n = blkcipher_done_slow(walk, n);
++ n = walk->nbytes - err;
++ walk->total -= n;
++ more = (walk->total != 0);
+
+- nbytes = walk->total - n;
+- err = 0;
++ if (likely(!(walk->flags & BLKCIPHER_WALK_SLOW))) {
++ blkcipher_done_fast(walk, n);
++ } else {
++ if (WARN_ON(err)) {
++ /* unexpected case; didn't process all bytes */
++ err = -EINVAL;
++ goto finish;
++ }
++ blkcipher_done_slow(walk, n);
+ }
+
+- scatterwalk_done(&walk->in, 0, nbytes);
+- scatterwalk_done(&walk->out, 1, nbytes);
++ scatterwalk_done(&walk->in, 0, more);
++ scatterwalk_done(&walk->out, 1, more);
+
+-err:
+- walk->total = nbytes;
+- walk->nbytes = nbytes;
+-
+- if (nbytes) {
++ if (more) {
+ crypto_yield(desc->flags);
+ return blkcipher_walk_next(desc, walk);
+ }
+-
++ err = 0;
++finish:
++ walk->nbytes = 0;
+ if (walk->iv != desc->info)
+ memcpy(desc->info, walk->iv, walk->ivsize);
+ if (walk->buffer != walk->page)
+ kfree(walk->buffer);
+ if (walk->page)
+ free_page((unsigned long)walk->page);
+-
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(blkcipher_walk_done);
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index 0fe2a2923ad0..5dc8407bdaa9 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -95,7 +95,7 @@ static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
+ return max(start, end_page);
+ }
+
+-static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
++static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+ {
+ u8 *addr;
+
+@@ -103,23 +103,24 @@ static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+ addr = skcipher_get_spot(addr, bsize);
+ scatterwalk_copychunks(addr, &walk->out, bsize,
+ (walk->flags & SKCIPHER_WALK_PHYS) ? 2 : 1);
+- return 0;
+ }
+
+ int skcipher_walk_done(struct skcipher_walk *walk, int err)
+ {
+- unsigned int n = walk->nbytes - err;
+- unsigned int nbytes;
+-
+- nbytes = walk->total - n;
+-
+- if (unlikely(err < 0)) {
+- nbytes = 0;
+- n = 0;
+- } else if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
+- SKCIPHER_WALK_SLOW |
+- SKCIPHER_WALK_COPY |
+- SKCIPHER_WALK_DIFF)))) {
++ unsigned int n; /* bytes processed */
++ bool more;
++
++ if (unlikely(err < 0))
++ goto finish;
++
++ n = walk->nbytes - err;
++ walk->total -= n;
++ more = (walk->total != 0);
++
++ if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
++ SKCIPHER_WALK_SLOW |
++ SKCIPHER_WALK_COPY |
++ SKCIPHER_WALK_DIFF)))) {
+ unmap_src:
+ skcipher_unmap_src(walk);
+ } else if (walk->flags & SKCIPHER_WALK_DIFF) {
+@@ -131,28 +132,28 @@ unmap_src:
+ skcipher_unmap_dst(walk);
+ } else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
+ if (WARN_ON(err)) {
++ /* unexpected case; didn't process all bytes */
+ err = -EINVAL;
+- nbytes = 0;
+- } else
+- n = skcipher_done_slow(walk, n);
++ goto finish;
++ }
++ skcipher_done_slow(walk, n);
++ goto already_advanced;
+ }
+
+- if (err > 0)
+- err = 0;
+-
+- walk->total = nbytes;
+- walk->nbytes = nbytes;
+-
+ scatterwalk_advance(&walk->in, n);
+ scatterwalk_advance(&walk->out, n);
+- scatterwalk_done(&walk->in, 0, nbytes);
+- scatterwalk_done(&walk->out, 1, nbytes);
++already_advanced:
++ scatterwalk_done(&walk->in, 0, more);
++ scatterwalk_done(&walk->out, 1, more);
+
+- if (nbytes) {
++ if (more) {
+ crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
+ CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+ return skcipher_walk_next(walk);
+ }
++ err = 0;
++finish:
++ walk->nbytes = 0;
+
+ /* Short-circuit for the common/fast path. */
+ if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
+@@ -399,7 +400,7 @@ static int skcipher_copy_iv(struct skcipher_walk *walk)
+ unsigned size;
+ u8 *iv;
+
+- aligned_bs = ALIGN(bs, alignmask);
++ aligned_bs = ALIGN(bs, alignmask + 1);
+
+ /* Minimum size to align buffer by alignmask. */
+ size = alignmask & ~a;
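
The one-character fix above is a classic mask-versus-alignment confusion (generic sketch below): ALIGN() takes the alignment itself, which must be a power of two, while alignmask is alignment minus one; feeding the mask in produces a result that is not even aligned.

#include <stdio.h>

/* Round x up to a multiple of a; a must be a power of two. */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

int main(void)
{
        unsigned long bs = 15, alignmask = 7;   /* 8-byte alignment */

        printf("ALIGN(bs, alignmask)     = %lu (not 8-byte aligned)\n",
               ALIGN(bs, alignmask));
        printf("ALIGN(bs, alignmask + 1) = %lu (correct)\n",
               ALIGN(bs, alignmask + 1));
        return 0;
}
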
+diff --git a/crypto/vmac.c b/crypto/vmac.c
+index df76a816cfb2..bb2fc787d615 100644
+--- a/crypto/vmac.c
++++ b/crypto/vmac.c
+@@ -1,6 +1,10 @@
+ /*
+- * Modified to interface to the Linux kernel
++ * VMAC: Message Authentication Code using Universal Hashing
++ *
++ * Reference: https://tools.ietf.org/html/draft-krovetz-vmac-01
++ *
+ * Copyright (c) 2009, Intel Corporation.
++ * Copyright (c) 2018, Google Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+@@ -16,14 +20,15 @@
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+-/* --------------------------------------------------------------------------
+- * VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
+- * This implementation is herby placed in the public domain.
+- * The authors offers no warranty. Use at your own risk.
+- * Please send bug reports to the authors.
+- * Last modified: 17 APR 08, 1700 PDT
+- * ----------------------------------------------------------------------- */
++/*
++ * Derived from:
++ * VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
++ * This implementation is herby placed in the public domain.
++ * The authors offers no warranty. Use at your own risk.
++ * Last modified: 17 APR 08, 1700 PDT
++ */
+
++#include <asm/unaligned.h>
+ #include <linux/init.h>
+ #include <linux/types.h>
+ #include <linux/crypto.h>
+@@ -31,9 +36,35 @@
+ #include <linux/scatterlist.h>
+ #include <asm/byteorder.h>
+ #include <crypto/scatterwalk.h>
+-#include <crypto/vmac.h>
+ #include <crypto/internal/hash.h>
+
++/*
++ * User definable settings.
++ */
++#define VMAC_TAG_LEN 64
++#define VMAC_KEY_SIZE 128/* Must be 128, 192 or 256 */
++#define VMAC_KEY_LEN (VMAC_KEY_SIZE/8)
++#define VMAC_NHBYTES 128/* Must 2^i for any 3 < i < 13 Standard = 128*/
++
++/* per-transform (per-key) context */
++struct vmac_tfm_ctx {
++ struct crypto_cipher *cipher;
++ u64 nhkey[(VMAC_NHBYTES/8)+2*(VMAC_TAG_LEN/64-1)];
++ u64 polykey[2*VMAC_TAG_LEN/64];
++ u64 l3key[2*VMAC_TAG_LEN/64];
++};
++
++/* per-request context */
++struct vmac_desc_ctx {
++ union {
++ u8 partial[VMAC_NHBYTES]; /* partial block */
++ __le64 partial_words[VMAC_NHBYTES / 8];
++ };
++ unsigned int partial_size; /* size of the partial block */
++ bool first_block_processed;
++ u64 polytmp[2*VMAC_TAG_LEN/64]; /* running total of L2-hash */
++};
++
+ /*
+ * Constants and masks
+ */
+@@ -318,13 +349,6 @@ static void poly_step_func(u64 *ahi, u64 *alo,
+ } while (0)
+ #endif
+
+-static void vhash_abort(struct vmac_ctx *ctx)
+-{
+- ctx->polytmp[0] = ctx->polykey[0] ;
+- ctx->polytmp[1] = ctx->polykey[1] ;
+- ctx->first_block_processed = 0;
+-}
+-
+ static u64 l3hash(u64 p1, u64 p2, u64 k1, u64 k2, u64 len)
+ {
+ u64 rh, rl, t, z = 0;
+@@ -364,280 +388,209 @@ static u64 l3hash(u64 p1, u64 p2, u64 k1, u64 k2, u64 len)
+ return rl;
+ }
+
+-static void vhash_update(const unsigned char *m,
+- unsigned int mbytes, /* Pos multiple of VMAC_NHBYTES */
+- struct vmac_ctx *ctx)
++/* L1 and L2-hash one or more VMAC_NHBYTES-byte blocks */
++static void vhash_blocks(const struct vmac_tfm_ctx *tctx,
++ struct vmac_desc_ctx *dctx,
++ const __le64 *mptr, unsigned int blocks)
+ {
+- u64 rh, rl, *mptr;
+- const u64 *kptr = (u64 *)ctx->nhkey;
+- int i;
+- u64 ch, cl;
+- u64 pkh = ctx->polykey[0];
+- u64 pkl = ctx->polykey[1];
+-
+- if (!mbytes)
+- return;
+-
+- BUG_ON(mbytes % VMAC_NHBYTES);
+-
+- mptr = (u64 *)m;
+- i = mbytes / VMAC_NHBYTES; /* Must be non-zero */
+-
+- ch = ctx->polytmp[0];
+- cl = ctx->polytmp[1];
+-
+- if (!ctx->first_block_processed) {
+- ctx->first_block_processed = 1;
++ const u64 *kptr = tctx->nhkey;
++ const u64 pkh = tctx->polykey[0];
++ const u64 pkl = tctx->polykey[1];
++ u64 ch = dctx->polytmp[0];
++ u64 cl = dctx->polytmp[1];
++ u64 rh, rl;
++
++ if (!dctx->first_block_processed) {
++ dctx->first_block_processed = true;
+ nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+ rh &= m62;
+ ADD128(ch, cl, rh, rl);
+ mptr += (VMAC_NHBYTES/sizeof(u64));
+- i--;
++ blocks--;
+ }
+
+- while (i--) {
++ while (blocks--) {
+ nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+ rh &= m62;
+ poly_step(ch, cl, pkh, pkl, rh, rl);
+ mptr += (VMAC_NHBYTES/sizeof(u64));
+ }
+
+- ctx->polytmp[0] = ch;
+- ctx->polytmp[1] = cl;
++ dctx->polytmp[0] = ch;
++ dctx->polytmp[1] = cl;
+ }
+
+-static u64 vhash(unsigned char m[], unsigned int mbytes,
+- u64 *tagl, struct vmac_ctx *ctx)
++static int vmac_setkey(struct crypto_shash *tfm,
++ const u8 *key, unsigned int keylen)
+ {
+- u64 rh, rl, *mptr;
+- const u64 *kptr = (u64 *)ctx->nhkey;
+- int i, remaining;
+- u64 ch, cl;
+- u64 pkh = ctx->polykey[0];
+- u64 pkl = ctx->polykey[1];
+-
+- mptr = (u64 *)m;
+- i = mbytes / VMAC_NHBYTES;
+- remaining = mbytes % VMAC_NHBYTES;
+-
+- if (ctx->first_block_processed) {
+- ch = ctx->polytmp[0];
+- cl = ctx->polytmp[1];
+- } else if (i) {
+- nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, ch, cl);
+- ch &= m62;
+- ADD128(ch, cl, pkh, pkl);
+- mptr += (VMAC_NHBYTES/sizeof(u64));
+- i--;
+- } else if (remaining) {
+- nh_16(mptr, kptr, 2*((remaining+15)/16), ch, cl);
+- ch &= m62;
+- ADD128(ch, cl, pkh, pkl);
+- mptr += (VMAC_NHBYTES/sizeof(u64));
+- goto do_l3;
+- } else {/* Empty String */
+- ch = pkh; cl = pkl;
+- goto do_l3;
+- }
+-
+- while (i--) {
+- nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+- rh &= m62;
+- poly_step(ch, cl, pkh, pkl, rh, rl);
+- mptr += (VMAC_NHBYTES/sizeof(u64));
+- }
+- if (remaining) {
+- nh_16(mptr, kptr, 2*((remaining+15)/16), rh, rl);
+- rh &= m62;
+- poly_step(ch, cl, pkh, pkl, rh, rl);
+- }
+-
+-do_l3:
+- vhash_abort(ctx);
+- remaining *= 8;
+- return l3hash(ch, cl, ctx->l3key[0], ctx->l3key[1], remaining);
+-}
++ struct vmac_tfm_ctx *tctx = crypto_shash_ctx(tfm);
++ __be64 out[2];
++ u8 in[16] = { 0 };
++ unsigned int i;
++ int err;
+
+-static u64 vmac(unsigned char m[], unsigned int mbytes,
+- const unsigned char n[16], u64 *tagl,
+- struct vmac_ctx_t *ctx)
+-{
+- u64 *in_n, *out_p;
+- u64 p, h;
+- int i;
+-
+- in_n = ctx->__vmac_ctx.cached_nonce;
+- out_p = ctx->__vmac_ctx.cached_aes;
+-
+- i = n[15] & 1;
+- if ((*(u64 *)(n+8) != in_n[1]) || (*(u64 *)(n) != in_n[0])) {
+- in_n[0] = *(u64 *)(n);
+- in_n[1] = *(u64 *)(n+8);
+- ((unsigned char *)in_n)[15] &= 0xFE;
+- crypto_cipher_encrypt_one(ctx->child,
+- (unsigned char *)out_p, (unsigned char *)in_n);
+-
+- ((unsigned char *)in_n)[15] |= (unsigned char)(1-i);
++ if (keylen != VMAC_KEY_LEN) {
++ crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
++ return -EINVAL;
+ }
+- p = be64_to_cpup(out_p + i);
+- h = vhash(m, mbytes, (u64 *)0, &ctx->__vmac_ctx);
+- return le64_to_cpu(p + h);
+-}
+
+-static int vmac_set_key(unsigned char user_key[], struct vmac_ctx_t *ctx)
+-{
+- u64 in[2] = {0}, out[2];
+- unsigned i;
+- int err = 0;
+-
+- err = crypto_cipher_setkey(ctx->child, user_key, VMAC_KEY_LEN);
++ err = crypto_cipher_setkey(tctx->cipher, key, keylen);
+ if (err)
+ return err;
+
+ /* Fill nh key */
+- ((unsigned char *)in)[0] = 0x80;
+- for (i = 0; i < sizeof(ctx->__vmac_ctx.nhkey)/8; i += 2) {
+- crypto_cipher_encrypt_one(ctx->child,
+- (unsigned char *)out, (unsigned char *)in);
+- ctx->__vmac_ctx.nhkey[i] = be64_to_cpup(out);
+- ctx->__vmac_ctx.nhkey[i+1] = be64_to_cpup(out+1);
+- ((unsigned char *)in)[15] += 1;
++ in[0] = 0x80;
++ for (i = 0; i < ARRAY_SIZE(tctx->nhkey); i += 2) {
++ crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++ tctx->nhkey[i] = be64_to_cpu(out[0]);
++ tctx->nhkey[i+1] = be64_to_cpu(out[1]);
++ in[15]++;
+ }
+
+ /* Fill poly key */
+- ((unsigned char *)in)[0] = 0xC0;
+- in[1] = 0;
+- for (i = 0; i < sizeof(ctx->__vmac_ctx.polykey)/8; i += 2) {
+- crypto_cipher_encrypt_one(ctx->child,
+- (unsigned char *)out, (unsigned char *)in);
+- ctx->__vmac_ctx.polytmp[i] =
+- ctx->__vmac_ctx.polykey[i] =
+- be64_to_cpup(out) & mpoly;
+- ctx->__vmac_ctx.polytmp[i+1] =
+- ctx->__vmac_ctx.polykey[i+1] =
+- be64_to_cpup(out+1) & mpoly;
+- ((unsigned char *)in)[15] += 1;
++ in[0] = 0xC0;
++ in[15] = 0;
++ for (i = 0; i < ARRAY_SIZE(tctx->polykey); i += 2) {
++ crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++ tctx->polykey[i] = be64_to_cpu(out[0]) & mpoly;
++ tctx->polykey[i+1] = be64_to_cpu(out[1]) & mpoly;
++ in[15]++;
+ }
+
+ /* Fill ip key */
+- ((unsigned char *)in)[0] = 0xE0;
+- in[1] = 0;
+- for (i = 0; i < sizeof(ctx->__vmac_ctx.l3key)/8; i += 2) {
++ in[0] = 0xE0;
++ in[15] = 0;
++ for (i = 0; i < ARRAY_SIZE(tctx->l3key); i += 2) {
+ do {
+- crypto_cipher_encrypt_one(ctx->child,
+- (unsigned char *)out, (unsigned char *)in);
+- ctx->__vmac_ctx.l3key[i] = be64_to_cpup(out);
+- ctx->__vmac_ctx.l3key[i+1] = be64_to_cpup(out+1);
+- ((unsigned char *)in)[15] += 1;
+- } while (ctx->__vmac_ctx.l3key[i] >= p64
+- || ctx->__vmac_ctx.l3key[i+1] >= p64);
++ crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++ tctx->l3key[i] = be64_to_cpu(out[0]);
++ tctx->l3key[i+1] = be64_to_cpu(out[1]);
++ in[15]++;
++ } while (tctx->l3key[i] >= p64 || tctx->l3key[i+1] >= p64);
+ }
+
+- /* Invalidate nonce/aes cache and reset other elements */
+- ctx->__vmac_ctx.cached_nonce[0] = (u64)-1; /* Ensure illegal nonce */
+- ctx->__vmac_ctx.cached_nonce[1] = (u64)0; /* Ensure illegal nonce */
+- ctx->__vmac_ctx.first_block_processed = 0;
+-
+- return err;
++ return 0;
+ }
+
+-static int vmac_setkey(struct crypto_shash *parent,
+- const u8 *key, unsigned int keylen)
++static int vmac_init(struct shash_desc *desc)
+ {
+- struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
++ const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++ struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
+
+- if (keylen != VMAC_KEY_LEN) {
+- crypto_shash_set_flags(parent, CRYPTO_TFM_RES_BAD_KEY_LEN);
+- return -EINVAL;
+- }
+-
+- return vmac_set_key((u8 *)key, ctx);
+-}
+-
+-static int vmac_init(struct shash_desc *pdesc)
+-{
++ dctx->partial_size = 0;
++ dctx->first_block_processed = false;
++ memcpy(dctx->polytmp, tctx->polykey, sizeof(dctx->polytmp));
+ return 0;
+ }
+
+-static int vmac_update(struct shash_desc *pdesc, const u8 *p,
+- unsigned int len)
++static int vmac_update(struct shash_desc *desc, const u8 *p, unsigned int len)
+ {
+- struct crypto_shash *parent = pdesc->tfm;
+- struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
+- int expand;
+- int min;
+-
+- expand = VMAC_NHBYTES - ctx->partial_size > 0 ?
+- VMAC_NHBYTES - ctx->partial_size : 0;
+-
+- min = len < expand ? len : expand;
+-
+- memcpy(ctx->partial + ctx->partial_size, p, min);
+- ctx->partial_size += min;
+-
+- if (len < expand)
+- return 0;
+-
+- vhash_update(ctx->partial, VMAC_NHBYTES, &ctx->__vmac_ctx);
+- ctx->partial_size = 0;
+-
+- len -= expand;
+- p += expand;
++ const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++ struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
++ unsigned int n;
++
++ if (dctx->partial_size) {
++ n = min(len, VMAC_NHBYTES - dctx->partial_size);
++ memcpy(&dctx->partial[dctx->partial_size], p, n);
++ dctx->partial_size += n;
++ p += n;
++ len -= n;
++ if (dctx->partial_size == VMAC_NHBYTES) {
++ vhash_blocks(tctx, dctx, dctx->partial_words, 1);
++ dctx->partial_size = 0;
++ }
++ }
+
+- if (len % VMAC_NHBYTES) {
+- memcpy(ctx->partial, p + len - (len % VMAC_NHBYTES),
+- len % VMAC_NHBYTES);
+- ctx->partial_size = len % VMAC_NHBYTES;
++ if (len >= VMAC_NHBYTES) {
++ n = round_down(len, VMAC_NHBYTES);
++ /* TODO: 'p' may be misaligned here */
++ vhash_blocks(tctx, dctx, (const __le64 *)p, n / VMAC_NHBYTES);
++ p += n;
++ len -= n;
+ }
+
+- vhash_update(p, len - len % VMAC_NHBYTES, &ctx->__vmac_ctx);
++ if (len) {
++ memcpy(dctx->partial, p, len);
++ dctx->partial_size = len;
++ }
+
+ return 0;
+ }
+
+-static int vmac_final(struct shash_desc *pdesc, u8 *out)
++static u64 vhash_final(const struct vmac_tfm_ctx *tctx,
++ struct vmac_desc_ctx *dctx)
+ {
+- struct crypto_shash *parent = pdesc->tfm;
+- struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
+- vmac_t mac;
+- u8 nonce[16] = {};
+-
+- /* vmac() ends up accessing outside the array bounds that
+- * we specify. In appears to access up to the next 2-word
+- * boundary. We'll just be uber cautious and zero the
+- * unwritten bytes in the buffer.
+- */
+- if (ctx->partial_size) {
+- memset(ctx->partial + ctx->partial_size, 0,
+- VMAC_NHBYTES - ctx->partial_size);
++ unsigned int partial = dctx->partial_size;
++ u64 ch = dctx->polytmp[0];
++ u64 cl = dctx->polytmp[1];
++
++ /* L1 and L2-hash the final block if needed */
++ if (partial) {
++ /* Zero-pad to next 128-bit boundary */
++ unsigned int n = round_up(partial, 16);
++ u64 rh, rl;
++
++ memset(&dctx->partial[partial], 0, n - partial);
++ nh_16(dctx->partial_words, tctx->nhkey, n / 8, rh, rl);
++ rh &= m62;
++ if (dctx->first_block_processed)
++ poly_step(ch, cl, tctx->polykey[0], tctx->polykey[1],
++ rh, rl);
++ else
++ ADD128(ch, cl, rh, rl);
+ }
+- mac = vmac(ctx->partial, ctx->partial_size, nonce, NULL, ctx);
+- memcpy(out, &mac, sizeof(vmac_t));
+- memzero_explicit(&mac, sizeof(vmac_t));
+- memset(&ctx->__vmac_ctx, 0, sizeof(struct vmac_ctx));
+- ctx->partial_size = 0;
++
++ /* L3-hash the 128-bit output of L2-hash */
++ return l3hash(ch, cl, tctx->l3key[0], tctx->l3key[1], partial * 8);
++}
++
++static int vmac_final(struct shash_desc *desc, u8 *out)
++{
++ const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++ struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
++ static const u8 nonce[16] = {}; /* TODO: this is insecure */
++ union {
++ u8 bytes[16];
++ __be64 pads[2];
++ } block;
++ int index;
++ u64 hash, pad;
++
++ /* Finish calculating the VHASH of the message */
++ hash = vhash_final(tctx, dctx);
++
++ /* Generate pseudorandom pad by encrypting the nonce */
++ memcpy(&block, nonce, 16);
++ index = block.bytes[15] & 1;
++ block.bytes[15] &= ~1;
++ crypto_cipher_encrypt_one(tctx->cipher, block.bytes, block.bytes);
++ pad = be64_to_cpu(block.pads[index]);
++
++ /* The VMAC is the sum of VHASH and the pseudorandom pad */
++ put_unaligned_le64(hash + pad, out);
+ return 0;
+ }
+
+ static int vmac_init_tfm(struct crypto_tfm *tfm)
+ {
+- struct crypto_cipher *cipher;
+- struct crypto_instance *inst = (void *)tfm->__crt_alg;
++ struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
+ struct crypto_spawn *spawn = crypto_instance_ctx(inst);
+- struct vmac_ctx_t *ctx = crypto_tfm_ctx(tfm);
++ struct vmac_tfm_ctx *tctx = crypto_tfm_ctx(tfm);
++ struct crypto_cipher *cipher;
+
+ cipher = crypto_spawn_cipher(spawn);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+
+- ctx->child = cipher;
++ tctx->cipher = cipher;
+ return 0;
+ }
+
+ static void vmac_exit_tfm(struct crypto_tfm *tfm)
+ {
+- struct vmac_ctx_t *ctx = crypto_tfm_ctx(tfm);
+- crypto_free_cipher(ctx->child);
++ struct vmac_tfm_ctx *tctx = crypto_tfm_ctx(tfm);
++
++ crypto_free_cipher(tctx->cipher);
+ }
+
+ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+@@ -655,6 +608,10 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+ if (IS_ERR(alg))
+ return PTR_ERR(alg);
+
++ err = -EINVAL;
++ if (alg->cra_blocksize != 16)
++ goto out_put_alg;
++
+ inst = shash_alloc_instance("vmac", alg);
+ err = PTR_ERR(inst);
+ if (IS_ERR(inst))
+@@ -670,11 +627,12 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+ inst->alg.base.cra_blocksize = alg->cra_blocksize;
+ inst->alg.base.cra_alignmask = alg->cra_alignmask;
+
+- inst->alg.digestsize = sizeof(vmac_t);
+- inst->alg.base.cra_ctxsize = sizeof(struct vmac_ctx_t);
++ inst->alg.base.cra_ctxsize = sizeof(struct vmac_tfm_ctx);
+ inst->alg.base.cra_init = vmac_init_tfm;
+ inst->alg.base.cra_exit = vmac_exit_tfm;
+
++ inst->alg.descsize = sizeof(struct vmac_desc_ctx);
++ inst->alg.digestsize = VMAC_TAG_LEN / 8;
+ inst->alg.init = vmac_init;
+ inst->alg.update = vmac_update;
+ inst->alg.final = vmac_final;
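The rewritten vmac_final() above computes the 64-bit tag as VHASH(message)
plus a pseudorandom pad obtained by encrypting the all-zero nonce (hence the
TODO), with the low bit of the last nonce byte selecting which half of the
ciphertext block becomes the pad. A minimal standalone sketch of that
construction — not kernel code — with a caller-supplied cipher callback
standing in for crypto_cipher_encrypt_one():

#include <stdint.h>
#include <string.h>

/* hypothetical 16-byte block cipher, keyed elsewhere */
typedef void (*block_cipher_fn)(uint8_t out[16], const uint8_t in[16]);

static uint64_t load_be64(const uint8_t *p)
{
	uint64_t v = 0;

	for (int i = 0; i < 8; i++)
		v = (v << 8) | p[i];
	return v;
}

/* tag = (VHASH(message) + PRF(nonce)) mod 2^64, as in the new vmac_final() */
uint64_t vmac_tag(uint64_t vhash, const uint8_t nonce[16], block_cipher_fn enc)
{
	uint8_t block[16];
	int index;

	memcpy(block, nonce, sizeof(block));
	index = block[15] & 1;	/* low bit picks the ciphertext half */
	block[15] &= ~1;	/* encrypt the nonce with that bit cleared */
	enc(block, block);
	return vhash + load_be64(block + 8 * index);
}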
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index d95ec526587a..2867612eaf61 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -78,8 +78,6 @@ done:
+
+ static void sev_wait_cmd_ioc(struct psp_device *psp, unsigned int *reg)
+ {
+- psp->sev_int_rcvd = 0;
+-
+ wait_event(psp->sev_int_queue, psp->sev_int_rcvd);
+ *reg = ioread32(psp->io_regs + PSP_CMDRESP);
+ }
+@@ -140,6 +138,8 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ iowrite32(phys_lsb, psp->io_regs + PSP_CMDBUFF_ADDR_LO);
+ iowrite32(phys_msb, psp->io_regs + PSP_CMDBUFF_ADDR_HI);
+
++ psp->sev_int_rcvd = 0;
++
+ reg = cmd;
+ reg <<= PSP_CMDRESP_CMD_SHIFT;
+ reg |= PSP_CMDRESP_IOC;
+@@ -732,6 +732,9 @@ void psp_dev_destroy(struct sp_device *sp)
+ {
+ struct psp_device *psp = sp->psp_data;
+
++ if (!psp)
++ return;
++
+ if (psp->sev_misc)
+ kref_put(&misc_dev->refcount, sev_exit);
+
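The psp-dev hunks above are an ordering fix: sev_int_rcvd must be cleared
before the command is written to the device, not inside the wait helper,
otherwise a completion interrupt firing in that window is erased and
wait_event() sleeps forever. A userspace pthread analogue of the corrected
ordering, all names illustrative:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool done;

void completion_irq(void)		/* models the command-complete interrupt */
{
	pthread_mutex_lock(&lock);
	done = true;
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
}

void do_cmd(void (*ring_doorbell)(void))
{
	pthread_mutex_lock(&lock);
	done = false;			/* clear BEFORE triggering the device */
	ring_doorbell();		/* hypothetical; ends in completion_irq() */
	while (!done)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}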
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index df98f7afe645..86e568a6d217 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -554,34 +554,82 @@ static void cc_setup_cipher_data(struct crypto_tfm *tfm,
+ }
+ }
+
++/*
++ * Update a CTR-AES 128 bit counter
++ */
++static void cc_update_ctr(u8 *ctr, unsigned int increment)
++{
++ if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
++ IS_ALIGNED((unsigned long)ctr, 8)) {
++
++ __be64 *high_be = (__be64 *)ctr;
++ __be64 *low_be = high_be + 1;
++ u64 orig_low = __be64_to_cpu(*low_be);
++ u64 new_low = orig_low + (u64)increment;
++
++ *low_be = __cpu_to_be64(new_low);
++
++ if (new_low < orig_low)
++ *high_be = __cpu_to_be64(__be64_to_cpu(*high_be) + 1);
++ } else {
++ u8 *pos = (ctr + AES_BLOCK_SIZE);
++ u8 val;
++ unsigned int size;
++
++ for (; increment; increment--)
++ for (size = AES_BLOCK_SIZE; size; size--) {
++ val = *--pos + 1;
++ *pos = val;
++ if (val)
++ break;
++ }
++ }
++}
++
+ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
+ {
+ struct skcipher_request *req = (struct skcipher_request *)cc_req;
+ struct scatterlist *dst = req->dst;
+ struct scatterlist *src = req->src;
+ struct cipher_req_ctx *req_ctx = skcipher_request_ctx(req);
+- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+- unsigned int ivsize = crypto_skcipher_ivsize(tfm);
++ struct crypto_skcipher *sk_tfm = crypto_skcipher_reqtfm(req);
++ struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
++ struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
++ unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
++ unsigned int len;
+
+- cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
+- kzfree(req_ctx->iv);
++ switch (ctx_p->cipher_mode) {
++ case DRV_CIPHER_CBC:
++ /*
++ * The crypto API expects us to set the req->iv to the last
++ * ciphertext block. For encrypt, simply copy from the result.
++ * For decrypt, we must copy from a saved buffer since this
++ * could be an in-place decryption operation and the src is
++ * lost by this point.
++ */
++ if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
++ memcpy(req->iv, req_ctx->backup_info, ivsize);
++ kzfree(req_ctx->backup_info);
++ } else if (!err) {
++ len = req->cryptlen - ivsize;
++ scatterwalk_map_and_copy(req->iv, req->dst, len,
++ ivsize, 0);
++ }
++ break;
+
+- /*
+- * The crypto API expects us to set the req->iv to the last
+- * ciphertext block. For encrypt, simply copy from the result.
+- * For decrypt, we must copy from a saved buffer since this
+- * could be an in-place decryption operation and the src is
+- * lost by this point.
+- */
+- if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
+- memcpy(req->iv, req_ctx->backup_info, ivsize);
+- kzfree(req_ctx->backup_info);
+- } else if (!err) {
+- scatterwalk_map_and_copy(req->iv, req->dst,
+- (req->cryptlen - ivsize),
+- ivsize, 0);
++ case DRV_CIPHER_CTR:
++ /* Compute the counter of the last block */
++ len = ALIGN(req->cryptlen, AES_BLOCK_SIZE) / AES_BLOCK_SIZE;
++ cc_update_ctr((u8 *)req->iv, len);
++ break;
++
++ default:
++ break;
+ }
+
++ cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
++ kzfree(req_ctx->iv);
++
+ skcipher_request_complete(req, err);
+ }
+
+@@ -713,20 +761,29 @@ static int cc_cipher_encrypt(struct skcipher_request *req)
+ static int cc_cipher_decrypt(struct skcipher_request *req)
+ {
+ struct crypto_skcipher *sk_tfm = crypto_skcipher_reqtfm(req);
++ struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
++ struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ struct cipher_req_ctx *req_ctx = skcipher_request_ctx(req);
+ unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
+ gfp_t flags = cc_gfp_flags(&req->base);
++ unsigned int len;
+
+- /*
+- * Allocate and save the last IV sized bytes of the source, which will
+- * be lost in case of in-place decryption and might be needed for CTS.
+- */
+- req_ctx->backup_info = kmalloc(ivsize, flags);
+- if (!req_ctx->backup_info)
+- return -ENOMEM;
++ if (ctx_p->cipher_mode == DRV_CIPHER_CBC) {
++
++ /* Allocate and save the last IV sized bytes of the source,
++ * which will be lost in case of in-place decryption.
++ */
++ req_ctx->backup_info = kzalloc(ivsize, flags);
++ if (!req_ctx->backup_info)
++ return -ENOMEM;
++
++ len = req->cryptlen - ivsize;
++ scatterwalk_map_and_copy(req_ctx->backup_info, req->src, len,
++ ivsize, 0);
++ } else {
++ req_ctx->backup_info = NULL;
++ }
+
+- scatterwalk_map_and_copy(req_ctx->backup_info, req->src,
+- (req->cryptlen - ivsize), ivsize, 0);
+ req_ctx->is_giv = false;
+
+ return cc_cipher_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
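The cc_update_ctr() helper added above advances the CTR-mode IV as a 128-bit
big-endian counter: add the increment to the low 64 bits and carry into the
high 64 bits on unsigned wrap-around. A standalone sketch of the aligned fast
path; __builtin_bswap64 (GCC/Clang) and a little-endian host are assumed:

#include <stdint.h>
#include <string.h>

/* ctr is a 16-byte big-endian counter, as in an AES-CTR IV */
void ctr128_add(uint8_t ctr[16], uint64_t increment)
{
	uint64_t hi, lo, new_lo;

	memcpy(&hi, ctr, 8);
	memcpy(&lo, ctr + 8, 8);
	hi = __builtin_bswap64(hi);	/* big-endian -> host order */
	lo = __builtin_bswap64(lo);

	new_lo = lo + increment;
	if (new_lo < lo)		/* unsigned wrap-around => carry out */
		hi++;

	hi = __builtin_bswap64(hi);	/* host order -> big-endian */
	new_lo = __builtin_bswap64(new_lo);
	memcpy(ctr, &hi, 8);
	memcpy(ctr + 8, &new_lo, 8);
}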
+diff --git a/drivers/crypto/ccree/cc_hash.c b/drivers/crypto/ccree/cc_hash.c
+index 96ff777474d7..e4ebde05a8a0 100644
+--- a/drivers/crypto/ccree/cc_hash.c
++++ b/drivers/crypto/ccree/cc_hash.c
+@@ -602,66 +602,7 @@ static int cc_hash_update(struct ahash_request *req)
+ return rc;
+ }
+
+-static int cc_hash_finup(struct ahash_request *req)
+-{
+- struct ahash_req_ctx *state = ahash_request_ctx(req);
+- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+- struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+- u32 digestsize = crypto_ahash_digestsize(tfm);
+- struct scatterlist *src = req->src;
+- unsigned int nbytes = req->nbytes;
+- u8 *result = req->result;
+- struct device *dev = drvdata_to_dev(ctx->drvdata);
+- bool is_hmac = ctx->is_hmac;
+- struct cc_crypto_req cc_req = {};
+- struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
+- unsigned int idx = 0;
+- int rc;
+- gfp_t flags = cc_gfp_flags(&req->base);
+-
+- dev_dbg(dev, "===== %s-finup (%d) ====\n", is_hmac ? "hmac" : "hash",
+- nbytes);
+-
+- if (cc_map_req(dev, state, ctx)) {
+- dev_err(dev, "map_ahash_source() failed\n");
+- return -EINVAL;
+- }
+-
+- if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, 1,
+- flags)) {
+- dev_err(dev, "map_ahash_request_final() failed\n");
+- cc_unmap_req(dev, state, ctx);
+- return -ENOMEM;
+- }
+- if (cc_map_result(dev, state, digestsize)) {
+- dev_err(dev, "map_ahash_digest() failed\n");
+- cc_unmap_hash_request(dev, state, src, true);
+- cc_unmap_req(dev, state, ctx);
+- return -ENOMEM;
+- }
+-
+- /* Setup request structure */
+- cc_req.user_cb = cc_hash_complete;
+- cc_req.user_arg = req;
+-
+- idx = cc_restore_hash(desc, ctx, state, idx);
+-
+- if (is_hmac)
+- idx = cc_fin_hmac(desc, req, idx);
+-
+- idx = cc_fin_result(desc, req, idx);
+-
+- rc = cc_send_request(ctx->drvdata, &cc_req, desc, idx, &req->base);
+- if (rc != -EINPROGRESS && rc != -EBUSY) {
+- dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+- cc_unmap_hash_request(dev, state, src, true);
+- cc_unmap_result(dev, state, digestsize, result);
+- cc_unmap_req(dev, state, ctx);
+- }
+- return rc;
+-}
+-
+-static int cc_hash_final(struct ahash_request *req)
++static int cc_do_finup(struct ahash_request *req, bool update)
+ {
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+@@ -678,21 +619,20 @@ static int cc_hash_final(struct ahash_request *req)
+ int rc;
+ gfp_t flags = cc_gfp_flags(&req->base);
+
+- dev_dbg(dev, "===== %s-final (%d) ====\n", is_hmac ? "hmac" : "hash",
+- nbytes);
++ dev_dbg(dev, "===== %s-%s (%d) ====\n", is_hmac ? "hmac" : "hash",
++ update ? "finup" : "final", nbytes);
+
+ if (cc_map_req(dev, state, ctx)) {
+ dev_err(dev, "map_ahash_source() failed\n");
+ return -EINVAL;
+ }
+
+- if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, 0,
++ if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, update,
+ flags)) {
+ dev_err(dev, "map_ahash_request_final() failed\n");
+ cc_unmap_req(dev, state, ctx);
+ return -ENOMEM;
+ }
+-
+ if (cc_map_result(dev, state, digestsize)) {
+ dev_err(dev, "map_ahash_digest() failed\n");
+ cc_unmap_hash_request(dev, state, src, true);
+@@ -706,7 +646,7 @@ static int cc_hash_final(struct ahash_request *req)
+
+ idx = cc_restore_hash(desc, ctx, state, idx);
+
+- /* "DO-PAD" must be enabled only when writing current length to HW */
++ /* Pad the hash */
+ hw_desc_init(&desc[idx]);
+ set_cipher_do(&desc[idx], DO_PAD);
+ set_cipher_mode(&desc[idx], ctx->hw_mode);
+@@ -731,6 +671,17 @@ static int cc_hash_final(struct ahash_request *req)
+ return rc;
+ }
+
++static int cc_hash_finup(struct ahash_request *req)
++{
++ return cc_do_finup(req, true);
++}
++
++
++static int cc_hash_final(struct ahash_request *req)
++{
++ return cc_do_finup(req, false);
++}
++
+ static int cc_hash_init(struct ahash_request *req)
+ {
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
+index 26ca0276b503..a75cb371cd19 100644
+--- a/include/asm-generic/pgtable.h
++++ b/include/asm-generic/pgtable.h
+@@ -1019,8 +1019,8 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot);
+ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot);
+ int pud_clear_huge(pud_t *pud);
+ int pmd_clear_huge(pmd_t *pmd);
+-int pud_free_pmd_page(pud_t *pud);
+-int pmd_free_pte_page(pmd_t *pmd);
++int pud_free_pmd_page(pud_t *pud, unsigned long addr);
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr);
+ #else /* !CONFIG_HAVE_ARCH_HUGE_VMAP */
+ static inline int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
+ {
+@@ -1046,11 +1046,11 @@ static inline int pmd_clear_huge(pmd_t *pmd)
+ {
+ return 0;
+ }
+-static inline int pud_free_pmd_page(pud_t *pud)
++static inline int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+ return 0;
+ }
+-static inline int pmd_free_pte_page(pmd_t *pmd)
++static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ return 0;
+ }
+diff --git a/include/crypto/vmac.h b/include/crypto/vmac.h
+deleted file mode 100644
+index 6b700c7b2fe1..000000000000
+--- a/include/crypto/vmac.h
++++ /dev/null
+@@ -1,63 +0,0 @@
+-/*
+- * Modified to interface to the Linux kernel
+- * Copyright (c) 2009, Intel Corporation.
+- *
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms and conditions of the GNU General Public License,
+- * version 2, as published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope it will be useful, but WITHOUT
+- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+- * more details.
+- *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+- * Place - Suite 330, Boston, MA 02111-1307 USA.
+- */
+-
+-#ifndef __CRYPTO_VMAC_H
+-#define __CRYPTO_VMAC_H
+-
+-/* --------------------------------------------------------------------------
+- * VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
+- * This implementation is herby placed in the public domain.
+- * The authors offers no warranty. Use at your own risk.
+- * Please send bug reports to the authors.
+- * Last modified: 17 APR 08, 1700 PDT
+- * ----------------------------------------------------------------------- */
+-
+-/*
+- * User definable settings.
+- */
+-#define VMAC_TAG_LEN 64
+-#define VMAC_KEY_SIZE 128/* Must be 128, 192 or 256 */
+-#define VMAC_KEY_LEN (VMAC_KEY_SIZE/8)
+-#define VMAC_NHBYTES 128/* Must 2^i for any 3 < i < 13 Standard = 128*/
+-
+-/*
+- * This implementation uses u32 and u64 as names for unsigned 32-
+- * and 64-bit integer types. These are defined in C99 stdint.h. The
+- * following may need adaptation if you are not running a C99 or
+- * Microsoft C environment.
+- */
+-struct vmac_ctx {
+- u64 nhkey[(VMAC_NHBYTES/8)+2*(VMAC_TAG_LEN/64-1)];
+- u64 polykey[2*VMAC_TAG_LEN/64];
+- u64 l3key[2*VMAC_TAG_LEN/64];
+- u64 polytmp[2*VMAC_TAG_LEN/64];
+- u64 cached_nonce[2];
+- u64 cached_aes[2];
+- int first_block_processed;
+-};
+-
+-typedef u64 vmac_t;
+-
+-struct vmac_ctx_t {
+- struct crypto_cipher *child;
+- struct vmac_ctx __vmac_ctx;
+- u8 partial[VMAC_NHBYTES]; /* partial block */
+- int partial_size; /* size of the partial block */
+-};
+-
+-#endif /* __CRYPTO_VMAC_H */
+diff --git a/lib/ioremap.c b/lib/ioremap.c
+index 54e5bbaa3200..517f5853ffed 100644
+--- a/lib/ioremap.c
++++ b/lib/ioremap.c
+@@ -92,7 +92,7 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
+ if (ioremap_pmd_enabled() &&
+ ((next - addr) == PMD_SIZE) &&
+ IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
+- pmd_free_pte_page(pmd)) {
++ pmd_free_pte_page(pmd, addr)) {
+ if (pmd_set_huge(pmd, phys_addr + addr, prot))
+ continue;
+ }
+@@ -119,7 +119,7 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
+ if (ioremap_pud_enabled() &&
+ ((next - addr) == PUD_SIZE) &&
+ IS_ALIGNED(phys_addr + addr, PUD_SIZE) &&
+- pud_free_pmd_page(pud)) {
++ pud_free_pmd_page(pud, addr)) {
+ if (pud_set_huge(pud, phys_addr + addr, prot))
+ continue;
+ }
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index 1036e4fa1ea2..3bba8f4b08a9 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -431,8 +431,8 @@ static void hidp_del_timer(struct hidp_session *session)
+ del_timer(&session->timer);
+ }
+
+-static void hidp_process_report(struct hidp_session *session,
+- int type, const u8 *data, int len, int intr)
++static void hidp_process_report(struct hidp_session *session, int type,
++ const u8 *data, unsigned int len, int intr)
+ {
+ if (len > HID_MAX_BUFFER_SIZE)
+ len = HID_MAX_BUFFER_SIZE;
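The hidp change above looks cosmetic, but it is a signedness fix: with a
signed len, a negative value passes the upper-bound clamp unchanged and is
then implicitly converted to a huge size_t by memcpy() and friends. A compact
illustration, with MAX_BUF standing in for HID_MAX_BUFFER_SIZE:

#include <string.h>

#define MAX_BUF 4096

/* broken: len == -1 fails the "len > MAX_BUF" test, survives the clamp,
 * and memcpy() then receives (size_t)-1 */
void copy_signed(char dst[MAX_BUF], const char *src, int len)
{
	if (len > MAX_BUF)
		len = MAX_BUF;
	memcpy(dst, src, len);
}

/* fixed: the same bit pattern is now a huge unsigned value and is
 * clamped, exactly as in the new hidp_process_report() prototype */
void copy_unsigned(char dst[MAX_BUF], const char *src, unsigned int len)
{
	if (len > MAX_BUF)
		len = MAX_BUF;
	memcpy(dst, src, len);
}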
+diff --git a/scripts/depmod.sh b/scripts/depmod.sh
+index 9831cca31240..f41b0a4b575c 100755
+--- a/scripts/depmod.sh
++++ b/scripts/depmod.sh
+@@ -11,10 +11,16 @@ DEPMOD=$1
+ KERNELRELEASE=$2
+ SYMBOL_PREFIX=$3
+
+-if ! test -r System.map -a -x "$DEPMOD"; then
++if ! test -r System.map ; then
+ exit 0
+ fi
+
++if [ -z $(command -v $DEPMOD) ]; then
++ echo "'make modules_install' requires $DEPMOD. Please install it." >&2
++ echo "This is probably in the kmod package." >&2
++ exit 1
++fi
++
+ # older versions of depmod don't support -P <symbol-prefix>
+ # support was added in module-init-tools 3.13
+ if test -n "$SYMBOL_PREFIX"; then
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-08-16 11:47 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-08-16 11:47 UTC (permalink / raw
To: gentoo-commits
commit: 5a8ea77e6169b9a966c3d3c73133410b3e2c6947
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 16 11:47:22 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 16 11:47:22 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5a8ea77e
x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
0000_README | 4 +++
1700_x86-l1tf-config-kvm-build-error-fix.patch | 40 ++++++++++++++++++++++++++
2 files changed, 44 insertions(+)
diff --git a/0000_README b/0000_README
index ae45bfe..a489588 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
+Patch: 1700_x86-l1tf-config-kvm-build-error-fix.patch
+From: http://www.kernel.org
+Desc: x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
+
Patch: 2300_enable-poweroff-on-Mac-Pro-11.patch
From: http://kernel.ubuntu.com/git/ubuntu/ubuntu-xenial.git/patch/drivers/pci/quirks.c?id=5080ff61a438f3dd80b88b423e1a20791d8a774c
Desc: Workaround to enable poweroff on Mac Pro 11. See bug #601964.
diff --git a/1700_x86-l1tf-config-kvm-build-error-fix.patch b/1700_x86-l1tf-config-kvm-build-error-fix.patch
new file mode 100644
index 0000000..88c2ec6
--- /dev/null
+++ b/1700_x86-l1tf-config-kvm-build-error-fix.patch
@@ -0,0 +1,40 @@
+From 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 Mon Sep 17 00:00:00 2001
+From: Guenter Roeck <linux@roeck-us.net>
+Date: Wed, 15 Aug 2018 08:38:33 -0700
+Subject: x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
+
+From: Guenter Roeck <linux@roeck-us.net>
+
+commit 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 upstream.
+
+allmodconfig+CONFIG_KVM_INTEL=n results in the following build error.
+
+ ERROR: "l1tf_vmx_mitigation" [arch/x86/kvm/kvm.ko] undefined!
+
+Fixes: 5b76a3cff011 ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry")
+Reported-by: Meelis Roos <mroos@linux.ee>
+Cc: Meelis Roos <mroos@linux.ee>
+Cc: Paolo Bonzini <pbonzini@redhat.com>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Guenter Roeck <linux@roeck-us.net>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kernel/cpu/bugs.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
+ enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+-
++#endif
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+-#endif
+
+ static void __init l1tf_select_mitigation(void)
+ {
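The one-hunk fix above is pure preprocessor scoping: l1tf_vmx_mitigation is
referenced from code that is built even when CONFIG_KVM_INTEL is disabled, so
its definition (and export) must be unconditional; only l1tf_mitigation's
export stays under the #if. A toy reduction of the failure mode, names
hypothetical:

/* broken: the variable vanishes when FEATURE is off, but other
 * translation units still reference it -> "undefined!" at link time */
#ifdef FEATURE
int shared_state;
#endif

/* fixed: define it unconditionally; guard only what is genuinely
 * feature-specific (in the kernel, the EXPORT_SYMBOL_GPL decoration) */
int shared_state_fixed;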
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-08-15 16:35 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-08-15 16:35 UTC (permalink / raw
To: gentoo-commits
commit: 7fa8c974dfadd7ffd295f500f29891e4a662da84
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 15 16:35:41 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 15 16:35:41 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7fa8c974
Linux patch 4.17.15
0000_README | 4 +
1014_linux-4.17.15.patch | 4800 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4804 insertions(+)
diff --git a/0000_README b/0000_README
index 102b8df..ae45bfe 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch: 1013_linux-4.17.14.patch
From: http://www.kernel.org
Desc: Linux 4.17.14
+Patch: 1014_linux-4.17.15.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.15
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1014_linux-4.17.15.patch b/1014_linux-4.17.15.patch
new file mode 100644
index 0000000..174db8f
--- /dev/null
+++ b/1014_linux-4.17.15.patch
@@ -0,0 +1,4800 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index bd4975e132d3..6048a81fa744 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -479,6 +479,7 @@ What: /sys/devices/system/cpu/vulnerabilities
+ /sys/devices/system/cpu/vulnerabilities/spectre_v1
+ /sys/devices/system/cpu/vulnerabilities/spectre_v2
+ /sys/devices/system/cpu/vulnerabilities/spec_store_bypass
++ /sys/devices/system/cpu/vulnerabilities/l1tf
+ Date: January 2018
+ Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description: Information about CPU vulnerabilities
+@@ -490,3 +491,26 @@ Description: Information about CPU vulnerabilities
+ "Not affected" CPU is not affected by the vulnerability
+ "Vulnerable" CPU is affected and no mitigation in effect
+ "Mitigation: $M" CPU is affected and mitigation $M is in effect
++
++ Details about the l1tf file can be found in
++ Documentation/admin-guide/l1tf.rst
++
++What: /sys/devices/system/cpu/smt
++ /sys/devices/system/cpu/smt/active
++ /sys/devices/system/cpu/smt/control
++Date: June 2018
++Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
++Description: Control Symmetric Multi Threading (SMT)
++
++ active: Tells whether SMT is active (enabled and siblings online)
++
++ control: Read/write interface to control SMT. Possible
++ values:
++
++ "on" SMT is enabled
++ "off" SMT is disabled
++ "forceoff" SMT is force disabled. Cannot be changed.
++ "notsupported" SMT is not supported by the CPU
++
++ If control status is "forceoff" or "notsupported" writes
++ are rejected.
+diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst
+index 5bb9161dbe6a..78f8f00c369f 100644
+--- a/Documentation/admin-guide/index.rst
++++ b/Documentation/admin-guide/index.rst
+@@ -17,6 +17,15 @@ etc.
+ kernel-parameters
+ devices
+
++This section describes CPU vulnerabilities and provides an overview of the
++possible mitigations along with guidance for selecting mitigations if they
++are configurable at compile, boot or run time.
++
++.. toctree::
++ :maxdepth: 1
++
++ l1tf
++
+ Here is a set of documents aimed at users who are trying to track down
+ problems and bugs in particular.
+
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index ff4ba249a26f..d7dd58ccf0d4 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1958,10 +1958,84 @@
+ (virtualized real and unpaged mode) on capable
+ Intel chips. Default is 1 (enabled)
+
++ kvm-intel.vmentry_l1d_flush=[KVM,Intel] Mitigation for L1 Terminal Fault
++ CVE-2018-3620.
++
++ Valid arguments: never, cond, always
++
++ always: L1D cache flush on every VMENTER.
++ cond: Flush L1D on VMENTER only when the code between
++ VMEXIT and VMENTER can leak host memory.
++ never: Disables the mitigation
++
++ Default is cond (do L1 cache flush in specific instances)
++
+ kvm-intel.vpid= [KVM,Intel] Disable Virtual Processor Identification
+ feature (tagged TLBs) on capable Intel chips.
+ Default is 1 (enabled)
+
++ l1tf= [X86] Control mitigation of the L1TF vulnerability on
++ affected CPUs
++
++ The kernel PTE inversion protection is unconditionally
++ enabled and cannot be disabled.
++
++ full
++ Provides all available mitigations for the
++ L1TF vulnerability. Disables SMT and
++ enables all mitigations in the
++ hypervisors, i.e. unconditional L1D flush.
++
++ SMT control and L1D flush control via the
++ sysfs interface is still possible after
++ boot. Hypervisors will issue a warning
++ when the first VM is started in a
++ potentially insecure configuration,
++ i.e. SMT enabled or L1D flush disabled.
++
++ full,force
++ Same as 'full', but disables SMT and L1D
++ flush runtime control. Implies the
++ 'nosmt=force' command line option.
++ (i.e. sysfs control of SMT is disabled.)
++
++ flush
++ Leaves SMT enabled and enables the default
++ hypervisor mitigation, i.e. conditional
++ L1D flush.
++
++ SMT control and L1D flush control via the
++ sysfs interface is still possible after
++ boot. Hypervisors will issue a warning
++ when the first VM is started in a
++ potentially insecure configuration,
++ i.e. SMT enabled or L1D flush disabled.
++
++ flush,nosmt
++
++ Disables SMT and enables the default
++ hypervisor mitigation.
++
++ SMT control and L1D flush control via the
++ sysfs interface is still possible after
++ boot. Hypervisors will issue a warning
++ when the first VM is started in a
++ potentially insecure configuration,
++ i.e. SMT enabled or L1D flush disabled.
++
++ flush,nowarn
++ Same as 'flush', but hypervisors will not
++ warn when a VM is started in a potentially
++ insecure configuration.
++
++ off
++ Disables hypervisor mitigations and doesn't
++ emit any warnings.
++
++ Default is 'flush'.
++
++ For details see: Documentation/admin-guide/l1tf.rst
++
+ l2cr= [PPC]
+
+ l3cr= [PPC]
+@@ -2675,6 +2749,10 @@
+ nosmt [KNL,S390] Disable symmetric multithreading (SMT).
+ Equivalent to smt=1.
+
++ [KNL,x86] Disable symmetric multithreading (SMT).
++ nosmt=force: Force disable SMT, cannot be undone
++ via the sysfs control file.
++
+ nospectre_v2 [X86] Disable all mitigations for the Spectre variant 2
+ (indirect branch prediction) vulnerability. System may
+ allow data leaks with this option, which is equivalent
+diff --git a/Documentation/admin-guide/l1tf.rst b/Documentation/admin-guide/l1tf.rst
+new file mode 100644
+index 000000000000..bae52b845de0
+--- /dev/null
++++ b/Documentation/admin-guide/l1tf.rst
+@@ -0,0 +1,610 @@
++L1TF - L1 Terminal Fault
++========================
++
++L1 Terminal Fault is a hardware vulnerability which allows unprivileged
++speculative access to data which is available in the Level 1 Data Cache
++when the page table entry controlling the virtual address, which is used
++for the access, has the Present bit cleared or other reserved bits set.
++
++Affected processors
++-------------------
++
++This vulnerability affects a wide range of Intel processors. The
++vulnerability is not present on:
++
++ - Processors from AMD, Centaur and other non Intel vendors
++
++ - Older processor models, where the CPU family is < 6
++
++ - A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft,
++ Penwell, Pineview, Silvermont, Airmont, Merrifield)
++
++ - The Intel XEON PHI family
++
++ - Intel processors which have the ARCH_CAP_RDCL_NO bit set in the
++ IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is not affected
++ by the Meltdown vulnerability either. These CPUs should become
++ available by end of 2018.
++
++Whether a processor is affected or not can be read out from the L1TF
++vulnerability file in sysfs. See :ref:`l1tf_sys_info`.
++
++Related CVEs
++------------
++
++The following CVE entries are related to the L1TF vulnerability:
++
++ ============= ================= ==============================
++ CVE-2018-3615 L1 Terminal Fault SGX related aspects
++ CVE-2018-3620 L1 Terminal Fault OS, SMM related aspects
++ CVE-2018-3646 L1 Terminal Fault Virtualization related aspects
++ ============= ================= ==============================
++
++Problem
++-------
++
++If an instruction accesses a virtual address for which the relevant page
++table entry (PTE) has the Present bit cleared or other reserved bits set,
++then speculative execution ignores the invalid PTE and loads the referenced
++data if it is present in the Level 1 Data Cache, as if the page referenced
++by the address bits in the PTE was still present and accessible.
++
++While this is a purely speculative mechanism and the instruction will raise
++a page fault when it is retired eventually, the pure act of loading the
++data and making it available to other speculative instructions opens up the
++opportunity for side channel attacks to unprivileged malicious code,
++similar to the Meltdown attack.
++
++While Meltdown breaks the user space to kernel space protection, L1TF
++allows to attack any physical memory address in the system and the attack
++works across all protection domains. It allows an attack of SGX and also
++works from inside virtual machines because the speculation bypasses the
++extended page table (EPT) protection mechanism.
++
++
++Attack scenarios
++----------------
++
++1. Malicious user space
++^^^^^^^^^^^^^^^^^^^^^^^
++
++ Operating Systems store arbitrary information in the address bits of a
++ PTE which is marked non present. This allows a malicious user space
++ application to attack the physical memory to which these PTEs resolve.
++ In some cases user-space can maliciously influence the information
++ encoded in the address bits of the PTE, thus making attacks more
++ deterministic and more practical.
++
++ The Linux kernel contains a mitigation for this attack vector, PTE
++ inversion, which is permanently enabled and has no performance
++ impact. The kernel ensures that the address bits of PTEs, which are not
++ marked present, never point to cacheable physical memory space.
++
++ A system with an up to date kernel is protected against attacks from
++ malicious user space applications.
++
++2. Malicious guest in a virtual machine
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ The fact that L1TF breaks all domain protections allows malicious guest
++ OSes, which can control the PTEs directly, and malicious guest user
++ space applications, which run on an unprotected guest kernel lacking the
++ PTE inversion mitigation for L1TF, to attack physical host memory.
++
++ A special aspect of L1TF in the context of virtualization is symmetric
++ multi threading (SMT). The Intel implementation of SMT is called
++ HyperThreading. The fact that Hyperthreads on the affected processors
++ share the L1 Data Cache (L1D) is important for this. As the flaw allows
++ only to attack data which is present in L1D, a malicious guest running
++ on one Hyperthread can attack the data which is brought into the L1D by
++ the context which runs on the sibling Hyperthread of the same physical
++ core. This context can be host OS, host user space or a different guest.
++
++ If the processor does not support Extended Page Tables, the attack is
++ only possible, when the hypervisor does not sanitize the content of the
++ effective (shadow) page tables.
++
++ While solutions exist to mitigate these attack vectors fully, these
++ mitigations are not enabled by default in the Linux kernel because they
++ can affect performance significantly. The kernel provides several
++ mechanisms which can be utilized to address the problem depending on the
++ deployment scenario. The mitigations, their protection scope and impact
++ are described in the next sections.
++
++ The default mitigations and the rationale for choosing them are explained
++ at the end of this document. See :ref:`default_mitigations`.
++
++.. _l1tf_sys_info:
++
++L1TF system information
++-----------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current L1TF
++status of the system: whether the system is vulnerable, and which
++mitigations are active. The relevant sysfs file is:
++
++/sys/devices/system/cpu/vulnerabilities/l1tf
++
++The possible values in this file are:
++
++ =========================== ===============================
++ 'Not affected' The processor is not vulnerable
++ 'Mitigation: PTE Inversion' The host protection is active
++ =========================== ===============================
++
++If KVM/VMX is enabled and the processor is vulnerable then the following
++information is appended to the 'Mitigation: PTE Inversion' part:
++
++ - SMT status:
++
++ ===================== ================
++ 'VMX: SMT vulnerable' SMT is enabled
++ 'VMX: SMT disabled' SMT is disabled
++ ===================== ================
++
++ - L1D Flush mode:
++
++ ================================ ====================================
++ 'L1D vulnerable' L1D flushing is disabled
++
++ 'L1D conditional cache flushes' L1D flush is conditionally enabled
++
++ 'L1D cache flushes' L1D flush is unconditionally enabled
++ ================================ ====================================
++
++The resulting grade of protection is discussed in the following sections.
++
++
++Host mitigation mechanism
++-------------------------
++
++The kernel is unconditionally protected against L1TF attacks from malicious
++user space running on the host.
++
++
++Guest mitigation mechanisms
++---------------------------
++
++.. _l1d_flush:
++
++1. L1D flush on VMENTER
++^^^^^^^^^^^^^^^^^^^^^^^
++
++ To make sure that a guest cannot attack data which is present in the L1D
++ the hypervisor flushes the L1D before entering the guest.
++
++ Flushing the L1D evicts not only the data which should not be accessed
++ by a potentially malicious guest, it also flushes the guest
++ data. Flushing the L1D has a performance impact as the processor has to
++ bring the flushed guest data back into the L1D. Depending on the
++ frequency of VMEXIT/VMENTER and the type of computations in the guest
++ performance degradation in the range of 1% to 50% has been observed. For
++ scenarios where guest VMEXIT/VMENTER are rare the performance impact is
++ minimal. Virtio and mechanisms like posted interrupts are designed to
++ confine the VMEXITs to a bare minimum, but specific configurations and
++ application scenarios might still suffer from a high VMEXIT rate.
++
++ The kernel provides two L1D flush modes:
++ - conditional ('cond')
++ - unconditional ('always')
++
++ The conditional mode avoids L1D flushing after VMEXITs which execute
++ only audited code paths before the corresponding VMENTER. These code
++ paths have been verified not to expose secrets or other
++ interesting data to an attacker, but they can leak information about the
++ address space layout of the hypervisor.
++
++ Unconditional mode flushes L1D on all VMENTER invocations and provides
++ maximum protection. It has a higher overhead than the conditional
++ mode. The overhead cannot be quantified correctly as it depends on the
++ workload scenario and the resulting number of VMEXITs.
++
++ The general recommendation is to enable L1D flush on VMENTER. The kernel
++ defaults to conditional mode on affected processors.
++
++ **Note**, that L1D flush does not prevent the SMT problem because the
++ sibling thread will also bring back its data into the L1D which makes it
++ attackable again.
++
++ L1D flush can be controlled by the administrator via the kernel command
++ line and sysfs control files. See :ref:`mitigation_control_command_line`
++ and :ref:`mitigation_control_kvm`.
++
++.. _guest_confinement:
++
++2. Guest VCPU confinement to dedicated physical cores
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ To address the SMT problem, it is possible to make a guest or a group of
++ guests affine to one or more physical cores. The proper mechanism for
++ that is to utilize exclusive cpusets to ensure that no other guest or
++ host tasks can run on these cores.
++
++ If only a single guest or related guests run on sibling SMT threads on
++ the same physical core then they can only attack their own memory and
++ restricted parts of the host memory.
++
++ Host memory is attackable, when one of the sibling SMT threads runs in
++ host OS (hypervisor) context and the other in guest context. The amount
++ of valuable information from the host OS context depends on the context
++ which the host OS executes, i.e. interrupts, soft interrupts and kernel
++ threads. The amount of valuable data from these contexts cannot be
++ declared as non-interesting for an attacker without deep inspection of
++ the code.
++
++ **Note**, that assigning guests to a fixed set of physical cores affects
++ the ability of the scheduler to do load balancing and might have
++ negative effects on CPU utilization depending on the hosting
++ scenario. Disabling SMT might be a viable alternative for particular
++ scenarios.
++
++ For further information about confining guests to a single or to a group
++ of cores consult the cpusets documentation:
++
++ https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt
++
++.. _interrupt_isolation:
++
++3. Interrupt affinity
++^^^^^^^^^^^^^^^^^^^^^
++
++ Interrupts can be made affine to logical CPUs. This is not universally
++ true because there are types of interrupts which are truly per CPU
++ interrupts, e.g. the local timer interrupt. Aside from that, multi queue
++ devices affine their interrupts to single CPUs or groups of CPUs per
++ queue without allowing the administrator to control the affinities.
++
++ Moving the interrupts, which can be affinity controlled, away from CPUs
++ which run untrusted guests, reduces the attack vector space.
++
++ Whether the interrupts which are affine to CPUs that run untrusted
++ guests provide interesting data for an attacker depends on the system
++ configuration and the scenarios which run on the system. While for some
++ of the interrupts it can be assumed that they won't expose interesting
++ information beyond exposing hints about the host OS memory layout, there
++ is no way to make general assumptions.
++
++ Interrupt affinity can be controlled by the administrator via the
++ /proc/irq/$NR/smp_affinity[_list] files. Limited documentation is
++ available at:
++
++ https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
++
++.. _smt_control:
++
++4. SMT control
++^^^^^^^^^^^^^^
++
++ To prevent the SMT issues of L1TF it might be necessary to disable SMT
++ completely. Disabling SMT can have a significant performance impact, but
++ the impact depends on the hosting scenario and the type of workloads.
++ The impact of disabling SMT also needs to be weighed against the impact
++ of other mitigation solutions like confining guests to dedicated cores.
++
++ The kernel provides a sysfs interface to retrieve the status of SMT and
++ to control it. It also provides a kernel command line interface to
++ control SMT.
++
++ The kernel command line interface consists of the following options:
++
++ =========== ==========================================================
++ nosmt Affects the bring up of the secondary CPUs during boot. The
++ kernel tries to bring all present CPUs online during the
++ boot process. "nosmt" makes sure that from each physical
++ core only one - the so called primary (hyper) thread is
++ activated. Due to a design flaw of Intel processors related
++ to Machine Check Exceptions the non primary siblings have
++ to be brought up at least partially and are then shut down
++ again. "nosmt" can be undone via the sysfs interface.
++
++ nosmt=force Has the same effect as "nosmt" but it does not allow to
++ undo the SMT disable via the sysfs interface.
++ =========== ==========================================================
++
++ The sysfs interface provides two files:
++
++ - /sys/devices/system/cpu/smt/control
++ - /sys/devices/system/cpu/smt/active
++
++ /sys/devices/system/cpu/smt/control:
++
++ This file allows to read out the SMT control state and provides the
++ ability to disable or (re)enable SMT. The possible states are:
++
++ ============== ===================================================
++ on SMT is supported by the CPU and enabled. All
++ logical CPUs can be onlined and offlined without
++ restrictions.
++
++ off SMT is supported by the CPU and disabled. Only
++ the so called primary SMT threads can be onlined
++ and offlined without restrictions. An attempt to
++ online a non-primary sibling is rejected
++
++ forceoff Same as 'off' but the state cannot be controlled.
++ Attempts to write to the control file are rejected.
++
++ notsupported The processor does not support SMT. It's therefore
++ not affected by the SMT implications of L1TF.
++ Attempts to write to the control file are rejected.
++ ============== ===================================================
++
++ The possible states which can be written into this file to control SMT
++ state are:
++
++ - on
++ - off
++ - forceoff
++
++ /sys/devices/system/cpu/smt/active:
++
++ This file reports whether SMT is enabled and active, i.e. if on any
++ physical core two or more sibling threads are online.
++
++ SMT control is also possible at boot time via the l1tf kernel command
++ line parameter in combination with L1D flush control. See
++ :ref:`mitigation_control_command_line`.
++
++5. Disabling EPT
++^^^^^^^^^^^^^^^^
++
++ Disabling EPT for virtual machines provides full mitigation for L1TF even
++ with SMT enabled, because the effective page tables for guests are
++ managed and sanitized by the hypervisor. Though disabling EPT has a
++ significant performance impact especially when the Meltdown mitigation
++ KPTI is enabled.
++
++ EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++There is ongoing research and development for new mitigation mechanisms to
++address the performance impact of disabling SMT or EPT.
++
++.. _mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++The kernel command line allows to control the L1TF mitigations at boot
++time with the option "l1tf=". The valid arguments for this option are:
++
++ ============ =============================================================
++ full Provides all available mitigations for the L1TF
++ vulnerability. Disables SMT and enables all mitigations in
++ the hypervisors, i.e. unconditional L1D flushing
++
++ SMT control and L1D flush control via the sysfs interface
++ is still possible after boot. Hypervisors will issue a
++ warning when the first VM is started in a potentially
++ insecure configuration, i.e. SMT enabled or L1D flush
++ disabled.
++
++ full,force Same as 'full', but disables SMT and L1D flush runtime
++ control. Implies the 'nosmt=force' command line option.
++ (i.e. sysfs control of SMT is disabled.)
++
++ flush Leaves SMT enabled and enables the default hypervisor
++ mitigation, i.e. conditional L1D flushing
++
++ SMT control and L1D flush control via the sysfs interface
++ is still possible after boot. Hypervisors will issue a
++ warning when the first VM is started in a potentially
++ insecure configuration, i.e. SMT enabled or L1D flush
++ disabled.
++
++ flush,nosmt Disables SMT and enables the default hypervisor mitigation,
++ i.e. conditional L1D flushing.
++
++ SMT control and L1D flush control via the sysfs interface
++ is still possible after boot. Hypervisors will issue a
++ warning when the first VM is started in a potentially
++ insecure configuration, i.e. SMT enabled or L1D flush
++ disabled.
++
++ flush,nowarn Same as 'flush', but hypervisors will not warn when a VM is
++ started in a potentially insecure configuration.
++
++ off Disables hypervisor mitigations and doesn't emit any
++ warnings.
++ ============ =============================================================
++
++The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
++
++
++.. _mitigation_control_kvm:
++
++Mitigation control for KVM - module parameter
++-------------------------------------------------------------
++
++The KVM hypervisor mitigation mechanism, flushing the L1D cache when
++entering a guest, can be controlled with a module parameter.
++
++The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the
++following arguments:
++
++ ============ ==============================================================
++ always L1D cache flush on every VMENTER.
++
++ cond Flush L1D on VMENTER only when the code between VMEXIT and
++ VMENTER can leak host memory which is considered
++ interesting for an attacker. This still can leak host memory
++ which allows e.g. to determine the host's address space layout.
++
++ never Disables the mitigation
++ ============ ==============================================================
++
++The parameter can be provided on the kernel command line, as a module
++parameter when loading the modules and at runtime modified via the sysfs
++file:
++
++/sys/module/kvm_intel/parameters/vmentry_l1d_flush
++
++The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
++line, then 'always' is enforced and the kvm-intel.vmentry_l1d_flush
++module parameter is ignored and writes to the sysfs file are rejected.
++
++
++Mitigation selection guide
++--------------------------
++
++1. No virtualization in use
++^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ The system is protected by the kernel unconditionally and no further
++ action is required.
++
++2. Virtualization with trusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ If the guest comes from a trusted source and the guest OS kernel is
++ guaranteed to have the L1TF mitigations in place the system is fully
++ protected against L1TF and no further action is required.
++
++ To avoid the overhead of the default L1D flushing on VMENTER the
++ administrator can disable the flushing via the kernel command line and
++ sysfs control files. See :ref:`mitigation_control_command_line` and
++ :ref:`mitigation_control_kvm`.
++
++
++3. Virtualization with untrusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++3.1. SMT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++ If SMT is not supported by the processor or disabled in the BIOS or by
++ the kernel, it's only required to enforce L1D flushing on VMENTER.
++
++ Conditional L1D flushing is the default behaviour and can be tuned. See
++ :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++3.2. EPT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++ If EPT is not supported by the processor or disabled in the hypervisor,
++ the system is fully protected. SMT can stay enabled and L1D flushing on
++ VMENTER is not required.
++
++ EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++3.3. SMT and EPT supported and active
++"""""""""""""""""""""""""""""""""""""
++
++ If SMT and EPT are supported and active then various degrees of
++ mitigations can be employed:
++
++ - L1D flushing on VMENTER:
++
++ L1D flushing on VMENTER is the minimal protection requirement, but it
++ is only potent in combination with other mitigation methods.
++
++ Conditional L1D flushing is the default behaviour and can be tuned. See
++ :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++ - Guest confinement:
++
++ Confinement of guests to a single or a group of physical cores which
++ are not running any other processes, can reduce the attack surface
++ significantly, but interrupts, soft interrupts and kernel threads can
++ still expose valuable data to a potential attacker. See
++ :ref:`guest_confinement`.
++
++ - Interrupt isolation:
++
++ Isolating the guest CPUs from interrupts can reduce the attack surface
++ further, but still allows a malicious guest to explore a limited amount
++ of host physical memory. This can at least be used to gain knowledge
++ about the host address space layout. The interrupts which have a fixed
++ affinity to the CPUs which run the untrusted guests can, depending on
++ the scenario, still trigger soft interrupts and schedule kernel threads
++ which might expose valuable information. See
++ :ref:`interrupt_isolation`.
++
++The above three mitigation methods combined can provide protection to a
++certain degree, but the risk of the remaining attack surface has to be
++carefully analyzed. For full protection the following methods are
++available:
++
++ - Disabling SMT:
++
++ Disabling SMT and enforcing the L1D flushing provides the maximum
++ amount of protection. This mitigation does not depend on any of the
++ above mitigation methods.
++
++ SMT control and L1D flushing can be tuned by the command line
++ parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run
++ time with the matching sysfs control files. See :ref:`smt_control`,
++ :ref:`mitigation_control_command_line` and
++ :ref:`mitigation_control_kvm`.
++
++ - Disabling EPT:
++
++ Disabling EPT provides the maximum amount of protection as well. It does
++ not depend on any of the above mitigation methods. SMT can stay
++ enabled and L1D flushing is not required, but the performance impact is
++ significant.
++
++ EPT can be disabled in the hypervisor via the 'kvm-intel.ept'
++ parameter.
++
++3.4. Nested virtual machines
++""""""""""""""""""""""""""""
++
++When nested virtualization is in use, three operating systems are involved:
++the bare metal hypervisor, the nested hypervisor and the nested virtual
++machine. VMENTER operations from the nested hypervisor into the nested
++guest will always be processed by the bare metal hypervisor. If KVM is the
++bare metal hypervisor it will:
++
++ - Flush the L1D cache on every switch from the nested hypervisor to the
++ nested virtual machine, so that the nested hypervisor's secrets are not
++ exposed to the nested virtual machine;
++
++ - Flush the L1D cache on every switch from the nested virtual machine to
++ the nested hypervisor; this is a complex operation, and flushing the L1D
++ cache avoids that the bare metal hypervisor's secrets are exposed to the
++ nested virtual machine;
++
++ - Instruct the nested hypervisor to not perform any L1D cache flush. This
++ is an optimization to avoid double L1D flushing.
++
++
++.. _default_mitigations:
++
++Default mitigations
++-------------------
++
++ The kernel default mitigations for vulnerable processors are:
++
++ - PTE inversion to protect against malicious user space. This is done
++ unconditionally and cannot be controlled.
++
++ - L1D conditional flushing on VMENTER when EPT is enabled for
++ a guest.
++
++ The kernel does not by default enforce the disabling of SMT, which leaves
++ SMT systems vulnerable when running untrusted guests with EPT enabled.
++
++ The rationale for this choice is:
++
++ - Force disabling SMT can break existing setups, especially with
++ unattended updates.
++
++ - If regular users run untrusted guests on their machine, then L1TF is
++ just an add on to other malware which might be embedded in an untrusted
++ guest, e.g. spam-bots or attacks on the local network.
++
++ There is no technical way to prevent a user from running untrusted code
++ on their machines blindly.
++
++ - It's technically extremely unlikely and from today's knowledge even
++ impossible that L1TF can be exploited via the most popular attack
++ mechanisms like JavaScript because these mechanisms have no way to
++ control PTEs. If this were possible and no other mitigation were
++ available, then the default might be different.
++
++ - The administrators of cloud and hosting setups have to carefully
++ analyze the risk for their scenarios and make the appropriate
++ mitigation choices, which might even vary across their deployed
++ machines and also result in other changes of their overall setup.
++ There is no way for the kernel to provide a sensible default for this
++ kind of scenario.
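Because l1tf.rst leans heavily on sysfs, a small userspace helper — not part
of the patch — can dump every interface the document describes; the paths
below are exactly those listed above:

#include <stdio.h>

static void show(const char *path)
{
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		printf("%-55s: not present\n", path);
		return;
	}
	if (fgets(line, sizeof(line), f))
		printf("%-55s: %s", path, line);
	fclose(f);
}

int main(void)
{
	show("/sys/devices/system/cpu/vulnerabilities/l1tf");
	show("/sys/devices/system/cpu/smt/control");
	show("/sys/devices/system/cpu/smt/active");
	show("/sys/module/kvm_intel/parameters/vmentry_l1d_flush");
	return 0;
}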
+diff --git a/Makefile b/Makefile
+index ce4248f558d1..e8cbf2dd3069 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 75dd23acf133..95ee27f372ed 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -13,6 +13,9 @@ config KEXEC_CORE
+ config HAVE_IMA_KEXEC
+ bool
+
++config HOTPLUG_SMT
++ bool
++
+ config OPROFILE
+ tristate "OProfile system profiling"
+ depends on PROFILING
+diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
+index 49c7205b8db8..77fdad65e2bb 100644
+--- a/arch/arm/boot/dts/imx6sx.dtsi
++++ b/arch/arm/boot/dts/imx6sx.dtsi
+@@ -1351,7 +1351,7 @@
+ ranges = <0x81000000 0 0 0x08f80000 0 0x00010000 /* downstream I/O */
+ 0x82000000 0 0x08000000 0x08000000 0 0x00f00000>; /* non-prefetchable memory */
+ num-lanes = <1>;
+- interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "msi";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index fc5a574c3482..f02087656528 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -199,7 +199,7 @@ config PREFETCH
+
+ config MLONGCALLS
+ bool "Enable the -mlong-calls compiler option for big kernels"
+- def_bool y if (!MODULES)
++ default y
+ depends on PA8X00
+ help
+ If you configure the kernel to include many drivers built-in instead
+diff --git a/arch/parisc/include/asm/barrier.h b/arch/parisc/include/asm/barrier.h
+new file mode 100644
+index 000000000000..dbaaca84f27f
+--- /dev/null
++++ b/arch/parisc/include/asm/barrier.h
+@@ -0,0 +1,32 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __ASM_BARRIER_H
++#define __ASM_BARRIER_H
++
++#ifndef __ASSEMBLY__
++
++/* The synchronize caches instruction executes as a nop on systems in
++ which all memory references are performed in order. */
++#define synchronize_caches() __asm__ __volatile__ ("sync" : : : "memory")
++
++#if defined(CONFIG_SMP)
++#define mb() do { synchronize_caches(); } while (0)
++#define rmb() mb()
++#define wmb() mb()
++#define dma_rmb() mb()
++#define dma_wmb() mb()
++#else
++#define mb() barrier()
++#define rmb() barrier()
++#define wmb() barrier()
++#define dma_rmb() barrier()
++#define dma_wmb() barrier()
++#endif
++
++#define __smp_mb() mb()
++#define __smp_rmb() mb()
++#define __smp_wmb() mb()
++
++#include <asm-generic/barrier.h>
++
++#endif /* !__ASSEMBLY__ */
++#endif /* __ASM_BARRIER_H */
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index e95207c0565e..1b4732e20137 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -481,6 +481,8 @@
+ /* Release pa_tlb_lock lock without reloading lock address. */
+ .macro tlb_unlock0 spc,tmp
+ #ifdef CONFIG_SMP
++ or,COND(=) %r0,\spc,%r0
++ sync
+ or,COND(=) %r0,\spc,%r0
+ stw \spc,0(\tmp)
+ #endif
+diff --git a/arch/parisc/kernel/pacache.S b/arch/parisc/kernel/pacache.S
+index 22e6374ece44..97451e67d35b 100644
+--- a/arch/parisc/kernel/pacache.S
++++ b/arch/parisc/kernel/pacache.S
+@@ -353,6 +353,7 @@ ENDPROC_CFI(flush_data_cache_local)
+ .macro tlb_unlock la,flags,tmp
+ #ifdef CONFIG_SMP
+ ldi 1,\tmp
++ sync
+ stw \tmp,0(\la)
+ mtsm \flags
+ #endif
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index e775f80ae28c..4886a6db42e9 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -633,6 +633,7 @@ cas_action:
+ sub,<> %r28, %r25, %r0
+ 2: stw,ma %r24, 0(%r26)
+ /* Free lock */
++ sync
+ stw,ma %r20, 0(%sr2,%r20)
+ #if ENABLE_LWS_DEBUG
+ /* Clear thread register indicator */
+@@ -647,6 +648,7 @@ cas_action:
+ 3:
+ /* Error occurred on load or store */
+ /* Free lock */
++ sync
+ stw %r20, 0(%sr2,%r20)
+ #if ENABLE_LWS_DEBUG
+ stw %r0, 4(%sr2,%r20)
+@@ -848,6 +850,7 @@ cas2_action:
+
+ cas2_end:
+ /* Free lock */
++ sync
+ stw,ma %r20, 0(%sr2,%r20)
+ /* Enable interrupts */
+ ssm PSW_SM_I, %r0
+@@ -858,6 +861,7 @@ cas2_end:
+ 22:
+ /* Error occurred on load or store */
+ /* Free lock */
++ sync
+ stw %r20, 0(%sr2,%r20)
+ ssm PSW_SM_I, %r0
+ ldo 1(%r0),%r28
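The parisc hunks above all insert a sync before the store that releases a
lock, so the protected accesses are ordered ahead of the release. The
portable C11 analogue is a store-release; a minimal sketch:

#include <stdatomic.h>

static atomic_int lock;		/* 1 = held, 0 = free */
static int shared_data;		/* protected by lock */

void unlock(void)
{
	shared_data = 42;	/* must be visible before the lock reads free */
	/* the "sync" in the assembly plays the role of this release ordering */
	atomic_store_explicit(&lock, 0, memory_order_release);
}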
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index c07f492b871a..960539ae701c 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -183,6 +183,7 @@ config X86
+ select HAVE_SYSCALL_TRACEPOINTS
+ select HAVE_UNSTABLE_SCHED_CLOCK
+ select HAVE_USER_RETURN_NOTIFIER
++ select HOTPLUG_SMT if SMP
+ select IRQ_FORCED_THREADING
+ select PCI_LOCKLESS_CONFIG
+ select PERF_EVENTS
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 74a9e06b6cfd..130e81e10fc7 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -10,6 +10,7 @@
+ #include <asm/fixmap.h>
+ #include <asm/mpspec.h>
+ #include <asm/msr.h>
++#include <asm/hardirq.h>
+
+ #define ARCH_APICTIMER_STOPS_ON_C3 1
+
+@@ -502,12 +503,19 @@ extern int default_check_phys_apicid_present(int phys_apicid);
+
+ #endif /* CONFIG_X86_LOCAL_APIC */
+
++#ifdef CONFIG_SMP
++bool apic_id_is_primary_thread(unsigned int id);
++#else
++static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
++#endif
++
+ extern void irq_enter(void);
+ extern void irq_exit(void);
+
+ static inline void entering_irq(void)
+ {
+ irq_enter();
++ kvm_set_cpu_l1tf_flush_l1d();
+ }
+
+ static inline void entering_ack_irq(void)
+@@ -520,6 +528,7 @@ static inline void ipi_entering_ack_irq(void)
+ {
+ irq_enter();
+ ack_APIC_irq();
++ kvm_set_cpu_l1tf_flush_l1d();
+ }
+
+ static inline void exiting_irq(void)
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index fb00a2fca990..f8659f070fc6 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -219,6 +219,7 @@
+ #define X86_FEATURE_IBPB ( 7*32+26) /* Indirect Branch Prediction Barrier */
+ #define X86_FEATURE_STIBP ( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN ( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
++#define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* "" L1TF workaround PTE inversion */
+
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
+@@ -339,6 +340,7 @@
+ #define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_FLUSH_L1D (18*32+28) /* Flush L1D cache */
+ #define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+ #define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* "" Speculative Store Bypass Disable */
+
+@@ -371,5 +373,6 @@
+ #define X86_BUG_SPECTRE_V1 X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
+ #define X86_BUG_SPECTRE_V2 X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
+ #define X86_BUG_SPEC_STORE_BYPASS X86_BUG(17) /* CPU is affected by speculative store bypass attack */
++#define X86_BUG_L1TF X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
+
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/dmi.h b/arch/x86/include/asm/dmi.h
+index 0ab2ab27ad1f..b825cb201251 100644
+--- a/arch/x86/include/asm/dmi.h
++++ b/arch/x86/include/asm/dmi.h
+@@ -4,8 +4,8 @@
+
+ #include <linux/compiler.h>
+ #include <linux/init.h>
++#include <linux/io.h>
+
+-#include <asm/io.h>
+ #include <asm/setup.h>
+
+ static __always_inline __init void *dmi_alloc(unsigned len)
+diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
+index 5ea2afd4c871..0459169ab589 100644
+--- a/arch/x86/include/asm/hardirq.h
++++ b/arch/x86/include/asm/hardirq.h
+@@ -3,10 +3,12 @@
+ #define _ASM_X86_HARDIRQ_H
+
+ #include <linux/threads.h>
+-#include <linux/irq.h>
+
+ typedef struct {
+- unsigned int __softirq_pending;
++ u16 __softirq_pending;
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++ u8 kvm_cpu_l1tf_flush_l1d;
++#endif
+ unsigned int __nmi_count; /* arch dependent */
+ #ifdef CONFIG_X86_LOCAL_APIC
+ unsigned int apic_timer_irqs; /* arch dependent */
+@@ -66,4 +68,24 @@ extern u64 arch_irq_stat_cpu(unsigned int cpu);
+ extern u64 arch_irq_stat(void);
+ #define arch_irq_stat arch_irq_stat
+
++
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++static inline void kvm_set_cpu_l1tf_flush_l1d(void)
++{
++ __this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
++}
++
++static inline void kvm_clear_cpu_l1tf_flush_l1d(void)
++{
++ __this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 0);
++}
++
++static inline bool kvm_get_cpu_l1tf_flush_l1d(void)
++{
++ return __this_cpu_read(irq_stat.kvm_cpu_l1tf_flush_l1d);
++}
++#else /* !IS_ENABLED(CONFIG_KVM_INTEL) */
++static inline void kvm_set_cpu_l1tf_flush_l1d(void) { }
++#endif /* IS_ENABLED(CONFIG_KVM_INTEL) */
++
+ #endif /* _ASM_X86_HARDIRQ_H */
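The three helpers give KVM a cheap per-cpu hint: interrupt and IPI entry paths set it, and the VMX entry code added later in this patch tests and clears it, so the costly L1D flush in 'cond' mode only happens when something else actually ran on the CPU since the last VMENTER. A sketch of the intended consumer pattern (the real logic lives in vmx_l1d_flush() below):

  static void vmenter_flush_check_sketch(void)
  {
          if (kvm_get_cpu_l1tf_flush_l1d()) {
                  kvm_clear_cpu_l1tf_flush_l1d();
                  /* issue MSR_IA32_FLUSH_CMD or the software flush here */
          }
  }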
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index c4fc17220df9..c14f2a74b2be 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -13,6 +13,8 @@
+ * Interrupt control:
+ */
+
++/* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */
++extern inline unsigned long native_save_fl(void);
+ extern inline unsigned long native_save_fl(void)
+ {
+ unsigned long flags;
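The seemingly redundant prototype matters because the kernel builds with gnu89/gnu_inline semantics, where "extern inline" never emits an out-of-line symbol from the header and gcc < 4.9 warns about the missing prototype. The pattern, sketched with a hypothetical function (names are placeholders):

  /* header: always inlined, no symbol emitted from here */
  extern inline int twice(int x);        /* silences -Wmissing-prototypes */
  extern inline int twice(int x) { return 2 * x; }

  /* exactly one .c (or .S) file provides the out-of-line copy for
   * callers where inlining is not possible: */
  int twice(int x) { return 2 * x; }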
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index f4b2588865e9..5d216d1f40a2 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -17,6 +17,7 @@
+ #include <linux/tracepoint.h>
+ #include <linux/cpumask.h>
+ #include <linux/irq_work.h>
++#include <linux/irq.h>
+
+ #include <linux/kvm.h>
+ #include <linux/kvm_para.h>
+@@ -711,6 +712,9 @@ struct kvm_vcpu_arch {
+
+ /* be preempted when it's in kernel-mode(cpl=0) */
+ bool preempted_in_kernel;
++
++ /* Flush the L1 Data cache for L1TF mitigation on VMENTER */
++ bool l1tf_flush_l1d;
+ };
+
+ struct kvm_lpage_info {
+@@ -879,6 +883,7 @@ struct kvm_vcpu_stat {
+ u64 signal_exits;
+ u64 irq_window_exits;
+ u64 nmi_window_exits;
++ u64 l1d_flush;
+ u64 halt_exits;
+ u64 halt_successful_poll;
+ u64 halt_attempted_poll;
+@@ -1410,6 +1415,7 @@ int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
+ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
+
++u64 kvm_get_arch_capabilities(void);
+ void kvm_define_shared_msr(unsigned index, u32 msr);
+ int kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
+
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index fda2114197b3..a7df14793e1d 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -70,12 +70,19 @@
+ #define MSR_IA32_ARCH_CAPABILITIES 0x0000010a
+ #define ARCH_CAP_RDCL_NO (1 << 0) /* Not susceptible to Meltdown */
+ #define ARCH_CAP_IBRS_ALL (1 << 1) /* Enhanced IBRS support */
++#define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH (1 << 3) /* Skip L1D flush on vmentry */
+ #define ARCH_CAP_SSB_NO (1 << 4) /*
+ * Not susceptible to Speculative Store Bypass
+ * attack, so no Speculative Store Bypass
+ * control required.
+ */
+
++#define MSR_IA32_FLUSH_CMD 0x0000010b
++#define L1D_FLUSH (1 << 0) /*
++ * Writeback and invalidate the
++ * L1 data cache.
++ */
++
+ #define MSR_IA32_BBL_CR_CTL 0x00000119
+ #define MSR_IA32_BBL_CR_CTL3 0x0000011e
+
+diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
+index aa30c3241ea7..0d5c739eebd7 100644
+--- a/arch/x86/include/asm/page_32_types.h
++++ b/arch/x86/include/asm/page_32_types.h
+@@ -29,8 +29,13 @@
+ #define N_EXCEPTION_STACKS 1
+
+ #ifdef CONFIG_X86_PAE
+-/* 44=32+12, the limit we can fit into an unsigned long pfn */
+-#define __PHYSICAL_MASK_SHIFT 44
++/*
++ * This is beyond the 44 bit limit imposed by the 32bit long pfns,
++ * but we need the full mask to make sure inverted PROT_NONE
++ * entries have all the host bits set in a guest.
++ * The real limit is still 44 bits.
++ */
++#define __PHYSICAL_MASK_SHIFT 52
+ #define __VIRTUAL_MASK_SHIFT 32
+
+ #else /* !CONFIG_X86_PAE */
+diff --git a/arch/x86/include/asm/pgtable-2level.h b/arch/x86/include/asm/pgtable-2level.h
+index 685ffe8a0eaf..60d0f9015317 100644
+--- a/arch/x86/include/asm/pgtable-2level.h
++++ b/arch/x86/include/asm/pgtable-2level.h
+@@ -95,4 +95,21 @@ static inline unsigned long pte_bitop(unsigned long value, unsigned int rightshi
+ #define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_low })
+ #define __swp_entry_to_pte(x) ((pte_t) { .pte = (x).val })
+
++/* No inverted PFNs on 2 level page tables */
++
++static inline u64 protnone_mask(u64 val)
++{
++ return 0;
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
++{
++ return val;
++}
++
++static inline bool __pte_needs_invert(u64 val)
++{
++ return false;
++}
++
+ #endif /* _ASM_X86_PGTABLE_2LEVEL_H */
+diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
+index f24df59c40b2..bb035a4cbc8c 100644
+--- a/arch/x86/include/asm/pgtable-3level.h
++++ b/arch/x86/include/asm/pgtable-3level.h
+@@ -241,12 +241,43 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp)
+ #endif
+
+ /* Encode and de-code a swap entry */
++#define SWP_TYPE_BITS 5
++
++#define SWP_OFFSET_FIRST_BIT (_PAGE_BIT_PROTNONE + 1)
++
++/* We always extract/encode the offset by shifting it all the way up, and then down again */
++#define SWP_OFFSET_SHIFT (SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS)
++
+ #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > 5)
+ #define __swp_type(x) (((x).val) & 0x1f)
+ #define __swp_offset(x) ((x).val >> 5)
+ #define __swp_entry(type, offset) ((swp_entry_t){(type) | (offset) << 5})
+-#define __pte_to_swp_entry(pte) ((swp_entry_t){ (pte).pte_high })
+-#define __swp_entry_to_pte(x) ((pte_t){ { .pte_high = (x).val } })
++
++/*
++ * Normally, __swp_entry() converts from arch-independent swp_entry_t to
++ * arch-dependent swp_entry_t, and __swp_entry_to_pte() just stores the result
++ * to pte. But here we have 32bit swp_entry_t and 64bit pte, and need to use the
++ * whole 64 bits. Thus, we shift the "real" arch-dependent conversion to
++ * __swp_entry_to_pte() through the following helper macro based on 64bit
++ * __swp_entry().
++ */
++#define __swp_pteval_entry(type, offset) ((pteval_t) { \
++ (~(pteval_t)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \
++ | ((pteval_t)(type) << (64 - SWP_TYPE_BITS)) })
++
++#define __swp_entry_to_pte(x) ((pte_t){ .pte = \
++ __swp_pteval_entry(__swp_type(x), __swp_offset(x)) })
++/*
++ * Analogously, __pte_to_swp_entry() doesn't just extract the arch-dependent
++ * swp_entry_t, but also has to convert it from 64bit to the 32bit
++ * intermediate representation, using the following macros based on 64bit
++ * __swp_type() and __swp_offset().
++ */
++#define __pteval_swp_type(x) ((unsigned long)((x).pte >> (64 - SWP_TYPE_BITS)))
++#define __pteval_swp_offset(x) ((unsigned long)(~((x).pte) << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT))
++
++#define __pte_to_swp_entry(pte) (__swp_entry(__pteval_swp_type(pte), \
++ __pteval_swp_offset(pte)))
+
+ #define gup_get_pte gup_get_pte
+ /*
+@@ -295,4 +326,6 @@ static inline pte_t gup_get_pte(pte_t *ptep)
+ return pte;
+ }
+
++#include <asm/pgtable-invert.h>
++
+ #endif /* _ASM_X86_PGTABLE_3LEVEL_H */
+diff --git a/arch/x86/include/asm/pgtable-invert.h b/arch/x86/include/asm/pgtable-invert.h
+new file mode 100644
+index 000000000000..44b1203ece12
+--- /dev/null
++++ b/arch/x86/include/asm/pgtable-invert.h
+@@ -0,0 +1,32 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_PGTABLE_INVERT_H
++#define _ASM_PGTABLE_INVERT_H 1
++
++#ifndef __ASSEMBLY__
++
++static inline bool __pte_needs_invert(u64 val)
++{
++ return !(val & _PAGE_PRESENT);
++}
++
++/* Get a mask to xor with the page table entry to get the correct pfn. */
++static inline u64 protnone_mask(u64 val)
++{
++ return __pte_needs_invert(val) ? ~0ull : 0;
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
++{
++ /*
++ * When a PTE transitions from NONE to !NONE or vice-versa
++ * invert the PFN part to stop speculation.
++ * pte_pfn undoes this when needed.
++ */
++ if (__pte_needs_invert(oldval) != __pte_needs_invert(val))
++ val = (val & ~mask) | (~val & mask);
++ return val;
++}
++
++#endif /* __ASSEMBLY__ */
++
++#endif
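A worked round trip through the helpers above, as a sketch in plain u64 arithmetic (_PAGE_PRESENT and PTE_PFN_MASK stand in for the kernel definitions): one NONE transition inverts the PFN bits, the next transition inverts them back.

  u64 pte  = (0x1234ULL << 12) | _PAGE_PRESENT;   /* pfn 0x1234 */
  u64 none = flip_protnone_guard(pte, pte & ~_PAGE_PRESENT, PTE_PFN_MASK);
  /* PFN field of 'none' is now ~0x1234: the speculatively formed
   * address has the high physical bits set and misses real RAM. */
  u64 back = flip_protnone_guard(none, none | _PAGE_PRESENT, PTE_PFN_MASK);
  /* PFN field of 'back' is 0x1234 again. */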
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index f1633de5a675..f3bbb6ea5937 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -185,19 +185,29 @@ static inline int pte_special(pte_t pte)
+ return pte_flags(pte) & _PAGE_SPECIAL;
+ }
+
++/* Entries that were set to PROT_NONE are inverted */
++
++static inline u64 protnone_mask(u64 val);
++
+ static inline unsigned long pte_pfn(pte_t pte)
+ {
+- return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT;
++ phys_addr_t pfn = pte_val(pte);
++ pfn ^= protnone_mask(pfn);
++ return (pfn & PTE_PFN_MASK) >> PAGE_SHIFT;
+ }
+
+ static inline unsigned long pmd_pfn(pmd_t pmd)
+ {
+- return (pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
++ phys_addr_t pfn = pmd_val(pmd);
++ pfn ^= protnone_mask(pfn);
++ return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
+ }
+
+ static inline unsigned long pud_pfn(pud_t pud)
+ {
+- return (pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT;
++ phys_addr_t pfn = pud_val(pud);
++ pfn ^= protnone_mask(pfn);
++ return (pfn & pud_pfn_mask(pud)) >> PAGE_SHIFT;
+ }
+
+ static inline unsigned long p4d_pfn(p4d_t p4d)
+@@ -400,11 +410,6 @@ static inline pmd_t pmd_mkwrite(pmd_t pmd)
+ return pmd_set_flags(pmd, _PAGE_RW);
+ }
+
+-static inline pmd_t pmd_mknotpresent(pmd_t pmd)
+-{
+- return pmd_clear_flags(pmd, _PAGE_PRESENT | _PAGE_PROTNONE);
+-}
+-
+ static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
+ {
+ pudval_t v = native_pud_val(pud);
+@@ -459,11 +464,6 @@ static inline pud_t pud_mkwrite(pud_t pud)
+ return pud_set_flags(pud, _PAGE_RW);
+ }
+
+-static inline pud_t pud_mknotpresent(pud_t pud)
+-{
+- return pud_clear_flags(pud, _PAGE_PRESENT | _PAGE_PROTNONE);
+-}
+-
+ #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
+ static inline int pte_soft_dirty(pte_t pte)
+ {
+@@ -545,25 +545,45 @@ static inline pgprotval_t check_pgprot(pgprot_t pgprot)
+
+ static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
+ {
+- return __pte(((phys_addr_t)page_nr << PAGE_SHIFT) |
+- check_pgprot(pgprot));
++ phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++ pfn ^= protnone_mask(pgprot_val(pgprot));
++ pfn &= PTE_PFN_MASK;
++ return __pte(pfn | check_pgprot(pgprot));
+ }
+
+ static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)
+ {
+- return __pmd(((phys_addr_t)page_nr << PAGE_SHIFT) |
+- check_pgprot(pgprot));
++ phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++ pfn ^= protnone_mask(pgprot_val(pgprot));
++ pfn &= PHYSICAL_PMD_PAGE_MASK;
++ return __pmd(pfn | check_pgprot(pgprot));
+ }
+
+ static inline pud_t pfn_pud(unsigned long page_nr, pgprot_t pgprot)
+ {
+- return __pud(((phys_addr_t)page_nr << PAGE_SHIFT) |
+- check_pgprot(pgprot));
++ phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++ pfn ^= protnone_mask(pgprot_val(pgprot));
++ pfn &= PHYSICAL_PUD_PAGE_MASK;
++ return __pud(pfn | check_pgprot(pgprot));
+ }
+
++static inline pmd_t pmd_mknotpresent(pmd_t pmd)
++{
++ return pfn_pmd(pmd_pfn(pmd),
++ __pgprot(pmd_flags(pmd) & ~(_PAGE_PRESENT|_PAGE_PROTNONE)));
++}
++
++static inline pud_t pud_mknotpresent(pud_t pud)
++{
++ return pfn_pud(pud_pfn(pud),
++ __pgprot(pud_flags(pud) & ~(_PAGE_PRESENT|_PAGE_PROTNONE)));
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask);
++
+ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ {
+- pteval_t val = pte_val(pte);
++ pteval_t val = pte_val(pte), oldval = val;
+
+ /*
+ * Chop off the NX bit (if present), and add the NX portion of
+@@ -571,17 +591,17 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ */
+ val &= _PAGE_CHG_MASK;
+ val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK;
+-
++ val = flip_protnone_guard(oldval, val, PTE_PFN_MASK);
+ return __pte(val);
+ }
+
+ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+ {
+- pmdval_t val = pmd_val(pmd);
++ pmdval_t val = pmd_val(pmd), oldval = val;
+
+ val &= _HPAGE_CHG_MASK;
+ val |= check_pgprot(newprot) & ~_HPAGE_CHG_MASK;
+-
++ val = flip_protnone_guard(oldval, val, PHYSICAL_PMD_PAGE_MASK);
+ return __pmd(val);
+ }
+
+@@ -1320,6 +1340,14 @@ static inline bool pud_access_permitted(pud_t pud, bool write)
+ return __pte_access_permitted(pud_val(pud), write);
+ }
+
++#define __HAVE_ARCH_PFN_MODIFY_ALLOWED 1
++extern bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot);
++
++static inline bool arch_has_pfn_modify_check(void)
++{
++ return boot_cpu_has_bug(X86_BUG_L1TF);
++}
++
+ #include <asm-generic/pgtable.h>
+ #endif /* __ASSEMBLY__ */
+
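Since pfn_pte() XORs protnone_mask() in on the way down and pte_pfn() XORs it back out on the way up, the inversion never becomes visible through the accessors; only the raw bits stored in the page table change. Roughly (a sketch, kernel types assumed):

  pte_t p = pfn_pte(0x1234, __pgprot(_PAGE_PROTNONE)); /* !_PAGE_PRESENT */
  /* pte_val(p) stores the inverted PFN (high physical bits set), */
  /* yet the accessor round-trips:                                */
  BUG_ON(pte_pfn(p) != 0x1234);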
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 877bc27718ae..ea99272ab63e 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -273,7 +273,7 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
+ *
+ * | ... | 11| 10| 9|8|7|6|5| 4| 3|2| 1|0| <- bit number
+ * | ... |SW3|SW2|SW1|G|L|D|A|CD|WT|U| W|P| <- bit names
+- * | OFFSET (14->63) | TYPE (9-13) |0|0|X|X| X| X|X|SD|0| <- swp entry
++ * | TYPE (59-63) | ~OFFSET (9-58) |0|0|X|X| X| X|X|SD|0| <- swp entry
+ *
+ * G (8) is aliased and used as a PROT_NONE indicator for
+ * !present ptes. We need to start storing swap entries above
+@@ -286,20 +286,34 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
+ *
+ * Bit 7 in swp entry should be 0 because pmd_present checks not only P,
+ * but also L and G.
++ *
++ * The offset is inverted by a binary not operation to make the high
++ * physical bits set.
+ */
+-#define SWP_TYPE_FIRST_BIT (_PAGE_BIT_PROTNONE + 1)
+-#define SWP_TYPE_BITS 5
+-/* Place the offset above the type: */
+-#define SWP_OFFSET_FIRST_BIT (SWP_TYPE_FIRST_BIT + SWP_TYPE_BITS)
++#define SWP_TYPE_BITS 5
++
++#define SWP_OFFSET_FIRST_BIT (_PAGE_BIT_PROTNONE + 1)
++
++/* We always extract/encode the offset by shifting it all the way up, and then down again */
++#define SWP_OFFSET_SHIFT (SWP_OFFSET_FIRST_BIT+SWP_TYPE_BITS)
+
+ #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)
+
+-#define __swp_type(x) (((x).val >> (SWP_TYPE_FIRST_BIT)) \
+- & ((1U << SWP_TYPE_BITS) - 1))
+-#define __swp_offset(x) ((x).val >> SWP_OFFSET_FIRST_BIT)
+-#define __swp_entry(type, offset) ((swp_entry_t) { \
+- ((type) << (SWP_TYPE_FIRST_BIT)) \
+- | ((offset) << SWP_OFFSET_FIRST_BIT) })
++/* Extract the high bits for type */
++#define __swp_type(x) ((x).val >> (64 - SWP_TYPE_BITS))
++
++/* Shift up (to get rid of type), then down to get value */
++#define __swp_offset(x) (~(x).val << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT)
++
++/*
++ * Shift the offset up "too far" by TYPE bits, then down again.
++ * The offset is inverted by a binary not operation to make the high
++ * physical bits set.
++ */
++#define __swp_entry(type, offset) ((swp_entry_t) { \
++ (~(unsigned long)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \
++ | ((unsigned long)(type) << (64-SWP_TYPE_BITS)) })
++
+ #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val((pte)) })
+ #define __pmd_to_swp_entry(pmd) ((swp_entry_t) { pmd_val((pmd)) })
+ #define __swp_entry_to_pte(x) ((pte_t) { .pte = (x).val })
+@@ -343,5 +357,7 @@ static inline bool gup_fast_permitted(unsigned long start, int nr_pages,
+ return true;
+ }
+
++#include <asm/pgtable-invert.h>
++
+ #endif /* !__ASSEMBLY__ */
+ #endif /* _ASM_X86_PGTABLE_64_H */
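Worked numbers for the new layout, sketched in userspace u64 arithmetic: with SWP_TYPE_BITS == 5 and _PAGE_BIT_PROTNONE == 8, SWP_OFFSET_FIRST_BIT is 9 and SWP_OFFSET_SHIFT is 14, so type 3 / offset 0x100 encode and decode as:

  unsigned long type = 3, offset = 0x100;
  unsigned long val  = (~offset << 14 >> 5) | (type << (64 - 5));
  /* type lands in bits 63..59, the inverted offset in bits 58..9 */

  unsigned long t = val >> (64 - 5);      /* == 3     */
  unsigned long o = ~val << 5 >> 14;      /* == 0x100 */

Because the offset is stored inverted, a swap PTE always carries set high physical-address bits, which keeps a speculatively formed L1TF address outside valid RAM.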
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 21a114914ba4..d7a9dea8563d 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -181,6 +181,11 @@ extern const struct seq_operations cpuinfo_op;
+
+ extern void cpu_detect(struct cpuinfo_x86 *c);
+
++static inline unsigned long l1tf_pfn_limit(void)
++{
++ return BIT(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT) - 1;
++}
++
+ extern void early_cpu_init(void);
+ extern void identify_boot_cpu(void);
+ extern void identify_secondary_cpu(struct cpuinfo_x86 *);
+@@ -986,4 +991,16 @@ bool xen_set_default_idle(void);
+ void stop_this_cpu(void *dummy);
+ void df_debug(struct pt_regs *regs, long error_code);
+ void microcode_check(void);
++
++enum l1tf_mitigations {
++ L1TF_MITIGATION_OFF,
++ L1TF_MITIGATION_FLUSH_NOWARN,
++ L1TF_MITIGATION_FLUSH,
++ L1TF_MITIGATION_FLUSH_NOSMT,
++ L1TF_MITIGATION_FULL,
++ L1TF_MITIGATION_FULL_FORCE
++};
++
++extern enum l1tf_mitigations l1tf_mitigation;
++
+ #endif /* _ASM_X86_PROCESSOR_H */
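Worked example: on a CPU with boot_cpu_data.x86_phys_bits == 46 the helper yields

  unsigned long limit = (1UL << (46 - 1 - 12)) - 1;  /* 0x1ffffffff PFNs */
  /* highest protectable address: (limit + 1) << 12 == 1UL << 45 (32 TiB),
   * i.e. exactly MAX_PA/2 for a 46-bit machine */

so the mitigation code can refuse PFNs at or above half the physical address space, where PTE inversion can no longer point speculation at non-existent memory.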
+diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
+index f75bff8f9d82..547c4fe50711 100644
+--- a/arch/x86/include/asm/smp.h
++++ b/arch/x86/include/asm/smp.h
+@@ -171,7 +171,6 @@ static inline int wbinvd_on_all_cpus(void)
+ wbinvd();
+ return 0;
+ }
+-#define smp_num_siblings 1
+ #endif /* CONFIG_SMP */
+
+ extern unsigned disabled_cpus;
+diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
+index c1d2a9892352..453cf38a1c33 100644
+--- a/arch/x86/include/asm/topology.h
++++ b/arch/x86/include/asm/topology.h
+@@ -123,13 +123,17 @@ static inline int topology_max_smt_threads(void)
+ }
+
+ int topology_update_package_map(unsigned int apicid, unsigned int cpu);
+-extern int topology_phys_to_logical_pkg(unsigned int pkg);
++int topology_phys_to_logical_pkg(unsigned int pkg);
++bool topology_is_primary_thread(unsigned int cpu);
++bool topology_smt_supported(void);
+ #else
+ #define topology_max_packages() (1)
+ static inline int
+ topology_update_package_map(unsigned int apicid, unsigned int cpu) { return 0; }
+ static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
+ static inline int topology_max_smt_threads(void) { return 1; }
++static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
++static inline bool topology_smt_supported(void) { return false; }
+ #endif
+
+ static inline void arch_fix_phys_package_id(int num, u32 slot)
+diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
+index 7cc81c586d71..9b71d0d24db1 100644
+--- a/arch/x86/include/asm/vmx.h
++++ b/arch/x86/include/asm/vmx.h
+@@ -574,4 +574,15 @@ enum vm_instruction_error_number {
+ VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID = 28,
+ };
+
++enum vmx_l1d_flush_state {
++ VMENTER_L1D_FLUSH_AUTO,
++ VMENTER_L1D_FLUSH_NEVER,
++ VMENTER_L1D_FLUSH_COND,
++ VMENTER_L1D_FLUSH_ALWAYS,
++ VMENTER_L1D_FLUSH_EPT_DISABLED,
++ VMENTER_L1D_FLUSH_NOT_REQUIRED,
++};
++
++extern enum vmx_l1d_flush_state l1tf_vmx_mitigation;
++
+ #endif
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index adbda5847b14..3b3a2d0af78d 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -56,6 +56,7 @@
+ #include <asm/hypervisor.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
++#include <asm/irq_regs.h>
+
+ unsigned int num_processors;
+
+@@ -2192,6 +2193,23 @@ static int cpuid_to_apicid[] = {
+ [0 ... NR_CPUS - 1] = -1,
+ };
+
++#ifdef CONFIG_SMP
++/**
++ * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
++ * @apicid: APIC ID to check
++ */
++bool apic_id_is_primary_thread(unsigned int apicid)
++{
++ u32 mask;
++
++ if (smp_num_siblings == 1)
++ return true;
++ /* Isolate the SMT bit(s) in the APICID and check for 0 */
++ mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
++ return !(apicid & mask);
++}
++#endif
++
+ /*
+ * Should use this API to allocate logical CPU IDs to keep nr_logical_cpuids
+ * and cpuid_to_apicid[] synchronized.
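Worked example of the mask computation: with smp_num_siblings == 2, fls(2) == 2, so

  u32 mask = (1U << (fls(2) - 1)) - 1;    /* == 0x1 */
  /* apicid 4 & 0x1 == 0 -> primary thread                  */
  /* apicid 5 & 0x1 == 1 -> SMT sibling, parked under nosmt */

i.e. with two threads per core the low APIC ID bit selects the sibling, and only APIC IDs with that bit clear count as primary threads.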
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 3982f79d2377..ff0d14cd9e82 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -33,6 +33,7 @@
+
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/init.h>
+ #include <linux/delay.h>
+ #include <linux/sched.h>
+diff --git a/arch/x86/kernel/apic/msi.c b/arch/x86/kernel/apic/msi.c
+index ce503c99f5c4..72a94401f9e0 100644
+--- a/arch/x86/kernel/apic/msi.c
++++ b/arch/x86/kernel/apic/msi.c
+@@ -12,6 +12,7 @@
+ */
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/pci.h>
+ #include <linux/dmar.h>
+ #include <linux/hpet.h>
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index b708f597eee3..9f38b4140c27 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -11,6 +11,7 @@
+ * published by the Free Software Foundation.
+ */
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/seq_file.h>
+ #include <linux/init.h>
+ #include <linux/compiler.h>
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 1b18be3f35a8..02fc277a56f2 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -298,7 +298,6 @@ static int nearby_node(int apicid)
+ }
+ #endif
+
+-#ifdef CONFIG_SMP
+ /*
+ * Fix up cpu_core_id for pre-F17h systems to be in the
+ * [0 .. cores_per_node - 1] range. Not really needed but
+@@ -315,6 +314,13 @@ static void legacy_fixup_core_id(struct cpuinfo_x86 *c)
+ c->cpu_core_id %= cus_per_node;
+ }
+
++
++static void amd_get_topology_early(struct cpuinfo_x86 *c)
++{
++ if (cpu_has(c, X86_FEATURE_TOPOEXT))
++ smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
++}
++
+ /*
+ * Fixup core topology information for
+ * (1) AMD multi-node processors
+@@ -333,7 +339,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
+ cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
+
+ node_id = ecx & 0xff;
+- smp_num_siblings = ((ebx >> 8) & 0xff) + 1;
+
+ if (c->x86 == 0x15)
+ c->cu_id = ebx & 0xff;
+@@ -376,7 +381,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
+ legacy_fixup_core_id(c);
+ }
+ }
+-#endif
+
+ /*
+ * On an AMD dual core setup the lower bits of the APIC id distinguish the cores.
+@@ -384,7 +388,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
+ */
+ static void amd_detect_cmp(struct cpuinfo_x86 *c)
+ {
+-#ifdef CONFIG_SMP
+ unsigned bits;
+ int cpu = smp_processor_id();
+
+@@ -396,16 +399,11 @@ static void amd_detect_cmp(struct cpuinfo_x86 *c)
+ /* use socket ID also for last level cache */
+ per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
+ amd_get_topology(c);
+-#endif
+ }
+
+ u16 amd_get_nb_id(int cpu)
+ {
+- u16 id = 0;
+-#ifdef CONFIG_SMP
+- id = per_cpu(cpu_llc_id, cpu);
+-#endif
+- return id;
++ return per_cpu(cpu_llc_id, cpu);
+ }
+ EXPORT_SYMBOL_GPL(amd_get_nb_id);
+
+@@ -624,6 +622,7 @@ clear_sev:
+
+ static void early_init_amd(struct cpuinfo_x86 *c)
+ {
++ u64 value;
+ u32 dummy;
+
+ early_init_amd_mc(c);
+@@ -694,6 +693,22 @@ static void early_init_amd(struct cpuinfo_x86 *c)
+ set_cpu_bug(c, X86_BUG_AMD_E400);
+
+ early_detect_mem_encrypt(c);
++
++ /* Re-enable TopologyExtensions if switched off by BIOS */
++ if (c->x86 == 0x15 &&
++ (c->x86_model >= 0x10 && c->x86_model <= 0x6f) &&
++ !cpu_has(c, X86_FEATURE_TOPOEXT)) {
++
++ if (msr_set_bit(0xc0011005, 54) > 0) {
++ rdmsrl(0xc0011005, value);
++ if (value & BIT_64(54)) {
++ set_cpu_cap(c, X86_FEATURE_TOPOEXT);
++ pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n");
++ }
++ }
++ }
++
++ amd_get_topology_early(c);
+ }
+
+ static void init_amd_k8(struct cpuinfo_x86 *c)
+@@ -785,19 +800,6 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ {
+ u64 value;
+
+- /* re-enable TopologyExtensions if switched off by BIOS */
+- if ((c->x86_model >= 0x10) && (c->x86_model <= 0x6f) &&
+- !cpu_has(c, X86_FEATURE_TOPOEXT)) {
+-
+- if (msr_set_bit(0xc0011005, 54) > 0) {
+- rdmsrl(0xc0011005, value);
+- if (value & BIT_64(54)) {
+- set_cpu_cap(c, X86_FEATURE_TOPOEXT);
+- pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n");
+- }
+- }
+- }
+-
+ /*
+ * The way access filter has a performance penalty on some workloads.
+ * Disable it on the affected CPUs.
+@@ -861,15 +863,8 @@ static void init_amd(struct cpuinfo_x86 *c)
+
+ cpu_detect_cache_sizes(c);
+
+- /* Multi core CPU? */
+- if (c->extended_cpuid_level >= 0x80000008) {
+- amd_detect_cmp(c);
+- srat_detect_node(c);
+- }
+-
+-#ifdef CONFIG_X86_32
+- detect_ht(c);
+-#endif
++ amd_detect_cmp(c);
++ srat_detect_node(c);
+
+ init_amd_cacheinfo(c);
+
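The TOPOEXT re-enable above (moved out of init_amd_bd() so it runs before the early topology probe) relies on msr_set_bit() returning > 0 only when the bit actually had to be flipped; the rdmsrl() afterwards re-checks that the BIOS did not lock the bit. A simplified sketch of that helper's read-modify-write, ignoring the fault handling of the real implementation:

  static int msr_set_bit_sketch(u32 msr, int bit)
  {
          u64 v;

          rdmsrl(msr, v);
          if (v & BIT_64(bit))
                  return 0;       /* already set, nothing written */
          wrmsrl(msr, v | BIT_64(bit));
          return 1;               /* changed; caller re-reads to verify */
  }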
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 7416fc206b4a..edfc64a8a154 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -22,14 +22,17 @@
+ #include <asm/processor-flags.h>
+ #include <asm/fpu/internal.h>
+ #include <asm/msr.h>
++#include <asm/vmx.h>
+ #include <asm/paravirt.h>
+ #include <asm/alternative.h>
+ #include <asm/pgtable.h>
+ #include <asm/set_memory.h>
+ #include <asm/intel-family.h>
++#include <asm/e820/api.h>
+
+ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
++static void __init l1tf_select_mitigation(void);
+
+ /*
+ * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
+@@ -55,6 +58,12 @@ void __init check_bugs(void)
+ {
+ identify_boot_cpu();
+
++ /*
++ * identify_boot_cpu() initialized SMT support information, let the
++ * core code know.
++ */
++ cpu_smt_check_topology_early();
++
+ if (!IS_ENABLED(CONFIG_SMP)) {
+ pr_info("CPU: ");
+ print_cpu_info(&boot_cpu_data);
+@@ -81,6 +90,8 @@ void __init check_bugs(void)
+ */
+ ssb_select_mitigation();
+
++ l1tf_select_mitigation();
++
+ #ifdef CONFIG_X86_32
+ /*
+ * Check whether we are able to run this kernel safely on SMP.
+@@ -311,23 +322,6 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ return cmd;
+ }
+
+-/* Check for Skylake-like CPUs (for RSB handling) */
+-static bool __init is_skylake_era(void)
+-{
+- if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+- boot_cpu_data.x86 == 6) {
+- switch (boot_cpu_data.x86_model) {
+- case INTEL_FAM6_SKYLAKE_MOBILE:
+- case INTEL_FAM6_SKYLAKE_DESKTOP:
+- case INTEL_FAM6_SKYLAKE_X:
+- case INTEL_FAM6_KABYLAKE_MOBILE:
+- case INTEL_FAM6_KABYLAKE_DESKTOP:
+- return true;
+- }
+- }
+- return false;
+-}
+-
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -388,22 +382,15 @@ retpoline_auto:
+ pr_info("%s\n", spectre_v2_strings[mode]);
+
+ /*
+- * If neither SMEP nor PTI are available, there is a risk of
+- * hitting userspace addresses in the RSB after a context switch
+- * from a shallow call stack to a deeper one. To prevent this fill
+- * the entire RSB, even when using IBRS.
++ * If spectre v2 protection has been enabled, unconditionally fill
++ * RSB during a context switch; this protects against two independent
++ * issues:
+ *
+- * Skylake era CPUs have a separate issue with *underflow* of the
+- * RSB, when they will predict 'ret' targets from the generic BTB.
+- * The proper mitigation for this is IBRS. If IBRS is not supported
+- * or deactivated in favour of retpolines the RSB fill on context
+- * switch is required.
++ * - RSB underflow (and switch to BTB) on Skylake+
++ * - SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs
+ */
+- if ((!boot_cpu_has(X86_FEATURE_PTI) &&
+- !boot_cpu_has(X86_FEATURE_SMEP)) || is_skylake_era()) {
+- setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+- pr_info("Spectre v2 mitigation: Filling RSB on context switch\n");
+- }
++ setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
++ pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
+
+ /* Initialize Indirect Branch Prediction Barrier if supported */
+ if (boot_cpu_has(X86_FEATURE_IBPB)) {
+@@ -654,8 +641,121 @@ void x86_spec_ctrl_setup_ap(void)
+ x86_amd_ssb_disable();
+ }
+
++#undef pr_fmt
++#define pr_fmt(fmt) "L1TF: " fmt
++
++/* Default mitigation for L1TF-affected CPUs */
++enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++EXPORT_SYMBOL_GPL(l1tf_mitigation);
++
++enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
++EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
++#endif
++
++static void __init l1tf_select_mitigation(void)
++{
++ u64 half_pa;
++
++ if (!boot_cpu_has_bug(X86_BUG_L1TF))
++ return;
++
++ switch (l1tf_mitigation) {
++ case L1TF_MITIGATION_OFF:
++ case L1TF_MITIGATION_FLUSH_NOWARN:
++ case L1TF_MITIGATION_FLUSH:
++ break;
++ case L1TF_MITIGATION_FLUSH_NOSMT:
++ case L1TF_MITIGATION_FULL:
++ cpu_smt_disable(false);
++ break;
++ case L1TF_MITIGATION_FULL_FORCE:
++ cpu_smt_disable(true);
++ break;
++ }
++
++#if CONFIG_PGTABLE_LEVELS == 2
++ pr_warn("Kernel not compiled for PAE. No mitigation for L1TF\n");
++ return;
++#endif
++
++ /*
++	 * This is extremely unlikely to happen because almost all
++	 * systems have far more MAX_PA/2 address space than RAM that
++	 * can be fitted into DIMM slots.
++ */
++ half_pa = (u64)l1tf_pfn_limit() << PAGE_SHIFT;
++ if (e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) {
++ pr_warn("System has more than MAX_PA/2 memory. L1TF mitigation not effective.\n");
++ return;
++ }
++
++ setup_force_cpu_cap(X86_FEATURE_L1TF_PTEINV);
++}
++
++static int __init l1tf_cmdline(char *str)
++{
++ if (!boot_cpu_has_bug(X86_BUG_L1TF))
++ return 0;
++
++ if (!str)
++ return -EINVAL;
++
++ if (!strcmp(str, "off"))
++ l1tf_mitigation = L1TF_MITIGATION_OFF;
++ else if (!strcmp(str, "flush,nowarn"))
++ l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOWARN;
++ else if (!strcmp(str, "flush"))
++ l1tf_mitigation = L1TF_MITIGATION_FLUSH;
++ else if (!strcmp(str, "flush,nosmt"))
++ l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
++ else if (!strcmp(str, "full"))
++ l1tf_mitigation = L1TF_MITIGATION_FULL;
++ else if (!strcmp(str, "full,force"))
++ l1tf_mitigation = L1TF_MITIGATION_FULL_FORCE;
++
++ return 0;
++}
++early_param("l1tf", l1tf_cmdline);
++
++#undef pr_fmt
++
+ #ifdef CONFIG_SYSFS
+
++#define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"
++
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++static const char *l1tf_vmx_states[] = {
++ [VMENTER_L1D_FLUSH_AUTO] = "auto",
++ [VMENTER_L1D_FLUSH_NEVER] = "vulnerable",
++ [VMENTER_L1D_FLUSH_COND] = "conditional cache flushes",
++ [VMENTER_L1D_FLUSH_ALWAYS] = "cache flushes",
++ [VMENTER_L1D_FLUSH_EPT_DISABLED] = "EPT disabled",
++ [VMENTER_L1D_FLUSH_NOT_REQUIRED] = "flush not necessary"
++};
++
++static ssize_t l1tf_show_state(char *buf)
++{
++ if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO)
++ return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
++
++ if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED ||
++ (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER &&
++ cpu_smt_control == CPU_SMT_ENABLED))
++ return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,
++ l1tf_vmx_states[l1tf_vmx_mitigation]);
++
++ return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
++ l1tf_vmx_states[l1tf_vmx_mitigation],
++ cpu_smt_control == CPU_SMT_ENABLED ? "vulnerable" : "disabled");
++}
++#else
++static ssize_t l1tf_show_state(char *buf)
++{
++ return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
++}
++#endif
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ char *buf, unsigned int bug)
+ {
+@@ -681,6 +781,10 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ case X86_BUG_SPEC_STORE_BYPASS:
+ return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+
++ case X86_BUG_L1TF:
++ if (boot_cpu_has(X86_FEATURE_L1TF_PTEINV))
++ return l1tf_show_state(buf);
++ break;
+ default:
+ break;
+ }
+@@ -707,4 +811,9 @@ ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *
+ {
+ return cpu_show_common(dev, attr, buf, X86_BUG_SPEC_STORE_BYPASS);
+ }
++
++ssize_t cpu_show_l1tf(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return cpu_show_common(dev, attr, buf, X86_BUG_L1TF);
++}
+ #endif
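In practice the knob is driven from the kernel command line: l1tf=off, l1tf=flush (the default), l1tf=flush,nowarn, l1tf=flush,nosmt, l1tf=full or l1tf=full,force. The combined state is then reported through the new vulnerabilities file; with the default flush mode, SMT left enabled and KVM loaded, the strings above compose to, for example:

  # cat /sys/devices/system/cpu/vulnerabilities/l1tf
  Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable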
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 38276f58d3bf..6c54d8b0e5dc 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -66,6 +66,13 @@ cpumask_var_t cpu_callin_mask;
+ /* representing cpus for which sibling maps can be computed */
+ cpumask_var_t cpu_sibling_setup_mask;
+
++/* Number of siblings per CPU package */
++int smp_num_siblings = 1;
++EXPORT_SYMBOL(smp_num_siblings);
++
++/* Last level cache ID of each logical CPU */
++DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
++
+ /* correctly size the local cpu masks */
+ void __init setup_cpu_local_masks(void)
+ {
+@@ -638,33 +645,36 @@ static void cpu_detect_tlb(struct cpuinfo_x86 *c)
+ tlb_lld_4m[ENTRIES], tlb_lld_1g[ENTRIES]);
+ }
+
+-void detect_ht(struct cpuinfo_x86 *c)
++int detect_ht_early(struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+ u32 eax, ebx, ecx, edx;
+- int index_msb, core_bits;
+- static bool printed;
+
+ if (!cpu_has(c, X86_FEATURE_HT))
+- return;
++ return -1;
+
+ if (cpu_has(c, X86_FEATURE_CMP_LEGACY))
+- goto out;
++ return -1;
+
+ if (cpu_has(c, X86_FEATURE_XTOPOLOGY))
+- return;
++ return -1;
+
+ cpuid(1, &eax, &ebx, &ecx, &edx);
+
+ smp_num_siblings = (ebx & 0xff0000) >> 16;
+-
+- if (smp_num_siblings == 1) {
++ if (smp_num_siblings == 1)
+ pr_info_once("CPU0: Hyper-Threading is disabled\n");
+- goto out;
+- }
++#endif
++ return 0;
++}
+
+- if (smp_num_siblings <= 1)
+- goto out;
++void detect_ht(struct cpuinfo_x86 *c)
++{
++#ifdef CONFIG_SMP
++ int index_msb, core_bits;
++
++ if (detect_ht_early(c) < 0)
++ return;
+
+ index_msb = get_count_order(smp_num_siblings);
+ c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid, index_msb);
+@@ -677,15 +687,6 @@ void detect_ht(struct cpuinfo_x86 *c)
+
+ c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid, index_msb) &
+ ((1 << core_bits) - 1);
+-
+-out:
+- if (!printed && (c->x86_max_cores * smp_num_siblings) > 1) {
+- pr_info("CPU: Physical Processor ID: %d\n",
+- c->phys_proc_id);
+- pr_info("CPU: Processor Core ID: %d\n",
+- c->cpu_core_id);
+- printed = 1;
+- }
+ #endif
+ }
+
+@@ -958,6 +959,21 @@ static const __initconst struct x86_cpu_id cpu_no_spec_store_bypass[] = {
+ {}
+ };
+
++static const __initconst struct x86_cpu_id cpu_no_l1tf[] = {
++ /* in addition to cpu_no_speculation */
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT1 },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT2 },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_AIRMONT },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_MERRIFIELD },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_MOOREFIELD },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_GOLDMONT },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_DENVERTON },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_GEMINI_LAKE },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_XEON_PHI_KNL },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_XEON_PHI_KNM },
++ {}
++};
++
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ u64 ia32_cap = 0;
+@@ -983,6 +999,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ return;
+
+ setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
++
++ if (x86_match_cpu(cpu_no_l1tf))
++ return;
++
++ setup_force_cpu_bug(X86_BUG_L1TF);
+ }
+
+ /*
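detect_ht_early() takes the sibling count from CPUID leaf 1, EBX bits 23:16 ("maximum number of addressable IDs for logical processors in this package"). The same probe, sketched as a userspace snippet using GCC's cpuid.h (illustrative only):

  #include <cpuid.h>

  static unsigned int max_logical_ids(void)
  {
          unsigned int eax, ebx, ecx, edx;

          if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                  return 1;
          return (ebx & 0xff0000) >> 16;
  }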
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 37672d299e35..cca588407dca 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -47,6 +47,8 @@ extern const struct cpu_dev *const __x86_cpu_dev_start[],
+
+ extern void get_cpu_cap(struct cpuinfo_x86 *c);
+ extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
++extern int detect_extended_topology_early(struct cpuinfo_x86 *c);
++extern int detect_ht_early(struct cpuinfo_x86 *c);
+
+ unsigned int aperfmperf_get_khz(int cpu);
+
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 577e7f7ae273..bfaaa92ef6de 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -301,6 +301,13 @@ static void early_init_intel(struct cpuinfo_x86 *c)
+ }
+
+ check_mpx_erratum(c);
++
++ /*
++ * Get the number of SMT siblings early from the extended topology
++ * leaf, if available. Otherwise try the legacy SMT detection.
++ */
++ if (detect_extended_topology_early(c) < 0)
++ detect_ht_early(c);
+ }
+
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 08286269fd24..b9bc8a1a584e 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -509,12 +509,20 @@ static struct platform_device *microcode_pdev;
+
+ static int check_online_cpus(void)
+ {
+- if (num_online_cpus() == num_present_cpus())
+- return 0;
++ unsigned int cpu;
+
+- pr_err("Not all CPUs online, aborting microcode update.\n");
++ /*
++ * Make sure all CPUs are online. It's fine for SMT to be disabled if
++ * all the primary threads are still online.
++ */
++ for_each_present_cpu(cpu) {
++ if (topology_is_primary_thread(cpu) && !cpu_online(cpu)) {
++ pr_err("Not all CPUs online, aborting microcode update.\n");
++ return -EINVAL;
++ }
++ }
+
+- return -EINVAL;
++ return 0;
+ }
+
+ static atomic_t late_cpus_in;
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index b099024d339c..19c6e800e816 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -27,16 +27,13 @@
+ * exists, use it for populating initial_apicid and cpu topology
+ * detection.
+ */
+-void detect_extended_topology(struct cpuinfo_x86 *c)
++int detect_extended_topology_early(struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+- unsigned int eax, ebx, ecx, edx, sub_index;
+- unsigned int ht_mask_width, core_plus_mask_width;
+- unsigned int core_select_mask, core_level_siblings;
+- static bool printed;
++ unsigned int eax, ebx, ecx, edx;
+
+ if (c->cpuid_level < 0xb)
+- return;
++ return -1;
+
+ cpuid_count(0xb, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
+
+@@ -44,7 +41,7 @@ void detect_extended_topology(struct cpuinfo_x86 *c)
+ * check if the cpuid leaf 0xb is actually implemented.
+ */
+ if (ebx == 0 || (LEAFB_SUBTYPE(ecx) != SMT_TYPE))
+- return;
++ return -1;
+
+ set_cpu_cap(c, X86_FEATURE_XTOPOLOGY);
+
+@@ -52,10 +49,30 @@ void detect_extended_topology(struct cpuinfo_x86 *c)
+ * initial apic id, which also represents 32-bit extended x2apic id.
+ */
+ c->initial_apicid = edx;
++ smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
++#endif
++ return 0;
++}
++
++/*
++ * Check for extended topology enumeration cpuid leaf 0xb and if it
++ * exists, use it for populating initial_apicid and cpu topology
++ * detection.
++ */
++void detect_extended_topology(struct cpuinfo_x86 *c)
++{
++#ifdef CONFIG_SMP
++ unsigned int eax, ebx, ecx, edx, sub_index;
++ unsigned int ht_mask_width, core_plus_mask_width;
++ unsigned int core_select_mask, core_level_siblings;
++
++ if (detect_extended_topology_early(c) < 0)
++ return;
+
+ /*
+ * Populate HT related information from sub-leaf level 0.
+ */
++ cpuid_count(0xb, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
+ core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
+ core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+
+@@ -86,15 +103,5 @@ void detect_extended_topology(struct cpuinfo_x86 *c)
+ c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+
+ c->x86_max_cores = (core_level_siblings / smp_num_siblings);
+-
+- if (!printed) {
+- pr_info("CPU: Physical Processor ID: %d\n",
+- c->phys_proc_id);
+- if (c->x86_max_cores > 1)
+- pr_info("CPU: Processor Core ID: %d\n",
+- c->cpu_core_id);
+- printed = 1;
+- }
+- return;
+ #endif
+ }
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index f92a6593de1e..2ea85b32421a 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -10,6 +10,7 @@
+ #include <asm/fpu/signal.h>
+ #include <asm/fpu/types.h>
+ #include <asm/traps.h>
++#include <asm/irq_regs.h>
+
+ #include <linux/hardirq.h>
+ #include <linux/pkeys.h>
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 8ce4212e2b8d..afa1a204bc6d 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -1,6 +1,7 @@
+ #include <linux/clocksource.h>
+ #include <linux/clockchips.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/export.h>
+ #include <linux/delay.h>
+ #include <linux/errno.h>
+diff --git a/arch/x86/kernel/i8259.c b/arch/x86/kernel/i8259.c
+index 86c4439f9d74..519649ddf100 100644
+--- a/arch/x86/kernel/i8259.c
++++ b/arch/x86/kernel/i8259.c
+@@ -5,6 +5,7 @@
+ #include <linux/sched.h>
+ #include <linux/ioport.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/timex.h>
+ #include <linux/random.h>
+ #include <linux/init.h>
+diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
+index 2c3a1b4294eb..7f6cffaa5322 100644
+--- a/arch/x86/kernel/idt.c
++++ b/arch/x86/kernel/idt.c
+@@ -8,6 +8,7 @@
+ #include <asm/traps.h>
+ #include <asm/proto.h>
+ #include <asm/desc.h>
++#include <asm/hw_irq.h>
+
+ struct idt_data {
+ unsigned int vector;
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 328d027d829d..59b5f2ea7c2f 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -10,6 +10,7 @@
+ #include <linux/ftrace.h>
+ #include <linux/delay.h>
+ #include <linux/export.h>
++#include <linux/irq.h>
+
+ #include <asm/apic.h>
+ #include <asm/io_apic.h>
+diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
+index c1bdbd3d3232..95600a99ae93 100644
+--- a/arch/x86/kernel/irq_32.c
++++ b/arch/x86/kernel/irq_32.c
+@@ -11,6 +11,7 @@
+
+ #include <linux/seq_file.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/kernel_stat.h>
+ #include <linux/notifier.h>
+ #include <linux/cpu.h>
+diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
+index d86e344f5b3d..0469cd078db1 100644
+--- a/arch/x86/kernel/irq_64.c
++++ b/arch/x86/kernel/irq_64.c
+@@ -11,6 +11,7 @@
+
+ #include <linux/kernel_stat.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/seq_file.h>
+ #include <linux/delay.h>
+ #include <linux/ftrace.h>
+diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c
+index 772196c1b8c4..a0693b71cfc1 100644
+--- a/arch/x86/kernel/irqinit.c
++++ b/arch/x86/kernel/irqinit.c
+@@ -5,6 +5,7 @@
+ #include <linux/sched.h>
+ #include <linux/ioport.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/timex.h>
+ #include <linux/random.h>
+ #include <linux/kprobes.h>
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 6f4d42377fe5..44e26dc326d5 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -395,8 +395,6 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
+ - (u8 *) real;
+ if ((s64) (s32) newdisp != newdisp) {
+ pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp);
+- pr_err("\tSrc: %p, Dest: %p, old disp: %x\n",
+- src, real, insn->displacement.value);
+ return 0;
+ }
+ disp = (u8 *) dest + insn_offset_displacement(insn);
+@@ -640,8 +638,7 @@ static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
+ * Raise a BUG or we'll continue in an endless reentering loop
+ * and eventually a stack overflow.
+ */
+- printk(KERN_WARNING "Unrecoverable kprobe detected at %p.\n",
+- p->addr);
++ pr_err("Unrecoverable kprobe detected.\n");
+ dump_kprobe(p);
+ BUG();
+ default:
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 99dc79e76bdc..930c88341e4e 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -88,10 +88,12 @@ unsigned paravirt_patch_call(void *insnbuf,
+ struct branch *b = insnbuf;
+ unsigned long delta = (unsigned long)target - (addr+5);
+
+- if (tgt_clobbers & ~site_clobbers)
+- return len; /* target would clobber too much for this site */
+- if (len < 5)
++ if (len < 5) {
++#ifdef CONFIG_RETPOLINE
++		WARN_ONCE(1, "Failing to patch indirect CALL in %ps\n", (void *)addr);
++#endif
+ return len; /* call too long for patch site */
++ }
+
+ b->opcode = 0xe8; /* call */
+ b->delta = delta;
+@@ -106,8 +108,12 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
+ struct branch *b = insnbuf;
+ unsigned long delta = (unsigned long)target - (addr+5);
+
+- if (len < 5)
++ if (len < 5) {
++#ifdef CONFIG_RETPOLINE
++		WARN_ONCE(1, "Failing to patch indirect JMP in %ps\n", (void *)addr);
++#endif
+ return len; /* call too long for patch site */
++ }
+
+ b->opcode = 0xe9; /* jmp */
+ b->delta = delta;
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 5c623dfe39d1..89fd35349412 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -823,6 +823,12 @@ void __init setup_arch(char **cmdline_p)
+ memblock_reserve(__pa_symbol(_text),
+ (unsigned long)__bss_stop - (unsigned long)_text);
+
++ /*
++ * Make sure page 0 is always reserved because on systems with
++ * L1TF its contents can be leaked to user processes.
++ */
++ memblock_reserve(0, PAGE_SIZE);
++
+ early_reserve_initrd();
+
+ /*
+diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
+index 5c574dff4c1a..04adc8d60aed 100644
+--- a/arch/x86/kernel/smp.c
++++ b/arch/x86/kernel/smp.c
+@@ -261,6 +261,7 @@ __visible void __irq_entry smp_reschedule_interrupt(struct pt_regs *regs)
+ {
+ ack_APIC_irq();
+ inc_irq_stat(irq_resched_count);
++ kvm_set_cpu_l1tf_flush_l1d();
+
+ if (trace_resched_ipi_enabled()) {
+ /*
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 9dd324ae4832..f5d30c68fd09 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -80,13 +80,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/spec-ctrl.h>
+-
+-/* Number of siblings per CPU package */
+-int smp_num_siblings = 1;
+-EXPORT_SYMBOL(smp_num_siblings);
+-
+-/* Last level cache ID of each logical CPU */
+-DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
++#include <asm/hw_irq.h>
+
+ /* representing HT siblings of each logical CPU */
+ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
+@@ -272,6 +266,23 @@ static void notrace start_secondary(void *unused)
+ cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+
++/**
++ * topology_is_primary_thread - Check whether CPU is the primary SMT thread
++ * @cpu: CPU to check
++ */
++bool topology_is_primary_thread(unsigned int cpu)
++{
++ return apic_id_is_primary_thread(per_cpu(x86_cpu_to_apicid, cpu));
++}
++
++/**
++ * topology_smt_supported - Check whether SMT is supported by the CPUs
++ */
++bool topology_smt_supported(void)
++{
++ return smp_num_siblings > 1;
++}
++
+ /**
+ * topology_phys_to_logical_pkg - Map a physical package id to a logical
+ *
+diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
+index 774ebafa97c4..be01328eb755 100644
+--- a/arch/x86/kernel/time.c
++++ b/arch/x86/kernel/time.c
+@@ -12,6 +12,7 @@
+
+ #include <linux/clockchips.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/i8253.h>
+ #include <linux/time.h>
+ #include <linux/export.h>
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 030c6bb240d9..2b974d4e1489 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -3836,6 +3836,7 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
+ {
+ int r = 1;
+
++ vcpu->arch.l1tf_flush_l1d = true;
+ switch (vcpu->arch.apf.host_apf_reason) {
+ default:
+ trace_kvm_page_fault(fault_address, error_code);
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 7a28959f1985..12cad70acc3b 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -188,6 +188,150 @@ module_param(ple_window_max, uint, 0444);
+
+ extern const ulong vmx_return;
+
++static DEFINE_STATIC_KEY_FALSE(vmx_l1d_should_flush);
++static DEFINE_STATIC_KEY_FALSE(vmx_l1d_flush_cond);
++static DEFINE_MUTEX(vmx_l1d_flush_mutex);
++
++/* Storage for pre module init parameter parsing */
++static enum vmx_l1d_flush_state __read_mostly vmentry_l1d_flush_param = VMENTER_L1D_FLUSH_AUTO;
++
++static const struct {
++ const char *option;
++ enum vmx_l1d_flush_state cmd;
++} vmentry_l1d_param[] = {
++ {"auto", VMENTER_L1D_FLUSH_AUTO},
++ {"never", VMENTER_L1D_FLUSH_NEVER},
++ {"cond", VMENTER_L1D_FLUSH_COND},
++ {"always", VMENTER_L1D_FLUSH_ALWAYS},
++};
++
++#define L1D_CACHE_ORDER 4
++static void *vmx_l1d_flush_pages;
++
++static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
++{
++ struct page *page;
++ unsigned int i;
++
++ if (!enable_ept) {
++ l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_EPT_DISABLED;
++ return 0;
++ }
++
++ if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) {
++ u64 msr;
++
++ rdmsrl(MSR_IA32_ARCH_CAPABILITIES, msr);
++ if (msr & ARCH_CAP_SKIP_VMENTRY_L1DFLUSH) {
++ l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED;
++ return 0;
++ }
++ }
++
++ /* If set to auto use the default l1tf mitigation method */
++ if (l1tf == VMENTER_L1D_FLUSH_AUTO) {
++ switch (l1tf_mitigation) {
++ case L1TF_MITIGATION_OFF:
++ l1tf = VMENTER_L1D_FLUSH_NEVER;
++ break;
++ case L1TF_MITIGATION_FLUSH_NOWARN:
++ case L1TF_MITIGATION_FLUSH:
++ case L1TF_MITIGATION_FLUSH_NOSMT:
++ l1tf = VMENTER_L1D_FLUSH_COND;
++ break;
++ case L1TF_MITIGATION_FULL:
++ case L1TF_MITIGATION_FULL_FORCE:
++ l1tf = VMENTER_L1D_FLUSH_ALWAYS;
++ break;
++ }
++ } else if (l1tf_mitigation == L1TF_MITIGATION_FULL_FORCE) {
++ l1tf = VMENTER_L1D_FLUSH_ALWAYS;
++ }
++
++ if (l1tf != VMENTER_L1D_FLUSH_NEVER && !vmx_l1d_flush_pages &&
++ !boot_cpu_has(X86_FEATURE_FLUSH_L1D)) {
++ page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER);
++ if (!page)
++ return -ENOMEM;
++ vmx_l1d_flush_pages = page_address(page);
++
++ /*
++ * Initialize each page with a different pattern in
++ * order to protect against KSM in the nested
++ * virtualization case.
++ */
++ for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) {
++ memset(vmx_l1d_flush_pages + i * PAGE_SIZE, i + 1,
++ PAGE_SIZE);
++ }
++ }
++
++ l1tf_vmx_mitigation = l1tf;
++
++ if (l1tf != VMENTER_L1D_FLUSH_NEVER)
++ static_branch_enable(&vmx_l1d_should_flush);
++ else
++ static_branch_disable(&vmx_l1d_should_flush);
++
++ if (l1tf == VMENTER_L1D_FLUSH_COND)
++ static_branch_enable(&vmx_l1d_flush_cond);
++ else
++ static_branch_disable(&vmx_l1d_flush_cond);
++ return 0;
++}
++
++static int vmentry_l1d_flush_parse(const char *s)
++{
++ unsigned int i;
++
++ if (s) {
++ for (i = 0; i < ARRAY_SIZE(vmentry_l1d_param); i++) {
++ if (sysfs_streq(s, vmentry_l1d_param[i].option))
++ return vmentry_l1d_param[i].cmd;
++ }
++ }
++ return -EINVAL;
++}
++
++static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
++{
++ int l1tf, ret;
++
++ if (!boot_cpu_has(X86_BUG_L1TF))
++ return 0;
++
++ l1tf = vmentry_l1d_flush_parse(s);
++ if (l1tf < 0)
++ return l1tf;
++
++ /*
++ * Has vmx_init() run already? If not then this is the pre init
++ * parameter parsing. In that case just store the value and let
++ * vmx_init() do the proper setup after enable_ept has been
++ * established.
++ */
++ if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO) {
++ vmentry_l1d_flush_param = l1tf;
++ return 0;
++ }
++
++ mutex_lock(&vmx_l1d_flush_mutex);
++ ret = vmx_setup_l1d_flush(l1tf);
++ mutex_unlock(&vmx_l1d_flush_mutex);
++ return ret;
++}
++
++static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp)
++{
++ return sprintf(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option);
++}
++
++static const struct kernel_param_ops vmentry_l1d_flush_ops = {
++ .set = vmentry_l1d_flush_set,
++ .get = vmentry_l1d_flush_get,
++};
++module_param_cb(vmentry_l1d_flush, &vmentry_l1d_flush_ops, NULL, 0644);
++
+ struct kvm_vmx {
+ struct kvm kvm;
+
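Because the parameter is registered through module_param_cb() with mode 0644 and a .set handler, it can be chosen at boot (kvm-intel.vmentry_l1d_flush=auto|never|cond|always) and also changed at runtime, e.g.:

  # echo always > /sys/module/kvm_intel/parameters/vmentry_l1d_flush

Writes that arrive before vmx_init() has run are parked in vmentry_l1d_flush_param and applied by vmx_setup_l1d_flush() once enable_ept is known.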
+@@ -591,6 +735,11 @@ static inline int pi_test_sn(struct pi_desc *pi_desc)
+ (unsigned long *)&pi_desc->control);
+ }
+
++struct vmx_msrs {
++ unsigned int nr;
++ struct vmx_msr_entry val[NR_AUTOLOAD_MSRS];
++};
++
+ struct vcpu_vmx {
+ struct kvm_vcpu vcpu;
+ unsigned long host_rsp;
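Splitting msr_autoload into separate guest/host vmx_msrs arrays lets the two lists diverge: an MSR added with entry_only == true (see the reworked add_atomic_switch_msr() below) takes a slot in the guest list only, so VM_ENTRY_MSR_LOAD_COUNT and VM_EXIT_MSR_LOAD_COUNT no longer have to match. Illustrative call, with a placeholder MSR name that is not from this patch:

  /* guest.nr grows, host.nr stays: no host value is restored on exit */
  add_atomic_switch_msr(vmx, MSR_HYPOTHETICAL, guest_val, 0, true);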
+@@ -624,9 +773,8 @@ struct vcpu_vmx {
+ struct loaded_vmcs *loaded_vmcs;
+ bool __launched; /* temporary, used in vmx_vcpu_run */
+ struct msr_autoload {
+- unsigned nr;
+- struct vmx_msr_entry guest[NR_AUTOLOAD_MSRS];
+- struct vmx_msr_entry host[NR_AUTOLOAD_MSRS];
++ struct vmx_msrs guest;
++ struct vmx_msrs host;
+ } msr_autoload;
+ struct {
+ int loaded;
+@@ -2182,9 +2330,20 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+ vm_exit_controls_clearbit(vmx, exit);
+ }
+
++static int find_msr(struct vmx_msrs *m, unsigned int msr)
++{
++ unsigned int i;
++
++ for (i = 0; i < m->nr; ++i) {
++ if (m->val[i].index == msr)
++ return i;
++ }
++ return -ENOENT;
++}
++
+ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
+ {
+- unsigned i;
++ int i;
+ struct msr_autoload *m = &vmx->msr_autoload;
+
+ switch (msr) {
+@@ -2205,18 +2364,21 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
+ }
+ break;
+ }
++ i = find_msr(&m->guest, msr);
++ if (i < 0)
++ goto skip_guest;
++ --m->guest.nr;
++ m->guest.val[i] = m->guest.val[m->guest.nr];
++ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
+
+- for (i = 0; i < m->nr; ++i)
+- if (m->guest[i].index == msr)
+- break;
+-
+- if (i == m->nr)
++skip_guest:
++ i = find_msr(&m->host, msr);
++ if (i < 0)
+ return;
+- --m->nr;
+- m->guest[i] = m->guest[m->nr];
+- m->host[i] = m->host[m->nr];
+- vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
+- vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
++
++ --m->host.nr;
++ m->host.val[i] = m->host.val[m->host.nr];
++ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
+ }
+
+ static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+@@ -2231,9 +2393,9 @@ static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+ }
+
+ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
+- u64 guest_val, u64 host_val)
++ u64 guest_val, u64 host_val, bool entry_only)
+ {
+- unsigned i;
++ int i, j = 0;
+ struct msr_autoload *m = &vmx->msr_autoload;
+
+ switch (msr) {
+@@ -2268,24 +2430,31 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
+ wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
+ }
+
+- for (i = 0; i < m->nr; ++i)
+- if (m->guest[i].index == msr)
+- break;
++ i = find_msr(&m->guest, msr);
++ if (!entry_only)
++ j = find_msr(&m->host, msr);
+
+- if (i == NR_AUTOLOAD_MSRS) {
++ if (i == NR_AUTOLOAD_MSRS || j == NR_AUTOLOAD_MSRS) {
+ printk_once(KERN_WARNING "Not enough msr switch entries. "
+ "Can't add msr %x\n", msr);
+ return;
+- } else if (i == m->nr) {
+- ++m->nr;
+- vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
+- vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
+ }
++ if (i < 0) {
++ i = m->guest.nr++;
++ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
++ }
++ m->guest.val[i].index = msr;
++ m->guest.val[i].value = guest_val;
++
++ if (entry_only)
++ return;
+
+- m->guest[i].index = msr;
+- m->guest[i].value = guest_val;
+- m->host[i].index = msr;
+- m->host[i].value = host_val;
++ if (j < 0) {
++ j = m->host.nr++;
++ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
++ }
++ m->host.val[j].index = msr;
++ m->host.val[j].value = host_val;
+ }
+
+ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
+@@ -2329,7 +2498,7 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
+ guest_efer &= ~EFER_LME;
+ if (guest_efer != host_efer)
+ add_atomic_switch_msr(vmx, MSR_EFER,
+- guest_efer, host_efer);
++ guest_efer, host_efer, false);
+ return false;
+ } else {
+ guest_efer &= ~ignore_bits;
+@@ -3775,7 +3944,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ vcpu->arch.ia32_xss = data;
+ if (vcpu->arch.ia32_xss != host_xss)
+ add_atomic_switch_msr(vmx, MSR_IA32_XSS,
+- vcpu->arch.ia32_xss, host_xss);
++ vcpu->arch.ia32_xss, host_xss, false);
+ else
+ clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
+ break;
+@@ -6041,9 +6210,9 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+
+ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, 0);
+- vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host));
++ vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
+ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, 0);
+- vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest));
++ vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
+
+ if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
+ vmcs_write64(GUEST_IA32_PAT, vmx->vcpu.arch.pat);
+@@ -6063,8 +6232,7 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ ++vmx->nmsrs;
+ }
+
+- if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
+- rdmsrl(MSR_IA32_ARCH_CAPABILITIES, vmx->arch_capabilities);
++ vmx->arch_capabilities = kvm_get_arch_capabilities();
+
+ vm_exit_controls_init(vmx, vmcs_config.vmexit_ctrl);
+
+@@ -9282,6 +9450,79 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu)
+ }
+ }
+
++/*
++ * Software based L1D cache flush which is used when microcode providing
++ * the cache control MSR is not loaded.
++ *
++ * The L1D cache is 32 KiB on Nehalem and later microarchitectures, but
++ * flushing it requires reading in 64 KiB because the replacement
++ * algorithm is not exactly LRU. This could be sized at runtime via
++ * topology information, but as all relevant affected CPUs have a 32 KiB
++ * L1D cache there is no point in doing so.
++ */
++#define L1D_CACHE_ORDER 4
++static void *vmx_l1d_flush_pages;
++
++static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
++{
++ int size = PAGE_SIZE << L1D_CACHE_ORDER;
++
++ /*
++ * This code is only executed when the flush mode is 'cond' or
++ * 'always'.
++ */
++ if (static_branch_likely(&vmx_l1d_flush_cond)) {
++ bool flush_l1d;
++
++ /*
++ * Clear the per-vcpu flush bit, it gets set again
++ * either from vcpu_run() or from one of the unsafe
++ * VMEXIT handlers.
++ */
++ flush_l1d = vcpu->arch.l1tf_flush_l1d;
++ vcpu->arch.l1tf_flush_l1d = false;
++
++ /*
++ * Clear the per-cpu flush bit, it gets set again from
++ * the interrupt handlers.
++ */
++ flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
++ kvm_clear_cpu_l1tf_flush_l1d();
++
++ if (!flush_l1d)
++ return;
++ }
++
++ vcpu->stat.l1d_flush++;
++
++ if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
++ wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
++ return;
++ }
++
++ asm volatile(
++ /* First ensure the pages are in the TLB */
++ "xorl %%eax, %%eax\n"
++ ".Lpopulate_tlb:\n\t"
++ "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
++ "addl $4096, %%eax\n\t"
++ "cmpl %%eax, %[size]\n\t"
++ "jne .Lpopulate_tlb\n\t"
++ "xorl %%eax, %%eax\n\t"
++ "cpuid\n\t"
++ /* Now fill the cache */
++ "xorl %%eax, %%eax\n"
++ ".Lfill_cache:\n"
++ "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
++ "addl $64, %%eax\n\t"
++ "cmpl %%eax, %[size]\n\t"
++ "jne .Lfill_cache\n\t"
++ "lfence\n"
++ :: [flush_pages] "r" (vmx_l1d_flush_pages),
++ [size] "r" (size)
++ : "eax", "ebx", "ecx", "edx");
++}
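The sizing works out as PAGE_SIZE << L1D_CACHE_ORDER = 4096 << 4 = 64 KiB,
twice the 32 KiB L1D, walked once at page stride to populate the TLB and
once at cache-line stride to fill the cache. A quick arithmetic check,
assuming 4 KiB pages and 64-byte cache lines:

#include <stdio.h>

#define PAGE_SIZE	4096	/* assumed x86 page size */
#define L1D_CACHE_ORDER	4

int main(void)
{
	int size = PAGE_SIZE << L1D_CACHE_ORDER;

	printf("flush buffer: %d KiB\n", size / 1024);		/* 64 */
	printf("TLB-populate steps (4096 B stride): %d\n", size / 4096);
	printf("cache-fill steps (64 B stride): %d\n", size / 64);
	return 0;
}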
++
+ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
+ {
+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+@@ -9688,7 +9929,7 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
+ clear_atomic_switch_msr(vmx, msrs[i].msr);
+ else
+ add_atomic_switch_msr(vmx, msrs[i].msr, msrs[i].guest,
+- msrs[i].host);
++ msrs[i].host, false);
+ }
+
+ static void vmx_arm_hv_timer(struct kvm_vcpu *vcpu)
+@@ -9783,6 +10024,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ evmcs_rsp = static_branch_unlikely(&enable_evmcs) ?
+ (unsigned long)&current_evmcs->host_rsp : 0;
+
++ if (static_branch_unlikely(&vmx_l1d_should_flush))
++ vmx_l1d_flush(vcpu);
++
+ asm(
+ /* Store host registers */
+ "push %%" _ASM_DX "; push %%" _ASM_BP ";"
+@@ -10142,10 +10386,37 @@ free_vcpu:
+ return ERR_PTR(err);
+ }
+
++#define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
++#define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
++
+ static int vmx_vm_init(struct kvm *kvm)
+ {
+ if (!ple_gap)
+ kvm->arch.pause_in_guest = true;
++
++ if (boot_cpu_has(X86_BUG_L1TF) && enable_ept) {
++ switch (l1tf_mitigation) {
++ case L1TF_MITIGATION_OFF:
++ case L1TF_MITIGATION_FLUSH_NOWARN:
++ /* 'I explicitly don't care' is set */
++ break;
++ case L1TF_MITIGATION_FLUSH:
++ case L1TF_MITIGATION_FLUSH_NOSMT:
++ case L1TF_MITIGATION_FULL:
++ /*
++ * Warn upon starting the first VM in a potentially
++ * insecure environment.
++ */
++ if (cpu_smt_control == CPU_SMT_ENABLED)
++ pr_warn_once(L1TF_MSG_SMT);
++ if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER)
++ pr_warn_once(L1TF_MSG_L1D);
++ break;
++ case L1TF_MITIGATION_FULL_FORCE:
++ /* Flush is enforced */
++ break;
++ }
++ }
+ return 0;
+ }
+
+@@ -11005,10 +11276,10 @@ static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ * Set the MSR load/store lists to match L0's settings.
+ */
+ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+- vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+- vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host));
+- vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+- vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest));
++ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
++ vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
++ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
++ vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
+
+ set_cr4_guest_host_mask(vmx);
+
+@@ -11642,6 +11913,9 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
+ if (ret)
+ return ret;
+
++ /* Hide L1D cache contents from the nested guest. */
++ vmx->vcpu.arch.l1tf_flush_l1d = true;
++
+ /*
+ * If we're entering a halted L2 vcpu and the L2 vcpu won't be woken
+ * by event injection, halt vcpu.
+@@ -12155,8 +12429,8 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
+ vmx_segment_cache_clear(vmx);
+
+ /* Update any VMCS fields that might have changed while L2 ran */
+- vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+- vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
++ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
++ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
+ vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset);
+ if (vmx->hv_deadline_tsc == -1)
+ vmcs_clear_bits(PIN_BASED_VM_EXEC_CONTROL,
+@@ -12868,6 +13142,51 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
+ .enable_smi_window = enable_smi_window,
+ };
+
++static void vmx_cleanup_l1d_flush(void)
++{
++ if (vmx_l1d_flush_pages) {
++ free_pages((unsigned long)vmx_l1d_flush_pages, L1D_CACHE_ORDER);
++ vmx_l1d_flush_pages = NULL;
++ }
++ /* Restore state so sysfs ignores VMX */
++ l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
++}
++
++static void vmx_exit(void)
++{
++#ifdef CONFIG_KEXEC_CORE
++ RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
++ synchronize_rcu();
++#endif
++
++ kvm_exit();
++
++#if IS_ENABLED(CONFIG_HYPERV)
++ if (static_branch_unlikely(&enable_evmcs)) {
++ int cpu;
++ struct hv_vp_assist_page *vp_ap;
++ /*
++ * Reset everything to support using non-enlightened VMCS
++ * access later (e.g. when we reload the module with
++ * enlightened_vmcs=0)
++ */
++ for_each_online_cpu(cpu) {
++ vp_ap = hv_get_vp_assist_page(cpu);
++
++ if (!vp_ap)
++ continue;
++
++ vp_ap->current_nested_vmcs = 0;
++ vp_ap->enlighten_vmentry = 0;
++ }
++
++ static_branch_disable(&enable_evmcs);
++ }
++#endif
++ vmx_cleanup_l1d_flush();
++}
++module_exit(vmx_exit);
++
+ static int __init vmx_init(void)
+ {
+ int r;
+@@ -12902,10 +13221,25 @@ static int __init vmx_init(void)
+ #endif
+
+ r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
+- __alignof__(struct vcpu_vmx), THIS_MODULE);
++ __alignof__(struct vcpu_vmx), THIS_MODULE);
+ if (r)
+ return r;
+
++ /*
++ * Must be called after kvm_init() so enable_ept is properly set
++ * up. Hand in the mitigation parameter value that was stored by
++ * the pre-module-init parser. If no parameter was given, it will
++ * contain 'auto', which will be turned into the default 'cond'
++ * mitigation mode.
++ */
++ if (boot_cpu_has(X86_BUG_L1TF)) {
++ r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
++ if (r) {
++ vmx_exit();
++ return r;
++ }
++ }
++
+ #ifdef CONFIG_KEXEC_CORE
+ rcu_assign_pointer(crash_vmclear_loaded_vmcss,
+ crash_vmclear_local_loaded_vmcss);
+@@ -12913,39 +13247,4 @@ static int __init vmx_init(void)
+
+ return 0;
+ }
+-
+-static void __exit vmx_exit(void)
+-{
+-#ifdef CONFIG_KEXEC_CORE
+- RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
+- synchronize_rcu();
+-#endif
+-
+- kvm_exit();
+-
+-#if IS_ENABLED(CONFIG_HYPERV)
+- if (static_branch_unlikely(&enable_evmcs)) {
+- int cpu;
+- struct hv_vp_assist_page *vp_ap;
+- /*
+- * Reset everything to support using non-enlightened VMCS
+- * access later (e.g. when we reload the module with
+- * enlightened_vmcs=0)
+- */
+- for_each_online_cpu(cpu) {
+- vp_ap = hv_get_vp_assist_page(cpu);
+-
+- if (!vp_ap)
+- continue;
+-
+- vp_ap->current_nested_vmcs = 0;
+- vp_ap->enlighten_vmentry = 0;
+- }
+-
+- static_branch_disable(&enable_evmcs);
+- }
+-#endif
+-}
+-
+-module_init(vmx_init)
+-module_exit(vmx_exit)
++module_init(vmx_init);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index ac01341f2d1f..0125698b9b70 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -194,6 +194,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
+ { "irq_injections", VCPU_STAT(irq_injections) },
+ { "nmi_injections", VCPU_STAT(nmi_injections) },
+ { "req_event", VCPU_STAT(req_event) },
++ { "l1d_flush", VCPU_STAT(l1d_flush) },
+ { "mmu_shadow_zapped", VM_STAT(mmu_shadow_zapped) },
+ { "mmu_pte_write", VM_STAT(mmu_pte_write) },
+ { "mmu_pte_updated", VM_STAT(mmu_pte_updated) },
+@@ -1097,11 +1098,35 @@ static u32 msr_based_features[] = {
+
+ static unsigned int num_msr_based_features;
+
++u64 kvm_get_arch_capabilities(void)
++{
++ u64 data;
++
++ rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, &data);
++
++ /*
++ * If we're doing cache flushes (either "always" or "cond")
++ * we will do one whenever the guest does a vmlaunch/vmresume.
++ * If an outer hypervisor is doing the cache flush for us
++ * (VMENTER_L1D_FLUSH_NESTED_VM), we can safely pass that
++ * capability to the guest too, and if EPT is disabled we're not
++ * vulnerable. Overall, only VMENTER_L1D_FLUSH_NEVER will
++ * require a nested hypervisor to do a flush of its own.
++ */
++ if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
++ data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
++
++ return data;
++}
++EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities);
++
+ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
+ {
+ switch (msr->index) {
+- case MSR_IA32_UCODE_REV:
+ case MSR_IA32_ARCH_CAPABILITIES:
++ msr->data = kvm_get_arch_capabilities();
++ break;
++ case MSR_IA32_UCODE_REV:
+ rdmsrl_safe(msr->index, &msr->data);
+ break;
+ default:
+@@ -4870,6 +4895,9 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v
+ int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
+ unsigned int bytes, struct x86_exception *exception)
+ {
++ /* kvm_write_guest_virt_system can pull in tons of pages. */
++ vcpu->arch.l1tf_flush_l1d = true;
++
+ return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
+ PFERR_WRITE_MASK, exception);
+ }
+@@ -6046,6 +6074,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
+ bool writeback = true;
+ bool write_fault_to_spt = vcpu->arch.write_fault_to_shadow_pgtable;
+
++ vcpu->arch.l1tf_flush_l1d = true;
++
+ /*
+ * Clear write_fault_to_shadow_pgtable here to ensure it is
+ * never reused.
+@@ -7575,6 +7605,7 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
+ struct kvm *kvm = vcpu->kvm;
+
+ vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
++ vcpu->arch.l1tf_flush_l1d = true;
+
+ for (;;) {
+ if (kvm_vcpu_running(vcpu)) {
+@@ -8694,6 +8725,7 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
+
+ void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
+ {
++ vcpu->arch.l1tf_flush_l1d = true;
+ kvm_x86_ops->sched_in(vcpu, cpu);
+ }
+
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index cee58a972cb2..83241eb71cd4 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -4,6 +4,8 @@
+ #include <linux/swap.h>
+ #include <linux/memblock.h>
+ #include <linux/bootmem.h> /* for max_low_pfn */
++#include <linux/swapfile.h>
++#include <linux/swapops.h>
+
+ #include <asm/set_memory.h>
+ #include <asm/e820/api.h>
+@@ -880,3 +882,26 @@ void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache)
+ __cachemode2pte_tbl[cache] = __cm_idx2pte(entry);
+ __pte2cachemode_tbl[entry] = cache;
+ }
++
++#ifdef CONFIG_SWAP
++unsigned long max_swapfile_size(void)
++{
++ unsigned long pages;
++
++ pages = generic_max_swapfile_size();
++
++ if (boot_cpu_has_bug(X86_BUG_L1TF)) {
++ /* Limit the swap file size to MAX_PA/2 for L1TF workaround */
++ unsigned long l1tf_limit = l1tf_pfn_limit() + 1;
++ /*
++ * We encode swap offsets also with 3 bits below those for pfn
++ * which makes the usable limit higher.
++ */
++#if CONFIG_PGTABLE_LEVELS > 2
++ l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT;
++#endif
++ pages = min_t(unsigned long, l1tf_limit, pages);
++ }
++ return pages;
++}
++#endif
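Per the comment, swap offsets sit 3 bits below the pfn bits
(PAGE_SHIFT - SWP_OFFSET_FIRST_BIT = 12 - 9 = 3 with 4 KiB pages), so the
pfn-derived limit is scaled up by 8 before being clamped against the
generic one. A toy computation using those illustrative constants:

#include <stdio.h>

#define PAGE_SHIFT		12	/* assumed 4 KiB pages */
#define SWP_OFFSET_FIRST_BIT	9	/* consistent with the "3 bits" note */

int main(void)
{
	unsigned long l1tf_limit = 1UL << 32;	/* hypothetical pfn limit + 1; assumes 64-bit long */

	l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT;
	printf("swap page limit: %lu (8 * 2^32)\n", l1tf_limit);
	return 0;
}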
+diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
+index 7c8686709636..79eb55ce69a9 100644
+--- a/arch/x86/mm/kmmio.c
++++ b/arch/x86/mm/kmmio.c
+@@ -126,24 +126,29 @@ static struct kmmio_fault_page *get_kmmio_fault_page(unsigned long addr)
+
+ static void clear_pmd_presence(pmd_t *pmd, bool clear, pmdval_t *old)
+ {
++ pmd_t new_pmd;
+ pmdval_t v = pmd_val(*pmd);
+ if (clear) {
+- *old = v & _PAGE_PRESENT;
+- v &= ~_PAGE_PRESENT;
+- } else /* presume this has been called with clear==true previously */
+- v |= *old;
+- set_pmd(pmd, __pmd(v));
++ *old = v;
++ new_pmd = pmd_mknotpresent(*pmd);
++ } else {
++ /* Presume this has been called with clear==true previously */
++ new_pmd = __pmd(*old);
++ }
++ set_pmd(pmd, new_pmd);
+ }
+
+ static void clear_pte_presence(pte_t *pte, bool clear, pteval_t *old)
+ {
+ pteval_t v = pte_val(*pte);
+ if (clear) {
+- *old = v & _PAGE_PRESENT;
+- v &= ~_PAGE_PRESENT;
+- } else /* presume this has been called with clear==true previously */
+- v |= *old;
+- set_pte_atomic(pte, __pte(v));
++ *old = v;
++ /* Nothing should care about address */
++ pte_clear(&init_mm, 0, pte);
++ } else {
++ /* Presume this has been called with clear==true previously */
++ set_pte_atomic(pte, __pte(*old));
++ }
+ }
+
+ static int clear_page_presence(struct kmmio_fault_page *f, bool clear)
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
+index 48c591251600..f40ab8185d94 100644
+--- a/arch/x86/mm/mmap.c
++++ b/arch/x86/mm/mmap.c
+@@ -240,3 +240,24 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t count)
+
+ return phys_addr_valid(addr + count - 1);
+ }
++
++/*
++ * Only allow root to set high MMIO mappings to PROT_NONE.
++ * This prevents an unprivileged user from setting them to PROT_NONE
++ * and inverting them, then pointing to valid memory for L1TF
++ * speculation.
++ *
++ * Note: locked-down kernels may want to disable the root override.
++ */
++bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
++{
++ if (!boot_cpu_has_bug(X86_BUG_L1TF))
++ return true;
++ if (!__pte_needs_invert(pgprot_val(prot)))
++ return true;
++ /* If it's real memory always allow */
++ if (pfn_valid(pfn))
++ return true;
++ if (pfn > l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
++ return false;
++ return true;
++}
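Stripped of the kernel helpers, the policy above is a chain of early-allow
checks plus one privileged escape hatch: deny only when the CPU is affected,
the protection would be inverted, the pfn is not real memory, it lies beyond
the L1TF limit, and the caller lacks CAP_SYS_ADMIN. A toy model with all
predicates stubbed (names are illustrative):

#include <stdbool.h>
#include <stdio.h>

static bool cpu_has_l1tf = true;			/* boot_cpu_has_bug() stand-in */
static bool is_admin = false;				/* capable(CAP_SYS_ADMIN) stand-in */
static bool pte_inverted(unsigned long prot) { return prot != 0; }
static bool pfn_is_ram(unsigned long pfn) { return pfn < 0x100000; }

static bool allow_pfn_modify(unsigned long pfn, unsigned long prot,
			     unsigned long pfn_limit)
{
	if (!cpu_has_l1tf)
		return true;
	if (!pte_inverted(prot))
		return true;
	if (pfn_is_ram(pfn))		/* real memory is always allowed */
		return true;
	return pfn <= pfn_limit || is_admin;
}

int main(void)
{
	/* high MMIO pfn, unprivileged caller: denied (prints 0) */
	printf("%d\n", allow_pfn_modify(1UL << 40, 1, 1UL << 36));
	return 0;
}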
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 3bded76e8d5c..7bb6f65c79de 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -1014,8 +1014,8 @@ static long populate_pmd(struct cpa_data *cpa,
+
+ pmd = pmd_offset(pud, start);
+
+- set_pmd(pmd, __pmd(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
+- massage_pgprot(pmd_pgprot)));
++ set_pmd(pmd, pmd_mkhuge(pfn_pmd(cpa->pfn,
++ canon_pgprot(pmd_pgprot))));
+
+ start += PMD_SIZE;
+ cpa->pfn += PMD_SIZE >> PAGE_SHIFT;
+@@ -1087,8 +1087,8 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d,
+ * Map everything starting from the Gb boundary, possibly with 1G pages
+ */
+ while (boot_cpu_has(X86_FEATURE_GBPAGES) && end - start >= PUD_SIZE) {
+- set_pud(pud, __pud(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
+- massage_pgprot(pud_pgprot)));
++ set_pud(pud, pud_mkhuge(pfn_pud(cpa->pfn,
++ canon_pgprot(pud_pgprot))));
+
+ start += PUD_SIZE;
+ cpa->pfn += PUD_SIZE >> PAGE_SHIFT;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 4d418e705878..fb752d9a3ce9 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -45,6 +45,7 @@
+ #include <asm/pgalloc.h>
+ #include <asm/tlbflush.h>
+ #include <asm/desc.h>
++#include <asm/sections.h>
+
+ #undef pr_fmt
+ #define pr_fmt(fmt) "Kernel/User page tables isolation: " fmt
+diff --git a/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c b/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
+index 4f5fa65a1011..2acd6be13375 100644
+--- a/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
++++ b/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
+@@ -18,6 +18,7 @@
+ #include <asm/intel-mid.h>
+ #include <asm/intel_scu_ipc.h>
+ #include <asm/io_apic.h>
++#include <asm/hw_irq.h>
+
+ #define TANGIER_EXT_TIMER0_MSI 12
+
+diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
+index b36caae0fb2f..c29d3152f5a4 100644
+--- a/arch/x86/platform/uv/tlb_uv.c
++++ b/arch/x86/platform/uv/tlb_uv.c
+@@ -1285,6 +1285,7 @@ void uv_bau_message_interrupt(struct pt_regs *regs)
+ struct msg_desc msgdesc;
+
+ ack_APIC_irq();
++ kvm_set_cpu_l1tf_flush_l1d();
+ time_start = get_cycles();
+
+ bcp = &per_cpu(bau_control, smp_processor_id());
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index c9081c6671f0..df208af3cd74 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -3,6 +3,7 @@
+ #endif
+ #include <linux/cpu.h>
+ #include <linux/kexec.h>
++#include <linux/slab.h>
+
+ #include <xen/features.h>
+ #include <xen/page.h>
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 30cc9c877ebb..eb9443d5bae1 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -540,16 +540,24 @@ ssize_t __weak cpu_show_spec_store_bypass(struct device *dev,
+ return sprintf(buf, "Not affected\n");
+ }
+
++ssize_t __weak cpu_show_l1tf(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ return sprintf(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+ static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
++static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
+
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_meltdown.attr,
+ &dev_attr_spectre_v1.attr,
+ &dev_attr_spectre_v2.attr,
+ &dev_attr_spec_store_bypass.attr,
++ &dev_attr_l1tf.attr,
+ NULL
+ };
+
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 0f3fadd71230..9b0b69598e23 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -280,7 +280,8 @@ static void reset_bdev(struct zram *zram)
+ zram->backing_dev = NULL;
+ zram->old_block_size = 0;
+ zram->bdev = NULL;
+-
++ zram->disk->queue->backing_dev_info->capabilities |=
++ BDI_CAP_SYNCHRONOUS_IO;
+ kvfree(zram->bitmap);
+ zram->bitmap = NULL;
+ }
+@@ -382,6 +383,18 @@ static ssize_t backing_dev_store(struct device *dev,
+ zram->backing_dev = backing_dev;
+ zram->bitmap = bitmap;
+ zram->nr_pages = nr_pages;
++ /*
++ * With the writeback feature, zram does asynchronous IO, so it is no
++ * longer a synchronous device; remove the synchronous IO flag.
++ * Otherwise, the upper layer (e.g., swap) could wait for IO completion
++ * rather than submit and return, which would make the system sluggish.
++ * Furthermore, when the IO function returns (e.g., swap_readpage), the
++ * upper layer expects the IO to be done and may free the page while the
++ * IO is still in flight, causing a use-after-free once the IO really
++ * completes.
++ */
++ zram->disk->queue->backing_dev_info->capabilities &=
++ ~BDI_CAP_SYNCHRONOUS_IO;
+ up_write(&zram->init_lock);
+
+ pr_info("setup backing device %s\n", file_name);
+diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
+index f0519e31543a..fbe2a9bee07f 100644
+--- a/drivers/gpu/drm/i915/i915_pmu.c
++++ b/drivers/gpu/drm/i915/i915_pmu.c
+@@ -24,6 +24,7 @@
+
+ #include <linux/perf_event.h>
+ #include <linux/pm_runtime.h>
++#include <linux/irq.h>
+
+ #include "i915_drv.h"
+ #include "i915_pmu.h"
+diff --git a/drivers/gpu/drm/i915/intel_lpe_audio.c b/drivers/gpu/drm/i915/intel_lpe_audio.c
+index 6269750e2b54..b4941101f21a 100644
+--- a/drivers/gpu/drm/i915/intel_lpe_audio.c
++++ b/drivers/gpu/drm/i915/intel_lpe_audio.c
+@@ -62,6 +62,7 @@
+
+ #include <linux/acpi.h>
+ #include <linux/device.h>
++#include <linux/irq.h>
+ #include <linux/pci.h>
+ #include <linux/pm_runtime.h>
+
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 42e93cb4eca7..ea2da91a96ea 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -894,7 +894,6 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
+ struct sk_buff *skb,
+ struct sk_buff_head *list)
+ {
+- struct skb_shared_info *shinfo = skb_shinfo(skb);
+ RING_IDX cons = queue->rx.rsp_cons;
+ struct sk_buff *nskb;
+
+@@ -903,15 +902,16 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
+ RING_GET_RESPONSE(&queue->rx, ++cons);
+ skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
+
+- if (shinfo->nr_frags == MAX_SKB_FRAGS) {
++ if (skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS) {
+ unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
+
+ BUG_ON(pull_to <= skb_headlen(skb));
+ __pskb_pull_tail(skb, pull_to - skb_headlen(skb));
+ }
+- BUG_ON(shinfo->nr_frags >= MAX_SKB_FRAGS);
++ BUG_ON(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS);
+
+- skb_add_rx_frag(skb, shinfo->nr_frags, skb_frag_page(nfrag),
++ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
++ skb_frag_page(nfrag),
+ rx->offset, rx->status, PAGE_SIZE);
+
+ skb_shinfo(nskb)->nr_frags = 0;
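The xennet_fill_frags() change above stops caching skb_shinfo(skb) across
__pskb_pull_tail(), which can reallocate the skb data and leave the cached
pointer stale; re-deriving it on each use avoids acting on freed memory.
The same hazard in miniature, using realloc() (illustrative only):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *buf = malloc(8);
	char *cached;

	if (!buf)
		return 1;
	cached = buf;			/* like the old shinfo variable */
	strcpy(buf, "hi");
	buf = realloc(buf, 1 << 20);	/* may move the block: 'cached' is now stale */
	if (!buf)
		return 1;
	(void)cached;			/* dereferencing it here would be use-after-free */
	printf("%s\n", buf);		/* safe: re-derived from the live handle */
	free(buf);
	return 0;
}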
+diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
+index 4690814cfc51..c07e952b94ee 100644
+--- a/drivers/pci/host/pci-hyperv.c
++++ b/drivers/pci/host/pci-hyperv.c
+@@ -43,6 +43,8 @@
+ #include <linux/delay.h>
+ #include <linux/semaphore.h>
+ #include <linux/irqdomain.h>
++#include <linux/irq.h>
++
+ #include <asm/irqdomain.h>
+ #include <asm/apic.h>
+ #include <linux/msi.h>
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index a91cca52b5d5..dd93a22fe843 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2130,34 +2130,11 @@ __qla2x00_alloc_iocbs(struct qla_qpair *qpair, srb_t *sp)
+ req_cnt = 1;
+ handle = 0;
+
+- if (!sp)
+- goto skip_cmd_array;
+-
+- /* Check for room in outstanding command list. */
+- handle = req->current_outstanding_cmd;
+- for (index = 1; index < req->num_outstanding_cmds; index++) {
+- handle++;
+- if (handle == req->num_outstanding_cmds)
+- handle = 1;
+- if (!req->outstanding_cmds[handle])
+- break;
+- }
+- if (index == req->num_outstanding_cmds) {
+- ql_log(ql_log_warn, vha, 0x700b,
+- "No room on outstanding cmd array.\n");
+- goto queuing_error;
+- }
+-
+- /* Prep command array. */
+- req->current_outstanding_cmd = handle;
+- req->outstanding_cmds[handle] = sp;
+- sp->handle = handle;
+-
+- /* Adjust entry-counts as needed. */
+- if (sp->type != SRB_SCSI_CMD)
++ if (sp && (sp->type != SRB_SCSI_CMD)) {
++ /* Adjust entry-counts as needed. */
+ req_cnt = sp->iocbs;
++ }
+
+-skip_cmd_array:
+ /* Check for room on request queue. */
+ if (req->cnt < req_cnt + 2) {
+ if (qpair->use_shadow_reg)
+@@ -2183,6 +2160,28 @@ skip_cmd_array:
+ if (req->cnt < req_cnt + 2)
+ goto queuing_error;
+
++ if (sp) {
++ /* Check for room in outstanding command list. */
++ handle = req->current_outstanding_cmd;
++ for (index = 1; index < req->num_outstanding_cmds; index++) {
++ handle++;
++ if (handle == req->num_outstanding_cmds)
++ handle = 1;
++ if (!req->outstanding_cmds[handle])
++ break;
++ }
++ if (index == req->num_outstanding_cmds) {
++ ql_log(ql_log_warn, vha, 0x700b,
++ "No room on outstanding cmd array.\n");
++ goto queuing_error;
++ }
++
++ /* Prep command array. */
++ req->current_outstanding_cmd = handle;
++ req->outstanding_cmds[handle] = sp;
++ sp->handle = handle;
++ }
++
+ /* Prep packet */
+ req->cnt -= req_cnt;
+ pkt = req->ring_ptr;
+@@ -2195,6 +2194,8 @@ skip_cmd_array:
+ pkt->handle = handle;
+ }
+
++ return pkt;
++
+ queuing_error:
+ qpair->tgt_counters.num_alloc_iocb_failed++;
+ return pkt;
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index 3f3cb72e0c0c..d0389b20574d 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -523,18 +523,26 @@ static int sr_init_command(struct scsi_cmnd *SCpnt)
+ static int sr_block_open(struct block_device *bdev, fmode_t mode)
+ {
+ struct scsi_cd *cd;
++ struct scsi_device *sdev;
+ int ret = -ENXIO;
+
++ cd = scsi_cd_get(bdev->bd_disk);
++ if (!cd)
++ goto out;
++
++ sdev = cd->device;
++ scsi_autopm_get_device(sdev);
+ check_disk_change(bdev);
+
+ mutex_lock(&sr_mutex);
+- cd = scsi_cd_get(bdev->bd_disk);
+- if (cd) {
+- ret = cdrom_open(&cd->cdi, bdev, mode);
+- if (ret)
+- scsi_cd_put(cd);
+- }
++ ret = cdrom_open(&cd->cdi, bdev, mode);
+ mutex_unlock(&sr_mutex);
++
++ scsi_autopm_put_device(sdev);
++ if (ret)
++ scsi_cd_put(cd);
++
++out:
+ return ret;
+ }
+
+@@ -562,6 +570,8 @@ static int sr_block_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
+ if (ret)
+ goto out;
+
++ scsi_autopm_get_device(sdev);
++
+ /*
+ * Send SCSI addressing ioctls directly to mid level, send other
+ * ioctls to cdrom/block level.
+@@ -570,15 +580,18 @@ static int sr_block_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
+ case SCSI_IOCTL_GET_IDLUN:
+ case SCSI_IOCTL_GET_BUS_NUMBER:
+ ret = scsi_ioctl(sdev, cmd, argp);
+- goto out;
++ goto put;
+ }
+
+ ret = cdrom_ioctl(&cd->cdi, bdev, mode, cmd, arg);
+ if (ret != -ENOSYS)
+- goto out;
++ goto put;
+
+ ret = scsi_ioctl(sdev, cmd, argp);
+
++put:
++ scsi_autopm_put_device(sdev);
++
+ out:
+ mutex_unlock(&sr_mutex);
+ return ret;
+diff --git a/fs/dcache.c b/fs/dcache.c
+index 2acfc69878f5..811114b229b1 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -358,14 +358,11 @@ static void dentry_unlink_inode(struct dentry * dentry)
+ __releases(dentry->d_inode->i_lock)
+ {
+ struct inode *inode = dentry->d_inode;
+- bool hashed = !d_unhashed(dentry);
+
+- if (hashed)
+- raw_write_seqcount_begin(&dentry->d_seq);
++ raw_write_seqcount_begin(&dentry->d_seq);
+ __d_clear_type_and_inode(dentry);
+ hlist_del_init(&dentry->d_u.d_alias);
+- if (hashed)
+- raw_write_seqcount_end(&dentry->d_seq);
++ raw_write_seqcount_end(&dentry->d_seq);
+ spin_unlock(&dentry->d_lock);
+ spin_unlock(&inode->i_lock);
+ if (!inode->i_nlink)
+@@ -1954,10 +1951,12 @@ struct dentry *d_make_root(struct inode *root_inode)
+
+ if (root_inode) {
+ res = d_alloc_anon(root_inode->i_sb);
+- if (res)
++ if (res) {
++ res->d_flags |= DCACHE_RCUACCESS;
+ d_instantiate(res, root_inode);
+- else
++ } else {
+ iput(root_inode);
++ }
+ }
+ return res;
+ }
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 5f75969adff1..51a1935060a9 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -659,12 +659,21 @@ int __legitimize_mnt(struct vfsmount *bastard, unsigned seq)
+ return 0;
+ mnt = real_mount(bastard);
+ mnt_add_count(mnt, 1);
++ smp_mb(); // see mntput_no_expire()
+ if (likely(!read_seqretry(&mount_lock, seq)))
+ return 0;
+ if (bastard->mnt_flags & MNT_SYNC_UMOUNT) {
+ mnt_add_count(mnt, -1);
+ return 1;
+ }
++ lock_mount_hash();
++ if (unlikely(bastard->mnt_flags & MNT_DOOMED)) {
++ mnt_add_count(mnt, -1);
++ unlock_mount_hash();
++ return 1;
++ }
++ unlock_mount_hash();
++ /* caller will mntput() */
+ return -1;
+ }
+
+@@ -1195,12 +1204,27 @@ static DECLARE_DELAYED_WORK(delayed_mntput_work, delayed_mntput);
+ static void mntput_no_expire(struct mount *mnt)
+ {
+ rcu_read_lock();
+- mnt_add_count(mnt, -1);
+- if (likely(mnt->mnt_ns)) { /* shouldn't be the last one */
++ if (likely(READ_ONCE(mnt->mnt_ns))) {
++ /*
++ * Since we don't do lock_mount_hash() here,
++ * ->mnt_ns can change under us. However, if it's
++ * non-NULL, then there's a reference that won't
++ * be dropped until after an RCU delay done after
++ * turning ->mnt_ns NULL. So if we observe it
++ * non-NULL under rcu_read_lock(), the reference
++ * we are dropping is not the final one.
++ */
++ mnt_add_count(mnt, -1);
+ rcu_read_unlock();
+ return;
+ }
+ lock_mount_hash();
++ /*
++ * make sure that if __legitimize_mnt() has not seen us grab
++ * mount_lock, we'll see their refcount increment here.
++ */
++ smp_mb();
++ mnt_add_count(mnt, -1);
+ if (mnt_get_count(mnt)) {
+ rcu_read_unlock();
+ unlock_mount_hash();
+diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
+index f59639afaa39..26ca0276b503 100644
+--- a/include/asm-generic/pgtable.h
++++ b/include/asm-generic/pgtable.h
+@@ -1083,6 +1083,18 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
+ static inline void init_espfix_bsp(void) { }
+ #endif
+
++#ifndef __HAVE_ARCH_PFN_MODIFY_ALLOWED
++static inline bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
++{
++ return true;
++}
++
++static inline bool arch_has_pfn_modify_check(void)
++{
++ return false;
++}
++#endif /* !_HAVE_ARCH_PFN_MODIFY_ALLOWED */
++
+ #endif /* !__ASSEMBLY__ */
+
+ #ifndef io_remap_pfn_range
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index a97a63eef59f..45789a892c41 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -30,7 +30,7 @@ struct cpu {
+ };
+
+ extern void boot_cpu_init(void);
+-extern void boot_cpu_state_init(void);
++extern void boot_cpu_hotplug_init(void);
+ extern void cpu_init(void);
+ extern void trap_init(void);
+
+@@ -55,6 +55,8 @@ extern ssize_t cpu_show_spectre_v2(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_spec_store_bypass(struct device *dev,
+ struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_l1tf(struct device *dev,
++ struct device_attribute *attr, char *buf);
+
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+@@ -166,4 +168,23 @@ void cpuhp_report_idle_dead(void);
+ static inline void cpuhp_report_idle_dead(void) { }
+ #endif /* #ifdef CONFIG_HOTPLUG_CPU */
+
++enum cpuhp_smt_control {
++ CPU_SMT_ENABLED,
++ CPU_SMT_DISABLED,
++ CPU_SMT_FORCE_DISABLED,
++ CPU_SMT_NOT_SUPPORTED,
++};
++
++#if defined(CONFIG_SMP) && defined(CONFIG_HOTPLUG_SMT)
++extern enum cpuhp_smt_control cpu_smt_control;
++extern void cpu_smt_disable(bool force);
++extern void cpu_smt_check_topology_early(void);
++extern void cpu_smt_check_topology(void);
++#else
++# define cpu_smt_control (CPU_SMT_ENABLED)
++static inline void cpu_smt_disable(bool force) { }
++static inline void cpu_smt_check_topology_early(void) { }
++static inline void cpu_smt_check_topology(void) { }
++#endif
++
+ #endif /* _LINUX_CPU_H_ */
+diff --git a/include/linux/swapfile.h b/include/linux/swapfile.h
+index 06bd7b096167..e06febf62978 100644
+--- a/include/linux/swapfile.h
++++ b/include/linux/swapfile.h
+@@ -10,5 +10,7 @@ extern spinlock_t swap_lock;
+ extern struct plist_head swap_active_head;
+ extern struct swap_info_struct *swap_info[];
+ extern int try_to_unuse(unsigned int, bool, unsigned long);
++extern unsigned long generic_max_swapfile_size(void);
++extern unsigned long max_swapfile_size(void);
+
+ #endif /* _LINUX_SWAPFILE_H */
+diff --git a/init/main.c b/init/main.c
+index 3b4ada11ed52..5e13c544bbf4 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -561,8 +561,8 @@ asmlinkage __visible void __init start_kernel(void)
+ setup_command_line(command_line);
+ setup_nr_cpu_ids();
+ setup_per_cpu_areas();
+- boot_cpu_state_init();
+ smp_prepare_boot_cpu(); /* arch-specific boot-cpu hooks */
++ boot_cpu_hotplug_init();
+
+ build_all_zonelists(NULL);
+ page_alloc_init();
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index fc7ee4357381..70edc41a88d5 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -947,12 +947,12 @@ static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
+
+ while (msg_data_left(msg)) {
+- struct sk_msg_buff *m;
++ struct sk_msg_buff *m = NULL;
+ bool enospc = false;
+ int copy;
+
+ if (sk->sk_err) {
+- err = sk->sk_err;
++ err = -sk->sk_err;
+ goto out_err;
+ }
+
+@@ -1015,8 +1015,11 @@ wait_for_sndbuf:
+ set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+ wait_for_memory:
+ err = sk_stream_wait_memory(sk, &timeo);
+- if (err)
++ if (err) {
++ if (m && m != psock->cork)
++ free_start_sg(sk, m);
+ goto out_err;
++ }
+ }
+ out_err:
+ if (err < 0)
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 0db8938fbb23..f80afc674f02 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -60,6 +60,7 @@ struct cpuhp_cpu_state {
+ bool rollback;
+ bool single;
+ bool bringup;
++ bool booted_once;
+ struct hlist_node *node;
+ struct hlist_node *last;
+ enum cpuhp_state cb_state;
+@@ -342,6 +343,85 @@ void cpu_hotplug_enable(void)
+ EXPORT_SYMBOL_GPL(cpu_hotplug_enable);
+ #endif /* CONFIG_HOTPLUG_CPU */
+
++#ifdef CONFIG_HOTPLUG_SMT
++enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
++EXPORT_SYMBOL_GPL(cpu_smt_control);
++
++static bool cpu_smt_available __read_mostly;
++
++void __init cpu_smt_disable(bool force)
++{
++ if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
++ cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
++ return;
++
++ if (force) {
++ pr_info("SMT: Force disabled\n");
++ cpu_smt_control = CPU_SMT_FORCE_DISABLED;
++ } else {
++ cpu_smt_control = CPU_SMT_DISABLED;
++ }
++}
++
++/*
++ * The decision whether SMT is supported can only be done after the full
++ * CPU identification. Called from architecture code before non-boot CPUs
++ * are brought up.
++ */
++void __init cpu_smt_check_topology_early(void)
++{
++ if (!topology_smt_supported())
++ cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
++}
++
++/*
++ * If SMT was disabled by BIOS, detect it here, after the CPUs have been
++ * brought online. This ensures the smt/l1tf sysfs entries are consistent
++ * with reality. cpu_smt_available is set to true during the bringup of non
++ * boot CPUs when a SMT sibling is detected. Note, this may overwrite
++ * cpu_smt_control's previous setting.
++ */
++void __init cpu_smt_check_topology(void)
++{
++ if (!cpu_smt_available)
++ cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
++}
++
++static int __init smt_cmdline_disable(char *str)
++{
++ cpu_smt_disable(str && !strcmp(str, "force"));
++ return 0;
++}
++early_param("nosmt", smt_cmdline_disable);
++
++static inline bool cpu_smt_allowed(unsigned int cpu)
++{
++ if (topology_is_primary_thread(cpu))
++ return true;
++
++ /*
++ * If the CPU is not a 'primary' thread and the booted_once bit is
++ * set then the processor has SMT support. Store this information
++ * for the late check of SMT support in cpu_smt_check_topology().
++ */
++ if (per_cpu(cpuhp_state, cpu).booted_once)
++ cpu_smt_available = true;
++
++ if (cpu_smt_control == CPU_SMT_ENABLED)
++ return true;
++
++ /*
++ * On x86 it's required to boot all logical CPUs at least once so
++ * that the init code can get a chance to set CR4.MCE on each
++ * CPU. Otherwise, a broadcast MCE observing CR4.MCE=0b on any
++ * core will shut down the machine.
++ */
++ return !per_cpu(cpuhp_state, cpu).booted_once;
++}
++#else
++static inline bool cpu_smt_allowed(unsigned int cpu) { return true; }
++#endif
++
+ static inline enum cpuhp_state
+ cpuhp_set_state(struct cpuhp_cpu_state *st, enum cpuhp_state target)
+ {
+@@ -422,6 +502,16 @@ static int bringup_wait_for_ap(unsigned int cpu)
+ stop_machine_unpark(cpu);
+ kthread_unpark(st->thread);
+
++ /*
++ * SMT soft disabling on X86 requires to bring the CPU out of the
++ * BIOS 'wait for SIPI' state in order to set the CR4.MCE bit. The
++ * CPU marked itself as booted_once in cpu_notify_starting() so the
++ * cpu_smt_allowed() check will now return false if this is not the
++ * primary sibling.
++ */
++ if (!cpu_smt_allowed(cpu))
++ return -ECANCELED;
++
+ if (st->target <= CPUHP_AP_ONLINE_IDLE)
+ return 0;
+
+@@ -754,7 +844,6 @@ static int takedown_cpu(unsigned int cpu)
+
+ /* Park the smpboot threads */
+ kthread_park(per_cpu_ptr(&cpuhp_state, cpu)->thread);
+- smpboot_park_threads(cpu);
+
+ /*
+ * Prevent irq alloc/free while the dying cpu reorganizes the
+@@ -907,20 +996,19 @@ out:
+ return ret;
+ }
+
++static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
++{
++ if (cpu_hotplug_disabled)
++ return -EBUSY;
++ return _cpu_down(cpu, 0, target);
++}
++
+ static int do_cpu_down(unsigned int cpu, enum cpuhp_state target)
+ {
+ int err;
+
+ cpu_maps_update_begin();
+-
+- if (cpu_hotplug_disabled) {
+- err = -EBUSY;
+- goto out;
+- }
+-
+- err = _cpu_down(cpu, 0, target);
+-
+-out:
++ err = cpu_down_maps_locked(cpu, target);
+ cpu_maps_update_done();
+ return err;
+ }
+@@ -949,6 +1037,7 @@ void notify_cpu_starting(unsigned int cpu)
+ int ret;
+
+ rcu_cpu_starting(cpu); /* Enables RCU usage on this CPU. */
++ st->booted_once = true;
+ while (st->state < target) {
+ st->state++;
+ ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
+@@ -1058,6 +1147,10 @@ static int do_cpu_up(unsigned int cpu, enum cpuhp_state target)
+ err = -EBUSY;
+ goto out;
+ }
++ if (!cpu_smt_allowed(cpu)) {
++ err = -EPERM;
++ goto out;
++ }
+
+ err = _cpu_up(cpu, 0, target);
+ out:
+@@ -1332,7 +1425,7 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ [CPUHP_AP_SMPBOOT_THREADS] = {
+ .name = "smpboot/threads:online",
+ .startup.single = smpboot_unpark_threads,
+- .teardown.single = NULL,
++ .teardown.single = smpboot_park_threads,
+ },
+ [CPUHP_AP_IRQ_AFFINITY_ONLINE] = {
+ .name = "irq/affinity:online",
+@@ -1906,10 +1999,172 @@ static const struct attribute_group cpuhp_cpu_root_attr_group = {
+ NULL
+ };
+
++#ifdef CONFIG_HOTPLUG_SMT
++
++static const char *smt_states[] = {
++ [CPU_SMT_ENABLED] = "on",
++ [CPU_SMT_DISABLED] = "off",
++ [CPU_SMT_FORCE_DISABLED] = "forceoff",
++ [CPU_SMT_NOT_SUPPORTED] = "notsupported",
++};
++
++static ssize_t
++show_smt_control(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return snprintf(buf, PAGE_SIZE - 2, "%s\n", smt_states[cpu_smt_control]);
++}
++
++static void cpuhp_offline_cpu_device(unsigned int cpu)
++{
++ struct device *dev = get_cpu_device(cpu);
++
++ dev->offline = true;
++ /* Tell user space about the state change */
++ kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
++}
++
++static void cpuhp_online_cpu_device(unsigned int cpu)
++{
++ struct device *dev = get_cpu_device(cpu);
++
++ dev->offline = false;
++ /* Tell user space about the state change */
++ kobject_uevent(&dev->kobj, KOBJ_ONLINE);
++}
++
++static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
++{
++ int cpu, ret = 0;
++
++ cpu_maps_update_begin();
++ for_each_online_cpu(cpu) {
++ if (topology_is_primary_thread(cpu))
++ continue;
++ ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
++ if (ret)
++ break;
++ /*
++ * As this needs to hold the cpu maps lock it's impossible
++ * to call device_offline() because that ends up calling
++ * cpu_down(), which takes the cpu maps lock. The lock
++ * needs to be held as this might race against in-kernel
++ * abusers of the hotplug machinery (thermal management).
++ *
++ * So nothing would update device:offline state. That would
++ * leave the sysfs entry stale and prevent onlining after
++ * smt control has been changed to 'off' again. This is
++ * called under the sysfs hotplug lock, so it is properly
++ * serialized against the regular offline usage.
++ */
++ cpuhp_offline_cpu_device(cpu);
++ }
++ if (!ret)
++ cpu_smt_control = ctrlval;
++ cpu_maps_update_done();
++ return ret;
++}
++
++static int cpuhp_smt_enable(void)
++{
++ int cpu, ret = 0;
++
++ cpu_maps_update_begin();
++ cpu_smt_control = CPU_SMT_ENABLED;
++ for_each_present_cpu(cpu) {
++ /* Skip online CPUs and CPUs on offline nodes */
++ if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
++ continue;
++ ret = _cpu_up(cpu, 0, CPUHP_ONLINE);
++ if (ret)
++ break;
++ /* See comment in cpuhp_smt_disable() */
++ cpuhp_online_cpu_device(cpu);
++ }
++ cpu_maps_update_done();
++ return ret;
++}
++
++static ssize_t
++store_smt_control(struct device *dev, struct device_attribute *attr,
++ const char *buf, size_t count)
++{
++ int ctrlval, ret;
++
++ if (sysfs_streq(buf, "on"))
++ ctrlval = CPU_SMT_ENABLED;
++ else if (sysfs_streq(buf, "off"))
++ ctrlval = CPU_SMT_DISABLED;
++ else if (sysfs_streq(buf, "forceoff"))
++ ctrlval = CPU_SMT_FORCE_DISABLED;
++ else
++ return -EINVAL;
++
++ if (cpu_smt_control == CPU_SMT_FORCE_DISABLED)
++ return -EPERM;
++
++ if (cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
++ return -ENODEV;
++
++ ret = lock_device_hotplug_sysfs();
++ if (ret)
++ return ret;
++
++ if (ctrlval != cpu_smt_control) {
++ switch (ctrlval) {
++ case CPU_SMT_ENABLED:
++ ret = cpuhp_smt_enable();
++ break;
++ case CPU_SMT_DISABLED:
++ case CPU_SMT_FORCE_DISABLED:
++ ret = cpuhp_smt_disable(ctrlval);
++ break;
++ }
++ }
++
++ unlock_device_hotplug();
++ return ret ? ret : count;
++}
++static DEVICE_ATTR(control, 0644, show_smt_control, store_smt_control);
++
++static ssize_t
++show_smt_active(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ bool active = topology_max_smt_threads() > 1;
++
++ return snprintf(buf, PAGE_SIZE - 2, "%d\n", active);
++}
++static DEVICE_ATTR(active, 0444, show_smt_active, NULL);
++
++static struct attribute *cpuhp_smt_attrs[] = {
++ &dev_attr_control.attr,
++ &dev_attr_active.attr,
++ NULL
++};
++
++static const struct attribute_group cpuhp_smt_attr_group = {
++ .attrs = cpuhp_smt_attrs,
++ .name = "smt",
++ NULL
++};
++
++static int __init cpu_smt_state_init(void)
++{
++ return sysfs_create_group(&cpu_subsys.dev_root->kobj,
++ &cpuhp_smt_attr_group);
++}
++
++#else
++static inline int cpu_smt_state_init(void) { return 0; }
++#endif
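Once cpu_smt_state_init() has registered the "smt" group on the cpu subsys
root, the files appear as /sys/devices/system/cpu/smt/control (read-write)
and /sys/devices/system/cpu/smt/active (read-only). A small userspace
sketch that reads both, assuming those paths:

#include <stdio.h>

static void show(const char *path)
{
	char buf[32];
	FILE *f = fopen(path, "r");

	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("%s: %s", path, buf);	/* e.g. "on", "1" */
		fclose(f);
	}
}

int main(void)
{
	show("/sys/devices/system/cpu/smt/control");
	show("/sys/devices/system/cpu/smt/active");
	return 0;
}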
++
+ static int __init cpuhp_sysfs_init(void)
+ {
+ int cpu, ret;
+
++ ret = cpu_smt_state_init();
++ if (ret)
++ return ret;
++
+ ret = sysfs_create_group(&cpu_subsys.dev_root->kobj,
+ &cpuhp_cpu_root_attr_group);
+ if (ret)
+@@ -2010,7 +2265,10 @@ void __init boot_cpu_init(void)
+ /*
+ * Must be called _AFTER_ setting up the per_cpu areas
+ */
+-void __init boot_cpu_state_init(void)
++void __init boot_cpu_hotplug_init(void)
+ {
+- per_cpu_ptr(&cpuhp_state, smp_processor_id())->state = CPUHP_ONLINE;
++#ifdef CONFIG_SMP
++ this_cpu_write(cpuhp_state.booted_once, true);
++#endif
++ this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 211890edf37e..ec945451b9ef 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5788,6 +5788,18 @@ int sched_cpu_activate(unsigned int cpu)
+ struct rq *rq = cpu_rq(cpu);
+ struct rq_flags rf;
+
++#ifdef CONFIG_SCHED_SMT
++ /*
++ * The sched_smt_present static key needs to be evaluated on every
++ * hotplug event because at boot time SMT might be disabled when
++ * the number of booted CPUs is limited.
++ *
++ * If then later a sibling gets hotplugged, then the key would stay
++ * off and SMT scheduling would never be functional.
++ */
++ if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
++ static_branch_enable_cpuslocked(&sched_smt_present);
++#endif
+ set_cpu_active(cpu, true);
+
+ if (sched_smp_initialized) {
+@@ -5885,22 +5897,6 @@ int sched_cpu_dying(unsigned int cpu)
+ }
+ #endif
+
+-#ifdef CONFIG_SCHED_SMT
+-DEFINE_STATIC_KEY_FALSE(sched_smt_present);
+-
+-static void sched_init_smt(void)
+-{
+- /*
+- * We've enumerated all CPUs and will assume that if any CPU
+- * has SMT siblings, CPU0 will too.
+- */
+- if (cpumask_weight(cpu_smt_mask(0)) > 1)
+- static_branch_enable(&sched_smt_present);
+-}
+-#else
+-static inline void sched_init_smt(void) { }
+-#endif
+-
+ void __init sched_init_smp(void)
+ {
+ sched_init_numa();
+@@ -5922,8 +5918,6 @@ void __init sched_init_smp(void)
+ init_sched_rt_class();
+ init_sched_dl_class();
+
+- sched_init_smt();
+-
+ sched_smp_initialized = true;
+ }
+
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index fbfc3f1d368a..8b50eea4b607 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2090,8 +2090,14 @@ retry:
+ sub_rq_bw(&next_task->dl, &rq->dl);
+ set_task_cpu(next_task, later_rq->cpu);
+ add_rq_bw(&next_task->dl, &later_rq->dl);
++
++ /*
++ * Update the later_rq clock here, because the clock is used
++ * by the cpufreq_update_util() inside __add_running_bw().
++ */
++ update_rq_clock(later_rq);
+ add_running_bw(&next_task->dl, &later_rq->dl);
+- activate_task(later_rq, next_task, 0);
++ activate_task(later_rq, next_task, ENQUEUE_NOCLOCK);
+ ret = 1;
+
+ resched_curr(later_rq);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 79f574dba096..183068d22849 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -6183,6 +6183,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
+ }
+
+ #ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
+
+ static inline void set_idle_cores(int cpu, int val)
+ {
+diff --git a/kernel/smp.c b/kernel/smp.c
+index 084c8b3a2681..d86eec5f51c1 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -584,6 +584,8 @@ void __init smp_init(void)
+ num_nodes, (num_nodes > 1 ? "s" : ""),
+ num_cpus, (num_cpus > 1 ? "s" : ""));
+
++ /* Final decision about SMT support */
++ cpu_smt_check_topology();
+ /* Any cleanup work */
+ smp_cpus_done(setup_max_cpus);
+ }
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index ce4fb0e12504..7a076b6c537a 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -79,12 +79,16 @@ static void wakeup_softirqd(void)
+
+ /*
+ * If ksoftirqd is scheduled, we do not want to process pending softirqs
+- * right now. Let ksoftirqd handle this at its own rate, to get fairness.
++ * right now. Let ksoftirqd handle this at its own rate, to get fairness,
++ * unless we're doing some of the synchronous softirqs.
+ */
+-static bool ksoftirqd_running(void)
++#define SOFTIRQ_NOW_MASK ((1 << HI_SOFTIRQ) | (1 << TASKLET_SOFTIRQ))
++static bool ksoftirqd_running(unsigned long pending)
+ {
+ struct task_struct *tsk = __this_cpu_read(ksoftirqd);
+
++ if (pending & SOFTIRQ_NOW_MASK)
++ return false;
+ return tsk && (tsk->state == TASK_RUNNING);
+ }
+
+@@ -329,7 +333,7 @@ asmlinkage __visible void do_softirq(void)
+
+ pending = local_softirq_pending();
+
+- if (pending && !ksoftirqd_running())
++ if (pending && !ksoftirqd_running(pending))
+ do_softirq_own_stack();
+
+ local_irq_restore(flags);
+@@ -356,7 +360,7 @@ void irq_enter(void)
+
+ static inline void invoke_softirq(void)
+ {
+- if (ksoftirqd_running())
++ if (ksoftirqd_running(local_softirq_pending()))
+ return;
+
+ if (!force_irqthreads) {
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index 1ff523dae6e2..e190d1ef3a23 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -260,6 +260,15 @@ retry:
+ err = 0;
+ __cpu_stop_queue_work(stopper1, work1, &wakeq);
+ __cpu_stop_queue_work(stopper2, work2, &wakeq);
++ /*
++ * The waking up of stopper threads has to happen
++ * in the same scheduling context as the queueing.
++ * Otherwise, there is a possibility of one of the
++ * above stoppers being woken up by another CPU,
++ * and preempting us. This would cause us to never
++ * wake up the other stopper.
++ */
++ preempt_disable();
+ unlock:
+ raw_spin_unlock(&stopper2->lock);
+ raw_spin_unlock_irq(&stopper1->lock);
+@@ -271,7 +280,6 @@ unlock:
+ }
+
+ if (!err) {
+- preempt_disable();
+ wake_up_q(&wakeq);
+ preempt_enable();
+ }
+diff --git a/mm/memory.c b/mm/memory.c
+index 01f5464e0fd2..fe497cecd2ab 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1891,6 +1891,9 @@ int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
+ if (addr < vma->vm_start || addr >= vma->vm_end)
+ return -EFAULT;
+
++ if (!pfn_modify_allowed(pfn, pgprot))
++ return -EACCES;
++
+ track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, PFN_DEV));
+
+ ret = insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot,
+@@ -1926,6 +1929,9 @@ static int __vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+
+ track_pfn_insert(vma, &pgprot, pfn);
+
++ if (!pfn_modify_allowed(pfn_t_to_pfn(pfn), pgprot))
++ return -EACCES;
++
+ /*
+ * If we don't have pte special, then we have to use the pfn_valid()
+ * based VM_MIXEDMAP scheme (see vm_normal_page), and thus we *must*
+@@ -1973,6 +1979,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ {
+ pte_t *pte;
+ spinlock_t *ptl;
++ int err = 0;
+
+ pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
+ if (!pte)
+@@ -1980,12 +1987,16 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ arch_enter_lazy_mmu_mode();
+ do {
+ BUG_ON(!pte_none(*pte));
++ if (!pfn_modify_allowed(pfn, prot)) {
++ err = -EACCES;
++ break;
++ }
+ set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
+ pfn++;
+ } while (pte++, addr += PAGE_SIZE, addr != end);
+ arch_leave_lazy_mmu_mode();
+ pte_unmap_unlock(pte - 1, ptl);
+- return 0;
++ return err;
+ }
+
+ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+@@ -1994,6 +2005,7 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+ {
+ pmd_t *pmd;
+ unsigned long next;
++ int err;
+
+ pfn -= addr >> PAGE_SHIFT;
+ pmd = pmd_alloc(mm, pud, addr);
+@@ -2002,9 +2014,10 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+ VM_BUG_ON(pmd_trans_huge(*pmd));
+ do {
+ next = pmd_addr_end(addr, end);
+- if (remap_pte_range(mm, pmd, addr, next,
+- pfn + (addr >> PAGE_SHIFT), prot))
+- return -ENOMEM;
++ err = remap_pte_range(mm, pmd, addr, next,
++ pfn + (addr >> PAGE_SHIFT), prot);
++ if (err)
++ return err;
+ } while (pmd++, addr = next, addr != end);
+ return 0;
+ }
+@@ -2015,6 +2028,7 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
+ {
+ pud_t *pud;
+ unsigned long next;
++ int err;
+
+ pfn -= addr >> PAGE_SHIFT;
+ pud = pud_alloc(mm, p4d, addr);
+@@ -2022,9 +2036,10 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
+ return -ENOMEM;
+ do {
+ next = pud_addr_end(addr, end);
+- if (remap_pmd_range(mm, pud, addr, next,
+- pfn + (addr >> PAGE_SHIFT), prot))
+- return -ENOMEM;
++ err = remap_pmd_range(mm, pud, addr, next,
++ pfn + (addr >> PAGE_SHIFT), prot);
++ if (err)
++ return err;
+ } while (pud++, addr = next, addr != end);
+ return 0;
+ }
+@@ -2035,6 +2050,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ {
+ p4d_t *p4d;
+ unsigned long next;
++ int err;
+
+ pfn -= addr >> PAGE_SHIFT;
+ p4d = p4d_alloc(mm, pgd, addr);
+@@ -2042,9 +2058,10 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ return -ENOMEM;
+ do {
+ next = p4d_addr_end(addr, end);
+- if (remap_pud_range(mm, p4d, addr, next,
+- pfn + (addr >> PAGE_SHIFT), prot))
+- return -ENOMEM;
++ err = remap_pud_range(mm, p4d, addr, next,
++ pfn + (addr >> PAGE_SHIFT), prot);
++ if (err)
++ return err;
+ } while (p4d++, addr = next, addr != end);
+ return 0;
+ }
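The remap_*_range() hunks above all make the same mechanical change: return
the callee's error code instead of collapsing every failure to -ENOMEM, so
the new -EACCES from remap_pte_range() reaches the caller intact. The shape
of the refactor in isolation (errno values as on Linux):

#include <errno.h>
#include <stdio.h>

static int inner(void)
{
	return -EACCES;			/* e.g. a pfn check refused */
}

/* before: any inner failure was reported as -ENOMEM */
static int outer_old(void)
{
	if (inner())
		return -ENOMEM;
	return 0;
}

/* after: the inner error code is propagated unchanged */
static int outer_new(void)
{
	int err = inner();

	if (err)
		return err;
	return 0;
}

int main(void)
{
	printf("old: %d, new: %d\n", outer_old(), outer_new());	/* -12, -13 */
	return 0;
}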
+diff --git a/mm/mprotect.c b/mm/mprotect.c
+index 625608bc8962..6d331620b9e5 100644
+--- a/mm/mprotect.c
++++ b/mm/mprotect.c
+@@ -306,6 +306,42 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+ return pages;
+ }
+
++static int prot_none_pte_entry(pte_t *pte, unsigned long addr,
++ unsigned long next, struct mm_walk *walk)
++{
++ return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
++ 0 : -EACCES;
++}
++
++static int prot_none_hugetlb_entry(pte_t *pte, unsigned long hmask,
++ unsigned long addr, unsigned long next,
++ struct mm_walk *walk)
++{
++ return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
++ 0 : -EACCES;
++}
++
++static int prot_none_test(unsigned long addr, unsigned long next,
++ struct mm_walk *walk)
++{
++ return 0;
++}
++
++static int prot_none_walk(struct vm_area_struct *vma, unsigned long start,
++ unsigned long end, unsigned long newflags)
++{
++ pgprot_t new_pgprot = vm_get_page_prot(newflags);
++ struct mm_walk prot_none_walk = {
++ .pte_entry = prot_none_pte_entry,
++ .hugetlb_entry = prot_none_hugetlb_entry,
++ .test_walk = prot_none_test,
++ .mm = current->mm,
++ .private = &new_pgprot,
++ };
++
++ return walk_page_range(start, end, &prot_none_walk);
++}
++
+ int
+ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
+ unsigned long start, unsigned long end, unsigned long newflags)
+@@ -323,6 +359,19 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
+ return 0;
+ }
+
++ /*
++ * Do PROT_NONE PFN permission checks here when we can still
++ * bail out without undoing a lot of state. This is a rather
++ * uncommon case, so it doesn't need to be very optimized.
++ */
++ if (arch_has_pfn_modify_check() &&
++ (vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
++ (newflags & (VM_READ|VM_WRITE|VM_EXEC)) == 0) {
++ error = prot_none_walk(vma, start, end, newflags);
++ if (error)
++ return error;
++ }
++
+ /*
+ * If we make a private mapping writable we increase our commit;
+ * but (without finer accounting) cannot reduce our commit if we
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 78a015fcec3b..6ac2757d5997 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -2909,6 +2909,35 @@ static int claim_swapfile(struct swap_info_struct *p, struct inode *inode)
+ return 0;
+ }
+
++
++/*
++ * Find out how many pages are allowed for a single swap device. There
++ * are two limiting factors:
++ * 1) the number of bits for the swap offset in the swp_entry_t type, and
++ * 2) the number of bits in the swap pte, as defined by the different
++ * architectures.
++ *
++ * In order to find the largest possible bit mask, a swap entry with
++ * swap type 0 and swap offset ~0UL is created, encoded to a swap pte,
++ * decoded to a swp_entry_t again, and finally the swap offset is
++ * extracted.
++ *
++ * This will mask all the bits from the initial ~0UL mask that can't
++ * be encoded in either the swp_entry_t or the architecture definition
++ * of a swap pte.
++ */
++unsigned long generic_max_swapfile_size(void)
++{
++ return swp_offset(pte_to_swp_entry(
++ swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
++}
++
++/* Can be overridden by an architecture for additional checks. */
++__weak unsigned long max_swapfile_size(void)
++{
++ return generic_max_swapfile_size();
++}
++
+ static unsigned long read_swap_header(struct swap_info_struct *p,
+ union swap_header *swap_header,
+ struct inode *inode)
+@@ -2944,22 +2973,7 @@ static unsigned long read_swap_header(struct swap_info_struct *p,
+ p->cluster_next = 1;
+ p->cluster_nr = 0;
+
+- /*
+- * Find out how many pages are allowed for a single swap
+- * device. There are two limiting factors: 1) the number
+- * of bits for the swap offset in the swp_entry_t type, and
+- * 2) the number of bits in the swap pte as defined by the
+- * different architectures. In order to find the
+- * largest possible bit mask, a swap entry with swap type 0
+- * and swap offset ~0UL is created, encoded to a swap pte,
+- * decoded to a swp_entry_t again, and finally the swap
+- * offset is extracted. This will mask all the bits from
+- * the initial ~0UL mask that can't be encoded in either
+- * the swp_entry_t or the architecture definition of a
+- * swap pte.
+- */
+- maxpages = swp_offset(pte_to_swp_entry(
+- swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
++ maxpages = max_swapfile_size();
+ last_page = swap_header->info.last_page;
+ if (!last_page) {
+ pr_warn("Empty swap-file\n");
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index 578793e97431..f8659f070fc6 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -198,7 +198,6 @@
+ #define X86_FEATURE_CAT_L2 ( 7*32+ 5) /* Cache Allocation Technology L2 */
+ #define X86_FEATURE_CDP_L3 ( 7*32+ 6) /* Code and Data Prioritization L3 */
+ #define X86_FEATURE_INVPCID_SINGLE ( 7*32+ 7) /* Effectively INVPCID && CR4.PCIDE=1 */
+-
+ #define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */
+ #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
+ #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */
+@@ -207,13 +206,20 @@
+ #define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
+ #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
+ #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */
+-
++#define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
++#define X86_FEATURE_SSBD ( 7*32+17) /* Speculative Store Bypass Disable */
+ #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
+ #define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* "" Fill RSB on context switches */
+ #define X86_FEATURE_SEV ( 7*32+20) /* AMD Secure Encrypted Virtualization */
+-
+ #define X86_FEATURE_USE_IBPB ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
+ #define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
++#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. */
++#define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* "" AMD SSBD implementation via LS_CFG MSR */
++#define X86_FEATURE_IBRS ( 7*32+25) /* Indirect Branch Restricted Speculation */
++#define X86_FEATURE_IBPB ( 7*32+26) /* Indirect Branch Prediction Barrier */
++#define X86_FEATURE_STIBP ( 7*32+27) /* Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_ZEN ( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
++#define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* "" L1TF workaround PTE inversion */
+
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
+@@ -274,9 +280,10 @@
+ #define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */
+ #define X86_FEATURE_IRPERF (13*32+ 1) /* Instructions Retired Count */
+ #define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* Always save/restore FP error pointers */
+-#define X86_FEATURE_IBPB (13*32+12) /* Indirect Branch Prediction Barrier */
+-#define X86_FEATURE_IBRS (13*32+14) /* Indirect Branch Restricted Speculation */
+-#define X86_FEATURE_STIBP (13*32+15) /* Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_AMD_IBPB (13*32+12) /* "" Indirect Branch Prediction Barrier */
++#define X86_FEATURE_AMD_IBRS (13*32+14) /* "" Indirect Branch Restricted Speculation */
++#define X86_FEATURE_AMD_STIBP (13*32+15) /* "" Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */
+
+ /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
+ #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
+@@ -333,7 +340,9 @@
+ #define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_FLUSH_L1D (18*32+28) /* Flush L1D cache */
+ #define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
++#define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* "" Speculative Store Bypass Disable */
+
+ /*
+ * BUG word(s)
+@@ -363,5 +372,7 @@
+ #define X86_BUG_CPU_MELTDOWN X86_BUG(14) /* CPU is affected by meltdown attack and needs kernel page table isolation */
+ #define X86_BUG_SPECTRE_V1 X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
+ #define X86_BUG_SPECTRE_V2 X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
++#define X86_BUG_SPEC_STORE_BYPASS X86_BUG(17) /* CPU is affected by speculative store bypass attack */
++#define X86_BUG_L1TF X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
+
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
+index af5f8c2df87a..db9f15f5db04 100644
+--- a/tools/include/uapi/linux/prctl.h
++++ b/tools/include/uapi/linux/prctl.h
+@@ -207,4 +207,16 @@ struct prctl_mm_map {
+ # define PR_SVE_VL_LEN_MASK 0xffff
+ # define PR_SVE_VL_INHERIT (1 << 17) /* inherit across exec */
+
++/* Per task speculation control */
++#define PR_GET_SPECULATION_CTRL 52
++#define PR_SET_SPECULATION_CTRL 53
++/* Speculation control variants */
++# define PR_SPEC_STORE_BYPASS 0
++/* Return and control values for PR_SET/GET_SPECULATION_CTRL */
++# define PR_SPEC_NOT_AFFECTED 0
++# define PR_SPEC_PRCTL (1UL << 0)
++# define PR_SPEC_ENABLE (1UL << 1)
++# define PR_SPEC_DISABLE (1UL << 2)
++# define PR_SPEC_FORCE_DISABLE (1UL << 3)
++
+ #endif /* _LINUX_PRCTL_H */
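[These uapi constants are the userspace knob for per-task Speculative Store Bypass control. A minimal sketch of how a process opts itself out (constants re-defined here only for builds against pre-4.17 headers; error handling kept terse):

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_SPECULATION_CTRL
    #define PR_GET_SPECULATION_CTRL 52
    #define PR_SET_SPECULATION_CTRL 53
    #define PR_SPEC_STORE_BYPASS    0
    #define PR_SPEC_DISABLE         (1UL << 2)
    #endif

    int main(void)
    {
            if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                      PR_SPEC_DISABLE, 0, 0))
                    perror("PR_SET_SPECULATION_CTRL");

            long state = prctl(PR_GET_SPECULATION_CTRL,
                               PR_SPEC_STORE_BYPASS, 0, 0, 0);
            printf("ssb: 0x%lx disabled=%d\n", state,
                   !!(state & PR_SPEC_DISABLE));
            return 0;
    }

PR_SPEC_FORCE_DISABLE additionally forbids re-enabling for the rest of the task's lifetime.]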
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-08-09 10:55 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-08-09 10:55 UTC (permalink / raw
To: gentoo-commits
commit: 8901714e54f8d9f28b0236baafa53fea8ace890b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 9 10:55:40 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 9 10:55:40 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8901714e
Linux patch 4.17.14
0000_README | 4 +
1013_linux-4.17.14.patch | 692 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 696 insertions(+)
diff --git a/0000_README b/0000_README
index ba82da5..102b8df 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-4.17.13.patch
From: http://www.kernel.org
Desc: Linux 4.17.13
+Patch: 1013_linux-4.17.14.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.14
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1013_linux-4.17.14.patch b/1013_linux-4.17.14.patch
new file mode 100644
index 0000000..1e63651
--- /dev/null
+++ b/1013_linux-4.17.14.patch
@@ -0,0 +1,692 @@
+diff --git a/Makefile b/Makefile
+index 2534e51de1db..ce4248f558d1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
+index 414dc7e7c950..041b77692bfa 100644
+--- a/arch/x86/events/intel/uncore.h
++++ b/arch/x86/events/intel/uncore.h
+@@ -23,7 +23,7 @@
+ #define UNCORE_PCI_DEV_TYPE(data) ((data >> 8) & 0xff)
+ #define UNCORE_PCI_DEV_IDX(data) (data & 0xff)
+ #define UNCORE_EXTRA_PCI_DEV 0xff
+-#define UNCORE_EXTRA_PCI_DEV_MAX 3
++#define UNCORE_EXTRA_PCI_DEV_MAX 4
+
+ #define UNCORE_EVENT_CONSTRAINT(c, n) EVENT_CONSTRAINT(c, n, 0xff)
+
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 77076a102e34..df2d69cb136a 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -1029,6 +1029,7 @@ void snbep_uncore_cpu_init(void)
+ enum {
+ SNBEP_PCI_QPI_PORT0_FILTER,
+ SNBEP_PCI_QPI_PORT1_FILTER,
++ BDX_PCI_QPI_PORT2_FILTER,
+ HSWEP_PCI_PCU_3,
+ };
+
+@@ -3286,15 +3287,18 @@ static const struct pci_device_id bdx_uncore_pci_ids[] = {
+ },
+ { /* QPI Port 0 filter */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x6f86),
+- .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV, 0),
++ .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
++ SNBEP_PCI_QPI_PORT0_FILTER),
+ },
+ { /* QPI Port 1 filter */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x6f96),
+- .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV, 1),
++ .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
++ SNBEP_PCI_QPI_PORT1_FILTER),
+ },
+ { /* QPI Port 2 filter */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x6f46),
+- .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV, 2),
++ .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
++ BDX_PCI_QPI_PORT2_FILTER),
+ },
+ { /* PCU.3 (for Capability registers) */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x6fc0),
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 47ab2d9d02d9..77938b512a71 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2174,11 +2174,12 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+ if (part->policy && op_is_write(bio_op(bio))) {
+ char b[BDEVNAME_SIZE];
+
+- printk(KERN_ERR
++ WARN_ONCE(1,
+ "generic_make_request: Trying to write "
+ "to read-only block-device %s (partno %d)\n",
+ bio_devname(bio, b), part->partno);
+- return true;
++ /* Older lvm-tools actually trigger this */
++ return false;
+ }
+
+ return false;
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index d7267dd9c7bf..6fca5e64cffb 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -377,6 +377,7 @@ static int i2c_imx_dma_xfer(struct imx_i2c_struct *i2c_imx,
+ goto err_desc;
+ }
+
++ reinit_completion(&dma->cmd_complete);
+ txdesc->callback = i2c_imx_dma_callback;
+ txdesc->callback_param = i2c_imx;
+ if (dma_submit_error(dmaengine_submit(txdesc))) {
+@@ -631,7 +632,6 @@ static int i2c_imx_dma_write(struct imx_i2c_struct *i2c_imx,
+ * The first byte must be transmitted by the CPU.
+ */
+ imx_i2c_write_reg(msgs->addr << 1, i2c_imx, IMX_I2C_I2DR);
+- reinit_completion(&i2c_imx->dma->cmd_complete);
+ time_left = wait_for_completion_timeout(
+ &i2c_imx->dma->cmd_complete,
+ msecs_to_jiffies(DMA_TIMEOUT));
+@@ -690,7 +690,6 @@ static int i2c_imx_dma_read(struct imx_i2c_struct *i2c_imx,
+ if (result)
+ return result;
+
+- reinit_completion(&i2c_imx->dma->cmd_complete);
+ time_left = wait_for_completion_timeout(
+ &i2c_imx->dma->cmd_complete,
+ msecs_to_jiffies(DMA_TIMEOUT));
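[The i2c-imx hunks are a completion-API ordering fix: the DMA callback can run before the CPU reaches a reinit_completion() placed after submission, and re-initializing at that point erases the already-delivered event, leaving wait_for_completion_timeout() to time out. The safe shape, sketched in kernel style (identifiers follow the driver's naming and are illustrative):

    reinit_completion(&dma->cmd_complete);      /* arm first...             */
    dmaengine_submit(txdesc);                   /* ...then publish the work */
    dma_async_issue_pending(dma->chan_using);

    time_left = wait_for_completion_timeout(&dma->cmd_complete,
                                            msecs_to_jiffies(DMA_TIMEOUT));
    if (!time_left)
            /* handle timeout */;
]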
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 89a4999fa631..c8731568f9c4 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -2141,6 +2141,7 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
+ msleep(1000);
+
+ qla24xx_disable_vp(vha);
++ qla2x00_wait_for_sess_deletion(vha);
+
+ vha->flags.delete_progress = 1;
+
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 3c4c84ed0f0f..21ffbe694acd 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -213,6 +213,7 @@ void qla2x00_handle_login_done_event(struct scsi_qla_host *, fc_port_t *,
+ int qla24xx_post_gnl_work(struct scsi_qla_host *, fc_port_t *);
+ int qla24xx_async_abort_cmd(srb_t *);
+ int qla24xx_post_relogin_work(struct scsi_qla_host *vha);
++void qla2x00_wait_for_sess_deletion(scsi_qla_host_t *);
+
+ /*
+ * Global Functions in qla_mid.c source file.
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index cbfbab5d9a59..5ee8730d1d5c 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -3712,6 +3712,10 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ return rval;
+
+ done_free_sp:
++ spin_lock_irqsave(&vha->hw->vport_slock, flags);
++ list_del(&sp->elem);
++ spin_unlock_irqrestore(&vha->hw->vport_slock, flags);
++
+ if (sp->u.iocb_cmd.u.ctarg.req) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt),
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 0cb552268be3..26da2b286f90 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1518,11 +1518,10 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun,
+
+ wait_for_completion(&tm_iocb->u.tmf.comp);
+
+- rval = tm_iocb->u.tmf.comp_status == CS_COMPLETE ?
+- QLA_SUCCESS : QLA_FUNCTION_FAILED;
++ rval = tm_iocb->u.tmf.data;
+
+- if ((rval != QLA_SUCCESS) || tm_iocb->u.tmf.data) {
+- ql_dbg(ql_dbg_taskm, vha, 0x8030,
++ if (rval != QLA_SUCCESS) {
++ ql_log(ql_log_warn, vha, 0x8030,
+ "TM IOCB failed (%x).\n", rval);
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index 37ae0f6d8ae5..59fd5a9dfeb8 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -222,6 +222,8 @@ qla2xxx_get_qpair_sp(struct qla_qpair *qpair, fc_port_t *fcport, gfp_t flag)
+ sp->fcport = fcport;
+ sp->iocbs = 1;
+ sp->vha = qpair->vha;
++ INIT_LIST_HEAD(&sp->elem);
++
+ done:
+ if (!sp)
+ QLA_QPAIR_MARK_NOT_BUSY(qpair);
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 68560a097ae1..bd5ba6acea7a 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -631,6 +631,9 @@ qla2x00_async_event(scsi_qla_host_t *vha, struct rsp_que *rsp, uint16_t *mb)
+ unsigned long flags;
+ fc_port_t *fcport = NULL;
+
++ if (!vha->hw->flags.fw_started)
++ return;
++
+ /* Setup to process RIO completion. */
+ handle_cnt = 0;
+ if (IS_CNA_CAPABLE(ha))
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index d8a36c13aeda..7a50eba9d496 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -4212,6 +4212,9 @@ qla25xx_init_req_que(struct scsi_qla_host *vha, struct req_que *req)
+ mbx_cmd_t *mcp = &mc;
+ struct qla_hw_data *ha = vha->hw;
+
++ if (!ha->flags.fw_started)
++ return QLA_SUCCESS;
++
+ ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10d3,
+ "Entered %s.\n", __func__);
+
+@@ -4281,6 +4284,9 @@ qla25xx_init_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp)
+ mbx_cmd_t *mcp = &mc;
+ struct qla_hw_data *ha = vha->hw;
+
++ if (!ha->flags.fw_started)
++ return QLA_SUCCESS;
++
+ ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10d6,
+ "Entered %s.\n", __func__);
+
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index f6f0a759a7c2..aa727d07b702 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -152,11 +152,18 @@ int
+ qla24xx_disable_vp(scsi_qla_host_t *vha)
+ {
+ unsigned long flags;
+- int ret;
++ int ret = QLA_SUCCESS;
++ fc_port_t *fcport;
++
++ if (vha->hw->flags.fw_started)
++ ret = qla24xx_control_vp(vha, VCE_COMMAND_DISABLE_VPS_LOGO_ALL);
+
+- ret = qla24xx_control_vp(vha, VCE_COMMAND_DISABLE_VPS_LOGO_ALL);
+ atomic_set(&vha->loop_state, LOOP_DOWN);
+ atomic_set(&vha->loop_down_timer, LOOP_DOWN_TIME);
++ list_for_each_entry(fcport, &vha->vp_fcports, list)
++ fcport->logout_on_delete = 0;
++
++ qla2x00_mark_all_devices_lost(vha, 0);
+
+ /* Remove port id from vp target map */
+ spin_lock_irqsave(&vha->hw->hardware_lock, flags);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 2b0816dfe9bd..88bd730d16f3 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -303,6 +303,7 @@ static void qla2x00_free_device(scsi_qla_host_t *);
+ static int qla2xxx_map_queues(struct Scsi_Host *shost);
+ static void qla2x00_destroy_deferred_work(struct qla_hw_data *);
+
++
+ struct scsi_host_template qla2xxx_driver_template = {
+ .module = THIS_MODULE,
+ .name = QLA2XXX_DRIVER_NAME,
+@@ -1147,7 +1148,7 @@ static inline int test_fcport_count(scsi_qla_host_t *vha)
+ * qla2x00_wait_for_sess_deletion can only be called from remove_one.
+ * it has dependency on UNLOADING flag to stop device discovery
+ */
+-static void
++void
+ qla2x00_wait_for_sess_deletion(scsi_qla_host_t *vha)
+ {
+ qla2x00_mark_all_devices_lost(vha, 0);
+@@ -3603,6 +3604,8 @@ qla2x00_remove_one(struct pci_dev *pdev)
+
+ base_vha = pci_get_drvdata(pdev);
+ ha = base_vha->hw;
++ ql_log(ql_log_info, base_vha, 0xb079,
++ "Removing driver\n");
+
+ /* Indicate device removal to prevent future board_disable and wait
+ * until any pending board_disable has completed. */
+@@ -3625,6 +3628,21 @@ qla2x00_remove_one(struct pci_dev *pdev)
+ }
+ qla2x00_wait_for_hba_ready(base_vha);
+
++ if (IS_QLA25XX(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha)) {
++ if (ha->flags.fw_started)
++ qla2x00_abort_isp_cleanup(base_vha);
++ } else if (!IS_QLAFX00(ha)) {
++ if (IS_QLA8031(ha)) {
++ ql_dbg(ql_dbg_p3p, base_vha, 0xb07e,
++ "Clearing fcoe driver presence.\n");
++ if (qla83xx_clear_drv_presence(base_vha) != QLA_SUCCESS)
++ ql_dbg(ql_dbg_p3p, base_vha, 0xb079,
++ "Error while clearing DRV-Presence.\n");
++ }
++
++ qla2x00_try_to_stop_firmware(base_vha);
++ }
++
+ qla2x00_wait_for_sess_deletion(base_vha);
+
+ /*
+@@ -3648,14 +3666,6 @@ qla2x00_remove_one(struct pci_dev *pdev)
+
+ qla2x00_delete_all_vps(ha, base_vha);
+
+- if (IS_QLA8031(ha)) {
+- ql_dbg(ql_dbg_p3p, base_vha, 0xb07e,
+- "Clearing fcoe driver presence.\n");
+- if (qla83xx_clear_drv_presence(base_vha) != QLA_SUCCESS)
+- ql_dbg(ql_dbg_p3p, base_vha, 0xb079,
+- "Error while clearing DRV-Presence.\n");
+- }
+-
+ qla2x00_abort_all_cmds(base_vha, DID_NO_CONNECT << 16);
+
+ qla2x00_dfs_remove(base_vha);
+@@ -3715,24 +3725,6 @@ qla2x00_free_device(scsi_qla_host_t *vha)
+ qla2x00_stop_timer(vha);
+
+ qla25xx_delete_queues(vha);
+-
+- if (ha->flags.fce_enabled)
+- qla2x00_disable_fce_trace(vha, NULL, NULL);
+-
+- if (ha->eft)
+- qla2x00_disable_eft_trace(vha);
+-
+- if (IS_QLA25XX(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha)) {
+- if (ha->flags.fw_started)
+- qla2x00_abort_isp_cleanup(vha);
+- } else {
+- if (ha->flags.fw_started) {
+- /* Stop currently executing firmware. */
+- qla2x00_try_to_stop_firmware(vha);
+- ha->flags.fw_started = 0;
+- }
+- }
+-
+ vha->flags.online = 0;
+
+ /* turn-off interrupts on the card */
+@@ -6022,8 +6014,9 @@ qla2x00_do_dpc(void *data)
+ set_bit(ISP_ABORT_NEEDED, &base_vha->dpc_flags);
+ }
+
+- if (test_and_clear_bit(ISP_ABORT_NEEDED,
+- &base_vha->dpc_flags)) {
++ if (test_and_clear_bit
++ (ISP_ABORT_NEEDED, &base_vha->dpc_flags) &&
++ !test_bit(UNLOADING, &base_vha->dpc_flags)) {
+
+ ql_dbg(ql_dbg_dpc, base_vha, 0x4007,
+ "ISP abort scheduled.\n");
+diff --git a/drivers/scsi/qla2xxx/qla_sup.c b/drivers/scsi/qla2xxx/qla_sup.c
+index 04458eb19d38..4499c787165f 100644
+--- a/drivers/scsi/qla2xxx/qla_sup.c
++++ b/drivers/scsi/qla2xxx/qla_sup.c
+@@ -1880,6 +1880,9 @@ qla24xx_beacon_off(struct scsi_qla_host *vha)
+ if (IS_P3P_TYPE(ha))
+ return QLA_SUCCESS;
+
++ if (!ha->flags.fw_started)
++ return QLA_SUCCESS;
++
+ ha->beacon_blink_led = 0;
+
+ if (IS_QLA2031(ha) || IS_QLA27XX(ha))
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index e99b329002cf..47986c0912f0 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4245,6 +4245,7 @@ int try_release_extent_mapping(struct extent_map_tree *map,
+ struct extent_map *em;
+ u64 start = page_offset(page);
+ u64 end = start + PAGE_SIZE - 1;
++ struct btrfs_inode *btrfs_inode = BTRFS_I(page->mapping->host);
+
+ if (gfpflags_allow_blocking(mask) &&
+ page->mapping->host->i_size > SZ_16M) {
+@@ -4267,6 +4268,8 @@ int try_release_extent_mapping(struct extent_map_tree *map,
+ extent_map_end(em) - 1,
+ EXTENT_LOCKED | EXTENT_WRITEBACK,
+ 0, NULL)) {
++ set_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
++ &btrfs_inode->runtime_flags);
+ remove_extent_mapping(map, em);
+ /* once for the rb tree */
+ free_extent_map(em);
+diff --git a/fs/jfs/jfs_dinode.h b/fs/jfs/jfs_dinode.h
+index 395c4c0d0f06..1682a87c00b2 100644
+--- a/fs/jfs/jfs_dinode.h
++++ b/fs/jfs/jfs_dinode.h
+@@ -115,6 +115,13 @@ struct dinode {
+ dxd_t _dxd; /* 16: */
+ union {
+ __le32 _rdev; /* 4: */
++ /*
++ * The fast symlink area
++ * is expected to overflow
++ * into _inlineea when
++ * needed (which will clear
++ * INLINEEA).
++ */
+ u8 _fastsymlink[128];
+ } _u;
+ u8 _inlineea[128];
+diff --git a/fs/jfs/jfs_incore.h b/fs/jfs/jfs_incore.h
+index 1f26d1910409..9940a1e04cbf 100644
+--- a/fs/jfs/jfs_incore.h
++++ b/fs/jfs/jfs_incore.h
+@@ -87,6 +87,7 @@ struct jfs_inode_info {
+ struct {
+ unchar _unused[16]; /* 16: */
+ dxd_t _dxd; /* 16: */
++ /* _inline may overflow into _inline_ea when needed */
+ unchar _inline[128]; /* 128: inline symlink */
+ /* _inline_ea may overlay the last part of
+ * file._xtroot if maxentry = XTROOTINITSLOT
+diff --git a/fs/jfs/super.c b/fs/jfs/super.c
+index 1b9264fd54b6..f08571433aba 100644
+--- a/fs/jfs/super.c
++++ b/fs/jfs/super.c
+@@ -967,8 +967,7 @@ static int __init init_jfs_fs(void)
+ jfs_inode_cachep =
+ kmem_cache_create_usercopy("jfs_ip", sizeof(struct jfs_inode_info),
+ 0, SLAB_RECLAIM_ACCOUNT|SLAB_MEM_SPREAD|SLAB_ACCOUNT,
+- offsetof(struct jfs_inode_info, i_inline),
+- sizeof_field(struct jfs_inode_info, i_inline),
++ offsetof(struct jfs_inode_info, i_inline), IDATASIZE,
+ init_once);
+ if (jfs_inode_cachep == NULL)
+ return -ENOMEM;
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index c60f3d32ee91..a6797986b625 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -491,15 +491,17 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
+ if (size > PSIZE) {
+ /*
+ * To keep the rest of the code simple, allocate a
+- * contiguous buffer to work with
++ * contiguous buffer to work with. Make the buffer large
++ * enough to make use of the whole extent.
+ */
+- ea_buf->xattr = kmalloc(size, GFP_KERNEL);
++ ea_buf->max_size = (size + sb->s_blocksize - 1) &
++ ~(sb->s_blocksize - 1);
++
++ ea_buf->xattr = kmalloc(ea_buf->max_size, GFP_KERNEL);
+ if (ea_buf->xattr == NULL)
+ return -ENOMEM;
+
+ ea_buf->flag = EA_MALLOC;
+- ea_buf->max_size = (size + sb->s_blocksize - 1) &
+- ~(sb->s_blocksize - 1);
+
+ if (ea_size == 0)
+ return 0;
+diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c
+index 2135b8e67dcc..1035c2c97886 100644
+--- a/fs/xfs/libxfs/xfs_attr_leaf.c
++++ b/fs/xfs/libxfs/xfs_attr_leaf.c
+@@ -803,9 +803,8 @@ xfs_attr_shortform_to_leaf(
+ ASSERT(blkno == 0);
+ error = xfs_attr3_leaf_create(args, blkno, &bp);
+ if (error) {
+- error = xfs_da_shrink_inode(args, 0, bp);
+- bp = NULL;
+- if (error)
++ /* xfs_attr3_leaf_create may not have instantiated a block */
++ if (bp && (xfs_da_shrink_inode(args, 0, bp) != 0))
+ goto out;
+ xfs_idata_realloc(dp, size, XFS_ATTR_FORK); /* try to put */
+ memcpy(ifp->if_u1.if_data, tmpbuffer, size); /* it back */
+diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
+index 9a18f69f6e96..817899961f48 100644
+--- a/fs/xfs/xfs_icache.c
++++ b/fs/xfs/xfs_icache.c
+@@ -308,6 +308,46 @@ xfs_reinit_inode(
+ return error;
+ }
+
++/*
++ * If we are allocating a new inode, then check what was returned is
++ * actually a free, empty inode. If we are not allocating an inode,
++ * then check we didn't find a free inode.
++ *
++ * Returns:
++ * 0 if the inode free state matches the lookup context
++ * -ENOENT if the inode is free and we are not allocating
++ * -EFSCORRUPTED if there is any state mismatch at all
++ */
++static int
++xfs_iget_check_free_state(
++ struct xfs_inode *ip,
++ int flags)
++{
++ if (flags & XFS_IGET_CREATE) {
++ /* should be a free inode */
++ if (VFS_I(ip)->i_mode != 0) {
++ xfs_warn(ip->i_mount,
++"Corruption detected! Free inode 0x%llx not marked free! (mode 0x%x)",
++ ip->i_ino, VFS_I(ip)->i_mode);
++ return -EFSCORRUPTED;
++ }
++
++ if (ip->i_d.di_nblocks != 0) {
++ xfs_warn(ip->i_mount,
++"Corruption detected! Free inode 0x%llx has blocks allocated!",
++ ip->i_ino);
++ return -EFSCORRUPTED;
++ }
++ return 0;
++ }
++
++ /* should be an allocated inode */
++ if (VFS_I(ip)->i_mode == 0)
++ return -ENOENT;
++
++ return 0;
++}
++
+ /*
+ * Check the validity of the inode we just found in the cache
+ */
+@@ -357,12 +397,12 @@ xfs_iget_cache_hit(
+ }
+
+ /*
+- * If lookup is racing with unlink return an error immediately.
++ * Check the inode free state is valid. This also detects lookup
++ * racing with unlinks.
+ */
+- if (VFS_I(ip)->i_mode == 0 && !(flags & XFS_IGET_CREATE)) {
+- error = -ENOENT;
++ error = xfs_iget_check_free_state(ip, flags);
++ if (error)
+ goto out_error;
+- }
+
+ /*
+ * If IRECLAIMABLE is set, we've torn down the VFS inode already.
+@@ -485,29 +525,12 @@ xfs_iget_cache_miss(
+
+
+ /*
+- * If we are allocating a new inode, then check what was returned is
+- * actually a free, empty inode. If we are not allocating an inode,
+- * the check we didn't find a free inode.
++ * Check the inode free state is valid. This also detects lookup
++ * racing with unlinks.
+ */
+- if (flags & XFS_IGET_CREATE) {
+- if (VFS_I(ip)->i_mode != 0) {
+- xfs_warn(mp,
+-"Corruption detected! Free inode 0x%llx not marked free on disk",
+- ino);
+- error = -EFSCORRUPTED;
+- goto out_destroy;
+- }
+- if (ip->i_d.di_nblocks != 0) {
+- xfs_warn(mp,
+-"Corruption detected! Free inode 0x%llx has blocks allocated!",
+- ino);
+- error = -EFSCORRUPTED;
+- goto out_destroy;
+- }
+- } else if (VFS_I(ip)->i_mode == 0) {
+- error = -ENOENT;
++ error = xfs_iget_check_free_state(ip, flags);
++ if (error)
+ goto out_destroy;
+- }
+
+ /*
+ * Preload the radix tree so we can insert safely under the
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index a0233edc0718..72341f7c5673 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -165,6 +165,7 @@ void ring_buffer_record_enable(struct ring_buffer *buffer);
+ void ring_buffer_record_off(struct ring_buffer *buffer);
+ void ring_buffer_record_on(struct ring_buffer *buffer);
+ int ring_buffer_record_is_on(struct ring_buffer *buffer);
++int ring_buffer_record_is_set_on(struct ring_buffer *buffer);
+ void ring_buffer_record_disable_cpu(struct ring_buffer *buffer, int cpu);
+ void ring_buffer_record_enable_cpu(struct ring_buffer *buffer, int cpu);
+
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index facfecfc543c..48b70c368f73 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -1067,6 +1067,13 @@ static int irq_setup_forced_threading(struct irqaction *new)
+ if (new->flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT))
+ return 0;
+
++ /*
++ * No further action required for interrupts which are requested as
++ * threaded interrupts already
++ */
++ if (new->handler == irq_default_primary_handler)
++ return 0;
++
+ new->flags |= IRQF_ONESHOT;
+
+ /*
+@@ -1074,7 +1081,7 @@ static int irq_setup_forced_threading(struct irqaction *new)
+ * thread handler. We force thread them as well by creating a
+ * secondary action.
+ */
+- if (new->handler != irq_default_primary_handler && new->thread_fn) {
++ if (new->handler && new->thread_fn) {
+ /* Allocate the secondary action */
+ new->secondary = kzalloc(sizeof(struct irqaction), GFP_KERNEL);
+ if (!new->secondary)
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index 8a040bcaa033..ce4fb0e12504 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -387,7 +387,7 @@ static inline void tick_irq_exit(void)
+
+ /* Make sure that timer wheel updates are propagated */
+ if ((idle_cpu(cpu) && !need_resched()) || tick_nohz_full_cpu(cpu)) {
+- if (!in_interrupt())
++ if (!in_irq())
+ tick_nohz_irq_exit();
+ }
+ #endif
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index da9455a6b42b..5b33e2f5c0ed 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -642,7 +642,7 @@ static void tick_nohz_restart(struct tick_sched *ts, ktime_t now)
+
+ static inline bool local_timer_softirq_pending(void)
+ {
+- return local_softirq_pending() & TIMER_SOFTIRQ;
++ return local_softirq_pending() & BIT(TIMER_SOFTIRQ);
+ }
+
+ static ktime_t tick_nohz_next_event(struct tick_sched *ts, int cpu)
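[The tick-sched one-liner fixes a mask-versus-index confusion: softirq numbers are bit positions, not masks. With HI_SOFTIRQ = 0 and TIMER_SOFTIRQ = 1, the old expression masked with the literal value 1 and therefore tested the HI bit. Worked out:

    u32 pending = BIT(TIMER_SOFTIRQ);   /* 0x2: only the timer softirq raised */

    pending & TIMER_SOFTIRQ;            /* 0x2 & 0x1 == 0: timer missed, plus a
                                           false hit whenever HI_SOFTIRQ is set */
    pending & BIT(TIMER_SOFTIRQ);       /* 0x2 & 0x2 != 0: correct */
]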
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index c9cb9767d49b..2bf2a6c7c18e 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -3226,6 +3226,22 @@ int ring_buffer_record_is_on(struct ring_buffer *buffer)
+ return !atomic_read(&buffer->record_disabled);
+ }
+
++/**
++ * ring_buffer_record_is_set_on - return true if the ring buffer is set writable
++ * @buffer: The ring buffer whose writable state to check
++ *
++ * Returns true if the ring buffer is set writable by ring_buffer_record_on().
++ * Note that this does NOT mean it is in a writable state.
++ *
++ * It may return true when the ring buffer has been disabled by
++ * ring_buffer_record_disable(), as that is a temporary disabling of
++ * the ring buffer.
++ */
++int ring_buffer_record_is_set_on(struct ring_buffer *buffer)
++{
++ return !(atomic_read(&buffer->record_disabled) & RB_BUFFER_OFF);
++}
++
+ /**
+ * ring_buffer_record_disable_cpu - stop all writes into the cpu_buffer
+ * @buffer: The ring buffer to stop writes to.
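[record_disabled serves double duty, which is why two predicates now exist: ring_buffer_record_disable() bumps it as a counter for transient pauses, while ring_buffer_record_off() sets a dedicated flag bit (RB_BUFFER_OFF, defined as 1 << 20 in this kernel's ring_buffer.c; an assumption worth verifying against the source). In outline:

    /* sketch of the two predicates over the same atomic */
    int is_on(atomic_t *rd)     { return !atomic_read(rd); }
    int is_set_on(atomic_t *rd) { return !(atomic_read(rd) & RB_BUFFER_OFF); }

The trace.c hunk below relies on the second form: when swapping buffers for a max-latency snapshot it must mirror a deliberate tracing_off() onto max_buffer without being confused by a transient disable.]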
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 4e67d0020337..a583b6494b95 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1375,6 +1375,12 @@ update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
+
+ arch_spin_lock(&tr->max_lock);
+
++ /* Inherit the recordable setting from trace_buffer */
++ if (ring_buffer_record_is_set_on(tr->trace_buffer.buffer))
++ ring_buffer_record_on(tr->max_buffer.buffer);
++ else
++ ring_buffer_record_off(tr->max_buffer.buffer);
++
+ buf = tr->trace_buffer.buffer;
+ tr->trace_buffer.buffer = tr->max_buffer.buffer;
+ tr->max_buffer.buffer = buf;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index adc434752d67..13a203157dbe 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1013,8 +1013,8 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
+
+ if (nlk->ngroups == 0)
+ groups = 0;
+- else
+- groups &= (1ULL << nlk->ngroups) - 1;
++ else if (nlk->ngroups < 8*sizeof(groups))
++ groups &= (1UL << nlk->ngroups) - 1;
+
+ bound = nlk->bound;
+ if (bound) {
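[The af_netlink change closes a shift-width hazard: C leaves a shift by >= the operand's width undefined, so masking with (1UL << ngroups) - 1 is only legal when ngroups is strictly less than the bit width of groups. A reusable guard, as a hypothetical helper:

    /* low_bits(n): mask of the n lowest bits, safe for n == BITS_PER_LONG */
    static inline unsigned long low_bits(unsigned int n)
    {
            return n >= BITS_PER_LONG ? ~0UL : (1UL << n) - 1;
    }

    groups &= low_bits(nlk->ngroups);   /* equivalent to the fixed branch */

When ngroups fills the type, every bit of groups is a legitimate subscription, so skipping the mask (as the patch does) is the right fallback.]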
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-08-07 18:10 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-08-07 18:10 UTC (permalink / raw
To: gentoo-commits
commit: 88db9e5fa6f772f48e71e0ed79fd01a8e4e4c66a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug 7 18:10:08 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 7 18:10:08 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=88db9e5f
Linux patch 4.17.13
0000_README | 4 +
1012_linux-4.17.13.patch | 1052 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1056 insertions(+)
diff --git a/0000_README b/0000_README
index 6e0bb48..ba82da5 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 1011_linux-4.17.12.patch
From: http://www.kernel.org
Desc: Linux 4.17.12
+Patch: 1012_linux-4.17.13.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.13
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1012_linux-4.17.13.patch b/1012_linux-4.17.13.patch
new file mode 100644
index 0000000..d684e3c
--- /dev/null
+++ b/1012_linux-4.17.13.patch
@@ -0,0 +1,1052 @@
+diff --git a/Makefile b/Makefile
+index 790e8faf0ddc..2534e51de1db 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 3166b9674429..b9699e63ceda 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -981,7 +981,7 @@ ENTRY(\sym)
+
+ call \do_sym
+
+- jmp error_exit /* %ebx: no swapgs flag */
++ jmp error_exit
+ .endif
+ END(\sym)
+ .endm
+@@ -1222,7 +1222,6 @@ END(paranoid_exit)
+
+ /*
+ * Save all registers in pt_regs, and switch GS if needed.
+- * Return: EBX=0: came from user mode; EBX=1: otherwise
+ */
+ ENTRY(error_entry)
+ UNWIND_HINT_FUNC
+@@ -1269,7 +1268,6 @@ ENTRY(error_entry)
+ * for these here too.
+ */
+ .Lerror_kernelspace:
+- incl %ebx
+ leaq native_irq_return_iret(%rip), %rcx
+ cmpq %rcx, RIP+8(%rsp)
+ je .Lerror_bad_iret
+@@ -1303,28 +1301,20 @@ ENTRY(error_entry)
+
+ /*
+ * Pretend that the exception came from user mode: set up pt_regs
+- * as if we faulted immediately after IRET and clear EBX so that
+- * error_exit knows that we will be returning to user mode.
++ * as if we faulted immediately after IRET.
+ */
+ mov %rsp, %rdi
+ call fixup_bad_iret
+ mov %rax, %rsp
+- decl %ebx
+ jmp .Lerror_entry_from_usermode_after_swapgs
+ END(error_entry)
+
+-
+-/*
+- * On entry, EBX is a "return to kernel mode" flag:
+- * 1: already in kernel mode, don't need SWAPGS
+- * 0: user gsbase is loaded, we need SWAPGS and standard preparation for return to usermode
+- */
+ ENTRY(error_exit)
+ UNWIND_HINT_REGS
+ DISABLE_INTERRUPTS(CLBR_ANY)
+ TRACE_IRQS_OFF
+- testl %ebx, %ebx
+- jnz retint_kernel
++ testb $3, CS(%rsp)
++ jz retint_kernel
+ jmp retint_user
+ END(error_exit)
+
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 2aabd4cb0e3f..adbda5847b14 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -573,6 +573,9 @@ static u32 skx_deadline_rev(void)
+ case 0x04: return 0x02000014;
+ }
+
++ if (boot_cpu_data.x86_stepping > 4)
++ return 0;
++
+ return ~0U;
+ }
+
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index a3bbac8ef4d0..7a28959f1985 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -7660,6 +7660,8 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
+ HRTIMER_MODE_REL_PINNED);
+ vmx->nested.preemption_timer.function = vmx_preemption_timer_fn;
+
++ vmx->nested.vpid02 = allocate_vpid();
++
+ vmx->nested.vmxon = true;
+ return 0;
+
+@@ -10108,11 +10110,9 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
+ goto free_vmcs;
+ }
+
+- if (nested) {
++ if (nested)
+ nested_vmx_setup_ctls_msrs(&vmx->nested.msrs,
+ kvm_vcpu_apicv_active(&vmx->vcpu));
+- vmx->nested.vpid02 = allocate_vpid();
+- }
+
+ vmx->nested.posted_intr_nv = -1;
+ vmx->nested.current_vmptr = -1ull;
+@@ -10129,7 +10129,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
+ return &vmx->vcpu;
+
+ free_vmcs:
+- free_vpid(vmx->nested.vpid02);
+ free_loaded_vmcs(vmx->loaded_vmcs);
+ free_msrs:
+ kfree(vmx->guest_msrs);
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index 84fbfaba8404..d320954072cb 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -417,7 +417,7 @@ static void __init __map_region(efi_memory_desc_t *md, u64 va)
+ if (!(md->attribute & EFI_MEMORY_WB))
+ flags |= _PAGE_PCD;
+
+- if (sev_active())
++ if (sev_active() && md->type != EFI_MEMORY_MAPPED_IO)
+ flags |= _PAGE_ENC;
+
+ pfn = md->phys_addr >> PAGE_SHIFT;
+diff --git a/drivers/crypto/padlock-aes.c b/drivers/crypto/padlock-aes.c
+index 1c6cbda56afe..09d823d36d3a 100644
+--- a/drivers/crypto/padlock-aes.c
++++ b/drivers/crypto/padlock-aes.c
+@@ -266,6 +266,8 @@ static inline void padlock_xcrypt_ecb(const u8 *input, u8 *output, void *key,
+ return;
+ }
+
++ count -= initial;
++
+ if (initial)
+ asm volatile (".byte 0xf3,0x0f,0xa7,0xc8" /* rep xcryptecb */
+ : "+S"(input), "+D"(output)
+@@ -273,7 +275,7 @@ static inline void padlock_xcrypt_ecb(const u8 *input, u8 *output, void *key,
+
+ asm volatile (".byte 0xf3,0x0f,0xa7,0xc8" /* rep xcryptecb */
+ : "+S"(input), "+D"(output)
+- : "d"(control_word), "b"(key), "c"(count - initial));
++ : "d"(control_word), "b"(key), "c"(count));
+ }
+
+ static inline u8 *padlock_xcrypt_cbc(const u8 *input, u8 *output, void *key,
+@@ -284,6 +286,8 @@ static inline u8 *padlock_xcrypt_cbc(const u8 *input, u8 *output, void *key,
+ if (count < cbc_fetch_blocks)
+ return cbc_crypt(input, output, key, iv, control_word, count);
+
++ count -= initial;
++
+ if (initial)
+ asm volatile (".byte 0xf3,0x0f,0xa7,0xd0" /* rep xcryptcbc */
+ : "+S" (input), "+D" (output), "+a" (iv)
+@@ -291,7 +295,7 @@ static inline u8 *padlock_xcrypt_cbc(const u8 *input, u8 *output, void *key,
+
+ asm volatile (".byte 0xf3,0x0f,0xa7,0xd0" /* rep xcryptcbc */
+ : "+S" (input), "+D" (output), "+a" (iv)
+- : "d" (control_word), "b" (key), "c" (count-initial));
++ : "d" (control_word), "b" (key), "c" (count));
+ return iv;
+ }
+
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 3448e8e44c35..bc8c1c7afb84 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1499,8 +1499,9 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
+ {
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *crtc_state;
+- struct drm_plane *plane;
+- struct drm_plane_state *old_plane_state, *new_plane_state;
++ struct drm_plane *plane = NULL;
++ struct drm_plane_state *old_plane_state = NULL;
++ struct drm_plane_state *new_plane_state = NULL;
+ const struct drm_plane_helper_funcs *funcs;
+ int i, n_planes = 0;
+
+@@ -1516,7 +1517,8 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
+ if (n_planes != 1)
+ return -EINVAL;
+
+- if (!new_plane_state->crtc)
++ if (!new_plane_state->crtc ||
++ old_plane_state->crtc != new_plane_state->crtc)
+ return -EINVAL;
+
+ funcs = plane->helper_private;
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index 13dcaad06798..25bc3cffcf93 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -319,6 +319,9 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ vc4_state->x_scaling[0] = VC4_SCALING_TPZ;
+ if (vc4_state->y_scaling[0] == VC4_SCALING_NONE)
+ vc4_state->y_scaling[0] = VC4_SCALING_TPZ;
++ } else {
++ vc4_state->x_scaling[1] = VC4_SCALING_NONE;
++ vc4_state->y_scaling[1] = VC4_SCALING_NONE;
+ }
+
+ vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 7a300e3eb0c2..0c31caa2c333 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -1984,15 +1984,64 @@ static int modify_qp(struct ib_uverbs_file *file,
+ goto release_qp;
+ }
+
+- if ((cmd->base.attr_mask & IB_QP_AV) &&
+- !rdma_is_port_valid(qp->device, cmd->base.dest.port_num)) {
+- ret = -EINVAL;
+- goto release_qp;
++ if ((cmd->base.attr_mask & IB_QP_AV)) {
++ if (!rdma_is_port_valid(qp->device, cmd->base.dest.port_num)) {
++ ret = -EINVAL;
++ goto release_qp;
++ }
++
++ if (cmd->base.attr_mask & IB_QP_STATE &&
++ cmd->base.qp_state == IB_QPS_RTR) {
++ /* We are in INIT->RTR TRANSITION (if we are not,
++ * this transition will be rejected in subsequent checks).
++ * In the INIT->RTR transition, we cannot have IB_QP_PORT set,
++ * but the IB_QP_STATE flag is required.
++ *
++ * Since kernel 3.14 (commit dbf727de7440), the uverbs driver,
++ * when IB_QP_AV is set, has required inclusion of a valid
++ * port number in the primary AV. (AVs are created and handled
++ * differently for infiniband and ethernet (RoCE) ports).
++ *
++ * Check the port number included in the primary AV against
++ * the port number in the qp struct, which was set (and saved)
++ * in the RST->INIT transition.
++ */
++ if (cmd->base.dest.port_num != qp->real_qp->port) {
++ ret = -EINVAL;
++ goto release_qp;
++ }
++ } else {
++ /* We are in SQD->SQD. (If we are not, this transition will
++ * be rejected later in the verbs layer checks).
++ * Check for both IB_QP_PORT and IB_QP_AV, these can be set
++ * together in the SQD->SQD transition.
++ *
++ * If only IB_QP_AV was set, add in IB_QP_PORT as well (the
++ * verbs layer driver does not track primary port changes
++ * resulting from path migration. Thus, in SQD, if the primary
++ * AV is modified, the primary port should also be modified).
++ *
++ * Note that in this transition, the IB_QP_STATE flag
++ * is not allowed.
++ */
++ if (((cmd->base.attr_mask & (IB_QP_AV | IB_QP_PORT))
++ == (IB_QP_AV | IB_QP_PORT)) &&
++ cmd->base.port_num != cmd->base.dest.port_num) {
++ ret = -EINVAL;
++ goto release_qp;
++ }
++ if ((cmd->base.attr_mask & (IB_QP_AV | IB_QP_PORT))
++ == IB_QP_AV) {
++ cmd->base.attr_mask |= IB_QP_PORT;
++ cmd->base.port_num = cmd->base.dest.port_num;
++ }
++ }
+ }
+
+ if ((cmd->base.attr_mask & IB_QP_ALT_PATH) &&
+ (!rdma_is_port_valid(qp->device, cmd->base.alt_port_num) ||
+- !rdma_is_port_valid(qp->device, cmd->base.alt_dest.port_num))) {
++ !rdma_is_port_valid(qp->device, cmd->base.alt_dest.port_num) ||
++ cmd->base.alt_port_num != cmd->base.alt_dest.port_num)) {
+ ret = -EINVAL;
+ goto release_qp;
+ }
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 1f1e97b26f95..da94bd39aff6 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1691,6 +1691,8 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ goto err_upper_unlink;
+ }
+
++ bond->nest_level = dev_get_nest_level(bond_dev) + 1;
++
+ /* If the mode uses primary, then the following is handled by
+ * bond_change_active_slave().
+ */
+@@ -1738,7 +1740,6 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ if (bond_mode_uses_xmit_hash(bond))
+ bond_update_slave_arr(bond, NULL);
+
+- bond->nest_level = dev_get_nest_level(bond_dev);
+
+ netdev_info(bond_dev, "Enslaving %s as %s interface with %s link\n",
+ slave_dev->name,
+@@ -3389,6 +3390,13 @@ static void bond_fold_stats(struct rtnl_link_stats64 *_res,
+ }
+ }
+
++static int bond_get_nest_level(struct net_device *bond_dev)
++{
++ struct bonding *bond = netdev_priv(bond_dev);
++
++ return bond->nest_level;
++}
++
+ static void bond_get_stats(struct net_device *bond_dev,
+ struct rtnl_link_stats64 *stats)
+ {
+@@ -3397,7 +3405,7 @@ static void bond_get_stats(struct net_device *bond_dev,
+ struct list_head *iter;
+ struct slave *slave;
+
+- spin_lock(&bond->stats_lock);
++ spin_lock_nested(&bond->stats_lock, bond_get_nest_level(bond_dev));
+ memcpy(stats, &bond->bond_stats, sizeof(*stats));
+
+ rcu_read_lock();
+@@ -4192,6 +4200,7 @@ static const struct net_device_ops bond_netdev_ops = {
+ .ndo_neigh_setup = bond_neigh_setup,
+ .ndo_vlan_rx_add_vid = bond_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = bond_vlan_rx_kill_vid,
++ .ndo_get_lock_subclass = bond_get_nest_level,
+ #ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_netpoll_setup = bond_netpoll_setup,
+ .ndo_netpoll_cleanup = bond_netpoll_cleanup,
+@@ -4690,6 +4699,7 @@ static int bond_init(struct net_device *bond_dev)
+ if (!bond->wq)
+ return -ENOMEM;
+
++ bond->nest_level = SINGLE_DEPTH_NESTING;
+ netdev_lockdep_set_classes(bond_dev);
+
+ list_add_tail(&bond->bond_list, &bn->dev_list);
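[The bonding hunks are a lockdep fix for stacked bonds: when a bond enslaves another bond, each level takes its own bond->stats_lock, and with a single lock class lockdep reports the recursion as a self-deadlock. Recording the nesting depth at enslave time and handing it to spin_lock_nested() as the lockdep subclass declares the ordering. The pattern, roughly:

    /* subclass = this device's depth in the bond-over-bond stack */
    spin_lock_nested(&bond->stats_lock, bond->nest_level);
    memcpy(stats, &bond->bond_stats, sizeof(*stats));
    /* ... fold in per-slave deltas ... */
    spin_unlock(&bond->stats_lock);

spin_lock_nested() only affects lockdep bookkeeping; the lock behaves identically at runtime.]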
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 12ff0020ecd6..b7dfd4109d24 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -1072,6 +1072,7 @@ static void ems_usb_disconnect(struct usb_interface *intf)
+ usb_free_urb(dev->intr_urb);
+
+ kfree(dev->intr_in_buffer);
++ kfree(dev->tx_msg_buffer);
+ }
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+index 0c6015ce85fd..f7c4feefaf2a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+@@ -1057,6 +1057,8 @@ static int mlx5e_trust_initialize(struct mlx5e_priv *priv)
+ struct mlx5_core_dev *mdev = priv->mdev;
+ int err;
+
++ priv->dcbx_dp.trust_state = MLX5_QPTS_TRUST_PCP;
++
+ if (!MLX5_DSCP_SUPPORTED(mdev))
+ return 0;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 0a75e9d441e6..4f52f87cf210 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1698,7 +1698,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
+ int vport_num;
+ int err;
+
+- if (!MLX5_VPORT_MANAGER(dev))
++ if (!MLX5_ESWITCH_MANAGER(dev))
+ return 0;
+
+ esw_info(dev,
+@@ -1767,7 +1767,7 @@ abort:
+
+ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw)
+ {
+- if (!esw || !MLX5_VPORT_MANAGER(esw->dev))
++ if (!esw || !MLX5_ESWITCH_MANAGER(esw->dev))
+ return;
+
+ esw_info(esw->dev, "cleanup\n");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+index af3bb2f7a504..b7c21eb21a21 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+@@ -76,6 +76,7 @@ void mlx5i_init(struct mlx5_core_dev *mdev,
+ void *ppriv)
+ {
+ struct mlx5e_priv *priv = mlx5i_epriv(netdev);
++ u16 max_mtu;
+
+ /* priv init */
+ priv->mdev = mdev;
+@@ -84,6 +85,9 @@ void mlx5i_init(struct mlx5_core_dev *mdev,
+ priv->ppriv = ppriv;
+ mutex_init(&priv->state_lock);
+
++ mlx5_query_port_max_mtu(mdev, &max_mtu, 1);
++ netdev->mtu = max_mtu;
++
+ mlx5e_build_nic_params(mdev, &priv->channels.params,
+ profile->max_nch(mdev), netdev->mtu);
+ mlx5i_build_nic_params(mdev, &priv->channels.params);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+index 8d375e51a526..6a393b16a1fc 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+@@ -257,7 +257,7 @@ static int stmmac_pci_probe(struct pci_dev *pdev,
+ return -ENOMEM;
+
+ /* Enable pci device */
+- ret = pcim_enable_device(pdev);
++ ret = pci_enable_device(pdev);
+ if (ret) {
+ dev_err(&pdev->dev, "%s: ERROR: failed to enable device\n",
+ __func__);
+@@ -300,9 +300,45 @@ static int stmmac_pci_probe(struct pci_dev *pdev,
+ static void stmmac_pci_remove(struct pci_dev *pdev)
+ {
+ stmmac_dvr_remove(&pdev->dev);
++ pci_disable_device(pdev);
+ }
+
+-static SIMPLE_DEV_PM_OPS(stmmac_pm_ops, stmmac_suspend, stmmac_resume);
++static int stmmac_pci_suspend(struct device *dev)
++{
++ struct pci_dev *pdev = to_pci_dev(dev);
++ int ret;
++
++ ret = stmmac_suspend(dev);
++ if (ret)
++ return ret;
++
++ ret = pci_save_state(pdev);
++ if (ret)
++ return ret;
++
++ pci_disable_device(pdev);
++ pci_wake_from_d3(pdev, true);
++ return 0;
++}
++
++static int stmmac_pci_resume(struct device *dev)
++{
++ struct pci_dev *pdev = to_pci_dev(dev);
++ int ret;
++
++ pci_restore_state(pdev);
++ pci_set_power_state(pdev, PCI_D0);
++
++ ret = pci_enable_device(pdev);
++ if (ret)
++ return ret;
++
++ pci_set_master(pdev);
++
++ return stmmac_resume(dev);
++}
++
++static SIMPLE_DEV_PM_OPS(stmmac_pm_ops, stmmac_pci_suspend, stmmac_pci_resume);
+
+ /* synthetic ID, no official vendor */
+ #define PCI_VENDOR_ID_STMMAC 0x700
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 091c191ce259..4de3e5f7a2e6 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -1755,7 +1755,8 @@ brcmf_pcie_prepare_fw_request(struct brcmf_pciedev_info *devinfo)
+ fwreq->items[BRCMF_PCIE_FW_CODE].type = BRCMF_FW_TYPE_BINARY;
+ fwreq->items[BRCMF_PCIE_FW_NVRAM].type = BRCMF_FW_TYPE_NVRAM;
+ fwreq->items[BRCMF_PCIE_FW_NVRAM].flags = BRCMF_FW_REQF_OPTIONAL;
+- fwreq->domain_nr = pci_domain_nr(devinfo->pdev->bus);
++ /* NVRAM reserves PCI domain 0 for Broadcom's SDK faked bus */
++ fwreq->domain_nr = pci_domain_nr(devinfo->pdev->bus) + 1;
+ fwreq->bus_nr = devinfo->pdev->bus->number;
+
+ return fwreq;
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/9000.c b/drivers/net/wireless/intel/iwlwifi/cfg/9000.c
+index e1c869a1f8cc..67f778bbb897 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/9000.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/9000.c
+@@ -180,6 +180,17 @@ const struct iwl_cfg iwl9260_2ac_cfg = {
+ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
+ };
+
++const struct iwl_cfg iwl9260_killer_2ac_cfg = {
++ .name = "Killer (R) Wireless-AC 1550 Wireless Network Adapter (9260NGW)",
++ .fw_name_pre = IWL9260A_FW_PRE,
++ .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE,
++ IWL_DEVICE_9000,
++ .ht_params = &iwl9000_ht_params,
++ .nvm_ver = IWL9000_NVM_VERSION,
++ .nvm_calib_ver = IWL9000_TX_POWER_VERSION,
++ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
++};
++
+ const struct iwl_cfg iwl9270_2ac_cfg = {
+ .name = "Intel(R) Dual Band Wireless AC 9270",
+ .fw_name_pre = IWL9260A_FW_PRE,
+@@ -269,6 +280,34 @@ const struct iwl_cfg iwl9560_2ac_cfg_soc = {
+ .soc_latency = 5000,
+ };
+
++const struct iwl_cfg iwl9560_killer_2ac_cfg_soc = {
++ .name = "Killer (R) Wireless-AC 1550i Wireless Network Adapter (9560NGW)",
++ .fw_name_pre = IWL9000A_FW_PRE,
++ .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE,
++ .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE,
++ IWL_DEVICE_9000,
++ .ht_params = &iwl9000_ht_params,
++ .nvm_ver = IWL9000_NVM_VERSION,
++ .nvm_calib_ver = IWL9000_TX_POWER_VERSION,
++ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
++ .integrated = true,
++ .soc_latency = 5000,
++};
++
++const struct iwl_cfg iwl9560_killer_s_2ac_cfg_soc = {
++ .name = "Killer (R) Wireless-AC 1550s Wireless Network Adapter (9560NGW)",
++ .fw_name_pre = IWL9000A_FW_PRE,
++ .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE,
++ .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE,
++ IWL_DEVICE_9000,
++ .ht_params = &iwl9000_ht_params,
++ .nvm_ver = IWL9000_NVM_VERSION,
++ .nvm_calib_ver = IWL9000_TX_POWER_VERSION,
++ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
++ .integrated = true,
++ .soc_latency = 5000,
++};
++
+ const struct iwl_cfg iwl9460_2ac_cfg_shared_clk = {
+ .name = "Intel(R) Dual Band Wireless AC 9460",
+ .fw_name_pre = IWL9000A_FW_PRE,
+@@ -329,6 +368,36 @@ const struct iwl_cfg iwl9560_2ac_cfg_shared_clk = {
+ .extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK
+ };
+
++const struct iwl_cfg iwl9560_killer_2ac_cfg_shared_clk = {
++ .name = "Killer (R) Wireless-AC 1550i Wireless Network Adapter (9560NGW)",
++ .fw_name_pre = IWL9000A_FW_PRE,
++ .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE,
++ .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE,
++ IWL_DEVICE_9000,
++ .ht_params = &iwl9000_ht_params,
++ .nvm_ver = IWL9000_NVM_VERSION,
++ .nvm_calib_ver = IWL9000_TX_POWER_VERSION,
++ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
++ .integrated = true,
++ .soc_latency = 5000,
++ .extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK
++};
++
++const struct iwl_cfg iwl9560_killer_s_2ac_cfg_shared_clk = {
++ .name = "Killer (R) Wireless-AC 1550s Wireless Network Adapter (9560NGW)",
++ .fw_name_pre = IWL9000A_FW_PRE,
++ .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE,
++ .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE,
++ IWL_DEVICE_9000,
++ .ht_params = &iwl9000_ht_params,
++ .nvm_ver = IWL9000_NVM_VERSION,
++ .nvm_calib_ver = IWL9000_TX_POWER_VERSION,
++ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
++ .integrated = true,
++ .soc_latency = 5000,
++ .extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK
++};
++
+ MODULE_FIRMWARE(IWL9000A_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL9000B_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL9000RFB_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX));
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index f0f5636dd3ea..919cc9997839 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -471,6 +471,7 @@ extern const struct iwl_cfg iwl8275_2ac_cfg;
+ extern const struct iwl_cfg iwl4165_2ac_cfg;
+ extern const struct iwl_cfg iwl9160_2ac_cfg;
+ extern const struct iwl_cfg iwl9260_2ac_cfg;
++extern const struct iwl_cfg iwl9260_killer_2ac_cfg;
+ extern const struct iwl_cfg iwl9270_2ac_cfg;
+ extern const struct iwl_cfg iwl9460_2ac_cfg;
+ extern const struct iwl_cfg iwl9560_2ac_cfg;
+@@ -478,10 +479,14 @@ extern const struct iwl_cfg iwl9460_2ac_cfg_soc;
+ extern const struct iwl_cfg iwl9461_2ac_cfg_soc;
+ extern const struct iwl_cfg iwl9462_2ac_cfg_soc;
+ extern const struct iwl_cfg iwl9560_2ac_cfg_soc;
++extern const struct iwl_cfg iwl9560_killer_2ac_cfg_soc;
++extern const struct iwl_cfg iwl9560_killer_s_2ac_cfg_soc;
+ extern const struct iwl_cfg iwl9460_2ac_cfg_shared_clk;
+ extern const struct iwl_cfg iwl9461_2ac_cfg_shared_clk;
+ extern const struct iwl_cfg iwl9462_2ac_cfg_shared_clk;
+ extern const struct iwl_cfg iwl9560_2ac_cfg_shared_clk;
++extern const struct iwl_cfg iwl9560_killer_2ac_cfg_shared_clk;
++extern const struct iwl_cfg iwl9560_killer_s_2ac_cfg_shared_clk;
+ extern const struct iwl_cfg iwl22000_2ac_cfg_hr;
+ extern const struct iwl_cfg iwl22000_2ac_cfg_hr_cdb;
+ extern const struct iwl_cfg iwl22000_2ac_cfg_jf;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 959de2f8bb28..1185e937992a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -545,6 +545,9 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x2526, 0x1210, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x1410, iwl9270_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x1420, iwl9460_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x2526, 0x1550, iwl9260_killer_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x2526, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x2526, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2526, 0x1610, iwl9270_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x2030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2526, 0x2034, iwl9560_2ac_cfg_soc)},
+@@ -554,6 +557,7 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9460_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x4234, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2526, 0x42A4, iwl9462_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x2526, 0x8014, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0xA014, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x271B, 0x0010, iwl9160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x271B, 0x0014, iwl9160_2ac_cfg)},
+@@ -578,6 +582,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x2720, 0x1010, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2720, 0x1030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2720, 0x1210, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x2720, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x2720, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2720, 0x2030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2720, 0x2034, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2720, 0x4030, iwl9560_2ac_cfg)},
+@@ -604,6 +610,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x30DC, 0x1010, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x30DC, 0x1030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x30DC, 0x1210, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x30DC, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x30DC, 0x2030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x30DC, 0x2034, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x30DC, 0x4030, iwl9560_2ac_cfg_soc)},
+@@ -630,6 +638,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x31DC, 0x1010, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x31DC, 0x1030, iwl9560_2ac_cfg_shared_clk)},
+ {IWL_PCI_DEVICE(0x31DC, 0x1210, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x31DC, 0x1551, iwl9560_killer_s_2ac_cfg_shared_clk)},
++ {IWL_PCI_DEVICE(0x31DC, 0x1552, iwl9560_killer_2ac_cfg_shared_clk)},
+ {IWL_PCI_DEVICE(0x31DC, 0x2030, iwl9560_2ac_cfg_shared_clk)},
+ {IWL_PCI_DEVICE(0x31DC, 0x2034, iwl9560_2ac_cfg_shared_clk)},
+ {IWL_PCI_DEVICE(0x31DC, 0x4030, iwl9560_2ac_cfg_shared_clk)},
+@@ -656,6 +666,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x34F0, 0x1010, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x34F0, 0x1030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x34F0, 0x1210, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x34F0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x34F0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x34F0, 0x2030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x34F0, 0x2034, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x34F0, 0x4030, iwl9560_2ac_cfg_soc)},
+@@ -682,6 +694,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x3DF0, 0x1010, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x3DF0, 0x1030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x3DF0, 0x1210, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x3DF0, 0x2030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x3DF0, 0x2034, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x3DF0, 0x4030, iwl9560_2ac_cfg_soc)},
+@@ -708,6 +722,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x43F0, 0x1010, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x43F0, 0x1030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x43F0, 0x1210, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x43F0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x43F0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x43F0, 0x2030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x43F0, 0x2034, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x43F0, 0x4030, iwl9560_2ac_cfg_soc)},
+@@ -743,6 +759,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x9DF0, 0x1010, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x9DF0, 0x1030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x9DF0, 0x1210, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x9DF0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x9DF0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x9DF0, 0x2010, iwl9460_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x9DF0, 0x2030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x9DF0, 0x2034, iwl9560_2ac_cfg_soc)},
+@@ -771,6 +789,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0xA0F0, 0x1010, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x1030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x1210, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x2030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x2034, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x4030, iwl9560_2ac_cfg_soc)},
+@@ -797,6 +817,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0xA370, 0x1010, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0xA370, 0x1030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0xA370, 0x1210, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0xA370, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0xA370, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0xA370, 0x2030, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0xA370, 0x2034, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0xA370, 0x4030, iwl9560_2ac_cfg_soc)},
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index ecc87a53294f..71cadee2f769 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -2186,6 +2186,7 @@ sg_add_sfp(Sg_device * sdp)
+ write_lock_irqsave(&sdp->sfd_lock, iflags);
+ if (atomic_read(&sdp->detaching)) {
+ write_unlock_irqrestore(&sdp->sfd_lock, iflags);
++ kfree(sfp);
+ return ERR_PTR(-ENODEV);
+ }
+ list_add_tail(&sfp->sfd_siblings, &sdp->sfds);
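
The sg hunk above plugs a memory leak: when the device is mid-detach, the early return abandoned the just-allocated sfp. A minimal userspace sketch of the rule the fix enforces -- every error exit taken after a successful allocation must free it first. The names here (add_ctx, detaching) are hypothetical, not the driver's actual code:

    #include <errno.h>
    #include <stdlib.h>

    struct ctx { int id; };

    /* Hypothetical stand-in for sg_add_sfp(): once the allocation has
     * succeeded, every early return must release it before bailing out. */
    static struct ctx *add_ctx(int detaching)
    {
            struct ctx *c = malloc(sizeof(*c));

            if (!c)
                    return NULL;
            if (detaching) {
                    free(c);        /* the fix: free before returning */
                    errno = ENODEV;
                    return NULL;
            }
            return c;               /* caller now owns c */
    }

    int main(void)
    {
            struct ctx *c = add_ctx(1);     /* detaching: NULL, nothing leaked */
            return c != NULL;
    }
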
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index 6b237e3f4983..3988c0914322 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -513,7 +513,9 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
+ tell_host(vb, vb->inflate_vq);
+
+ /* balloon's page migration 2nd step -- deflate "page" */
++ spin_lock_irqsave(&vb_dev_info->pages_lock, flags);
+ balloon_page_delete(page);
++ spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
+ vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
+ set_page_pfns(vb, vb->pfns, page);
+ tell_host(vb, vb->deflate_vq);
+diff --git a/fs/squashfs/block.c b/fs/squashfs/block.c
+index 2751476e6b6e..f098b9f1c396 100644
+--- a/fs/squashfs/block.c
++++ b/fs/squashfs/block.c
+@@ -167,6 +167,8 @@ int squashfs_read_data(struct super_block *sb, u64 index, int length,
+ }
+
+ if (compressed) {
++ if (!msblk->stream)
++ goto read_failure;
+ length = squashfs_decompress(msblk, bh, b, offset, length,
+ output);
+ if (length < 0)
+diff --git a/fs/squashfs/fragment.c b/fs/squashfs/fragment.c
+index 86ad9a4b8c36..0681feab4a84 100644
+--- a/fs/squashfs/fragment.c
++++ b/fs/squashfs/fragment.c
+@@ -49,11 +49,16 @@ int squashfs_frag_lookup(struct super_block *sb, unsigned int fragment,
+ u64 *fragment_block)
+ {
+ struct squashfs_sb_info *msblk = sb->s_fs_info;
+- int block = SQUASHFS_FRAGMENT_INDEX(fragment);
+- int offset = SQUASHFS_FRAGMENT_INDEX_OFFSET(fragment);
+- u64 start_block = le64_to_cpu(msblk->fragment_index[block]);
++ int block, offset, size;
+ struct squashfs_fragment_entry fragment_entry;
+- int size;
++ u64 start_block;
++
++ if (fragment >= msblk->fragments)
++ return -EIO;
++ block = SQUASHFS_FRAGMENT_INDEX(fragment);
++ offset = SQUASHFS_FRAGMENT_INDEX_OFFSET(fragment);
++
++ start_block = le64_to_cpu(msblk->fragment_index[block]);
+
+ size = squashfs_read_metadata(sb, &fragment_entry, &start_block,
+ &offset, sizeof(fragment_entry));
+diff --git a/fs/squashfs/squashfs_fs_sb.h b/fs/squashfs/squashfs_fs_sb.h
+index 1da565cb50c3..ef69c31947bf 100644
+--- a/fs/squashfs/squashfs_fs_sb.h
++++ b/fs/squashfs/squashfs_fs_sb.h
+@@ -75,6 +75,7 @@ struct squashfs_sb_info {
+ unsigned short block_log;
+ long long bytes_used;
+ unsigned int inodes;
++ unsigned int fragments;
+ int xattr_ids;
+ };
+ #endif
+diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
+index 8a73b97217c8..40e657386fa5 100644
+--- a/fs/squashfs/super.c
++++ b/fs/squashfs/super.c
+@@ -175,6 +175,7 @@ static int squashfs_fill_super(struct super_block *sb, void *data, int silent)
+ msblk->inode_table = le64_to_cpu(sblk->inode_table_start);
+ msblk->directory_table = le64_to_cpu(sblk->directory_table_start);
+ msblk->inodes = le32_to_cpu(sblk->inodes);
++ msblk->fragments = le32_to_cpu(sblk->fragments);
+ flags = le16_to_cpu(sblk->flags);
+
+ TRACE("Found valid superblock on %pg\n", sb->s_bdev);
+@@ -185,7 +186,7 @@ static int squashfs_fill_super(struct super_block *sb, void *data, int silent)
+ TRACE("Filesystem size %lld bytes\n", msblk->bytes_used);
+ TRACE("Block size %d\n", msblk->block_size);
+ TRACE("Number of inodes %d\n", msblk->inodes);
+- TRACE("Number of fragments %d\n", le32_to_cpu(sblk->fragments));
++ TRACE("Number of fragments %d\n", msblk->fragments);
+ TRACE("Number of ids %d\n", le16_to_cpu(sblk->no_ids));
+ TRACE("sblk->inode_table_start %llx\n", msblk->inode_table);
+ TRACE("sblk->directory_table_start %llx\n", msblk->directory_table);
+@@ -272,7 +273,7 @@ allocate_id_index_table:
+ sb->s_export_op = &squashfs_export_ops;
+
+ handle_fragments:
+- fragments = le32_to_cpu(sblk->fragments);
++ fragments = msblk->fragments;
+ if (fragments == 0)
+ goto check_directory_table;
+
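
Taken together, the squashfs hunks above cache the superblock's fragment count in squashfs_sb_info and reject out-of-range fragment numbers (and a missing decompressor stream) before any table access, so a crafted image fails with -EIO instead of triggering an out-of-bounds read. A minimal sketch of that validate-before-index pattern, with hypothetical names and sizes:

    #include <errno.h>
    #include <stdio.h>

    #define NFRAGS 4
    static const unsigned long frag_index[NFRAGS] = { 10, 20, 30, 40 };

    /* Hypothetical lookup: compare the untrusted index against the cached
     * count before dereferencing the table. */
    static int frag_lookup(unsigned int fragment, unsigned long *out)
    {
            if (fragment >= NFRAGS)
                    return -EIO;
            *out = frag_index[fragment];
            return 0;
    }

    int main(void)
    {
            unsigned long blk;

            printf("%d\n", frag_lookup(2, &blk));   /* 0, blk == 30 */
            printf("%d\n", frag_lookup(99, &blk));  /* -EIO */
            return 0;
    }
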
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 1d85efacfc8e..f4845cdd4926 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -631,8 +631,10 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
+ /* the various vma->vm_userfaultfd_ctx still points to it */
+ down_write(&mm->mmap_sem);
+ for (vma = mm->mmap; vma; vma = vma->vm_next)
+- if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx)
++ if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx) {
+ vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
++ vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING);
++ }
+ up_write(&mm->mmap_sem);
+
+ userfaultfd_ctx_put(release_new_ctx);
+diff --git a/ipc/shm.c b/ipc/shm.c
+index d73269381ec7..5ad05c56dce0 100644
+--- a/ipc/shm.c
++++ b/ipc/shm.c
+@@ -427,6 +427,17 @@ static int shm_split(struct vm_area_struct *vma, unsigned long addr)
+ return 0;
+ }
+
++static unsigned long shm_pagesize(struct vm_area_struct *vma)
++{
++ struct file *file = vma->vm_file;
++ struct shm_file_data *sfd = shm_file_data(file);
++
++ if (sfd->vm_ops->pagesize)
++ return sfd->vm_ops->pagesize(vma);
++
++ return PAGE_SIZE;
++}
++
+ #ifdef CONFIG_NUMA
+ static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *new)
+ {
+@@ -554,6 +565,7 @@ static const struct vm_operations_struct shm_vm_ops = {
+ .close = shm_close, /* callback for when the vm-area is released */
+ .fault = shm_fault,
+ .split = shm_split,
++ .pagesize = shm_pagesize,
+ #if defined(CONFIG_NUMA)
+ .set_policy = shm_set_policy,
+ .get_policy = shm_get_policy,
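
The ipc/shm hunk above gives shm_vm_ops a .pagesize handler that forwards to the backing file's vm_ops when one is present (hugetlbfs-backed SysV segments) and falls back to PAGE_SIZE otherwise. A minimal sketch of that delegate-or-default shape; the types and names below are invented for illustration:

    #include <stddef.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    struct vma;                             /* opaque in this sketch */
    struct vm_ops {
            unsigned long (*pagesize)(struct vma *);
    };

    static unsigned long huge_pagesize(struct vma *vma)
    {
            (void)vma;
            return 2UL << 20;               /* pretend 2 MiB huge pages */
    }

    /* Hypothetical wrapper: delegate when the hook exists, else default. */
    static unsigned long shm_pagesize(const struct vm_ops *ops, struct vma *vma)
    {
            return ops->pagesize ? ops->pagesize(vma) : PAGE_SIZE;
    }

    int main(void)
    {
            struct vm_ops huge = { .pagesize = huge_pagesize };
            struct vm_ops plain = { 0 };

            printf("%lu %lu\n", shm_pagesize(&huge, NULL),
                   shm_pagesize(&plain, NULL));     /* 2097152 4096 */
            return 0;
    }
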
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 479c031ec54c..e6a93c63068b 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1274,8 +1274,12 @@ static void show_special(struct audit_context *context, int *call_panic)
+ break;
+ case AUDIT_KERN_MODULE:
+ audit_log_format(ab, "name=");
+- audit_log_untrustedstring(ab, context->module.name);
+- kfree(context->module.name);
++ if (context->module.name) {
++ audit_log_untrustedstring(ab, context->module.name);
++ kfree(context->module.name);
++ } else
++ audit_log_format(ab, "(null)");
++
+ break;
+ }
+ audit_log_end(ab);
+@@ -2408,8 +2412,9 @@ void __audit_log_kern_module(char *name)
+ {
+ struct audit_context *context = current->audit_context;
+
+- context->module.name = kmalloc(strlen(name) + 1, GFP_KERNEL);
+- strcpy(context->module.name, name);
++ context->module.name = kstrdup(name, GFP_KERNEL);
++ if (!context->module.name)
++ audit_log_lost("out of memory in __audit_log_kern_module");
+ context->type = AUDIT_KERN_MODULE;
+ }
+
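
The auditsc hunks above swap an unchecked kmalloc()+strcpy() for kstrdup() and teach the logging path to print "(null)" when the duplication failed, so an allocation failure no longer dereferences NULL. A minimal userspace sketch of the same duplicate-then-tolerate-NULL discipline, with strdup() standing in for kstrdup():

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct record { char *name; };          /* hypothetical audit record */

    static void record_set(struct record *r, const char *name)
    {
            r->name = strdup(name);         /* may fail and leave NULL */
    }

    static void record_log(const struct record *r)
    {
            printf("name=%s\n", r->name ? r->name : "(null)");
    }

    int main(void)
    {
            struct record r = { 0 };

            record_set(&r, "fuse");
            record_log(&r);                 /* name=fuse */
            free(r.name);
            r.name = NULL;                  /* simulate allocation failure */
            record_log(&r);                 /* name=(null) */
            return 0;
    }
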
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index a2d9eb6a0af9..4c1a2c02e13b 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3166,6 +3166,13 @@ static int hugetlb_vm_op_fault(struct vm_fault *vmf)
+ return 0;
+ }
+
++/*
++ * When a new function is introduced to vm_operations_struct and added
++ * to hugetlb_vm_ops, please consider adding the function to shm_vm_ops.
++ * This is because under System V memory model, mappings created via
++ * shmget/shmat with "huge page" specified are backed by hugetlbfs files,
++ * their original vm_ops are overwritten with shm_vm_ops.
++ */
+ const struct vm_operations_struct hugetlb_vm_ops = {
+ .fault = hugetlb_vm_op_fault,
+ .open = hugetlb_vm_op_open,
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index 18561af7a8f1..01fa96f29734 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -1195,6 +1195,9 @@ int dsa_slave_suspend(struct net_device *slave_dev)
+ {
+ struct dsa_slave_priv *p = netdev_priv(slave_dev);
+
++ if (!netif_running(slave_dev))
++ return 0;
++
+ netif_device_detach(slave_dev);
+
+ if (slave_dev->phydev) {
+@@ -1210,6 +1213,9 @@ int dsa_slave_suspend(struct net_device *slave_dev)
+
+ int dsa_slave_resume(struct net_device *slave_dev)
+ {
++ if (!netif_running(slave_dev))
++ return 0;
++
+ netif_device_attach(slave_dev);
+
+ if (slave_dev->phydev) {
+diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
+index c9e35b81d093..eeb6646aa892 100644
+--- a/net/ipv4/inet_fragment.c
++++ b/net/ipv4/inet_fragment.c
+@@ -157,9 +157,6 @@ static struct inet_frag_queue *inet_frag_alloc(struct netns_frags *nf,
+ {
+ struct inet_frag_queue *q;
+
+- if (!nf->high_thresh || frag_mem_limit(nf) > nf->high_thresh)
+- return NULL;
+-
+ q = kmem_cache_zalloc(f->frags_cachep, GFP_ATOMIC);
+ if (!q)
+ return NULL;
+@@ -204,6 +201,9 @@ struct inet_frag_queue *inet_frag_find(struct netns_frags *nf, void *key)
+ {
+ struct inet_frag_queue *fq;
+
++ if (!nf->high_thresh || frag_mem_limit(nf) > nf->high_thresh)
++ return NULL;
++
+ rcu_read_lock();
+
+ fq = rhashtable_lookup(&nf->rhashtable, key, nf->f->rhash_params);
+diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
+index 8e9528ebaa8e..d14d741fb05e 100644
+--- a/net/ipv4/ip_fragment.c
++++ b/net/ipv4/ip_fragment.c
+@@ -383,11 +383,16 @@ found:
+ int i = end - next->ip_defrag_offset; /* overlap is 'i' bytes */
+
+ if (i < next->len) {
++ int delta = -next->truesize;
++
+ /* Eat head of the next overlapped fragment
+ * and leave the loop. The next ones cannot overlap.
+ */
+ if (!pskb_pull(next, i))
+ goto err;
++ delta += next->truesize;
++ if (delta)
++ add_frag_mem_limit(qp->q.net, delta);
+ next->ip_defrag_offset += i;
+ qp->q.meat -= i;
+ if (next->ip_summed != CHECKSUM_UNNECESSARY)
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 890f22f90344..adc434752d67 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -63,6 +63,7 @@
+ #include <linux/hash.h>
+ #include <linux/genetlink.h>
+ #include <linux/net_namespace.h>
++#include <linux/nospec.h>
+
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
+@@ -679,6 +680,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol,
+
+ if (protocol < 0 || protocol >= MAX_LINKS)
+ return -EPROTONOSUPPORT;
++ protocol = array_index_nospec(protocol, MAX_LINKS);
+
+ netlink_lock_table();
+ #ifdef CONFIG_MODULES
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index a9a9be5519b9..9d1e298b784c 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -116,9 +116,9 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
+ while (*pp) {
+ parent = *pp;
+ xcall = rb_entry(parent, struct rxrpc_call, sock_node);
+- if (user_call_ID < call->user_call_ID)
++ if (user_call_ID < xcall->user_call_ID)
+ pp = &(*pp)->rb_left;
+- else if (user_call_ID > call->user_call_ID)
++ else if (user_call_ID > xcall->user_call_ID)
+ pp = &(*pp)->rb_right;
+ else
+ goto id_in_use;
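
The rxrpc hunk above fixes a classic tree-walk slip: the insertion loop compared the key against the call being inserted (call) rather than the node currently visited (xcall), so the walk could never order the tree correctly. A minimal binary-search-tree insert in plain C showing that the visited node must supply the comparison key; everything here is illustrative, not the rxrpc code:

    #include <stddef.h>
    #include <stdio.h>

    struct node { unsigned long id; struct node *left, *right; };

    /* 'cur' -- the node being visited -- supplies the key at each step;
     * comparing the new node's id against itself would descend blindly. */
    static int insert(struct node **root, struct node *n)
    {
            struct node **pp = root;

            while (*pp) {
                    struct node *cur = *pp;

                    if (n->id < cur->id)
                            pp = &cur->left;
                    else if (n->id > cur->id)
                            pp = &cur->right;
                    else
                            return -1;      /* id already in use */
            }
            *pp = n;
            return 0;
    }

    int main(void)
    {
            struct node a = { 2 }, b = { 1 }, dup = { 2 };
            struct node *root = NULL;

            printf("%d %d %d\n", insert(&root, &a), insert(&root, &b),
                   insert(&root, &dup));    /* 0 0 -1 */
            return 0;
    }
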
+diff --git a/net/socket.c b/net/socket.c
+index d1b02f161429..6a6aa84b64c1 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -89,6 +89,7 @@
+ #include <linux/magic.h>
+ #include <linux/slab.h>
+ #include <linux/xattr.h>
++#include <linux/nospec.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/unistd.h>
+@@ -2526,6 +2527,7 @@ SYSCALL_DEFINE2(socketcall, int, call, unsigned long __user *, args)
+
+ if (call < 1 || call > SYS_SENDMMSG)
+ return -EINVAL;
++ call = array_index_nospec(call, SYS_SENDMMSG + 1);
+
+ len = nargs[call];
+ if (len > sizeof(a))
+@@ -2692,7 +2694,8 @@ EXPORT_SYMBOL(sock_unregister);
+
+ bool sock_is_registered(int family)
+ {
+- return family < NPROTO && rcu_access_pointer(net_families[family]);
++ return family < NPROTO &&
++ rcu_access_pointer(net_families[array_index_nospec(family, NPROTO)]);
+ }
+
+ static int __init sock_init(void)
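
The netlink and socket hunks above apply array_index_nospec() right after the range checks so that a mispredicted bounds branch cannot carry an out-of-range index into a speculative table load (Spectre v1). A conceptual sketch of such a clamp -- not the kernel's arch-optimized implementation:

    #include <stddef.h>
    #include <stdio.h>

    /* Build an all-ones mask when idx is in range, all-zeroes otherwise,
     * and AND it into the index: even under misspeculation the value used
     * for the array access stays inside [0, size). */
    static inline size_t index_nospec(size_t idx, size_t size)
    {
            size_t mask = 0UL - (size_t)(idx < size);       /* ~0 or 0 */

            return idx & mask;
    }

    int main(void)
    {
            static const int table[4] = { 10, 11, 12, 13 };

            printf("%d\n", table[index_nospec(2, 4)]);      /* 12 */
            printf("%zu\n", index_nospec(7, 4));            /* clamped to 0 */
            return 0;
    }
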
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-08-03 12:19 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-08-03 12:19 UTC (permalink / raw
To: gentoo-commits
commit: a435a0a68c5f50f33231d974dc564c153c825d1f
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 3 12:18:47 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug 3 12:18:47 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a435a0a6
Linux patch 4.17.12
0000_README | 4 +
1011_linux-4.17.12.patch | 11595 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 11599 insertions(+)
diff --git a/0000_README b/0000_README
index a0836f2..6e0bb48 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-4.17.11.patch
From: http://www.kernel.org
Desc: Linux 4.17.11
+Patch: 1011_linux-4.17.12.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.12
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1011_linux-4.17.12.patch b/1011_linux-4.17.12.patch
new file mode 100644
index 0000000..9dd5854
--- /dev/null
+++ b/1011_linux-4.17.12.patch
@@ -0,0 +1,11595 @@
+diff --git a/Documentation/devicetree/bindings/net/dsa/qca8k.txt b/Documentation/devicetree/bindings/net/dsa/qca8k.txt
+index 9c67ee4890d7..bbcb255c3150 100644
+--- a/Documentation/devicetree/bindings/net/dsa/qca8k.txt
++++ b/Documentation/devicetree/bindings/net/dsa/qca8k.txt
+@@ -2,7 +2,10 @@
+
+ Required properties:
+
+-- compatible: should be "qca,qca8337"
++- compatible: should be one of:
++ "qca,qca8334"
++ "qca,qca8337"
++
+ - #size-cells: must be 0
+ - #address-cells: must be 1
+
+@@ -14,6 +17,20 @@ port and PHY id, each subnode describing a port needs to have a valid phandle
+ referencing the internal PHY connected to it. The CPU port of this switch is
+ always port 0.
+
++A CPU port node has the following optional node:
++
++- fixed-link : Fixed-link subnode describing a link to a non-MDIO
++ managed entity. See
++ Documentation/devicetree/bindings/net/fixed-link.txt
++ for details.
++
++For QCA8K the 'fixed-link' sub-node supports only the following properties:
++
++- 'speed' (integer, mandatory), to indicate the link speed. Accepted
++ values are 10, 100 and 1000
++- 'full-duplex' (boolean, optional), to indicate that full duplex is
++ used. When absent, half duplex is assumed.
++
+ Example:
+
+
+@@ -53,6 +70,10 @@ Example:
+ label = "cpu";
+ ethernet = <&gmac1>;
+ phy-mode = "rgmii";
++ fixed-link {
++ speed = 1000;
++ full-duplex;
++ };
+ };
+
+ port@1 {
+diff --git a/Documentation/devicetree/bindings/net/meson-dwmac.txt b/Documentation/devicetree/bindings/net/meson-dwmac.txt
+index 61cada22ae6c..1321bb194ed9 100644
+--- a/Documentation/devicetree/bindings/net/meson-dwmac.txt
++++ b/Documentation/devicetree/bindings/net/meson-dwmac.txt
+@@ -11,6 +11,7 @@ Required properties on all platforms:
+ - "amlogic,meson8b-dwmac"
+ - "amlogic,meson8m2-dwmac"
+ - "amlogic,meson-gxbb-dwmac"
++ - "amlogic,meson-axg-dwmac"
+ Additionally "snps,dwmac" and any applicable more
+ detailed version number described in net/stmmac.txt
+ should be used.
+diff --git a/Documentation/devicetree/bindings/pinctrl/meson,pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/meson,pinctrl.txt
+index 2c12f9789116..54ecb8ab7788 100644
+--- a/Documentation/devicetree/bindings/pinctrl/meson,pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/meson,pinctrl.txt
+@@ -3,8 +3,10 @@
+ Required properties for the root node:
+ - compatible: one of "amlogic,meson8-cbus-pinctrl"
+ "amlogic,meson8b-cbus-pinctrl"
++ "amlogic,meson8m2-cbus-pinctrl"
+ "amlogic,meson8-aobus-pinctrl"
+ "amlogic,meson8b-aobus-pinctrl"
++ "amlogic,meson8m2-aobus-pinctrl"
+ "amlogic,meson-gxbb-periphs-pinctrl"
+ "amlogic,meson-gxbb-aobus-pinctrl"
+ "amlogic,meson-gxl-periphs-pinctrl"
+diff --git a/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt b/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt
+index 74b2f03c1515..fa56697a1ba6 100644
+--- a/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt
++++ b/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt
+@@ -7,6 +7,7 @@ Required properties:
+ - "renesas,r7s72100-wdt" (RZ/A1)
+ - "renesas,r8a7795-wdt" (R-Car H3)
+ - "renesas,r8a7796-wdt" (R-Car M3-W)
++ - "renesas,r8a77965-wdt" (R-Car M3-N)
+ - "renesas,r8a77970-wdt" (R-Car V3M)
+ - "renesas,r8a77995-wdt" (R-Car D3)
+
+diff --git a/Documentation/vfio-mediated-device.txt b/Documentation/vfio-mediated-device.txt
+index 1b3950346532..c3f69bcaf96e 100644
+--- a/Documentation/vfio-mediated-device.txt
++++ b/Documentation/vfio-mediated-device.txt
+@@ -145,6 +145,11 @@ The functions in the mdev_parent_ops structure are as follows:
+ * create: allocate basic resources in a driver for a mediated device
+ * remove: free resources in a driver when a mediated device is destroyed
+
++(Note that mdev-core provides no implicit serialization of create/remove
++callbacks per mdev parent device, per mdev type, or any other categorization.
++Vendor drivers are expected to be fully asynchronous in this respect or
++provide their own internal resource protection.)
++
+ The callbacks in the mdev_parent_ops structure are as follows:
+
+ * open: open callback of mediated device
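
The vfio documentation hunk above warns that mdev-core gives vendor drivers no implicit serialization of create/remove, so any shared state needs the driver's own locking. A minimal sketch of such internal protection with a mutex; the driver state below is invented for illustration:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int ndevices;                    /* hypothetical shared state */

    static int my_create(void)
    {
            pthread_mutex_lock(&lock);      /* serialize against remove */
            ndevices++;
            pthread_mutex_unlock(&lock);
            return 0;
    }

    static void my_remove(void)
    {
            pthread_mutex_lock(&lock);
            ndevices--;
            pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
            my_create();
            my_remove();
            printf("%d\n", ndevices);       /* 0 */
            return 0;
    }
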
+diff --git a/Makefile b/Makefile
+index e2664c641109..790e8faf0ddc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/arm/boot/dts/emev2.dtsi b/arch/arm/boot/dts/emev2.dtsi
+index 42ea246e71cb..fec1241b858f 100644
+--- a/arch/arm/boot/dts/emev2.dtsi
++++ b/arch/arm/boot/dts/emev2.dtsi
+@@ -31,13 +31,13 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+- cpu@0 {
++ cpu0: cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a9";
+ reg = <0>;
+ clock-frequency = <533000000>;
+ };
+- cpu@1 {
++ cpu1: cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a9";
+ reg = <1>;
+@@ -57,6 +57,7 @@
+ compatible = "arm,cortex-a9-pmu";
+ interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-affinity = <&cpu0>, <&cpu1>;
+ };
+
+ clocks@e0110000 {
+diff --git a/arch/arm/boot/dts/imx53-ppd.dts b/arch/arm/boot/dts/imx53-ppd.dts
+index d5628af2e301..563451167e7f 100644
+--- a/arch/arm/boot/dts/imx53-ppd.dts
++++ b/arch/arm/boot/dts/imx53-ppd.dts
+@@ -559,8 +559,6 @@
+ status = "okay";
+
+ port@2 {
+- reg = <2>;
+-
+ lvds0_out: endpoint {
+ remote-endpoint = <&panel_in_lvds0>;
+ };
+diff --git a/arch/arm/boot/dts/imx53.dtsi b/arch/arm/boot/dts/imx53.dtsi
+index 3d65c0192f69..ab4fc5b99ad3 100644
+--- a/arch/arm/boot/dts/imx53.dtsi
++++ b/arch/arm/boot/dts/imx53.dtsi
+@@ -488,6 +488,10 @@
+ remote-endpoint = <&ipu_di0_lvds0>;
+ };
+ };
++
++ port@2 {
++ reg = <2>;
++ };
+ };
+
+ lvds-channel@1 {
+@@ -503,6 +507,10 @@
+ remote-endpoint = <&ipu_di1_lvds1>;
+ };
+ };
++
++ port@2 {
++ reg = <2>;
++ };
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/imx6qdl-wandboard-revb1.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard-revb1.dtsi
+index a32089132263..855dc6f9df75 100644
+--- a/arch/arm/boot/dts/imx6qdl-wandboard-revb1.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-wandboard-revb1.dtsi
+@@ -17,7 +17,6 @@
+ imx6qdl-wandboard {
+ pinctrl_hog: hoggrp {
+ fsl,pins = <
+- MX6QDL_PAD_GPIO_0__CCM_CLKO1 0x130b0 /* GPIO_0_CLKO */
+ MX6QDL_PAD_GPIO_2__GPIO1_IO02 0x80000000 /* uSDHC1 CD */
+ MX6QDL_PAD_EIM_DA9__GPIO3_IO09 0x80000000 /* uSDHC3 CD */
+ MX6QDL_PAD_EIM_EB1__GPIO2_IO29 0x0f0b0 /* WL_REF_ON */
+diff --git a/arch/arm/boot/dts/imx6qdl-wandboard-revc1.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard-revc1.dtsi
+index 8d893a78cdf0..49a0a557e62e 100644
+--- a/arch/arm/boot/dts/imx6qdl-wandboard-revc1.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-wandboard-revc1.dtsi
+@@ -17,7 +17,6 @@
+ imx6qdl-wandboard {
+ pinctrl_hog: hoggrp {
+ fsl,pins = <
+- MX6QDL_PAD_GPIO_0__CCM_CLKO1 0x130b0 /* GPIO_0_CLKO */
+ MX6QDL_PAD_GPIO_2__GPIO1_IO02 0x80000000 /* uSDHC1 CD */
+ MX6QDL_PAD_EIM_DA9__GPIO3_IO09 0x80000000 /* uSDHC3 CD */
+ MX6QDL_PAD_CSI0_DAT14__GPIO6_IO00 0x0f0b0 /* WIFI_ON (reset, active low) */
+diff --git a/arch/arm/boot/dts/imx6qdl-wandboard-revd1.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard-revd1.dtsi
+index 3a8a4952d45e..69d9c8661439 100644
+--- a/arch/arm/boot/dts/imx6qdl-wandboard-revd1.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-wandboard-revd1.dtsi
+@@ -147,7 +147,6 @@
+ imx6qdl-wandboard {
+ pinctrl_hog: hoggrp {
+ fsl,pins = <
+- MX6QDL_PAD_GPIO_0__CCM_CLKO1 0x130b0
+ MX6QDL_PAD_EIM_D22__USB_OTG_PWR 0x80000000 /* USB Power Enable */
+ MX6QDL_PAD_GPIO_2__GPIO1_IO02 0x80000000 /* USDHC1 CD */
+ MX6QDL_PAD_EIM_DA9__GPIO3_IO09 0x80000000 /* uSDHC3 CD */
+diff --git a/arch/arm/boot/dts/imx6qdl-wandboard.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
+index ed96d7b5feab..6b0a86fa72d3 100644
+--- a/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
+@@ -83,6 +83,8 @@
+ status = "okay";
+
+ codec: sgtl5000@a {
++ pinctrl-names = "default";
++ pinctrl-0 = <&pinctrl_mclk>;
+ compatible = "fsl,sgtl5000";
+ reg = <0x0a>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
+@@ -142,6 +144,12 @@
+ >;
+ };
+
++ pinctrl_mclk: mclkgrp {
++ fsl,pins = <
++ MX6QDL_PAD_GPIO_0__CCM_CLKO1 0x130b0
++ >;
++ };
++
+ pinctrl_spdif: spdifgrp {
+ fsl,pins = <
+ MX6QDL_PAD_ENET_RXD0__SPDIF_OUT 0x1b0b0
+diff --git a/arch/arm/boot/dts/sh73a0.dtsi b/arch/arm/boot/dts/sh73a0.dtsi
+index 914a7c2a584f..b0c20544df20 100644
+--- a/arch/arm/boot/dts/sh73a0.dtsi
++++ b/arch/arm/boot/dts/sh73a0.dtsi
+@@ -22,7 +22,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+- cpu@0 {
++ cpu0: cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a9";
+ reg = <0>;
+@@ -31,7 +31,7 @@
+ power-domains = <&pd_a2sl>;
+ next-level-cache = <&L2>;
+ };
+- cpu@1 {
++ cpu1: cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a9";
+ reg = <1>;
+@@ -91,6 +91,7 @@
+ compatible = "arm,cortex-a9-pmu";
+ interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-affinity = <&cpu0>, <&cpu1>;
+ };
+
+ cmt1: timer@e6138000 {
+diff --git a/arch/arm/boot/dts/stih407-pinctrl.dtsi b/arch/arm/boot/dts/stih407-pinctrl.dtsi
+index 53c6888d1fc0..e393519fb84c 100644
+--- a/arch/arm/boot/dts/stih407-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stih407-pinctrl.dtsi
+@@ -52,7 +52,7 @@
+ st,syscfg = <&syscfg_sbc>;
+ reg = <0x0961f080 0x4>;
+ reg-names = "irqmux";
+- interrupts = <GIC_SPI 188 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "irqmux";
+ ranges = <0 0x09610000 0x6000>;
+
+@@ -376,7 +376,7 @@
+ st,syscfg = <&syscfg_front>;
+ reg = <0x0920f080 0x4>;
+ reg-names = "irqmux";
+- interrupts = <GIC_SPI 189 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "irqmux";
+ ranges = <0 0x09200000 0x10000>;
+
+@@ -936,7 +936,7 @@
+ st,syscfg = <&syscfg_front>;
+ reg = <0x0921f080 0x4>;
+ reg-names = "irqmux";
+- interrupts = <GIC_SPI 190 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "irqmux";
+ ranges = <0 0x09210000 0x10000>;
+
+@@ -969,7 +969,7 @@
+ st,syscfg = <&syscfg_rear>;
+ reg = <0x0922f080 0x4>;
+ reg-names = "irqmux";
+- interrupts = <GIC_SPI 191 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "irqmux";
+ ranges = <0 0x09220000 0x6000>;
+
+@@ -1164,7 +1164,7 @@
+ st,syscfg = <&syscfg_flash>;
+ reg = <0x0923f080 0x4>;
+ reg-names = "irqmux";
+- interrupts = <GIC_SPI 192 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "irqmux";
+ ranges = <0 0x09230000 0x3000>;
+
+diff --git a/arch/arm/boot/dts/stih410.dtsi b/arch/arm/boot/dts/stih410.dtsi
+index 3313005ee15c..888548ea9b5c 100644
+--- a/arch/arm/boot/dts/stih410.dtsi
++++ b/arch/arm/boot/dts/stih410.dtsi
+@@ -43,7 +43,7 @@
+ ohci0: usb@9a03c00 {
+ compatible = "st,st-ohci-300x";
+ reg = <0x9a03c00 0x100>;
+- interrupts = <GIC_SPI 180 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>,
+ <&clk_s_c0_flexgen CLK_RX_ICN_DISP_0>;
+ resets = <&powerdown STIH407_USB2_PORT0_POWERDOWN>,
+@@ -58,7 +58,7 @@
+ ehci0: usb@9a03e00 {
+ compatible = "st,st-ehci-300x";
+ reg = <0x9a03e00 0x100>;
+- interrupts = <GIC_SPI 151 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usb0>;
+ clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>,
+@@ -75,7 +75,7 @@
+ ohci1: usb@9a83c00 {
+ compatible = "st,st-ohci-300x";
+ reg = <0x9a83c00 0x100>;
+- interrupts = <GIC_SPI 181 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 181 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>,
+ <&clk_s_c0_flexgen CLK_RX_ICN_DISP_0>;
+ resets = <&powerdown STIH407_USB2_PORT1_POWERDOWN>,
+@@ -90,7 +90,7 @@
+ ehci1: usb@9a83e00 {
+ compatible = "st,st-ehci-300x";
+ reg = <0x9a83e00 0x100>;
+- interrupts = <GIC_SPI 153 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 153 IRQ_TYPE_LEVEL_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usb1>;
+ clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>,
+@@ -202,7 +202,7 @@
+ reg = <0x8d04000 0x1000>;
+ reg-names = "hdmi-reg";
+ #sound-dai-cells = <0>;
+- interrupts = <GIC_SPI 106 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "irq";
+ clock-names = "pix",
+ "tmds",
+@@ -254,7 +254,7 @@
+ bdisp0:bdisp@9f10000 {
+ compatible = "st,stih407-bdisp";
+ reg = <0x9f10000 0x1000>;
+- interrupts = <GIC_SPI 38 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
+ clock-names = "bdisp";
+ clocks = <&clk_s_c0_flexgen CLK_IC_BDISP_0>;
+ };
+@@ -263,8 +263,8 @@
+ compatible = "st,st-hva";
+ reg = <0x8c85000 0x400>, <0x6000000 0x40000>;
+ reg-names = "hva_registers", "hva_esram";
+- interrupts = <GIC_SPI 58 IRQ_TYPE_NONE>,
+- <GIC_SPI 59 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 58 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>;
+ clock-names = "clk_hva";
+ clocks = <&clk_s_c0_flexgen CLK_HVA>;
+ };
+@@ -292,7 +292,7 @@
+ reg = <0x94a087c 0x64>;
+ clocks = <&clk_sysin>;
+ clock-names = "cec-clk";
+- interrupts = <GIC_SPI 140 IRQ_TYPE_NONE>;
++ interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "cec-irq";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_cec0_default>;
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index 5539fba892ce..ba2c10d1db4a 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -708,7 +708,7 @@ static inline void emit_a32_arsh_r64(const u8 dst[], const u8 src[], bool dstk,
+ }
+
+ /* dst = dst >> src */
+-static inline void emit_a32_lsr_r64(const u8 dst[], const u8 src[], bool dstk,
++static inline void emit_a32_rsh_r64(const u8 dst[], const u8 src[], bool dstk,
+ bool sstk, struct jit_ctx *ctx) {
+ const u8 *tmp = bpf2a32[TMP_REG_1];
+ const u8 *tmp2 = bpf2a32[TMP_REG_2];
+@@ -724,7 +724,7 @@ static inline void emit_a32_lsr_r64(const u8 dst[], const u8 src[], bool dstk,
+ emit(ARM_LDR_I(rm, ARM_SP, STACK_VAR(dst_hi)), ctx);
+ }
+
+- /* Do LSH operation */
++ /* Do RSH operation */
+ emit(ARM_RSB_I(ARM_IP, rt, 32), ctx);
+ emit(ARM_SUBS_I(tmp2[0], rt, 32), ctx);
+ emit(ARM_MOV_SR(ARM_LR, rd, SRTYPE_LSR, rt), ctx);
+@@ -774,7 +774,7 @@ static inline void emit_a32_lsh_i64(const u8 dst[], bool dstk,
+ }
+
+ /* dst = dst >> val */
+-static inline void emit_a32_lsr_i64(const u8 dst[], bool dstk,
++static inline void emit_a32_rsh_i64(const u8 dst[], bool dstk,
+ const u32 val, struct jit_ctx *ctx) {
+ const u8 *tmp = bpf2a32[TMP_REG_1];
+ const u8 *tmp2 = bpf2a32[TMP_REG_2];
+@@ -1330,7 +1330,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+ case BPF_ALU64 | BPF_RSH | BPF_K:
+ if (unlikely(imm > 63))
+ return -EINVAL;
+- emit_a32_lsr_i64(dst, dstk, imm, ctx);
++ emit_a32_rsh_i64(dst, dstk, imm, ctx);
+ break;
+ /* dst = dst << src */
+ case BPF_ALU64 | BPF_LSH | BPF_X:
+@@ -1338,7 +1338,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+ break;
+ /* dst = dst >> src */
+ case BPF_ALU64 | BPF_RSH | BPF_X:
+- emit_a32_lsr_r64(dst, src, dstk, sstk, ctx);
++ emit_a32_rsh_r64(dst, src, dstk, sstk, ctx);
+ break;
+ /* dst = dst >> src (signed) */
+ case BPF_ALU64 | BPF_ARSH | BPF_X:
+diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+index 2a7f36abd2dd..326ee6b59aaa 100644
+--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+@@ -93,20 +93,12 @@
+ regulator-always-on;
+ };
+
+- rsnd_ak4613: sound {
+- compatible = "simple-audio-card";
++ sound_card: sound {
++ compatible = "audio-graph-card";
+
+- simple-audio-card,format = "left_j";
+- simple-audio-card,bitclock-master = <&sndcpu>;
+- simple-audio-card,frame-master = <&sndcpu>;
++ label = "rcar-sound";
+
+- sndcpu: simple-audio-card,cpu {
+- sound-dai = <&rcar_sound>;
+- };
+-
+- sndcodec: simple-audio-card,codec {
+- sound-dai = <&ak4613>;
+- };
++ dais = <&rsnd_port0>;
+ };
+
+ vbus0_usb2: regulator-vbus0-usb2 {
+@@ -322,6 +314,12 @@
+ asahi-kasei,out4-single-end;
+ asahi-kasei,out5-single-end;
+ asahi-kasei,out6-single-end;
++
++ port {
++ ak4613_endpoint: endpoint {
++ remote-endpoint = <&rsnd_endpoint0>;
++ };
++ };
+ };
+
+ cs2000: clk_multiplier@4f {
+@@ -581,10 +579,18 @@
+ <&audio_clk_c>,
+ <&cpg CPG_CORE CPG_AUDIO_CLK_I>;
+
+- rcar_sound,dai {
+- dai0 {
+- playback = <&ssi0 &src0 &dvc0>;
+- capture = <&ssi1 &src1 &dvc1>;
++ ports {
++ rsnd_port0: port@0 {
++ rsnd_endpoint0: endpoint {
++ remote-endpoint = <&ak4613_endpoint>;
++
++ dai-format = "left_j";
++ bitclock-master = <&rsnd_endpoint0>;
++ frame-master = <&rsnd_endpoint0>;
++
++ playback = <&ssi0 &src0 &dvc0>;
++ capture = <&ssi1 &src1 &dvc1>;
++ };
+ };
+ };
+ };
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index fe005df02ed3..13ec8815a91b 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -333,6 +333,8 @@ CONFIG_GPIO_XGENE_SB=y
+ CONFIG_GPIO_PCA953X=y
+ CONFIG_GPIO_PCA953X_IRQ=y
+ CONFIG_GPIO_MAX77620=y
++CONFIG_POWER_AVS=y
++CONFIG_ROCKCHIP_IODOMAIN=y
+ CONFIG_POWER_RESET_MSM=y
+ CONFIG_POWER_RESET_XGENE=y
+ CONFIG_POWER_RESET_SYSCON=y
+diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
+index 4f5fd2a36e6e..3b0938281541 100644
+--- a/arch/arm64/include/asm/cmpxchg.h
++++ b/arch/arm64/include/asm/cmpxchg.h
+@@ -204,7 +204,9 @@ static inline void __cmpwait_case_##name(volatile void *ptr, \
+ unsigned long tmp; \
+ \
+ asm volatile( \
+- " ldxr" #sz "\t%" #w "[tmp], %[v]\n" \
++ " sevl\n" \
++ " wfe\n" \
++ " ldxr" #sz "\t%" #w "[tmp], %[v]\n" \
+ " eor %" #w "[tmp], %" #w "[tmp], %" #w "[val]\n" \
+ " cbnz %" #w "[tmp], 1f\n" \
+ " wfe\n" \
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 1b18b4722420..86d9f9d303b0 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -611,11 +611,13 @@ void __init mem_init(void)
+ BUILD_BUG_ON(TASK_SIZE_32 > TASK_SIZE_64);
+ #endif
+
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
+ /*
+ * Make sure we chose the upper bound of sizeof(struct page)
+- * correctly.
++ * correctly when sizing the VMEMMAP array.
+ */
+ BUILD_BUG_ON(sizeof(struct page) > (1 << STRUCT_PAGE_MAX_SHIFT));
++#endif
+
+ if (PAGE_SIZE >= 16384 && get_num_physpages() <= 128) {
+ extern int sysctl_overcommit_memory;
+diff --git a/arch/microblaze/boot/Makefile b/arch/microblaze/boot/Makefile
+index fd46385a4c97..600e5a198bd2 100644
+--- a/arch/microblaze/boot/Makefile
++++ b/arch/microblaze/boot/Makefile
+@@ -22,17 +22,19 @@ $(obj)/linux.bin.gz: $(obj)/linux.bin FORCE
+ quiet_cmd_cp = CP $< $@$2
+ cmd_cp = cat $< >$@$2 || (rm -f $@ && echo false)
+
+-quiet_cmd_strip = STRIP $@
++quiet_cmd_strip = STRIP $< $@$2
+ cmd_strip = $(STRIP) -K microblaze_start -K _end -K __log_buf \
+- -K _fdt_start vmlinux -o $@
++ -K _fdt_start $< -o $@$2
+
+ UIMAGE_LOADADDR = $(CONFIG_KERNEL_BASE_ADDR)
++UIMAGE_IN = $@
++UIMAGE_OUT = $@.ub
+
+ $(obj)/simpleImage.%: vmlinux FORCE
+ $(call if_changed,cp,.unstrip)
+ $(call if_changed,objcopy)
+ $(call if_changed,uimage)
+- $(call if_changed,strip)
+- @echo 'Kernel: $@ is ready' ' (#'`cat .version`')'
++ $(call if_changed,strip,.strip)
++ @echo 'Kernel: $(UIMAGE_OUT) is ready' ' (#'`cat .version`')'
+
+ clean-files += simpleImage.*.unstrip linux.bin.ub
+diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
+index c7c63959ba91..e582d2c88092 100644
+--- a/arch/powerpc/include/asm/barrier.h
++++ b/arch/powerpc/include/asm/barrier.h
+@@ -76,6 +76,21 @@ do { \
+ ___p1; \
+ })
+
++#ifdef CONFIG_PPC_BOOK3S_64
++/*
++ * Prevent execution of subsequent instructions until preceding branches have
++ * been fully resolved and are no longer executing speculatively.
++ */
++#define barrier_nospec_asm ori 31,31,0
++
++// This also acts as a compiler barrier due to the memory clobber.
++#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
++
++#else /* !CONFIG_PPC_BOOK3S_64 */
++#define barrier_nospec_asm
++#define barrier_nospec()
++#endif
++
+ #include <asm-generic/barrier.h>
+
+ #endif /* _ASM_POWERPC_BARRIER_H */
+diff --git a/arch/powerpc/include/asm/cache.h b/arch/powerpc/include/asm/cache.h
+index c1d257aa4c2d..66298461b640 100644
+--- a/arch/powerpc/include/asm/cache.h
++++ b/arch/powerpc/include/asm/cache.h
+@@ -9,11 +9,14 @@
+ #if defined(CONFIG_PPC_8xx) || defined(CONFIG_403GCX)
+ #define L1_CACHE_SHIFT 4
+ #define MAX_COPY_PREFETCH 1
++#define IFETCH_ALIGN_SHIFT 2
+ #elif defined(CONFIG_PPC_E500MC)
+ #define L1_CACHE_SHIFT 6
+ #define MAX_COPY_PREFETCH 4
++#define IFETCH_ALIGN_SHIFT 3
+ #elif defined(CONFIG_PPC32)
+ #define MAX_COPY_PREFETCH 4
++#define IFETCH_ALIGN_SHIFT 3 /* 603 fetches 2 insn at a time */
+ #if defined(CONFIG_PPC_47x)
+ #define L1_CACHE_SHIFT 7
+ #else
+diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
+index 0409c80c32c0..18ef59a9886d 100644
+--- a/arch/powerpc/include/asm/pkeys.h
++++ b/arch/powerpc/include/asm/pkeys.h
+@@ -26,6 +26,8 @@ extern u32 initial_allocation_mask; /* bits set for reserved keys */
+ # define VM_PKEY_BIT2 VM_HIGH_ARCH_2
+ # define VM_PKEY_BIT3 VM_HIGH_ARCH_3
+ # define VM_PKEY_BIT4 VM_HIGH_ARCH_4
++#elif !defined(VM_PKEY_BIT4)
++# define VM_PKEY_BIT4 VM_HIGH_ARCH_4
+ #endif
+
+ #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
+diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
+index b8a329f04814..e03c437a4d06 100644
+--- a/arch/powerpc/kernel/eeh_driver.c
++++ b/arch/powerpc/kernel/eeh_driver.c
+@@ -458,9 +458,11 @@ static void *eeh_add_virt_device(void *data, void *userdata)
+
+ driver = eeh_pcid_get(dev);
+ if (driver) {
+- eeh_pcid_put(dev);
+- if (driver->err_handler)
++ if (driver->err_handler) {
++ eeh_pcid_put(dev);
+ return NULL;
++ }
++ eeh_pcid_put(dev);
+ }
+
+ #ifdef CONFIG_PCI_IOV
+@@ -497,17 +499,19 @@ static void *eeh_rmv_device(void *data, void *userdata)
+ if (eeh_dev_removed(edev))
+ return NULL;
+
+- driver = eeh_pcid_get(dev);
+- if (driver) {
+- eeh_pcid_put(dev);
+- if (removed &&
+- eeh_pe_passed(edev->pe))
+- return NULL;
+- if (removed &&
+- driver->err_handler &&
+- driver->err_handler->error_detected &&
+- driver->err_handler->slot_reset)
++ if (removed) {
++ if (eeh_pe_passed(edev->pe))
+ return NULL;
++ driver = eeh_pcid_get(dev);
++ if (driver) {
++ if (driver->err_handler &&
++ driver->err_handler->error_detected &&
++ driver->err_handler->slot_reset) {
++ eeh_pcid_put(dev);
++ return NULL;
++ }
++ eeh_pcid_put(dev);
++ }
+ }
+
+ /* Remove it from PCI subsystem */
+diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
+index d8670a37d70c..6cab07e76732 100644
+--- a/arch/powerpc/kernel/head_8xx.S
++++ b/arch/powerpc/kernel/head_8xx.S
+@@ -913,7 +913,7 @@ start_here:
+ tovirt(r6,r6)
+ lis r5, abatron_pteptrs@h
+ ori r5, r5, abatron_pteptrs@l
+- stw r5, 0xf0(r0) /* Must match your Abatron config file */
++ stw r5, 0xf0(0) /* Must match your Abatron config file */
+ tophys(r5,r5)
+ stw r6, 0(r5)
+
+diff --git a/arch/powerpc/kernel/pci_32.c b/arch/powerpc/kernel/pci_32.c
+index 85ad2f78b889..51cba7d7a4fd 100644
+--- a/arch/powerpc/kernel/pci_32.c
++++ b/arch/powerpc/kernel/pci_32.c
+@@ -11,6 +11,7 @@
+ #include <linux/sched.h>
+ #include <linux/errno.h>
+ #include <linux/bootmem.h>
++#include <linux/syscalls.h>
+ #include <linux/irq.h>
+ #include <linux/list.h>
+ #include <linux/of.h>
+diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
+index f9d6befb55a6..67f9c157bcc0 100644
+--- a/arch/powerpc/kernel/prom_init.c
++++ b/arch/powerpc/kernel/prom_init.c
+@@ -334,6 +334,7 @@ static void __init prom_print_dec(unsigned long val)
+ call_prom("write", 3, 1, prom.stdout, buf+i, size);
+ }
+
++__printf(1, 2)
+ static void __init prom_printf(const char *format, ...)
+ {
+ const char *p, *q, *s;
+@@ -1160,7 +1161,7 @@ static void __init prom_send_capabilities(void)
+ */
+
+ cores = DIV_ROUND_UP(NR_CPUS, prom_count_smt_threads());
+- prom_printf("Max number of cores passed to firmware: %lu (NR_CPUS = %lu)\n",
++ prom_printf("Max number of cores passed to firmware: %u (NR_CPUS = %d)\n",
+ cores, NR_CPUS);
+
+ ibm_architecture_vec.vec5.max_cpus = cpu_to_be32(cores);
+@@ -1242,7 +1243,7 @@ static unsigned long __init alloc_up(unsigned long size, unsigned long align)
+
+ if (align)
+ base = _ALIGN_UP(base, align);
+- prom_debug("alloc_up(%x, %x)\n", size, align);
++ prom_debug("%s(%lx, %lx)\n", __func__, size, align);
+ if (ram_top == 0)
+ prom_panic("alloc_up() called with mem not initialized\n");
+
+@@ -1253,7 +1254,7 @@ static unsigned long __init alloc_up(unsigned long size, unsigned long align)
+
+ for(; (base + size) <= alloc_top;
+ base = _ALIGN_UP(base + 0x100000, align)) {
+- prom_debug(" trying: 0x%x\n\r", base);
++ prom_debug(" trying: 0x%lx\n\r", base);
+ addr = (unsigned long)prom_claim(base, size, 0);
+ if (addr != PROM_ERROR && addr != 0)
+ break;
+@@ -1265,12 +1266,12 @@ static unsigned long __init alloc_up(unsigned long size, unsigned long align)
+ return 0;
+ alloc_bottom = addr + size;
+
+- prom_debug(" -> %x\n", addr);
+- prom_debug(" alloc_bottom : %x\n", alloc_bottom);
+- prom_debug(" alloc_top : %x\n", alloc_top);
+- prom_debug(" alloc_top_hi : %x\n", alloc_top_high);
+- prom_debug(" rmo_top : %x\n", rmo_top);
+- prom_debug(" ram_top : %x\n", ram_top);
++ prom_debug(" -> %lx\n", addr);
++ prom_debug(" alloc_bottom : %lx\n", alloc_bottom);
++ prom_debug(" alloc_top : %lx\n", alloc_top);
++ prom_debug(" alloc_top_hi : %lx\n", alloc_top_high);
++ prom_debug(" rmo_top : %lx\n", rmo_top);
++ prom_debug(" ram_top : %lx\n", ram_top);
+
+ return addr;
+ }
+@@ -1285,7 +1286,7 @@ static unsigned long __init alloc_down(unsigned long size, unsigned long align,
+ {
+ unsigned long base, addr = 0;
+
+- prom_debug("alloc_down(%x, %x, %s)\n", size, align,
++ prom_debug("%s(%lx, %lx, %s)\n", __func__, size, align,
+ highmem ? "(high)" : "(low)");
+ if (ram_top == 0)
+ prom_panic("alloc_down() called with mem not initialized\n");
+@@ -1313,7 +1314,7 @@ static unsigned long __init alloc_down(unsigned long size, unsigned long align,
+ base = _ALIGN_DOWN(alloc_top - size, align);
+ for (; base > alloc_bottom;
+ base = _ALIGN_DOWN(base - 0x100000, align)) {
+- prom_debug(" trying: 0x%x\n\r", base);
++ prom_debug(" trying: 0x%lx\n\r", base);
+ addr = (unsigned long)prom_claim(base, size, 0);
+ if (addr != PROM_ERROR && addr != 0)
+ break;
+@@ -1324,12 +1325,12 @@ static unsigned long __init alloc_down(unsigned long size, unsigned long align,
+ alloc_top = addr;
+
+ bail:
+- prom_debug(" -> %x\n", addr);
+- prom_debug(" alloc_bottom : %x\n", alloc_bottom);
+- prom_debug(" alloc_top : %x\n", alloc_top);
+- prom_debug(" alloc_top_hi : %x\n", alloc_top_high);
+- prom_debug(" rmo_top : %x\n", rmo_top);
+- prom_debug(" ram_top : %x\n", ram_top);
++ prom_debug(" -> %lx\n", addr);
++ prom_debug(" alloc_bottom : %lx\n", alloc_bottom);
++ prom_debug(" alloc_top : %lx\n", alloc_top);
++ prom_debug(" alloc_top_hi : %lx\n", alloc_top_high);
++ prom_debug(" rmo_top : %lx\n", rmo_top);
++ prom_debug(" ram_top : %lx\n", ram_top);
+
+ return addr;
+ }
+@@ -1455,7 +1456,7 @@ static void __init prom_init_mem(void)
+
+ if (size == 0)
+ continue;
+- prom_debug(" %x %x\n", base, size);
++ prom_debug(" %lx %lx\n", base, size);
+ if (base == 0 && (of_platform & PLATFORM_LPAR))
+ rmo_top = size;
+ if ((base + size) > ram_top)
+@@ -1475,12 +1476,12 @@ static void __init prom_init_mem(void)
+
+ if (prom_memory_limit) {
+ if (prom_memory_limit <= alloc_bottom) {
+- prom_printf("Ignoring mem=%x <= alloc_bottom.\n",
+- prom_memory_limit);
++ prom_printf("Ignoring mem=%lx <= alloc_bottom.\n",
++ prom_memory_limit);
+ prom_memory_limit = 0;
+ } else if (prom_memory_limit >= ram_top) {
+- prom_printf("Ignoring mem=%x >= ram_top.\n",
+- prom_memory_limit);
++ prom_printf("Ignoring mem=%lx >= ram_top.\n",
++ prom_memory_limit);
+ prom_memory_limit = 0;
+ } else {
+ ram_top = prom_memory_limit;
+@@ -1512,12 +1513,13 @@ static void __init prom_init_mem(void)
+ alloc_bottom = PAGE_ALIGN(prom_initrd_end);
+
+ prom_printf("memory layout at init:\n");
+- prom_printf(" memory_limit : %x (16 MB aligned)\n", prom_memory_limit);
+- prom_printf(" alloc_bottom : %x\n", alloc_bottom);
+- prom_printf(" alloc_top : %x\n", alloc_top);
+- prom_printf(" alloc_top_hi : %x\n", alloc_top_high);
+- prom_printf(" rmo_top : %x\n", rmo_top);
+- prom_printf(" ram_top : %x\n", ram_top);
++ prom_printf(" memory_limit : %lx (16 MB aligned)\n",
++ prom_memory_limit);
++ prom_printf(" alloc_bottom : %lx\n", alloc_bottom);
++ prom_printf(" alloc_top : %lx\n", alloc_top);
++ prom_printf(" alloc_top_hi : %lx\n", alloc_top_high);
++ prom_printf(" rmo_top : %lx\n", rmo_top);
++ prom_printf(" ram_top : %lx\n", ram_top);
+ }
+
+ static void __init prom_close_stdin(void)
+@@ -1578,7 +1580,7 @@ static void __init prom_instantiate_opal(void)
+ return;
+ }
+
+- prom_printf("instantiating opal at 0x%x...", base);
++ prom_printf("instantiating opal at 0x%llx...", base);
+
+ if (call_prom_ret("call-method", 4, 3, rets,
+ ADDR("load-opal-runtime"),
+@@ -1594,10 +1596,10 @@ static void __init prom_instantiate_opal(void)
+
+ reserve_mem(base, size);
+
+- prom_debug("opal base = 0x%x\n", base);
+- prom_debug("opal align = 0x%x\n", align);
+- prom_debug("opal entry = 0x%x\n", entry);
+- prom_debug("opal size = 0x%x\n", (long)size);
++ prom_debug("opal base = 0x%llx\n", base);
++ prom_debug("opal align = 0x%llx\n", align);
++ prom_debug("opal entry = 0x%llx\n", entry);
++ prom_debug("opal size = 0x%llx\n", size);
+
+ prom_setprop(opal_node, "/ibm,opal", "opal-base-address",
+ &base, sizeof(base));
+@@ -1674,7 +1676,7 @@ static void __init prom_instantiate_rtas(void)
+
+ prom_debug("rtas base = 0x%x\n", base);
+ prom_debug("rtas entry = 0x%x\n", entry);
+- prom_debug("rtas size = 0x%x\n", (long)size);
++ prom_debug("rtas size = 0x%x\n", size);
+
+ prom_debug("prom_instantiate_rtas: end...\n");
+ }
+@@ -1732,7 +1734,7 @@ static void __init prom_instantiate_sml(void)
+ if (base == 0)
+ prom_panic("Could not allocate memory for sml\n");
+
+- prom_printf("instantiating sml at 0x%x...", base);
++ prom_printf("instantiating sml at 0x%llx...", base);
+
+ memset((void *)base, 0, size);
+
+@@ -1751,8 +1753,8 @@ static void __init prom_instantiate_sml(void)
+ prom_setprop(ibmvtpm_node, "/vdevice/vtpm", "linux,sml-size",
+ &size, sizeof(size));
+
+- prom_debug("sml base = 0x%x\n", base);
+- prom_debug("sml size = 0x%x\n", (long)size);
++ prom_debug("sml base = 0x%llx\n", base);
++ prom_debug("sml size = 0x%x\n", size);
+
+ prom_debug("prom_instantiate_sml: end...\n");
+ }
+@@ -1845,7 +1847,7 @@ static void __init prom_initialize_tce_table(void)
+
+ prom_debug("TCE table: %s\n", path);
+ prom_debug("\tnode = 0x%x\n", node);
+- prom_debug("\tbase = 0x%x\n", base);
++ prom_debug("\tbase = 0x%llx\n", base);
+ prom_debug("\tsize = 0x%x\n", minsize);
+
+ /* Initialize the table to have a one-to-one mapping
+@@ -1932,12 +1934,12 @@ static void __init prom_hold_cpus(void)
+ }
+
+ prom_debug("prom_hold_cpus: start...\n");
+- prom_debug(" 1) spinloop = 0x%x\n", (unsigned long)spinloop);
+- prom_debug(" 1) *spinloop = 0x%x\n", *spinloop);
+- prom_debug(" 1) acknowledge = 0x%x\n",
++ prom_debug(" 1) spinloop = 0x%lx\n", (unsigned long)spinloop);
++ prom_debug(" 1) *spinloop = 0x%lx\n", *spinloop);
++ prom_debug(" 1) acknowledge = 0x%lx\n",
+ (unsigned long)acknowledge);
+- prom_debug(" 1) *acknowledge = 0x%x\n", *acknowledge);
+- prom_debug(" 1) secondary_hold = 0x%x\n", secondary_hold);
++ prom_debug(" 1) *acknowledge = 0x%lx\n", *acknowledge);
++ prom_debug(" 1) secondary_hold = 0x%lx\n", secondary_hold);
+
+ /* Set the common spinloop variable, so all of the secondary cpus
+ * will block when they are awakened from their OF spinloop.
+@@ -1965,7 +1967,7 @@ static void __init prom_hold_cpus(void)
+ prom_getprop(node, "reg", &reg, sizeof(reg));
+ cpu_no = be32_to_cpu(reg);
+
+- prom_debug("cpu hw idx = %lu\n", cpu_no);
++ prom_debug("cpu hw idx = %u\n", cpu_no);
+
+ /* Init the acknowledge var which will be reset by
+ * the secondary cpu when it awakens from its OF
+@@ -1975,7 +1977,7 @@ static void __init prom_hold_cpus(void)
+
+ if (cpu_no != prom.cpu) {
+ /* Primary Thread of non-boot cpu or any thread */
+- prom_printf("starting cpu hw idx %lu... ", cpu_no);
++ prom_printf("starting cpu hw idx %u... ", cpu_no);
+ call_prom("start-cpu", 3, 0, node,
+ secondary_hold, cpu_no);
+
+@@ -1986,11 +1988,11 @@ static void __init prom_hold_cpus(void)
+ if (*acknowledge == cpu_no)
+ prom_printf("done\n");
+ else
+- prom_printf("failed: %x\n", *acknowledge);
++ prom_printf("failed: %lx\n", *acknowledge);
+ }
+ #ifdef CONFIG_SMP
+ else
+- prom_printf("boot cpu hw idx %lu\n", cpu_no);
++ prom_printf("boot cpu hw idx %u\n", cpu_no);
+ #endif /* CONFIG_SMP */
+ }
+
+@@ -2268,7 +2270,7 @@ static void __init *make_room(unsigned long *mem_start, unsigned long *mem_end,
+ while ((*mem_start + needed) > *mem_end) {
+ unsigned long room, chunk;
+
+- prom_debug("Chunk exhausted, claiming more at %x...\n",
++ prom_debug("Chunk exhausted, claiming more at %lx...\n",
+ alloc_bottom);
+ room = alloc_top - alloc_bottom;
+ if (room > DEVTREE_CHUNK_SIZE)
+@@ -2494,7 +2496,7 @@ static void __init flatten_device_tree(void)
+ room = alloc_top - alloc_bottom - 0x4000;
+ if (room > DEVTREE_CHUNK_SIZE)
+ room = DEVTREE_CHUNK_SIZE;
+- prom_debug("starting device tree allocs at %x\n", alloc_bottom);
++ prom_debug("starting device tree allocs at %lx\n", alloc_bottom);
+
+ /* Now try to claim that */
+ mem_start = (unsigned long)alloc_up(room, PAGE_SIZE);
+@@ -2557,7 +2559,7 @@ static void __init flatten_device_tree(void)
+ int i;
+ prom_printf("reserved memory map:\n");
+ for (i = 0; i < mem_reserve_cnt; i++)
+- prom_printf(" %x - %x\n",
++ prom_printf(" %llx - %llx\n",
+ be64_to_cpu(mem_reserve_map[i].base),
+ be64_to_cpu(mem_reserve_map[i].size));
+ }
+@@ -2567,9 +2569,9 @@ static void __init flatten_device_tree(void)
+ */
+ mem_reserve_cnt = MEM_RESERVE_MAP_SIZE;
+
+- prom_printf("Device tree strings 0x%x -> 0x%x\n",
++ prom_printf("Device tree strings 0x%lx -> 0x%lx\n",
+ dt_string_start, dt_string_end);
+- prom_printf("Device tree struct 0x%x -> 0x%x\n",
++ prom_printf("Device tree struct 0x%lx -> 0x%lx\n",
+ dt_struct_start, dt_struct_end);
+ }
+
+@@ -3001,7 +3003,7 @@ static void __init prom_find_boot_cpu(void)
+ prom_getprop(cpu_pkg, "reg", &rval, sizeof(rval));
+ prom.cpu = be32_to_cpu(rval);
+
+- prom_debug("Booting CPU hw index = %lu\n", prom.cpu);
++ prom_debug("Booting CPU hw index = %d\n", prom.cpu);
+ }
+
+ static void __init prom_check_initrd(unsigned long r3, unsigned long r4)
+@@ -3023,8 +3025,8 @@ static void __init prom_check_initrd(unsigned long r3, unsigned long r4)
+ reserve_mem(prom_initrd_start,
+ prom_initrd_end - prom_initrd_start);
+
+- prom_debug("initrd_start=0x%x\n", prom_initrd_start);
+- prom_debug("initrd_end=0x%x\n", prom_initrd_end);
++ prom_debug("initrd_start=0x%lx\n", prom_initrd_start);
++ prom_debug("initrd_end=0x%lx\n", prom_initrd_end);
+ }
+ #endif /* CONFIG_BLK_DEV_INITRD */
+ }
+@@ -3277,7 +3279,7 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
+ /* Don't print anything after quiesce under OPAL, it crashes OFW */
+ if (of_platform != PLATFORM_OPAL) {
+ prom_printf("Booting Linux via __start() @ 0x%lx ...\n", kbase);
+- prom_debug("->dt_header_start=0x%x\n", hdr);
++ prom_debug("->dt_header_start=0x%lx\n", hdr);
+ }
+
+ #ifdef CONFIG_PPC32
+diff --git a/arch/powerpc/lib/string.S b/arch/powerpc/lib/string.S
+index a787776822d8..0378def28d41 100644
+--- a/arch/powerpc/lib/string.S
++++ b/arch/powerpc/lib/string.S
+@@ -12,6 +12,7 @@
+ #include <asm/errno.h>
+ #include <asm/ppc_asm.h>
+ #include <asm/export.h>
++#include <asm/cache.h>
+
+ .text
+
+@@ -23,7 +24,7 @@ _GLOBAL(strncpy)
+ mtctr r5
+ addi r6,r3,-1
+ addi r4,r4,-1
+- .balign 16
++ .balign IFETCH_ALIGN_BYTES
+ 1: lbzu r0,1(r4)
+ cmpwi 0,r0,0
+ stbu r0,1(r6)
+@@ -43,7 +44,7 @@ _GLOBAL(strncmp)
+ mtctr r5
+ addi r5,r3,-1
+ addi r4,r4,-1
+- .balign 16
++ .balign IFETCH_ALIGN_BYTES
+ 1: lbzu r3,1(r5)
+ cmpwi 1,r3,0
+ lbzu r0,1(r4)
+@@ -77,7 +78,7 @@ _GLOBAL(memchr)
+ beq- 2f
+ mtctr r5
+ addi r3,r3,-1
+- .balign 16
++ .balign IFETCH_ALIGN_BYTES
+ 1: lbzu r0,1(r3)
+ cmpw 0,r0,r4
+ bdnzf 2,1b
+diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
+index 66577cc66dc9..2f4b33b24b3b 100644
+--- a/arch/powerpc/mm/slb.c
++++ b/arch/powerpc/mm/slb.c
+@@ -63,14 +63,14 @@ static inline void slb_shadow_update(unsigned long ea, int ssize,
+ * updating it. No write barriers are needed here, provided
+ * we only update the current CPU's SLB shadow buffer.
+ */
+- p->save_area[index].esid = 0;
+- p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
+- p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));
++ WRITE_ONCE(p->save_area[index].esid, 0);
++ WRITE_ONCE(p->save_area[index].vsid, cpu_to_be64(mk_vsid_data(ea, ssize, flags)));
++ WRITE_ONCE(p->save_area[index].esid, cpu_to_be64(mk_esid_data(ea, ssize, index)));
+ }
+
+ static inline void slb_shadow_clear(enum slb_index index)
+ {
+- get_slb_shadow()->save_area[index].esid = 0;
++ WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
+ }
+
+ static inline void create_shadowed_slbe(unsigned long ea, int ssize,
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 0ef3d9580e98..5299013bd9c9 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -202,25 +202,37 @@ static void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
+
+ static void bpf_jit_emit_func_call(u32 *image, struct codegen_context *ctx, u64 func)
+ {
++ unsigned int i, ctx_idx = ctx->idx;
++
++ /* Load function address into r12 */
++ PPC_LI64(12, func);
++
++ /* For bpf-to-bpf function calls, the callee's address is unknown
++ * until the last extra pass. As seen above, we use PPC_LI64() to
++ * load the callee's address, but this may optimize the number of
++ * instructions required based on the nature of the address.
++ *
++ * Since we don't want the number of instructions emitted to change,
++ * we pad the optimized PPC_LI64() call with NOPs to guarantee that
++ * we always have a five-instruction sequence, which is the maximum
++ * that PPC_LI64() can emit.
++ */
++ for (i = ctx->idx - ctx_idx; i < 5; i++)
++ PPC_NOP();
++
+ #ifdef PPC64_ELF_ABI_v1
+- /* func points to the function descriptor */
+- PPC_LI64(b2p[TMP_REG_2], func);
+- /* Load actual entry point from function descriptor */
+- PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_2], 0);
+- /* ... and move it to LR */
+- PPC_MTLR(b2p[TMP_REG_1]);
+ /*
+ * Load TOC from function descriptor at offset 8.
+ * We can clobber r2 since we get called through a
+ * function pointer (so caller will save/restore r2)
+ * and since we don't use a TOC ourself.
+ */
+- PPC_BPF_LL(2, b2p[TMP_REG_2], 8);
+-#else
+- /* We can clobber r12 */
+- PPC_FUNC_ADDR(12, func);
+- PPC_MTLR(12);
++ PPC_BPF_LL(2, 12, 8);
++ /* Load actual entry point from function descriptor */
++ PPC_BPF_LL(12, 12, 0);
+ #endif
++
++ PPC_MTLR(12);
+ PPC_BLRL();
+ }
+
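
The powerpc BPF-JIT hunk above pads the variable-length PPC_LI64() immediate load out to a fixed five instructions, because for bpf-to-bpf calls the callee address is not final until the last pass and the image size must not change between passes. A minimal sketch of the pad-to-fixed-slots idea; the 16-bits-per-word encoding below is invented for illustration:

    #include <stdio.h>

    #define NOP 0x60000000u                 /* ori 0,0,0 on powerpc */
    #define MAX_SLOTS 5

    /* Emit 1..MAX_SLOTS words for an immediate, then pad with NOPs so the
     * sequence length is always MAX_SLOTS regardless of the constant. */
    static int load_imm(unsigned int *buf, unsigned long long imm)
    {
            int n = 0;

            do {                            /* pretend each word holds 16 bits */
                    buf[n++] = 0x38000000u | (unsigned int)(imm & 0xffff);
                    imm >>= 16;
            } while (imm && n < MAX_SLOTS);

            while (n < MAX_SLOTS)
                    buf[n++] = NOP;         /* padding keeps the length stable */
            return MAX_SLOTS;
    }

    int main(void)
    {
            unsigned int buf[MAX_SLOTS];
            int i, n = load_imm(buf, 0x1234);

            for (i = 0; i < n; i++)
                    printf("%08x\n", buf[i]);
            return 0;
    }
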
+diff --git a/arch/powerpc/platforms/chrp/time.c b/arch/powerpc/platforms/chrp/time.c
+index 03d115aaa191..acde7bbe0716 100644
+--- a/arch/powerpc/platforms/chrp/time.c
++++ b/arch/powerpc/platforms/chrp/time.c
+@@ -28,6 +28,8 @@
+ #include <asm/sections.h>
+ #include <asm/time.h>
+
++#include <platforms/chrp/chrp.h>
++
+ extern spinlock_t rtc_lock;
+
+ #define NVRAM_AS0 0x74
+@@ -63,7 +65,7 @@ long __init chrp_time_init(void)
+ return 0;
+ }
+
+-int chrp_cmos_clock_read(int addr)
++static int chrp_cmos_clock_read(int addr)
+ {
+ if (nvram_as1 != 0)
+ outb(addr>>8, nvram_as1);
+@@ -71,7 +73,7 @@ int chrp_cmos_clock_read(int addr)
+ return (inb(nvram_data));
+ }
+
+-void chrp_cmos_clock_write(unsigned long val, int addr)
++static void chrp_cmos_clock_write(unsigned long val, int addr)
+ {
+ if (nvram_as1 != 0)
+ outb(addr>>8, nvram_as1);
+diff --git a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+index 89c54de88b7a..bf4a125faec6 100644
+--- a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
++++ b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+@@ -35,6 +35,8 @@
+ */
+ #define HW_BROADWAY_ICR 0x00
+ #define HW_BROADWAY_IMR 0x04
++#define HW_STARLET_ICR 0x08
++#define HW_STARLET_IMR 0x0c
+
+
+ /*
+@@ -74,6 +76,9 @@ static void hlwd_pic_unmask(struct irq_data *d)
+ void __iomem *io_base = irq_data_get_irq_chip_data(d);
+
+ setbits32(io_base + HW_BROADWAY_IMR, 1 << irq);
++
++ /* Make sure the ARM (aka. Starlet) doesn't handle this interrupt. */
++ clrbits32(io_base + HW_STARLET_IMR, 1 << irq);
+ }
+
+
+diff --git a/arch/powerpc/platforms/powermac/bootx_init.c b/arch/powerpc/platforms/powermac/bootx_init.c
+index c3c9bbb3573a..ba0964c17620 100644
+--- a/arch/powerpc/platforms/powermac/bootx_init.c
++++ b/arch/powerpc/platforms/powermac/bootx_init.c
+@@ -468,7 +468,7 @@ void __init bootx_init(unsigned long r3, unsigned long r4)
+ boot_infos_t *bi = (boot_infos_t *) r4;
+ unsigned long hdr;
+ unsigned long space;
+- unsigned long ptr, x;
++ unsigned long ptr;
+ char *model;
+ unsigned long offset = reloc_offset();
+
+@@ -562,6 +562,8 @@ void __init bootx_init(unsigned long r3, unsigned long r4)
+ * MMU switched OFF, so this should not be useful anymore.
+ */
+ if (bi->version < 4) {
++ unsigned long x __maybe_unused;
++
+ bootx_printf("Touching pages...\n");
+
+ /*
+diff --git a/arch/powerpc/platforms/powermac/setup.c b/arch/powerpc/platforms/powermac/setup.c
+index ab668cb72263..8b2eab1340f4 100644
+--- a/arch/powerpc/platforms/powermac/setup.c
++++ b/arch/powerpc/platforms/powermac/setup.c
+@@ -352,6 +352,7 @@ static int pmac_late_init(void)
+ }
+ machine_late_initcall(powermac, pmac_late_init);
+
++void note_bootable_part(dev_t dev, int part, int goodness);
+ /*
+ * This is __ref because we check for "initializing" before
+ * touching any of the __init sensitive things and "initializing"
+diff --git a/arch/s390/include/asm/cpu_mf.h b/arch/s390/include/asm/cpu_mf.h
+index f58d17e9dd65..de023a9a88ca 100644
+--- a/arch/s390/include/asm/cpu_mf.h
++++ b/arch/s390/include/asm/cpu_mf.h
+@@ -113,7 +113,7 @@ struct hws_basic_entry {
+
+ struct hws_diag_entry {
+ unsigned int def:16; /* 0-15 Data Entry Format */
+- unsigned int R:14; /* 16-19 and 20-30 reserved */
++ unsigned int R:15; /* 16-19 and 20-30 reserved */
+ unsigned int I:1; /* 31 entry valid or invalid */
+ u8 data[]; /* Machine-dependent sample data */
+ } __packed;
+@@ -129,7 +129,9 @@ struct hws_trailer_entry {
+ unsigned int f:1; /* 0 - Block Full Indicator */
+ unsigned int a:1; /* 1 - Alert request control */
+ unsigned int t:1; /* 2 - Timestamp format */
+- unsigned long long:61; /* 3 - 63: Reserved */
++ unsigned int :29; /* 3 - 31: Reserved */
++ unsigned int bsdes:16; /* 32-47: size of basic SDE */
++ unsigned int dsdes:16; /* 48-63: size of diagnostic SDE */
+ };
+ unsigned long long flags; /* 0 - 63: All indicators */
+ };
+diff --git a/arch/sh/boards/mach-migor/setup.c b/arch/sh/boards/mach-migor/setup.c
+index 271dfc260e82..3d7d0046cf49 100644
+--- a/arch/sh/boards/mach-migor/setup.c
++++ b/arch/sh/boards/mach-migor/setup.c
+@@ -359,7 +359,7 @@ static struct gpiod_lookup_table ov7725_gpios = {
+ static struct gpiod_lookup_table tw9910_gpios = {
+ .dev_id = "0-0045",
+ .table = {
+- GPIO_LOOKUP("sh7722_pfc", GPIO_PTT2, "pdn", GPIO_ACTIVE_HIGH),
++ GPIO_LOOKUP("sh7722_pfc", GPIO_PTT2, "pdn", GPIO_ACTIVE_LOW),
+ GPIO_LOOKUP("sh7722_pfc", GPIO_PTT3, "rstb", GPIO_ACTIVE_LOW),
+ },
+ };
+diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
+index a7956fc7ca1d..3b0f93eb3cc0 100644
+--- a/arch/x86/events/intel/uncore.c
++++ b/arch/x86/events/intel/uncore.c
+@@ -218,7 +218,7 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *e
+ u64 prev_count, new_count, delta;
+ int shift;
+
+- if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
++ if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
+ shift = 64 - uncore_fixed_ctr_bits(box);
+ else
+ shift = 64 - uncore_perf_ctr_bits(box);
+diff --git a/arch/x86/events/intel/uncore_nhmex.c b/arch/x86/events/intel/uncore_nhmex.c
+index 93e7a8397cde..173e2674be6e 100644
+--- a/arch/x86/events/intel/uncore_nhmex.c
++++ b/arch/x86/events/intel/uncore_nhmex.c
+@@ -246,7 +246,7 @@ static void nhmex_uncore_msr_enable_event(struct intel_uncore_box *box, struct p
+ {
+ struct hw_perf_event *hwc = &event->hw;
+
+- if (hwc->idx >= UNCORE_PMC_IDX_FIXED)
++ if (hwc->idx == UNCORE_PMC_IDX_FIXED)
+ wrmsrl(hwc->config_base, NHMEX_PMON_CTL_EN_BIT0);
+ else if (box->pmu->type->event_mask & NHMEX_PMON_CTL_EN_BIT0)
+ wrmsrl(hwc->config_base, hwc->config | NHMEX_PMON_CTL_EN_BIT22);
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 77e201301528..08286269fd24 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -70,7 +70,7 @@ static DEFINE_MUTEX(microcode_mutex);
+ /*
+ * Serialize late loading so that CPUs get updated one-by-one.
+ */
+-static DEFINE_SPINLOCK(update_lock);
++static DEFINE_RAW_SPINLOCK(update_lock);
+
+ struct ucode_cpu_info ucode_cpu_info[NR_CPUS];
+
+@@ -560,9 +560,9 @@ static int __reload_late(void *info)
+ if (__wait_for_cpus(&late_cpus_in, NSEC_PER_SEC))
+ return -1;
+
+- spin_lock(&update_lock);
++ raw_spin_lock(&update_lock);
+ apply_microcode_local(&err);
+- spin_unlock(&update_lock);
++ raw_spin_unlock(&update_lock);
+
+ /* siblings return UCODE_OK because their engine got updated already */
+ if (err > UCODE_NFOUND) {
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 8494dbae41b9..030c6bb240d9 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -891,7 +891,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
+ if (cache->nobjs >= min)
+ return 0;
+ while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
+- page = (void *)__get_free_page(GFP_KERNEL);
++ page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT);
+ if (!page)
+ return -ENOMEM;
+ cache->objects[cache->nobjs++] = page;
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 26110c202b19..ace53c2934dc 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -1768,7 +1768,10 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
+ unsigned long npages, npinned, size;
+ unsigned long locked, lock_limit;
+ struct page **pages;
+- int first, last;
++ unsigned long first, last;
++
++ if (ulen == 0 || uaddr + ulen < uaddr)
++ return NULL;
+
+ /* Calculate number of pages. */
+ first = (uaddr & PAGE_MASK) >> PAGE_SHIFT;
+@@ -6947,6 +6950,9 @@ static int svm_register_enc_region(struct kvm *kvm,
+ if (!sev_guest(kvm))
+ return -ENOTTY;
+
++ if (range->addr > ULONG_MAX || range->size > ULONG_MAX)
++ return -EINVAL;
++
+ region = kzalloc(sizeof(*region), GFP_KERNEL);
+ if (!region)
+ return -ENOMEM;
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 771ae9730ac6..1f0951d36424 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -1898,7 +1898,6 @@ static void bfq_requests_merged(struct request_queue *q, struct request *rq,
+
+ if (!RB_EMPTY_NODE(&rq->rb_node))
+ goto end;
+- spin_lock_irq(&bfqq->bfqd->lock);
+
+ /*
+ * If next and rq belong to the same bfq_queue and next is older
+@@ -1923,7 +1922,6 @@ static void bfq_requests_merged(struct request_queue *q, struct request *rq,
+ bfq_remove_request(q, next);
+ bfqg_stats_update_io_remove(bfqq_group(bfqq), next->cmd_flags);
+
+- spin_unlock_irq(&bfqq->bfqd->lock);
+ end:
+ bfqg_stats_update_io_merged(bfqq_group(bfqq), next->cmd_flags);
+ }
+diff --git a/block/bio.c b/block/bio.c
+index 53e0f0a1ed94..9f7fa24d8b15 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -881,16 +881,16 @@ EXPORT_SYMBOL(bio_add_page);
+ */
+ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+ {
+- unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
++ unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt, idx;
+ struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
+ struct page **pages = (struct page **)bv;
+- size_t offset, diff;
++ size_t offset;
+ ssize_t size;
+
+ size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
+ if (unlikely(size <= 0))
+ return size ? size : -EFAULT;
+- nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
++ idx = nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
+
+ /*
+ * Deep magic below: We need to walk the pinned pages backwards
+@@ -903,17 +903,15 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+ bio->bi_iter.bi_size += size;
+ bio->bi_vcnt += nr_pages;
+
+- diff = (nr_pages * PAGE_SIZE - offset) - size;
+- while (nr_pages--) {
+- bv[nr_pages].bv_page = pages[nr_pages];
+- bv[nr_pages].bv_len = PAGE_SIZE;
+- bv[nr_pages].bv_offset = 0;
++ while (idx--) {
++ bv[idx].bv_page = pages[idx];
++ bv[idx].bv_len = PAGE_SIZE;
++ bv[idx].bv_offset = 0;
+ }
+
+ bv[0].bv_offset += offset;
+ bv[0].bv_len -= offset;
+- if (diff)
+- bv[bio->bi_vcnt - 1].bv_len -= diff;
++ bv[nr_pages - 1].bv_len -= nr_pages * PAGE_SIZE - offset - size;
+
+ iov_iter_advance(iter, size);
+ return 0;
+@@ -1808,6 +1806,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
+ bio_integrity_trim(split);
+
+ bio_advance(bio, split->bi_iter.bi_size);
++ bio->bi_iter.bi_done = 0;
+
+ if (bio_flagged(bio, BIO_TRACE_COMPLETION))
+ bio_set_flag(split, BIO_TRACE_COMPLETION);
+diff --git a/crypto/authenc.c b/crypto/authenc.c
+index d3d6d72fe649..4fa8d40d947b 100644
+--- a/crypto/authenc.c
++++ b/crypto/authenc.c
+@@ -108,6 +108,7 @@ static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
+ CRYPTO_TFM_RES_MASK);
+
+ out:
++ memzero_explicit(&keys, sizeof(keys));
+ return err;
+
+ badkey:
+diff --git a/crypto/authencesn.c b/crypto/authencesn.c
+index 15f91ddd7f0e..50b804747e20 100644
+--- a/crypto/authencesn.c
++++ b/crypto/authencesn.c
+@@ -90,6 +90,7 @@ static int crypto_authenc_esn_setkey(struct crypto_aead *authenc_esn, const u8 *
+ CRYPTO_TFM_RES_MASK);
+
+ out:
++ memzero_explicit(&keys, sizeof(keys));
+ return err;
+
+ badkey:
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index eb091375c873..9706613eecf9 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -70,6 +70,10 @@ ACPI_MODULE_NAME("acpi_lpss");
+ #define LPSS_SAVE_CTX BIT(4)
+ #define LPSS_NO_D3_DELAY BIT(5)
+
++/* Crystal Cove PMIC shares same ACPI ID between different platforms */
++#define BYT_CRC_HRV 2
++#define CHT_CRC_HRV 3
++
+ struct lpss_private_data;
+
+ struct lpss_device_desc {
+@@ -163,7 +167,7 @@ static void byt_pwm_setup(struct lpss_private_data *pdata)
+ if (!adev->pnp.unique_id || strcmp(adev->pnp.unique_id, "1"))
+ return;
+
+- if (!acpi_dev_present("INT33FD", NULL, -1))
++ if (!acpi_dev_present("INT33FD", NULL, BYT_CRC_HRV))
+ pwm_add_table(byt_pwm_lookup, ARRAY_SIZE(byt_pwm_lookup));
+ }
+
+@@ -875,6 +879,7 @@ static void acpi_lpss_dismiss(struct device *dev)
+ #define LPSS_GPIODEF0_DMA_LLP BIT(13)
+
+ static DEFINE_MUTEX(lpss_iosf_mutex);
++static bool lpss_iosf_d3_entered;
+
+ static void lpss_iosf_enter_d3_state(void)
+ {
+@@ -917,6 +922,9 @@ static void lpss_iosf_enter_d3_state(void)
+
+ iosf_mbi_modify(LPSS_IOSF_UNIT_LPIOEP, MBI_CR_WRITE,
+ LPSS_IOSF_GPIODEF0, value1, mask1);
++
++ lpss_iosf_d3_entered = true;
++
+ exit:
+ mutex_unlock(&lpss_iosf_mutex);
+ }
+@@ -931,6 +939,11 @@ static void lpss_iosf_exit_d3_state(void)
+
+ mutex_lock(&lpss_iosf_mutex);
+
++ if (!lpss_iosf_d3_entered)
++ goto exit;
++
++ lpss_iosf_d3_entered = false;
++
+ iosf_mbi_modify(LPSS_IOSF_UNIT_LPIOEP, MBI_CR_WRITE,
+ LPSS_IOSF_GPIODEF0, value1, mask1);
+
+@@ -940,13 +953,13 @@ static void lpss_iosf_exit_d3_state(void)
+ iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO1, MBI_CFG_WRITE,
+ LPSS_IOSF_PMCSR, value2, mask2);
+
++exit:
+ mutex_unlock(&lpss_iosf_mutex);
+ }
+
+-static int acpi_lpss_suspend(struct device *dev, bool runtime)
++static int acpi_lpss_suspend(struct device *dev, bool wakeup)
+ {
+ struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+- bool wakeup = runtime || device_may_wakeup(dev);
+ int ret;
+
+ if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
+@@ -959,14 +972,14 @@ static int acpi_lpss_suspend(struct device *dev, bool runtime)
+ * wrong status for devices being about to be powered off. See
+ * lpss_iosf_enter_d3_state() for further information.
+ */
+- if ((runtime || !pm_suspend_via_firmware()) &&
++ if (acpi_target_system_state() == ACPI_STATE_S0 &&
+ lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
+ lpss_iosf_enter_d3_state();
+
+ return ret;
+ }
+
+-static int acpi_lpss_resume(struct device *dev, bool runtime)
++static int acpi_lpss_resume(struct device *dev)
+ {
+ struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+ int ret;
+@@ -975,8 +988,7 @@ static int acpi_lpss_resume(struct device *dev, bool runtime)
+ * This call is kept first to be in symmetry with
+ * acpi_lpss_runtime_suspend() one.
+ */
+- if ((runtime || !pm_resume_via_firmware()) &&
+- lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
++ if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
+ lpss_iosf_exit_d3_state();
+
+ ret = acpi_dev_resume(dev);
+@@ -1000,12 +1012,12 @@ static int acpi_lpss_suspend_late(struct device *dev)
+ return 0;
+
+ ret = pm_generic_suspend_late(dev);
+- return ret ? ret : acpi_lpss_suspend(dev, false);
++ return ret ? ret : acpi_lpss_suspend(dev, device_may_wakeup(dev));
+ }
+
+ static int acpi_lpss_resume_early(struct device *dev)
+ {
+- int ret = acpi_lpss_resume(dev, false);
++ int ret = acpi_lpss_resume(dev);
+
+ return ret ? ret : pm_generic_resume_early(dev);
+ }
+@@ -1020,7 +1032,7 @@ static int acpi_lpss_runtime_suspend(struct device *dev)
+
+ static int acpi_lpss_runtime_resume(struct device *dev)
+ {
+- int ret = acpi_lpss_resume(dev, true);
++ int ret = acpi_lpss_resume(dev);
+
+ return ret ? ret : pm_generic_runtime_resume(dev);
+ }
+diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
+index ee840be150b5..44f35ab3347d 100644
+--- a/drivers/acpi/acpica/psloop.c
++++ b/drivers/acpi/acpica/psloop.c
+@@ -709,15 +709,20 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ } else
+ if ((walk_state->
+ parse_flags & ACPI_PARSE_MODULE_LEVEL)
++ && status != AE_CTRL_TRANSFER
+ && ACPI_FAILURE(status)) {
+ /*
+- * ACPI_PARSE_MODULE_LEVEL means that we are loading a table by
+- * executing it as a control method. However, if we encounter
+- * an error while loading the table, we need to keep trying to
+- * load the table rather than aborting the table load. Set the
+- * status to AE_OK to proceed with the table load. If we get a
+- * failure at this point, it means that the dispatcher got an
+- * error while processing Op (most likely an AML operand error.
++ * ACPI_PARSE_MODULE_LEVEL flag means that we are currently
++ * loading a table by executing it as a control method.
++ * However, if we encounter an error while loading the table,
++ * we need to keep trying to load the table rather than
++ * aborting the table load (setting the status to AE_OK
++ * continues the table load). If we get a failure at this
++ * point, it means that the dispatcher got an error while
++ * processing Op (most likely an AML operand error) or a
++ * control method was called from module level and the
++ * dispatcher returned AE_CTRL_TRANSFER. In the latter case,
++ * leave the status alone, there's nothing wrong with it.
+ */
+ status = AE_OK;
+ }
+diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c
+index 0da18bde6a16..dd946b62fedd 100644
+--- a/drivers/acpi/pci_root.c
++++ b/drivers/acpi/pci_root.c
+@@ -472,9 +472,11 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm)
+ }
+
+ control = OSC_PCI_EXPRESS_CAPABILITY_CONTROL
+- | OSC_PCI_EXPRESS_NATIVE_HP_CONTROL
+ | OSC_PCI_EXPRESS_PME_CONTROL;
+
++ if (IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
++ control |= OSC_PCI_EXPRESS_NATIVE_HP_CONTROL;
++
+ if (pci_aer_available()) {
+ if (aer_acpi_firmware_first())
+ dev_info(&device->dev,
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 513b260bcff1..f5942d09854c 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -2218,12 +2218,16 @@ static void ata_eh_link_autopsy(struct ata_link *link)
+ if (qc->err_mask & ~AC_ERR_OTHER)
+ qc->err_mask &= ~AC_ERR_OTHER;
+
+- /* SENSE_VALID trumps dev/unknown error and revalidation */
++ /*
++ * SENSE_VALID trumps dev/unknown error and revalidation. Upper
++ * layers will determine whether the command is worth retrying
++ * based on the sense data and device class/type. Otherwise,
++ * determine directly if the command is worth retrying using its
++ * error mask and flags.
++ */
+ if (qc->flags & ATA_QCFLAG_SENSE_VALID)
+ qc->err_mask &= ~(AC_ERR_DEV | AC_ERR_OTHER);
+-
+- /* determine whether the command is worth retrying */
+- if (ata_eh_worth_retry(qc))
++ else if (ata_eh_worth_retry(qc))
+ qc->flags |= ATA_QCFLAG_RETRY;
+
+ /* accumulate error info */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index b937cc1e2c07..836756a5c35c 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -276,6 +276,7 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x04ca, 0x3011), .driver_info = BTUSB_QCA_ROME },
+ { USB_DEVICE(0x04ca, 0x3015), .driver_info = BTUSB_QCA_ROME },
+ { USB_DEVICE(0x04ca, 0x3016), .driver_info = BTUSB_QCA_ROME },
++ { USB_DEVICE(0x04ca, 0x301a), .driver_info = BTUSB_QCA_ROME },
+
+ /* Broadcom BCM2035 */
+ { USB_DEVICE(0x0a5c, 0x2009), .driver_info = BTUSB_BCM92035 },
+@@ -371,6 +372,9 @@ static const struct usb_device_id blacklist_table[] = {
+ /* Additional Realtek 8723BU Bluetooth devices */
+ { USB_DEVICE(0x7392, 0xa611), .driver_info = BTUSB_REALTEK },
+
++ /* Additional Realtek 8723DE Bluetooth devices */
++ { USB_DEVICE(0x2ff8, 0xb011), .driver_info = BTUSB_REALTEK },
++
+ /* Additional Realtek 8821AE Bluetooth devices */
+ { USB_DEVICE(0x0b05, 0x17dc), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x13d3, 0x3414), .driver_info = BTUSB_REALTEK },
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 330e9b29e145..8b017e84db53 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -880,7 +880,7 @@ static int qca_set_baudrate(struct hci_dev *hdev, uint8_t baudrate)
+ */
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(msecs_to_jiffies(BAUDRATE_SETTLE_TIMEOUT_MS));
+- set_current_state(TASK_INTERRUPTIBLE);
++ set_current_state(TASK_RUNNING);
+
+ return 0;
+ }
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index cd888d4ee605..bd449ad52442 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -1895,14 +1895,22 @@ static int
+ write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
+ {
+ size_t bytes;
+- __u32 buf[16];
++ __u32 t, buf[16];
+ const char __user *p = buffer;
+
+ while (count > 0) {
++ int b, i = 0;
++
+ bytes = min(count, sizeof(buf));
+ if (copy_from_user(&buf, p, bytes))
+ return -EFAULT;
+
++ for (b = bytes ; b > 0 ; b -= sizeof(__u32), i++) {
++ if (!arch_get_random_int(&t))
++ break;
++ buf[i] ^= t;
++ }
++
+ count -= bytes;
+ p += bytes;
+
+diff --git a/drivers/clk/clk-si544.c b/drivers/clk/clk-si544.c
+index 1c96a9f6c022..1e2a3b8f9454 100644
+--- a/drivers/clk/clk-si544.c
++++ b/drivers/clk/clk-si544.c
+@@ -207,6 +207,7 @@ static int si544_calc_muldiv(struct clk_si544_muldiv *settings,
+
+ /* And the fractional bits using the remainder */
+ vco = (u64)tmp << 32;
++ vco += FXO / 2; /* Round to nearest multiple */
+ do_div(vco, FXO);
+ settings->fb_div_frac = vco;
+
+diff --git a/drivers/clk/ingenic/jz4770-cgu.c b/drivers/clk/ingenic/jz4770-cgu.c
+index c78d369b9403..992bb2e146d6 100644
+--- a/drivers/clk/ingenic/jz4770-cgu.c
++++ b/drivers/clk/ingenic/jz4770-cgu.c
+@@ -194,9 +194,10 @@ static const struct ingenic_cgu_clk_info jz4770_cgu_clocks[] = {
+ .div = { CGU_REG_CPCCR, 16, 1, 4, 22, -1, -1 },
+ },
+ [JZ4770_CLK_C1CLK] = {
+- "c1clk", CGU_CLK_DIV,
++ "c1clk", CGU_CLK_DIV | CGU_CLK_GATE,
+ .parents = { JZ4770_CLK_PLL0, },
+ .div = { CGU_REG_CPCCR, 12, 1, 4, 22, -1, -1 },
++ .gate = { CGU_REG_OPCR, 31, true }, // disable CCLK stop on idle
+ },
+ [JZ4770_CLK_PCLK] = {
+ "pclk", CGU_CLK_DIV,
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index 11d6419788c2..4d614d7615a4 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -1106,7 +1106,7 @@ static void *ocram_alloc_mem(size_t size, void **other)
+
+ static void ocram_free_mem(void *p, size_t size, void *other)
+ {
+- gen_pool_free((struct gen_pool *)other, (u32)p, size);
++ gen_pool_free((struct gen_pool *)other, (unsigned long)p, size);
+ }
+
+ static const struct edac_device_prv_data ocramecc_data = {
+diff --git a/drivers/gpio/gpio-uniphier.c b/drivers/gpio/gpio-uniphier.c
+index 761d8279abca..b2864bdfb30e 100644
+--- a/drivers/gpio/gpio-uniphier.c
++++ b/drivers/gpio/gpio-uniphier.c
+@@ -181,7 +181,11 @@ static int uniphier_gpio_to_irq(struct gpio_chip *chip, unsigned int offset)
+ fwspec.fwnode = of_node_to_fwnode(chip->parent->of_node);
+ fwspec.param_count = 2;
+ fwspec.param[0] = offset - UNIPHIER_GPIO_IRQ_OFFSET;
+- fwspec.param[1] = IRQ_TYPE_NONE;
++ /*
++ * IRQ_TYPE_NONE is rejected by the parent irq domain. Set LEVEL_HIGH
++ * temporarily. Anyway, ->irq_set_type() will override it later.
++ */
++ fwspec.param[1] = IRQ_TYPE_LEVEL_HIGH;
+
+ return irq_create_fwspec_mapping(&fwspec);
+ }
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 586d15137c03..d4411c8becf7 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -64,7 +64,8 @@ static void of_gpio_flags_quirks(struct device_node *np,
+ * Note that active low is the default.
+ */
+ if (IS_ENABLED(CONFIG_REGULATOR) &&
+- (of_device_is_compatible(np, "reg-fixed-voltage") ||
++ (of_device_is_compatible(np, "regulator-fixed") ||
++ of_device_is_compatible(np, "reg-fixed-voltage") ||
+ of_device_is_compatible(np, "regulator-gpio"))) {
+ /*
+ * The regulator GPIO handles are specified such that the
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+index bd67f4cb8e6c..3b3ee737657c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+@@ -316,7 +316,7 @@ int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
+ unsigned long end = addr + amdgpu_bo_size(bo) - 1;
+ struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+ struct amdgpu_mn *rmn;
+- struct amdgpu_mn_node *node = NULL;
++ struct amdgpu_mn_node *node = NULL, *new_node;
+ struct list_head bos;
+ struct interval_tree_node *it;
+
+@@ -324,6 +324,10 @@ int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
+ if (IS_ERR(rmn))
+ return PTR_ERR(rmn);
+
++ new_node = kmalloc(sizeof(*new_node), GFP_KERNEL);
++ if (!new_node)
++ return -ENOMEM;
++
+ INIT_LIST_HEAD(&bos);
+
+ down_write(&rmn->lock);
+@@ -337,13 +341,10 @@ int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
+ list_splice(&node->bos, &bos);
+ }
+
+- if (!node) {
+- node = kmalloc(sizeof(struct amdgpu_mn_node), GFP_KERNEL);
+- if (!node) {
+- up_write(&rmn->lock);
+- return -ENOMEM;
+- }
+- }
++ if (!node)
++ node = new_node;
++ else
++ kfree(new_node);
+
+ bo->mn = rmn;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index b52f26e7db98..d1c4beb79ee6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -689,8 +689,12 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
+ return -EINVAL;
+
+ /* A shared bo cannot be migrated to VRAM */
+- if (bo->prime_shared_count && (domain == AMDGPU_GEM_DOMAIN_VRAM))
+- return -EINVAL;
++ if (bo->prime_shared_count) {
++ if (domain & AMDGPU_GEM_DOMAIN_GTT)
++ domain = AMDGPU_GEM_DOMAIN_GTT;
++ else
++ return -EINVAL;
++ }
+
+ if (bo->pin_count) {
+ uint32_t mem_type = bo->tbo.mem.mem_type;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 79afffa00772..8dafb10b7832 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4037,7 +4037,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ }
+ spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
+
+- if (!pflip_needed) {
++ if (!pflip_needed || plane->type == DRM_PLANE_TYPE_OVERLAY) {
+ WARN_ON(!dm_new_plane_state->dc_state);
+
+ plane_states_constructed[planes_count] = dm_new_plane_state->dc_state;
+@@ -4783,7 +4783,8 @@ static int dm_update_planes_state(struct dc *dc,
+
+ /* Remove any changed/removed planes */
+ if (!enable) {
+- if (pflip_needed)
++ if (pflip_needed &&
++ plane->type != DRM_PLANE_TYPE_OVERLAY)
+ continue;
+
+ if (!old_plane_crtc)
+@@ -4830,7 +4831,8 @@ static int dm_update_planes_state(struct dc *dc,
+ if (!dm_new_crtc_state->stream)
+ continue;
+
+- if (pflip_needed)
++ if (pflip_needed &&
++ plane->type != DRM_PLANE_TYPE_OVERLAY)
+ continue;
+
+ WARN_ON(dm_new_plane_state->dc_state);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+index 4be21bf54749..a910f01838ab 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+@@ -555,6 +555,9 @@ static inline int dm_irq_state(struct amdgpu_device *adev,
+ return 0;
+ }
+
++ if (acrtc->otg_inst == -1)
++ return 0;
++
+ irq_source = dal_irq_type + acrtc->otg_inst;
+
+ st = (state == AMDGPU_IRQ_STATE_ENABLE);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index d0575999f172..09c93f6ebb10 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -279,7 +279,9 @@ dce110_set_input_transfer_func(struct pipe_ctx *pipe_ctx,
+ build_prescale_params(&prescale_params, plane_state);
+ ipp->funcs->ipp_program_prescale(ipp, &prescale_params);
+
+- if (plane_state->gamma_correction && dce_use_lut(plane_state->format))
++ if (plane_state->gamma_correction &&
++ !plane_state->gamma_correction->is_identity &&
++ dce_use_lut(plane_state->format))
+ ipp->funcs->ipp_program_input_lut(ipp, plane_state->gamma_correction);
+
+ if (tf == NULL) {
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index 18b5b2ff47fe..df2dce8b8e39 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -3715,14 +3715,17 @@ static int smu7_trim_dpm_states(struct pp_hwmgr *hwmgr,
+ static int smu7_generate_dpm_level_enable_mask(
+ struct pp_hwmgr *hwmgr, const void *input)
+ {
+- int result;
++ int result = 0;
+ const struct phm_set_power_state_input *states =
+ (const struct phm_set_power_state_input *)input;
+ struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
+ const struct smu7_power_state *smu7_ps =
+ cast_const_phw_smu7_power_state(states->pnew_state);
+
+- result = smu7_trim_dpm_states(hwmgr, smu7_ps);
++ /*skip the trim if od is enabled*/
++ if (!hwmgr->od_enabled)
++ result = smu7_trim_dpm_states(hwmgr, smu7_ps);
++
+ if (result)
+ return result;
+
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index c825c76edc1d..d09ee6864ac7 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -1429,7 +1429,9 @@ drm_atomic_set_crtc_for_plane(struct drm_plane_state *plane_state,
+ {
+ struct drm_plane *plane = plane_state->plane;
+ struct drm_crtc_state *crtc_state;
+-
++ /* Nothing to do for same crtc*/
++ if (plane_state->crtc == crtc)
++ return 0;
+ if (plane_state->crtc) {
+ crtc_state = drm_atomic_get_crtc_state(plane_state->state,
+ plane_state->crtc);
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index c35654591c12..3448e8e44c35 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -2881,31 +2881,9 @@ commit:
+ return 0;
+ }
+
+-/**
+- * drm_atomic_helper_disable_all - disable all currently active outputs
+- * @dev: DRM device
+- * @ctx: lock acquisition context
+- *
+- * Loops through all connectors, finding those that aren't turned off and then
+- * turns them off by setting their DPMS mode to OFF and deactivating the CRTC
+- * that they are connected to.
+- *
+- * This is used for example in suspend/resume to disable all currently active
+- * functions when suspending. If you just want to shut down everything at e.g.
+- * driver unload, look at drm_atomic_helper_shutdown().
+- *
+- * Note that if callers haven't already acquired all modeset locks this might
+- * return -EDEADLK, which must be handled by calling drm_modeset_backoff().
+- *
+- * Returns:
+- * 0 on success or a negative error code on failure.
+- *
+- * See also:
+- * drm_atomic_helper_suspend(), drm_atomic_helper_resume() and
+- * drm_atomic_helper_shutdown().
+- */
+-int drm_atomic_helper_disable_all(struct drm_device *dev,
+- struct drm_modeset_acquire_ctx *ctx)
++static int __drm_atomic_helper_disable_all(struct drm_device *dev,
++ struct drm_modeset_acquire_ctx *ctx,
++ bool clean_old_fbs)
+ {
+ struct drm_atomic_state *state;
+ struct drm_connector_state *conn_state;
+@@ -2957,8 +2935,11 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
+ goto free;
+
+ drm_atomic_set_fb_for_plane(plane_state, NULL);
+- plane_mask |= BIT(drm_plane_index(plane));
+- plane->old_fb = plane->fb;
++
++ if (clean_old_fbs) {
++ plane->old_fb = plane->fb;
++ plane_mask |= BIT(drm_plane_index(plane));
++ }
+ }
+
+ ret = drm_atomic_commit(state);
+@@ -2969,6 +2950,34 @@ free:
+ return ret;
+ }
+
++/**
++ * drm_atomic_helper_disable_all - disable all currently active outputs
++ * @dev: DRM device
++ * @ctx: lock acquisition context
++ *
++ * Loops through all connectors, finding those that aren't turned off and then
++ * turns them off by setting their DPMS mode to OFF and deactivating the CRTC
++ * that they are connected to.
++ *
++ * This is used for example in suspend/resume to disable all currently active
++ * functions when suspending. If you just want to shut down everything at e.g.
++ * driver unload, look at drm_atomic_helper_shutdown().
++ *
++ * Note that if callers haven't already acquired all modeset locks this might
++ * return -EDEADLK, which must be handled by calling drm_modeset_backoff().
++ *
++ * Returns:
++ * 0 on success or a negative error code on failure.
++ *
++ * See also:
++ * drm_atomic_helper_suspend(), drm_atomic_helper_resume() and
++ * drm_atomic_helper_shutdown().
++ */
++int drm_atomic_helper_disable_all(struct drm_device *dev,
++ struct drm_modeset_acquire_ctx *ctx)
++{
++ return __drm_atomic_helper_disable_all(dev, ctx, false);
++}
+ EXPORT_SYMBOL(drm_atomic_helper_disable_all);
+
+ /**
+@@ -2991,7 +3000,7 @@ void drm_atomic_helper_shutdown(struct drm_device *dev)
+ while (1) {
+ ret = drm_modeset_lock_all_ctx(dev, &ctx);
+ if (!ret)
+- ret = drm_atomic_helper_disable_all(dev, &ctx);
++ ret = __drm_atomic_helper_disable_all(dev, &ctx, true);
+
+ if (ret != -EDEADLK)
+ break;
+@@ -3095,16 +3104,11 @@ int drm_atomic_helper_commit_duplicated_state(struct drm_atomic_state *state,
+ struct drm_connector_state *new_conn_state;
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *new_crtc_state;
+- unsigned plane_mask = 0;
+- struct drm_device *dev = state->dev;
+- int ret;
+
+ state->acquire_ctx = ctx;
+
+- for_each_new_plane_in_state(state, plane, new_plane_state, i) {
+- plane_mask |= BIT(drm_plane_index(plane));
++ for_each_new_plane_in_state(state, plane, new_plane_state, i)
+ state->planes[i].old_state = plane->state;
+- }
+
+ for_each_new_crtc_in_state(state, crtc, new_crtc_state, i)
+ state->crtcs[i].old_state = crtc->state;
+@@ -3112,11 +3116,7 @@ int drm_atomic_helper_commit_duplicated_state(struct drm_atomic_state *state,
+ for_each_new_connector_in_state(state, connector, new_conn_state, i)
+ state->connectors[i].old_state = connector->state;
+
+- ret = drm_atomic_commit(state);
+- if (plane_mask)
+- drm_atomic_clean_old_fb(dev, plane_mask, ret);
+-
+- return ret;
++ return drm_atomic_commit(state);
+ }
+ EXPORT_SYMBOL(drm_atomic_helper_commit_duplicated_state);
+
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 6fac4129e6a2..658830620ca3 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -2941,12 +2941,14 @@ static void drm_dp_mst_dump_mstb(struct seq_file *m,
+ }
+ }
+
++#define DP_PAYLOAD_TABLE_SIZE 64
++
+ static bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr,
+ char *buf)
+ {
+ int i;
+
+- for (i = 0; i < 64; i += 16) {
++ for (i = 0; i < DP_PAYLOAD_TABLE_SIZE; i += 16) {
+ if (drm_dp_dpcd_read(mgr->aux,
+ DP_PAYLOAD_TABLE_UPDATE_STATUS + i,
+ &buf[i], 16) != 16)
+@@ -3015,7 +3017,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
+
+ mutex_lock(&mgr->lock);
+ if (mgr->mst_primary) {
+- u8 buf[64];
++ u8 buf[DP_PAYLOAD_TABLE_SIZE];
+ int ret;
+
+ ret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, buf, DP_RECEIVER_CAP_SIZE);
+@@ -3033,8 +3035,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
+ seq_printf(m, " revision: hw: %x.%x sw: %x.%x\n",
+ buf[0x9] >> 4, buf[0x9] & 0xf, buf[0xa], buf[0xb]);
+ if (dump_dp_payload_table(mgr, buf))
+- seq_printf(m, "payload table: %*ph\n", 63, buf);
+-
++ seq_printf(m, "payload table: %*ph\n", DP_PAYLOAD_TABLE_SIZE, buf);
+ }
+
+ mutex_unlock(&mgr->lock);
+diff --git a/drivers/gpu/drm/gma500/psb_intel_drv.h b/drivers/gpu/drm/gma500/psb_intel_drv.h
+index e8e4ea14b12b..e05e5399af2d 100644
+--- a/drivers/gpu/drm/gma500/psb_intel_drv.h
++++ b/drivers/gpu/drm/gma500/psb_intel_drv.h
+@@ -255,7 +255,7 @@ extern int intelfb_remove(struct drm_device *dev,
+ extern bool psb_intel_lvds_mode_fixup(struct drm_encoder *encoder,
+ const struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode);
+-extern int psb_intel_lvds_mode_valid(struct drm_connector *connector,
++extern enum drm_mode_status psb_intel_lvds_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode);
+ extern int psb_intel_lvds_set_property(struct drm_connector *connector,
+ struct drm_property *property,
+diff --git a/drivers/gpu/drm/gma500/psb_intel_lvds.c b/drivers/gpu/drm/gma500/psb_intel_lvds.c
+index be3eefec5152..8baf6325c6e4 100644
+--- a/drivers/gpu/drm/gma500/psb_intel_lvds.c
++++ b/drivers/gpu/drm/gma500/psb_intel_lvds.c
+@@ -343,7 +343,7 @@ static void psb_intel_lvds_restore(struct drm_connector *connector)
+ }
+ }
+
+-int psb_intel_lvds_mode_valid(struct drm_connector *connector,
++enum drm_mode_status psb_intel_lvds_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+ {
+ struct drm_psb_private *dev_priv = connector->dev->dev_private;
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index ce18b6cf6e68..e3ce2f448020 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -804,6 +804,7 @@ enum intel_sbi_destination {
+ #define QUIRK_BACKLIGHT_PRESENT (1<<3)
+ #define QUIRK_PIN_SWIZZLED_PAGES (1<<5)
+ #define QUIRK_INCREASE_T12_DELAY (1<<6)
++#define QUIRK_INCREASE_DDI_DISABLED_TIME (1<<7)
+
+ struct intel_fbdev;
+ struct intel_fbc_work;
+diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
+index 1d14ebc7480d..b752f6221731 100644
+--- a/drivers/gpu/drm/i915/intel_ddi.c
++++ b/drivers/gpu/drm/i915/intel_ddi.c
+@@ -1605,15 +1605,24 @@ void intel_ddi_enable_transcoder_func(const struct intel_crtc_state *crtc_state)
+ I915_WRITE(TRANS_DDI_FUNC_CTL(cpu_transcoder), temp);
+ }
+
+-void intel_ddi_disable_transcoder_func(struct drm_i915_private *dev_priv,
+- enum transcoder cpu_transcoder)
++void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state)
+ {
++ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
++ struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
++ enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
+ i915_reg_t reg = TRANS_DDI_FUNC_CTL(cpu_transcoder);
+ uint32_t val = I915_READ(reg);
+
+ val &= ~(TRANS_DDI_FUNC_ENABLE | TRANS_DDI_PORT_MASK | TRANS_DDI_DP_VC_PAYLOAD_ALLOC);
+ val |= TRANS_DDI_PORT_NONE;
+ I915_WRITE(reg, val);
++
++ if (dev_priv->quirks & QUIRK_INCREASE_DDI_DISABLED_TIME &&
++ intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) {
++ DRM_DEBUG_KMS("Quirk Increase DDI disabled time\n");
++ /* Quirk time at 100ms for reliable operation */
++ msleep(100);
++ }
+ }
+
+ int intel_ddi_toggle_hdcp_signalling(struct intel_encoder *intel_encoder,
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 84011e08adc3..f943c1049c0b 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -5685,7 +5685,7 @@ static void haswell_crtc_disable(struct intel_crtc_state *old_crtc_state,
+ intel_ddi_set_vc_payload_alloc(intel_crtc->config, false);
+
+ if (!transcoder_is_dsi(cpu_transcoder))
+- intel_ddi_disable_transcoder_func(dev_priv, cpu_transcoder);
++ intel_ddi_disable_transcoder_func(old_crtc_state);
+
+ if (INTEL_GEN(dev_priv) >= 9)
+ skylake_scaler_disable(intel_crtc);
+@@ -14388,6 +14388,18 @@ static void quirk_increase_t12_delay(struct drm_device *dev)
+ DRM_INFO("Applying T12 delay quirk\n");
+ }
+
++/*
++ * GeminiLake NUC HDMI outputs require additional off time
++ * this allows the onboard retimer to correctly sync to signal
++ */
++static void quirk_increase_ddi_disabled_time(struct drm_device *dev)
++{
++ struct drm_i915_private *dev_priv = to_i915(dev);
++
++ dev_priv->quirks |= QUIRK_INCREASE_DDI_DISABLED_TIME;
++ DRM_INFO("Applying Increase DDI Disabled quirk\n");
++}
++
+ struct intel_quirk {
+ int device;
+ int subsystem_vendor;
+@@ -14474,6 +14486,13 @@ static struct intel_quirk intel_quirks[] = {
+
+ /* Toshiba Satellite P50-C-18C */
+ { 0x191B, 0x1179, 0xF840, quirk_increase_t12_delay },
++
++ /* GeminiLake NUC */
++ { 0x3185, 0x8086, 0x2072, quirk_increase_ddi_disabled_time },
++ { 0x3184, 0x8086, 0x2072, quirk_increase_ddi_disabled_time },
++ /* ASRock ITX*/
++ { 0x3185, 0x1849, 0x2212, quirk_increase_ddi_disabled_time },
++ { 0x3184, 0x1849, 0x2212, quirk_increase_ddi_disabled_time },
+ };
+
+ static void intel_init_quirks(struct drm_device *dev)
+diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
+index a80fbad9be0f..04d2774fe0ac 100644
+--- a/drivers/gpu/drm/i915/intel_drv.h
++++ b/drivers/gpu/drm/i915/intel_drv.h
+@@ -1368,8 +1368,7 @@ void hsw_fdi_link_train(struct intel_crtc *crtc,
+ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port);
+ bool intel_ddi_get_hw_state(struct intel_encoder *encoder, enum pipe *pipe);
+ void intel_ddi_enable_transcoder_func(const struct intel_crtc_state *crtc_state);
+-void intel_ddi_disable_transcoder_func(struct drm_i915_private *dev_priv,
+- enum transcoder cpu_transcoder);
++void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state);
+ void intel_ddi_enable_pipe_clock(const struct intel_crtc_state *crtc_state);
+ void intel_ddi_disable_pipe_clock(const struct intel_crtc_state *crtc_state);
+ struct intel_encoder *
+diff --git a/drivers/gpu/drm/nouveau/nouveau_dma.c b/drivers/gpu/drm/nouveau/nouveau_dma.c
+index 10e84f6ca2b7..e0664d28802b 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_dma.c
++++ b/drivers/gpu/drm/nouveau/nouveau_dma.c
+@@ -80,18 +80,10 @@ READ_GET(struct nouveau_channel *chan, uint64_t *prev_get, int *timeout)
+ }
+
+ void
+-nv50_dma_push(struct nouveau_channel *chan, struct nouveau_bo *bo,
+- int delta, int length)
++nv50_dma_push(struct nouveau_channel *chan, u64 offset, int length)
+ {
+- struct nouveau_cli *cli = (void *)chan->user.client;
+ struct nouveau_bo *pb = chan->push.buffer;
+- struct nouveau_vma *vma;
+ int ip = (chan->dma.ib_put * 2) + chan->dma.ib_base;
+- u64 offset;
+-
+- vma = nouveau_vma_find(bo, &cli->vmm);
+- BUG_ON(!vma);
+- offset = vma->addr + delta;
+
+ BUG_ON(chan->dma.ib_free < 1);
+
+diff --git a/drivers/gpu/drm/nouveau/nouveau_dma.h b/drivers/gpu/drm/nouveau/nouveau_dma.h
+index 74e10b14a7da..89c87111bbbd 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_dma.h
++++ b/drivers/gpu/drm/nouveau/nouveau_dma.h
+@@ -31,8 +31,7 @@
+ #include "nouveau_chan.h"
+
+ int nouveau_dma_wait(struct nouveau_channel *, int slots, int size);
+-void nv50_dma_push(struct nouveau_channel *, struct nouveau_bo *,
+- int delta, int length);
++void nv50_dma_push(struct nouveau_channel *, u64 addr, int length);
+
+ /*
+ * There's a hw race condition where you can't jump to your PUT offset,
+@@ -151,7 +150,7 @@ FIRE_RING(struct nouveau_channel *chan)
+ chan->accel_done = true;
+
+ if (chan->dma.ib_max) {
+- nv50_dma_push(chan, chan->push.buffer, chan->dma.put << 2,
++ nv50_dma_push(chan, chan->push.addr + (chan->dma.put << 2),
+ (chan->dma.cur - chan->dma.put) << 2);
+ } else {
+ WRITE_PUT(chan->dma.cur);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index 591d9c29ede7..f8e67ab5c598 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -116,24 +116,22 @@ nouveau_name(struct drm_device *dev)
+ }
+
+ static inline bool
+-nouveau_cli_work_ready(struct dma_fence *fence, bool wait)
++nouveau_cli_work_ready(struct dma_fence *fence)
+ {
+- if (!dma_fence_is_signaled(fence)) {
+- if (!wait)
+- return false;
+- WARN_ON(dma_fence_wait_timeout(fence, false, 2 * HZ) <= 0);
+- }
++ if (!dma_fence_is_signaled(fence))
++ return false;
+ dma_fence_put(fence);
+ return true;
+ }
+
+ static void
+-nouveau_cli_work_flush(struct nouveau_cli *cli, bool wait)
++nouveau_cli_work(struct work_struct *w)
+ {
++ struct nouveau_cli *cli = container_of(w, typeof(*cli), work);
+ struct nouveau_cli_work *work, *wtmp;
+ mutex_lock(&cli->lock);
+ list_for_each_entry_safe(work, wtmp, &cli->worker, head) {
+- if (!work->fence || nouveau_cli_work_ready(work->fence, wait)) {
++ if (!work->fence || nouveau_cli_work_ready(work->fence)) {
+ list_del(&work->head);
+ work->func(work);
+ }
+@@ -161,17 +159,17 @@ nouveau_cli_work_queue(struct nouveau_cli *cli, struct dma_fence *fence,
+ mutex_unlock(&cli->lock);
+ }
+
+-static void
+-nouveau_cli_work(struct work_struct *w)
+-{
+- struct nouveau_cli *cli = container_of(w, typeof(*cli), work);
+- nouveau_cli_work_flush(cli, false);
+-}
+-
+ static void
+ nouveau_cli_fini(struct nouveau_cli *cli)
+ {
+- nouveau_cli_work_flush(cli, true);
++ /* All our channels are dead now, which means all the fences they
++ * own are signalled, and all callback functions have been called.
++ *
++ * So, after flushing the workqueue, there should be nothing left.
++ */
++ flush_work(&cli->work);
++ WARN_ON(!list_empty(&cli->worker));
++
+ usif_client_fini(cli);
+ nouveau_vmm_fini(&cli->vmm);
+ nvif_mmu_fini(&cli->mmu);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index e72a7e37eb0a..707e02c80f18 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -432,7 +432,20 @@ retry:
+ }
+ }
+
+- b->user_priv = (uint64_t)(unsigned long)nvbo;
++ if (cli->vmm.vmm.object.oclass >= NVIF_CLASS_VMM_NV50) {
++ struct nouveau_vmm *vmm = &cli->vmm;
++ struct nouveau_vma *vma = nouveau_vma_find(nvbo, vmm);
++ if (!vma) {
++ NV_PRINTK(err, cli, "vma not found!\n");
++ ret = -EINVAL;
++ break;
++ }
++
++ b->user_priv = (uint64_t)(unsigned long)vma;
++ } else {
++ b->user_priv = (uint64_t)(unsigned long)nvbo;
++ }
++
+ nvbo->reserved_by = file_priv;
+ nvbo->pbbo_index = i;
+ if ((b->valid_domains & NOUVEAU_GEM_DOMAIN_VRAM) &&
+@@ -763,10 +776,10 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data,
+ }
+
+ for (i = 0; i < req->nr_push; i++) {
+- struct nouveau_bo *nvbo = (void *)(unsigned long)
++ struct nouveau_vma *vma = (void *)(unsigned long)
+ bo[push[i].bo_index].user_priv;
+
+- nv50_dma_push(chan, nvbo, push[i].offset,
++ nv50_dma_push(chan, vma->addr + push[i].offset,
+ push[i].length);
+ }
+ } else
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
+index 84bd703dd897..8305cb67cbfc 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
+@@ -155,10 +155,10 @@ gk104_fifo_runlist_commit(struct gk104_fifo *fifo, int runl)
+ (target << 28));
+ nvkm_wr32(device, 0x002274, (runl << 20) | nr);
+
+- if (wait_event_timeout(fifo->runlist[runl].wait,
+- !(nvkm_rd32(device, 0x002284 + (runl * 0x08))
+- & 0x00100000),
+- msecs_to_jiffies(2000)) == 0)
++ if (nvkm_msec(device, 2000,
++ if (!(nvkm_rd32(device, 0x002284 + (runl * 0x08)) & 0x00100000))
++ break;
++ ) < 0)
+ nvkm_error(subdev, "runlist %d update timeout\n", runl);
+ unlock:
+ mutex_unlock(&subdev->mutex);
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index df9469a8fdb1..2aea2bdff99b 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -852,7 +852,7 @@ static int radeon_lvds_get_modes(struct drm_connector *connector)
+ return ret;
+ }
+
+-static int radeon_lvds_mode_valid(struct drm_connector *connector,
++static enum drm_mode_status radeon_lvds_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+ {
+ struct drm_encoder *encoder = radeon_best_single_encoder(connector);
+@@ -1012,7 +1012,7 @@ static int radeon_vga_get_modes(struct drm_connector *connector)
+ return ret;
+ }
+
+-static int radeon_vga_mode_valid(struct drm_connector *connector,
++static enum drm_mode_status radeon_vga_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+ {
+ struct drm_device *dev = connector->dev;
+@@ -1156,7 +1156,7 @@ static int radeon_tv_get_modes(struct drm_connector *connector)
+ return 1;
+ }
+
+-static int radeon_tv_mode_valid(struct drm_connector *connector,
++static enum drm_mode_status radeon_tv_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+ {
+ if ((mode->hdisplay > 1024) || (mode->vdisplay > 768))
+@@ -1498,7 +1498,7 @@ static void radeon_dvi_force(struct drm_connector *connector)
+ radeon_connector->use_digital = true;
+ }
+
+-static int radeon_dvi_mode_valid(struct drm_connector *connector,
++static enum drm_mode_status radeon_dvi_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+ {
+ struct drm_device *dev = connector->dev;
+@@ -1800,7 +1800,7 @@ out:
+ return ret;
+ }
+
+-static int radeon_dp_mode_valid(struct drm_connector *connector,
++static enum drm_mode_status radeon_dp_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+ {
+ struct drm_device *dev = connector->dev;
+diff --git a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
+index 3e8bf79bea58..0259cfe894d6 100644
+--- a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
++++ b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
+@@ -358,6 +358,8 @@ static void rockchip_dp_unbind(struct device *dev, struct device *master,
+ analogix_dp_unbind(dp->adp);
+ rockchip_drm_psr_unregister(&dp->encoder);
+ dp->encoder.funcs->destroy(&dp->encoder);
++
++ dp->adp = ERR_PTR(-ENODEV);
+ }
+
+ static const struct component_ops rockchip_dp_component_ops = {
+@@ -381,6 +383,7 @@ static int rockchip_dp_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ dp->dev = dev;
++ dp->adp = ERR_PTR(-ENODEV);
+ dp->plat_data.panel = panel;
+
+ ret = rockchip_dp_of_probe(dp);
+@@ -404,6 +407,9 @@ static int rockchip_dp_suspend(struct device *dev)
+ {
+ struct rockchip_dp_device *dp = dev_get_drvdata(dev);
+
++ if (IS_ERR(dp->adp))
++ return 0;
++
+ return analogix_dp_suspend(dp->adp);
+ }
+
+@@ -411,6 +417,9 @@ static int rockchip_dp_resume(struct device *dev)
+ {
+ struct rockchip_dp_device *dp = dev_get_drvdata(dev);
+
++ if (IS_ERR(dp->adp))
++ return 0;
++
+ return analogix_dp_resume(dp->adp);
+ }
+ #endif
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 1a3277e483d5..16e80308c6db 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -392,9 +392,6 @@ static void ltdc_crtc_update_clut(struct drm_crtc *crtc)
+ u32 val;
+ int i;
+
+- if (!crtc || !crtc->state)
+- return;
+-
+ if (!crtc->state->color_mgmt_changed || !crtc->state->gamma_lut)
+ return;
+
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index 03db71173f5d..f1d5f76e9c33 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -223,10 +223,14 @@ static int host1x_probe(struct platform_device *pdev)
+ struct iommu_domain_geometry *geometry;
+ unsigned long order;
+
++ err = iova_cache_get();
++ if (err < 0)
++ goto put_group;
++
+ host->domain = iommu_domain_alloc(&platform_bus_type);
+ if (!host->domain) {
+ err = -ENOMEM;
+- goto put_group;
++ goto put_cache;
+ }
+
+ err = iommu_attach_group(host->domain, host->group);
+@@ -234,6 +238,7 @@ static int host1x_probe(struct platform_device *pdev)
+ if (err == -ENODEV) {
+ iommu_domain_free(host->domain);
+ host->domain = NULL;
++ iova_cache_put();
+ iommu_group_put(host->group);
+ host->group = NULL;
+ goto skip_iommu;
+@@ -308,6 +313,9 @@ fail_detach_device:
+ fail_free_domain:
+ if (host->domain)
+ iommu_domain_free(host->domain);
++put_cache:
++ if (host->group)
++ iova_cache_put();
+ put_group:
+ iommu_group_put(host->group);
+
+@@ -328,6 +336,7 @@ static int host1x_remove(struct platform_device *pdev)
+ put_iova_domain(&host->iova);
+ iommu_detach_group(host->domain, host->group);
+ iommu_domain_free(host->domain);
++ iova_cache_put();
+ iommu_group_put(host->group);
+ }
+
+diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c
+index febb21ee190e..584b10d3fc3d 100644
+--- a/drivers/hid/hid-plantronics.c
++++ b/drivers/hid/hid-plantronics.c
+@@ -2,7 +2,7 @@
+ * Plantronics USB HID Driver
+ *
+ * Copyright (c) 2014 JD Cole <jd.cole@plantronics.com>
+- * Copyright (c) 2015 Terry Junge <terry.junge@plantronics.com>
++ * Copyright (c) 2015-2018 Terry Junge <terry.junge@plantronics.com>
+ */
+
+ /*
+@@ -48,6 +48,10 @@ static int plantronics_input_mapping(struct hid_device *hdev,
+ unsigned short mapped_key;
+ unsigned long plt_type = (unsigned long)hid_get_drvdata(hdev);
+
++ /* special case for PTT products */
++ if (field->application == HID_GD_JOYSTICK)
++ goto defaulted;
++
+ /* handle volume up/down mapping */
+ /* non-standard types or multi-HID interfaces - plt_type is PID */
+ if (!(plt_type & HID_USAGE_PAGE)) {
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index a92377285034..8cc2b71c680b 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -1054,6 +1054,14 @@ static int i2c_hid_probe(struct i2c_client *client,
+ pm_runtime_enable(&client->dev);
+ device_enable_async_suspend(&client->dev);
+
++ /* Make sure there is something at this address */
++ ret = i2c_smbus_read_byte(client);
++ if (ret < 0) {
++ dev_dbg(&client->dev, "nothing at this address: %d\n", ret);
++ ret = -ENXIO;
++ goto err_pm;
++ }
++
+ ret = i2c_hid_fetch_hid_descriptor(ihid);
+ if (ret < 0)
+ goto err_pm;
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index c6915b835396..21d4efa74de2 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -32,6 +32,7 @@
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
++#include <linux/reset.h>
+ #include <linux/slab.h>
+
+ /* register offsets */
+@@ -111,8 +112,9 @@
+ #define ID_ARBLOST (1 << 3)
+ #define ID_NACK (1 << 4)
+ /* persistent flags */
++#define ID_P_NO_RXDMA (1 << 30) /* HW forbids RXDMA sometimes */
+ #define ID_P_PM_BLOCKED (1 << 31)
+-#define ID_P_MASK ID_P_PM_BLOCKED
++#define ID_P_MASK (ID_P_PM_BLOCKED | ID_P_NO_RXDMA)
+
+ enum rcar_i2c_type {
+ I2C_RCAR_GEN1,
+@@ -141,6 +143,8 @@ struct rcar_i2c_priv {
+ struct dma_chan *dma_rx;
+ struct scatterlist sg;
+ enum dma_data_direction dma_direction;
++
++ struct reset_control *rstc;
+ };
+
+ #define rcar_i2c_priv_to_dev(p) ((p)->adap.dev.parent)
+@@ -370,6 +374,11 @@ static void rcar_i2c_dma_unmap(struct rcar_i2c_priv *priv)
+ dma_unmap_single(chan->device->dev, sg_dma_address(&priv->sg),
+ sg_dma_len(&priv->sg), priv->dma_direction);
+
++ /* Gen3 can only do one RXDMA per transfer and we just completed it */
++ if (priv->devtype == I2C_RCAR_GEN3 &&
++ priv->dma_direction == DMA_FROM_DEVICE)
++ priv->flags |= ID_P_NO_RXDMA;
++
+ priv->dma_direction = DMA_NONE;
+ }
+
+@@ -407,8 +416,9 @@ static void rcar_i2c_dma(struct rcar_i2c_priv *priv)
+ unsigned char *buf;
+ int len;
+
+- /* Do not use DMA if it's not available or for messages < 8 bytes */
+- if (IS_ERR(chan) || msg->len < 8 || !(msg->flags & I2C_M_DMA_SAFE))
++ /* Do various checks to see if DMA is feasible at all */
++ if (IS_ERR(chan) || msg->len < 8 || !(msg->flags & I2C_M_DMA_SAFE) ||
++ (read && priv->flags & ID_P_NO_RXDMA))
+ return;
+
+ if (read) {
+@@ -737,6 +747,25 @@ static void rcar_i2c_release_dma(struct rcar_i2c_priv *priv)
+ }
+ }
+
++/* I2C is a special case, we need to poll the status of a reset */
++static int rcar_i2c_do_reset(struct rcar_i2c_priv *priv)
++{
++ int i, ret;
++
++ ret = reset_control_reset(priv->rstc);
++ if (ret)
++ return ret;
++
++ for (i = 0; i < LOOP_TIMEOUT; i++) {
++ ret = reset_control_status(priv->rstc);
++ if (ret == 0)
++ return 0;
++ udelay(1);
++ }
++
++ return -ETIMEDOUT;
++}
++
+ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
+ struct i2c_msg *msgs,
+ int num)
+@@ -748,6 +777,16 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
+
+ pm_runtime_get_sync(dev);
+
++ /* Gen3 needs a reset before allowing RXDMA once */
++ if (priv->devtype == I2C_RCAR_GEN3) {
++ priv->flags |= ID_P_NO_RXDMA;
++ if (!IS_ERR(priv->rstc)) {
++ ret = rcar_i2c_do_reset(priv);
++ if (ret == 0)
++ priv->flags &= ~ID_P_NO_RXDMA;
++ }
++ }
++
+ rcar_i2c_init(priv);
+
+ ret = rcar_i2c_bus_barrier(priv);
+@@ -918,6 +957,15 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ if (ret < 0)
+ goto out_pm_put;
+
++ if (priv->devtype == I2C_RCAR_GEN3) {
++ priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
++ if (!IS_ERR(priv->rstc)) {
++ ret = reset_control_status(priv->rstc);
++ if (ret < 0)
++ priv->rstc = ERR_PTR(-ENOTSUPP);
++ }
++ }
++
+ /* Stay always active when multi-master to keep arbitration working */
+ if (of_property_read_bool(dev->of_node, "multi-master"))
+ priv->flags |= ID_P_PM_BLOCKED;
+diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
+index b28452a55a08..55a224e8e798 100644
+--- a/drivers/infiniband/core/mad.c
++++ b/drivers/infiniband/core/mad.c
+@@ -1557,7 +1557,8 @@ static int add_oui_reg_req(struct ib_mad_reg_req *mad_reg_req,
+ mad_reg_req->oui, 3)) {
+ method = &(*vendor_table)->vendor_class[
+ vclass]->method_table[i];
+- BUG_ON(!*method);
++ if (!*method)
++ goto error3;
+ goto check_in_use;
+ }
+ }
+@@ -1567,10 +1568,12 @@ static int add_oui_reg_req(struct ib_mad_reg_req *mad_reg_req,
+ vclass]->oui[i])) {
+ method = &(*vendor_table)->vendor_class[
+ vclass]->method_table[i];
+- BUG_ON(*method);
+ /* Allocate method table for this OUI */
+- if ((ret = allocate_method_table(method)))
+- goto error3;
++ if (!*method) {
++ ret = allocate_method_table(method);
++ if (ret)
++ goto error3;
++ }
+ memcpy((*vendor_table)->vendor_class[vclass]->oui[i],
+ mad_reg_req->oui, 3);
+ goto check_in_use;
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index eab43b17e9cf..ec8fb289621f 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -235,7 +235,7 @@ static struct ucma_multicast* ucma_alloc_multicast(struct ucma_context *ctx)
+ return NULL;
+
+ mutex_lock(&mut);
+- mc->id = idr_alloc(&multicast_idr, mc, 0, 0, GFP_KERNEL);
++ mc->id = idr_alloc(&multicast_idr, NULL, 0, 0, GFP_KERNEL);
+ mutex_unlock(&mut);
+ if (mc->id < 0)
+ goto error;
+@@ -1421,6 +1421,10 @@ static ssize_t ucma_process_join(struct ucma_file *file,
+ goto err3;
+ }
+
++ mutex_lock(&mut);
++ idr_replace(&multicast_idr, mc, mc->id);
++ mutex_unlock(&mut);
++
+ mutex_unlock(&file->mut);
+ ucma_put_ctx(ctx);
+ return 0;
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 21a887c9523b..7a300e3eb0c2 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -3478,6 +3478,11 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file,
+ goto err_uobj;
+ }
+
++ if (qp->qp_type != IB_QPT_UD && qp->qp_type != IB_QPT_RAW_PACKET) {
++ err = -EINVAL;
++ goto err_put;
++ }
++
+ flow_attr = kzalloc(sizeof(*flow_attr) + cmd.flow_attr.num_of_specs *
+ sizeof(union ib_flow_spec), GFP_KERNEL);
+ if (!flow_attr) {
+diff --git a/drivers/infiniband/sw/rdmavt/Kconfig b/drivers/infiniband/sw/rdmavt/Kconfig
+index 2b5513da7e83..98e798007f75 100644
+--- a/drivers/infiniband/sw/rdmavt/Kconfig
++++ b/drivers/infiniband/sw/rdmavt/Kconfig
+@@ -1,6 +1,6 @@
+ config INFINIBAND_RDMAVT
+ tristate "RDMA verbs transport library"
+- depends on 64BIT
++ depends on 64BIT && ARCH_DMA_ADDR_T_64BIT
+ depends on PCI
+ select DMA_VIRT_OPS
+ ---help---
+diff --git a/drivers/infiniband/sw/rxe/Kconfig b/drivers/infiniband/sw/rxe/Kconfig
+index bad4a576d7cf..67ae960ab523 100644
+--- a/drivers/infiniband/sw/rxe/Kconfig
++++ b/drivers/infiniband/sw/rxe/Kconfig
+@@ -1,6 +1,7 @@
+ config RDMA_RXE
+ tristate "Software RDMA over Ethernet (RoCE) driver"
+ depends on INET && PCI && INFINIBAND
++ depends on !64BIT || ARCH_DMA_ADDR_T_64BIT
+ select NET_UDP_TUNNEL
+ select CRYPTO_CRC32
+ select DMA_VIRT_OPS
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 37f954b704a6..f8f3aac944ea 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1264,6 +1264,8 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ { "ELAN0611", 0 },
+ { "ELAN0612", 0 },
+ { "ELAN0618", 0 },
++ { "ELAN061D", 0 },
++ { "ELAN0622", 0 },
+ { "ELAN1000", 0 },
+ { }
+ };
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index b353d494ad40..136f6e7bf797 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -527,6 +527,13 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "N24_25BU"),
+ },
+ },
++ {
++ /* Lenovo LaVie Z */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo LaVie Z"),
++ },
++ },
+ { }
+ };
+
+diff --git a/drivers/irqchip/irq-ls-scfg-msi.c b/drivers/irqchip/irq-ls-scfg-msi.c
+index 57e3d900f19e..1ec3bfe56693 100644
+--- a/drivers/irqchip/irq-ls-scfg-msi.c
++++ b/drivers/irqchip/irq-ls-scfg-msi.c
+@@ -21,6 +21,7 @@
+ #include <linux/of_pci.h>
+ #include <linux/of_platform.h>
+ #include <linux/spinlock.h>
++#include <linux/dma-iommu.h>
+
+ #define MSI_IRQS_PER_MSIR 32
+ #define MSI_MSIR_OFFSET 4
+@@ -94,6 +95,8 @@ static void ls_scfg_msi_compose_msg(struct irq_data *data, struct msi_msg *msg)
+
+ if (msi_affinity_flag)
+ msg->data |= cpumask_first(data->common->affinity);
++
++ iommu_dma_map_msi_msg(data->irq, msg);
+ }
+
+ static int ls_scfg_msi_set_affinity(struct irq_data *irq_data,
+diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
+index 94d5d97c9d8a..8c371423b111 100644
+--- a/drivers/lightnvm/pblk-core.c
++++ b/drivers/lightnvm/pblk-core.c
+@@ -278,7 +278,9 @@ void pblk_free_rqd(struct pblk *pblk, struct nvm_rq *rqd, int type)
+ return;
+ }
+
+- nvm_dev_dma_free(dev->parent, rqd->meta_list, rqd->dma_meta_list);
++ if (rqd->meta_list)
++ nvm_dev_dma_free(dev->parent, rqd->meta_list,
++ rqd->dma_meta_list);
+ mempool_free(rqd, pool);
+ }
+
+@@ -316,7 +318,7 @@ int pblk_bio_add_pages(struct pblk *pblk, struct bio *bio, gfp_t flags,
+
+ return 0;
+ err:
+- pblk_bio_free_pages(pblk, bio, 0, i - 1);
++ pblk_bio_free_pages(pblk, bio, (bio->bi_vcnt - i), i);
+ return -1;
+ }
+
+diff --git a/drivers/lightnvm/pblk-rb.c b/drivers/lightnvm/pblk-rb.c
+index 52fdd85dbc97..58946ffebe81 100644
+--- a/drivers/lightnvm/pblk-rb.c
++++ b/drivers/lightnvm/pblk-rb.c
+@@ -142,10 +142,9 @@ static void clean_wctx(struct pblk_w_ctx *w_ctx)
+ {
+ int flags;
+
+-try:
+ flags = READ_ONCE(w_ctx->flags);
+- if (!(flags & PBLK_SUBMITTED_ENTRY))
+- goto try;
++ WARN_ONCE(!(flags & PBLK_SUBMITTED_ENTRY),
++ "pblk: overwriting unsubmitted data\n");
+
+ /* Release flags on context. Protect from writes and reads */
+ smp_store_release(&w_ctx->flags, PBLK_WRITABLE_ENTRY);
+diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c
+index 9eee10f69df0..d528617c637b 100644
+--- a/drivers/lightnvm/pblk-read.c
++++ b/drivers/lightnvm/pblk-read.c
+@@ -219,7 +219,7 @@ static int pblk_partial_read_bio(struct pblk *pblk, struct nvm_rq *rqd,
+ new_bio = bio_alloc(GFP_KERNEL, nr_holes);
+
+ if (pblk_bio_add_pages(pblk, new_bio, GFP_KERNEL, nr_holes))
+- goto err;
++ goto err_add_pages;
+
+ if (nr_holes != new_bio->bi_vcnt) {
+ pr_err("pblk: malformed bio\n");
+@@ -310,10 +310,10 @@ static int pblk_partial_read_bio(struct pblk *pblk, struct nvm_rq *rqd,
+ return NVM_IO_OK;
+
+ err:
+- pr_err("pblk: failed to perform partial read\n");
+-
+ /* Free allocated pages in new bio */
+- pblk_bio_free_pages(pblk, bio, 0, new_bio->bi_vcnt);
++ pblk_bio_free_pages(pblk, new_bio, 0, new_bio->bi_vcnt);
++err_add_pages:
++ pr_err("pblk: failed to perform partial read\n");
+ __pblk_end_io_read(pblk, rqd, false);
+ return NVM_IO_ERR;
+ }
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index bac480d75d1d..df9eb1a04f26 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -6525,6 +6525,9 @@ static int hot_remove_disk(struct mddev *mddev, dev_t dev)
+ char b[BDEVNAME_SIZE];
+ struct md_rdev *rdev;
+
++ if (!mddev->pers)
++ return -ENODEV;
++
+ rdev = find_rdev(mddev, dev);
+ if (!rdev)
+ return -ENXIO;
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index e9e3308cb0a7..4445179aa4c8 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -2474,6 +2474,8 @@ static void handle_read_error(struct r1conf *conf, struct r1bio *r1_bio)
+ fix_read_error(conf, r1_bio->read_disk,
+ r1_bio->sector, r1_bio->sectors);
+ unfreeze_array(conf);
++ } else if (mddev->ro == 0 && test_bit(FailFast, &rdev->flags)) {
++ md_error(mddev, rdev);
+ } else {
+ r1_bio->bios[r1_bio->read_disk] = IO_BLOCKED;
+ }
+diff --git a/drivers/media/cec/cec-pin-error-inj.c b/drivers/media/cec/cec-pin-error-inj.c
+index aaa899a175ce..c0088d3b8e3d 100644
+--- a/drivers/media/cec/cec-pin-error-inj.c
++++ b/drivers/media/cec/cec-pin-error-inj.c
+@@ -81,10 +81,9 @@ bool cec_pin_error_inj_parse_line(struct cec_adapter *adap, char *line)
+ u64 *error;
+ u8 *args;
+ bool has_op;
+- u32 op;
++ u8 op;
+ u8 mode;
+ u8 pos;
+- u8 v;
+
+ p = skip_spaces(p);
+ token = strsep(&p, delims);
+@@ -146,12 +145,18 @@ bool cec_pin_error_inj_parse_line(struct cec_adapter *adap, char *line)
+ comma = strchr(token, ',');
+ if (comma)
+ *comma++ = '\0';
+- if (!strcmp(token, "any"))
+- op = CEC_ERROR_INJ_OP_ANY;
+- else if (!kstrtou8(token, 0, &v))
+- op = v;
+- else
++ if (!strcmp(token, "any")) {
++ has_op = false;
++ error = pin->error_inj + CEC_ERROR_INJ_OP_ANY;
++ args = pin->error_inj_args[CEC_ERROR_INJ_OP_ANY];
++ } else if (!kstrtou8(token, 0, &op)) {
++ has_op = true;
++ error = pin->error_inj + op;
++ args = pin->error_inj_args[op];
++ } else {
+ return false;
++ }
++
+ mode = CEC_ERROR_INJ_MODE_ONCE;
+ if (comma) {
+ if (!strcmp(comma, "off"))
+@@ -166,10 +171,6 @@ bool cec_pin_error_inj_parse_line(struct cec_adapter *adap, char *line)
+ return false;
+ }
+
+- error = pin->error_inj + op;
+- args = pin->error_inj_args[op];
+- has_op = op <= 0xff;
+-
+ token = strsep(&p, delims);
+ if (p) {
+ p = skip_spaces(p);
+@@ -203,16 +204,18 @@ bool cec_pin_error_inj_parse_line(struct cec_adapter *adap, char *line)
+ mode_mask = CEC_ERROR_INJ_MODE_MASK << mode_offset;
+ arg_idx = cec_error_inj_cmds[i].arg_idx;
+
+- if (mode_offset == CEC_ERROR_INJ_RX_ARB_LOST_OFFSET ||
+- mode_offset == CEC_ERROR_INJ_TX_ADD_BYTES_OFFSET)
+- is_bit_pos = false;
+-
+ if (mode_offset == CEC_ERROR_INJ_RX_ARB_LOST_OFFSET) {
+ if (has_op)
+ return false;
+ if (!has_pos)
+ pos = 0x0f;
++ is_bit_pos = false;
++ } else if (mode_offset == CEC_ERROR_INJ_TX_ADD_BYTES_OFFSET) {
++ if (!has_pos || !pos)
++ return false;
++ is_bit_pos = false;
+ }
++
+ if (arg_idx >= 0 && is_bit_pos) {
+ if (!has_pos || pos >= 160)
+ return false;
+diff --git a/drivers/media/common/siano/smsendian.c b/drivers/media/common/siano/smsendian.c
+index bfe831c10b1c..b95a631f23f9 100644
+--- a/drivers/media/common/siano/smsendian.c
++++ b/drivers/media/common/siano/smsendian.c
+@@ -35,7 +35,7 @@ void smsendian_handle_tx_message(void *buffer)
+ switch (msg->x_msg_header.msg_type) {
+ case MSG_SMS_DATA_DOWNLOAD_REQ:
+ {
+- msg->msg_data[0] = le32_to_cpu(msg->msg_data[0]);
++ msg->msg_data[0] = le32_to_cpu((__force __le32)(msg->msg_data[0]));
+ break;
+ }
+
+@@ -44,7 +44,7 @@ void smsendian_handle_tx_message(void *buffer)
+ sizeof(struct sms_msg_hdr))/4;
+
+ for (i = 0; i < msg_words; i++)
+- msg->msg_data[i] = le32_to_cpu(msg->msg_data[i]);
++ msg->msg_data[i] = le32_to_cpu((__force __le32)msg->msg_data[i]);
+
+ break;
+ }
+@@ -64,7 +64,7 @@ void smsendian_handle_rx_message(void *buffer)
+ {
+ struct sms_version_res *ver =
+ (struct sms_version_res *) msg;
+- ver->chip_model = le16_to_cpu(ver->chip_model);
++ ver->chip_model = le16_to_cpu((__force __le16)ver->chip_model);
+ break;
+ }
+
+@@ -81,7 +81,7 @@ void smsendian_handle_rx_message(void *buffer)
+ sizeof(struct sms_msg_hdr))/4;
+
+ for (i = 0; i < msg_words; i++)
+- msg->msg_data[i] = le32_to_cpu(msg->msg_data[i]);
++ msg->msg_data[i] = le32_to_cpu((__force __le32)msg->msg_data[i]);
+
+ break;
+ }
+@@ -95,9 +95,9 @@ void smsendian_handle_message_header(void *msg)
+ #ifdef __BIG_ENDIAN
+ struct sms_msg_hdr *phdr = (struct sms_msg_hdr *)msg;
+
+- phdr->msg_type = le16_to_cpu(phdr->msg_type);
+- phdr->msg_length = le16_to_cpu(phdr->msg_length);
+- phdr->msg_flags = le16_to_cpu(phdr->msg_flags);
++ phdr->msg_type = le16_to_cpu((__force __le16)phdr->msg_type);
++ phdr->msg_length = le16_to_cpu((__force __le16)phdr->msg_length);
++ phdr->msg_flags = le16_to_cpu((__force __le16)phdr->msg_flags);
+ #endif /* __BIG_ENDIAN */
+ }
+ EXPORT_SYMBOL_GPL(smsendian_handle_message_header);
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index d3f7bb33a54d..f32ec7342ef0 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -916,9 +916,12 @@ void vb2_buffer_done(struct vb2_buffer *vb, enum vb2_buffer_state state)
+ dprintk(4, "done processing on buffer %d, state: %d\n",
+ vb->index, state);
+
+- /* sync buffers */
+- for (plane = 0; plane < vb->num_planes; ++plane)
+- call_void_memop(vb, finish, vb->planes[plane].mem_priv);
++ if (state != VB2_BUF_STATE_QUEUED &&
++ state != VB2_BUF_STATE_REQUEUEING) {
++ /* sync buffers */
++ for (plane = 0; plane < vb->num_planes; ++plane)
++ call_void_memop(vb, finish, vb->planes[plane].mem_priv);
++ }
+
+ spin_lock_irqsave(&q->done_lock, flags);
+ if (state == VB2_BUF_STATE_QUEUED ||
+diff --git a/drivers/media/i2c/smiapp/smiapp-core.c b/drivers/media/i2c/smiapp/smiapp-core.c
+index 3b7ace395ee6..e1f8208581aa 100644
+--- a/drivers/media/i2c/smiapp/smiapp-core.c
++++ b/drivers/media/i2c/smiapp/smiapp-core.c
+@@ -1001,7 +1001,7 @@ static int smiapp_read_nvm(struct smiapp_sensor *sensor,
+ if (rval)
+ goto out;
+
+- for (i = 0; i < 1000; i++) {
++ for (i = 1000; i > 0; i--) {
+ rval = smiapp_read(
+ sensor,
+ SMIAPP_REG_U8_DATA_TRANSFER_IF_1_STATUS, &s);
+@@ -1012,11 +1012,10 @@ static int smiapp_read_nvm(struct smiapp_sensor *sensor,
+ if (s & SMIAPP_DATA_TRANSFER_IF_1_STATUS_RD_READY)
+ break;
+
+- if (--i == 0) {
+- rval = -ETIMEDOUT;
+- goto out;
+- }
+-
++ }
++ if (!i) {
++ rval = -ETIMEDOUT;
++ goto out;
+ }
+
+ for (i = 0; i < SMIAPP_NVM_PAGE_SIZE; i++) {
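
The smiapp rework above stops decrementing the loop counter inside the loop body and instead counts down in the for-statement, testing for timeout once after the loop. The shape of that pattern as a standalone sketch; ready() is an illustrative stand-in for the status-register read:

/* Stand-in for the SMIAPP_..._STATUS_RD_READY check. */
static int ready(void)
{
        static int polls;

        return ++polls >= 3;
}

static int wait_ready(void)
{
        int i;

        for (i = 1000; i > 0; i--)
                if (ready())
                        break;

        return i ? 0 : -1;      /* i == 0 after the loop means timeout */
}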
+diff --git a/drivers/media/media-device.c b/drivers/media/media-device.c
+index 35e81f7c0d2f..ae59c3177555 100644
+--- a/drivers/media/media-device.c
++++ b/drivers/media/media-device.c
+@@ -54,9 +54,10 @@ static int media_device_close(struct file *filp)
+ return 0;
+ }
+
+-static int media_device_get_info(struct media_device *dev,
+- struct media_device_info *info)
++static long media_device_get_info(struct media_device *dev, void *arg)
+ {
++ struct media_device_info *info = arg;
++
+ memset(info, 0, sizeof(*info));
+
+ if (dev->driver_name[0])
+@@ -93,9 +94,9 @@ static struct media_entity *find_entity(struct media_device *mdev, u32 id)
+ return NULL;
+ }
+
+-static long media_device_enum_entities(struct media_device *mdev,
+- struct media_entity_desc *entd)
++static long media_device_enum_entities(struct media_device *mdev, void *arg)
+ {
++ struct media_entity_desc *entd = arg;
+ struct media_entity *ent;
+
+ ent = find_entity(mdev, entd->id);
+@@ -146,9 +147,9 @@ static void media_device_kpad_to_upad(const struct media_pad *kpad,
+ upad->flags = kpad->flags;
+ }
+
+-static long media_device_enum_links(struct media_device *mdev,
+- struct media_links_enum *links)
++static long media_device_enum_links(struct media_device *mdev, void *arg)
+ {
++ struct media_links_enum *links = arg;
+ struct media_entity *entity;
+
+ entity = find_entity(mdev, links->entity);
+@@ -195,9 +196,9 @@ static long media_device_enum_links(struct media_device *mdev,
+ return 0;
+ }
+
+-static long media_device_setup_link(struct media_device *mdev,
+- struct media_link_desc *linkd)
++static long media_device_setup_link(struct media_device *mdev, void *arg)
+ {
++ struct media_link_desc *linkd = arg;
+ struct media_link *link = NULL;
+ struct media_entity *source;
+ struct media_entity *sink;
+@@ -225,9 +226,9 @@ static long media_device_setup_link(struct media_device *mdev,
+ return __media_entity_setup_link(link, linkd->flags);
+ }
+
+-static long media_device_get_topology(struct media_device *mdev,
+- struct media_v2_topology *topo)
++static long media_device_get_topology(struct media_device *mdev, void *arg)
+ {
++ struct media_v2_topology *topo = arg;
+ struct media_entity *entity;
+ struct media_interface *intf;
+ struct media_pad *pad;
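
The media-device refactor above gives every ioctl handler one uniform signature, long fn(struct media_device *, void *arg), moving the cast to the concrete argument type inside each handler so a single dispatch table can invoke them without per-command casts. A minimal sketch of that shape; the types and names here are illustrative:

#include <string.h>

struct mdev { const char *driver_name; };
struct mdev_info { char driver[32]; };

typedef long (*mdev_ioctl_fn)(struct mdev *dev, void *arg);

static long mdev_get_info(struct mdev *dev, void *arg)
{
        struct mdev_info *info = arg;   /* recover the concrete type */

        memset(info, 0, sizeof(*info));
        if (dev->driver_name)
                strncpy(info->driver, dev->driver_name,
                        sizeof(info->driver) - 1);
        return 0;
}

/* One entry per ioctl; the dispatcher calls through uniformly. */
static const mdev_ioctl_fn mdev_handlers[] = { mdev_get_info };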
+diff --git a/drivers/media/pci/saa7164/saa7164-fw.c b/drivers/media/pci/saa7164/saa7164-fw.c
+index ef4906406ebf..a50461861133 100644
+--- a/drivers/media/pci/saa7164/saa7164-fw.c
++++ b/drivers/media/pci/saa7164/saa7164-fw.c
+@@ -426,7 +426,8 @@ int saa7164_downloadfirmware(struct saa7164_dev *dev)
+ __func__, fw->size);
+
+ if (fw->size != fwlength) {
+- printk(KERN_ERR "xc5000: firmware incorrect size\n");
++ printk(KERN_ERR "saa7164: firmware incorrect size %zu != %u\n",
++ fw->size, fwlength);
+ ret = -ENOMEM;
+ goto out;
+ }
+diff --git a/drivers/media/pci/tw686x/tw686x-video.c b/drivers/media/pci/tw686x/tw686x-video.c
+index c3fafa97b2d0..0ea8dd44026c 100644
+--- a/drivers/media/pci/tw686x/tw686x-video.c
++++ b/drivers/media/pci/tw686x/tw686x-video.c
+@@ -1228,7 +1228,8 @@ int tw686x_video_init(struct tw686x_dev *dev)
+ vc->vidq.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+ vc->vidq.min_buffers_needed = 2;
+ vc->vidq.lock = &vc->vb_mutex;
+- vc->vidq.gfp_flags = GFP_DMA32;
++ vc->vidq.gfp_flags = dev->dma_mode != TW686X_DMA_MODE_MEMCPY ?
++ GFP_DMA32 : 0;
+ vc->vidq.dev = &dev->pci_dev->dev;
+
+ err = vb2_queue_init(&vc->vidq);
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index 8eb000e3d8fd..f2db5128d786 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -1945,6 +1945,7 @@ error_csi2:
+
+ static void isp_detach_iommu(struct isp_device *isp)
+ {
++ arm_iommu_detach_device(isp->dev);
+ arm_iommu_release_mapping(isp->mapping);
+ isp->mapping = NULL;
+ }
+@@ -1961,8 +1962,7 @@ static int isp_attach_iommu(struct isp_device *isp)
+ mapping = arm_iommu_create_mapping(&platform_bus_type, SZ_1G, SZ_2G);
+ if (IS_ERR(mapping)) {
+ dev_err(isp->dev, "failed to create ARM IOMMU mapping\n");
+- ret = PTR_ERR(mapping);
+- goto error;
++ return PTR_ERR(mapping);
+ }
+
+ isp->mapping = mapping;
+@@ -1977,7 +1977,8 @@ static int isp_attach_iommu(struct isp_device *isp)
+ return 0;
+
+ error:
+- isp_detach_iommu(isp);
++ arm_iommu_release_mapping(isp->mapping);
++ isp->mapping = NULL;
+ return ret;
+ }
+
+diff --git a/drivers/media/platform/rcar_jpu.c b/drivers/media/platform/rcar_jpu.c
+index f6092ae45912..8b44a849ab41 100644
+--- a/drivers/media/platform/rcar_jpu.c
++++ b/drivers/media/platform/rcar_jpu.c
+@@ -1280,7 +1280,7 @@ static int jpu_open(struct file *file)
+ /* ...issue software reset */
+ ret = jpu_reset(jpu);
+ if (ret)
+- goto device_prepare_rollback;
++ goto jpu_reset_rollback;
+ }
+
+ jpu->ref_count++;
+@@ -1288,6 +1288,8 @@ static int jpu_open(struct file *file)
+ mutex_unlock(&jpu->mutex);
+ return 0;
+
++jpu_reset_rollback:
++ clk_disable_unprepare(jpu->clk);
+ device_prepare_rollback:
+ mutex_unlock(&jpu->mutex);
+ v4l_prepare_rollback:
+diff --git a/drivers/media/platform/renesas-ceu.c b/drivers/media/platform/renesas-ceu.c
+index 6599dba5ab84..dec1b3572e9b 100644
+--- a/drivers/media/platform/renesas-ceu.c
++++ b/drivers/media/platform/renesas-ceu.c
+@@ -777,8 +777,15 @@ static int ceu_try_fmt(struct ceu_device *ceudev, struct v4l2_format *v4l2_fmt)
+ const struct ceu_fmt *ceu_fmt;
+ int ret;
+
++ /*
++ * Set format on sensor sub device: bus format used to produce memory
++ * format is selected at initialization time.
++ */
+ struct v4l2_subdev_format sd_format = {
+- .which = V4L2_SUBDEV_FORMAT_TRY,
++ .which = V4L2_SUBDEV_FORMAT_TRY,
++ .format = {
++ .code = ceu_sd->mbus_fmt.mbus_code,
++ },
+ };
+
+ switch (pix->pixelformat) {
+@@ -800,10 +807,6 @@ static int ceu_try_fmt(struct ceu_device *ceudev, struct v4l2_format *v4l2_fmt)
+ v4l_bound_align_image(&pix->width, 2, CEU_MAX_WIDTH, 4,
+ &pix->height, 4, CEU_MAX_HEIGHT, 4, 0);
+
+- /*
+- * Set format on sensor sub device: bus format used to produce memory
+- * format is selected at initialization time.
+- */
+ v4l2_fill_mbus_format_mplane(&sd_format.format, pix);
+ ret = v4l2_subdev_call(v4l2_sd, pad, set_fmt, &pad_cfg, &sd_format);
+ if (ret)
+@@ -827,8 +830,15 @@ static int ceu_set_fmt(struct ceu_device *ceudev, struct v4l2_format *v4l2_fmt)
+ struct v4l2_subdev *v4l2_sd = ceu_sd->v4l2_sd;
+ int ret;
+
++ /*
++ * Set format on sensor sub device: bus format used to produce memory
++ * format is selected at initialization time.
++ */
+ struct v4l2_subdev_format format = {
+ .which = V4L2_SUBDEV_FORMAT_ACTIVE,
++ .format = {
++ .code = ceu_sd->mbus_fmt.mbus_code,
++ },
+ };
+
+ ret = ceu_try_fmt(ceudev, v4l2_fmt);
+diff --git a/drivers/media/radio/si470x/radio-si470x-i2c.c b/drivers/media/radio/si470x/radio-si470x-i2c.c
+index 41709b24b28f..f6d1fc3e5e1d 100644
+--- a/drivers/media/radio/si470x/radio-si470x-i2c.c
++++ b/drivers/media/radio/si470x/radio-si470x-i2c.c
+@@ -91,7 +91,7 @@ MODULE_PARM_DESC(max_rds_errors, "RDS maximum block errors: *1*");
+ */
+ int si470x_get_register(struct si470x_device *radio, int regnr)
+ {
+- u16 buf[READ_REG_NUM];
++ __be16 buf[READ_REG_NUM];
+ struct i2c_msg msgs[1] = {
+ {
+ .addr = radio->client->addr,
+@@ -116,7 +116,7 @@ int si470x_get_register(struct si470x_device *radio, int regnr)
+ int si470x_set_register(struct si470x_device *radio, int regnr)
+ {
+ int i;
+- u16 buf[WRITE_REG_NUM];
++ __be16 buf[WRITE_REG_NUM];
+ struct i2c_msg msgs[1] = {
+ {
+ .addr = radio->client->addr,
+@@ -146,7 +146,7 @@ int si470x_set_register(struct si470x_device *radio, int regnr)
+ static int si470x_get_all_registers(struct si470x_device *radio)
+ {
+ int i;
+- u16 buf[READ_REG_NUM];
++ __be16 buf[READ_REG_NUM];
+ struct i2c_msg msgs[1] = {
+ {
+ .addr = radio->client->addr,
+diff --git a/drivers/media/rc/ir-mce_kbd-decoder.c b/drivers/media/rc/ir-mce_kbd-decoder.c
+index 5478fe08f9d3..d94f1c190f62 100644
+--- a/drivers/media/rc/ir-mce_kbd-decoder.c
++++ b/drivers/media/rc/ir-mce_kbd-decoder.c
+@@ -324,11 +324,13 @@ again:
+ scancode = data->body & 0xffff;
+ dev_dbg(&dev->dev, "keyboard data 0x%08x\n",
+ data->body);
+- if (dev->timeout)
+- delay = usecs_to_jiffies(dev->timeout / 1000);
+- else
+- delay = msecs_to_jiffies(100);
+- mod_timer(&data->rx_timeout, jiffies + delay);
++ if (scancode) {
++ delay = nsecs_to_jiffies(dev->timeout) +
++ msecs_to_jiffies(100);
++ mod_timer(&data->rx_timeout, jiffies + delay);
++ } else {
++ del_timer(&data->rx_timeout);
++ }
+ /* Pass data to keyboard buffer parser */
+ ir_mce_kbd_process_keyboard_data(dev, scancode);
+ lsc.rc_proto = RC_PROTO_MCIR2_KBD;
+diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c b/drivers/media/usb/em28xx/em28xx-dvb.c
+index 3f493e0b0716..5f2f61f000fc 100644
+--- a/drivers/media/usb/em28xx/em28xx-dvb.c
++++ b/drivers/media/usb/em28xx/em28xx-dvb.c
+@@ -199,6 +199,7 @@ static int em28xx_start_streaming(struct em28xx_dvb *dvb)
+ int rc;
+ struct em28xx_i2c_bus *i2c_bus = dvb->adapter.priv;
+ struct em28xx *dev = i2c_bus->dev;
++ struct usb_device *udev = interface_to_usbdev(dev->intf);
+ int dvb_max_packet_size, packet_multiplier, dvb_alt;
+
+ if (dev->dvb_xfer_bulk) {
+@@ -217,6 +218,7 @@ static int em28xx_start_streaming(struct em28xx_dvb *dvb)
+ dvb_alt = dev->dvb_alt_isoc;
+ }
+
++ usb_set_interface(udev, dev->ifnum, dvb_alt);
+ rc = em28xx_set_mode(dev, EM28XX_DIGITAL_MODE);
+ if (rc < 0)
+ return rc;
+@@ -1392,7 +1394,7 @@ static int em28174_dvb_init_hauppauge_wintv_dualhd_01595(struct em28xx *dev)
+
+ dvb->i2c_client_tuner = dvb_module_probe("si2157", NULL,
+ adapter,
+- 0x60, &si2157_config);
++ addr, &si2157_config);
+ if (!dvb->i2c_client_tuner) {
+ dvb_module_release(dvb->i2c_client_demod);
+ return -ENODEV;
+diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
+index a4803ac192bb..1d49a8dd4a37 100644
+--- a/drivers/memory/tegra/mc.c
++++ b/drivers/memory/tegra/mc.c
+@@ -20,14 +20,6 @@
+ #include "mc.h"
+
+ #define MC_INTSTATUS 0x000
+-#define MC_INT_DECERR_MTS (1 << 16)
+-#define MC_INT_SECERR_SEC (1 << 13)
+-#define MC_INT_DECERR_VPR (1 << 12)
+-#define MC_INT_INVALID_APB_ASID_UPDATE (1 << 11)
+-#define MC_INT_INVALID_SMMU_PAGE (1 << 10)
+-#define MC_INT_ARBITRATION_EMEM (1 << 9)
+-#define MC_INT_SECURITY_VIOLATION (1 << 8)
+-#define MC_INT_DECERR_EMEM (1 << 6)
+
+ #define MC_INTMASK 0x004
+
+@@ -248,12 +240,13 @@ static const char *const error_names[8] = {
+ static irqreturn_t tegra_mc_irq(int irq, void *data)
+ {
+ struct tegra_mc *mc = data;
+- unsigned long status, mask;
++ unsigned long status;
+ unsigned int bit;
+
+ /* mask all interrupts to avoid flooding */
+- status = mc_readl(mc, MC_INTSTATUS);
+- mask = mc_readl(mc, MC_INTMASK);
++ status = mc_readl(mc, MC_INTSTATUS) & mc->soc->intmask;
++ if (!status)
++ return IRQ_NONE;
+
+ for_each_set_bit(bit, &status, 32) {
+ const char *error = status_names[bit] ?: "unknown";
+@@ -346,7 +339,6 @@ static int tegra_mc_probe(struct platform_device *pdev)
+ const struct of_device_id *match;
+ struct resource *res;
+ struct tegra_mc *mc;
+- u32 value;
+ int err;
+
+ match = of_match_node(tegra_mc_of_match, pdev->dev.of_node);
+@@ -414,11 +406,7 @@ static int tegra_mc_probe(struct platform_device *pdev)
+
+ WARN(!mc->soc->client_id_mask, "Missing client ID mask for this SoC\n");
+
+- value = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
+- MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
+- MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM;
+-
+- mc_writel(mc, value, MC_INTMASK);
++ mc_writel(mc, mc->soc->intmask, MC_INTMASK);
+
+ return 0;
+ }
+diff --git a/drivers/memory/tegra/mc.h b/drivers/memory/tegra/mc.h
+index ddb16676c3af..24e020b4609b 100644
+--- a/drivers/memory/tegra/mc.h
++++ b/drivers/memory/tegra/mc.h
+@@ -14,6 +14,15 @@
+
+ #include <soc/tegra/mc.h>
+
++#define MC_INT_DECERR_MTS (1 << 16)
++#define MC_INT_SECERR_SEC (1 << 13)
++#define MC_INT_DECERR_VPR (1 << 12)
++#define MC_INT_INVALID_APB_ASID_UPDATE (1 << 11)
++#define MC_INT_INVALID_SMMU_PAGE (1 << 10)
++#define MC_INT_ARBITRATION_EMEM (1 << 9)
++#define MC_INT_SECURITY_VIOLATION (1 << 8)
++#define MC_INT_DECERR_EMEM (1 << 6)
++
+ static inline u32 mc_readl(struct tegra_mc *mc, unsigned long offset)
+ {
+ return readl(mc->regs + offset);
+diff --git a/drivers/memory/tegra/tegra114.c b/drivers/memory/tegra/tegra114.c
+index b20e6e3e208e..7560b2f558a7 100644
+--- a/drivers/memory/tegra/tegra114.c
++++ b/drivers/memory/tegra/tegra114.c
+@@ -945,4 +945,6 @@ const struct tegra_mc_soc tegra114_mc_soc = {
+ .atom_size = 32,
+ .client_id_mask = 0x7f,
+ .smmu = &tegra114_smmu_soc,
++ .intmask = MC_INT_INVALID_SMMU_PAGE | MC_INT_SECURITY_VIOLATION |
++ MC_INT_DECERR_EMEM,
+ };
+diff --git a/drivers/memory/tegra/tegra124.c b/drivers/memory/tegra/tegra124.c
+index 8b6360eabb8a..bd16555cca0f 100644
+--- a/drivers/memory/tegra/tegra124.c
++++ b/drivers/memory/tegra/tegra124.c
+@@ -1035,6 +1035,9 @@ const struct tegra_mc_soc tegra124_mc_soc = {
+ .smmu = &tegra124_smmu_soc,
+ .emem_regs = tegra124_mc_emem_regs,
+ .num_emem_regs = ARRAY_SIZE(tegra124_mc_emem_regs),
++ .intmask = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
++ MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
++ MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
+ };
+ #endif /* CONFIG_ARCH_TEGRA_124_SOC */
+
+@@ -1059,5 +1062,8 @@ const struct tegra_mc_soc tegra132_mc_soc = {
+ .atom_size = 32,
+ .client_id_mask = 0x7f,
+ .smmu = &tegra132_smmu_soc,
++ .intmask = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
++ MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
++ MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
+ };
+ #endif /* CONFIG_ARCH_TEGRA_132_SOC */
+diff --git a/drivers/memory/tegra/tegra210.c b/drivers/memory/tegra/tegra210.c
+index d398bcd3fc57..3b8d0100088c 100644
+--- a/drivers/memory/tegra/tegra210.c
++++ b/drivers/memory/tegra/tegra210.c
+@@ -1092,4 +1092,7 @@ const struct tegra_mc_soc tegra210_mc_soc = {
+ .atom_size = 64,
+ .client_id_mask = 0xff,
+ .smmu = &tegra210_smmu_soc,
++ .intmask = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
++ MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
++ MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
+ };
+diff --git a/drivers/memory/tegra/tegra30.c b/drivers/memory/tegra/tegra30.c
+index d756c837f23e..d2ba50ed0490 100644
+--- a/drivers/memory/tegra/tegra30.c
++++ b/drivers/memory/tegra/tegra30.c
+@@ -967,4 +967,6 @@ const struct tegra_mc_soc tegra30_mc_soc = {
+ .atom_size = 16,
+ .client_id_mask = 0x7f,
+ .smmu = &tegra30_smmu_soc,
++ .intmask = MC_INT_INVALID_SMMU_PAGE | MC_INT_SECURITY_VIOLATION |
++ MC_INT_DECERR_EMEM,
+ };
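
The tegra-mc interrupt change above screens the raw status against the per-SoC intmask and returns IRQ_NONE when no supported bit is set, rather than re-reading MC_INTMASK inside the handler. A sketch of that screen-then-bail pattern; the constants and return codes are illustrative:

#include <stdint.h>

#define IRQ_NONE        0
#define IRQ_HANDLED     1

/* Bits this SoC actually supports (supplied per SoC in the patch). */
#define SOC_INTMASK     ((1u << 6) | (1u << 8) | (1u << 10))

static int mc_irq(uint32_t raw_status)
{
        uint32_t status = raw_status & SOC_INTMASK;

        if (!status)
                return IRQ_NONE;        /* lets the core flag a spurious IRQ */

        /* ... decode and log each set bit, then ack `status` ... */

        return IRQ_HANDLED;
}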
+diff --git a/drivers/mfd/cros_ec.c b/drivers/mfd/cros_ec.c
+index d61024141e2b..74780f2964a1 100644
+--- a/drivers/mfd/cros_ec.c
++++ b/drivers/mfd/cros_ec.c
+@@ -112,7 +112,11 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+
+ mutex_init(&ec_dev->lock);
+
+- cros_ec_query_all(ec_dev);
++ err = cros_ec_query_all(ec_dev);
++ if (err) {
++ dev_err(dev, "Cannot identify the EC: error %d\n", err);
++ return err;
++ }
+
+ if (ec_dev->irq) {
+ err = request_threaded_irq(ec_dev->irq, NULL, ec_irq_thread,
+diff --git a/drivers/mmc/core/pwrseq_simple.c b/drivers/mmc/core/pwrseq_simple.c
+index 13ef162cf066..a8b9fee4d62a 100644
+--- a/drivers/mmc/core/pwrseq_simple.c
++++ b/drivers/mmc/core/pwrseq_simple.c
+@@ -40,14 +40,18 @@ static void mmc_pwrseq_simple_set_gpios_value(struct mmc_pwrseq_simple *pwrseq,
+ struct gpio_descs *reset_gpios = pwrseq->reset_gpios;
+
+ if (!IS_ERR(reset_gpios)) {
+- int i;
+- int values[reset_gpios->ndescs];
++ int i, *values;
++ int nvalues = reset_gpios->ndescs;
+
+- for (i = 0; i < reset_gpios->ndescs; i++)
++ values = kmalloc_array(nvalues, sizeof(int), GFP_KERNEL);
++ if (!values)
++ return;
++
++ for (i = 0; i < nvalues; i++)
+ values[i] = value;
+
+- gpiod_set_array_value_cansleep(
+- reset_gpios->ndescs, reset_gpios->desc, values);
++ gpiod_set_array_value_cansleep(nvalues, reset_gpios->desc, values);
++ kfree(values);
+ }
+ }
+
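
The pwrseq change above removes a variable-length array (int values[reset_gpios->ndescs]) in favor of kmalloc_array(), taking a runtime-sized scratch buffer off the stack. The same conversion in plain C; set_array() stands in for gpiod_set_array_value_cansleep() and everything here is illustrative:

#include <stdlib.h>

static void set_array(size_t n, const int *values)
{
        (void)n;
        (void)values;           /* stub for the real consumer */
}

static int set_gpios(size_t ndescs, int value)
{
        int *values;
        size_t i;

        values = malloc(ndescs * sizeof(*values)); /* kmalloc_array analogue */
        if (!values)
                return -1;      /* the driver likewise bails on failure */

        for (i = 0; i < ndescs; i++)
                values[i] = value;

        set_array(ndescs, values);
        free(values);

        return 0;
}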
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 3ee8f57fd612..80dc2fd6576c 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -1231,6 +1231,8 @@ static void dw_mci_setup_bus(struct dw_mci_slot *slot, bool force_clkinit)
+ if (host->state == STATE_WAITING_CMD11_DONE)
+ sdmmc_cmd_bits |= SDMMC_CMD_VOLT_SWITCH;
+
++ slot->mmc->actual_clock = 0;
++
+ if (!clock) {
+ mci_writel(host, CLKENA, 0);
+ mci_send_cmd(slot, sdmmc_cmd_bits, 0);
+@@ -1289,6 +1291,8 @@ static void dw_mci_setup_bus(struct dw_mci_slot *slot, bool force_clkinit)
+
+ /* keep the last clock value that was requested from core */
+ slot->__clk_old = clock;
++ slot->mmc->actual_clock = div ? ((host->bus_hz / div) >> 1) :
++ host->bus_hz;
+ }
+
+ host->current_speed = clock;
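
The dw_mmc hunk above reports mmc->actual_clock from the programmed divider: a non-zero divider value d yields bus_hz / (2 * d), and d == 0 means the divider is bypassed. Restated as a one-line helper outside the driver (names illustrative):

static unsigned int dw_actual_clock(unsigned int bus_hz, unsigned int div)
{
        /*
         * div == 0 bypasses the divider; otherwise the card clock is
         * bus_hz / (2 * div), which (bus_hz / div) >> 1 computes.
         */
        return div ? (bus_hz / div) >> 1 : bus_hz;
}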
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index 1456abd5eeb9..e7e43f2ae224 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -916,10 +916,6 @@ static int sdhci_omap_probe(struct platform_device *pdev)
+ goto err_put_sync;
+ }
+
+- ret = sdhci_omap_config_iodelay_pinctrl_state(omap_host);
+- if (ret)
+- goto err_put_sync;
+-
+ host->mmc_host_ops.get_ro = mmc_gpio_get_ro;
+ host->mmc_host_ops.start_signal_voltage_switch =
+ sdhci_omap_start_signal_voltage_switch;
+@@ -930,12 +926,23 @@ static int sdhci_omap_probe(struct platform_device *pdev)
+ sdhci_read_caps(host);
+ host->caps |= SDHCI_CAN_DO_ADMA2;
+
+- ret = sdhci_add_host(host);
++ ret = sdhci_setup_host(host);
+ if (ret)
+ goto err_put_sync;
+
++ ret = sdhci_omap_config_iodelay_pinctrl_state(omap_host);
++ if (ret)
++ goto err_cleanup_host;
++
++ ret = __sdhci_add_host(host);
++ if (ret)
++ goto err_cleanup_host;
++
+ return 0;
+
++err_cleanup_host:
++ sdhci_cleanup_host(host);
++
+ err_put_sync:
+ pm_runtime_put_sync(dev);
+
+diff --git a/drivers/mtd/nand/raw/fsl_ifc_nand.c b/drivers/mtd/nand/raw/fsl_ifc_nand.c
+index 61aae0224078..98aac1f2e9ae 100644
+--- a/drivers/mtd/nand/raw/fsl_ifc_nand.c
++++ b/drivers/mtd/nand/raw/fsl_ifc_nand.c
+@@ -342,9 +342,16 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
+
+ case NAND_CMD_READID:
+ case NAND_CMD_PARAM: {
++ /*
++ * For READID, read 8 bytes that are currently used.
++ * For PARAM, read all 3 copies of 256-bytes pages.
++ */
++ int len = 8;
+ int timing = IFC_FIR_OP_RB;
+- if (command == NAND_CMD_PARAM)
++ if (command == NAND_CMD_PARAM) {
+ timing = IFC_FIR_OP_RBCD;
++ len = 256 * 3;
++ }
+
+ ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+ (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
+@@ -354,12 +361,8 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
+ &ifc->ifc_nand.nand_fcr0);
+ ifc_out32(column, &ifc->ifc_nand.row3);
+
+- /*
+- * although currently it's 8 bytes for READID, we always read
+- * the maximum 256 bytes(for PARAM)
+- */
+- ifc_out32(256, &ifc->ifc_nand.nand_fbcr);
+- ifc_nand_ctrl->read_bytes = 256;
++ ifc_out32(len, &ifc->ifc_nand.nand_fbcr);
++ ifc_nand_ctrl->read_bytes = len;
+
+ set_addr(mtd, 0, 0, 0);
+ fsl_ifc_run_command(mtd);
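
The fsl_ifc_nand fix above stops transferring a fixed 256 bytes for both commands: READID needs only the 8 bytes currently used, while PARAM reads all three 256-byte copies of the parameter page. As a tiny helper (the enum and function are illustrative):

enum ifc_cmd { IFC_CMD_READID, IFC_CMD_PARAM };

static int ifc_read_len(enum ifc_cmd cmd)
{
        /* the parameter page is kept in three redundant copies */
        return cmd == IFC_CMD_PARAM ? 256 * 3 : 8;
}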
+diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
+index 600d5ad1fbde..18f51d5ac846 100644
+--- a/drivers/net/dsa/qca8k.c
++++ b/drivers/net/dsa/qca8k.c
+@@ -473,7 +473,7 @@ qca8k_set_pad_ctrl(struct qca8k_priv *priv, int port, int mode)
+ static void
+ qca8k_port_set_status(struct qca8k_priv *priv, int port, int enable)
+ {
+- u32 mask = QCA8K_PORT_STATUS_TXMAC;
++ u32 mask = QCA8K_PORT_STATUS_TXMAC | QCA8K_PORT_STATUS_RXMAC;
+
+ /* Port 0 and 6 have no internal PHY */
+ if ((port > 0) && (port < 6))
+@@ -490,6 +490,7 @@ qca8k_setup(struct dsa_switch *ds)
+ {
+ struct qca8k_priv *priv = (struct qca8k_priv *)ds->priv;
+ int ret, i, phy_mode = -1;
++ u32 mask;
+
+ /* Make sure that port 0 is the cpu port */
+ if (!dsa_is_cpu_port(ds, 0)) {
+@@ -515,7 +516,10 @@ qca8k_setup(struct dsa_switch *ds)
+ if (ret < 0)
+ return ret;
+
+- /* Enable CPU Port */
++ /* Enable CPU Port, force it to maximum bandwidth and full-duplex */
++ mask = QCA8K_PORT_STATUS_SPEED_1000 | QCA8K_PORT_STATUS_TXFLOW |
++ QCA8K_PORT_STATUS_RXFLOW | QCA8K_PORT_STATUS_DUPLEX;
++ qca8k_write(priv, QCA8K_REG_PORT_STATUS(QCA8K_CPU_PORT), mask);
+ qca8k_reg_set(priv, QCA8K_REG_GLOBAL_FW_CTRL0,
+ QCA8K_GLOBAL_FW_CTRL0_CPU_PORT_EN);
+ qca8k_port_set_status(priv, QCA8K_CPU_PORT, 1);
+@@ -583,6 +587,47 @@ qca8k_setup(struct dsa_switch *ds)
+ return 0;
+ }
+
++static void
++qca8k_adjust_link(struct dsa_switch *ds, int port, struct phy_device *phy)
++{
++ struct qca8k_priv *priv = ds->priv;
++ u32 reg;
++
++ /* Force fixed-link setting for CPU port, skip others. */
++ if (!phy_is_pseudo_fixed_link(phy))
++ return;
++
++ /* Set port speed */
++ switch (phy->speed) {
++ case 10:
++ reg = QCA8K_PORT_STATUS_SPEED_10;
++ break;
++ case 100:
++ reg = QCA8K_PORT_STATUS_SPEED_100;
++ break;
++ case 1000:
++ reg = QCA8K_PORT_STATUS_SPEED_1000;
++ break;
++ default:
++ dev_dbg(priv->dev, "port%d link speed %dMbps not supported.\n",
++ port, phy->speed);
++ return;
++ }
++
++ /* Set duplex mode */
++ if (phy->duplex == DUPLEX_FULL)
++ reg |= QCA8K_PORT_STATUS_DUPLEX;
++
++ /* Force flow control */
++ if (dsa_is_cpu_port(ds, port))
++ reg |= QCA8K_PORT_STATUS_RXFLOW | QCA8K_PORT_STATUS_TXFLOW;
++
++ /* Force link down before changing MAC options */
++ qca8k_port_set_status(priv, port, 0);
++ qca8k_write(priv, QCA8K_REG_PORT_STATUS(port), reg);
++ qca8k_port_set_status(priv, port, 1);
++}
++
+ static int
+ qca8k_phy_read(struct dsa_switch *ds, int phy, int regnum)
+ {
+@@ -831,6 +876,7 @@ qca8k_get_tag_protocol(struct dsa_switch *ds, int port)
+ static const struct dsa_switch_ops qca8k_switch_ops = {
+ .get_tag_protocol = qca8k_get_tag_protocol,
+ .setup = qca8k_setup,
++ .adjust_link = qca8k_adjust_link,
+ .get_strings = qca8k_get_strings,
+ .phy_read = qca8k_phy_read,
+ .phy_write = qca8k_phy_write,
+@@ -862,6 +908,7 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
+ return -ENOMEM;
+
+ priv->bus = mdiodev->bus;
++ priv->dev = &mdiodev->dev;
+
+ /* read the switches ID register */
+ id = qca8k_read(priv, QCA8K_REG_MASK_CTRL);
+@@ -933,6 +980,7 @@ static SIMPLE_DEV_PM_OPS(qca8k_pm_ops,
+ qca8k_suspend, qca8k_resume);
+
+ static const struct of_device_id qca8k_of_match[] = {
++ { .compatible = "qca,qca8334" },
+ { .compatible = "qca,qca8337" },
+ { /* sentinel */ },
+ };
+diff --git a/drivers/net/dsa/qca8k.h b/drivers/net/dsa/qca8k.h
+index 1cf8a920d4ff..613fe5c50236 100644
+--- a/drivers/net/dsa/qca8k.h
++++ b/drivers/net/dsa/qca8k.h
+@@ -51,8 +51,10 @@
+ #define QCA8K_GOL_MAC_ADDR0 0x60
+ #define QCA8K_GOL_MAC_ADDR1 0x64
+ #define QCA8K_REG_PORT_STATUS(_i) (0x07c + (_i) * 4)
+-#define QCA8K_PORT_STATUS_SPEED GENMASK(2, 0)
+-#define QCA8K_PORT_STATUS_SPEED_S 0
++#define QCA8K_PORT_STATUS_SPEED GENMASK(1, 0)
++#define QCA8K_PORT_STATUS_SPEED_10 0
++#define QCA8K_PORT_STATUS_SPEED_100 0x1
++#define QCA8K_PORT_STATUS_SPEED_1000 0x2
+ #define QCA8K_PORT_STATUS_TXMAC BIT(2)
+ #define QCA8K_PORT_STATUS_RXMAC BIT(3)
+ #define QCA8K_PORT_STATUS_TXFLOW BIT(4)
+@@ -165,6 +167,7 @@ struct qca8k_priv {
+ struct ar8xxx_port_status port_sts[QCA8K_NUM_PORTS];
+ struct dsa_switch *ds;
+ struct mutex reg_mutex;
++ struct device *dev;
+ };
+
+ struct qca8k_mib_desc {
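
qca8k_adjust_link above maps the PHY speed onto the two-bit speed field that the header change narrows to GENMASK(1, 0). That mapping on its own; the field values mirror the new qca8k.h defines, the rest is illustrative:

#include <stdint.h>

#define PORT_STATUS_SPEED_10    0x0
#define PORT_STATUS_SPEED_100   0x1
#define PORT_STATUS_SPEED_1000  0x2

static int port_speed_bits(int mbps, uint32_t *reg)
{
        switch (mbps) {
        case 10:
                *reg = PORT_STATUS_SPEED_10;
                return 0;
        case 100:
                *reg = PORT_STATUS_SPEED_100;
                return 0;
        case 1000:
                *reg = PORT_STATUS_SPEED_1000;
                return 0;
        default:
                return -1;      /* unsupported: leave the port untouched */
        }
}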
+diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
+index 1b9d3130af4d..17f12c18d225 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_com.c
++++ b/drivers/net/ethernet/amazon/ena/ena_com.c
+@@ -333,6 +333,7 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
+
+ memset(&io_sq->desc_addr, 0x0, sizeof(io_sq->desc_addr));
+
++ io_sq->dma_addr_bits = ena_dev->dma_addr_bits;
+ io_sq->desc_entry_size =
+ (io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX) ?
+ sizeof(struct ena_eth_io_tx_desc) :
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 1b45cd73a258..119777986ea4 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -1128,14 +1128,14 @@ static void xgbe_phy_adjust_link(struct xgbe_prv_data *pdata)
+
+ if (pdata->tx_pause != pdata->phy.tx_pause) {
+ new_state = 1;
+- pdata->hw_if.config_tx_flow_control(pdata);
+ pdata->tx_pause = pdata->phy.tx_pause;
++ pdata->hw_if.config_tx_flow_control(pdata);
+ }
+
+ if (pdata->rx_pause != pdata->phy.rx_pause) {
+ new_state = 1;
+- pdata->hw_if.config_rx_flow_control(pdata);
+ pdata->rx_pause = pdata->phy.rx_pause;
++ pdata->hw_if.config_rx_flow_control(pdata);
+ }
+
+ /* Speed support */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index f83769d8047b..401e58939795 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -6457,6 +6457,9 @@ static int bnxt_update_link(struct bnxt *bp, bool chng_link_state)
+ }
+ mutex_unlock(&bp->hwrm_cmd_lock);
+
++ if (!BNXT_SINGLE_PF(bp))
++ return 0;
++
+ diff = link_info->support_auto_speeds ^ link_info->advertising;
+ if ((link_info->support_auto_speeds | diff) !=
+ link_info->support_auto_speeds) {
+@@ -8614,8 +8617,8 @@ static int bnxt_init_mac_addr(struct bnxt *bp)
+ memcpy(bp->dev->dev_addr, vf->mac_addr, ETH_ALEN);
+ } else {
+ eth_hw_addr_random(bp->dev);
+- rc = bnxt_approve_mac(bp, bp->dev->dev_addr);
+ }
++ rc = bnxt_approve_mac(bp, bp->dev->dev_addr);
+ #endif
+ }
+ return rc;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index f952963d594e..e1f025b2a6bc 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -914,7 +914,8 @@ static int bnxt_vf_configure_mac(struct bnxt *bp, struct bnxt_vf_info *vf)
+ if (req->enables & cpu_to_le32(FUNC_VF_CFG_REQ_ENABLES_DFLT_MAC_ADDR)) {
+ if (is_valid_ether_addr(req->dflt_mac_addr) &&
+ ((vf->flags & BNXT_VF_TRUST) ||
+- (!is_valid_ether_addr(vf->mac_addr)))) {
++ !is_valid_ether_addr(vf->mac_addr) ||
++ ether_addr_equal(req->dflt_mac_addr, vf->mac_addr))) {
+ ether_addr_copy(vf->vf_mac_addr, req->dflt_mac_addr);
+ return bnxt_hwrm_exec_fwd_resp(bp, vf, msg_size);
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 005283c7cdfe..72c83496e01f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -3066,6 +3066,7 @@ static void cxgb_del_udp_tunnel(struct net_device *netdev,
+
+ adapter->geneve_port = 0;
+ t4_write_reg(adapter, MPS_RX_GENEVE_TYPE_A, 0);
++ break;
+ default:
+ return;
+ }
+@@ -3151,6 +3152,7 @@ static void cxgb_add_udp_tunnel(struct net_device *netdev,
+
+ t4_write_reg(adapter, MPS_RX_GENEVE_TYPE_A,
+ GENEVE_V(be16_to_cpu(ti->port)) | GENEVE_EN_F);
++ break;
+ default:
+ return;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+index 02145f2de820..618eec654bd3 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+@@ -283,3 +283,4 @@ EXPORT_SYMBOL(hnae3_unregister_ae_dev);
+ MODULE_AUTHOR("Huawei Tech. Co., Ltd.");
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("HNAE3(Hisilicon Network Acceleration Engine) Framework");
++MODULE_VERSION(HNAE3_MOD_VERSION);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 37ec1b3286c6..67ed70fc3f0a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -36,6 +36,8 @@
+ #include <linux/pci.h>
+ #include <linux/types.h>
+
++#define HNAE3_MOD_VERSION "1.0"
++
+ /* Device IDs */
+ #define HNAE3_DEV_ID_GE 0xA220
+ #define HNAE3_DEV_ID_25GE 0xA221
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 8c55965a66ac..c23ba15d5e8f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1836,6 +1836,7 @@ static void hns3_replace_buffer(struct hns3_enet_ring *ring, int i,
+ hns3_unmap_buffer(ring, &ring->desc_cb[i]);
+ ring->desc_cb[i] = *res_cb;
+ ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma);
++ ring->desc[i].rx.bd_base_info = 0;
+ }
+
+ static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i)
+@@ -1843,6 +1844,7 @@ static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i)
+ ring->desc_cb[i].reuse_flag = 0;
+ ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma
+ + ring->desc_cb[i].page_offset);
++ ring->desc[i].rx.bd_base_info = 0;
+ }
+
+ static void hns3_nic_reclaim_one_desc(struct hns3_enet_ring *ring, int *bytes,
+@@ -3600,6 +3602,8 @@ static int __init hns3_init_module(void)
+
+ client.ops = &client_ops;
+
++ INIT_LIST_HEAD(&client.node);
++
+ ret = hnae3_register_client(&client);
+ if (ret)
+ return ret;
+@@ -3627,3 +3631,4 @@ MODULE_DESCRIPTION("HNS3: Hisilicon Ethernet Driver");
+ MODULE_AUTHOR("Huawei Tech. Co., Ltd.");
+ MODULE_LICENSE("GPL");
+ MODULE_ALIAS("pci:hns-nic");
++MODULE_VERSION(HNS3_MOD_VERSION);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 98cdbd3a1163..5b40f5a53761 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -14,6 +14,8 @@
+
+ #include "hnae3.h"
+
++#define HNS3_MOD_VERSION "1.0"
++
+ extern const char hns3_driver_version[];
+
+ enum hns3_nic_state {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 2066dd734444..553eaa476b19 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -1459,8 +1459,11 @@ static int hclge_alloc_vport(struct hclge_dev *hdev)
+ /* We need to alloc a vport for main NIC of PF */
+ num_vport = hdev->num_vmdq_vport + hdev->num_req_vfs + 1;
+
+- if (hdev->num_tqps < num_vport)
+- num_vport = hdev->num_tqps;
++ if (hdev->num_tqps < num_vport) {
++ dev_err(&hdev->pdev->dev, "tqps(%d) is less than vports(%d)",
++ hdev->num_tqps, num_vport);
++ return -EINVAL;
++ }
+
+ /* Alloc the same number of TQPs for every vport */
+ tqp_per_vport = hdev->num_tqps / num_vport;
+@@ -3783,13 +3786,11 @@ static int hclge_ae_start(struct hnae3_handle *handle)
+ hclge_cfg_mac_mode(hdev, true);
+ clear_bit(HCLGE_STATE_DOWN, &hdev->state);
+ mod_timer(&hdev->service_timer, jiffies + HZ);
++ hdev->hw.mac.link = 0;
+
+ /* reset tqp stats */
+ hclge_reset_tqp_stats(handle);
+
+- if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
+- return 0;
+-
+ ret = hclge_mac_start_phy(hdev);
+ if (ret)
+ return ret;
+@@ -3805,9 +3806,12 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+
+ del_timer_sync(&hdev->service_timer);
+ cancel_work_sync(&hdev->service_task);
++ clear_bit(HCLGE_STATE_SERVICE_SCHED, &hdev->state);
+
+- if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
++ if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state)) {
++ hclge_mac_stop_phy(hdev);
+ return;
++ }
+
+ for (i = 0; i < vport->alloc_tqps; i++)
+ hclge_tqp_enable(hdev, i, 0, false);
+@@ -3819,7 +3823,6 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+
+ /* reset tqp stats */
+ hclge_reset_tqp_stats(handle);
+- hclge_update_link_status(hdev);
+ }
+
+ static int hclge_get_mac_vlan_cmd_status(struct hclge_vport *vport,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+index 0f4157e71282..7c88b65353cc 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+@@ -15,7 +15,7 @@
+ #include "hclge_cmd.h"
+ #include "hnae3.h"
+
+-#define HCLGE_MOD_VERSION "v1.0"
++#define HCLGE_MOD_VERSION "1.0"
+ #define HCLGE_DRIVER_NAME "hclge"
+
+ #define HCLGE_INVALID_VPORT 0xffff
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 2b8426412cc9..7a6510314657 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1323,6 +1323,7 @@ static void hclgevf_ae_stop(struct hnae3_handle *handle)
+ hclgevf_reset_tqp_stats(handle);
+ del_timer_sync(&hdev->service_timer);
+ cancel_work_sync(&hdev->service_task);
++ clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state);
+ hclgevf_update_link_status(hdev, 0);
+ }
+
+@@ -1441,6 +1442,8 @@ static int hclgevf_misc_irq_init(struct hclgevf_dev *hdev)
+ return ret;
+ }
+
++ hclgevf_clear_event_cause(hdev, 0);
++
+ /* enable misc. vector(vector 0) */
+ hclgevf_enable_vector(&hdev->misc_vector, true);
+
+@@ -1451,6 +1454,7 @@ static void hclgevf_misc_irq_uninit(struct hclgevf_dev *hdev)
+ {
+ /* disable misc vector(vector 0) */
+ hclgevf_enable_vector(&hdev->misc_vector, false);
++ synchronize_irq(hdev->misc_vector.vector_irq);
+ free_irq(hdev->misc_vector.vector_irq, hdev);
+ hclgevf_free_vector(hdev, 0);
+ }
+@@ -1489,10 +1493,12 @@ static int hclgevf_init_instance(struct hclgevf_dev *hdev,
+ return ret;
+ break;
+ case HNAE3_CLIENT_ROCE:
+- hdev->roce_client = client;
+- hdev->roce.client = client;
++ if (hnae3_dev_roce_supported(hdev)) {
++ hdev->roce_client = client;
++ hdev->roce.client = client;
++ }
+
+- if (hdev->roce_client && hnae3_dev_roce_supported(hdev)) {
++ if (hdev->roce_client && hdev->nic_client) {
+ ret = hclgevf_init_roce_base_info(hdev);
+ if (ret)
+ return ret;
+@@ -1625,6 +1631,10 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+
+ hclgevf_state_init(hdev);
+
++ ret = hclgevf_cmd_init(hdev);
++ if (ret)
++ goto err_cmd_init;
++
+ ret = hclgevf_misc_irq_init(hdev);
+ if (ret) {
+ dev_err(&pdev->dev, "failed(%d) to init Misc IRQ(vector0)\n",
+@@ -1632,10 +1642,6 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ goto err_misc_irq_init;
+ }
+
+- ret = hclgevf_cmd_init(hdev);
+- if (ret)
+- goto err_cmd_init;
+-
+ ret = hclgevf_configure(hdev);
+ if (ret) {
+ dev_err(&pdev->dev, "failed(%d) to fetch configuration\n", ret);
+@@ -1683,10 +1689,10 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ return 0;
+
+ err_config:
+- hclgevf_cmd_uninit(hdev);
+-err_cmd_init:
+ hclgevf_misc_irq_uninit(hdev);
+ err_misc_irq_init:
++ hclgevf_cmd_uninit(hdev);
++err_cmd_init:
+ hclgevf_state_uninit(hdev);
+ hclgevf_uninit_msi(hdev);
+ err_irq_init:
+@@ -1696,9 +1702,9 @@ err_irq_init:
+
+ static void hclgevf_uninit_hdev(struct hclgevf_dev *hdev)
+ {
+- hclgevf_cmd_uninit(hdev);
+- hclgevf_misc_irq_uninit(hdev);
+ hclgevf_state_uninit(hdev);
++ hclgevf_misc_irq_uninit(hdev);
++ hclgevf_cmd_uninit(hdev);
+ hclgevf_uninit_msi(hdev);
+ hclgevf_pci_uninit(hdev);
+ }
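
The hclgevf reordering above makes teardown the exact mirror of bring-up (command queue initialized before the misc IRQ, therefore IRQ released before the command queue) and repairs the error labels so a failure unwinds only what already succeeded. The general shape as a standalone sketch; all four functions are stand-ins:

static int  cmd_init(void)   { return 0; }
static void cmd_uninit(void) { }
static int  irq_init(void)   { return 0; }
static void irq_uninit(void) { }

static int bring_up(void)
{
        int ret;

        ret = cmd_init();               /* acquired first ... */
        if (ret)
                return ret;

        ret = irq_init();
        if (ret)
                goto err_irq;           /* ... so only cmd unwinds here */

        return 0;

err_irq:
        cmd_uninit();
        return ret;
}

static void tear_down(void)
{
        irq_uninit();                   /* released in reverse order */
        cmd_uninit();
}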
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+index a477a7c36bbd..9763e742e6fb 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+@@ -9,7 +9,7 @@
+ #include "hclgevf_cmd.h"
+ #include "hnae3.h"
+
+-#define HCLGEVF_MOD_VERSION "v1.0"
++#define HCLGEVF_MOD_VERSION "1.0"
+ #define HCLGEVF_DRIVER_NAME "hclgevf"
+
+ #define HCLGEVF_ROCEE_VECTOR_NUM 0
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index ec4a9759a6f2..3afb1f3b6f91 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -3546,15 +3546,12 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca)
+ }
+ break;
+ case e1000_pch_spt:
+- if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
+- /* Stable 24MHz frequency */
+- incperiod = INCPERIOD_24MHZ;
+- incvalue = INCVALUE_24MHZ;
+- shift = INCVALUE_SHIFT_24MHZ;
+- adapter->cc.shift = shift;
+- break;
+- }
+- return -EINVAL;
++ /* Stable 24MHz frequency */
++ incperiod = INCPERIOD_24MHZ;
++ incvalue = INCVALUE_24MHZ;
++ shift = INCVALUE_SHIFT_24MHZ;
++ adapter->cc.shift = shift;
++ break;
+ case e1000_pch_cnp:
+ if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
+ /* Stable 24MHz frequency */
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index a44139c1de80..12ba0b9f238b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -608,7 +608,7 @@ struct i40e_pf {
+ unsigned long ptp_tx_start;
+ struct hwtstamp_config tstamp_config;
+ struct mutex tmreg_lock; /* Used to protect the SYSTIME registers. */
+- u64 ptp_base_adj;
++ u32 ptp_adj_mult;
+ u32 tx_hwtstamp_timeouts;
+ u32 tx_hwtstamp_skipped;
+ u32 rx_hwtstamp_cleared;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index b974482ff630..c5e3d5f406ec 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -977,7 +977,9 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
+ ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 10000baseCR_Full) ||
+ ethtool_link_ksettings_test_link_mode(ks, advertising,
+- 10000baseSR_Full))
++ 10000baseSR_Full) ||
++ ethtool_link_ksettings_test_link_mode(ks, advertising,
++ 10000baseLR_Full))
+ config.link_speed |= I40E_LINK_SPEED_10GB;
+ if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 20000baseKR2_Full))
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ptp.c b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+index 5b47dd1f75a5..1a8e0cad787f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ptp.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+@@ -40,9 +40,9 @@
+ * At 1Gb link, the period is multiplied by 20. (32ns)
+ * 1588 functionality is not supported at 100Mbps.
+ */
+-#define I40E_PTP_40GB_INCVAL 0x0199999999ULL
+-#define I40E_PTP_10GB_INCVAL 0x0333333333ULL
+-#define I40E_PTP_1GB_INCVAL 0x2000000000ULL
++#define I40E_PTP_40GB_INCVAL 0x0199999999ULL
++#define I40E_PTP_10GB_INCVAL_MULT 2
++#define I40E_PTP_1GB_INCVAL_MULT 20
+
+ #define I40E_PRTTSYN_CTL1_TSYNTYPE_V1 BIT(I40E_PRTTSYN_CTL1_TSYNTYPE_SHIFT)
+ #define I40E_PRTTSYN_CTL1_TSYNTYPE_V2 (2 << \
+@@ -130,17 +130,24 @@ static int i40e_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
+ ppb = -ppb;
+ }
+
+- smp_mb(); /* Force any pending update before accessing. */
+- adj = READ_ONCE(pf->ptp_base_adj);
+-
+- freq = adj;
++ freq = I40E_PTP_40GB_INCVAL;
+ freq *= ppb;
+ diff = div_u64(freq, 1000000000ULL);
+
+ if (neg_adj)
+- adj -= diff;
++ adj = I40E_PTP_40GB_INCVAL - diff;
+ else
+- adj += diff;
++ adj = I40E_PTP_40GB_INCVAL + diff;
++
++ /* At some link speeds, the base incval is so large that directly
++ * multiplying by ppb would result in arithmetic overflow even when
++ * using a u64. Avoid this by instead calculating the new incval
++ * always in terms of the 40GbE clock rate and then multiplying by the
++ * link speed factor afterwards. This does result in slightly lower
++ * precision at lower link speeds, but it is fairly minor.
++ */
++ smp_mb(); /* Force any pending update before accessing. */
++ adj *= READ_ONCE(pf->ptp_adj_mult);
+
+ wr32(hw, I40E_PRTTSYN_INC_L, adj & 0xFFFFFFFF);
+ wr32(hw, I40E_PRTTSYN_INC_H, adj >> 32);
+@@ -338,6 +345,8 @@ void i40e_ptp_rx_hang(struct i40e_pf *pf)
+ **/
+ void i40e_ptp_tx_hang(struct i40e_pf *pf)
+ {
++ struct sk_buff *skb;
++
+ if (!(pf->flags & I40E_FLAG_PTP) || !pf->ptp_tx)
+ return;
+
+@@ -350,9 +359,12 @@ void i40e_ptp_tx_hang(struct i40e_pf *pf)
+ * within a second it is reasonable to assume that we never will.
+ */
+ if (time_is_before_jiffies(pf->ptp_tx_start + HZ)) {
+- dev_kfree_skb_any(pf->ptp_tx_skb);
++ skb = pf->ptp_tx_skb;
+ pf->ptp_tx_skb = NULL;
+ clear_bit_unlock(__I40E_PTP_TX_IN_PROGRESS, pf->state);
++
++ /* Free the skb after we clear the bitlock */
++ dev_kfree_skb_any(skb);
+ pf->tx_hwtstamp_timeouts++;
+ }
+ }
+@@ -462,6 +474,7 @@ void i40e_ptp_set_increment(struct i40e_pf *pf)
+ struct i40e_link_status *hw_link_info;
+ struct i40e_hw *hw = &pf->hw;
+ u64 incval;
++ u32 mult;
+
+ hw_link_info = &hw->phy.link_info;
+
+@@ -469,10 +482,10 @@ void i40e_ptp_set_increment(struct i40e_pf *pf)
+
+ switch (hw_link_info->link_speed) {
+ case I40E_LINK_SPEED_10GB:
+- incval = I40E_PTP_10GB_INCVAL;
++ mult = I40E_PTP_10GB_INCVAL_MULT;
+ break;
+ case I40E_LINK_SPEED_1GB:
+- incval = I40E_PTP_1GB_INCVAL;
++ mult = I40E_PTP_1GB_INCVAL_MULT;
+ break;
+ case I40E_LINK_SPEED_100MB:
+ {
+@@ -483,15 +496,20 @@ void i40e_ptp_set_increment(struct i40e_pf *pf)
+ "1588 functionality is not supported at 100 Mbps. Stopping the PHC.\n");
+ warn_once++;
+ }
+- incval = 0;
++ mult = 0;
+ break;
+ }
+ case I40E_LINK_SPEED_40GB:
+ default:
+- incval = I40E_PTP_40GB_INCVAL;
++ mult = 1;
+ break;
+ }
+
++ /* The increment value is calculated by taking the base 40GbE incvalue
++ * and multiplying it by a factor based on the link speed.
++ */
++ incval = I40E_PTP_40GB_INCVAL * mult;
++
+ /* Write the new increment value into the increment register. The
+ * hardware will not update the clock until both registers have been
+ * written.
+@@ -500,7 +518,7 @@ void i40e_ptp_set_increment(struct i40e_pf *pf)
+ wr32(hw, I40E_PRTTSYN_INC_H, incval >> 32);
+
+ /* Update the base adjustement value. */
+- WRITE_ONCE(pf->ptp_base_adj, incval);
++ WRITE_ONCE(pf->ptp_adj_mult, mult);
+ smp_mb(); /* Force the above update. */
+ }
+
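
The i40e comment above can be checked numerically: the 1GbE increment is 20x the 40GbE base of 0x0199999999 (~6.9e9), and multiplying that directly by a parts-per-billion value can exceed 64 bits, so the patch always scales the 40GbE base by ppb first and applies the link-speed multiplier last. A standalone check of that arithmetic; the two constants mirror the patch, the rest is illustrative:

#include <stdint.h>
#include <stdio.h>

#define PTP_40GB_INCVAL 0x0199999999ULL /* base increment, ~6.9e9 */
#define PTP_1GB_MULT    20              /* link-speed factor for 1GbE */

static uint64_t scaled_incval(int32_t ppb, uint32_t mult)
{
        uint64_t base = PTP_40GB_INCVAL, diff;
        int neg = ppb < 0;

        if (neg)
                ppb = -ppb;

        /*
         * base * ppb fits in 64 bits for the 40GbE value; the 1GbE
         * value (20x larger) times a large ppb would not, hence the
         * multiplication by `mult` happens only after the division.
         */
        diff = base * (uint64_t)ppb / 1000000000ULL;

        return (neg ? base - diff : base + diff) * mult;
}

int main(void)
{
        printf("1GbE incval at +100 ppb: 0x%llx\n",
               (unsigned long long)scaled_incval(100, PTP_1GB_MULT));
        return 0;
}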
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index cce7ada89255..9afee130c2aa 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -8763,12 +8763,17 @@ static void igb_rar_set_index(struct igb_adapter *adapter, u32 index)
+ if (is_valid_ether_addr(addr))
+ rar_high |= E1000_RAH_AV;
+
+- if (hw->mac.type == e1000_82575)
++ switch (hw->mac.type) {
++ case e1000_82575:
++ case e1000_i210:
+ rar_high |= E1000_RAH_POOL_1 *
+ adapter->mac_table[index].queue;
+- else
++ break;
++ default:
+ rar_high |= E1000_RAH_POOL_1 <<
+ adapter->mac_table[index].queue;
++ break;
++ }
+ }
+
+ wr32(E1000_RAL(index), rar_low);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+index ed4cbe94c355..4da10b44b7d3 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+@@ -618,6 +618,14 @@ static bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter)
+ }
+
+ #endif
++ /* To support macvlan offload we have to use num_tc to
++ * restrict the queues that can be used by the device.
++ * By doing this we can avoid reporting a false number of
++ * queues.
++ */
++ if (vmdq_i > 1)
++ netdev_set_num_tc(adapter->netdev, 1);
++
+ /* populate TC0 for use by pool 0 */
+ netdev_set_tc_queue(adapter->netdev, 0,
+ adapter->num_rx_queues_per_pool, 0);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index a820a6cd831a..d91a5a59c71a 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -8875,14 +8875,6 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc)
+ } else {
+ netdev_reset_tc(dev);
+
+- /* To support macvlan offload we have to use num_tc to
+- * restrict the queues that can be used by the device.
+- * By doing this we can avoid reporting a false number of
+- * queues.
+- */
+- if (!tc && adapter->num_rx_pools > 1)
+- netdev_set_num_tc(dev, 1);
+-
+ if (adapter->hw.mac.type == ixgbe_mac_82598EB)
+ adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
+
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+index 850f8af95e49..043b695d2a61 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+@@ -4187,6 +4187,7 @@ static int ixgbevf_set_mac(struct net_device *netdev, void *p)
+ return -EPERM;
+
+ ether_addr_copy(hw->mac.addr, addr->sa_data);
++ ether_addr_copy(hw->mac.perm_addr, addr->sa_data);
+ ether_addr_copy(netdev->dev_addr, addr->sa_data);
+
+ return 0;
+diff --git a/drivers/net/ethernet/marvell/mvpp2.c b/drivers/net/ethernet/marvell/mvpp2.c
+index 6f410235987c..3bc5690d8376 100644
+--- a/drivers/net/ethernet/marvell/mvpp2.c
++++ b/drivers/net/ethernet/marvell/mvpp2.c
+@@ -2109,6 +2109,9 @@ static void mvpp2_prs_dsa_tag_set(struct mvpp2 *priv, int port, bool add,
+ mvpp2_prs_sram_ai_update(&pe, 0,
+ MVPP2_PRS_SRAM_AI_MASK);
+
++ /* Set result info bits to 'single vlan' */
++ mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_VLAN_SINGLE,
++ MVPP2_PRS_RI_VLAN_MASK);
+ /* If packet is tagged continue check vid filtering */
+ mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_VID);
+ } else {
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 1904c0323d39..692855183187 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -5882,24 +5882,24 @@ static int mlxsw_sp_router_fib_rule_event(unsigned long event,
+ switch (info->family) {
+ case AF_INET:
+ if (!fib4_rule_default(rule) && !rule->l3mdev)
+- err = -1;
++ err = -EOPNOTSUPP;
+ break;
+ case AF_INET6:
+ if (!fib6_rule_default(rule) && !rule->l3mdev)
+- err = -1;
++ err = -EOPNOTSUPP;
+ break;
+ case RTNL_FAMILY_IPMR:
+ if (!ipmr_rule_default(rule) && !rule->l3mdev)
+- err = -1;
++ err = -EOPNOTSUPP;
+ break;
+ case RTNL_FAMILY_IP6MR:
+ if (!ip6mr_rule_default(rule) && !rule->l3mdev)
+- err = -1;
++ err = -EOPNOTSUPP;
+ break;
+ }
+
+ if (err < 0)
+- NL_SET_ERR_MSG_MOD(extack, "FIB rules not supported. Aborting offload");
++ NL_SET_ERR_MSG_MOD(extack, "FIB rules not supported");
+
+ return err;
+ }
+@@ -5926,8 +5926,8 @@ static int mlxsw_sp_router_fib_event(struct notifier_block *nb,
+ case FIB_EVENT_RULE_DEL:
+ err = mlxsw_sp_router_fib_rule_event(event, info,
+ router->mlxsw_sp);
+- if (!err)
+- return NOTIFY_DONE;
++ if (!err || info->extack)
++ return notifier_from_errno(err);
+ }
+
+ fib_work = kzalloc(sizeof(*fib_work), GFP_ATOMIC);
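
The mlxsw hunk returns notifier_from_errno(err) so the FIB notifier chain can carry -EOPNOTSUPP back to the caller instead of a bare -1. A self-contained sketch of how the kernel encodes an errno into a notifier return value; the constants mirror include/linux/notifier.h, but treat the exact values as an assumption of this sketch:

#include <stdio.h>

#define NOTIFY_OK        0x0001
#define NOTIFY_STOP_MASK 0x8000
#define EOPNOTSUPP       95        /* assumed asm-generic errno value */

static int notifier_from_errno(int err)
{
	if (err)
		return NOTIFY_STOP_MASK | (NOTIFY_OK - err);
	return NOTIFY_OK;
}

static int notifier_to_errno(int ret)
{
	ret &= ~NOTIFY_STOP_MASK;
	return ret > NOTIFY_OK ? NOTIFY_OK - ret : 0;
}

int main(void)
{
	int ret = notifier_from_errno(-EOPNOTSUPP);
	printf("notifier ret=0x%x -> errno=%d\n", ret, notifier_to_errno(ret));
	return 0;
}

The round trip recovers -EOPNOTSUPP exactly, and NOTIFY_STOP_MASK stops further chain traversal.
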
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index 4ed01182a82c..0ae2da9d08c7 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -1013,8 +1013,10 @@ mlxsw_sp_port_vlan_bridge_join(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan,
+ int err;
+
+ /* No need to continue if only VLAN flags were changed */
+- if (mlxsw_sp_port_vlan->bridge_port)
++ if (mlxsw_sp_port_vlan->bridge_port) {
++ mlxsw_sp_port_vlan_put(mlxsw_sp_port_vlan);
+ return 0;
++ }
+
+ err = mlxsw_sp_port_vlan_fid_join(mlxsw_sp_port_vlan, bridge_port);
+ if (err)
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index 59fbf74dcada..dd963cd255f0 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -1057,7 +1057,8 @@ static int netsec_netdev_load_microcode(struct netsec_priv *priv)
+ return 0;
+ }
+
+-static int netsec_reset_hardware(struct netsec_priv *priv)
++static int netsec_reset_hardware(struct netsec_priv *priv,
++ bool load_ucode)
+ {
+ u32 value;
+ int err;
+@@ -1102,11 +1103,14 @@ static int netsec_reset_hardware(struct netsec_priv *priv)
+ netsec_write(priv, NETSEC_REG_NRM_RX_CONFIG,
+ 1 << NETSEC_REG_DESC_ENDIAN);
+
+- err = netsec_netdev_load_microcode(priv);
+- if (err) {
+- netif_err(priv, probe, priv->ndev,
+- "%s: failed to load microcode (%d)\n", __func__, err);
+- return err;
++ if (load_ucode) {
++ err = netsec_netdev_load_microcode(priv);
++ if (err) {
++ netif_err(priv, probe, priv->ndev,
++ "%s: failed to load microcode (%d)\n",
++ __func__, err);
++ return err;
++ }
+ }
+
+ /* start DMA engines */
+@@ -1328,6 +1332,7 @@ err1:
+
+ static int netsec_netdev_stop(struct net_device *ndev)
+ {
++ int ret;
+ struct netsec_priv *priv = netdev_priv(ndev);
+
+ netif_stop_queue(priv->ndev);
+@@ -1343,12 +1348,14 @@ static int netsec_netdev_stop(struct net_device *ndev)
+ netsec_uninit_pkt_dring(priv, NETSEC_RING_TX);
+ netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+
++ ret = netsec_reset_hardware(priv, false);
++
+ phy_stop(ndev->phydev);
+ phy_disconnect(ndev->phydev);
+
+ pm_runtime_put_sync(priv->dev);
+
+- return 0;
++ return ret;
+ }
+
+ static int netsec_netdev_init(struct net_device *ndev)
+@@ -1364,7 +1371,7 @@ static int netsec_netdev_init(struct net_device *ndev)
+ if (ret)
+ goto err1;
+
+- ret = netsec_reset_hardware(priv);
++ ret = netsec_reset_hardware(priv, true);
+ if (ret)
+ goto err2;
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 1e1cc5256eca..57491da89140 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -51,7 +51,7 @@
+ #include <linux/of_mdio.h>
+ #include "dwmac1000.h"
+
+-#define STMMAC_ALIGN(x) L1_CACHE_ALIGN(x)
++#define STMMAC_ALIGN(x) __ALIGN_KERNEL(x, SMP_CACHE_BYTES)
+ #define TSO_MAX_BUFF_SIZE (SZ_16K - 1)
+
+ /* Module parameters */
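
STMMAC_ALIGN now rounds up to SMP_CACHE_BYTES via __ALIGN_KERNEL instead of the compile-time L1 cache constant. Both forms reduce to the classic power-of-two round-up; a sketch of that arithmetic, with the cache-line size chosen arbitrarily for the demo:

#include <stddef.h>
#include <stdio.h>

/* power-of-two round-up, as the kernel's __ALIGN_KERNEL() expands to */
#define ALIGN_UP(x, a) (((x) + ((a) - 1)) & ~((size_t)(a) - 1))

int main(void)
{
	size_t cacheline = 64;	/* assumed SMP_CACHE_BYTES for this demo */
	for (size_t x = 60; x <= 68; x += 4)
		printf("ALIGN_UP(%zu, %zu) = %zu\n", x, cacheline,
		       ALIGN_UP(x, cacheline));
	return 0;
}
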
+diff --git a/drivers/net/ethernet/ti/cpsw-phy-sel.c b/drivers/net/ethernet/ti/cpsw-phy-sel.c
+index 18013645e76c..0c1adad7415d 100644
+--- a/drivers/net/ethernet/ti/cpsw-phy-sel.c
++++ b/drivers/net/ethernet/ti/cpsw-phy-sel.c
+@@ -177,12 +177,18 @@ void cpsw_phy_sel(struct device *dev, phy_interface_t phy_mode, int slave)
+ }
+
+ dev = bus_find_device(&platform_bus_type, NULL, node, match);
+- of_node_put(node);
++ if (!dev) {
++ dev_err(dev, "unable to find platform device for %pOF\n", node);
++ goto out;
++ }
++
+ priv = dev_get_drvdata(dev);
+
+ priv->cpsw_phy_sel(priv, phy_mode, slave);
+
+ put_device(dev);
++out:
++ of_node_put(node);
+ }
+ EXPORT_SYMBOL_GPL(cpsw_phy_sel);
+
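
The cpsw fix routes both the lookup-failure path and the success path through a single of_node_put(), so the device-tree node reference is dropped exactly once either way. A userspace model of that acquire/lookup/single-exit shape, with stand-in helpers that are illustrative rather than kernel API:

#include <stdio.h>
#include <stdlib.h>

struct node { int refs; };

static struct node *node_get(void)
{
	struct node *n = calloc(1, sizeof(*n));
	if (n)
		n->refs = 1;
	return n;
}

static void node_put(struct node *n)
{
	if (n && --n->refs == 0)
		free(n);
}

static void *find_device(struct node *n)
{
	(void)n;
	return NULL;		/* simulate a failed bus lookup */
}

static void configure(void)
{
	struct node *node = node_get();
	void *dev;

	if (!node)
		return;

	dev = find_device(node);
	if (!dev) {
		fprintf(stderr, "unable to find device\n");
		goto out;	/* error path still drops the node reference */
	}
	/* ... use dev ... */
out:
	node_put(node);		/* single exit: reference dropped exactly once */
}

int main(void)
{
	configure();
	return 0;
}
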
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index eaeee3201e8f..37096bf29033 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -738,6 +738,8 @@ struct net_device_context {
+ struct hv_device *device_ctx;
+ /* netvsc_device */
+ struct netvsc_device __rcu *nvdev;
++ /* list of netvsc net_devices */
++ struct list_head list;
+ /* reconfigure work */
+ struct delayed_work dwork;
+ /* last reconfig time */
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 82c3c8e200f0..adc176943d94 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -69,6 +69,8 @@ static int debug = -1;
+ module_param(debug, int, 0444);
+ MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
+
++static LIST_HEAD(netvsc_dev_list);
++
+ static void netvsc_change_rx_flags(struct net_device *net, int change)
+ {
+ struct net_device_context *ndev_ctx = netdev_priv(net);
+@@ -1779,13 +1781,10 @@ out_unlock:
+
+ static struct net_device *get_netvsc_bymac(const u8 *mac)
+ {
+- struct net_device *dev;
+-
+- ASSERT_RTNL();
++ struct net_device_context *ndev_ctx;
+
+- for_each_netdev(&init_net, dev) {
+- if (dev->netdev_ops != &device_ops)
+- continue; /* not a netvsc device */
++ list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) {
++ struct net_device *dev = hv_get_drvdata(ndev_ctx->device_ctx);
+
+ if (ether_addr_equal(mac, dev->perm_addr))
+ return dev;
+@@ -1796,25 +1795,18 @@ static struct net_device *get_netvsc_bymac(const u8 *mac)
+
+ static struct net_device *get_netvsc_byref(struct net_device *vf_netdev)
+ {
++ struct net_device_context *net_device_ctx;
+ struct net_device *dev;
+
+- ASSERT_RTNL();
+-
+- for_each_netdev(&init_net, dev) {
+- struct net_device_context *net_device_ctx;
++ dev = netdev_master_upper_dev_get(vf_netdev);
++ if (!dev || dev->netdev_ops != &device_ops)
++ return NULL; /* not a netvsc device */
+
+- if (dev->netdev_ops != &device_ops)
+- continue; /* not a netvsc device */
++ net_device_ctx = netdev_priv(dev);
++ if (!rtnl_dereference(net_device_ctx->nvdev))
++ return NULL; /* device is removed */
+
+- net_device_ctx = netdev_priv(dev);
+- if (!rtnl_dereference(net_device_ctx->nvdev))
+- continue; /* device is removed */
+-
+- if (rtnl_dereference(net_device_ctx->vf_netdev) == vf_netdev)
+- return dev; /* a match */
+- }
+-
+- return NULL;
++ return dev;
+ }
+
+ /* Called when VF is injecting data into network stack.
+@@ -2094,15 +2086,19 @@ static int netvsc_probe(struct hv_device *dev,
+ else
+ net->max_mtu = ETH_DATA_LEN;
+
+- ret = register_netdev(net);
++ rtnl_lock();
++ ret = register_netdevice(net);
+ if (ret != 0) {
+ pr_err("Unable to register netdev.\n");
+ goto register_failed;
+ }
+
+- return ret;
++ list_add(&net_device_ctx->list, &netvsc_dev_list);
++ rtnl_unlock();
++ return 0;
+
+ register_failed:
++ rtnl_unlock();
+ rndis_filter_device_remove(dev, nvdev);
+ rndis_failed:
+ free_percpu(net_device_ctx->vf_stats);
+@@ -2148,6 +2144,7 @@ static int netvsc_remove(struct hv_device *dev)
+ rndis_filter_device_remove(dev, nvdev);
+
+ unregister_netdevice(net);
++ list_del(&ndev_ctx->list);
+
+ rtnl_unlock();
+ rcu_read_unlock();
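
Instead of walking every net_device in init_net and filtering on netdev_ops, netvsc now keeps a driver-private list of its own contexts, added under the rtnl lock in probe and removed in remove. A userspace model of that pattern; the singly linked list here is a simplification of the kernel's list_head API:

#include <stdio.h>
#include <string.h>

struct ndev_ctx {
	unsigned char mac[6];
	struct ndev_ctx *next;	/* minimal driver-private list */
};

static struct ndev_ctx *dev_list;	/* models netvsc_dev_list */

static void dev_register(struct ndev_ctx *c)
{
	c->next = dev_list;
	dev_list = c;
}

static struct ndev_ctx *get_by_mac(const unsigned char *mac)
{
	for (struct ndev_ctx *c = dev_list; c; c = c->next)
		if (!memcmp(c->mac, mac, 6))
			return c;	/* only devices this driver owns are scanned */
	return NULL;
}

int main(void)
{
	struct ndev_ctx a = { .mac = {0, 1, 2, 3, 4, 5} };

	dev_register(&a);
	printf("found: %s\n", get_by_mac(a.mac) ? "yes" : "no");
	return 0;
}

Scanning only owned devices is both cheaper and correct in the presence of other drivers' netdevs.
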
+diff --git a/drivers/net/netdevsim/devlink.c b/drivers/net/netdevsim/devlink.c
+index bef7db5d129a..82f0e2663e1a 100644
+--- a/drivers/net/netdevsim/devlink.c
++++ b/drivers/net/netdevsim/devlink.c
+@@ -206,6 +206,7 @@ void nsim_devlink_teardown(struct netdevsim *ns)
+ struct net *net = nsim_to_net(ns);
+ bool *reg_devlink = net_generic(net, nsim_devlink_id);
+
++ devlink_resources_unregister(ns->devlink, NULL);
+ devlink_unregister(ns->devlink);
+ devlink_free(ns->devlink);
+ ns->devlink = NULL;
+diff --git a/drivers/net/phy/mdio-mux-bcm-iproc.c b/drivers/net/phy/mdio-mux-bcm-iproc.c
+index 0831b7142df7..0c5b68e7da51 100644
+--- a/drivers/net/phy/mdio-mux-bcm-iproc.c
++++ b/drivers/net/phy/mdio-mux-bcm-iproc.c
+@@ -218,7 +218,7 @@ out:
+
+ static int mdio_mux_iproc_remove(struct platform_device *pdev)
+ {
+- struct iproc_mdiomux_desc *md = dev_get_platdata(&pdev->dev);
++ struct iproc_mdiomux_desc *md = platform_get_drvdata(pdev);
+
+ mdio_mux_uninit(md->mux_handle);
+ mdiobus_unregister(md->mii_bus);
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index c582b2d7546c..18ee7546e4a8 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -612,6 +612,8 @@ void phylink_destroy(struct phylink *pl)
+ {
+ if (pl->sfp_bus)
+ sfp_unregister_upstream(pl->sfp_bus);
++ if (!IS_ERR(pl->link_gpio))
++ gpiod_put(pl->link_gpio);
+
+ cancel_work_sync(&pl->resolve);
+ kfree(pl);
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index fd6c23f69c2f..d437f4f5ed52 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -132,6 +132,13 @@ void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id,
+ br_max = br_nom + br_nom * id->ext.br_min / 100;
+ br_min = br_nom - br_nom * id->ext.br_min / 100;
+ }
++
++ /* When using passive cables, in case neither BR,min nor BR,max
++ * are specified, set br_min to 0 as the nominal value is then
++ * used as the maximum.
++ */
++ if (br_min == br_max && id->base.sfp_ct_passive)
++ br_min = 0;
+ }
+
+ /* Set ethtool support from the compliance fields. */
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 8a76c1e5de8d..838df4c2b17f 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1216,6 +1216,8 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ mod_timer(&dev->stat_monitor,
+ jiffies + STAT_UPDATE_TIMER);
+ }
++
++ tasklet_schedule(&dev->bh);
+ }
+
+ return ret;
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 11a3915e92e9..6bdf01ed07ab 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -551,7 +551,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ struct receive_queue *rq,
+ void *buf, void *ctx,
+ unsigned int len,
+- unsigned int *xdp_xmit)
++ unsigned int *xdp_xmit,
++ unsigned int *rbytes)
+ {
+ struct sk_buff *skb;
+ struct bpf_prog *xdp_prog;
+@@ -567,6 +568,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ int err;
+
+ len -= vi->hdr_len;
++ *rbytes += len;
+
+ rcu_read_lock();
+ xdp_prog = rcu_dereference(rq->xdp_prog);
+@@ -666,11 +668,13 @@ static struct sk_buff *receive_big(struct net_device *dev,
+ struct virtnet_info *vi,
+ struct receive_queue *rq,
+ void *buf,
+- unsigned int len)
++ unsigned int len,
++ unsigned int *rbytes)
+ {
+ struct page *page = buf;
+ struct sk_buff *skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE);
+
++ *rbytes += len - vi->hdr_len;
+ if (unlikely(!skb))
+ goto err;
+
+@@ -688,7 +692,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ void *buf,
+ void *ctx,
+ unsigned int len,
+- unsigned int *xdp_xmit)
++ unsigned int *xdp_xmit,
++ unsigned int *rbytes)
+ {
+ struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
+ u16 num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
+@@ -702,6 +707,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ int err;
+
+ head_skb = NULL;
++ *rbytes += len - vi->hdr_len;
+
+ rcu_read_lock();
+ xdp_prog = rcu_dereference(rq->xdp_prog);
+@@ -831,6 +837,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ goto err_buf;
+ }
+
++ *rbytes += len;
+ page = virt_to_head_page(buf);
+
+ truesize = mergeable_ctx_to_truesize(ctx);
+@@ -886,6 +893,7 @@ err_skb:
+ dev->stats.rx_length_errors++;
+ break;
+ }
++ *rbytes += len;
+ page = virt_to_head_page(buf);
+ put_page(page);
+ }
+@@ -896,14 +904,13 @@ xdp_xmit:
+ return NULL;
+ }
+
+-static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+- void *buf, unsigned int len, void **ctx,
+- unsigned int *xdp_xmit)
++static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
++ void *buf, unsigned int len, void **ctx,
++ unsigned int *xdp_xmit, unsigned int *rbytes)
+ {
+ struct net_device *dev = vi->dev;
+ struct sk_buff *skb;
+ struct virtio_net_hdr_mrg_rxbuf *hdr;
+- int ret;
+
+ if (unlikely(len < vi->hdr_len + ETH_HLEN)) {
+ pr_debug("%s: short packet %i\n", dev->name, len);
+@@ -915,23 +922,22 @@ static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+ } else {
+ put_page(virt_to_head_page(buf));
+ }
+- return 0;
++ return;
+ }
+
+ if (vi->mergeable_rx_bufs)
+- skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit);
++ skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit,
++ rbytes);
+ else if (vi->big_packets)
+- skb = receive_big(dev, vi, rq, buf, len);
++ skb = receive_big(dev, vi, rq, buf, len, rbytes);
+ else
+- skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit);
++ skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, rbytes);
+
+ if (unlikely(!skb))
+- return 0;
++ return;
+
+ hdr = skb_vnet_hdr(skb);
+
+- ret = skb->len;
+-
+ if (hdr->hdr.flags & VIRTIO_NET_HDR_F_DATA_VALID)
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+@@ -948,12 +954,11 @@ static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+ ntohs(skb->protocol), skb->len, skb->pkt_type);
+
+ napi_gro_receive(&rq->napi, skb);
+- return ret;
++ return;
+
+ frame_err:
+ dev->stats.rx_frame_errors++;
+ dev_kfree_skb(skb);
+- return 0;
+ }
+
+ /* Unlike mergeable buffers, all buffers are allocated to the
+@@ -1203,13 +1208,13 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
+
+ while (received < budget &&
+ (buf = virtqueue_get_buf_ctx(rq->vq, &len, &ctx))) {
+- bytes += receive_buf(vi, rq, buf, len, ctx, xdp_xmit);
++ receive_buf(vi, rq, buf, len, ctx, xdp_xmit, &bytes);
+ received++;
+ }
+ } else {
+ while (received < budget &&
+ (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
+- bytes += receive_buf(vi, rq, buf, len, NULL, xdp_xmit);
++ receive_buf(vi, rq, buf, len, NULL, xdp_xmit, &bytes);
+ received++;
+ }
+ }
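
receive_buf() used to return the delivered byte count, so the early-return and error paths inside the mergeable receive silently dropped bytes from the rx statistics. The refactor threads an unsigned int *rbytes accumulator through all the receive helpers, letting deeply nested code account bytes even when assembly fails partway. A minimal model of that out-parameter pattern:

#include <stdio.h>

/* consume a chain of fragments; account every fragment's payload into
 * *rbytes even when assembly fails partway, as the rbytes refactor does */
static void receive_mergeable(const unsigned int *frags, int n,
			      unsigned int hdr_len, int fail_at,
			      unsigned int *rbytes)
{
	*rbytes += frags[0] - hdr_len;	/* head buffer minus the header */
	for (int i = 1; i < n; i++) {
		*rbytes += frags[i];
		if (i == fail_at)
			return;		/* error path: bytes already counted */
	}
}

int main(void)
{
	unsigned int bytes = 0;
	unsigned int frags[] = { 100, 200, 300 };

	receive_mergeable(frags, 3, 12, 2, &bytes);
	printf("accounted %u bytes despite the failure\n", bytes);
	return 0;
}
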
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index 8a3020dbd4cf..50b52a9e9648 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -1253,14 +1253,61 @@ out:
+ return ret;
+ }
+
++static int ath10k_core_search_bd(struct ath10k *ar,
++ const char *boardname,
++ const u8 *data,
++ size_t len)
++{
++ size_t ie_len;
++ struct ath10k_fw_ie *hdr;
++ int ret = -ENOENT, ie_id;
++
++ while (len > sizeof(struct ath10k_fw_ie)) {
++ hdr = (struct ath10k_fw_ie *)data;
++ ie_id = le32_to_cpu(hdr->id);
++ ie_len = le32_to_cpu(hdr->len);
++
++ len -= sizeof(*hdr);
++ data = hdr->data;
++
++ if (len < ALIGN(ie_len, 4)) {
++ ath10k_err(ar, "invalid length for board ie_id %d ie_len %zu len %zu\n",
++ ie_id, ie_len, len);
++ return -EINVAL;
++ }
++
++ switch (ie_id) {
++ case ATH10K_BD_IE_BOARD:
++ ret = ath10k_core_parse_bd_ie_board(ar, data, ie_len,
++ boardname);
++ if (ret == -ENOENT)
++ /* no match found, continue */
++ break;
++
++ /* either found or error, so stop searching */
++ goto out;
++ }
++
++ /* jump over the padding */
++ ie_len = ALIGN(ie_len, 4);
++
++ len -= ie_len;
++ data += ie_len;
++ }
++
++out:
++ /* return result of parse_bd_ie_board() or -ENOENT */
++ return ret;
++}
++
+ static int ath10k_core_fetch_board_data_api_n(struct ath10k *ar,
+ const char *boardname,
++ const char *fallback_boardname,
+ const char *filename)
+ {
+- size_t len, magic_len, ie_len;
+- struct ath10k_fw_ie *hdr;
++ size_t len, magic_len;
+ const u8 *data;
+- int ret, ie_id;
++ int ret;
+
+ ar->normal_mode_fw.board = ath10k_fetch_fw_file(ar,
+ ar->hw_params.fw.dir,
+@@ -1298,69 +1345,23 @@ static int ath10k_core_fetch_board_data_api_n(struct ath10k *ar,
+ data += magic_len;
+ len -= magic_len;
+
+- while (len > sizeof(struct ath10k_fw_ie)) {
+- hdr = (struct ath10k_fw_ie *)data;
+- ie_id = le32_to_cpu(hdr->id);
+- ie_len = le32_to_cpu(hdr->len);
+-
+- len -= sizeof(*hdr);
+- data = hdr->data;
+-
+- if (len < ALIGN(ie_len, 4)) {
+- ath10k_err(ar, "invalid length for board ie_id %d ie_len %zu len %zu\n",
+- ie_id, ie_len, len);
+- ret = -EINVAL;
+- goto err;
+- }
++ /* attempt to find boardname in the IE list */
++ ret = ath10k_core_search_bd(ar, boardname, data, len);
+
+- switch (ie_id) {
+- case ATH10K_BD_IE_BOARD:
+- ret = ath10k_core_parse_bd_ie_board(ar, data, ie_len,
+- boardname);
+- if (ret == -ENOENT && ar->id.bdf_ext[0] != '\0') {
+- /* try default bdf if variant was not found */
+- char *s, *v = ",variant=";
+- char boardname2[100];
+-
+- strlcpy(boardname2, boardname,
+- sizeof(boardname2));
+-
+- s = strstr(boardname2, v);
+- if (s)
+- *s = '\0'; /* strip ",variant=%s" */
++ /* if we didn't find it and have a fallback name, try that */
++ if (ret == -ENOENT && fallback_boardname)
++ ret = ath10k_core_search_bd(ar, fallback_boardname, data, len);
+
+- ret = ath10k_core_parse_bd_ie_board(ar, data,
+- ie_len,
+- boardname2);
+- }
+-
+- if (ret == -ENOENT)
+- /* no match found, continue */
+- break;
+- else if (ret)
+- /* there was an error, bail out */
+- goto err;
+-
+- /* board data found */
+- goto out;
+- }
+-
+- /* jump over the padding */
+- ie_len = ALIGN(ie_len, 4);
+-
+- len -= ie_len;
+- data += ie_len;
+- }
+-
+-out:
+- if (!ar->normal_mode_fw.board_data || !ar->normal_mode_fw.board_len) {
++ if (ret == -ENOENT) {
+ ath10k_err(ar,
+ "failed to fetch board data for %s from %s/%s\n",
+ boardname, ar->hw_params.fw.dir, filename);
+ ret = -ENODATA;
+- goto err;
+ }
+
++ if (ret)
++ goto err;
++
+ return 0;
+
+ err:
+@@ -1369,12 +1370,12 @@ err:
+ }
+
+ static int ath10k_core_create_board_name(struct ath10k *ar, char *name,
+- size_t name_len)
++ size_t name_len, bool with_variant)
+ {
+ /* strlen(',variant=') + strlen(ar->id.bdf_ext) */
+ char variant[9 + ATH10K_SMBIOS_BDF_EXT_STR_LENGTH] = { 0 };
+
+- if (ar->id.bdf_ext[0] != '\0')
++ if (with_variant && ar->id.bdf_ext[0] != '\0')
+ scnprintf(variant, sizeof(variant), ",variant=%s",
+ ar->id.bdf_ext);
+
+@@ -1400,17 +1401,26 @@ out:
+
+ static int ath10k_core_fetch_board_file(struct ath10k *ar)
+ {
+- char boardname[100];
++ char boardname[100], fallback_boardname[100];
+ int ret;
+
+- ret = ath10k_core_create_board_name(ar, boardname, sizeof(boardname));
++ ret = ath10k_core_create_board_name(ar, boardname,
++ sizeof(boardname), true);
+ if (ret) {
+ ath10k_err(ar, "failed to create board name: %d", ret);
+ return ret;
+ }
+
++ ret = ath10k_core_create_board_name(ar, fallback_boardname,
++ sizeof(boardname), false);
++ if (ret) {
++ ath10k_err(ar, "failed to create fallback board name: %d", ret);
++ return ret;
++ }
++
+ ar->bd_api = 2;
+ ret = ath10k_core_fetch_board_data_api_n(ar, boardname,
++ fallback_boardname,
+ ATH10K_BOARD_API2_FILE);
+ if (!ret)
+ goto success;
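
The ath10k refactor extracts the IE walk into ath10k_core_search_bd() and, rather than stripping the ",variant=" suffix inline on a miss, builds a variant-less fallback name up front and searches twice. A userspace sketch of generating the pair of names; the boardname layout is an assumption taken from the hunk above, not a guaranteed format:

#include <stdio.h>

/* build "base,variant=foo" and its variant-less fallback */
static void make_board_names(const char *base, const char *variant,
			     char *name, size_t nlen,
			     char *fallback, size_t flen)
{
	if (variant && variant[0])
		snprintf(name, nlen, "%s,variant=%s", base, variant);
	else
		snprintf(name, nlen, "%s", base);
	snprintf(fallback, flen, "%s", base);	/* never carries the variant */
}

int main(void)
{
	char name[100], fallback[100];

	make_board_names("bus=pci,vendor=168c,device=003e", "RV_AC9560",
			 name, sizeof(name), fallback, sizeof(fallback));
	printf("primary : %s\nfallback: %s\n", name, fallback);
	return 0;
}

Doing the fallback as a second full search keeps the IE walk itself free of string surgery.
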
+diff --git a/drivers/net/wireless/ath/ath10k/debug.c b/drivers/net/wireless/ath/ath10k/debug.c
+index bac832ce1873..442a6f37e45e 100644
+--- a/drivers/net/wireless/ath/ath10k/debug.c
++++ b/drivers/net/wireless/ath/ath10k/debug.c
+@@ -1519,7 +1519,13 @@ static void ath10k_tpc_stats_print(struct ath10k_tpc_stats *tpc_stats,
+ *len += scnprintf(buf + *len, buf_len - *len,
+ "********************************\n");
+ *len += scnprintf(buf + *len, buf_len - *len,
+- "No. Preamble Rate_code tpc_value1 tpc_value2 tpc_value3\n");
++ "No. Preamble Rate_code ");
++
++ for (i = 0; i < WMI_TPC_TX_N_CHAIN; i++)
++ *len += scnprintf(buf + *len, buf_len - *len,
++ "tpc_value%d ", i);
++
++ *len += scnprintf(buf + *len, buf_len - *len, "\n");
+
+ for (i = 0; i < tpc_stats->rate_max; i++) {
+ *len += scnprintf(buf + *len, buf_len - *len,
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index c5e1ca5945db..fcc5aa9f3357 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -4479,6 +4479,12 @@ void ath10k_wmi_event_pdev_tpc_config(struct ath10k *ar, struct sk_buff *skb)
+
+ num_tx_chain = __le32_to_cpu(ev->num_tx_chain);
+
++ if (num_tx_chain > WMI_TPC_TX_N_CHAIN) {
++ ath10k_warn(ar, "number of tx chain is %d greater than TPC configured tx chain %d\n",
++ num_tx_chain, WMI_TPC_TX_N_CHAIN);
++ return;
++ }
++
+ ath10k_wmi_tpc_config_get_rate_code(rate_code, pream_table,
+ num_tx_chain);
+
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 6fbc84c29521..7fde22ea2ffa 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -4008,9 +4008,9 @@ struct wmi_pdev_get_tpc_config_cmd {
+ } __packed;
+
+ #define WMI_TPC_CONFIG_PARAM 1
+-#define WMI_TPC_RATE_MAX 160
+ #define WMI_TPC_FINAL_RATE_MAX 240
+ #define WMI_TPC_TX_N_CHAIN 4
++#define WMI_TPC_RATE_MAX (WMI_TPC_TX_N_CHAIN * 65)
+ #define WMI_TPC_PREAM_TABLE_MAX 10
+ #define WMI_TPC_FLAG 3
+ #define WMI_TPC_BUF_SIZE 10
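
Taken together, the wmi.c and wmi.h hunks validate the firmware-supplied num_tx_chain against WMI_TPC_TX_N_CHAIN before it sizes any table walk, and derive WMI_TPC_RATE_MAX from the chain count so the two limits cannot drift apart. A sketch of that validate-then-use idiom:

#include <stdio.h>

#define TPC_TX_N_CHAIN 4
#define TPC_RATE_MAX   (TPC_TX_N_CHAIN * 65)	/* derived, as in the wmi.h hunk */

static int handle_tpc_event(unsigned int num_tx_chain)
{
	if (num_tx_chain > TPC_TX_N_CHAIN) {
		fprintf(stderr, "tx chains %u exceed configured max %d, rejecting\n",
			num_tx_chain, TPC_TX_N_CHAIN);
		return -1;	/* firmware value never indexes the tables */
	}
	printf("walking up to %d rate entries for %u chains\n",
	       TPC_RATE_MAX, num_tx_chain);
	return 0;
}

int main(void)
{
	handle_tpc_event(2);
	handle_tpc_event(8);	/* out-of-range input is refused */
	return 0;
}
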
+diff --git a/drivers/net/wireless/ath/regd.h b/drivers/net/wireless/ath/regd.h
+index 5d80be213fac..869f276cc1d8 100644
+--- a/drivers/net/wireless/ath/regd.h
++++ b/drivers/net/wireless/ath/regd.h
+@@ -68,12 +68,14 @@ enum CountryCode {
+ CTRY_AUSTRALIA = 36,
+ CTRY_AUSTRIA = 40,
+ CTRY_AZERBAIJAN = 31,
++ CTRY_BAHAMAS = 44,
+ CTRY_BAHRAIN = 48,
+ CTRY_BANGLADESH = 50,
+ CTRY_BARBADOS = 52,
+ CTRY_BELARUS = 112,
+ CTRY_BELGIUM = 56,
+ CTRY_BELIZE = 84,
++ CTRY_BERMUDA = 60,
+ CTRY_BOLIVIA = 68,
+ CTRY_BOSNIA_HERZ = 70,
+ CTRY_BRAZIL = 76,
+@@ -159,6 +161,7 @@ enum CountryCode {
+ CTRY_ROMANIA = 642,
+ CTRY_RUSSIA = 643,
+ CTRY_SAUDI_ARABIA = 682,
++ CTRY_SERBIA = 688,
+ CTRY_SERBIA_MONTENEGRO = 891,
+ CTRY_SINGAPORE = 702,
+ CTRY_SLOVAKIA = 703,
+@@ -170,11 +173,13 @@ enum CountryCode {
+ CTRY_SWITZERLAND = 756,
+ CTRY_SYRIA = 760,
+ CTRY_TAIWAN = 158,
++ CTRY_TANZANIA = 834,
+ CTRY_THAILAND = 764,
+ CTRY_TRINIDAD_Y_TOBAGO = 780,
+ CTRY_TUNISIA = 788,
+ CTRY_TURKEY = 792,
+ CTRY_UAE = 784,
++ CTRY_UGANDA = 800,
+ CTRY_UKRAINE = 804,
+ CTRY_UNITED_KINGDOM = 826,
+ CTRY_UNITED_STATES = 840,
+diff --git a/drivers/net/wireless/ath/regd_common.h b/drivers/net/wireless/ath/regd_common.h
+index bdd2b4d61f2f..15bbd1e0d912 100644
+--- a/drivers/net/wireless/ath/regd_common.h
++++ b/drivers/net/wireless/ath/regd_common.h
+@@ -35,6 +35,7 @@ enum EnumRd {
+ FRANCE_RES = 0x31,
+ FCC3_FCCA = 0x3A,
+ FCC3_WORLD = 0x3B,
++ FCC3_ETSIC = 0x3F,
+
+ ETSI1_WORLD = 0x37,
+ ETSI3_ETSIA = 0x32,
+@@ -44,6 +45,7 @@ enum EnumRd {
+ ETSI4_ETSIC = 0x38,
+ ETSI5_WORLD = 0x39,
+ ETSI6_WORLD = 0x34,
++ ETSI8_WORLD = 0x3D,
+ ETSI_RESERVED = 0x33,
+
+ MKK1_MKKA = 0x40,
+@@ -59,6 +61,7 @@ enum EnumRd {
+ MKK1_MKKA1 = 0x4A,
+ MKK1_MKKA2 = 0x4B,
+ MKK1_MKKC = 0x4C,
++ APL2_FCCA = 0x4D,
+
+ APL3_FCCA = 0x50,
+ APL1_WORLD = 0x52,
+@@ -67,6 +70,7 @@ enum EnumRd {
+ APL1_ETSIC = 0x55,
+ APL2_ETSIC = 0x56,
+ APL5_WORLD = 0x58,
++ APL13_WORLD = 0x5A,
+ APL6_WORLD = 0x5B,
+ APL7_FCCA = 0x5C,
+ APL8_WORLD = 0x5D,
+@@ -168,6 +172,7 @@ static struct reg_dmn_pair_mapping regDomainPairs[] = {
+ {FCC2_ETSIC, CTL_FCC, CTL_ETSI},
+ {FCC3_FCCA, CTL_FCC, CTL_FCC},
+ {FCC3_WORLD, CTL_FCC, CTL_ETSI},
++ {FCC3_ETSIC, CTL_FCC, CTL_ETSI},
+ {FCC4_FCCA, CTL_FCC, CTL_FCC},
+ {FCC5_FCCA, CTL_FCC, CTL_FCC},
+ {FCC6_FCCA, CTL_FCC, CTL_FCC},
+@@ -179,6 +184,7 @@ static struct reg_dmn_pair_mapping regDomainPairs[] = {
+ {ETSI4_WORLD, CTL_ETSI, CTL_ETSI},
+ {ETSI5_WORLD, CTL_ETSI, CTL_ETSI},
+ {ETSI6_WORLD, CTL_ETSI, CTL_ETSI},
++ {ETSI8_WORLD, CTL_ETSI, CTL_ETSI},
+
+ /* XXX: For ETSI3_ETSIA, Was NO_CTL meant for the 2 GHz band ? */
+ {ETSI3_ETSIA, CTL_ETSI, CTL_ETSI},
+@@ -188,9 +194,11 @@ static struct reg_dmn_pair_mapping regDomainPairs[] = {
+ {FCC1_FCCA, CTL_FCC, CTL_FCC},
+ {APL1_WORLD, CTL_FCC, CTL_ETSI},
+ {APL2_WORLD, CTL_FCC, CTL_ETSI},
++ {APL2_FCCA, CTL_FCC, CTL_FCC},
+ {APL3_WORLD, CTL_FCC, CTL_ETSI},
+ {APL4_WORLD, CTL_FCC, CTL_ETSI},
+ {APL5_WORLD, CTL_FCC, CTL_ETSI},
++ {APL13_WORLD, CTL_ETSI, CTL_ETSI},
+ {APL6_WORLD, CTL_ETSI, CTL_ETSI},
+ {APL8_WORLD, CTL_ETSI, CTL_ETSI},
+ {APL9_WORLD, CTL_ETSI, CTL_ETSI},
+@@ -298,6 +306,7 @@ static struct country_code_to_enum_rd allCountries[] = {
+ {CTRY_AUSTRALIA2, FCC6_WORLD, "AU"},
+ {CTRY_AUSTRIA, ETSI1_WORLD, "AT"},
+ {CTRY_AZERBAIJAN, ETSI4_WORLD, "AZ"},
++ {CTRY_BAHAMAS, FCC3_WORLD, "BS"},
+ {CTRY_BAHRAIN, APL6_WORLD, "BH"},
+ {CTRY_BANGLADESH, NULL1_WORLD, "BD"},
+ {CTRY_BARBADOS, FCC2_WORLD, "BB"},
+@@ -305,6 +314,7 @@ static struct country_code_to_enum_rd allCountries[] = {
+ {CTRY_BELGIUM, ETSI1_WORLD, "BE"},
+ {CTRY_BELGIUM2, ETSI4_WORLD, "BL"},
+ {CTRY_BELIZE, APL1_ETSIC, "BZ"},
++ {CTRY_BERMUDA, FCC3_FCCA, "BM"},
+ {CTRY_BOLIVIA, APL1_ETSIC, "BO"},
+ {CTRY_BOSNIA_HERZ, ETSI1_WORLD, "BA"},
+ {CTRY_BRAZIL, FCC3_WORLD, "BR"},
+@@ -444,6 +454,7 @@ static struct country_code_to_enum_rd allCountries[] = {
+ {CTRY_ROMANIA, NULL1_WORLD, "RO"},
+ {CTRY_RUSSIA, NULL1_WORLD, "RU"},
+ {CTRY_SAUDI_ARABIA, NULL1_WORLD, "SA"},
++ {CTRY_SERBIA, ETSI1_WORLD, "RS"},
+ {CTRY_SERBIA_MONTENEGRO, ETSI1_WORLD, "CS"},
+ {CTRY_SINGAPORE, APL6_WORLD, "SG"},
+ {CTRY_SLOVAKIA, ETSI1_WORLD, "SK"},
+@@ -455,10 +466,12 @@ static struct country_code_to_enum_rd allCountries[] = {
+ {CTRY_SWITZERLAND, ETSI1_WORLD, "CH"},
+ {CTRY_SYRIA, NULL1_WORLD, "SY"},
+ {CTRY_TAIWAN, APL3_FCCA, "TW"},
++ {CTRY_TANZANIA, APL1_WORLD, "TZ"},
+ {CTRY_THAILAND, FCC3_WORLD, "TH"},
+ {CTRY_TRINIDAD_Y_TOBAGO, FCC3_WORLD, "TT"},
+ {CTRY_TUNISIA, ETSI3_WORLD, "TN"},
+ {CTRY_TURKEY, ETSI3_WORLD, "TR"},
++ {CTRY_UGANDA, FCC3_WORLD, "UG"},
+ {CTRY_UKRAINE, NULL1_WORLD, "UA"},
+ {CTRY_UAE, NULL1_WORLD, "AE"},
+ {CTRY_UNITED_KINGDOM, ETSI1_WORLD, "GB"},
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+index 0b68240ec7b4..a1915411c280 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+@@ -963,6 +963,7 @@ static const struct sdio_device_id brcmf_sdmmc_ids[] = {
+ BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43340),
+ BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43341),
+ BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43362),
++ BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43364),
+ BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4335_4339),
+ BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4339),
+ BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43430),
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 90f8c89ea59c..2f7b9421410f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -2652,7 +2652,7 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
+
+ mutex_lock(&mvm->mutex);
+ /* track whether or not the station is associated */
+- mvm_sta->associated = new_state >= IEEE80211_STA_ASSOC;
++ mvm_sta->sta_state = new_state;
+
+ if (old_state == IEEE80211_STA_NOTEXIST &&
+ new_state == IEEE80211_STA_NONE) {
+@@ -2704,8 +2704,7 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
+ iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
+ }
+
+- iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band,
+- true);
++ iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band);
+ ret = iwl_mvm_update_sta(mvm, vif, sta);
+ } else if (old_state == IEEE80211_STA_ASSOC &&
+ new_state == IEEE80211_STA_AUTHORIZED) {
+@@ -2721,8 +2720,7 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
+ /* enable beacon filtering */
+ WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif, 0));
+
+- iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band,
+- false);
++ iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band);
+
+ ret = 0;
+ } else if (old_state == IEEE80211_STA_AUTHORIZED &&
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 5d776ec1840f..36f27981165c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -3,6 +3,7 @@
+ * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+ * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
++ * Copyright(c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+@@ -13,10 +14,6 @@
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc.,
+- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
+- *
+ * The full GNU General Public License is included in this distribution in the
+ * file called LICENSE.
+ *
+@@ -651,9 +648,10 @@ static void rs_tl_turn_on_agg(struct iwl_mvm *mvm, struct iwl_mvm_sta *mvmsta,
+ }
+
+ tid_data = &mvmsta->tid_data[tid];
+- if ((tid_data->state == IWL_AGG_OFF) &&
++ if (mvmsta->sta_state >= IEEE80211_STA_AUTHORIZED &&
++ tid_data->state == IWL_AGG_OFF &&
+ (lq_sta->tx_agg_tid_en & BIT(tid)) &&
+- (tid_data->tx_count_last >= IWL_MVM_RS_AGG_START_THRESHOLD)) {
++ tid_data->tx_count_last >= IWL_MVM_RS_AGG_START_THRESHOLD) {
+ IWL_DEBUG_RATE(mvm, "try to aggregate tid %d\n", tid);
+ if (rs_tl_turn_on_agg_for_tid(mvm, lq_sta, tid, sta) == 0)
+ tid_data->state = IWL_AGG_QUEUED;
+@@ -1257,7 +1255,7 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ (unsigned long)(lq_sta->last_tx +
+ (IWL_MVM_RS_IDLE_TIMEOUT * HZ)))) {
+ IWL_DEBUG_RATE(mvm, "Tx idle for too long. reinit rs\n");
+- iwl_mvm_rs_rate_init(mvm, sta, info->band, false);
++ iwl_mvm_rs_rate_init(mvm, sta, info->band);
+ return;
+ }
+ lq_sta->last_tx = jiffies;
+@@ -2684,9 +2682,9 @@ static void rs_get_initial_rate(struct iwl_mvm *mvm,
+ struct ieee80211_sta *sta,
+ struct iwl_lq_sta *lq_sta,
+ enum nl80211_band band,
+- struct rs_rate *rate,
+- bool init)
++ struct rs_rate *rate)
+ {
++ struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ int i, nentries;
+ unsigned long active_rate;
+ s8 best_rssi = S8_MIN;
+@@ -2748,7 +2746,8 @@ static void rs_get_initial_rate(struct iwl_mvm *mvm,
+ * bandwidth rate, and after authorization, when the phy context
+ * is already up-to-date, re-init rs with the correct bw.
+ */
+- u32 bw = init ? RATE_MCS_CHAN_WIDTH_20 : rs_bw_from_sta_bw(sta);
++ u32 bw = mvmsta->sta_state < IEEE80211_STA_AUTHORIZED ?
++ RATE_MCS_CHAN_WIDTH_20 : rs_bw_from_sta_bw(sta);
+
+ switch (bw) {
+ case RATE_MCS_CHAN_WIDTH_40:
+@@ -2833,9 +2832,9 @@ void rs_update_last_rssi(struct iwl_mvm *mvm,
+ static void rs_initialize_lq(struct iwl_mvm *mvm,
+ struct ieee80211_sta *sta,
+ struct iwl_lq_sta *lq_sta,
+- enum nl80211_band band,
+- bool init)
++ enum nl80211_band band)
+ {
++ struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ struct iwl_scale_tbl_info *tbl;
+ struct rs_rate *rate;
+ u8 active_tbl = 0;
+@@ -2851,7 +2850,7 @@ static void rs_initialize_lq(struct iwl_mvm *mvm,
+ tbl = &(lq_sta->lq_info[active_tbl]);
+ rate = &tbl->rate;
+
+- rs_get_initial_rate(mvm, sta, lq_sta, band, rate, init);
++ rs_get_initial_rate(mvm, sta, lq_sta, band, rate);
+ rs_init_optimal_rate(mvm, sta, lq_sta);
+
+ WARN_ONCE(rate->ant != ANT_A && rate->ant != ANT_B,
+@@ -2864,7 +2863,8 @@ static void rs_initialize_lq(struct iwl_mvm *mvm,
+ rs_set_expected_tpt_table(lq_sta, tbl);
+ rs_fill_lq_cmd(mvm, sta, lq_sta, rate);
+ /* TODO restore station should remember the lq cmd */
+- iwl_mvm_send_lq_cmd(mvm, &lq_sta->lq, init);
++ iwl_mvm_send_lq_cmd(mvm, &lq_sta->lq,
++ mvmsta->sta_state < IEEE80211_STA_AUTHORIZED);
+ }
+
+ static void rs_drv_get_rate(void *mvm_r, struct ieee80211_sta *sta,
+@@ -3117,7 +3117,7 @@ void iwl_mvm_update_frame_stats(struct iwl_mvm *mvm, u32 rate, bool agg)
+ * Called after adding a new station to initialize rate scaling
+ */
+ static void rs_drv_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+- enum nl80211_band band, bool init)
++ enum nl80211_band band)
+ {
+ int i, j;
+ struct ieee80211_hw *hw = mvm->hw;
+@@ -3196,7 +3196,7 @@ static void rs_drv_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+ iwl_mvm_reset_frame_stats(mvm);
+ #endif
+- rs_initialize_lq(mvm, sta, lq_sta, band, init);
++ rs_initialize_lq(mvm, sta, lq_sta, band);
+ }
+
+ static void rs_drv_rate_update(void *mvm_r,
+@@ -3216,7 +3216,7 @@ static void rs_drv_rate_update(void *mvm_r,
+ for (tid = 0; tid < IWL_MAX_TID_COUNT; tid++)
+ ieee80211_stop_tx_ba_session(sta, tid);
+
+- iwl_mvm_rs_rate_init(mvm, sta, sband->band, false);
++ iwl_mvm_rs_rate_init(mvm, sta, sband->band);
+ }
+
+ #ifdef CONFIG_MAC80211_DEBUGFS
+@@ -4062,12 +4062,12 @@ static const struct rate_control_ops rs_mvm_ops_drv = {
+ };
+
+ void iwl_mvm_rs_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+- enum nl80211_band band, bool init)
++ enum nl80211_band band)
+ {
+ if (iwl_mvm_has_tlc_offload(mvm))
+ rs_fw_rate_init(mvm, sta, band);
+ else
+- rs_drv_rate_init(mvm, sta, band, init);
++ rs_drv_rate_init(mvm, sta, band);
+ }
+
+ int iwl_mvm_rate_control_register(void)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.h b/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
+index fb18cb8c233d..f9b272236021 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
+@@ -3,6 +3,7 @@
+ * Copyright(c) 2003 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2015 Intel Mobile Communications GmbH
+ * Copyright(c) 2017 Intel Deutschland GmbH
++ * Copyright(c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+@@ -13,10 +14,6 @@
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc.,
+- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
+- *
+ * The full GNU General Public License is included in this distribution in the
+ * file called LICENSE.
+ *
+@@ -410,7 +407,7 @@ struct iwl_lq_sta {
+
+ /* Initialize station's rate scaling information after adding station */
+ void iwl_mvm_rs_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+- enum nl80211_band band, bool init);
++ enum nl80211_band band);
+
+ /* Notify RS about Tx status */
+ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 80067eb9ea05..fdc8ba319c1f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -214,7 +214,7 @@ int iwl_mvm_sta_send_to_fw(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ cpu_to_le32(agg_size << STA_FLG_MAX_AGG_SIZE_SHIFT);
+ add_sta_cmd.station_flags |=
+ cpu_to_le32(mpdu_dens << STA_FLG_AGG_MPDU_DENS_SHIFT);
+- if (mvm_sta->associated)
++ if (mvm_sta->sta_state >= IEEE80211_STA_ASSOC)
+ add_sta_cmd.assoc_id = cpu_to_le16(sta->aid);
+
+ if (sta->wme) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
+index 5ffd6adbc383..d0fa0be31b0d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
+@@ -8,6 +8,7 @@
+ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
++ * Copyright(c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+@@ -18,11 +19,6 @@
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
+- * USA
+- *
+ * The full GNU General Public License is included in this distribution
+ * in the file called COPYING.
+ *
+@@ -35,6 +31,7 @@
+ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
++ * Copyright(c) 2018 Intel Corporation
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+@@ -376,6 +373,7 @@ struct iwl_mvm_rxq_dup_data {
+ * tid.
+ * @max_agg_bufsize: the maximal size of the AGG buffer for this station
+ * @sta_type: station type
++ * @sta_state: station state according to enum %ieee80211_sta_state
+ * @bt_reduced_txpower: is reduced tx power enabled for this station
+ * @next_status_eosp: the next reclaimed packet is a PS-Poll response and
+ * we need to signal the EOSP
+@@ -414,6 +412,7 @@ struct iwl_mvm_sta {
+ u16 tid_disable_agg;
+ u8 max_agg_bufsize;
+ enum iwl_sta_type sta_type;
++ enum ieee80211_sta_state sta_state;
+ bool bt_reduced_txpower;
+ bool next_status_eosp;
+ spinlock_t lock;
+@@ -438,7 +437,6 @@ struct iwl_mvm_sta {
+ bool disable_tx;
+ bool tlc_amsdu;
+ bool sleeping;
+- bool associated;
+ u8 agg_tids;
+ u8 sleep_tx_count;
+ u8 avg_energy;
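
Replacing the bool associated with the full enum ieee80211_sta_state lets the iwlwifi callers compare against thresholds: >= IEEE80211_STA_ASSOC where the old flag sufficed, and >= IEEE80211_STA_AUTHORIZED where rate scaling and aggregation need the later state. A sketch of threshold checks over an ordered enum:

#include <stdio.h>

/* ordered as in mac80211: each state implies all earlier ones */
enum sta_state { STA_NOTEXIST, STA_NONE, STA_AUTH, STA_ASSOC, STA_AUTHORIZED };

static int can_send_assoc_id(enum sta_state s)
{
	return s >= STA_ASSOC;		/* old bool only captured this one */
}

static int can_start_aggregation(enum sta_state s)
{
	return s >= STA_AUTHORIZED;	/* stricter check the bool couldn't express */
}

int main(void)
{
	enum sta_state s = STA_ASSOC;

	printf("assoc_id: %d, aggregation: %d\n",
	       can_send_assoc_id(s), can_start_aggregation(s));
	return 0;
}

One ordered enum replaces a family of booleans and makes the "at least this far" checks self-documenting.
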
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index f25ce3a1ea50..d57f2a08ca88 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -901,6 +901,8 @@ static int _iwl_pcie_rx_init(struct iwl_trans *trans)
+ }
+ def_rxq = trans_pcie->rxq;
+
++ cancel_work_sync(&rba->rx_alloc);
++
+ spin_lock(&rba->lock);
+ atomic_set(&rba->req_pending, 0);
+ atomic_set(&rba->req_ready, 0);
+diff --git a/drivers/net/wireless/marvell/mwifiex/usb.c b/drivers/net/wireless/marvell/mwifiex/usb.c
+index 4bc244801636..26ca670584c0 100644
+--- a/drivers/net/wireless/marvell/mwifiex/usb.c
++++ b/drivers/net/wireless/marvell/mwifiex/usb.c
+@@ -644,6 +644,9 @@ static void mwifiex_usb_disconnect(struct usb_interface *intf)
+ MWIFIEX_FUNC_SHUTDOWN);
+ }
+
++ if (adapter->workqueue)
++ flush_workqueue(adapter->workqueue);
++
+ mwifiex_usb_free(card);
+
+ mwifiex_dbg(adapter, FATAL,
+diff --git a/drivers/net/wireless/marvell/mwifiex/util.c b/drivers/net/wireless/marvell/mwifiex/util.c
+index 0cd68ffc2c74..51ccf10f4413 100644
+--- a/drivers/net/wireless/marvell/mwifiex/util.c
++++ b/drivers/net/wireless/marvell/mwifiex/util.c
+@@ -708,12 +708,14 @@ void mwifiex_hist_data_set(struct mwifiex_private *priv, u8 rx_rate, s8 snr,
+ s8 nflr)
+ {
+ struct mwifiex_histogram_data *phist_data = priv->hist_data;
++ s8 nf = -nflr;
++ s8 rssi = snr - nflr;
+
+ atomic_inc(&phist_data->num_samples);
+ atomic_inc(&phist_data->rx_rate[rx_rate]);
+- atomic_inc(&phist_data->snr[snr]);
+- atomic_inc(&phist_data->noise_flr[128 + nflr]);
+- atomic_inc(&phist_data->sig_str[nflr - snr]);
++ atomic_inc(&phist_data->snr[snr + 128]);
++ atomic_inc(&phist_data->noise_flr[nf + 128]);
++ atomic_inc(&phist_data->sig_str[rssi + 128]);
+ }
+
+ /* function to reset histogram data during init/reset */
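
The mwifiex histogram fix biases the signed SNR, noise-floor and derived RSSI samples by +128 so negative s8 values index the arrays from 0 instead of underflowing. A small demo of the biasing arithmetic:

#include <stdint.h>
#include <stdio.h>

#define BIAS 128	/* maps the s8 range [-128, 127] onto indices [0, 255] */

int main(void)
{
	int8_t snr = 25, nflr = -96;	/* sample SNR (dB) and noise floor (dBm) */
	int8_t nf = -nflr;		/* noise floor magnitude, as in the hunk */
	int8_t rssi = snr - nflr;	/* signal strength derived from SNR */

	unsigned hist_snr[256] = {0}, hist_nf[256] = {0}, hist_sig[256] = {0};

	hist_snr[snr + BIAS]++;		/* no negative index can occur */
	hist_nf[nf + BIAS]++;
	hist_sig[rssi + BIAS]++;

	printf("indices: snr=%d nf=%d rssi=%d\n",
	       snr + BIAS, nf + BIAS, rssi + BIAS);
	return 0;
}
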
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_init.c b/drivers/net/wireless/mediatek/mt76/mt76x2_init.c
+index 934c331d995e..1932414e5088 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_init.c
+@@ -482,7 +482,10 @@ void mt76x2_set_tx_ackto(struct mt76x2_dev *dev)
+ {
+ u8 ackto, sifs, slottime = dev->slottime;
+
++ /* As defined by IEEE 802.11-2007 17.3.8.6 */
+ slottime += 3 * dev->coverage_class;
++ mt76_rmw_field(dev, MT_BKOFF_SLOT_CFG,
++ MT_BKOFF_SLOT_CFG_SLOTTIME, slottime);
+
+ sifs = mt76_get_field(dev, MT_XIFS_TIME_CFG,
+ MT_XIFS_TIME_CFG_OFDM_SIFS);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_main.c b/drivers/net/wireless/mediatek/mt76/mt76x2_main.c
+index 73c127f92613..f66b6ff92ae0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_main.c
+@@ -247,8 +247,7 @@ mt76x2_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ int slottime = info->use_short_slot ? 9 : 20;
+
+ dev->slottime = slottime;
+- mt76_rmw_field(dev, MT_BKOFF_SLOT_CFG,
+- MT_BKOFF_SLOT_CFG_SLOTTIME, slottime);
++ mt76x2_set_tx_ackto(dev);
+ }
+
+ mutex_unlock(&dev->mutex);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_phy.c b/drivers/net/wireless/mediatek/mt76/mt76x2_phy.c
+index fcc37eb7ce0b..6b4fa7be573e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_phy.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_phy.c
+@@ -492,8 +492,10 @@ mt76x2_phy_update_channel_gain(struct mt76x2_dev *dev)
+ u8 gain_delta;
+ int low_gain;
+
+- dev->cal.avg_rssi[0] = (dev->cal.avg_rssi[0] * 15) / 16 + (rssi0 << 8);
+- dev->cal.avg_rssi[1] = (dev->cal.avg_rssi[1] * 15) / 16 + (rssi1 << 8);
++ dev->cal.avg_rssi[0] = (dev->cal.avg_rssi[0] * 15) / 16 +
++ (rssi0 << 8) / 16;
++ dev->cal.avg_rssi[1] = (dev->cal.avg_rssi[1] * 15) / 16 +
++ (rssi1 << 8) / 16;
+ dev->cal.avg_rssi_all = (dev->cal.avg_rssi[0] +
+ dev->cal.avg_rssi[1]) / 512;
+
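
The mt76x2 gain tracking keeps a 15/16 exponential moving average in 8.8 fixed point; the old code added the full scaled sample instead of its 1/16 share, so the average converged to sixteen times the true value. A runnable comparison of the broken and fixed update:

#include <stdio.h>

int main(void)
{
	int avg_bad = 0, avg_good = 0;
	int rssi = -60;			/* steady input, dBm */

	for (int i = 0; i < 100; i++) {
		avg_bad  = (avg_bad  * 15) / 16 + (rssi << 8);		/* old: diverges */
		avg_good = (avg_good * 15) / 16 + (rssi << 8) / 16;	/* fixed: converges */
	}
	/* the average is stored in 8.8 fixed point, so /256 recovers dBm */
	printf("broken: %d dBm, fixed: %d dBm\n", avg_bad / 256, avg_good / 256);
	return 0;
}

With a constant -60 dBm input the fixed form settles on exactly -60, while the old form drifts to -960.
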
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index 4eef69bd8a9e..6aca794cf998 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -422,12 +422,14 @@ void mt76_txq_schedule(struct mt76_dev *dev, struct mt76_queue *hwq)
+ {
+ int len;
+
++ rcu_read_lock();
+ do {
+ if (hwq->swq_queued >= 4 || list_empty(&hwq->swq))
+ break;
+
+ len = mt76_txq_schedule_list(dev, hwq);
+ } while (len > 0);
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL_GPL(mt76_txq_schedule);
+
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c b/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
+index 0398bece5782..a5f0306a7e29 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
+@@ -651,28 +651,35 @@ qtnf_disconnect(struct wiphy *wiphy, struct net_device *dev,
+ {
+ struct qtnf_wmac *mac = wiphy_priv(wiphy);
+ struct qtnf_vif *vif;
+- int ret;
++ int ret = 0;
+
+ vif = qtnf_mac_get_base_vif(mac);
+ if (!vif) {
+ pr_err("MAC%u: primary VIF is not configured\n", mac->macid);
+- return -EFAULT;
++ ret = -EFAULT;
++ goto out;
+ }
+
+- if (vif->wdev.iftype != NL80211_IFTYPE_STATION)
+- return -EOPNOTSUPP;
++ if (vif->wdev.iftype != NL80211_IFTYPE_STATION) {
++ ret = -EOPNOTSUPP;
++ goto out;
++ }
+
+ if (vif->sta_state == QTNF_STA_DISCONNECTED)
+- return 0;
++ goto out;
+
+ ret = qtnf_cmd_send_disconnect(vif, reason_code);
+ if (ret) {
+ pr_err("VIF%u.%u: failed to disconnect\n", mac->macid,
+ vif->vifid);
+- return ret;
++ goto out;
+ }
+
+- return 0;
++out:
++ if (vif->sta_state == QTNF_STA_CONNECTING)
++ vif->sta_state = QTNF_STA_DISCONNECTED;
++
++ return ret;
+ }
+
+ static int
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/event.c b/drivers/net/wireless/quantenna/qtnfmac/event.c
+index bcd415f96412..77ee6439ec6e 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/event.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/event.c
+@@ -198,11 +198,9 @@ qtnf_event_handle_bss_leave(struct qtnf_vif *vif,
+ return -EPROTO;
+ }
+
+- if (vif->sta_state != QTNF_STA_CONNECTED) {
+- pr_err("VIF%u.%u: BSS_LEAVE event when STA is not connected\n",
+- vif->mac->macid, vif->vifid);
+- return -EPROTO;
+- }
++ if (vif->sta_state != QTNF_STA_CONNECTED)
++ pr_warn("VIF%u.%u: BSS_LEAVE event when STA is not connected\n",
++ vif->mac->macid, vif->vifid);
+
+ pr_debug("VIF%u.%u: disconnected\n", vif->mac->macid, vif->vifid);
+
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/pearl/pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pearl/pcie.c
+index f117904d9120..6c1e139bb8f7 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/pearl/pcie.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/pearl/pcie.c
+@@ -1185,6 +1185,10 @@ static void qtnf_fw_work_handler(struct work_struct *work)
+ if (qtnf_poll_state(&priv->bda->bda_ep_state, QTN_EP_FW_LOADRDY,
+ QTN_FW_DL_TIMEOUT_MS)) {
+ pr_err("card is not ready\n");
++
++ if (!flashboot)
++ release_firmware(fw);
++
+ goto fw_load_fail;
+ }
+
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index de608ae365a4..5425726d509b 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -616,28 +616,32 @@ static int bl_write_header(struct rsi_hw *adapter, u8 *flash_content,
+ u32 content_size)
+ {
+ struct rsi_host_intf_ops *hif_ops = adapter->host_intf_ops;
+- struct bl_header bl_hdr;
++ struct bl_header *bl_hdr;
+ u32 write_addr, write_len;
+ int status;
+
+- bl_hdr.flags = 0;
+- bl_hdr.image_no = cpu_to_le32(adapter->priv->coex_mode);
+- bl_hdr.check_sum = cpu_to_le32(
+- *(u32 *)&flash_content[CHECK_SUM_OFFSET]);
+- bl_hdr.flash_start_address = cpu_to_le32(
+- *(u32 *)&flash_content[ADDR_OFFSET]);
+- bl_hdr.flash_len = cpu_to_le32(*(u32 *)&flash_content[LEN_OFFSET]);
++ bl_hdr = kzalloc(sizeof(*bl_hdr), GFP_KERNEL);
++ if (!bl_hdr)
++ return -ENOMEM;
++
++ bl_hdr->flags = 0;
++ bl_hdr->image_no = cpu_to_le32(adapter->priv->coex_mode);
++ bl_hdr->check_sum =
++ cpu_to_le32(*(u32 *)&flash_content[CHECK_SUM_OFFSET]);
++ bl_hdr->flash_start_address =
++ cpu_to_le32(*(u32 *)&flash_content[ADDR_OFFSET]);
++ bl_hdr->flash_len = cpu_to_le32(*(u32 *)&flash_content[LEN_OFFSET]);
+ write_len = sizeof(struct bl_header);
+
+ if (adapter->rsi_host_intf == RSI_HOST_INTF_USB) {
+ write_addr = PING_BUFFER_ADDRESS;
+ status = hif_ops->write_reg_multiple(adapter, write_addr,
+- (u8 *)&bl_hdr, write_len);
++ (u8 *)bl_hdr, write_len);
+ if (status < 0) {
+ rsi_dbg(ERR_ZONE,
+ "%s: Failed to load Version/CRC structure\n",
+ __func__);
+- return status;
++ goto fail;
+ }
+ } else {
+ write_addr = PING_BUFFER_ADDRESS >> 16;
+@@ -646,20 +650,23 @@ static int bl_write_header(struct rsi_hw *adapter, u8 *flash_content,
+ rsi_dbg(ERR_ZONE,
+ "%s: Unable to set ms word to common reg\n",
+ __func__);
+- return status;
++ goto fail;
+ }
+ write_addr = RSI_SD_REQUEST_MASTER |
+ (PING_BUFFER_ADDRESS & 0xFFFF);
+ status = hif_ops->write_reg_multiple(adapter, write_addr,
+- (u8 *)&bl_hdr, write_len);
++ (u8 *)bl_hdr, write_len);
+ if (status < 0) {
+ rsi_dbg(ERR_ZONE,
+ "%s: Failed to load Version/CRC structure\n",
+ __func__);
+- return status;
++ goto fail;
+ }
+ }
+- return 0;
++ status = 0;
++fail:
++ kfree(bl_hdr);
++ return status;
+ }
+
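
bl_write_header() previously built struct bl_header on the stack and handed it to write_reg_multiple(); buffers passed to DMA-capable transports must not live on the stack (vmapped stacks are not DMA-safe), so the fix kzallocs the header and frees it on every path. A userspace model of the allocate/fill/use/free shape with a single exit; the helper here is a stand-in, not the driver API:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct bl_header { unsigned flags, image_no, check_sum, start, len; };

static int write_reg_multiple(const void *buf, size_t len)
{
	(void)buf; (void)len;
	return 0;			/* stand-in for the bus transfer */
}

static int bl_write_header(const unsigned char *flash, size_t flash_len)
{
	struct bl_header *hdr;
	int status;

	if (flash_len < sizeof(*hdr))
		return -1;

	hdr = calloc(1, sizeof(*hdr));	/* heap buffer, never the stack */
	if (!hdr)
		return -1;

	memcpy(&hdr->check_sum, flash, sizeof(hdr->check_sum));
	status = write_reg_multiple(hdr, sizeof(*hdr));

	free(hdr);			/* freed on success and failure alike */
	return status;
}

int main(void)
{
	unsigned char flash[64] = {0};

	printf("status=%d\n", bl_write_header(flash, sizeof(flash)));
	return 0;
}
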
+ static u32 read_flash_capacity(struct rsi_hw *adapter)
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mac80211.c b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+index 32f5cb46fd4f..8f83303365c8 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mac80211.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+@@ -1788,10 +1788,15 @@ int rsi_config_wowlan(struct rsi_hw *adapter, struct cfg80211_wowlan *wowlan)
+ struct rsi_common *common = adapter->priv;
+ u16 triggers = 0;
+ u16 rx_filter_word = 0;
+- struct ieee80211_bss_conf *bss = &adapter->vifs[0]->bss_conf;
++ struct ieee80211_bss_conf *bss = NULL;
+
+ rsi_dbg(INFO_ZONE, "Config WoWLAN to device\n");
+
++ if (!adapter->vifs[0])
++ return -EINVAL;
++
++ bss = &adapter->vifs[0]->bss_conf;
++
+ if (WARN_ON(!wowlan)) {
+ rsi_dbg(ERR_ZONE, "WoW triggers not enabled\n");
+ return -EINVAL;
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+index d76e69c0beaa..ffea376260eb 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+@@ -170,7 +170,6 @@ static void rsi_reset_card(struct sdio_func *pfunction)
+ int err;
+ struct mmc_card *card = pfunction->card;
+ struct mmc_host *host = card->host;
+- s32 bit = (fls(host->ocr_avail) - 1);
+ u8 cmd52_resp;
+ u32 clock, resp, i;
+ u16 rca;
+@@ -190,7 +189,6 @@ static void rsi_reset_card(struct sdio_func *pfunction)
+ msleep(20);
+
+ /* Initialize the SDIO card */
+- host->ios.vdd = bit;
+ host->ios.chip_select = MMC_CS_DONTCARE;
+ host->ios.bus_mode = MMC_BUSMODE_OPENDRAIN;
+ host->ios.power_mode = MMC_POWER_UP;
+@@ -1042,17 +1040,21 @@ static void ulp_read_write(struct rsi_hw *adapter, u16 addr, u32 data,
+ /*This function resets and re-initializes the chip.*/
+ static void rsi_reset_chip(struct rsi_hw *adapter)
+ {
+- __le32 data;
++ u8 *data;
+ u8 sdio_interrupt_status = 0;
+ u8 request = 1;
+ int ret;
+
++ data = kzalloc(sizeof(u32), GFP_KERNEL);
++ if (!data)
++ return;
++
+ rsi_dbg(INFO_ZONE, "Writing disable to wakeup register\n");
+ ret = rsi_sdio_write_register(adapter, 0, SDIO_WAKEUP_REG, &request);
+ if (ret < 0) {
+ rsi_dbg(ERR_ZONE,
+ "%s: Failed to write SDIO wakeup register\n", __func__);
+- return;
++ goto err;
+ }
+ msleep(20);
+ ret = rsi_sdio_read_register(adapter, RSI_FN1_INT_REGISTER,
+@@ -1060,7 +1062,7 @@ static void rsi_reset_chip(struct rsi_hw *adapter)
+ if (ret < 0) {
+ rsi_dbg(ERR_ZONE, "%s: Failed to Read Intr Status Register\n",
+ __func__);
+- return;
++ goto err;
+ }
+ rsi_dbg(INFO_ZONE, "%s: Intr Status Register value = %d\n",
+ __func__, sdio_interrupt_status);
+@@ -1070,17 +1072,17 @@ static void rsi_reset_chip(struct rsi_hw *adapter)
+ rsi_dbg(ERR_ZONE,
+ "%s: Unable to set ms word to common reg\n",
+ __func__);
+- return;
++ goto err;
+ }
+
+- data = TA_HOLD_THREAD_VALUE;
++ put_unaligned_le32(TA_HOLD_THREAD_VALUE, data);
+ if (rsi_sdio_write_register_multiple(adapter, TA_HOLD_THREAD_REG |
+ RSI_SD_REQUEST_MASTER,
+- (u8 *)&data, 4)) {
++ data, 4)) {
+ rsi_dbg(ERR_ZONE,
+ "%s: Unable to hold Thread-Arch processor threads\n",
+ __func__);
+- return;
++ goto err;
+ }
+
+ /* This msleep will ensure Thread-Arch processor to go to hold
+@@ -1101,6 +1103,9 @@ static void rsi_reset_chip(struct rsi_hw *adapter)
+ * read write operations to complete for chip reset.
+ */
+ msleep(500);
++err:
++ kfree(data);
++ return;
+ }
+
+ /**
+diff --git a/drivers/net/wireless/rsi/rsi_sdio.h b/drivers/net/wireless/rsi/rsi_sdio.h
+index ead8e7c4df3a..353dbdf31e75 100644
+--- a/drivers/net/wireless/rsi/rsi_sdio.h
++++ b/drivers/net/wireless/rsi/rsi_sdio.h
+@@ -87,7 +87,7 @@ enum sdio_interrupt_type {
+ #define TA_SOFT_RST_CLR 0
+ #define TA_SOFT_RST_SET BIT(0)
+ #define TA_PC_ZERO 0
+-#define TA_HOLD_THREAD_VALUE cpu_to_le32(0xF)
++#define TA_HOLD_THREAD_VALUE 0xF
+ #define TA_RELEASE_THREAD_VALUE cpu_to_le32(0xF)
+ #define TA_BASE_ADDR 0x2200
+ #define MISC_CFG_BASE_ADDR 0x4105
+diff --git a/drivers/net/wireless/ti/wlcore/sdio.c b/drivers/net/wireless/ti/wlcore/sdio.c
+index 1f727babbea0..5de8305a6fd6 100644
+--- a/drivers/net/wireless/ti/wlcore/sdio.c
++++ b/drivers/net/wireless/ti/wlcore/sdio.c
+@@ -406,6 +406,11 @@ static int wl1271_suspend(struct device *dev)
+ mmc_pm_flag_t sdio_flags;
+ int ret = 0;
+
++ if (!wl) {
++ dev_err(dev, "no wilink module was probed\n");
++ goto out;
++ }
++
+ dev_dbg(dev, "wl1271 suspend. wow_enabled: %d\n",
+ wl->wow_enabled);
+
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 1d5082d30187..42e93cb4eca7 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -87,6 +87,7 @@ struct netfront_cb {
+ /* IRQ name is queue name with "-tx" or "-rx" appended */
+ #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+
++static DECLARE_WAIT_QUEUE_HEAD(module_load_q);
+ static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
+
+ struct netfront_stats {
+@@ -239,7 +240,7 @@ static void rx_refill_timeout(struct timer_list *t)
+ static int netfront_tx_slot_available(struct netfront_queue *queue)
+ {
+ return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
+- (NET_TX_RING_SIZE - MAX_SKB_FRAGS - 2);
++ (NET_TX_RING_SIZE - XEN_NETIF_NR_SLOTS_MIN - 1);
+ }
+
+ static void xennet_maybe_wake_tx(struct netfront_queue *queue)
+@@ -790,7 +791,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
+ RING_IDX cons = queue->rx.rsp_cons;
+ struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+ grant_ref_t ref = xennet_get_rx_ref(queue, cons);
+- int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
++ int max = XEN_NETIF_NR_SLOTS_MIN + (rx->status <= RX_COPY_THRESHOLD);
+ int slots = 1;
+ int err = 0;
+ unsigned long ret;
+@@ -1330,6 +1331,11 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
+ netif_carrier_off(netdev);
+
+ xenbus_switch_state(dev, XenbusStateInitialising);
++ wait_event(module_load_q,
++ xenbus_read_driver_state(dev->otherend) !=
++ XenbusStateClosed &&
++ xenbus_read_driver_state(dev->otherend) !=
++ XenbusStateUnknown);
+ return netdev;
+
+ exit:
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 5dbb0f0c02ef..0483c33a3567 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2679,19 +2679,15 @@ static pci_ers_result_t nvme_slot_reset(struct pci_dev *pdev)
+
+ dev_info(dev->ctrl.device, "restart after slot reset\n");
+ pci_restore_state(pdev);
+- nvme_reset_ctrl_sync(&dev->ctrl);
+-
+- switch (dev->ctrl.state) {
+- case NVME_CTRL_LIVE:
+- case NVME_CTRL_ADMIN_ONLY:
+- return PCI_ERS_RESULT_RECOVERED;
+- default:
+- return PCI_ERS_RESULT_DISCONNECT;
+- }
++ nvme_reset_ctrl(&dev->ctrl);
++ return PCI_ERS_RESULT_RECOVERED;
+ }
+
+ static void nvme_error_resume(struct pci_dev *pdev)
+ {
++ struct nvme_dev *dev = pci_get_drvdata(pdev);
++
++ flush_work(&dev->ctrl.reset_work);
+ pci_cleanup_aer_uncorrect_error_status(pdev);
+ }
+
+@@ -2735,6 +2731,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ .driver_data = NVME_QUIRK_LIGHTNVM, },
+ { PCI_DEVICE(0x1d1d, 0x2807), /* CNEX WL */
+ .driver_data = NVME_QUIRK_LIGHTNVM, },
++ { PCI_DEVICE(0x1d1d, 0x2601), /* CNEX Granby */
++ .driver_data = NVME_QUIRK_LIGHTNVM, },
+ { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001) },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2003) },
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 1eb4438a8763..2181299ce8f5 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -778,7 +778,7 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ if (error) {
+ dev_err(ctrl->ctrl.device,
+ "prop_get NVME_REG_CAP failed\n");
+- goto out_cleanup_queue;
++ goto out_stop_queue;
+ }
+
+ ctrl->ctrl.sqsize =
+@@ -786,23 +786,25 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
+
+ error = nvme_enable_ctrl(&ctrl->ctrl, ctrl->ctrl.cap);
+ if (error)
+- goto out_cleanup_queue;
++ goto out_stop_queue;
+
+ ctrl->ctrl.max_hw_sectors =
+ (ctrl->max_fr_pages - 1) << (ilog2(SZ_4K) - 9);
+
+ error = nvme_init_identify(&ctrl->ctrl);
+ if (error)
+- goto out_cleanup_queue;
++ goto out_stop_queue;
+
+ error = nvme_rdma_alloc_qe(ctrl->queues[0].device->dev,
+ &ctrl->async_event_sqe, sizeof(struct nvme_command),
+ DMA_TO_DEVICE);
+ if (error)
+- goto out_cleanup_queue;
++ goto out_stop_queue;
+
+ return 0;
+
++out_stop_queue:
++ nvme_rdma_stop_queue(&ctrl->queues[0]);
+ out_cleanup_queue:
+ if (new)
+ blk_cleanup_queue(ctrl->ctrl.admin_q);
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index 33ee8d3145f8..9fb28e076c26 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -58,8 +58,8 @@ struct nvmet_fc_ls_iod {
+ struct work_struct work;
+ } __aligned(sizeof(unsigned long long));
+
++/* desired maximum for a single sequence - if sg list allows it */
+ #define NVMET_FC_MAX_SEQ_LENGTH (256 * 1024)
+-#define NVMET_FC_MAX_XFR_SGENTS (NVMET_FC_MAX_SEQ_LENGTH / PAGE_SIZE)
+
+ enum nvmet_fcp_datadir {
+ NVMET_FCP_NODATA,
+@@ -74,6 +74,7 @@ struct nvmet_fc_fcp_iod {
+ struct nvme_fc_cmd_iu cmdiubuf;
+ struct nvme_fc_ersp_iu rspiubuf;
+ dma_addr_t rspdma;
++ struct scatterlist *next_sg;
+ struct scatterlist *data_sg;
+ int data_sg_cnt;
+ u32 offset;
+@@ -1025,8 +1026,7 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
+ INIT_LIST_HEAD(&newrec->assoc_list);
+ kref_init(&newrec->ref);
+ ida_init(&newrec->assoc_cnt);
+- newrec->max_sg_cnt = min_t(u32, NVMET_FC_MAX_XFR_SGENTS,
+- template->max_sgl_segments);
++ newrec->max_sg_cnt = template->max_sgl_segments;
+
+ ret = nvmet_fc_alloc_ls_iodlist(newrec);
+ if (ret) {
+@@ -1722,6 +1722,7 @@ nvmet_fc_alloc_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
+ ((fod->io_dir == NVMET_FCP_WRITE) ?
+ DMA_FROM_DEVICE : DMA_TO_DEVICE));
+ /* note: write from initiator perspective */
++ fod->next_sg = fod->data_sg;
+
+ return 0;
+
+@@ -1866,24 +1867,49 @@ nvmet_fc_transfer_fcp_data(struct nvmet_fc_tgtport *tgtport,
+ struct nvmet_fc_fcp_iod *fod, u8 op)
+ {
+ struct nvmefc_tgt_fcp_req *fcpreq = fod->fcpreq;
++ struct scatterlist *sg = fod->next_sg;
+ unsigned long flags;
+- u32 tlen;
++ u32 remaininglen = fod->req.transfer_len - fod->offset;
++ u32 tlen = 0;
+ int ret;
+
+ fcpreq->op = op;
+ fcpreq->offset = fod->offset;
+ fcpreq->timeout = NVME_FC_TGTOP_TIMEOUT_SEC;
+
+- tlen = min_t(u32, tgtport->max_sg_cnt * PAGE_SIZE,
+- (fod->req.transfer_len - fod->offset));
++ /*
++ * for next sequence:
++ * break at a sg element boundary
++ * attempt to keep sequence length capped at
++ * NVMET_FC_MAX_SEQ_LENGTH but allow sequence to
++ * be longer if a single sg element is larger
++ * than that amount. This is done to avoid creating
++ * a new sg list to use for the tgtport api.
++ */
++ fcpreq->sg = sg;
++ fcpreq->sg_cnt = 0;
++ while (tlen < remaininglen &&
++ fcpreq->sg_cnt < tgtport->max_sg_cnt &&
++ tlen + sg_dma_len(sg) < NVMET_FC_MAX_SEQ_LENGTH) {
++ fcpreq->sg_cnt++;
++ tlen += sg_dma_len(sg);
++ sg = sg_next(sg);
++ }
++ if (tlen < remaininglen && fcpreq->sg_cnt == 0) {
++ fcpreq->sg_cnt++;
++ tlen += min_t(u32, sg_dma_len(sg), remaininglen);
++ sg = sg_next(sg);
++ }
++ if (tlen < remaininglen)
++ fod->next_sg = sg;
++ else
++ fod->next_sg = NULL;
++
+ fcpreq->transfer_length = tlen;
+ fcpreq->transferred_length = 0;
+ fcpreq->fcp_error = 0;
+ fcpreq->rsplen = 0;
+
+- fcpreq->sg = &fod->data_sg[fod->offset / PAGE_SIZE];
+- fcpreq->sg_cnt = DIV_ROUND_UP(tlen, PAGE_SIZE);
+-
+ /*
+ * If the last READDATA request: check if LLDD supports
+ * combined xfr with response.
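The min_t()/DIV_ROUND_UP() pair removed above assumed PAGE_SIZE-sized scatterlist entries; the new loop walks the real sg element lengths, keeps the sequence under NVMET_FC_MAX_SEQ_LENGTH when it can, and otherwise accepts a single oversized element whole rather than splitting it. A standalone approximation of that walk over a plain array (a sketch of the logic, not the driver code):

#include <stdio.h>

#define MAX_SEQ (256 * 1024)    /* NVMET_FC_MAX_SEQ_LENGTH analogue */

/* Build one sequence from an array of segment lengths: stop at a
 * segment boundary, cap the total near MAX_SEQ if possible, but
 * take one oversized segment whole rather than splitting it. */
static unsigned int build_seq(const unsigned int *seg, unsigned int nseg,
                              unsigned int remaining, unsigned int max_cnt,
                              unsigned int *cnt)
{
        unsigned int tlen = 0, i = 0;

        *cnt = 0;
        while (i < nseg && tlen < remaining && *cnt < max_cnt &&
               tlen + seg[i] < MAX_SEQ) {
                tlen += seg[i++];
                (*cnt)++;
        }
        if (tlen < remaining && *cnt == 0 && i < nseg) {
                /* first segment alone reaches MAX_SEQ: take it anyway */
                tlen = seg[i] < remaining ? seg[i] : remaining;
                *cnt = 1;
        }
        return tlen;
}

int main(void)
{
        unsigned int seg[] = { 128 * 1024, 128 * 1024, 512 * 1024 };
        unsigned int cnt;
        unsigned int tlen = build_seq(seg, 3, 1024 * 1024, 16, &cnt);

        printf("sequence: %u bytes in %u segment(s)\n", tlen, cnt);
        return 0;
}

With the sample lengths above the walk stops after the first 128 KiB element, since adding the second would already reach the 256 KiB cap.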
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index b05aa8e81303..1e28597138c8 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -1107,6 +1107,8 @@ static void *nvmem_cell_prepare_write_buffer(struct nvmem_cell *cell,
+
+ /* setup the first byte with lsb bits from nvmem */
+ rc = nvmem_reg_read(nvmem, cell->offset, &v, 1);
++ if (rc)
++ goto err;
+ *b++ |= GENMASK(bit_offset - 1, 0) & v;
+
+ /* setup rest of the byte if any */
+@@ -1125,11 +1127,16 @@ static void *nvmem_cell_prepare_write_buffer(struct nvmem_cell *cell,
+ /* setup the last byte with msb bits from nvmem */
+ rc = nvmem_reg_read(nvmem,
+ cell->offset + cell->bytes - 1, &v, 1);
++ if (rc)
++ goto err;
+ *p |= GENMASK(7, (nbits + bit_offset) % BITS_PER_BYTE) & v;
+
+ }
+
+ return buf;
++err:
++ kfree(buf);
++ return ERR_PTR(rc);
+ }
+
+ /**
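Beyond the new unwinding (kfree() plus ERR_PTR() instead of silently mixing in a failed read), the surrounding code is a plain read-modify-write: preserve the low bit_offset bits the device already holds, merge the new payload above them. A minimal userspace illustration, with GENMASK() re-derived locally; the kernel only takes this branch for a non-zero offset, hence the range check up front:

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

#define GENMASK(h, l) (((~0u) >> (31 - (h))) & ((~0u) << (l)))

/* Merge the payload into a byte while keeping the low bit_offset
 * bits that were read back from the device. */
static int prepare_first_byte(unsigned char stored, unsigned char payload,
                              unsigned int bit_offset, unsigned char *out)
{
        if (bit_offset < 1 || bit_offset > 7)
                return -EINVAL;
        *out = (unsigned char)(payload << bit_offset);
        *out |= GENMASK(bit_offset - 1, 0) & stored;
        return 0;
}

int main(void)
{
        unsigned char b;

        if (prepare_first_byte(0xa5, 0x3, 4, &b))
                return EXIT_FAILURE;
        printf("merged byte: 0x%02x\n", b);     /* 0x35: low nibble kept */
        return EXIT_SUCCESS;
}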
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index 366d93af051d..788a200fb2dc 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -288,13 +288,16 @@ static ssize_t enable_store(struct device *dev, struct device_attribute *attr,
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+- if (!val) {
+- if (pci_is_enabled(pdev))
+- pci_disable_device(pdev);
+- else
+- result = -EIO;
+- } else
++ device_lock(dev);
++ if (dev->driver)
++ result = -EBUSY;
++ else if (val)
+ result = pci_enable_device(pdev);
++ else if (pci_is_enabled(pdev))
++ pci_disable_device(pdev);
++ else
++ result = -EIO;
++ device_unlock(dev);
+
+ return result < 0 ? result : count;
+ }
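The rewritten enable_store() does two things at once: it refuses the toggle while a driver owns the device (dev->driver is set) and it makes the check-and-act atomic against bind/unbind by holding device_lock(). The same check-under-lock shape, reduced to a mutex and two flags (illustrative, not the PCI core API):

#include <stdio.h>
#include <stdbool.h>
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;
static bool driver_bound;       /* stand-in for dev->driver */
static bool enabled;

static int enable_store(bool val)
{
        int result = 0;

        pthread_mutex_lock(&dev_lock);
        if (driver_bound)
                result = -EBUSY;        /* driver owns the device */
        else if (val)
                enabled = true;
        else if (enabled)
                enabled = false;
        else
                result = -EIO;          /* disabling a disabled device */
        pthread_mutex_unlock(&dev_lock);
        return result;
}

int main(void)
{
        printf("enable: %d\n", enable_store(true));
        driver_bound = true;
        printf("disable with driver bound: %d\n", enable_store(false));
        return 0;
}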
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index f76eb7704f64..c687c817b47d 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -400,6 +400,15 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
+ info->l1ss_cap = 0;
+ return;
+ }
++
++ /*
++ * If we don't have LTR for the entire path from the Root Complex
++ * to this device, we can't use ASPM L1.2 because it relies on the
++ * LTR_L1.2_THRESHOLD. See PCIe r4.0, secs 5.5.4, 6.18.
++ */
++ if (!pdev->ltr_path)
++ info->l1ss_cap &= ~PCI_L1SS_CAP_ASPM_L1_2;
++
+ pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CTL1,
+ &info->l1ss_ctl1);
+ pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CTL2,
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index 8c57d607e603..74562dbacbf1 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -113,7 +113,7 @@ static void dpc_work(struct work_struct *work)
+ }
+
+ pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
+- PCI_EXP_DPC_STATUS_TRIGGER | PCI_EXP_DPC_STATUS_INTERRUPT);
++ PCI_EXP_DPC_STATUS_TRIGGER);
+
+ pci_read_config_word(pdev, cap + PCI_EXP_DPC_CTL, &ctl);
+ pci_write_config_word(pdev, cap + PCI_EXP_DPC_CTL,
+@@ -223,6 +223,9 @@ static irqreturn_t dpc_irq(int irq, void *context)
+ if (dpc->rp_extensions && reason == 3 && ext_reason == 0)
+ dpc_process_rp_pio_error(dpc);
+
++ pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
++ PCI_EXP_DPC_STATUS_INTERRUPT);
++
+ schedule_work(&dpc->work);
+
+ return IRQ_HANDLED;
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 73ac02796ba9..d21686ad3ce5 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -526,12 +526,14 @@ static void devm_pci_release_host_bridge_dev(struct device *dev)
+
+ if (bridge->release_fn)
+ bridge->release_fn(bridge);
++
++ pci_free_resource_list(&bridge->windows);
+ }
+
+ static void pci_release_host_bridge_dev(struct device *dev)
+ {
+ devm_pci_release_host_bridge_dev(dev);
+- pci_free_host_bridge(to_pci_host_bridge(dev));
++ kfree(to_pci_host_bridge(dev));
+ }
+
+ struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
+diff --git a/drivers/perf/arm-cci.c b/drivers/perf/arm-cci.c
+index 383b2d3dcbc6..687ae8e674db 100644
+--- a/drivers/perf/arm-cci.c
++++ b/drivers/perf/arm-cci.c
+@@ -120,9 +120,9 @@ enum cci_models {
+
+ static void pmu_write_counters(struct cci_pmu *cci_pmu,
+ unsigned long *mask);
+-static ssize_t cci_pmu_format_show(struct device *dev,
++static ssize_t __maybe_unused cci_pmu_format_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+-static ssize_t cci_pmu_event_show(struct device *dev,
++static ssize_t __maybe_unused cci_pmu_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+ #define CCI_EXT_ATTR_ENTRY(_name, _func, _config) \
+@@ -1466,7 +1466,7 @@ static int cci_pmu_offline_cpu(unsigned int cpu)
+ return 0;
+ }
+
+-static struct cci_pmu_model cci_pmu_models[] = {
++static __maybe_unused struct cci_pmu_model cci_pmu_models[] = {
+ #ifdef CONFIG_ARM_CCI400_PMU
+ [CCI400_R0] = {
+ .name = "CCI_400",
+diff --git a/drivers/perf/arm-ccn.c b/drivers/perf/arm-ccn.c
+index 65b7e4042ece..07771e28f572 100644
+--- a/drivers/perf/arm-ccn.c
++++ b/drivers/perf/arm-ccn.c
+@@ -736,7 +736,7 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
+ ccn = pmu_to_arm_ccn(event->pmu);
+
+ if (hw->sample_period) {
+- dev_warn(ccn->dev, "Sampling not supported!\n");
++ dev_dbg(ccn->dev, "Sampling not supported!\n");
+ return -EOPNOTSUPP;
+ }
+
+@@ -744,12 +744,12 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
+ event->attr.exclude_kernel || event->attr.exclude_hv ||
+ event->attr.exclude_idle || event->attr.exclude_host ||
+ event->attr.exclude_guest) {
+- dev_warn(ccn->dev, "Can't exclude execution levels!\n");
++ dev_dbg(ccn->dev, "Can't exclude execution levels!\n");
+ return -EINVAL;
+ }
+
+ if (event->cpu < 0) {
+- dev_warn(ccn->dev, "Can't provide per-task data!\n");
++ dev_dbg(ccn->dev, "Can't provide per-task data!\n");
+ return -EOPNOTSUPP;
+ }
+ /*
+@@ -771,13 +771,13 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
+ switch (type) {
+ case CCN_TYPE_MN:
+ if (node_xp != ccn->mn_id) {
+- dev_warn(ccn->dev, "Invalid MN ID %d!\n", node_xp);
++ dev_dbg(ccn->dev, "Invalid MN ID %d!\n", node_xp);
+ return -EINVAL;
+ }
+ break;
+ case CCN_TYPE_XP:
+ if (node_xp >= ccn->num_xps) {
+- dev_warn(ccn->dev, "Invalid XP ID %d!\n", node_xp);
++ dev_dbg(ccn->dev, "Invalid XP ID %d!\n", node_xp);
+ return -EINVAL;
+ }
+ break;
+@@ -785,11 +785,11 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
+ break;
+ default:
+ if (node_xp >= ccn->num_nodes) {
+- dev_warn(ccn->dev, "Invalid node ID %d!\n", node_xp);
++ dev_dbg(ccn->dev, "Invalid node ID %d!\n", node_xp);
+ return -EINVAL;
+ }
+ if (!arm_ccn_pmu_type_eq(type, ccn->node[node_xp].type)) {
+- dev_warn(ccn->dev, "Invalid type 0x%x for node %d!\n",
++ dev_dbg(ccn->dev, "Invalid type 0x%x for node %d!\n",
+ type, node_xp);
+ return -EINVAL;
+ }
+@@ -808,19 +808,19 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
+ if (event_id != e->event)
+ continue;
+ if (e->num_ports && port >= e->num_ports) {
+- dev_warn(ccn->dev, "Invalid port %d for node/XP %d!\n",
++ dev_dbg(ccn->dev, "Invalid port %d for node/XP %d!\n",
+ port, node_xp);
+ return -EINVAL;
+ }
+ if (e->num_vcs && vc >= e->num_vcs) {
+- dev_warn(ccn->dev, "Invalid vc %d for node/XP %d!\n",
++ dev_dbg(ccn->dev, "Invalid vc %d for node/XP %d!\n",
+ vc, node_xp);
+ return -EINVAL;
+ }
+ valid = 1;
+ }
+ if (!valid) {
+- dev_warn(ccn->dev, "Invalid event 0x%x for node/XP %d!\n",
++ dev_dbg(ccn->dev, "Invalid event 0x%x for node/XP %d!\n",
+ event_id, node_xp);
+ return -EINVAL;
+ }
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index 4b57a13758a4..bafb3d40545e 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -576,8 +576,10 @@ static int atmel_pctl_dt_node_to_map(struct pinctrl_dev *pctldev,
+ for_each_child_of_node(np_config, np) {
+ ret = atmel_pctl_dt_subnode_to_map(pctldev, np, map,
+ &reserved_maps, num_maps);
+- if (ret < 0)
++ if (ret < 0) {
++ of_node_put(np);
+ break;
++ }
+ }
+ }
+
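for_each_child_of_node() holds a reference on the current child and drops it when advancing to the next one; a break leaves the loop with that reference still held, which is the leak the added of_node_put() closes. The pattern, modeled with explicit get/put calls on a toy refcount (illustrative only):

#include <stdio.h>

struct node { int refs; };

static struct node *node_get(struct node *n) { n->refs++; return n; }
static void node_put(struct node *n) { n->refs--; }

int main(void)
{
        struct node nodes[3] = { {1}, {1}, {1} };
        int i;

        for (i = 0; i < 3; i++) {
                struct node *np = node_get(&nodes[i]);
                int ret = (i == 1) ? -1 : 0;    /* iteration 1 fails */

                if (ret < 0) {
                        node_put(np);   /* the of_node_put() the fix adds */
                        break;
                }
                node_put(np);   /* normally dropped when advancing */
        }

        for (i = 0; i < 3; i++)
                printf("node %d refs: %d\n", i, nodes[i].refs);
        return 0;
}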
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index ad80a17c9990..ace2bfbf1bee 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -890,11 +890,24 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl)
+ return ret;
+ }
+
+- ret = gpiochip_add_pin_range(&pctrl->chip, dev_name(pctrl->dev), 0, 0, chip->ngpio);
+- if (ret) {
+- dev_err(pctrl->dev, "Failed to add pin range\n");
+- gpiochip_remove(&pctrl->chip);
+- return ret;
++ /*
++ * For DeviceTree-supported systems, the gpio core checks the
++ * pinctrl's device node for the "gpio-ranges" property.
++ * If it is present, it takes care of adding the pin ranges
++ * for the driver. In this case the driver can skip ahead.
++ *
++ * In order to remain compatible with older, existing DeviceTree
++ * files which don't set the "gpio-ranges" property or systems that
++ * utilize ACPI the driver has to call gpiochip_add_pin_range().
++ */
++ if (!of_property_read_bool(pctrl->dev->of_node, "gpio-ranges")) {
++ ret = gpiochip_add_pin_range(&pctrl->chip,
++ dev_name(pctrl->dev), 0, 0, chip->ngpio);
++ if (ret) {
++ dev_err(pctrl->dev, "Failed to add pin range\n");
++ gpiochip_remove(&pctrl->chip);
++ return ret;
++ }
+ }
+
+ ret = gpiochip_irqchip_add(chip,
+diff --git a/drivers/platform/x86/dell-smbios-base.c b/drivers/platform/x86/dell-smbios-base.c
+index 33fb2a20458a..9dc282ed5a9e 100644
+--- a/drivers/platform/x86/dell-smbios-base.c
++++ b/drivers/platform/x86/dell-smbios-base.c
+@@ -555,11 +555,10 @@ static void free_group(struct platform_device *pdev)
+
+ static int __init dell_smbios_init(void)
+ {
+- const struct dmi_device *valid;
+ int ret, wmi, smm;
+
+- valid = dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "Dell System", NULL);
+- if (!valid) {
++ if (!dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "Dell System", NULL) &&
++ !dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "www.dell.com", NULL)) {
+ pr_err("Unable to run on non-Dell system\n");
+ return -ENODEV;
+ }
+diff --git a/drivers/regulator/cpcap-regulator.c b/drivers/regulator/cpcap-regulator.c
+index f541b80f1b54..bd910fe123d9 100644
+--- a/drivers/regulator/cpcap-regulator.c
++++ b/drivers/regulator/cpcap-regulator.c
+@@ -222,7 +222,7 @@ static unsigned int cpcap_map_mode(unsigned int mode)
+ case CPCAP_BIT_AUDIO_LOW_PWR:
+ return REGULATOR_MODE_STANDBY;
+ default:
+- return -EINVAL;
++ return REGULATOR_MODE_INVALID;
+ }
+ }
+
+diff --git a/drivers/regulator/internal.h b/drivers/regulator/internal.h
+index abfd56e8c78a..24fde1e08f3a 100644
+--- a/drivers/regulator/internal.h
++++ b/drivers/regulator/internal.h
+@@ -56,14 +56,19 @@ static inline struct regulator_dev *dev_to_rdev(struct device *dev)
+ return container_of(dev, struct regulator_dev, dev);
+ }
+
+-struct regulator_dev *of_find_regulator_by_node(struct device_node *np);
+-
+ #ifdef CONFIG_OF
++struct regulator_dev *of_find_regulator_by_node(struct device_node *np);
+ struct regulator_init_data *regulator_of_get_init_data(struct device *dev,
+ const struct regulator_desc *desc,
+ struct regulator_config *config,
+ struct device_node **node);
+ #else
++static inline struct regulator_dev *
++of_find_regulator_by_node(struct device_node *np)
++{
++ return NULL;
++}
++
+ static inline struct regulator_init_data *
+ regulator_of_get_init_data(struct device *dev,
+ const struct regulator_desc *desc,
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index f47264fa1940..0d3f73eacb99 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -31,6 +31,7 @@ static void of_get_regulation_constraints(struct device_node *np,
+ struct regulation_constraints *constraints = &(*init_data)->constraints;
+ struct regulator_state *suspend_state;
+ struct device_node *suspend_np;
++ unsigned int mode;
+ int ret, i;
+ u32 pval;
+
+@@ -124,11 +125,11 @@ static void of_get_regulation_constraints(struct device_node *np,
+
+ if (!of_property_read_u32(np, "regulator-initial-mode", &pval)) {
+ if (desc && desc->of_map_mode) {
+- ret = desc->of_map_mode(pval);
+- if (ret == -EINVAL)
++ mode = desc->of_map_mode(pval);
++ if (mode == REGULATOR_MODE_INVALID)
+ pr_err("%s: invalid mode %u\n", np->name, pval);
+ else
+- constraints->initial_mode = ret;
++ constraints->initial_mode = mode;
+ } else {
+ pr_warn("%s: mapping for mode %d not defined\n",
+ np->name, pval);
+@@ -163,12 +164,12 @@ static void of_get_regulation_constraints(struct device_node *np,
+ if (!of_property_read_u32(suspend_np, "regulator-mode",
+ &pval)) {
+ if (desc && desc->of_map_mode) {
+- ret = desc->of_map_mode(pval);
+- if (ret == -EINVAL)
++ mode = desc->of_map_mode(pval);
++ if (mode == REGULATOR_MODE_INVALID)
+ pr_err("%s: invalid mode %u\n",
+ np->name, pval);
+ else
+- suspend_state->mode = ret;
++ suspend_state->mode = mode;
+ } else {
+ pr_warn("%s: mapping for mode %d not defined\n",
+ np->name, pval);
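The common thread in these regulator hunks: of_map_mode callbacks return unsigned int, so returning -EINVAL wraps to a huge positive value, and the old `ret == -EINVAL` test only worked by way of the narrowing conversion back into an int. The fix introduces a sentinel, REGULATOR_MODE_INVALID, defined outside the mode bitmask values (0x0 upstream), and compares against it with an unsigned variable. A compact demonstration:

#include <stdio.h>
#include <errno.h>

#define REGULATOR_MODE_INVALID  0x0     /* sentinel, as in the fix */
#define REGULATOR_MODE_NORMAL   0x2

/* unsigned return type: negative errnos cannot survive this */
static unsigned int map_mode(unsigned int hw_mode)
{
        switch (hw_mode) {
        case 1:
                return REGULATOR_MODE_NORMAL;
        default:
                return REGULATOR_MODE_INVALID;  /* not -EINVAL */
        }
}

int main(void)
{
        unsigned int mode = map_mode(99);

        printf("-EINVAL as unsigned: %u\n", (unsigned int)-EINVAL);
        if (mode == REGULATOR_MODE_INVALID)
                printf("invalid mode detected reliably\n");
        return 0;
}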
+diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c
+index 63922a2167e5..659e516455be 100644
+--- a/drivers/regulator/pfuze100-regulator.c
++++ b/drivers/regulator/pfuze100-regulator.c
+@@ -158,6 +158,7 @@ static const struct regulator_ops pfuze100_sw_regulator_ops = {
+ static const struct regulator_ops pfuze100_swb_regulator_ops = {
+ .enable = regulator_enable_regmap,
+ .disable = regulator_disable_regmap,
++ .is_enabled = regulator_is_enabled_regmap,
+ .list_voltage = regulator_list_voltage_table,
+ .map_voltage = regulator_map_voltage_ascend,
+ .set_voltage_sel = regulator_set_voltage_sel_regmap,
+diff --git a/drivers/regulator/twl-regulator.c b/drivers/regulator/twl-regulator.c
+index a4456db5849d..884c7505ed91 100644
+--- a/drivers/regulator/twl-regulator.c
++++ b/drivers/regulator/twl-regulator.c
+@@ -274,7 +274,7 @@ static inline unsigned int twl4030reg_map_mode(unsigned int mode)
+ case RES_STATE_SLEEP:
+ return REGULATOR_MODE_STANDBY;
+ default:
+- return -EINVAL;
++ return REGULATOR_MODE_INVALID;
+ }
+ }
+
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index 7cbdc9228dd5..6d4012dd6922 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -441,6 +441,11 @@ int rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+ {
+ int err;
+
++ if (!rtc->ops)
++ return -ENODEV;
++ else if (!rtc->ops->set_alarm)
++ return -EINVAL;
++
+ err = rtc_valid_tm(&alarm->time);
+ if (err != 0)
+ return err;
+diff --git a/drivers/rtc/rtc-tps6586x.c b/drivers/rtc/rtc-tps6586x.c
+index d7785ae0a2b4..1144fe07503e 100644
+--- a/drivers/rtc/rtc-tps6586x.c
++++ b/drivers/rtc/rtc-tps6586x.c
+@@ -276,14 +276,15 @@ static int tps6586x_rtc_probe(struct platform_device *pdev)
+ device_init_wakeup(&pdev->dev, 1);
+
+ platform_set_drvdata(pdev, rtc);
+- rtc->rtc = devm_rtc_device_register(&pdev->dev, dev_name(&pdev->dev),
+- &tps6586x_rtc_ops, THIS_MODULE);
++ rtc->rtc = devm_rtc_allocate_device(&pdev->dev);
+ if (IS_ERR(rtc->rtc)) {
+ ret = PTR_ERR(rtc->rtc);
+- dev_err(&pdev->dev, "RTC device register: ret %d\n", ret);
++ dev_err(&pdev->dev, "RTC allocate device: ret %d\n", ret);
+ goto fail_rtc_register;
+ }
+
++ rtc->rtc->ops = &tps6586x_rtc_ops;
++
+ ret = devm_request_threaded_irq(&pdev->dev, rtc->irq, NULL,
+ tps6586x_rtc_irq,
+ IRQF_ONESHOT,
+@@ -294,6 +295,13 @@ static int tps6586x_rtc_probe(struct platform_device *pdev)
+ goto fail_rtc_register;
+ }
+ disable_irq(rtc->irq);
++
++ ret = rtc_register_device(rtc->rtc);
++ if (ret) {
++ dev_err(&pdev->dev, "RTC device register: ret %d\n", ret);
++ goto fail_rtc_register;
++ }
++
+ return 0;
+
+ fail_rtc_register:
+diff --git a/drivers/rtc/rtc-tps65910.c b/drivers/rtc/rtc-tps65910.c
+index d0244d7979fc..a56b526db89a 100644
+--- a/drivers/rtc/rtc-tps65910.c
++++ b/drivers/rtc/rtc-tps65910.c
+@@ -380,6 +380,10 @@ static int tps65910_rtc_probe(struct platform_device *pdev)
+ if (!tps_rtc)
+ return -ENOMEM;
+
++ tps_rtc->rtc = devm_rtc_allocate_device(&pdev->dev);
++ if (IS_ERR(tps_rtc->rtc))
++ return PTR_ERR(tps_rtc->rtc);
++
+ /* Clear pending interrupts */
+ ret = regmap_read(tps65910->regmap, TPS65910_RTC_STATUS, &rtc_reg);
+ if (ret < 0)
+@@ -421,10 +425,10 @@ static int tps65910_rtc_probe(struct platform_device *pdev)
+ tps_rtc->irq = irq;
+ device_set_wakeup_capable(&pdev->dev, 1);
+
+- tps_rtc->rtc = devm_rtc_device_register(&pdev->dev, pdev->name,
+- &tps65910_rtc_ops, THIS_MODULE);
+- if (IS_ERR(tps_rtc->rtc)) {
+- ret = PTR_ERR(tps_rtc->rtc);
++ tps_rtc->rtc->ops = &tps65910_rtc_ops;
++
++ ret = rtc_register_device(tps_rtc->rtc);
++ if (ret) {
+ dev_err(&pdev->dev, "RTC device register: err %d\n", ret);
+ return ret;
+ }
+diff --git a/drivers/rtc/rtc-vr41xx.c b/drivers/rtc/rtc-vr41xx.c
+index 7ce22967fd16..7ed010714f29 100644
+--- a/drivers/rtc/rtc-vr41xx.c
++++ b/drivers/rtc/rtc-vr41xx.c
+@@ -292,13 +292,14 @@ static int rtc_probe(struct platform_device *pdev)
+ goto err_rtc1_iounmap;
+ }
+
+- rtc = devm_rtc_device_register(&pdev->dev, rtc_name, &vr41xx_rtc_ops,
+- THIS_MODULE);
++ rtc = devm_rtc_allocate_device(&pdev->dev);
+ if (IS_ERR(rtc)) {
+ retval = PTR_ERR(rtc);
+ goto err_iounmap_all;
+ }
+
++ rtc->ops = &vr41xx_rtc_ops;
++
+ rtc->max_user_freq = MAX_PERIODIC_RATE;
+
+ spin_lock_irq(&rtc_lock);
+@@ -340,6 +341,10 @@ static int rtc_probe(struct platform_device *pdev)
+
+ dev_info(&pdev->dev, "Real Time Clock of NEC VR4100 series\n");
+
++ retval = rtc_register_device(rtc);
++ if (retval)
++ goto err_iounmap_all;
++
+ return 0;
+
+ err_iounmap_all:
+diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
+index b415ba42ca73..599447032e50 100644
+--- a/drivers/s390/scsi/zfcp_dbf.c
++++ b/drivers/s390/scsi/zfcp_dbf.c
+@@ -285,6 +285,8 @@ void zfcp_dbf_rec_trig(char *tag, struct zfcp_adapter *adapter,
+ struct list_head *entry;
+ unsigned long flags;
+
++ lockdep_assert_held(&adapter->erp_lock);
++
+ if (unlikely(!debug_level_enabled(dbf->rec, level)))
+ return;
+
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index b42c9c479d4b..99ba4a770406 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -882,6 +882,11 @@ static int twa_chrdev_open(struct inode *inode, struct file *file)
+ unsigned int minor_number;
+ int retval = TW_IOCTL_ERROR_OS_ENODEV;
+
++ if (!capable(CAP_SYS_ADMIN)) {
++ retval = -EACCES;
++ goto out;
++ }
++
+ minor_number = iminor(inode);
+ if (minor_number >= twa_device_extension_count)
+ goto out;
+diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
+index 33261b690774..f6179e3d6953 100644
+--- a/drivers/scsi/3w-xxxx.c
++++ b/drivers/scsi/3w-xxxx.c
+@@ -1033,6 +1033,9 @@ static int tw_chrdev_open(struct inode *inode, struct file *file)
+
+ dprintk(KERN_WARNING "3w-xxxx: tw_ioctl_open()\n");
+
++ if (!capable(CAP_SYS_ADMIN))
++ return -EACCES;
++
+ minor_number = iminor(inode);
+ if (minor_number >= tw_device_extension_count)
+ return -ENODEV;
+diff --git a/drivers/scsi/cxlflash/main.c b/drivers/scsi/cxlflash/main.c
+index d8fe7ab870b8..f97f44b4b706 100644
+--- a/drivers/scsi/cxlflash/main.c
++++ b/drivers/scsi/cxlflash/main.c
+@@ -946,9 +946,9 @@ static void cxlflash_remove(struct pci_dev *pdev)
+ return;
+ }
+
+- /* If a Task Management Function is active, wait for it to complete
+- * before continuing with remove.
+- */
++ /* Yield to running recovery threads before continuing with remove */
++ wait_event(cfg->reset_waitq, cfg->state != STATE_RESET &&
++ cfg->state != STATE_PROBING);
+ spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
+ if (cfg->tmf_active)
+ wait_event_interruptible_lock_irq(cfg->tmf_waitq,
+@@ -1303,7 +1303,10 @@ static void afu_err_intr_init(struct afu *afu)
+ for (i = 0; i < afu->num_hwqs; i++) {
+ hwq = get_hwq(afu, i);
+
+- writeq_be(SISL_MSI_SYNC_ERROR, &hwq->host_map->ctx_ctrl);
++ reg = readq_be(&hwq->host_map->ctx_ctrl);
++ WARN_ON((reg & SISL_CTX_CTRL_LISN_MASK) != 0);
++ reg |= SISL_MSI_SYNC_ERROR;
++ writeq_be(reg, &hwq->host_map->ctx_ctrl);
+ writeq_be(SISL_ISTATUS_MASK, &hwq->host_map->intr_mask);
+ }
+ }
+diff --git a/drivers/scsi/cxlflash/sislite.h b/drivers/scsi/cxlflash/sislite.h
+index bedf1ce2f33c..d8940f1ae219 100644
+--- a/drivers/scsi/cxlflash/sislite.h
++++ b/drivers/scsi/cxlflash/sislite.h
+@@ -284,6 +284,7 @@ struct sisl_host_map {
+ __be64 cmd_room;
+ __be64 ctx_ctrl; /* least significant byte or b56:63 is LISN# */
+ #define SISL_CTX_CTRL_UNMAP_SECTOR 0x8000000000000000ULL /* b0 */
++#define SISL_CTX_CTRL_LISN_MASK (0xFFULL)
+ __be64 mbox_w; /* restricted use */
+ __be64 sq_start; /* Submission Queue (R/W): write sequence and */
+ __be64 sq_end; /* inclusion semantics are the same as RRQ */
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 6f3e5ba6b472..3d3aa47bab69 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -348,10 +348,11 @@ struct hisi_sas_err_record_v3 {
+ #define DIR_TO_DEVICE 2
+ #define DIR_RESERVED 3
+
+-#define CMD_IS_UNCONSTRAINT(cmd) \
+- ((cmd == ATA_CMD_READ_LOG_EXT) || \
+- (cmd == ATA_CMD_READ_LOG_DMA_EXT) || \
+- (cmd == ATA_CMD_DEV_RESET))
++#define FIS_CMD_IS_UNCONSTRAINED(fis) \
++ ((fis.command == ATA_CMD_READ_LOG_EXT) || \
++ (fis.command == ATA_CMD_READ_LOG_DMA_EXT) || \
++ ((fis.command == ATA_CMD_DEV_RESET) && \
++ ((fis.control & ATA_SRST) != 0)))
+
+ static u32 hisi_sas_read32(struct hisi_hba *hisi_hba, u32 off)
+ {
+@@ -1046,7 +1047,7 @@ static int prep_ata_v3_hw(struct hisi_hba *hisi_hba,
+ << CMD_HDR_FRAME_TYPE_OFF;
+ dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF;
+
+- if (CMD_IS_UNCONSTRAINT(task->ata_task.fis.command))
++ if (FIS_CMD_IS_UNCONSTRAINED(task->ata_task.fis))
+ dw1 |= 1 << CMD_HDR_UNCON_CMD_OFF;
+
+ hdr->dw1 = cpu_to_le32(dw1);
+diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
+index 7195cff51d4c..9b6f5d024dba 100644
+--- a/drivers/scsi/megaraid.c
++++ b/drivers/scsi/megaraid.c
+@@ -4199,6 +4199,9 @@ megaraid_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ int irq, i, j;
+ int error = -ENODEV;
+
++ if (hba_count >= MAX_CONTROLLERS)
++ goto out;
++
+ if (pci_enable_device(pdev))
+ goto out;
+ pci_set_master(pdev);
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index f4d988dd1e9d..35497abb0e81 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -2981,6 +2981,9 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,
+ pRAID_Context->timeout_value = cpu_to_le16(os_timeout_value);
+ pRAID_Context->virtual_disk_tgt_id = cpu_to_le16(device_id);
+ } else {
++ if (os_timeout_value)
++ os_timeout_value++;
++
+ /* system pd Fast Path */
+ io_request->Function = MPI2_FUNCTION_SCSI_IO_REQUEST;
+ timeout_limit = (scmd->device->type == TYPE_DISK) ?
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 284ccb566b19..5015b8fbbfc5 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -1647,6 +1647,15 @@ static int qedf_vport_destroy(struct fc_vport *vport)
+ struct Scsi_Host *shost = vport_to_shost(vport);
+ struct fc_lport *n_port = shost_priv(shost);
+ struct fc_lport *vn_port = vport->dd_data;
++ struct qedf_ctx *qedf = lport_priv(vn_port);
++
++ if (!qedf) {
++ QEDF_ERR(NULL, "qedf is NULL.\n");
++ goto out;
++ }
++
++ /* Set unloading bit on vport qedf_ctx to prevent more I/O */
++ set_bit(QEDF_UNLOADING, &qedf->flags);
+
+ mutex_lock(&n_port->lp_mutex);
+ list_del(&vn_port->list);
+@@ -1673,6 +1682,7 @@ static int qedf_vport_destroy(struct fc_vport *vport)
+ if (vn_port->host)
+ scsi_host_put(vn_port->host);
+
++out:
+ return 0;
+ }
+
+diff --git a/drivers/scsi/scsi_dh.c b/drivers/scsi/scsi_dh.c
+index 188f30572aa1..5a58cbf3a75d 100644
+--- a/drivers/scsi/scsi_dh.c
++++ b/drivers/scsi/scsi_dh.c
+@@ -58,7 +58,10 @@ static const struct scsi_dh_blist scsi_dh_blist[] = {
+ {"IBM", "3526", "rdac", },
+ {"IBM", "3542", "rdac", },
+ {"IBM", "3552", "rdac", },
+- {"SGI", "TP9", "rdac", },
++ {"SGI", "TP9300", "rdac", },
++ {"SGI", "TP9400", "rdac", },
++ {"SGI", "TP9500", "rdac", },
++ {"SGI", "TP9700", "rdac", },
+ {"SGI", "IS", "rdac", },
+ {"STK", "OPENstorage", "rdac", },
+ {"STK", "FLEXLINE 380", "rdac", },
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 00e79057f870..15c394d95445 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -4969,6 +4969,7 @@ static void ufshcd_exception_event_handler(struct work_struct *work)
+ hba = container_of(work, struct ufs_hba, eeh_work);
+
+ pm_runtime_get_sync(hba->dev);
++ scsi_block_requests(hba->host);
+ err = ufshcd_get_ee_status(hba, &status);
+ if (err) {
+ dev_err(hba->dev, "%s: failed to get exception status %d\n",
+@@ -4982,6 +4983,7 @@ static void ufshcd_exception_event_handler(struct work_struct *work)
+ ufshcd_bkops_exception_event_handler(hba);
+
+ out:
++ scsi_unblock_requests(hba->host);
+ pm_runtime_put_sync(hba->dev);
+ return;
+ }
+@@ -6799,9 +6801,16 @@ static int __ufshcd_setup_clocks(struct ufs_hba *hba, bool on,
+ if (list_empty(head))
+ goto out;
+
+- ret = ufshcd_vops_setup_clocks(hba, on, PRE_CHANGE);
+- if (ret)
+- return ret;
++ /*
++ * vendor specific setup_clocks ops may depend on clocks managed by
++ * this standard driver hence call the vendor specific setup_clocks
++ * before disabling the clocks managed here.
++ */
++ if (!on) {
++ ret = ufshcd_vops_setup_clocks(hba, on, PRE_CHANGE);
++ if (ret)
++ return ret;
++ }
+
+ list_for_each_entry(clki, head, list) {
+ if (!IS_ERR_OR_NULL(clki->clk)) {
+@@ -6825,9 +6834,16 @@ static int __ufshcd_setup_clocks(struct ufs_hba *hba, bool on,
+ }
+ }
+
+- ret = ufshcd_vops_setup_clocks(hba, on, POST_CHANGE);
+- if (ret)
+- return ret;
++ /*
++ * vendor specific setup_clocks ops may depend on clocks managed by
++ * this standard driver hence call the vendor specific setup_clocks
++ * after enabling the clocks managed here.
++ */
++ if (on) {
++ ret = ufshcd_vops_setup_clocks(hba, on, POST_CHANGE);
++ if (ret)
++ return ret;
++ }
+
+ out:
+ if (ret) {
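The reordering in __ufshcd_setup_clocks() encodes a dependency: clocks handled by the vendor hook may depend on the clocks this function controls, so the hook has to run while those clocks are still (or already) ticking, i.e. before a disable and after an enable. The resulting control flow, boiled down with the hook and the clock work stubbed out:

#include <stdio.h>
#include <stdbool.h>

enum notify { PRE_CHANGE, POST_CHANGE };

static void vendor_setup_clocks(bool on, enum notify when)
{
        printf("vendor hook: %s, %s\n", on ? "on" : "off",
               when == PRE_CHANGE ? "pre" : "post");
}

static void setup_clocks(bool on)
{
        if (!on)
                vendor_setup_clocks(on, PRE_CHANGE);    /* clocks still on */

        printf("core clocks: %s\n", on ? "enabled" : "disabled");

        if (on)
                vendor_setup_clocks(on, POST_CHANGE);   /* clocks now on */
}

int main(void)
{
        setup_clocks(true);
        setup_clocks(false);
        return 0;
}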
+diff --git a/drivers/soc/imx/gpcv2.c b/drivers/soc/imx/gpcv2.c
+index afc7ecc3c187..f4e3bd40c72e 100644
+--- a/drivers/soc/imx/gpcv2.c
++++ b/drivers/soc/imx/gpcv2.c
+@@ -155,7 +155,7 @@ static int imx7_gpc_pu_pgc_sw_pdn_req(struct generic_pm_domain *genpd)
+ return imx7_gpc_pu_pgc_sw_pxx_req(genpd, false);
+ }
+
+-static struct imx7_pgc_domain imx7_pgc_domains[] = {
++static const struct imx7_pgc_domain imx7_pgc_domains[] = {
+ [IMX7_POWER_DOMAIN_MIPI_PHY] = {
+ .genpd = {
+ .name = "mipi-phy",
+@@ -321,11 +321,6 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
+ continue;
+ }
+
+- domain = &imx7_pgc_domains[domain_index];
+- domain->regmap = regmap;
+- domain->genpd.power_on = imx7_gpc_pu_pgc_sw_pup_req;
+- domain->genpd.power_off = imx7_gpc_pu_pgc_sw_pdn_req;
+-
+ pd_pdev = platform_device_alloc("imx7-pgc-domain",
+ domain_index);
+ if (!pd_pdev) {
+@@ -334,7 +329,20 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
+ return -ENOMEM;
+ }
+
+- pd_pdev->dev.platform_data = domain;
++ ret = platform_device_add_data(pd_pdev,
++ &imx7_pgc_domains[domain_index],
++ sizeof(imx7_pgc_domains[domain_index]));
++ if (ret) {
++ platform_device_put(pd_pdev);
++ of_node_put(np);
++ return ret;
++ }
++
++ domain = pd_pdev->dev.platform_data;
++ domain->regmap = regmap;
++ domain->genpd.power_on = imx7_gpc_pu_pgc_sw_pup_req;
++ domain->genpd.power_off = imx7_gpc_pu_pgc_sw_pdn_req;
++
+ pd_pdev->dev.parent = dev;
+ pd_pdev->dev.of_node = np;
+
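The old gpcv2 code wrote regmap and the genpd callbacks straight into the shared imx7_pgc_domains[] array, which cannot work once that array is const. platform_device_add_data() copies the template into memory owned by the platform device, so each instance customizes only its own copy. The idea in miniature (types invented for illustration, not the imx API):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct domain { const char *name; void *regmap; };

static const struct domain templates[] = {
        { "mipi-phy", NULL },
        { "pcie-phy", NULL },
};

/* platform_device_add_data() analogue: duplicate the template so
 * the const original is never written to. */
static struct domain *instantiate(int idx, void *regmap)
{
        struct domain *d = malloc(sizeof(*d));

        if (!d)
                return NULL;
        memcpy(d, &templates[idx], sizeof(*d));
        d->regmap = regmap;     /* per-instance field, set on the copy */
        return d;
}

int main(void)
{
        int dummy;
        struct domain *d = instantiate(0, &dummy);

        if (!d)
                return 1;
        printf("instance %s customized; template regmap still %p\n",
               d->name, templates[0].regmap);
        free(d);
        return 0;
}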
+diff --git a/drivers/soc/qcom/qmi_interface.c b/drivers/soc/qcom/qmi_interface.c
+index 321982277697..938ca41c56cd 100644
+--- a/drivers/soc/qcom/qmi_interface.c
++++ b/drivers/soc/qcom/qmi_interface.c
+@@ -639,10 +639,11 @@ int qmi_handle_init(struct qmi_handle *qmi, size_t recv_buf_size,
+ if (ops)
+ qmi->ops = *ops;
+
++ /* Make room for the header */
++ recv_buf_size += sizeof(struct qmi_header);
++ /* Must also be sufficient to hold a control packet */
+ if (recv_buf_size < sizeof(struct qrtr_ctrl_pkt))
+ recv_buf_size = sizeof(struct qrtr_ctrl_pkt);
+- else
+- recv_buf_size += sizeof(struct qmi_header);
+
+ qmi->recv_buf_size = recv_buf_size;
+ qmi->recv_buf = kzalloc(recv_buf_size, GFP_KERNEL);
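Before this change the QMI header room was only added when the caller's size already exceeded the control-packet size, so small receive buffers ended up with no room for the header. The corrected sizing is simply "payload plus header, floored at the control-packet size"; with made-up sizes:

#include <stdio.h>

#define QMI_HDR_LEN      7      /* illustrative header size */
#define CTRL_PKT_LEN    20      /* illustrative control-packet size */

static unsigned int recv_buf_size(unsigned int payload)
{
        unsigned int size = payload + QMI_HDR_LEN;      /* always add header */

        if (size < CTRL_PKT_LEN)        /* must also fit a control packet */
                size = CTRL_PKT_LEN;
        return size;
}

int main(void)
{
        printf("payload 4  -> buf %u\n", recv_buf_size(4));     /* floor: 20 */
        printf("payload 64 -> buf %u\n", recv_buf_size(64));    /* 64 + 7 */
        return 0;
}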
+diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c
+index 0b94d62fad2b..493865977e3d 100644
+--- a/drivers/soc/qcom/smem.c
++++ b/drivers/soc/qcom/smem.c
+@@ -362,13 +362,8 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem,
+ cached = phdr_to_last_cached_entry(phdr);
+
+ while (hdr < end) {
+- if (hdr->canary != SMEM_PRIVATE_CANARY) {
+- dev_err(smem->dev,
+- "Found invalid canary in hosts %d:%d partition\n",
+- phdr->host0, phdr->host1);
+- return -EINVAL;
+- }
+-
++ if (hdr->canary != SMEM_PRIVATE_CANARY)
++ goto bad_canary;
+ if (le16_to_cpu(hdr->item) == item)
+ return -EEXIST;
+
+@@ -397,6 +392,11 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem,
+ le32_add_cpu(&phdr->offset_free_uncached, alloc_size);
+
+ return 0;
++bad_canary:
++ dev_err(smem->dev, "Found invalid canary in hosts %hu:%hu partition\n",
++ le16_to_cpu(phdr->host0), le16_to_cpu(phdr->host1));
++
++ return -EINVAL;
+ }
+
+ static int qcom_smem_alloc_global(struct qcom_smem *smem,
+@@ -560,8 +560,8 @@ static void *qcom_smem_get_private(struct qcom_smem *smem,
+ return ERR_PTR(-ENOENT);
+
+ invalid_canary:
+- dev_err(smem->dev, "Found invalid canary in hosts %d:%d partition\n",
+- phdr->host0, phdr->host1);
++ dev_err(smem->dev, "Found invalid canary in hosts %hu:%hu partition\n",
++ le16_to_cpu(phdr->host0), le16_to_cpu(phdr->host1));
+
+ return ERR_PTR(-EINVAL);
+ }
+@@ -695,9 +695,10 @@ static u32 qcom_smem_get_item_count(struct qcom_smem *smem)
+ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
+ {
+ struct smem_partition_header *header;
+- struct smem_ptable_entry *entry = NULL;
++ struct smem_ptable_entry *entry;
+ struct smem_ptable *ptable;
+ u32 host0, host1, size;
++ bool found = false;
+ int i;
+
+ ptable = qcom_smem_get_ptable(smem);
+@@ -709,11 +710,13 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
+ host0 = le16_to_cpu(entry->host0);
+ host1 = le16_to_cpu(entry->host1);
+
+- if (host0 == SMEM_GLOBAL_HOST && host0 == host1)
++ if (host0 == SMEM_GLOBAL_HOST && host0 == host1) {
++ found = true;
+ break;
++ }
+ }
+
+- if (!entry) {
++ if (!found) {
+ dev_err(smem->dev, "Missing entry for global partition\n");
+ return -EINVAL;
+ }
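In the loop above, entry is assigned on every pass, so after an unsuccessful walk it still points at the last element examined; the old `if (!entry)` could only trigger when the table had no entries at all. A boolean found flag makes the no-match case explicit:

#include <stdio.h>
#include <stdbool.h>

struct entry { int host0, host1; };

int main(void)
{
        struct entry table[] = { {1, 2}, {3, 4} };
        struct entry *e = NULL;
        bool found = false;
        unsigned int i;

        for (i = 0; i < 2; i++) {
                e = &table[i];  /* non-NULL from the first pass on */
                if (e->host0 == 0 && e->host1 == 0) {
                        found = true;
                        break;
                }
        }

        if (!found)     /* "e != NULL" would wrongly pass here */
                printf("missing entry for global partition\n");
        return found ? 0 : 1;
}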
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index d9fcdb592b39..3e3d12ce4587 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -559,22 +559,28 @@ EXPORT_SYMBOL(tegra_powergate_remove_clamping);
+ int tegra_powergate_sequence_power_up(unsigned int id, struct clk *clk,
+ struct reset_control *rst)
+ {
+- struct tegra_powergate pg;
++ struct tegra_powergate *pg;
+ int err;
+
+ if (!tegra_powergate_is_available(id))
+ return -EINVAL;
+
+- pg.id = id;
+- pg.clks = &clk;
+- pg.num_clks = 1;
+- pg.reset = rst;
+- pg.pmc = pmc;
++ pg = kzalloc(sizeof(*pg), GFP_KERNEL);
++ if (!pg)
++ return -ENOMEM;
+
+- err = tegra_powergate_power_up(&pg, false);
++ pg->id = id;
++ pg->clks = &clk;
++ pg->num_clks = 1;
++ pg->reset = rst;
++ pg->pmc = pmc;
++
++ err = tegra_powergate_power_up(pg, false);
+ if (err)
+ pr_err("failed to turn on partition %d: %d\n", id, err);
+
++ kfree(pg);
++
+ return err;
+ }
+ EXPORT_SYMBOL(tegra_powergate_sequence_power_up);
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index 5c82910e3480..7fe4488ace57 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -574,10 +574,15 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ master->max_speed_hz = rate >> 2;
+
+ ret = devm_spi_register_master(&pdev->dev, master);
+- if (!ret)
+- return 0;
++ if (ret) {
++ dev_err(&pdev->dev, "spi master registration failed\n");
++ goto out_clk;
++ }
+
+- dev_err(&pdev->dev, "spi master registration failed\n");
++ return 0;
++
++out_clk:
++ clk_disable_unprepare(spicc->core);
+
+ out_master:
+ spi_master_put(master);
+diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
+index baa3a9fa2638..92e57e35418b 100644
+--- a/drivers/spi/spi-s3c64xx.c
++++ b/drivers/spi/spi-s3c64xx.c
+@@ -1260,8 +1260,6 @@ static int s3c64xx_spi_resume(struct device *dev)
+ if (ret < 0)
+ return ret;
+
+- s3c64xx_spi_hwinit(sdd, sdd->port_id);
+-
+ return spi_master_resume(master);
+ }
+ #endif /* CONFIG_PM_SLEEP */
+@@ -1299,6 +1297,8 @@ static int s3c64xx_spi_runtime_resume(struct device *dev)
+ if (ret != 0)
+ goto err_disable_src_clk;
+
++ s3c64xx_spi_hwinit(sdd, sdd->port_id);
++
+ return 0;
+
+ err_disable_src_clk:
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index 8171eedbfc90..c75641b9df79 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -564,14 +564,16 @@ static int sh_msiof_spi_setup(struct spi_device *spi)
+
+ /* Configure native chip select mode/polarity early */
+ clr = MDR1_SYNCMD_MASK;
+- set = MDR1_TRMD | TMDR1_PCON | MDR1_SYNCMD_SPI;
++ set = MDR1_SYNCMD_SPI;
+ if (spi->mode & SPI_CS_HIGH)
+ clr |= BIT(MDR1_SYNCAC_SHIFT);
+ else
+ set |= BIT(MDR1_SYNCAC_SHIFT);
+ pm_runtime_get_sync(&p->pdev->dev);
+ tmp = sh_msiof_read(p, TMDR1) & ~clr;
+- sh_msiof_write(p, TMDR1, tmp | set);
++ sh_msiof_write(p, TMDR1, tmp | set | MDR1_TRMD | TMDR1_PCON);
++ tmp = sh_msiof_read(p, RMDR1) & ~clr;
++ sh_msiof_write(p, RMDR1, tmp | set);
+ pm_runtime_put(&p->pdev->dev);
+ p->native_cs_high = spi->mode & SPI_CS_HIGH;
+ p->native_cs_inited = true;
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 7b213faa0a2b..91e76c776037 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1222,6 +1222,7 @@ static void __spi_pump_messages(struct spi_controller *ctlr, bool in_kthread)
+ if (!was_busy && ctlr->auto_runtime_pm) {
+ ret = pm_runtime_get_sync(ctlr->dev.parent);
+ if (ret < 0) {
++ pm_runtime_put_noidle(ctlr->dev.parent);
+ dev_err(&ctlr->dev, "Failed to power device: %d\n",
+ ret);
+ mutex_unlock(&ctlr->io_mutex);
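pm_runtime_get_sync() raises the device's usage count even when the resume it triggers fails, so a bare error return leaks a reference; the added pm_runtime_put_noidle() rebalances the count without scheduling further PM activity. The refcount obligation, reduced to a counter:

#include <stdio.h>
#include <errno.h>

static int usage_count;

/* pm_runtime_get_sync() model: the count goes up regardless of
 * whether the resume succeeds */
static int get_sync(int fail_resume)
{
        usage_count++;
        return fail_resume ? -EIO : 0;
}

static void put_noidle(void)
{
        usage_count--;
}

int main(void)
{
        int ret = get_sync(1);

        if (ret < 0) {
                put_noidle();   /* the call the fix adds */
                printf("resume failed: %d\n", ret);
        }
        printf("usage count balanced: %d\n", usage_count);
        return 0;
}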
+diff --git a/drivers/staging/ks7010/ks7010_sdio.c b/drivers/staging/ks7010/ks7010_sdio.c
+index b8f55a11ee1c..7391bba405ae 100644
+--- a/drivers/staging/ks7010/ks7010_sdio.c
++++ b/drivers/staging/ks7010/ks7010_sdio.c
+@@ -657,8 +657,11 @@ static int ks7010_upload_firmware(struct ks_sdio_card *card)
+
+ /* Firmware running ? */
+ ret = ks7010_sdio_readb(priv, GCR_A, &byte);
++ if (ret)
++ goto release_host_and_free;
+ if (byte == GCR_A_RUN) {
+ netdev_dbg(priv->net_dev, "MAC firmware running ...\n");
++ ret = -EBUSY;
+ goto release_host_and_free;
+ }
+
+diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+index 7ae2955c4db6..355c81651a65 100644
+--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
++++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+@@ -1702,7 +1702,7 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
+ return 0;
+ }
+ spin_unlock(&fps->fps_lock);
+- rc = -EBUSY;
++ rc = -EAGAIN;
+ }
+
+ spin_lock(&fps->fps_lock);
+diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
+index 6690a6cd4e34..5828ee96d74c 100644
+--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
++++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
+@@ -48,7 +48,7 @@ static int kiblnd_init_rdma(struct kib_conn *conn, struct kib_tx *tx, int type,
+ __u64 dstcookie);
+ static void kiblnd_queue_tx_locked(struct kib_tx *tx, struct kib_conn *conn);
+ static void kiblnd_queue_tx(struct kib_tx *tx, struct kib_conn *conn);
+-static void kiblnd_unmap_tx(struct lnet_ni *ni, struct kib_tx *tx);
++static void kiblnd_unmap_tx(struct kib_tx *tx);
+ static void kiblnd_check_sends_locked(struct kib_conn *conn);
+
+ static void
+@@ -66,7 +66,7 @@ kiblnd_tx_done(struct lnet_ni *ni, struct kib_tx *tx)
+ LASSERT(!tx->tx_waiting); /* mustn't be awaiting peer response */
+ LASSERT(tx->tx_pool);
+
+- kiblnd_unmap_tx(ni, tx);
++ kiblnd_unmap_tx(tx);
+
+ /* tx may have up to 2 lnet msgs to finalise */
+ lntmsg[0] = tx->tx_lntmsg[0]; tx->tx_lntmsg[0] = NULL;
+@@ -591,13 +591,9 @@ kiblnd_fmr_map_tx(struct kib_net *net, struct kib_tx *tx, struct kib_rdma_desc *
+ return 0;
+ }
+
+-static void kiblnd_unmap_tx(struct lnet_ni *ni, struct kib_tx *tx)
++static void kiblnd_unmap_tx(struct kib_tx *tx)
+ {
+- struct kib_net *net = ni->ni_data;
+-
+- LASSERT(net);
+-
+- if (net->ibn_fmr_ps)
++ if (tx->fmr.fmr_pfmr || tx->fmr.fmr_frd)
+ kiblnd_fmr_pool_unmap(&tx->fmr, tx->tx_status);
+
+ if (tx->tx_nfrags) {
+@@ -1290,11 +1286,6 @@ kiblnd_connect_peer(struct kib_peer *peer)
+ goto failed2;
+ }
+
+- LASSERT(cmid->device);
+- CDEBUG(D_NET, "%s: connection bound to %s:%pI4h:%s\n",
+- libcfs_nid2str(peer->ibp_nid), dev->ibd_ifname,
+- &dev->ibd_ifip, cmid->device->name);
+-
+ return;
+
+ failed2:
+@@ -2996,8 +2987,19 @@ kiblnd_cm_callback(struct rdma_cm_id *cmid, struct rdma_cm_event *event)
+ } else {
+ rc = rdma_resolve_route(
+ cmid, *kiblnd_tunables.kib_timeout * 1000);
+- if (!rc)
++ if (!rc) {
++ struct kib_net *net = peer->ibp_ni->ni_data;
++ struct kib_dev *dev = net->ibn_dev;
++
++ CDEBUG(D_NET, "%s: connection bound to "\
++ "%s:%pI4h:%s\n",
++ libcfs_nid2str(peer->ibp_nid),
++ dev->ibd_ifname,
++ &dev->ibd_ifip, cmid->device->name);
++
+ return 0;
++ }
++
+ /* Can't initiate route resolution */
+ CERROR("Can't resolve route for %s: %d\n",
+ libcfs_nid2str(peer->ibp_nid), rc);
+diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
+index 95bea351d21d..59d6259f2c14 100644
+--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
++++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
+@@ -1565,8 +1565,10 @@ struct ldlm_lock *ldlm_lock_create(struct ldlm_namespace *ns,
+ return ERR_CAST(res);
+
+ lock = ldlm_lock_new(res);
+- if (!lock)
++ if (!lock) {
++ ldlm_resource_putref(res);
+ return ERR_PTR(-ENOMEM);
++ }
+
+ lock->l_req_mode = mode;
+ lock->l_ast_data = data;
+@@ -1609,6 +1611,8 @@ out:
+ return ERR_PTR(rc);
+ }
+
++
++
+ /**
+ * Enqueue (request) a lock.
+ * On the client this is called from ldlm_cli_enqueue_fini
+diff --git a/drivers/staging/lustre/lustre/llite/xattr.c b/drivers/staging/lustre/lustre/llite/xattr.c
+index 2d78432963dc..5caccfef9c62 100644
+--- a/drivers/staging/lustre/lustre/llite/xattr.c
++++ b/drivers/staging/lustre/lustre/llite/xattr.c
+@@ -94,7 +94,11 @@ ll_xattr_set_common(const struct xattr_handler *handler,
+ __u64 valid;
+ int rc;
+
+- if (flags == XATTR_REPLACE) {
++ /* When setxattr() is called with a size of 0 the value is
++ * unconditionally replaced by "". When removexattr() is
++ * called we get a NULL value and XATTR_REPLACE for flags.
++ */
++ if (!value && flags == XATTR_REPLACE) {
+ ll_stats_ops_tally(ll_i2sbi(inode), LPROC_LL_REMOVEXATTR, 1);
+ valid = OBD_MD_FLXATTRRM;
+ } else {
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c b/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
+index c0849299d592..bba3d1745908 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
+@@ -397,14 +397,13 @@ static long __ov2680_set_exposure(struct v4l2_subdev *sd, int coarse_itg,
+ {
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct ov2680_device *dev = to_ov2680_sensor(sd);
+- u16 vts,hts;
++ u16 vts;
+ int ret,exp_val;
+
+ dev_dbg(&client->dev,
+ "+++++++__ov2680_set_exposure coarse_itg %d, gain %d, digitgain %d++\n",
+ coarse_itg, gain, digitgain);
+
+- hts = ov2680_res[dev->fmt_idx].pixels_per_line;
+ vts = ov2680_res[dev->fmt_idx].lines_per_frame;
+
+ /* group hold */
+@@ -1185,7 +1184,8 @@ static int ov2680_detect(struct i2c_client *client)
+ OV2680_SC_CMMN_SUB_ID, &high);
+ revision = (u8) high & 0x0f;
+
+- dev_info(&client->dev, "sensor_revision id = 0x%x\n", id);
++ dev_info(&client->dev, "sensor_revision id = 0x%x, rev= %d\n",
++ id, revision);
+
+ return 0;
+ }
+diff --git a/drivers/staging/media/atomisp/i2c/gc2235.h b/drivers/staging/media/atomisp/i2c/gc2235.h
+index 0e805bcfa4d8..54bf7812b27a 100644
+--- a/drivers/staging/media/atomisp/i2c/gc2235.h
++++ b/drivers/staging/media/atomisp/i2c/gc2235.h
+@@ -33,6 +33,11 @@
+
+ #include "../include/linux/atomisp_platform.h"
+
++/*
++ * FIXME: non-preview resolutions are currently broken
++ */
++#define ENABLE_NON_PREVIEW 0
++
+ /* Defines for register writes and register array processing */
+ #define I2C_MSG_LENGTH 0x2
+ #define I2C_RETRY_COUNT 5
+@@ -284,6 +289,7 @@ static struct gc2235_reg const gc2235_init_settings[] = {
+ /*
+ * Register settings for various resolution
+ */
++#if ENABLE_NON_PREVIEW
+ static struct gc2235_reg const gc2235_1296_736_30fps[] = {
+ { GC2235_8BIT, 0x8b, 0xa0 },
+ { GC2235_8BIT, 0x8c, 0x02 },
+@@ -387,6 +393,7 @@ static struct gc2235_reg const gc2235_960_640_30fps[] = {
+ { GC2235_8BIT, 0xfe, 0x00 }, /* switch to P0 */
+ { GC2235_TOK_TERM, 0, 0 }
+ };
++#endif
+
+ static struct gc2235_reg const gc2235_1600_900_30fps[] = {
+ { GC2235_8BIT, 0x8b, 0xa0 },
+@@ -578,7 +585,7 @@ static struct gc2235_resolution gc2235_res_preview[] = {
+ * Disable non-preview configurations until the configuration selection is
+ * improved.
+ */
+-#if 0
++#if ENABLE_NON_PREVIEW
+ static struct gc2235_resolution gc2235_res_still[] = {
+ {
+ .desc = "gc2235_1600_900_30fps",
+diff --git a/drivers/staging/media/atomisp/i2c/ov2680.h b/drivers/staging/media/atomisp/i2c/ov2680.h
+index cb38e6e79409..58d6be07d986 100644
+--- a/drivers/staging/media/atomisp/i2c/ov2680.h
++++ b/drivers/staging/media/atomisp/i2c/ov2680.h
+@@ -295,6 +295,7 @@ struct ov2680_format {
+ };
+
+
++#if 0 /* None of the definitions below are used currently */
+ /*
+ * 176x144 30fps VBlanking 1lane 10Bit (binning)
+ */
+@@ -513,7 +514,6 @@ struct ov2680_format {
+ {OV2680_8BIT, 0x5081, 0x41},
+ {OV2680_TOK_TERM, 0, 0}
+ };
+-
+ /*
+ * 800x600 30fps VBlanking 1lane 10Bit (binning)
+ */
+@@ -685,6 +685,7 @@ struct ov2680_format {
+ // {OV2680_8BIT, 0x5090, 0x0c},
+ {OV2680_TOK_TERM, 0, 0}
+ };
++#endif
+
+ /*
+ *1616x916 30fps VBlanking 1lane 10bit
+@@ -734,6 +735,7 @@ struct ov2680_format {
+ /*
+ * 1612x1212 30fps VBlanking 1lane 10Bit
+ */
++#if 0
+ static struct ov2680_reg const ov2680_1616x1082_30fps[] = {
+ {OV2680_8BIT, 0x3086, 0x00},
+ {OV2680_8BIT, 0x3501, 0x48},
+@@ -773,6 +775,7 @@ struct ov2680_format {
+ {OV2680_8BIT, 0x5081, 0x41},
+ {OV2680_TOK_TERM, 0, 0}
+ };
++#endif
+ /*
+ * 1616x1216 30fps VBlanking 1lane 10Bit
+ */
+diff --git a/drivers/staging/media/atomisp/i2c/ov2722.h b/drivers/staging/media/atomisp/i2c/ov2722.h
+index 757b37613ccc..d99188a5c9d0 100644
+--- a/drivers/staging/media/atomisp/i2c/ov2722.h
++++ b/drivers/staging/media/atomisp/i2c/ov2722.h
+@@ -254,6 +254,7 @@ struct ov2722_write_ctrl {
+ /*
+ * Register settings for various resolution
+ */
++#if 0
+ static struct ov2722_reg const ov2722_QVGA_30fps[] = {
+ {OV2722_8BIT, 0x3718, 0x10},
+ {OV2722_8BIT, 0x3702, 0x0c},
+@@ -581,6 +582,7 @@ static struct ov2722_reg const ov2722_VGA_30fps[] = {
+ {OV2722_8BIT, 0x3509, 0x10},
+ {OV2722_TOK_TERM, 0, 0},
+ };
++#endif
+
+ static struct ov2722_reg const ov2722_1632_1092_30fps[] = {
+ {OV2722_8BIT, 0x3021, 0x03}, /* For stand wait for
+@@ -784,6 +786,7 @@ static struct ov2722_reg const ov2722_1452_1092_30fps[] = {
+ {OV2722_8BIT, 0x3509, 0x00},
+ {OV2722_TOK_TERM, 0, 0}
+ };
++#if 0
+ static struct ov2722_reg const ov2722_1M3_30fps[] = {
+ {OV2722_8BIT, 0x3718, 0x10},
+ {OV2722_8BIT, 0x3702, 0x24},
+@@ -890,6 +893,7 @@ static struct ov2722_reg const ov2722_1M3_30fps[] = {
+ {OV2722_8BIT, 0x3509, 0x10},
+ {OV2722_TOK_TERM, 0, 0},
+ };
++#endif
+
+ static struct ov2722_reg const ov2722_1080p_30fps[] = {
+ {OV2722_8BIT, 0x3021, 0x03}, /* For stand wait for a whole
+@@ -996,6 +1000,7 @@ static struct ov2722_reg const ov2722_1080p_30fps[] = {
+ {OV2722_TOK_TERM, 0, 0}
+ };
+
++#if 0 /* Currently unused */
+ static struct ov2722_reg const ov2722_720p_30fps[] = {
+ {OV2722_8BIT, 0x3021, 0x03},
+ {OV2722_8BIT, 0x3718, 0x10},
+@@ -1095,6 +1100,7 @@ static struct ov2722_reg const ov2722_720p_30fps[] = {
+ {OV2722_8BIT, 0x3509, 0x00},
+ {OV2722_TOK_TERM, 0, 0},
+ };
++#endif
+
+ static struct ov2722_resolution ov2722_res_preview[] = {
+ {
+diff --git a/drivers/staging/media/atomisp/i2c/ov5693/ov5693.h b/drivers/staging/media/atomisp/i2c/ov5693/ov5693.h
+index 9058a82455a6..bba99406785e 100644
+--- a/drivers/staging/media/atomisp/i2c/ov5693/ov5693.h
++++ b/drivers/staging/media/atomisp/i2c/ov5693/ov5693.h
+@@ -31,6 +31,12 @@
+
+ #include "../../include/linux/atomisp_platform.h"
+
++/*
++ * FIXME: non-preview resolutions are currently broken
++ */
++#define ENABLE_NON_PREVIEW 0
++
++
+ #define OV5693_POWER_UP_RETRY_NUM 5
+
+ /* Defines for register writes and register array processing */
+@@ -503,6 +509,7 @@ static struct ov5693_reg const ov5693_global_setting[] = {
+ {OV5693_TOK_TERM, 0, 0}
+ };
+
++#if ENABLE_NON_PREVIEW
+ /*
+ * 654x496 30fps 17ms VBlanking 2lane 10Bit (Scaling)
+ */
+@@ -695,6 +702,7 @@ static struct ov5693_reg const ov5693_736x496[] = {
+ {OV5693_8BIT, 0x0100, 0x01},
+ {OV5693_TOK_TERM, 0, 0}
+ };
++#endif
+
+ /*
+ static struct ov5693_reg const ov5693_736x496[] = {
+@@ -727,6 +735,7 @@ static struct ov5693_reg const ov5693_736x496[] = {
+ /*
+ * 976x556 30fps 8.8ms VBlanking 2lane 10Bit (Scaling)
+ */
++#if ENABLE_NON_PREVIEW
+ static struct ov5693_reg const ov5693_976x556[] = {
+ {OV5693_8BIT, 0x3501, 0x7b},
+ {OV5693_8BIT, 0x3502, 0x00},
+@@ -819,6 +828,7 @@ static struct ov5693_reg const ov5693_1636p_30fps[] = {
+ {OV5693_8BIT, 0x0100, 0x01},
+ {OV5693_TOK_TERM, 0, 0}
+ };
++#endif
+
+ static struct ov5693_reg const ov5693_1616x1216_30fps[] = {
+ {OV5693_8BIT, 0x3501, 0x7b},
+@@ -859,6 +869,7 @@ static struct ov5693_reg const ov5693_1616x1216_30fps[] = {
+ /*
+ * 1940x1096 30fps 8.8ms VBlanking 2lane 10bit (Scaling)
+ */
++#if ENABLE_NON_PREVIEW
+ static struct ov5693_reg const ov5693_1940x1096[] = {
+ {OV5693_8BIT, 0x3501, 0x7b},
+ {OV5693_8BIT, 0x3502, 0x00},
+@@ -916,6 +927,7 @@ static struct ov5693_reg const ov5693_2592x1456_30fps[] = {
+ {OV5693_8BIT, 0x5002, 0x00},
+ {OV5693_TOK_TERM, 0, 0}
+ };
++#endif
+
+ static struct ov5693_reg const ov5693_2576x1456_30fps[] = {
+ {OV5693_8BIT, 0x3501, 0x7b},
+@@ -951,6 +963,7 @@ static struct ov5693_reg const ov5693_2576x1456_30fps[] = {
+ /*
+ * 2592x1944 30fps 0.6ms VBlanking 2lane 10Bit
+ */
++#if ENABLE_NON_PREVIEW
+ static struct ov5693_reg const ov5693_2592x1944_30fps[] = {
+ {OV5693_8BIT, 0x3501, 0x7b},
+ {OV5693_8BIT, 0x3502, 0x00},
+@@ -977,6 +990,7 @@ static struct ov5693_reg const ov5693_2592x1944_30fps[] = {
+ {OV5693_8BIT, 0x0100, 0x01},
+ {OV5693_TOK_TERM, 0, 0}
+ };
++#endif
+
+ /*
+ * 11:9 Full FOV Output, expected FOV Res: 2346x1920
+@@ -985,6 +999,7 @@ static struct ov5693_reg const ov5693_2592x1944_30fps[] = {
+ *
+ * WA: Left Offset: 8, Hor scal: 64
+ */
++#if ENABLE_NON_PREVIEW
+ static struct ov5693_reg const ov5693_1424x1168_30fps[] = {
+ {OV5693_8BIT, 0x3501, 0x3b}, /* long exposure[15:8] */
+ {OV5693_8BIT, 0x3502, 0x80}, /* long exposure[7:0] */
+@@ -1019,6 +1034,7 @@ static struct ov5693_reg const ov5693_1424x1168_30fps[] = {
+ {OV5693_8BIT, 0x0100, 0x01},
+ {OV5693_TOK_TERM, 0, 0}
+ };
++#endif
+
+ /*
+ * 3:2 Full FOV Output, expected FOV Res: 2560x1706
+@@ -1151,7 +1167,7 @@ static struct ov5693_resolution ov5693_res_preview[] = {
+ * Disable non-preview configurations until the configuration selection is
+ * improved.
+ */
+-#if 0
++#if ENABLE_NON_PREVIEW
+ struct ov5693_resolution ov5693_res_still[] = {
+ {
+ .desc = "ov5693_736x496_30fps",
+diff --git a/drivers/staging/media/atomisp/pci/atomisp2/atomisp_compat_ioctl32.c b/drivers/staging/media/atomisp/pci/atomisp2/atomisp_compat_ioctl32.c
+index 44c21813a06e..2d008590e26e 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp2/atomisp_compat_ioctl32.c
++++ b/drivers/staging/media/atomisp/pci/atomisp2/atomisp_compat_ioctl32.c
+@@ -77,7 +77,7 @@ static int get_v4l2_framebuffer32(struct v4l2_framebuffer *kp,
+ get_user(kp->flags, &up->flags))
+ return -EFAULT;
+
+- kp->base = compat_ptr(tmp);
++ kp->base = (void __force *)compat_ptr(tmp);
+ get_v4l2_pix_format((struct v4l2_pix_format *)&kp->fmt, &up->fmt);
+ return 0;
+ }
+@@ -228,10 +228,10 @@ static int get_atomisp_dvs_6axis_config32(struct atomisp_dvs_6axis_config *kp,
+ get_user(ycoords_uv, &up->ycoords_uv))
+ return -EFAULT;
+
+- kp->xcoords_y = compat_ptr(xcoords_y);
+- kp->ycoords_y = compat_ptr(ycoords_y);
+- kp->xcoords_uv = compat_ptr(xcoords_uv);
+- kp->ycoords_uv = compat_ptr(ycoords_uv);
++ kp->xcoords_y = (void __force *)compat_ptr(xcoords_y);
++ kp->ycoords_y = (void __force *)compat_ptr(ycoords_y);
++ kp->xcoords_uv = (void __force *)compat_ptr(xcoords_uv);
++ kp->ycoords_uv = (void __force *)compat_ptr(ycoords_uv);
+ return 0;
+ }
+
+@@ -292,7 +292,7 @@ static int get_atomisp_metadata_stat32(struct atomisp_metadata *kp,
+ return -EFAULT;
+
+ kp->data = compat_ptr(data);
+- kp->effective_width = compat_ptr(effective_width);
++ kp->effective_width = (void __force *)compat_ptr(effective_width);
+ return 0;
+ }
+
+@@ -356,7 +356,7 @@ static int get_atomisp_metadata_by_type_stat32(
+ return -EFAULT;
+
+ kp->data = compat_ptr(data);
+- kp->effective_width = compat_ptr(effective_width);
++ kp->effective_width = (void __force *)compat_ptr(effective_width);
+ return 0;
+ }
+
+@@ -433,7 +433,7 @@ static int get_atomisp_overlay32(struct atomisp_overlay *kp,
+ get_user(kp->overlay_start_x, &up->overlay_start_y))
+ return -EFAULT;
+
+- kp->frame = compat_ptr(frame);
++ kp->frame = (void __force *)compat_ptr(frame);
+ return 0;
+ }
+
+@@ -477,7 +477,7 @@ static int get_atomisp_calibration_group32(
+ get_user(calb_grp_values, &up->calb_grp_values))
+ return -EFAULT;
+
+- kp->calb_grp_values = compat_ptr(calb_grp_values);
++ kp->calb_grp_values = (void __force *)compat_ptr(calb_grp_values);
+ return 0;
+ }
+
+@@ -699,8 +699,8 @@ static int get_atomisp_parameters32(struct atomisp_parameters *kp,
+ return -EFAULT;
+
+ while (n >= 0) {
+- compat_uptr_t *src = (compat_uptr_t *)up + n;
+- uintptr_t *dst = (uintptr_t *)kp + n;
++ compat_uptr_t __user *src = ((compat_uptr_t __user *)up) + n;
++ uintptr_t *dst = ((uintptr_t *)kp) + n;
+
+ if (get_user((*dst), src))
+ return -EFAULT;
+@@ -747,12 +747,12 @@ static int get_atomisp_parameters32(struct atomisp_parameters *kp,
+ #endif
+ return -EFAULT;
+
+- kp->shading_table = user_ptr + offset;
++ kp->shading_table = (void __force *)user_ptr + offset;
+ offset = sizeof(struct atomisp_shading_table);
+ if (!kp->shading_table)
+ return -EFAULT;
+
+- if (copy_to_user(kp->shading_table,
++ if (copy_to_user((void __user *)kp->shading_table,
+ &karg.shading_table,
+ sizeof(struct atomisp_shading_table)))
+ return -EFAULT;
+@@ -773,13 +773,14 @@ static int get_atomisp_parameters32(struct atomisp_parameters *kp,
+ #endif
+ return -EFAULT;
+
+- kp->morph_table = user_ptr + offset;
++ kp->morph_table = (void __force *)user_ptr + offset;
+ offset += sizeof(struct atomisp_morph_table);
+ if (!kp->morph_table)
+ return -EFAULT;
+
+- if (copy_to_user(kp->morph_table, &karg.morph_table,
+- sizeof(struct atomisp_morph_table)))
++ if (copy_to_user((void __user *)kp->morph_table,
++ &karg.morph_table,
++ sizeof(struct atomisp_morph_table)))
+ return -EFAULT;
+ }
+
+@@ -798,13 +799,14 @@ static int get_atomisp_parameters32(struct atomisp_parameters *kp,
+ #endif
+ return -EFAULT;
+
+- kp->dvs2_coefs = user_ptr + offset;
++ kp->dvs2_coefs = (void __force *)user_ptr + offset;
+ offset += sizeof(struct atomisp_dis_coefficients);
+ if (!kp->dvs2_coefs)
+ return -EFAULT;
+
+- if (copy_to_user(kp->dvs2_coefs, &karg.dvs2_coefs,
+- sizeof(struct atomisp_dis_coefficients)))
++ if (copy_to_user((void __user *)kp->dvs2_coefs,
++ &karg.dvs2_coefs,
++ sizeof(struct atomisp_dis_coefficients)))
+ return -EFAULT;
+ }
+ /* handle dvs 6axis configuration */
+@@ -822,13 +824,14 @@ static int get_atomisp_parameters32(struct atomisp_parameters *kp,
+ #endif
+ return -EFAULT;
+
+- kp->dvs_6axis_config = user_ptr + offset;
++ kp->dvs_6axis_config = (void __force *)user_ptr + offset;
+ offset += sizeof(struct atomisp_dvs_6axis_config);
+ if (!kp->dvs_6axis_config)
+ return -EFAULT;
+
+- if (copy_to_user(kp->dvs_6axis_config, &karg.dvs_6axis_config,
+- sizeof(struct atomisp_dvs_6axis_config)))
++ if (copy_to_user((void __user *)kp->dvs_6axis_config,
++ &karg.dvs_6axis_config,
++ sizeof(struct atomisp_dvs_6axis_config)))
+ return -EFAULT;
+ }
+ }
+@@ -887,7 +890,7 @@ static int get_atomisp_sensor_ae_bracketing_lut(
+ get_user(lut, &up->lut))
+ return -EFAULT;
+
+- kp->lut = compat_ptr(lut);
++ kp->lut = (void __force *)compat_ptr(lut);
+ return 0;
+ }
+
+diff --git a/drivers/staging/most/cdev/cdev.c b/drivers/staging/most/cdev/cdev.c
+index 4d7fce8731fe..dfa8e4db2239 100644
+--- a/drivers/staging/most/cdev/cdev.c
++++ b/drivers/staging/most/cdev/cdev.c
+@@ -18,6 +18,8 @@
+ #include <linux/idr.h>
+ #include "most/core.h"
+
++#define CHRDEV_REGION_SIZE 50
++
+ static struct cdev_component {
+ dev_t devno;
+ struct ida minor_id;
+@@ -513,7 +515,7 @@ static int __init mod_init(void)
+ spin_lock_init(&ch_list_lock);
+ ida_init(&comp.minor_id);
+
+- err = alloc_chrdev_region(&comp.devno, 0, 50, "cdev");
++ err = alloc_chrdev_region(&comp.devno, 0, CHRDEV_REGION_SIZE, "cdev");
+ if (err < 0)
+ goto dest_ida;
+ comp.major = MAJOR(comp.devno);
+@@ -523,7 +525,7 @@ static int __init mod_init(void)
+ return 0;
+
+ free_cdev:
+- unregister_chrdev_region(comp.devno, 1);
++ unregister_chrdev_region(comp.devno, CHRDEV_REGION_SIZE);
+ dest_ida:
+ ida_destroy(&comp.minor_id);
+ class_destroy(comp.class);
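The cdev hunk also fixes a silent mismatch: the region was allocated with a literal 50 but unregistered with a literal 1 on the error path, stranding the other 49 minors. A single named constant keeps the paired call sites from drifting apart, e.g.:

#include <stdio.h>

#define CHRDEV_REGION_SIZE 50   /* one definition for both call sites */

static void alloc_region(unsigned int count)
{
        printf("allocated %u minors\n", count);
}

static void unregister_region(unsigned int count)
{
        printf("released %u minors\n", count);
}

int main(void)
{
        alloc_region(CHRDEV_REGION_SIZE);
        unregister_region(CHRDEV_REGION_SIZE);  /* always matches */
        return 0;
}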
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 5d28fff46557..80f6168f06f6 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -601,6 +601,7 @@ reserve_space(VCHIQ_STATE_T *state, size_t space, int is_blocking)
+ }
+
+ if (tx_pos == (state->slot_queue_available * VCHIQ_SLOT_SIZE)) {
++ up(&state->slot_available_event);
+ pr_warn("%s: invalid tx_pos: %d\n", __func__, tx_pos);
+ return NULL;
+ }
+diff --git a/drivers/thermal/samsung/exynos_tmu.c b/drivers/thermal/samsung/exynos_tmu.c
+index ac83f721db24..d60069b5dc98 100644
+--- a/drivers/thermal/samsung/exynos_tmu.c
++++ b/drivers/thermal/samsung/exynos_tmu.c
+@@ -598,6 +598,7 @@ static int exynos5433_tmu_initialize(struct platform_device *pdev)
+ threshold_code = temp_to_code(data, temp);
+
+ rising_threshold = readl(data->base + rising_reg_offset);
++ rising_threshold &= ~(0xff << j * 8);
+ rising_threshold |= (threshold_code << j * 8);
+ writel(rising_threshold, data->base + rising_reg_offset);
+
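The one-line exynos_tmu change restores a classic read-modify-write rule: clear a packed field before OR-ing in its new value, otherwise stale bits from the old value survive. A toy userspace model, assuming the register packs one 8-bit threshold per trip point j:

  #include <stdint.h>

  static uint32_t set_threshold_byte(uint32_t reg, unsigned int j, uint8_t code)
  {
  	reg &= ~(0xffu << (j * 8));	/* the line the patch adds: drop the old byte */
  	reg |= (uint32_t)code << (j * 8);
  	return reg;
  }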
+diff --git a/drivers/tty/hvc/hvc_opal.c b/drivers/tty/hvc/hvc_opal.c
+index 2ed07ca6389e..9645c0062a90 100644
+--- a/drivers/tty/hvc/hvc_opal.c
++++ b/drivers/tty/hvc/hvc_opal.c
+@@ -318,7 +318,6 @@ static void udbg_init_opal_common(void)
+ udbg_putc = udbg_opal_putc;
+ udbg_getc = udbg_opal_getc;
+ udbg_getc_poll = udbg_opal_getc_poll;
+- tb_ticks_per_usec = 0x200; /* Make udelay not suck */
+ }
+
+ void __init hvc_opal_init_early(void)
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index 6c7151edd715..b0e2c4847a5d 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -110,16 +110,19 @@ static void pty_unthrottle(struct tty_struct *tty)
+ static int pty_write(struct tty_struct *tty, const unsigned char *buf, int c)
+ {
+ struct tty_struct *to = tty->link;
++ unsigned long flags;
+
+ if (tty->stopped)
+ return 0;
+
+ if (c > 0) {
++ spin_lock_irqsave(&to->port->lock, flags);
+ /* Stuff the data into the input queue of the other end */
+ c = tty_insert_flip_string(to->port, buf, c);
+ /* And shovel */
+ if (c)
+ tty_flip_buffer_push(to->port);
++ spin_unlock_irqrestore(&to->port->lock, flags);
+ }
+ return c;
+ }
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 40c2d9878190..c75c1532ca73 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -3380,6 +3380,10 @@ static int wait_for_connected(struct usb_device *udev,
+ while (delay_ms < 2000) {
+ if (status || *portstatus & USB_PORT_STAT_CONNECTION)
+ break;
++ if (!port_is_power_on(hub, *portstatus)) {
++ status = -ENODEV;
++ break;
++ }
+ msleep(20);
+ delay_ms += 20;
+ status = hub_port_status(hub, *port1, portstatus, portchange);
+diff --git a/drivers/vfio/mdev/mdev_core.c b/drivers/vfio/mdev/mdev_core.c
+index 126991046eb7..0212f0ee8aea 100644
+--- a/drivers/vfio/mdev/mdev_core.c
++++ b/drivers/vfio/mdev/mdev_core.c
+@@ -66,34 +66,6 @@ uuid_le mdev_uuid(struct mdev_device *mdev)
+ }
+ EXPORT_SYMBOL(mdev_uuid);
+
+-static int _find_mdev_device(struct device *dev, void *data)
+-{
+- struct mdev_device *mdev;
+-
+- if (!dev_is_mdev(dev))
+- return 0;
+-
+- mdev = to_mdev_device(dev);
+-
+- if (uuid_le_cmp(mdev->uuid, *(uuid_le *)data) == 0)
+- return 1;
+-
+- return 0;
+-}
+-
+-static bool mdev_device_exist(struct mdev_parent *parent, uuid_le uuid)
+-{
+- struct device *dev;
+-
+- dev = device_find_child(parent->dev, &uuid, _find_mdev_device);
+- if (dev) {
+- put_device(dev);
+- return true;
+- }
+-
+- return false;
+-}
+-
+ /* Should be called holding parent_list_lock */
+ static struct mdev_parent *__find_parent_device(struct device *dev)
+ {
+@@ -221,7 +193,6 @@ int mdev_register_device(struct device *dev, const struct mdev_parent_ops *ops)
+ }
+
+ kref_init(&parent->ref);
+- mutex_init(&parent->lock);
+
+ parent->dev = dev;
+ parent->ops = ops;
+@@ -297,6 +268,10 @@ static void mdev_device_release(struct device *dev)
+ {
+ struct mdev_device *mdev = to_mdev_device(dev);
+
++ mutex_lock(&mdev_list_lock);
++ list_del(&mdev->next);
++ mutex_unlock(&mdev_list_lock);
++
+ dev_dbg(&mdev->dev, "MDEV: destroying\n");
+ kfree(mdev);
+ }
+@@ -304,7 +279,7 @@ static void mdev_device_release(struct device *dev)
+ int mdev_device_create(struct kobject *kobj, struct device *dev, uuid_le uuid)
+ {
+ int ret;
+- struct mdev_device *mdev;
++ struct mdev_device *mdev, *tmp;
+ struct mdev_parent *parent;
+ struct mdev_type *type = to_mdev_type(kobj);
+
+@@ -312,21 +287,28 @@ int mdev_device_create(struct kobject *kobj, struct device *dev, uuid_le uuid)
+ if (!parent)
+ return -EINVAL;
+
+- mutex_lock(&parent->lock);
++ mutex_lock(&mdev_list_lock);
+
+ /* Check for duplicate */
+- if (mdev_device_exist(parent, uuid)) {
+- ret = -EEXIST;
+- goto create_err;
++ list_for_each_entry(tmp, &mdev_list, next) {
++ if (!uuid_le_cmp(tmp->uuid, uuid)) {
++ mutex_unlock(&mdev_list_lock);
++ ret = -EEXIST;
++ goto mdev_fail;
++ }
+ }
+
+ mdev = kzalloc(sizeof(*mdev), GFP_KERNEL);
+ if (!mdev) {
++ mutex_unlock(&mdev_list_lock);
+ ret = -ENOMEM;
+- goto create_err;
++ goto mdev_fail;
+ }
+
+ memcpy(&mdev->uuid, &uuid, sizeof(uuid_le));
++ list_add(&mdev->next, &mdev_list);
++ mutex_unlock(&mdev_list_lock);
++
+ mdev->parent = parent;
+ kref_init(&mdev->ref);
+
+@@ -338,35 +320,28 @@ int mdev_device_create(struct kobject *kobj, struct device *dev, uuid_le uuid)
+ ret = device_register(&mdev->dev);
+ if (ret) {
+ put_device(&mdev->dev);
+- goto create_err;
++ goto mdev_fail;
+ }
+
+ ret = mdev_device_create_ops(kobj, mdev);
+ if (ret)
+- goto create_failed;
++ goto create_fail;
+
+ ret = mdev_create_sysfs_files(&mdev->dev, type);
+ if (ret) {
+ mdev_device_remove_ops(mdev, true);
+- goto create_failed;
++ goto create_fail;
+ }
+
+ mdev->type_kobj = kobj;
++ mdev->active = true;
+ dev_dbg(&mdev->dev, "MDEV: created\n");
+
+- mutex_unlock(&parent->lock);
+-
+- mutex_lock(&mdev_list_lock);
+- list_add(&mdev->next, &mdev_list);
+- mutex_unlock(&mdev_list_lock);
+-
+- return ret;
++ return 0;
+
+-create_failed:
++create_fail:
+ device_unregister(&mdev->dev);
+-
+-create_err:
+- mutex_unlock(&parent->lock);
++mdev_fail:
+ mdev_put_parent(parent);
+ return ret;
+ }
+@@ -377,44 +352,39 @@ int mdev_device_remove(struct device *dev, bool force_remove)
+ struct mdev_parent *parent;
+ struct mdev_type *type;
+ int ret;
+- bool found = false;
+
+ mdev = to_mdev_device(dev);
+
+ mutex_lock(&mdev_list_lock);
+ list_for_each_entry(tmp, &mdev_list, next) {
+- if (tmp == mdev) {
+- found = true;
++ if (tmp == mdev)
+ break;
+- }
+ }
+
+- if (found)
+- list_del(&mdev->next);
++ if (tmp != mdev) {
++ mutex_unlock(&mdev_list_lock);
++ return -ENODEV;
++ }
+
+- mutex_unlock(&mdev_list_lock);
++ if (!mdev->active) {
++ mutex_unlock(&mdev_list_lock);
++ return -EAGAIN;
++ }
+
+- if (!found)
+- return -ENODEV;
++ mdev->active = false;
++ mutex_unlock(&mdev_list_lock);
+
+ type = to_mdev_type(mdev->type_kobj);
+ parent = mdev->parent;
+- mutex_lock(&parent->lock);
+
+ ret = mdev_device_remove_ops(mdev, force_remove);
+ if (ret) {
+- mutex_unlock(&parent->lock);
+-
+- mutex_lock(&mdev_list_lock);
+- list_add(&mdev->next, &mdev_list);
+- mutex_unlock(&mdev_list_lock);
+-
++ mdev->active = true;
+ return ret;
+ }
+
+ mdev_remove_sysfs_files(dev, type);
+ device_unregister(dev);
+- mutex_unlock(&parent->lock);
+ mdev_put_parent(parent);
+
+ return 0;
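In rough outline, the reworked mdev lifecycle replaces the per-parent mutex with the global list lock plus an "active" flag; removal claims the device roughly like this (a condensed sketch of the hunks above, not a drop-in):

  mutex_lock(&mdev_list_lock);
  if (!mdev->active) {
  	mutex_unlock(&mdev_list_lock);
  	return -EAGAIN;		/* creation not finished, or already claimed */
  }
  mdev->active = false;		/* claim removal; racing removers now bail */
  mutex_unlock(&mdev_list_lock);
  /* ... remove_ops, sysfs teardown, device_unregister() without the lock ... */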
+diff --git a/drivers/vfio/mdev/mdev_private.h b/drivers/vfio/mdev/mdev_private.h
+index a9cefd70a705..b5819b7d7ef7 100644
+--- a/drivers/vfio/mdev/mdev_private.h
++++ b/drivers/vfio/mdev/mdev_private.h
+@@ -20,7 +20,6 @@ struct mdev_parent {
+ struct device *dev;
+ const struct mdev_parent_ops *ops;
+ struct kref ref;
+- struct mutex lock;
+ struct list_head next;
+ struct kset *mdev_types_kset;
+ struct list_head type_list;
+@@ -34,6 +33,7 @@ struct mdev_device {
+ struct kref ref;
+ struct list_head next;
+ struct kobject *type_kobj;
++ bool active;
+ };
+
+ #define to_mdev_device(dev) container_of(dev, struct mdev_device, dev)
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index 4c27f4be3c3d..aa9e792110e3 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -681,18 +681,23 @@ int vfio_platform_probe_common(struct vfio_platform_device *vdev,
+ group = vfio_iommu_group_get(dev);
+ if (!group) {
+ pr_err("VFIO: No IOMMU group for device %s\n", vdev->name);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_reset;
+ }
+
+ ret = vfio_add_group_dev(dev, &vfio_platform_ops, vdev);
+- if (ret) {
+- vfio_iommu_group_put(group, dev);
+- return ret;
+- }
++ if (ret)
++ goto put_iommu;
+
+ mutex_init(&vdev->igate);
+
+ return 0;
++
++put_iommu:
++ vfio_iommu_group_put(group, dev);
++put_reset:
++ vfio_platform_put_reset(vdev);
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(vfio_platform_probe_common);
+
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 0586ad5eb590..3e5b17710a4f 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -83,6 +83,7 @@ struct vfio_dma {
+ size_t size; /* Map size (bytes) */
+ int prot; /* IOMMU_READ/WRITE */
+ bool iommu_mapped;
++ bool lock_cap; /* capable(CAP_IPC_LOCK) */
+ struct task_struct *task;
+ struct rb_root pfn_list; /* Ex-user pinned pfn list */
+ };
+@@ -253,29 +254,25 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
+ return ret;
+ }
+
+-static int vfio_lock_acct(struct task_struct *task, long npage, bool *lock_cap)
++static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
+ {
+ struct mm_struct *mm;
+- bool is_current;
+ int ret;
+
+ if (!npage)
+ return 0;
+
+- is_current = (task->mm == current->mm);
+-
+- mm = is_current ? task->mm : get_task_mm(task);
++ mm = async ? get_task_mm(dma->task) : dma->task->mm;
+ if (!mm)
+ return -ESRCH; /* process exited */
+
+ ret = down_write_killable(&mm->mmap_sem);
+ if (!ret) {
+ if (npage > 0) {
+- if (lock_cap ? !*lock_cap :
+- !has_capability(task, CAP_IPC_LOCK)) {
++ if (!dma->lock_cap) {
+ unsigned long limit;
+
+- limit = task_rlimit(task,
++ limit = task_rlimit(dma->task,
+ RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+
+ if (mm->locked_vm + npage > limit)
+@@ -289,7 +286,7 @@ static int vfio_lock_acct(struct task_struct *task, long npage, bool *lock_cap)
+ up_write(&mm->mmap_sem);
+ }
+
+- if (!is_current)
++ if (async)
+ mmput(mm);
+
+ return ret;
+@@ -398,7 +395,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
+ */
+ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ long npage, unsigned long *pfn_base,
+- bool lock_cap, unsigned long limit)
++ unsigned long limit)
+ {
+ unsigned long pfn = 0;
+ long ret, pinned = 0, lock_acct = 0;
+@@ -421,7 +418,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ * pages are already counted against the user.
+ */
+ if (!rsvd && !vfio_find_vpfn(dma, iova)) {
+- if (!lock_cap && current->mm->locked_vm + 1 > limit) {
++ if (!dma->lock_cap && current->mm->locked_vm + 1 > limit) {
+ put_pfn(*pfn_base, dma->prot);
+ pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n", __func__,
+ limit << PAGE_SHIFT);
+@@ -447,7 +444,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ }
+
+ if (!rsvd && !vfio_find_vpfn(dma, iova)) {
+- if (!lock_cap &&
++ if (!dma->lock_cap &&
+ current->mm->locked_vm + lock_acct + 1 > limit) {
+ put_pfn(pfn, dma->prot);
+ pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n",
+@@ -460,7 +457,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ }
+
+ out:
+- ret = vfio_lock_acct(current, lock_acct, &lock_cap);
++ ret = vfio_lock_acct(dma, lock_acct, false);
+
+ unpin_out:
+ if (ret) {
+@@ -491,7 +488,7 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
+ }
+
+ if (do_accounting)
+- vfio_lock_acct(dma->task, locked - unlocked, NULL);
++ vfio_lock_acct(dma, locked - unlocked, true);
+
+ return unlocked;
+ }
+@@ -508,7 +505,7 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
+
+ ret = vaddr_get_pfn(mm, vaddr, dma->prot, pfn_base);
+ if (!ret && do_accounting && !is_invalid_reserved_pfn(*pfn_base)) {
+- ret = vfio_lock_acct(dma->task, 1, NULL);
++ ret = vfio_lock_acct(dma, 1, true);
+ if (ret) {
+ put_pfn(*pfn_base, dma->prot);
+ if (ret == -ENOMEM)
+@@ -535,7 +532,7 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
+ unlocked = vfio_iova_put_vfio_pfn(dma, vpfn);
+
+ if (do_accounting)
+- vfio_lock_acct(dma->task, -unlocked, NULL);
++ vfio_lock_acct(dma, -unlocked, true);
+
+ return unlocked;
+ }
+@@ -827,7 +824,7 @@ static long vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma,
+ unlocked += vfio_sync_unpin(dma, domain, &unmapped_region_list);
+
+ if (do_accounting) {
+- vfio_lock_acct(dma->task, -unlocked, NULL);
++ vfio_lock_acct(dma, -unlocked, true);
+ return 0;
+ }
+ return unlocked;
+@@ -1042,14 +1039,12 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
+ size_t size = map_size;
+ long npage;
+ unsigned long pfn, limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+- bool lock_cap = capable(CAP_IPC_LOCK);
+ int ret = 0;
+
+ while (size) {
+ /* Pin a contiguous chunk of memory */
+ npage = vfio_pin_pages_remote(dma, vaddr + dma->size,
+- size >> PAGE_SHIFT, &pfn,
+- lock_cap, limit);
++ size >> PAGE_SHIFT, &pfn, limit);
+ if (npage <= 0) {
+ WARN_ON(!npage);
+ ret = (int)npage;
+@@ -1124,8 +1119,36 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
+ dma->iova = iova;
+ dma->vaddr = vaddr;
+ dma->prot = prot;
+- get_task_struct(current);
+- dma->task = current;
++
++ /*
++ * We need to be able to both add to a task's locked memory and test
++ * against the locked memory limit and we need to be able to do both
++ * outside of this call path as pinning can be asynchronous via the
++ * external interfaces for mdev devices. RLIMIT_MEMLOCK requires a
++ * task_struct and VM locked pages requires an mm_struct, however
++ * holding an indefinite mm reference is not recommended, therefore we
++ * only hold a reference to a task. We could hold a reference to
++ * current, however QEMU uses this call path through vCPU threads,
++ * which can be killed resulting in a NULL mm and failure in the unmap
++ * path when called via a different thread. Avoid this problem by
++ * using the group_leader as threads within the same group require
++ * both CLONE_THREAD and CLONE_VM and will therefore use the same
++ * mm_struct.
++ *
++ * Previously we also used the task for testing CAP_IPC_LOCK at the
++ * time of pinning and accounting, however has_capability() makes use
++ * of real_cred, a copy-on-write field, so we can't guarantee that it
++ * matches group_leader, or in fact that it might not change by the
++ * time it's evaluated. If a process were to call MAP_DMA with
++ * CAP_IPC_LOCK but later drop it, it doesn't make sense that they
++ * possibly see different results for an iommu_mapped vfio_dma vs
++ * externally mapped. Therefore track CAP_IPC_LOCK in vfio_dma at the
++ * time of calling MAP_DMA.
++ */
++ get_task_struct(current->group_leader);
++ dma->task = current->group_leader;
++ dma->lock_cap = capable(CAP_IPC_LOCK);
++
+ dma->pfn_list = RB_ROOT;
+
+ /* Insert zero-sized and grow as we map chunks of it */
+@@ -1160,7 +1183,6 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ struct vfio_domain *d;
+ struct rb_node *n;
+ unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+- bool lock_cap = capable(CAP_IPC_LOCK);
+ int ret;
+
+ /* Arbitrarily pick the first domain in the list for lookups */
+@@ -1207,8 +1229,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+
+ npage = vfio_pin_pages_remote(dma, vaddr,
+ n >> PAGE_SHIFT,
+- &pfn, lock_cap,
+- limit);
++ &pfn, limit);
+ if (npage <= 0) {
+ WARN_ON(!npage);
+ ret = (int)npage;
+@@ -1485,7 +1506,7 @@ static void vfio_iommu_unmap_unpin_reaccount(struct vfio_iommu *iommu)
+ if (!is_invalid_reserved_pfn(vpfn->pfn))
+ locked++;
+ }
+- vfio_lock_acct(dma->task, locked - unlocked, NULL);
++ vfio_lock_acct(dma, locked - unlocked, true);
+ }
+ }
+
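The net effect of the vfio_iommu_type1 changes is that CAP_IPC_LOCK is sampled once, at MAP_DMA time, and every later pin, synchronous or asynchronous, consults the stored copy. The accounting check reduces to roughly the following (a simplified sketch; dma, mm and npage are assumed from the surrounding code):

  if (!dma->lock_cap) {	/* capability snapshot taken at MAP_DMA */
  	unsigned long limit = task_rlimit(dma->task,
  					  RLIMIT_MEMLOCK) >> PAGE_SHIFT;

  	if (mm->locked_vm + npage > limit)
  		return -ENOMEM;	/* would exceed RLIMIT_MEMLOCK */
  }
  mm->locked_vm += npage;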
+diff --git a/drivers/video/backlight/pwm_bl.c b/drivers/video/backlight/pwm_bl.c
+index 1c2289ddd555..0fa7d2bd0e48 100644
+--- a/drivers/video/backlight/pwm_bl.c
++++ b/drivers/video/backlight/pwm_bl.c
+@@ -301,14 +301,14 @@ static int pwm_backlight_probe(struct platform_device *pdev)
+
+ /*
+ * If the GPIO is not known to be already configured as output, that
+- * is, if gpiod_get_direction returns either GPIOF_DIR_IN or -EINVAL,
+- * change the direction to output and set the GPIO as active.
++ * is, if gpiod_get_direction returns either 1 or -EINVAL, change the
++ * direction to output and set the GPIO as active.
+ * Do not force the GPIO to active when it was already output as it
+ * could cause backlight flickering or we would enable the backlight too
+ * early. Leave the decision of the initial backlight state for later.
+ */
+ if (pb->enable_gpio &&
+- gpiod_get_direction(pb->enable_gpio) != GPIOF_DIR_OUT)
++ gpiod_get_direction(pb->enable_gpio) != 0)
+ gpiod_direction_output(pb->enable_gpio, 1);
+
+ pb->power_supply = devm_regulator_get(&pdev->dev, "power");
+diff --git a/drivers/watchdog/da9063_wdt.c b/drivers/watchdog/da9063_wdt.c
+index b17ac1bb1f28..87fb9ab603fa 100644
+--- a/drivers/watchdog/da9063_wdt.c
++++ b/drivers/watchdog/da9063_wdt.c
+@@ -99,10 +99,23 @@ static int da9063_wdt_set_timeout(struct watchdog_device *wdd,
+ {
+ struct da9063 *da9063 = watchdog_get_drvdata(wdd);
+ unsigned int selector;
+- int ret;
++ int ret = 0;
+
+ selector = da9063_wdt_timeout_to_sel(timeout);
+- ret = _da9063_wdt_set_timeout(da9063, selector);
++
++ /*
++ * There are two cases when a set_timeout() will be called:
++	 * 1. The watchdog is off and someone wants to set the timeout for
++	 *    later use.
++ * 2. The watchdog is already running and a new timeout value should be
++ * set.
++ *
++	 * The watchdog can't store a nonzero timeout value without being
++	 * enabled, so the timeout must be buffered by the driver.
++ */
++ if (watchdog_active(wdd))
++ ret = _da9063_wdt_set_timeout(da9063, selector);
++
+ if (ret)
+ dev_err(da9063->dev, "Failed to set watchdog timeout (err = %d)\n",
+ ret);
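Generalized, the da9063 fix is the buffered-timeout pattern: program the hardware only while the watchdog is running, and otherwise just remember the value for the next start. A hedged sketch (program_hw_timeout() is a hypothetical helper; the watchdog core reads wdd->timeout when starting):

  static int my_wdt_set_timeout(struct watchdog_device *wdd,
  			      unsigned int timeout)
  {
  	int ret = 0;

  	if (watchdog_active(wdd))
  		ret = program_hw_timeout(wdd, timeout);	/* hypothetical */

  	if (!ret)
  		wdd->timeout = timeout;	/* buffered for the next start */
  	return ret;
  }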
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 7ec920e27065..9bfece2e3c88 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -219,7 +219,7 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
+
+ ret = bio_iov_iter_get_pages(&bio, iter);
+ if (unlikely(ret))
+- return ret;
++ goto out;
+ ret = bio.bi_iter.bi_size;
+
+ if (iov_iter_rw(iter) == READ) {
+@@ -248,12 +248,13 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
+ put_page(bvec->bv_page);
+ }
+
+- if (vecs != inline_vecs)
+- kfree(vecs);
+-
+ if (unlikely(bio.bi_status))
+ ret = blk_status_to_errno(bio.bi_status);
+
++out:
++ if (vecs != inline_vecs)
++ kfree(vecs);
++
+ bio_uninit(&bio);
+
+ return ret;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index b54a55497216..bd400cf2756f 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -3160,6 +3160,9 @@ out:
+ /* once for the tree */
+ btrfs_put_ordered_extent(ordered_extent);
+
++ /* Try to release some metadata so we don't get an OOM but don't wait */
++ btrfs_btree_balance_dirty_nodelay(fs_info);
++
+ return ret;
+ }
+
+@@ -4668,7 +4671,10 @@ delete:
+ extent_num_bytes, 0,
+ btrfs_header_owner(leaf),
+ ino, extent_offset);
+- BUG_ON(ret);
++ if (ret) {
++ btrfs_abort_transaction(trans, ret);
++ break;
++ }
+ if (btrfs_should_throttle_delayed_refs(trans, fs_info))
+ btrfs_async_run_delayed_refs(fs_info,
+ trans->delayed_ref_updates * 2,
+@@ -5423,13 +5429,18 @@ void btrfs_evict_inode(struct inode *inode)
+ trans->block_rsv = rsv;
+
+ ret = btrfs_truncate_inode_items(trans, root, inode, 0, 0);
+- if (ret != -ENOSPC && ret != -EAGAIN)
++ if (ret) {
++ trans->block_rsv = &fs_info->trans_block_rsv;
++ btrfs_end_transaction(trans);
++ btrfs_btree_balance_dirty(fs_info);
++ if (ret != -ENOSPC && ret != -EAGAIN) {
++ btrfs_orphan_del(NULL, BTRFS_I(inode));
++ btrfs_free_block_rsv(fs_info, rsv);
++ goto no_delete;
++ }
++ } else {
+ break;
+-
+- trans->block_rsv = &fs_info->trans_block_rsv;
+- btrfs_end_transaction(trans);
+- trans = NULL;
+- btrfs_btree_balance_dirty(fs_info);
++ }
+ }
+
+ btrfs_free_block_rsv(fs_info, rsv);
+@@ -5438,12 +5449,8 @@ void btrfs_evict_inode(struct inode *inode)
+ * Errors here aren't a big deal, it just means we leave orphan items
+ * in the tree. They will be cleaned up on the next mount.
+ */
+- if (ret == 0) {
+- trans->block_rsv = root->orphan_block_rsv;
+- btrfs_orphan_del(trans, BTRFS_I(inode));
+- } else {
+- btrfs_orphan_del(NULL, BTRFS_I(inode));
+- }
++ trans->block_rsv = root->orphan_block_rsv;
++ btrfs_orphan_del(trans, BTRFS_I(inode));
+
+ trans->block_rsv = &fs_info->trans_block_rsv;
+ if (!(root == fs_info->tree_root ||
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 9fb758d5077a..d0aba20e0843 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2579,6 +2579,21 @@ out:
+ spin_unlock(&fs_info->qgroup_lock);
+ }
+
++/*
++ * Check if the leaf is the last leaf, which means all node pointers
++ * are at their last position.
++ */
++static bool is_last_leaf(struct btrfs_path *path)
++{
++ int i;
++
++ for (i = 1; i < BTRFS_MAX_LEVEL && path->nodes[i]; i++) {
++ if (path->slots[i] != btrfs_header_nritems(path->nodes[i]) - 1)
++ return false;
++ }
++ return true;
++}
++
+ /*
+ * returns < 0 on error, 0 when more leaves are to be scanned.
+ * returns 1 when done.
+@@ -2592,6 +2607,7 @@ qgroup_rescan_leaf(struct btrfs_fs_info *fs_info, struct btrfs_path *path,
+ struct ulist *roots = NULL;
+ struct seq_list tree_mod_seq_elem = SEQ_LIST_INIT(tree_mod_seq_elem);
+ u64 num_bytes;
++ bool done;
+ int slot;
+ int ret;
+
+@@ -2620,6 +2636,7 @@ qgroup_rescan_leaf(struct btrfs_fs_info *fs_info, struct btrfs_path *path,
+ mutex_unlock(&fs_info->qgroup_rescan_lock);
+ return ret;
+ }
++ done = is_last_leaf(path);
+
+ btrfs_item_key_to_cpu(path->nodes[0], &found,
+ btrfs_header_nritems(path->nodes[0]) - 1);
+@@ -2666,6 +2683,8 @@ out:
+ }
+ btrfs_put_tree_mod_seq(fs_info, &tree_mod_seq_elem);
+
++ if (done && !ret)
++ ret = 1;
+ return ret;
+ }
+
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 8f23a94dab77..2009cea65d89 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3116,8 +3116,11 @@ out_wake_log_root:
+ mutex_unlock(&log_root_tree->log_mutex);
+
+ /*
+- * The barrier before waitqueue_active is implied by mutex_unlock
++ * The barrier before waitqueue_active is needed so all the updates
++ * above are seen by the woken threads. It might not be necessary, but
++ * proving that seems to be hard.
+ */
++ smp_mb();
+ if (waitqueue_active(&log_root_tree->log_commit_wait[index2]))
+ wake_up(&log_root_tree->log_commit_wait[index2]);
+ out:
+@@ -3128,8 +3131,11 @@ out:
+ mutex_unlock(&root->log_mutex);
+
+ /*
+- * The barrier before waitqueue_active is implied by mutex_unlock
++ * The barrier before waitqueue_active is needed so all the updates
++ * above are seen by the woken threads. It might not be necessary, but
++ * proving that seems to be hard.
+ */
++ smp_mb();
+ if (waitqueue_active(&root->log_commit_wait[index1]))
+ wake_up(&root->log_commit_wait[index1]);
+ return ret;
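The tree-log hunks restore the standard waker-side ordering for waitqueue_active(): publish the condition, then a full barrier, then the lockless wakeup check. In schematic form (generic pattern, not btrfs code):

  /* waker */
  WRITE_ONCE(cond, 1);		/* publish the wait condition */
  smp_mb();			/* order the store before the lockless check */
  if (waitqueue_active(&wq))
  	wake_up(&wq);

  /* waiter: wait_event() supplies the matching barriers itself */
  wait_event(wq, READ_ONCE(cond));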
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index b33082e6878f..6f9b4cfbc33d 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -45,7 +45,7 @@ static void ceph_put_super(struct super_block *s)
+ static int ceph_statfs(struct dentry *dentry, struct kstatfs *buf)
+ {
+ struct ceph_fs_client *fsc = ceph_inode_to_client(d_inode(dentry));
+- struct ceph_monmap *monmap = fsc->client->monc.monmap;
++ struct ceph_mon_client *monc = &fsc->client->monc;
+ struct ceph_statfs st;
+ u64 fsid;
+ int err;
+@@ -58,7 +58,7 @@ static int ceph_statfs(struct dentry *dentry, struct kstatfs *buf)
+ }
+
+ dout("statfs\n");
+- err = ceph_monc_do_statfs(&fsc->client->monc, data_pool, &st);
++ err = ceph_monc_do_statfs(monc, data_pool, &st);
+ if (err < 0)
+ return err;
+
+@@ -94,8 +94,11 @@ static int ceph_statfs(struct dentry *dentry, struct kstatfs *buf)
+ buf->f_namelen = NAME_MAX;
+
+ /* Must convert the fsid, for consistent values across arches */
+- fsid = le64_to_cpu(*(__le64 *)(&monmap->fsid)) ^
+- le64_to_cpu(*((__le64 *)&monmap->fsid + 1));
++ mutex_lock(&monc->mutex);
++ fsid = le64_to_cpu(*(__le64 *)(&monc->monmap->fsid)) ^
++ le64_to_cpu(*((__le64 *)&monc->monmap->fsid + 1));
++ mutex_unlock(&monc->mutex);
++
+ buf->f_fsid.val[0] = fsid & 0xffffffff;
+ buf->f_fsid.val[1] = fsid >> 32;
+
+@@ -268,7 +271,7 @@ static int parse_fsopt_token(char *c, void *private)
+ case Opt_rasize:
+ if (intval < 0)
+ return -EINVAL;
+- fsopt->rasize = ALIGN(intval + PAGE_SIZE - 1, PAGE_SIZE);
++ fsopt->rasize = ALIGN(intval, PAGE_SIZE);
+ break;
+ case Opt_caps_wanted_delay_min:
+ if (intval < 1)
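The ceph rasize change deserves a worked example, since ALIGN() already rounds up and the old "+ PAGE_SIZE - 1" therefore rounded twice. Assuming the usual kernel definition ALIGN(x, a) == ((x + a - 1) & ~(a - 1)) and PAGE_SIZE == 4096:

  intval = 0:     old: ALIGN(0 + 4095, 4096)    = 4096  (one page too many)
                  new: ALIGN(0, 4096)           = 0
  intval = 4096:  old: ALIGN(4096 + 4095, 4096) = 8192  (one page too many)
                  new: ALIGN(4096, 4096)        = 4096
  intval = 4097:  both yield 8192, so only page-multiple inputs differed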
+diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
+index ce654526c0fb..984e190f9b89 100644
+--- a/fs/crypto/crypto.c
++++ b/fs/crypto/crypto.c
+@@ -427,8 +427,17 @@ fail:
+ */
+ static int __init fscrypt_init(void)
+ {
++ /*
++ * Use an unbound workqueue to allow bios to be decrypted in parallel
++ * even when they happen to complete on the same CPU. This sacrifices
++ * locality, but it's worthwhile since decryption is CPU-intensive.
++ *
++ * Also use a high-priority workqueue to prioritize decryption work,
++ * which blocks reads from completing, over regular application tasks.
++ */
+ fscrypt_read_workqueue = alloc_workqueue("fscrypt_read_queue",
+- WQ_HIGHPRI, 0);
++ WQ_UNBOUND | WQ_HIGHPRI,
++ num_online_cpus());
+ if (!fscrypt_read_workqueue)
+ goto fail;
+
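For reference, the resulting allocation pattern: WQ_UNBOUND frees the work from the bio-completion CPU, WQ_HIGHPRI keeps read completions from being starved by normal work, and a max_active of num_online_cpus() roughly caps it at one decryptor per CPU. A minimal sketch with an illustrative name:

  struct workqueue_struct *wq;

  wq = alloc_workqueue("my_read_queue",	/* illustrative name */
  		     WQ_UNBOUND | WQ_HIGHPRI,
  		     num_online_cpus());
  if (!wq)
  	return -ENOMEM;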
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index f8b5635f0396..e4eab3a38e7c 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -379,6 +379,8 @@ static int ext4_validate_block_bitmap(struct super_block *sb,
+ return -EFSCORRUPTED;
+
+ ext4_lock_group(sb, block_group);
++ if (buffer_verified(bh))
++ goto verified;
+ if (unlikely(!ext4_block_bitmap_csum_verify(sb, block_group,
+ desc, bh))) {
+ ext4_unlock_group(sb, block_group);
+@@ -401,6 +403,7 @@ static int ext4_validate_block_bitmap(struct super_block *sb,
+ return -EFSCORRUPTED;
+ }
+ set_buffer_verified(bh);
++verified:
+ ext4_unlock_group(sb, block_group);
+ return 0;
+ }
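Both ext4 bitmap validators (here and in ialloc.c below) now follow the same double-checked pattern under the group lock, so a second CPU racing on the same buffer skips the checksum work. Condensed:

  ext4_lock_group(sb, block_group);
  if (buffer_verified(bh))
  	goto verified;		/* another CPU validated it first */
  /* ... verify checksums, possibly mark the group corrupt ... */
  set_buffer_verified(bh);
  verified:
  	ext4_unlock_group(sb, block_group);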
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 478b8f21c814..257388a8032b 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -91,6 +91,8 @@ static int ext4_validate_inode_bitmap(struct super_block *sb,
+ return -EFSCORRUPTED;
+
+ ext4_lock_group(sb, block_group);
++ if (buffer_verified(bh))
++ goto verified;
+ blk = ext4_inode_bitmap(sb, desc);
+ if (!ext4_inode_bitmap_csum_verify(sb, block_group, desc, bh,
+ EXT4_INODES_PER_GROUP(sb) / 8)) {
+@@ -108,6 +110,7 @@ static int ext4_validate_inode_bitmap(struct super_block *sb,
+ return -EFSBADCRC;
+ }
+ set_buffer_verified(bh);
++verified:
+ ext4_unlock_group(sb, block_group);
+ return 0;
+ }
+@@ -1392,7 +1395,10 @@ int ext4_init_inode_table(struct super_block *sb, ext4_group_t group,
+ ext4_itable_unused_count(sb, gdp)),
+ sbi->s_inodes_per_block);
+
+- if ((used_blks < 0) || (used_blks > sbi->s_itb_per_group)) {
++ if ((used_blks < 0) || (used_blks > sbi->s_itb_per_group) ||
++ ((group == 0) && ((EXT4_INODES_PER_GROUP(sb) -
++ ext4_itable_unused_count(sb, gdp)) <
++ EXT4_FIRST_INO(sb)))) {
+ ext4_error(sb, "Something is wrong with group %u: "
+ "used itable blocks: %d; "
+ "itable unused count: %u",
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 851bc552d849..716adc635506 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -682,6 +682,10 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
+ goto convert;
+ }
+
++ ret = ext4_journal_get_write_access(handle, iloc.bh);
++ if (ret)
++ goto out;
++
+ flags |= AOP_FLAG_NOFS;
+
+ page = grab_cache_page_write_begin(mapping, 0, flags);
+@@ -710,7 +714,7 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
+ out_up_read:
+ up_read(&EXT4_I(inode)->xattr_sem);
+ out:
+- if (handle)
++ if (handle && (ret != 1))
+ ext4_journal_stop(handle);
+ brelse(iloc.bh);
+ return ret;
+@@ -752,6 +756,7 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+
+ ext4_write_unlock_xattr(inode, &no_expand);
+ brelse(iloc.bh);
++ mark_inode_dirty(inode);
+ out:
+ return copied;
+ }
+@@ -898,7 +903,6 @@ retry_journal:
+ goto out;
+ }
+
+-
+ page = grab_cache_page_write_begin(mapping, 0, flags);
+ if (!page) {
+ ret = -ENOMEM;
+@@ -916,6 +920,9 @@ retry_journal:
+ if (ret < 0)
+ goto out_release_page;
+ }
++ ret = ext4_journal_get_write_access(handle, iloc.bh);
++ if (ret)
++ goto out_release_page;
+
+ up_read(&EXT4_I(inode)->xattr_sem);
+ *pagep = page;
+@@ -936,7 +943,6 @@ int ext4_da_write_inline_data_end(struct inode *inode, loff_t pos,
+ unsigned len, unsigned copied,
+ struct page *page)
+ {
+- int i_size_changed = 0;
+ int ret;
+
+ ret = ext4_write_inline_data_end(inode, pos, len, copied, page);
+@@ -954,10 +960,8 @@ int ext4_da_write_inline_data_end(struct inode *inode, loff_t pos,
+ * But it's important to update i_size while still holding page lock:
+ * page writeout could otherwise come in and zero beyond i_size.
+ */
+- if (pos+copied > inode->i_size) {
++ if (pos+copied > inode->i_size)
+ i_size_write(inode, pos+copied);
+- i_size_changed = 1;
+- }
+ unlock_page(page);
+ put_page(page);
+
+@@ -967,8 +971,7 @@ int ext4_da_write_inline_data_end(struct inode *inode, loff_t pos,
+ * ordering of page lock and transaction start for journaling
+ * filesystems.
+ */
+- if (i_size_changed)
+- mark_inode_dirty(inode);
++ mark_inode_dirty(inode);
+
+ return copied;
+ }
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 06b963d2fc36..afb22e01f009 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1389,9 +1389,10 @@ static int ext4_write_end(struct file *file,
+ loff_t old_size = inode->i_size;
+ int ret = 0, ret2;
+ int i_size_changed = 0;
++ int inline_data = ext4_has_inline_data(inode);
+
+ trace_ext4_write_end(inode, pos, len, copied);
+- if (ext4_has_inline_data(inode)) {
++ if (inline_data) {
+ ret = ext4_write_inline_data_end(inode, pos, len,
+ copied, page);
+ if (ret < 0) {
+@@ -1419,7 +1420,7 @@ static int ext4_write_end(struct file *file,
+ * ordering of page lock and transaction start for journaling
+ * filesystems.
+ */
+- if (i_size_changed)
++ if (i_size_changed || inline_data)
+ ext4_mark_inode_dirty(handle, inode);
+
+ if (pos + len > inode->i_size && ext4_can_truncate(inode))
+@@ -1493,6 +1494,7 @@ static int ext4_journalled_write_end(struct file *file,
+ int partial = 0;
+ unsigned from, to;
+ int size_changed = 0;
++ int inline_data = ext4_has_inline_data(inode);
+
+ trace_ext4_journalled_write_end(inode, pos, len, copied);
+ from = pos & (PAGE_SIZE - 1);
+@@ -1500,7 +1502,7 @@ static int ext4_journalled_write_end(struct file *file,
+
+ BUG_ON(!ext4_handle_valid(handle));
+
+- if (ext4_has_inline_data(inode)) {
++ if (inline_data) {
+ ret = ext4_write_inline_data_end(inode, pos, len,
+ copied, page);
+ if (ret < 0) {
+@@ -1531,7 +1533,7 @@ static int ext4_journalled_write_end(struct file *file,
+ if (old_size < pos)
+ pagecache_isize_extended(inode, old_size, pos);
+
+- if (size_changed) {
++ if (size_changed || inline_data) {
+ ret2 = ext4_mark_inode_dirty(handle, inode);
+ if (!ret)
+ ret = ret2;
+@@ -2028,11 +2030,7 @@ static int __ext4_journalled_writepage(struct page *page,
+ }
+
+ if (inline_data) {
+- BUFFER_TRACE(inode_bh, "get write access");
+- ret = ext4_journal_get_write_access(handle, inode_bh);
+-
+- err = ext4_handle_dirty_metadata(handle, inode, inode_bh);
+-
++ ret = ext4_mark_inode_dirty(handle, inode);
+ } else {
+ ret = ext4_walk_page_buffers(handle, page_bufs, 0, len, NULL,
+ do_journal_get_write_access);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 74a6d884ede4..d20cf383f2c1 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2307,7 +2307,7 @@ static int ext4_check_descriptors(struct super_block *sb,
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ ext4_fsblk_t first_block = le32_to_cpu(sbi->s_es->s_first_data_block);
+ ext4_fsblk_t last_block;
+- ext4_fsblk_t last_bg_block = sb_block + ext4_bg_num_gdb(sb, 0) + 1;
++ ext4_fsblk_t last_bg_block = sb_block + ext4_bg_num_gdb(sb, 0);
+ ext4_fsblk_t block_bitmap;
+ ext4_fsblk_t inode_bitmap;
+ ext4_fsblk_t inode_table;
+@@ -3106,14 +3106,8 @@ static ext4_group_t ext4_has_uninit_itable(struct super_block *sb)
+ if (!gdp)
+ continue;
+
+- if (gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_ZEROED))
+- continue;
+- if (group != 0)
++ if (!(gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_ZEROED)))
+ break;
+- ext4_error(sb, "Inode table for bg 0 marked as "
+- "needing zeroing");
+- if (sb_rdonly(sb))
+- return ngroups;
+ }
+
+ return group;
+@@ -4050,14 +4044,13 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ goto failed_mount2;
+ }
+ }
++ sbi->s_gdb_count = db_count;
+ if (!ext4_check_descriptors(sb, logical_sb_block, &first_not_zeroed)) {
+ ext4_msg(sb, KERN_ERR, "group descriptors corrupted!");
+ ret = -EFSCORRUPTED;
+ goto failed_mount2;
+ }
+
+- sbi->s_gdb_count = db_count;
+-
+ timer_setup(&sbi->s_err_report, print_daily_error_info, 0);
+
+ /* Register extent status tree shrinker */
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 02237d4d91f5..a9fec79dc3dd 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -1745,6 +1745,12 @@ static int __write_data_page(struct page *page, bool *submitted,
+ 	/* we should bypass data pages so the kworker jobs can proceed */
+ if (unlikely(f2fs_cp_error(sbi))) {
+ mapping_set_error(page->mapping, -EIO);
++ /*
++		 * don't drop any dirty dentry pages so we keep the latest
++		 * directory structure.
++ */
++ if (S_ISDIR(inode->i_mode))
++ goto redirty_out;
+ goto out;
+ }
+
+@@ -1842,7 +1848,13 @@ out:
+
+ redirty_out:
+ redirty_page_for_writepage(wbc, page);
+- if (!err)
++ /*
++	 * pageout() in MM translates EAGAIN, so it calls handle_write_error()
++ * -> mapping_set_error() -> set_bit(AS_EIO, ...).
++	 * file_write_and_wait_range() will then see the EIO error, which is
++	 * critical to the return value of fsync() on atomic_write failure.
++ */
++ if (!err || wbc->for_reclaim)
+ return AOP_WRITEPAGE_ACTIVATE;
+ unlock_page(page);
+ return err;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 20149b8771d9..f6cd5850be75 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1602,7 +1602,7 @@ static inline bool f2fs_has_xattr_block(unsigned int ofs)
+ }
+
+ static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
+- struct inode *inode)
++ struct inode *inode, bool cap)
+ {
+ if (!inode)
+ return true;
+@@ -1615,7 +1615,7 @@ static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
+ if (!gid_eq(F2FS_OPTION(sbi).s_resgid, GLOBAL_ROOT_GID) &&
+ in_group_p(F2FS_OPTION(sbi).s_resgid))
+ return true;
+- if (capable(CAP_SYS_RESOURCE))
++ if (cap && capable(CAP_SYS_RESOURCE))
+ return true;
+ return false;
+ }
+@@ -1650,7 +1650,7 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
+ avail_user_block_count = sbi->user_block_count -
+ sbi->current_reserved_blocks;
+
+- if (!__allow_reserved_blocks(sbi, inode))
++ if (!__allow_reserved_blocks(sbi, inode, true))
+ avail_user_block_count -= F2FS_OPTION(sbi).root_reserved_blocks;
+
+ if (unlikely(sbi->total_valid_block_count > avail_user_block_count)) {
+@@ -1857,7 +1857,7 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
+ valid_block_count = sbi->total_valid_block_count +
+ sbi->current_reserved_blocks + 1;
+
+- if (!__allow_reserved_blocks(sbi, inode))
++ if (!__allow_reserved_blocks(sbi, inode, false))
+ valid_block_count += F2FS_OPTION(sbi).root_reserved_blocks;
+
+ if (unlikely(valid_block_count > sbi->user_block_count)) {
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 6b94f19b3fa8..04c95812e5c9 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1670,6 +1670,8 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+
+ inode_lock(inode);
+
++ down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
++
+ if (f2fs_is_atomic_file(inode))
+ goto out;
+
+@@ -1699,6 +1701,7 @@ inc_stat:
+ stat_inc_atomic_write(inode);
+ stat_update_max_atomic_write(inode);
+ out:
++ up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+ inode_unlock(inode);
+ mnt_drop_write_file(filp);
+ return ret;
+@@ -1851,9 +1854,11 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ if (get_user(in, (__u32 __user *)arg))
+ return -EFAULT;
+
+- ret = mnt_want_write_file(filp);
+- if (ret)
+- return ret;
++ if (in != F2FS_GOING_DOWN_FULLSYNC) {
++ ret = mnt_want_write_file(filp);
++ if (ret)
++ return ret;
++ }
+
+ switch (in) {
+ case F2FS_GOING_DOWN_FULLSYNC:
+@@ -1894,7 +1899,8 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+
+ f2fs_update_time(sbi, REQ_TIME);
+ out:
+- mnt_drop_write_file(filp);
++ if (in != F2FS_GOING_DOWN_FULLSYNC)
++ mnt_drop_write_file(filp);
+ return ret;
+ }
+
+@@ -2568,7 +2574,9 @@ static int f2fs_ioc_setproject(struct file *filp, __u32 projid)
+ }
+ f2fs_put_page(ipage, 1);
+
+- dquot_initialize(inode);
++ err = dquot_initialize(inode);
++ if (err)
++ goto out_unlock;
+
+ transfer_to[PRJQUOTA] = dqget(sb, make_kqid_projid(kprojid));
+ if (!IS_ERR(transfer_to[PRJQUOTA])) {
+@@ -2924,6 +2932,8 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ iov_iter_count(from)) ||
+ f2fs_has_inline_data(inode) ||
+ f2fs_force_buffered_io(inode, WRITE)) {
++ clear_inode_flag(inode,
++ FI_NO_PREALLOC);
+ inode_unlock(inode);
+ return -EAGAIN;
+ }
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 9327411fd93b..6aecdd5b97d0 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -778,9 +778,14 @@ retry:
+ set_cold_data(page);
+
+ err = do_write_data_page(&fio);
+- if (err == -ENOMEM && is_dirty) {
+- congestion_wait(BLK_RW_ASYNC, HZ/50);
+- goto retry;
++ if (err) {
++ clear_cold_data(page);
++ if (err == -ENOMEM) {
++ congestion_wait(BLK_RW_ASYNC, HZ/50);
++ goto retry;
++ }
++ if (is_dirty)
++ set_page_dirty(page);
+ }
+ }
+ out:
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index cffaf842f4e7..c06489634655 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -230,6 +230,8 @@ static int __revoke_inmem_pages(struct inode *inode,
+
+ lock_page(page);
+
++ f2fs_wait_on_page_writeback(page, DATA, true);
++
+ if (recover) {
+ struct dnode_of_data dn;
+ struct node_info ni;
+@@ -478,6 +480,9 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
+
+ void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi)
+ {
++ if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
++ return;
++
+ 	/* try to shrink extent cache when there is not enough memory */
+ if (!available_free_memory(sbi, EXTENT_CACHE))
+ f2fs_shrink_extent_tree(sbi, EXTENT_CACHE_SHRINK_NUMBER);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 42d564c5ccd0..cad77fbb1f14 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -3063,6 +3063,12 @@ static int __init init_f2fs_fs(void)
+ {
+ int err;
+
++ if (PAGE_SIZE != F2FS_BLKSIZE) {
++ printk("F2FS not supported on PAGE_SIZE(%lu) != %d\n",
++ PAGE_SIZE, F2FS_BLKSIZE);
++ return -EINVAL;
++ }
++
+ f2fs_build_trace_ios();
+
+ err = init_inodecache();
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index bd15d0b57626..6e70445213e7 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1629,7 +1629,8 @@ int nfs_post_op_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ nfs_fattr_set_barrier(fattr);
+ status = nfs_post_op_update_inode_locked(inode, fattr,
+ NFS_INO_INVALID_CHANGE
+- | NFS_INO_INVALID_CTIME);
++ | NFS_INO_INVALID_CTIME
++ | NFS_INO_REVAL_FORCED);
+ spin_unlock(&inode->i_lock);
+
+ return status;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 409acdda70dd..2d94eb9cd386 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -746,6 +746,13 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ slot->slot_nr,
+ slot->seq_nr);
+ goto out_retry;
++ case -NFS4ERR_RETRY_UNCACHED_REP:
++ case -NFS4ERR_SEQ_FALSE_RETRY:
++ /*
++ * The server thinks we tried to replay a request.
++ * Retry the call after bumping the sequence ID.
++ */
++ goto retry_new_seq;
+ case -NFS4ERR_BADSLOT:
+ /*
+ * The slot id we used was probably retired. Try again
+@@ -770,10 +777,6 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ goto retry_nowait;
+ }
+ goto session_recover;
+- case -NFS4ERR_SEQ_FALSE_RETRY:
+- if (interrupted)
+- goto retry_new_seq;
+- goto session_recover;
+ default:
+ /* Just update the slot sequence no. */
+ slot->seq_done = 1;
+@@ -2804,7 +2807,7 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ if (ret != 0)
+ goto out;
+
+- state = nfs4_opendata_to_nfs4_state(opendata);
++ state = _nfs4_opendata_to_nfs4_state(opendata);
+ ret = PTR_ERR(state);
+ if (IS_ERR(state))
+ goto out;
+@@ -2840,6 +2843,7 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ nfs4_schedule_stateid_recovery(server, state);
+ }
+ out:
++ nfs4_sequence_free_slot(&opendata->o_res.seq_res);
+ return ret;
+ }
+
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index ee723aa153a3..b35d55e4851a 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1144,7 +1144,7 @@ _pnfs_return_layout(struct inode *ino)
+ LIST_HEAD(tmp_list);
+ nfs4_stateid stateid;
+ int status = 0;
+- bool send;
++ bool send, valid_layout;
+
+ dprintk("NFS: %s for inode %lu\n", __func__, ino->i_ino);
+
+@@ -1165,6 +1165,7 @@ _pnfs_return_layout(struct inode *ino)
+ goto out_put_layout_hdr;
+ spin_lock(&ino->i_lock);
+ }
++ valid_layout = pnfs_layout_is_valid(lo);
+ pnfs_clear_layoutcommit(ino, &tmp_list);
+ pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL, 0);
+
+@@ -1178,7 +1179,8 @@ _pnfs_return_layout(struct inode *ino)
+ }
+
+ /* Don't send a LAYOUTRETURN if list was initially empty */
+- if (!test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags)) {
++ if (!test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags) ||
++ !valid_layout) {
+ spin_unlock(&ino->i_lock);
+ dprintk("NFS: %s no layout segments to return\n", __func__);
+ goto out_put_layout_hdr;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index fc74d6f46bd5..3b40d1b57613 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4378,8 +4378,11 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ spin_unlock(&state_lock);
+
+ if (status)
+- destroy_unhashed_deleg(dp);
++ goto out_unlock;
++
+ return dp;
++out_unlock:
++ vfs_setlease(fp->fi_deleg_file, F_UNLCK, NULL, (void **)&dp);
+ out_clnt_odstate:
+ put_clnt_odstate(dp->dl_clnt_odstate);
+ out_stid:
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index cfe535c286c3..59d471025949 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -1585,6 +1585,8 @@ nfsd4_decode_getdeviceinfo(struct nfsd4_compoundargs *argp,
+ gdev->gd_maxcount = be32_to_cpup(p++);
+ num = be32_to_cpup(p++);
+ if (num) {
++ if (num > 1000)
++ goto xdr_error;
+ READ_BUF(4 * num);
+ gdev->gd_notify_types = be32_to_cpup(p++);
+ for (i = 1; i < num; i++) {
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index b0abfe02beab..308d64e72515 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -673,13 +673,13 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
+ [ilog2(VM_MERGEABLE)] = "mg",
+ [ilog2(VM_UFFD_MISSING)]= "um",
+ [ilog2(VM_UFFD_WP)] = "uw",
+-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
++#ifdef CONFIG_ARCH_HAS_PKEYS
+ /* These come out via ProtectionKey: */
+ [ilog2(VM_PKEY_BIT0)] = "",
+ [ilog2(VM_PKEY_BIT1)] = "",
+ [ilog2(VM_PKEY_BIT2)] = "",
+ [ilog2(VM_PKEY_BIT3)] = "",
+-#endif
++#endif /* CONFIG_ARCH_HAS_PKEYS */
+ };
+ size_t i;
+
+@@ -1259,8 +1259,9 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
+ if (pte_swp_soft_dirty(pte))
+ flags |= PM_SOFT_DIRTY;
+ entry = pte_to_swp_entry(pte);
+- frame = swp_type(entry) |
+- (swp_offset(entry) << MAX_SWAPFILES_SHIFT);
++ if (pm->show_pfn)
++ frame = swp_type(entry) |
++ (swp_offset(entry) << MAX_SWAPFILES_SHIFT);
+ flags |= PM_SWAP;
+ if (is_migration_entry(entry))
+ page = migration_entry_to_page(entry);
+@@ -1311,11 +1312,14 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+ else if (is_swap_pmd(pmd)) {
+ swp_entry_t entry = pmd_to_swp_entry(pmd);
+- unsigned long offset = swp_offset(entry);
++ unsigned long offset;
+
+- offset += (addr & ~PMD_MASK) >> PAGE_SHIFT;
+- frame = swp_type(entry) |
+- (offset << MAX_SWAPFILES_SHIFT);
++ if (pm->show_pfn) {
++ offset = swp_offset(entry) +
++ ((addr & ~PMD_MASK) >> PAGE_SHIFT);
++ frame = swp_type(entry) |
++ (offset << MAX_SWAPFILES_SHIFT);
++ }
+ flags |= PM_SWAP;
+ if (pmd_swp_soft_dirty(pmd))
+ flags |= PM_SOFT_DIRTY;
+@@ -1333,10 +1337,12 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ err = add_to_pagemap(addr, &pme, pm);
+ if (err)
+ break;
+- if (pm->show_pfn && (flags & PM_PRESENT))
+- frame++;
+- else if (flags & PM_SWAP)
+- frame += (1 << MAX_SWAPFILES_SHIFT);
++ if (pm->show_pfn) {
++ if (flags & PM_PRESENT)
++ frame++;
++ else if (flags & PM_SWAP)
++ frame += (1 << MAX_SWAPFILES_SHIFT);
++ }
+ }
+ spin_unlock(ptl);
+ return err;
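For context on the pagemap hunks, the relevant bit layout (per Documentation, pagemap interface) is:

  /* pagemap swap-entry layout:
   *   bits 0-4   swap type   (MAX_SWAPFILES_SHIFT == 5)
   *   bits 5-54  swap offset
   *   bit  62    PM_SWAP
   * After the fix, 'frame' stays zero unless pm->show_pfn, which requires
   * CAP_SYS_ADMIN, so unprivileged readers can no longer learn swap
   * types and offsets of other processes' pages.
   */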
+diff --git a/fs/squashfs/cache.c b/fs/squashfs/cache.c
+index 23813c078cc9..0839efa720b3 100644
+--- a/fs/squashfs/cache.c
++++ b/fs/squashfs/cache.c
+@@ -350,6 +350,9 @@ int squashfs_read_metadata(struct super_block *sb, void *buffer,
+
+ TRACE("Entered squashfs_read_metadata [%llx:%x]\n", *block, *offset);
+
++ if (unlikely(length < 0))
++ return -EIO;
++
+ while (length) {
+ entry = squashfs_cache_get(sb, msblk->block_cache, *block, 0);
+ if (entry->error) {
+diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
+index 13d80947bf9e..fcff2e0487fe 100644
+--- a/fs/squashfs/file.c
++++ b/fs/squashfs/file.c
+@@ -194,7 +194,11 @@ static long long read_indexes(struct super_block *sb, int n,
+ }
+
+ for (i = 0; i < blocks; i++) {
+- int size = le32_to_cpu(blist[i]);
++ int size = squashfs_block_size(blist[i]);
++ if (size < 0) {
++ err = size;
++ goto failure;
++ }
+ block += SQUASHFS_COMPRESSED_SIZE_BLOCK(size);
+ }
+ n -= blocks;
+@@ -367,7 +371,7 @@ static int read_blocklist(struct inode *inode, int index, u64 *block)
+ sizeof(size));
+ if (res < 0)
+ return res;
+- return le32_to_cpu(size);
++ return squashfs_block_size(size);
+ }
+
+ /* Copy data into page cache */
+diff --git a/fs/squashfs/fragment.c b/fs/squashfs/fragment.c
+index 0ed6edbc5c71..86ad9a4b8c36 100644
+--- a/fs/squashfs/fragment.c
++++ b/fs/squashfs/fragment.c
+@@ -61,9 +61,7 @@ int squashfs_frag_lookup(struct super_block *sb, unsigned int fragment,
+ return size;
+
+ *fragment_block = le64_to_cpu(fragment_entry.start_block);
+- size = le32_to_cpu(fragment_entry.size);
+-
+- return size;
++ return squashfs_block_size(fragment_entry.size);
+ }
+
+
+diff --git a/fs/squashfs/squashfs_fs.h b/fs/squashfs/squashfs_fs.h
+index 24d12fd14177..4e6853f084d0 100644
+--- a/fs/squashfs/squashfs_fs.h
++++ b/fs/squashfs/squashfs_fs.h
+@@ -129,6 +129,12 @@
+
+ #define SQUASHFS_COMPRESSED_BLOCK(B) (!((B) & SQUASHFS_COMPRESSED_BIT_BLOCK))
+
++static inline int squashfs_block_size(__le32 raw)
++{
++ u32 size = le32_to_cpu(raw);
++ return (size >> 25) ? -EIO : size;
++}
++
+ /*
+ * Inode number ops. Inodes consist of a compressed block number, and an
+ * uncompressed offset within that block
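A toy userspace model of the new helper's bound (byte swapping dropped for brevity): valid squashfs block fields use only bits 0-24, a 24-bit size plus the compressed-block flag at bit 24, so any higher bit signals corruption and maps to -EIO:

  #include <stdio.h>
  #include <stdint.h>

  #define EIO 5

  static int block_size(uint32_t size)	/* modeled on squashfs_block_size() */
  {
  	return (size >> 25) ? -EIO : (int)size;
  }

  int main(void)
  {
  	printf("%d\n", block_size(0x01000000));	/* compressed flag set: ok */
  	printf("%d\n", block_size(0xffffffffu));	/* corrupt: prints -5 */
  	return 0;
  }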
+diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
+index 62903bae0221..0bac0c7d0dec 100644
+--- a/include/drm/drm_dp_helper.h
++++ b/include/drm/drm_dp_helper.h
+@@ -478,6 +478,7 @@
+ # define DP_PSR_FRAME_CAPTURE (1 << 3)
+ # define DP_PSR_SELECTIVE_UPDATE (1 << 4)
+ # define DP_PSR_IRQ_HPD_WITH_CRC_ERRORS (1 << 5)
++# define DP_PSR_ENABLE_PSR2 (1 << 6) /* eDP 1.4a */
+
+ #define DP_ADAPTER_CTRL 0x1a0
+ # define DP_ADAPTER_CTRL_FORCE_LOAD_SENSE (1 << 0)
+diff --git a/include/linux/delayacct.h b/include/linux/delayacct.h
+index 5e335b6203f4..31c865d1842e 100644
+--- a/include/linux/delayacct.h
++++ b/include/linux/delayacct.h
+@@ -29,7 +29,7 @@
+
+ #ifdef CONFIG_TASK_DELAY_ACCT
+ struct task_delay_info {
+- spinlock_t lock;
++ raw_spinlock_t lock;
+ unsigned int flags; /* Private per-task flags */
+
+ /* For each stat XXX, add following, aligned appropriately
+@@ -124,7 +124,7 @@ static inline void delayacct_blkio_start(void)
+
+ static inline void delayacct_blkio_end(struct task_struct *p)
+ {
+- if (current->delays)
++ if (p->delays)
+ __delayacct_blkio_end(p);
+ delayacct_clear_flag(DELAYACCT_PF_BLKIO);
+ }
+diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
+index 92f20832fd28..e8ca5e654277 100644
+--- a/include/linux/dma-iommu.h
++++ b/include/linux/dma-iommu.h
+@@ -17,6 +17,7 @@
+ #define __DMA_IOMMU_H
+
+ #ifdef __KERNEL__
++#include <linux/types.h>
+ #include <asm/errno.h>
+
+ #ifdef CONFIG_IOMMU_DMA
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index d99b71bc2c66..091690119144 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -158,6 +158,15 @@ enum memcg_kmem_state {
+ KMEM_ONLINE,
+ };
+
++#if defined(CONFIG_SMP)
++struct memcg_padding {
++ char x[0];
++} ____cacheline_internodealigned_in_smp;
++#define MEMCG_PADDING(name) struct memcg_padding name;
++#else
++#define MEMCG_PADDING(name)
++#endif
++
+ /*
+ * The memory controller data structure. The memory controller controls both
+ * page cache and RSS per cgroup. We would eventually like to provide
+@@ -205,7 +214,6 @@ struct mem_cgroup {
+ int oom_kill_disable;
+
+ /* memory.events */
+- atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS];
+ struct cgroup_file events_file;
+
+ /* protect arrays of thresholds */
+@@ -225,19 +233,26 @@ struct mem_cgroup {
+ * mem_cgroup ? And what type of charges should we move ?
+ */
+ unsigned long move_charge_at_immigrate;
++ /* taken only while moving_account > 0 */
++ spinlock_t move_lock;
++ unsigned long move_lock_flags;
++
++ MEMCG_PADDING(_pad1_);
++
+ /*
+ * set > 0 if pages under this cgroup are moving to other cgroup.
+ */
+ atomic_t moving_account;
+- /* taken only while moving_account > 0 */
+- spinlock_t move_lock;
+ struct task_struct *move_lock_task;
+- unsigned long move_lock_flags;
+
+ /* memory.stat */
+ struct mem_cgroup_stat_cpu __percpu *stat_cpu;
++
++ MEMCG_PADDING(_pad2_);
++
+ atomic_long_t stat[MEMCG_NR_STAT];
+ atomic_long_t events[NR_VM_EVENT_ITEMS];
++ atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS];
+
+ unsigned long socket_pressure;
+
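The MEMCG_PADDING() reshuffle is about false sharing: on SMP an empty, internode-cacheline-aligned member pushes the next field onto its own cache line, separating constantly-written counters from mostly-read fields. In miniature (illustrative struct, not the kernel's):

  struct hot_and_cold {
  	unsigned long config;		/* mostly read */
  	MEMCG_PADDING(_pad_);		/* expands to nothing on !SMP */
  	atomic_long_t counter;		/* written constantly; own cache line */
  };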
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index d14261d6b213..edab43d2bec8 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -228,15 +228,16 @@ extern unsigned int kobjsize(const void *objp);
+ #define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4)
+ #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
+
+-#if defined(CONFIG_X86)
+-# define VM_PAT VM_ARCH_1 /* PAT reserves whole VMA at once (x86) */
+-#if defined (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)
++#ifdef CONFIG_ARCH_HAS_PKEYS
+ # define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0
+ # define VM_PKEY_BIT0 VM_HIGH_ARCH_0 /* A protection key is a 4-bit value */
+ # define VM_PKEY_BIT1 VM_HIGH_ARCH_1
+ # define VM_PKEY_BIT2 VM_HIGH_ARCH_2
+ # define VM_PKEY_BIT3 VM_HIGH_ARCH_3
+-#endif
++#endif /* CONFIG_ARCH_HAS_PKEYS */
++
++#if defined(CONFIG_X86)
++# define VM_PAT VM_ARCH_1 /* PAT reserves whole VMA at once (x86) */
+ #elif defined(CONFIG_PPC)
+ # define VM_SAO VM_ARCH_1 /* Strong Access Ordering (powerpc) */
+ #elif defined(CONFIG_PARISC)
+diff --git a/include/linux/mmc/sdio_ids.h b/include/linux/mmc/sdio_ids.h
+index cdd66a5fbd5e..0a7abe8a407f 100644
+--- a/include/linux/mmc/sdio_ids.h
++++ b/include/linux/mmc/sdio_ids.h
+@@ -35,6 +35,7 @@
+ #define SDIO_DEVICE_ID_BROADCOM_4335_4339 0x4335
+ #define SDIO_DEVICE_ID_BROADCOM_4339 0x4339
+ #define SDIO_DEVICE_ID_BROADCOM_43362 0xa962
++#define SDIO_DEVICE_ID_BROADCOM_43364 0xa9a4
+ #define SDIO_DEVICE_ID_BROADCOM_43430 0xa9a6
+ #define SDIO_DEVICE_ID_BROADCOM_4345 0x4345
+ #define SDIO_DEVICE_ID_BROADCOM_43455 0xa9bf
+diff --git a/include/linux/netfilter/ipset/ip_set_timeout.h b/include/linux/netfilter/ipset/ip_set_timeout.h
+index bfb3531fd88a..7ad8ddf9ca8a 100644
+--- a/include/linux/netfilter/ipset/ip_set_timeout.h
++++ b/include/linux/netfilter/ipset/ip_set_timeout.h
+@@ -65,8 +65,14 @@ ip_set_timeout_set(unsigned long *timeout, u32 value)
+ static inline u32
+ ip_set_timeout_get(const unsigned long *timeout)
+ {
+- return *timeout == IPSET_ELEM_PERMANENT ? 0 :
+- jiffies_to_msecs(*timeout - jiffies)/MSEC_PER_SEC;
++ u32 t;
++
++ if (*timeout == IPSET_ELEM_PERMANENT)
++ return 0;
++
++ t = jiffies_to_msecs(*timeout - jiffies)/MSEC_PER_SEC;
++ /* Zero value in userspace means no timeout */
++ return t == 0 ? 1 : t;
+ }
+
+ #endif /* __KERNEL__ */
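A userspace-flavored model of the contract the ipset change enforces: 0 is reserved for "permanent", so a live entry about to expire must round up to 1 second rather than down to 0:

  #include <stdint.h>

  static uint32_t timeout_get(uint32_t msecs_left, int permanent)
  {
  	uint32_t t;

  	if (permanent)
  		return 0;		/* 0 == no timeout */
  	t = msecs_left / 1000;
  	return t == 0 ? 1 : t;		/* never report 0 for a live timer */
  }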
+diff --git a/include/linux/regulator/consumer.h b/include/linux/regulator/consumer.h
+index df176d7c2b87..25602afd4844 100644
+--- a/include/linux/regulator/consumer.h
++++ b/include/linux/regulator/consumer.h
+@@ -80,6 +80,7 @@ struct regmap;
+ * These modes can be OR'ed together to make up a mask of valid register modes.
+ */
+
++#define REGULATOR_MODE_INVALID 0x0
+ #define REGULATOR_MODE_FAST 0x1
+ #define REGULATOR_MODE_NORMAL 0x2
+ #define REGULATOR_MODE_IDLE 0x4
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index b4c9fda9d833..3361cc8eb635 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -348,7 +348,8 @@ struct earlycon_device {
+ };
+
+ struct earlycon_id {
+- char name[16];
++ char name[15];
++ char name_term; /* In case compiler didn't '\0' term name */
+ char compatible[128];
+ int (*setup)(struct earlycon_device *, const char *options);
+ };
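The earlycon struct change is a layout trick: shrink name to 15 bytes and follow it with a dedicated byte that static storage zero-initializes, so reads of name always hit a '\0' even when an initializer fills all 15 characters. Schematically (an illustrative copy of the idea, not the real header):

  struct id_like {
  	char name[15];		/* may be completely filled by an initializer */
  	char name_term;		/* stays 0 in static objects: guaranteed '\0' */
  	char compatible[128];
  };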
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 9cf770150539..5ccc4ec646cb 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -342,7 +342,7 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
+ struct pipe_inode_info *pipe, size_t len,
+ unsigned int flags);
+
+-void tcp_enter_quickack_mode(struct sock *sk);
++void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks);
+ static inline void tcp_dec_quickack_mode(struct sock *sk,
+ const unsigned int pkts)
+ {
+diff --git a/include/soc/tegra/mc.h b/include/soc/tegra/mc.h
+index 233bae954970..be6e49124c6d 100644
+--- a/include/soc/tegra/mc.h
++++ b/include/soc/tegra/mc.h
+@@ -108,6 +108,8 @@ struct tegra_mc_soc {
+ u8 client_id_mask;
+
+ const struct tegra_smmu_soc *smmu;
++
++ u32 intmask;
+ };
+
+ struct tegra_mc {
+diff --git a/include/uapi/sound/asoc.h b/include/uapi/sound/asoc.h
+index 69c37ecbff7e..f3c4b46e39d8 100644
+--- a/include/uapi/sound/asoc.h
++++ b/include/uapi/sound/asoc.h
+@@ -139,6 +139,11 @@
+ #define SND_SOC_TPLG_DAI_FLGBIT_SYMMETRIC_CHANNELS (1 << 1)
+ #define SND_SOC_TPLG_DAI_FLGBIT_SYMMETRIC_SAMPLEBITS (1 << 2)
+
++/* DAI clock gating */
++#define SND_SOC_TPLG_DAI_CLK_GATE_UNDEFINED 0
++#define SND_SOC_TPLG_DAI_CLK_GATE_GATED 1
++#define SND_SOC_TPLG_DAI_CLK_GATE_CONT 2
++
+ /* DAI physical PCM data formats.
+ * Add new formats to the end of the list.
+ */
+@@ -160,6 +165,18 @@
+ #define SND_SOC_TPLG_LNK_FLGBIT_SYMMETRIC_SAMPLEBITS (1 << 2)
+ #define SND_SOC_TPLG_LNK_FLGBIT_VOICE_WAKEUP (1 << 3)
+
++/* DAI topology BCLK parameter
++ * For backwards compatibility, the codec is bclk master by default
++ */
++#define SND_SOC_TPLG_BCLK_CM 0 /* codec is bclk master */
++#define SND_SOC_TPLG_BCLK_CS 1 /* codec is bclk slave */
++
++/* DAI topology FSYNC parameter
++ * For backwards compatibility, the codec is fsync master by default
++ */
++#define SND_SOC_TPLG_FSYNC_CM 0 /* codec is fsync master */
++#define SND_SOC_TPLG_FSYNC_CS 1 /* codec is fsync slave */
++
+ /*
+ * Block Header.
+ * This header precedes all object and object arrays below.
+@@ -312,11 +329,11 @@ struct snd_soc_tplg_hw_config {
+ __le32 size; /* in bytes of this structure */
+ __le32 id; /* unique ID - - used to match */
+ __le32 fmt; /* SND_SOC_DAI_FORMAT_ format value */
+- __u8 clock_gated; /* 1 if clock can be gated to save power */
++ __u8 clock_gated; /* SND_SOC_TPLG_DAI_CLK_GATE_ value */
+ __u8 invert_bclk; /* 1 for inverted BCLK, 0 for normal */
+ __u8 invert_fsync; /* 1 for inverted frame clock, 0 for normal */
+- __u8 bclk_master; /* 1 for master of BCLK, 0 for slave */
+- __u8 fsync_master; /* 1 for master of FSYNC, 0 for slave */
++ __u8 bclk_master; /* SND_SOC_TPLG_BCLK_ value */
++ __u8 fsync_master; /* SND_SOC_TPLG_FSYNC_ value */
+ __u8 mclk_direction; /* 0 for input, 1 for output */
+ __le16 reserved; /* for 32bit alignment */
+ 	__le32 mclk_rate;	/* MCLK or SYSCLK frequency in Hz */
+diff --git a/ipc/msg.c b/ipc/msg.c
+index 56fd1c73eedc..574f76c9a2ff 100644
+--- a/ipc/msg.c
++++ b/ipc/msg.c
+@@ -758,7 +758,7 @@ static inline int pipelined_send(struct msg_queue *msq, struct msg_msg *msg,
+ WRITE_ONCE(msr->r_msg, ERR_PTR(-E2BIG));
+ } else {
+ ipc_update_pid(&msq->q_lrpid, task_pid(msr->r_tsk));
+- msq->q_rtime = get_seconds();
++ msq->q_rtime = ktime_get_real_seconds();
+
+ wake_q_add(wake_q, msr->r_tsk);
+ WRITE_ONCE(msr->r_msg, msg);
+@@ -859,7 +859,7 @@ static long do_msgsnd(int msqid, long mtype, void __user *mtext,
+ }
+
+ ipc_update_pid(&msq->q_lspid, task_tgid(current));
+- msq->q_stime = get_seconds();
++ msq->q_stime = ktime_get_real_seconds();
+
+ if (!pipelined_send(msq, msg, &wake_q)) {
+ /* no one is waiting for this message, enqueue it */
+@@ -1087,7 +1087,7 @@ static long do_msgrcv(int msqid, void __user *buf, size_t bufsz, long msgtyp, in
+
+ list_del(&msg->m_list);
+ msq->q_qnum--;
+- msq->q_rtime = get_seconds();
++ msq->q_rtime = ktime_get_real_seconds();
+ ipc_update_pid(&msq->q_lrpid, task_tgid(current));
+ msq->q_cbytes -= msg->m_ts;
+ atomic_sub(msg->m_ts, &ns->msg_bytes);
+diff --git a/ipc/sem.c b/ipc/sem.c
+index 06be75d9217a..c6a8a971769d 100644
+--- a/ipc/sem.c
++++ b/ipc/sem.c
+@@ -104,7 +104,7 @@ struct sem {
+ /* that alter the semaphore */
+ struct list_head pending_const; /* pending single-sop operations */
+ /* that do not alter the semaphore*/
+- time_t sem_otime; /* candidate for sem_otime */
++ time64_t sem_otime; /* candidate for sem_otime */
+ } ____cacheline_aligned_in_smp;
+
+ /* One sem_array data structure for each set of semaphores in the system. */
+@@ -984,10 +984,10 @@ again:
+ static void set_semotime(struct sem_array *sma, struct sembuf *sops)
+ {
+ if (sops == NULL) {
+- sma->sems[0].sem_otime = get_seconds();
++ sma->sems[0].sem_otime = ktime_get_real_seconds();
+ } else {
+ sma->sems[sops[0].sem_num].sem_otime =
+- get_seconds();
++ ktime_get_real_seconds();
+ }
+ }
+
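The two ipc/ hunks above are part of the y2038 cleanup: get_seconds() returns an unsigned long (32 bits on 32-bit architectures), while ktime_get_real_seconds() returns time64_t everywhere. A minimal userspace sketch of why the width matters; the values are illustrative and this is not kernel code:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* 2038-01-19 03:14:07 UTC is the last second a signed
	 * 32-bit counter can represent. */
	uint32_t u = (uint32_t)INT32_MAX + 1u; /* well-defined unsigned wrap */
	int32_t wrapped = (int32_t)u;          /* typically INT32_MIN */
	int64_t t64 = (int64_t)INT32_MAX + 1;  /* 64 bits keep counting */

	printf("32-bit seconds after 2038-01-19: %d\n", (int)wrapped);
	printf("64-bit seconds after 2038-01-19: %lld\n", (long long)t64);
	return 0;
}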
+diff --git a/kernel/auditfilter.c b/kernel/auditfilter.c
+index d7a807e81451..a0c5a3ec6e60 100644
+--- a/kernel/auditfilter.c
++++ b/kernel/auditfilter.c
+@@ -426,7 +426,7 @@ static int audit_field_valid(struct audit_entry *entry, struct audit_field *f)
+ return -EINVAL;
+ break;
+ case AUDIT_EXE:
+- if (f->op != Audit_equal)
++ if (f->op != Audit_not_equal && f->op != Audit_equal)
+ return -EINVAL;
+ if (entry->rule.listnr != AUDIT_FILTER_EXIT)
+ return -EINVAL;
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 4e0a4ac803db..479c031ec54c 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -471,6 +471,8 @@ static int audit_filter_rules(struct task_struct *tsk,
+ break;
+ case AUDIT_EXE:
+ result = audit_exe_compare(tsk, rule->exe);
++ if (f->op == Audit_not_equal)
++ result = !result;
+ break;
+ case AUDIT_UID:
+ result = audit_uid_comparator(cred->uid, f->op, f->uid);
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 74fa60b4b438..4ed4613ed362 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1946,13 +1946,44 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
+ * for offload.
+ */
+ ulen = info.jited_prog_len;
+- info.jited_prog_len = prog->jited_len;
++ if (prog->aux->func_cnt) {
++ u32 i;
++
++ info.jited_prog_len = 0;
++ for (i = 0; i < prog->aux->func_cnt; i++)
++ info.jited_prog_len += prog->aux->func[i]->jited_len;
++ } else {
++ info.jited_prog_len = prog->jited_len;
++ }
++
+ if (info.jited_prog_len && ulen) {
+ if (bpf_dump_raw_ok()) {
+ uinsns = u64_to_user_ptr(info.jited_prog_insns);
+ ulen = min_t(u32, info.jited_prog_len, ulen);
+- if (copy_to_user(uinsns, prog->bpf_func, ulen))
+- return -EFAULT;
++
++ /* for multi-function programs, copy the JITed
++ * instructions for all the functions
++ */
++ if (prog->aux->func_cnt) {
++ u32 len, free, i;
++ u8 *img;
++
++ free = ulen;
++ for (i = 0; i < prog->aux->func_cnt; i++) {
++ len = prog->aux->func[i]->jited_len;
++ len = min_t(u32, len, free);
++ img = (u8 *) prog->aux->func[i]->bpf_func;
++ if (copy_to_user(uinsns, img, len))
++ return -EFAULT;
++ uinsns += len;
++ free -= len;
++ if (!free)
++ break;
++ }
++ } else {
++ if (copy_to_user(uinsns, prog->bpf_func, ulen))
++ return -EFAULT;
++ }
+ } else {
+ info.jited_prog_insns = 0;
+ }
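The bpf_prog_get_info_by_fd() change teaches the JIT dump to walk every sub-function of a multi-function program, copying as much of each image as still fits in the user-supplied buffer. A small userspace model of that packing loop; the names (pack, struct func) are made up, and the kernel copies with copy_to_user() rather than memcpy():

#include <stdio.h>
#include <string.h>

struct func { const char *img; unsigned len; };

static unsigned pack(char *dst, unsigned room,
		     const struct func *f, unsigned cnt)
{
	unsigned free_ = room, i;

	for (i = 0; i < cnt; i++) {
		unsigned len = f[i].len < free_ ? f[i].len : free_;

		memcpy(dst, f[i].img, len); /* kernel: copy_to_user() */
		dst += len;
		free_ -= len;
		if (!free_)
			break;              /* user buffer exhausted */
	}
	return room - free_;                /* bytes actually written */
}

int main(void)
{
	struct func fns[] = { { "AAAA", 4 }, { "BBBBBB", 6 }, { "CC", 2 } };
	char buf[8];
	unsigned n = pack(buf, sizeof(buf), fns, 3);

	printf("packed %u bytes: %.*s\n", n, (int)n, buf);
	return 0;
}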
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 1b586f31cbfd..23d187ec33ea 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5065,7 +5065,7 @@ static int replace_map_fd_with_map_ptr(struct bpf_verifier_env *env)
+ /* hold the map. If the program is rejected by verifier,
+ * the map will be released by release_maps() or it
+ * will be used by the valid program until it's unloaded
+- * and all maps are released in free_bpf_prog_info()
++ * and all maps are released in free_used_maps()
+ */
+ map = bpf_map_inc(map, false);
+ if (IS_ERR(map)) {
+@@ -5856,7 +5856,7 @@ skip_full_check:
+ err_release_maps:
+ if (!env->prog->aux->used_maps)
+ /* if we didn't copy map pointers into bpf_prog_info, release
+- * them now. Otherwise free_bpf_prog_info() will release them.
++ * them now. Otherwise free_used_maps() will release them.
+ */
+ release_maps(env);
+ *prog = env->prog;
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index e2764d767f18..ca8ac2824f0b 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -44,23 +44,24 @@ void __delayacct_tsk_init(struct task_struct *tsk)
+ {
+ tsk->delays = kmem_cache_zalloc(delayacct_cache, GFP_KERNEL);
+ if (tsk->delays)
+- spin_lock_init(&tsk->delays->lock);
++ raw_spin_lock_init(&tsk->delays->lock);
+ }
+
+ /*
+ * Finish delay accounting for a statistic using its timestamps (@start),
+ * accumulator (@total) and @count
+ */
+-static void delayacct_end(spinlock_t *lock, u64 *start, u64 *total, u32 *count)
++static void delayacct_end(raw_spinlock_t *lock, u64 *start, u64 *total,
++ u32 *count)
+ {
+ s64 ns = ktime_get_ns() - *start;
+ unsigned long flags;
+
+ if (ns > 0) {
+- spin_lock_irqsave(lock, flags);
++ raw_spin_lock_irqsave(lock, flags);
+ *total += ns;
+ (*count)++;
+- spin_unlock_irqrestore(lock, flags);
++ raw_spin_unlock_irqrestore(lock, flags);
+ }
+ }
+
+@@ -127,7 +128,7 @@ int __delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+
+ /* zero XXX_total, non-zero XXX_count implies XXX stat overflowed */
+
+- spin_lock_irqsave(&tsk->delays->lock, flags);
++ raw_spin_lock_irqsave(&tsk->delays->lock, flags);
+ tmp = d->blkio_delay_total + tsk->delays->blkio_delay;
+ d->blkio_delay_total = (tmp < d->blkio_delay_total) ? 0 : tmp;
+ tmp = d->swapin_delay_total + tsk->delays->swapin_delay;
+@@ -137,7 +138,7 @@ int __delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ d->blkio_count += tsk->delays->blkio_count;
+ d->swapin_count += tsk->delays->swapin_count;
+ d->freepages_count += tsk->delays->freepages_count;
+- spin_unlock_irqrestore(&tsk->delays->lock, flags);
++ raw_spin_unlock_irqrestore(&tsk->delays->lock, flags);
+
+ return 0;
+ }
+@@ -147,10 +148,10 @@ __u64 __delayacct_blkio_ticks(struct task_struct *tsk)
+ __u64 ret;
+ unsigned long flags;
+
+- spin_lock_irqsave(&tsk->delays->lock, flags);
++ raw_spin_lock_irqsave(&tsk->delays->lock, flags);
+ ret = nsec_to_clock_t(tsk->delays->blkio_delay +
+ tsk->delays->swapin_delay);
+- spin_unlock_irqrestore(&tsk->delays->lock, flags);
++ raw_spin_unlock_irqrestore(&tsk->delays->lock, flags);
+ return ret;
+ }
+
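Converting delays->lock from spinlock_t to raw_spinlock_t matters on PREEMPT_RT, where ordinary spinlocks become sleeping locks; the delay-accounting critical sections only bump counters, so a true busy-wait lock is safe and avoids sleeping in contexts that must not sleep. A loose userspace analogue using pthread spinlocks; the kernel calls are raw_spin_lock_irqsave()/raw_spin_unlock_irqrestore():

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t lock;
static unsigned long long total;
static unsigned count;

static void account(unsigned long long ns)
{
	if (ns > 0) {
		pthread_spin_lock(&lock);   /* kernel: raw_spin_lock_irqsave() */
		total += ns;
		count++;
		pthread_spin_unlock(&lock); /* kernel: raw_spin_unlock_irqrestore() */
	}
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	account(1500);
	account(2500);
	printf("total=%llu count=%u\n", total, count);
	pthread_spin_destroy(&lock);
	return 0;
}

(Build with -lpthread; the short critical section is the whole point of keeping a spinning lock here.)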
+diff --git a/kernel/fork.c b/kernel/fork.c
+index a5d21c42acfc..5ad558e6f8fe 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -440,6 +440,14 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ continue;
+ }
+ charge = 0;
++ /*
++ * Don't duplicate many vmas if we've been oom-killed (for
++ * example)
++ */
++ if (fatal_signal_pending(current)) {
++ retval = -EINTR;
++ goto out;
++ }
+ if (mpnt->vm_flags & VM_ACCOUNT) {
+ unsigned long len = vma_pages(mpnt);
+
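The dup_mmap() hunk lets fork() abandon a long VMA copy once the caller has been fatally signalled (the OOM killer being the motivating case), returning -EINTR instead of finishing pointless work. A userspace model of the same bail-out shape, with a signal handler standing in for fatal_signal_pending():

#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t fatal_pending;

static void on_term(int sig) { (void)sig; fatal_pending = 1; }

static int copy_many(int n)
{
	for (int i = 0; i < n; i++) {
		if (fatal_pending)
			return -1;  /* kernel returns -EINTR */
		/* ... copy one item ... */
	}
	return 0;
}

int main(void)
{
	signal(SIGTERM, on_term);
	raise(SIGTERM);             /* simulate the pending fatal signal */
	printf("copy_many: %d\n", copy_many(1000000));
	return 0;
}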
+diff --git a/kernel/hung_task.c b/kernel/hung_task.c
+index 751593ed7c0b..32b479468e4d 100644
+--- a/kernel/hung_task.c
++++ b/kernel/hung_task.c
+@@ -44,6 +44,7 @@ int __read_mostly sysctl_hung_task_warnings = 10;
+
+ static int __read_mostly did_panic;
+ static bool hung_task_show_lock;
++static bool hung_task_call_panic;
+
+ static struct task_struct *watchdog_task;
+
+@@ -127,10 +128,8 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
+ touch_nmi_watchdog();
+
+ if (sysctl_hung_task_panic) {
+- if (hung_task_show_lock)
+- debug_show_all_locks();
+- trigger_all_cpu_backtrace();
+- panic("hung_task: blocked tasks");
++ hung_task_show_lock = true;
++ hung_task_call_panic = true;
+ }
+ }
+
+@@ -193,6 +192,10 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
+ rcu_read_unlock();
+ if (hung_task_show_lock)
+ debug_show_all_locks();
++ if (hung_task_call_panic) {
++ trigger_all_cpu_backtrace();
++ panic("hung_task: blocked tasks");
++ }
+ }
+
+ static long hung_timeout_jiffies(unsigned long last_checked,
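The hung_task change moves the expensive reactions out of the RCU-protected scan: check_hung_task() now only records that locks should be dumped and that a panic is due, and check_hung_uninterruptible_tasks() acts on those flags after rcu_read_unlock(). A sketch of that record-then-act split, with plain booleans standing in for the kernel flags:

#include <stdio.h>
#include <stdbool.h>

static bool show_lock, call_panic;

static void check_task(int blocked_secs, int timeout, bool panic_on_hang)
{
	if (blocked_secs < timeout)
		return;
	show_lock = true;       /* defer debug_show_all_locks() */
	if (panic_on_hang)
		call_panic = true;  /* defer panic() */
}

int main(void)
{
	int blocked[] = { 10, 200, 5 };

	/* "rcu_read_lock()" -- the scan only records intent */
	for (unsigned i = 0; i < sizeof(blocked) / sizeof(*blocked); i++)
		check_task(blocked[i], 120, true);
	/* "rcu_read_unlock()" -- now it is safe to act */

	if (show_lock)
		printf("dump all locks\n");
	if (call_panic)
		printf("panic: hung_task blocked tasks\n");
	return 0;
}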
+diff --git a/kernel/kcov.c b/kernel/kcov.c
+index 2c16f1ab5e10..5be9a60a959f 100644
+--- a/kernel/kcov.c
++++ b/kernel/kcov.c
+@@ -241,7 +241,8 @@ static void kcov_put(struct kcov *kcov)
+
+ void kcov_task_init(struct task_struct *t)
+ {
+- t->kcov_mode = KCOV_MODE_DISABLED;
++ WRITE_ONCE(t->kcov_mode, KCOV_MODE_DISABLED);
++ barrier();
+ t->kcov_size = 0;
+ t->kcov_area = NULL;
+ t->kcov = NULL;
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 481951bf091d..1a481ae12dec 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -319,8 +319,14 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
+ task = create->result;
+ if (!IS_ERR(task)) {
+ static const struct sched_param param = { .sched_priority = 0 };
++ char name[TASK_COMM_LEN];
+
+- vsnprintf(task->comm, sizeof(task->comm), namefmt, args);
++ /*
++ * task is already visible to other tasks, so updating
++ * COMM must be protected.
++ */
++ vsnprintf(name, sizeof(name), namefmt, args);
++ set_task_comm(task, name);
+ /*
+ * root may have changed our (kthreadd's) priority or CPU mask.
+ * The kernel thread should not inherit these properties.
+diff --git a/kernel/memremap.c b/kernel/memremap.c
+index 895e6b76b25e..1a63739f48e8 100644
+--- a/kernel/memremap.c
++++ b/kernel/memremap.c
+@@ -348,10 +348,27 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
+ unsigned long pfn, pgoff, order;
+ pgprot_t pgprot = PAGE_KERNEL;
+ int error, nid, is_ram;
++ struct dev_pagemap *conflict_pgmap;
+
+ align_start = res->start & ~(SECTION_SIZE - 1);
+ align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
+ - align_start;
++ align_end = align_start + align_size - 1;
++
++ conflict_pgmap = get_dev_pagemap(PHYS_PFN(align_start), NULL);
++ if (conflict_pgmap) {
++ dev_WARN(dev, "Conflicting mapping in same section\n");
++ put_dev_pagemap(conflict_pgmap);
++ return ERR_PTR(-ENOMEM);
++ }
++
++ conflict_pgmap = get_dev_pagemap(PHYS_PFN(align_end), NULL);
++ if (conflict_pgmap) {
++ dev_WARN(dev, "Conflicting mapping in same section\n");
++ put_dev_pagemap(conflict_pgmap);
++ return ERR_PTR(-ENOMEM);
++ }
++
+ is_ram = region_intersects(align_start, align_size,
+ IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE);
+
+@@ -371,7 +388,6 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
+
+ mutex_lock(&pgmap_lock);
+ error = 0;
+- align_end = align_start + align_size - 1;
+
+ foreach_order_pgoff(res, order, pgoff) {
+ error = __radix_tree_insert(&pgmap_radix,
+diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
+index 4c10be0f4843..34238a7d48f6 100644
+--- a/kernel/power/suspend.c
++++ b/kernel/power/suspend.c
+@@ -60,7 +60,7 @@ static const struct platform_s2idle_ops *s2idle_ops;
+ static DECLARE_WAIT_QUEUE_HEAD(s2idle_wait_head);
+
+ enum s2idle_states __read_mostly s2idle_state;
+-static DEFINE_SPINLOCK(s2idle_lock);
++static DEFINE_RAW_SPINLOCK(s2idle_lock);
+
+ void s2idle_set_ops(const struct platform_s2idle_ops *ops)
+ {
+@@ -78,12 +78,12 @@ static void s2idle_enter(void)
+ {
+ trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_TO_IDLE, true);
+
+- spin_lock_irq(&s2idle_lock);
++ raw_spin_lock_irq(&s2idle_lock);
+ if (pm_wakeup_pending())
+ goto out;
+
+ s2idle_state = S2IDLE_STATE_ENTER;
+- spin_unlock_irq(&s2idle_lock);
++ raw_spin_unlock_irq(&s2idle_lock);
+
+ get_online_cpus();
+ cpuidle_resume();
+@@ -97,11 +97,11 @@ static void s2idle_enter(void)
+ cpuidle_pause();
+ put_online_cpus();
+
+- spin_lock_irq(&s2idle_lock);
++ raw_spin_lock_irq(&s2idle_lock);
+
+ out:
+ s2idle_state = S2IDLE_STATE_NONE;
+- spin_unlock_irq(&s2idle_lock);
++ raw_spin_unlock_irq(&s2idle_lock);
+
+ trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_TO_IDLE, false);
+ }
+@@ -156,12 +156,12 @@ void s2idle_wake(void)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&s2idle_lock, flags);
++ raw_spin_lock_irqsave(&s2idle_lock, flags);
+ if (s2idle_state > S2IDLE_STATE_NONE) {
+ s2idle_state = S2IDLE_STATE_WAKE;
+ wake_up(&s2idle_wait_head);
+ }
+- spin_unlock_irqrestore(&s2idle_lock, flags);
++ raw_spin_unlock_irqrestore(&s2idle_lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(s2idle_wake);
+
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index 449d67edfa4b..d7d091309054 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -281,7 +281,7 @@ void printk_safe_flush_on_panic(void)
+ * Make sure that we could access the main ring buffer.
+ * Do not risk a double release when more CPUs are up.
+ */
+- if (in_nmi() && raw_spin_is_locked(&logbuf_lock)) {
++ if (raw_spin_is_locked(&logbuf_lock)) {
+ if (num_online_cpus() > 1)
+ return;
+
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index e13df951aca7..28592b62b1d5 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -183,22 +183,21 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
+ {
+ struct rq *rq = cpu_rq(sg_cpu->cpu);
+- unsigned long util;
+
+- if (rq->rt.rt_nr_running) {
+- util = sg_cpu->max;
+- } else {
+- util = sg_cpu->util_dl;
+- if (rq->cfs.h_nr_running)
+- util += sg_cpu->util_cfs;
+- }
++ if (rq->rt.rt_nr_running)
++ return sg_cpu->max;
+
+ /*
++ * Utilization required by DEADLINE must always be granted while, for
++ * FAIR, we use blocked utilization of IDLE CPUs as a mechanism to
++ * gracefully reduce the frequency when no tasks show up for longer
++ * periods of time.
++ *
+ * Ideally we would like to set util_dl as min/guaranteed freq and
+ * util_cfs + util_dl as requested freq. However, cpufreq is not yet
+ * ready for such an interface. So, we only do the latter for now.
+ */
+- return min(util, sg_cpu->max);
++ return min(sg_cpu->max, (sg_cpu->util_dl + sg_cpu->util_cfs));
+ }
+
+ static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)
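The rewritten sugov_aggregate_util() requests util_dl + util_cfs clamped to the CPU's capacity, instead of dropping CFS utilization whenever no CFS task is currently runnable, so blocked utilization can still decay the frequency gracefully. A minimal model of the aggregation rule, using made-up capacity units:

#include <stdio.h>

static unsigned long aggregate(unsigned long max, unsigned long util_dl,
			       unsigned long util_cfs, int rt_running)
{
	if (rt_running)
		return max; /* RT demand pins the CPU at max */
	return (util_dl + util_cfs) < max ? (util_dl + util_cfs) : max;
}

int main(void)
{
	printf("%lu\n", aggregate(1024, 200, 300, 0)); /* 500 */
	printf("%lu\n", aggregate(1024, 800, 600, 0)); /* clamped to 1024 */
	printf("%lu\n", aggregate(1024, 0, 0, 1));     /* RT: 1024 */
	return 0;
}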
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index 2f6fa95de2d8..1ff523dae6e2 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -37,7 +37,7 @@ struct cpu_stop_done {
+ struct cpu_stopper {
+ struct task_struct *thread;
+
+- spinlock_t lock;
++ raw_spinlock_t lock;
+ bool enabled; /* is this stopper enabled? */
+ struct list_head works; /* list of pending works */
+
+@@ -81,13 +81,13 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ unsigned long flags;
+ bool enabled;
+
+- spin_lock_irqsave(&stopper->lock, flags);
++ raw_spin_lock_irqsave(&stopper->lock, flags);
+ enabled = stopper->enabled;
+ if (enabled)
+ __cpu_stop_queue_work(stopper, work, &wakeq);
+ else if (work->done)
+ cpu_stop_signal_done(work->done);
+- spin_unlock_irqrestore(&stopper->lock, flags);
++ raw_spin_unlock_irqrestore(&stopper->lock, flags);
+
+ wake_up_q(&wakeq);
+
+@@ -237,8 +237,8 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ DEFINE_WAKE_Q(wakeq);
+ int err;
+ retry:
+- spin_lock_irq(&stopper1->lock);
+- spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);
++ raw_spin_lock_irq(&stopper1->lock);
++ raw_spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);
+
+ err = -ENOENT;
+ if (!stopper1->enabled || !stopper2->enabled)
+@@ -261,8 +261,8 @@ retry:
+ __cpu_stop_queue_work(stopper1, work1, &wakeq);
+ __cpu_stop_queue_work(stopper2, work2, &wakeq);
+ unlock:
+- spin_unlock(&stopper2->lock);
+- spin_unlock_irq(&stopper1->lock);
++ raw_spin_unlock(&stopper2->lock);
++ raw_spin_unlock_irq(&stopper1->lock);
+
+ if (unlikely(err == -EDEADLK)) {
+ while (stop_cpus_in_progress)
+@@ -461,9 +461,9 @@ static int cpu_stop_should_run(unsigned int cpu)
+ unsigned long flags;
+ int run;
+
+- spin_lock_irqsave(&stopper->lock, flags);
++ raw_spin_lock_irqsave(&stopper->lock, flags);
+ run = !list_empty(&stopper->works);
+- spin_unlock_irqrestore(&stopper->lock, flags);
++ raw_spin_unlock_irqrestore(&stopper->lock, flags);
+ return run;
+ }
+
+@@ -474,13 +474,13 @@ static void cpu_stopper_thread(unsigned int cpu)
+
+ repeat:
+ work = NULL;
+- spin_lock_irq(&stopper->lock);
++ raw_spin_lock_irq(&stopper->lock);
+ if (!list_empty(&stopper->works)) {
+ work = list_first_entry(&stopper->works,
+ struct cpu_stop_work, list);
+ list_del_init(&work->list);
+ }
+- spin_unlock_irq(&stopper->lock);
++ raw_spin_unlock_irq(&stopper->lock);
+
+ if (work) {
+ cpu_stop_fn_t fn = work->fn;
+@@ -554,7 +554,7 @@ static int __init cpu_stop_init(void)
+ for_each_possible_cpu(cpu) {
+ struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
+
+- spin_lock_init(&stopper->lock);
++ raw_spin_lock_init(&stopper->lock);
+ INIT_LIST_HEAD(&stopper->works);
+ }
+
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 84f37420fcf5..e0ff8f94f237 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -453,8 +453,8 @@ static inline int __clocksource_watchdog_kthread(void) { return 0; }
+ static bool clocksource_is_watchdog(struct clocksource *cs) { return false; }
+ void clocksource_mark_unstable(struct clocksource *cs) { }
+
+-static void inline clocksource_watchdog_lock(unsigned long *flags) { }
+-static void inline clocksource_watchdog_unlock(unsigned long *flags) { }
++static inline void clocksource_watchdog_lock(unsigned long *flags) { }
++static inline void clocksource_watchdog_unlock(unsigned long *flags) { }
+
+ #endif /* CONFIG_CLOCKSOURCE_WATCHDOG */
+
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index 8b5bdcf64871..f14a547f6303 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -681,6 +681,8 @@ event_trigger_callback(struct event_command *cmd_ops,
+ goto out_free;
+
+ out_reg:
++ /* Up the trigger_data count to make sure reg doesn't free it on failure */
++ event_trigger_init(trigger_ops, trigger_data);
+ ret = cmd_ops->reg(glob, trigger_ops, trigger_data, file);
+ /*
+ * The above returns on success the # of functions enabled,
+@@ -688,11 +690,13 @@ event_trigger_callback(struct event_command *cmd_ops,
+ * Consider no functions a failure too.
+ */
+ if (!ret) {
++ cmd_ops->unreg(glob, trigger_ops, trigger_data, file);
+ ret = -ENOENT;
+- goto out_free;
+- } else if (ret < 0)
+- goto out_free;
+- ret = 0;
++ } else if (ret > 0)
++ ret = 0;
++
++ /* Down the counter of trigger_data or free it if not used anymore */
++ event_trigger_free(trigger_ops, trigger_data);
+ out:
+ return ret;
+
+@@ -1418,6 +1422,9 @@ int event_enable_trigger_func(struct event_command *cmd_ops,
+ goto out;
+ }
+
++ /* Up the trigger_data count to make sure nothing frees it on failure */
++ event_trigger_init(trigger_ops, trigger_data);
++
+ if (trigger) {
+ number = strsep(&trigger, ":");
+
+@@ -1468,6 +1475,7 @@ int event_enable_trigger_func(struct event_command *cmd_ops,
+ goto out_disable;
+ /* Just return zero, not the number of enabled functions */
+ ret = 0;
++ event_trigger_free(trigger_ops, trigger_data);
+ out:
+ return ret;
+
+@@ -1478,7 +1486,7 @@ int event_enable_trigger_func(struct event_command *cmd_ops,
+ out_free:
+ if (cmd_ops->set_filter)
+ cmd_ops->set_filter(NULL, trigger_data, NULL);
+- kfree(trigger_data);
++ event_trigger_free(trigger_ops, trigger_data);
+ kfree(enable_data);
+ goto out;
+ }
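Both event_trigger_callback() and event_enable_trigger_func() now bracket registration with event_trigger_init()/event_trigger_free(): the extra reference keeps trigger_data alive even if the reg callback drops its own reference on failure. A userspace model of that get/put bracket; the names are illustrative, not the tracing API:

#include <stdio.h>
#include <stdlib.h>

struct trigger { int refs; };

static void trigger_get(struct trigger *t) { t->refs++; }

static void trigger_put(struct trigger *t)
{
	if (--t->refs == 0) {
		printf("freeing trigger\n");
		free(t);
	}
}

static int trigger_register(struct trigger *t, int fail)
{
	if (fail) {
		trigger_put(t); /* registration drops its ref on error */
		return -1;
	}
	return 0;
}

int main(void)
{
	struct trigger *t = calloc(1, sizeof(*t));

	t->refs = 1;
	trigger_get(t);             /* bracket: keep t alive across reg */
	if (trigger_register(t, 1) < 0)
		printf("register failed, t still valid (refs=%d)\n", t->refs);
	trigger_put(t);             /* drop the bracket ref; frees here */
	return 0;
}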
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index eebc7c92f6d0..dd88cc0af065 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -400,11 +400,10 @@ static struct trace_kprobe *find_trace_kprobe(const char *event,
+ static int
+ enable_trace_kprobe(struct trace_kprobe *tk, struct trace_event_file *file)
+ {
++ struct event_file_link *link = NULL;
+ int ret = 0;
+
+ if (file) {
+- struct event_file_link *link;
+-
+ link = kmalloc(sizeof(*link), GFP_KERNEL);
+ if (!link) {
+ ret = -ENOMEM;
+@@ -424,6 +423,18 @@ enable_trace_kprobe(struct trace_kprobe *tk, struct trace_event_file *file)
+ else
+ ret = enable_kprobe(&tk->rp.kp);
+ }
++
++ if (ret) {
++ if (file) {
++ /* Note: the branch is taken only when WARN_ON_ONCE() does not fire */
++ if (!WARN_ON_ONCE(!link))
++ list_del_rcu(&link->list);
++ kfree(link);
++ tk->tp.flags &= ~TP_FLAG_TRACE;
++ } else {
++ tk->tp.flags &= ~TP_FLAG_PROFILE;
++ }
++ }
+ out:
+ return ret;
+ }
+diff --git a/lib/dma-direct.c b/lib/dma-direct.c
+index bbfb229aa067..970d39155618 100644
+--- a/lib/dma-direct.c
++++ b/lib/dma-direct.c
+@@ -84,6 +84,13 @@ again:
+ __free_pages(page, page_order);
+ page = NULL;
+
++ if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
++ dev->coherent_dma_mask < DMA_BIT_MASK(64) &&
++ !(gfp & (GFP_DMA32 | GFP_DMA))) {
++ gfp |= GFP_DMA32;
++ goto again;
++ }
++
+ if (IS_ENABLED(CONFIG_ZONE_DMA) &&
+ dev->coherent_dma_mask < DMA_BIT_MASK(32) &&
+ !(gfp & GFP_DMA)) {
+diff --git a/mm/slub.c b/mm/slub.c
+index 613c8dc2f409..067db0ff7496 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -711,7 +711,7 @@ void object_err(struct kmem_cache *s, struct page *page,
+ print_trailer(s, page, object);
+ }
+
+-static void slab_err(struct kmem_cache *s, struct page *page,
++static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page,
+ const char *fmt, ...)
+ {
+ va_list args;
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index ebff729cc956..9ff21a12ea00 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -1519,7 +1519,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
+ addr))
+ return;
+
+- area = remove_vm_area(addr);
++ area = find_vmap_area((unsigned long)addr)->vm;
+ if (unlikely(!area)) {
+ WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
+ addr);
+@@ -1529,6 +1529,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
+ debug_check_no_locks_freed(addr, get_vm_area_size(area));
+ debug_check_no_obj_freed(addr, get_vm_area_size(area));
+
++ remove_vm_area(addr);
+ if (deallocate_pages) {
+ int i;
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2af787e8b130..1ccc2a2ac2e9 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -7113,16 +7113,19 @@ int dev_change_tx_queue_len(struct net_device *dev, unsigned long new_len)
+ dev->tx_queue_len = new_len;
+ res = call_netdevice_notifiers(NETDEV_CHANGE_TX_QUEUE_LEN, dev);
+ res = notifier_to_errno(res);
+- if (res) {
+- netdev_err(dev,
+- "refused to change device tx_queue_len\n");
+- dev->tx_queue_len = orig_len;
+- return res;
+- }
+- return dev_qdisc_change_tx_queue_len(dev);
++ if (res)
++ goto err_rollback;
++ res = dev_qdisc_change_tx_queue_len(dev);
++ if (res)
++ goto err_rollback;
+ }
+
+ return 0;
++
++err_rollback:
++ netdev_err(dev, "refused to change device tx_queue_len\n");
++ dev->tx_queue_len = orig_len;
++ return res;
+ }
+
+ /**
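The dev_change_tx_queue_len() rework funnels both failure points (the notifier call and the qdisc update) through one rollback label that restores the original length. A compact userspace sketch of the same goto-based unwind; notify() and qdisc_change() are stand-ins, not real APIs:

#include <stdio.h>

static long tx_queue_len = 1000;

static int notify(long v)       { return v > 5000 ? -1 : 0; } /* stand-in */
static int qdisc_change(long v) { (void)v; return 0; }        /* stand-in */

static int change_tx_queue_len(long new_len)
{
	long orig_len = tx_queue_len;
	int res;

	if (new_len == orig_len)
		return 0;

	tx_queue_len = new_len;
	res = notify(new_len);
	if (res)
		goto err_rollback;
	res = qdisc_change(new_len);
	if (res)
		goto err_rollback;
	return 0;

err_rollback:
	fprintf(stderr, "refused to change device tx_queue_len\n");
	tx_queue_len = orig_len; /* undo the speculative update */
	return res;
}

int main(void)
{
	printf("ok: %d (len=%ld)\n", change_tx_queue_len(2000), tx_queue_len);
	printf("fail: %d (len=%ld)\n", change_tx_queue_len(9000), tx_queue_len);
	return 0;
}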
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 511d6748ea5f..6901349f07d7 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -292,19 +292,19 @@ __be32 fib_compute_spec_dst(struct sk_buff *skb)
+ return ip_hdr(skb)->daddr;
+
+ in_dev = __in_dev_get_rcu(dev);
+- BUG_ON(!in_dev);
+
+ net = dev_net(dev);
+
+ scope = RT_SCOPE_UNIVERSE;
+ if (!ipv4_is_zeronet(ip_hdr(skb)->saddr)) {
++ bool vmark = in_dev && IN_DEV_SRC_VMARK(in_dev);
+ struct flowi4 fl4 = {
+ .flowi4_iif = LOOPBACK_IFINDEX,
+ .flowi4_oif = l3mdev_master_ifindex_rcu(dev),
+ .daddr = ip_hdr(skb)->saddr,
+ .flowi4_tos = RT_TOS(ip_hdr(skb)->tos),
+ .flowi4_scope = scope,
+- .flowi4_mark = IN_DEV_SRC_VMARK(in_dev) ? skb->mark : 0,
++ .flowi4_mark = vmark ? skb->mark : 0,
+ };
+ if (!fib_lookup(net, &fl4, &res, 0))
+ return FIB_RES_PREFSRC(net, res);
+diff --git a/net/ipv4/ipconfig.c b/net/ipv4/ipconfig.c
+index 43f620feb1c4..13722462d99b 100644
+--- a/net/ipv4/ipconfig.c
++++ b/net/ipv4/ipconfig.c
+@@ -748,6 +748,11 @@ static void __init ic_bootp_init_ext(u8 *e)
+ */
+ static inline void __init ic_bootp_init(void)
+ {
++ /* Re-initialise all name servers to NONE, in case any were set via the
++ * "ip=" or "nfsaddrs=" kernel command line parameters: any IP addresses
++ * specified there will already have been decoded but are no longer
++ * needed
++ */
+ ic_nameservers_predef();
+
+ dev_add_pack(&bootp_packet_type);
+@@ -1368,6 +1373,13 @@ static int __init ip_auto_config(void)
+ int err;
+ unsigned int i;
+
++ /* Initialise all name servers to NONE (but only if the "ip=" or
++ * "nfsaddrs=" kernel command line parameters weren't decoded, otherwise
++ * we'll overwrite the IP addresses specified there)
++ */
++ if (ic_set_manually == 0)
++ ic_nameservers_predef();
++
+ #ifdef CONFIG_PROC_FS
+ proc_create("pnp", 0444, init_net.proc_net, &pnp_seq_fops);
+ #endif /* CONFIG_PROC_FS */
+@@ -1588,6 +1600,7 @@ static int __init ip_auto_config_setup(char *addrs)
+ return 1;
+ }
+
++ /* Initialise all name servers to NONE */
+ ic_nameservers_predef();
+
+ /* Parse string for static IP assignment. */
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index 58e2f479ffb4..4bfff3c87e8e 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -354,6 +354,10 @@ static u32 bbr_target_cwnd(struct sock *sk, u32 bw, int gain)
+ /* Reduce delayed ACKs by rounding up cwnd to the next even number. */
+ cwnd = (cwnd + 1) & ~1U;
+
++ /* Ensure gain cycling gets inflight above BDP even for small BDPs. */
++ if (bbr->mode == BBR_PROBE_BW && gain > BBR_UNIT)
++ cwnd += 2;
++
+ return cwnd;
+ }
+
+diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
+index c78fb53988a1..1a9b88c8cf72 100644
+--- a/net/ipv4/tcp_dctcp.c
++++ b/net/ipv4/tcp_dctcp.c
+@@ -138,7 +138,7 @@ static void dctcp_ce_state_0_to_1(struct sock *sk)
+ */
+ if (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER)
+ __tcp_send_ack(sk, ca->prior_rcv_nxt);
+- tcp_enter_quickack_mode(sk);
++ tcp_enter_quickack_mode(sk, 1);
+ }
+
+ ca->prior_rcv_nxt = tp->rcv_nxt;
+@@ -159,7 +159,7 @@ static void dctcp_ce_state_1_to_0(struct sock *sk)
+ */
+ if (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER)
+ __tcp_send_ack(sk, ca->prior_rcv_nxt);
+- tcp_enter_quickack_mode(sk);
++ tcp_enter_quickack_mode(sk, 1);
+ }
+
+ ca->prior_rcv_nxt = tp->rcv_nxt;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 0f5e9510c3fa..4f115830f6a8 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -184,21 +184,23 @@ static void tcp_measure_rcv_mss(struct sock *sk, const struct sk_buff *skb)
+ }
+ }
+
+-static void tcp_incr_quickack(struct sock *sk)
++static void tcp_incr_quickack(struct sock *sk, unsigned int max_quickacks)
+ {
+ struct inet_connection_sock *icsk = inet_csk(sk);
+ unsigned int quickacks = tcp_sk(sk)->rcv_wnd / (2 * icsk->icsk_ack.rcv_mss);
+
+ if (quickacks == 0)
+ quickacks = 2;
++ quickacks = min(quickacks, max_quickacks);
+ if (quickacks > icsk->icsk_ack.quick)
+- icsk->icsk_ack.quick = min(quickacks, TCP_MAX_QUICKACKS);
++ icsk->icsk_ack.quick = quickacks;
+ }
+
+-void tcp_enter_quickack_mode(struct sock *sk)
++void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks)
+ {
+ struct inet_connection_sock *icsk = inet_csk(sk);
+- tcp_incr_quickack(sk);
++
++ tcp_incr_quickack(sk, max_quickacks);
+ icsk->icsk_ack.pingpong = 0;
+ icsk->icsk_ack.ato = TCP_ATO_MIN;
+ }
+@@ -225,8 +227,15 @@ static void tcp_ecn_queue_cwr(struct tcp_sock *tp)
+
+ static void tcp_ecn_accept_cwr(struct tcp_sock *tp, const struct sk_buff *skb)
+ {
+- if (tcp_hdr(skb)->cwr)
++ if (tcp_hdr(skb)->cwr) {
+ tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
++
++ /* If the sender is telling us it has entered CWR, then its
++ * cwnd may be very low (even just 1 packet), so we should ACK
++ * immediately.
++ */
++ tcp_enter_quickack_mode((struct sock *)tp, 2);
++ }
+ }
+
+ static void tcp_ecn_withdraw_cwr(struct tcp_sock *tp)
+@@ -234,8 +243,10 @@ static void tcp_ecn_withdraw_cwr(struct tcp_sock *tp)
+ tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
+ }
+
+-static void __tcp_ecn_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
++static void __tcp_ecn_check_ce(struct sock *sk, const struct sk_buff *skb)
+ {
++ struct tcp_sock *tp = tcp_sk(sk);
++
+ switch (TCP_SKB_CB(skb)->ip_dsfield & INET_ECN_MASK) {
+ case INET_ECN_NOT_ECT:
+ /* Funny extension: if ECT is not set on a segment,
+@@ -243,31 +254,31 @@ static void __tcp_ecn_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
+ * it is probably a retransmit.
+ */
+ if (tp->ecn_flags & TCP_ECN_SEEN)
+- tcp_enter_quickack_mode((struct sock *)tp);
++ tcp_enter_quickack_mode(sk, 2);
+ break;
+ case INET_ECN_CE:
+- if (tcp_ca_needs_ecn((struct sock *)tp))
+- tcp_ca_event((struct sock *)tp, CA_EVENT_ECN_IS_CE);
++ if (tcp_ca_needs_ecn(sk))
++ tcp_ca_event(sk, CA_EVENT_ECN_IS_CE);
+
+ if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
+ /* Better not delay acks, sender can have a very low cwnd */
+- tcp_enter_quickack_mode((struct sock *)tp);
++ tcp_enter_quickack_mode(sk, 2);
+ tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
+ }
+ tp->ecn_flags |= TCP_ECN_SEEN;
+ break;
+ default:
+- if (tcp_ca_needs_ecn((struct sock *)tp))
+- tcp_ca_event((struct sock *)tp, CA_EVENT_ECN_NO_CE);
++ if (tcp_ca_needs_ecn(sk))
++ tcp_ca_event(sk, CA_EVENT_ECN_NO_CE);
+ tp->ecn_flags |= TCP_ECN_SEEN;
+ break;
+ }
+ }
+
+-static void tcp_ecn_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
++static void tcp_ecn_check_ce(struct sock *sk, const struct sk_buff *skb)
+ {
+- if (tp->ecn_flags & TCP_ECN_OK)
+- __tcp_ecn_check_ce(tp, skb);
++ if (tcp_sk(sk)->ecn_flags & TCP_ECN_OK)
++ __tcp_ecn_check_ce(sk, skb);
+ }
+
+ static void tcp_ecn_rcv_synack(struct tcp_sock *tp, const struct tcphdr *th)
+@@ -666,7 +677,7 @@ static void tcp_event_data_recv(struct sock *sk, struct sk_buff *skb)
+ /* The _first_ data packet received, initialize
+ * delayed ACK engine.
+ */
+- tcp_incr_quickack(sk);
++ tcp_incr_quickack(sk, TCP_MAX_QUICKACKS);
+ icsk->icsk_ack.ato = TCP_ATO_MIN;
+ } else {
+ int m = now - icsk->icsk_ack.lrcvtime;
+@@ -682,13 +693,13 @@ static void tcp_event_data_recv(struct sock *sk, struct sk_buff *skb)
+ /* Too long gap. Apparently sender failed to
+ * restart window, so that we send ACKs quickly.
+ */
+- tcp_incr_quickack(sk);
++ tcp_incr_quickack(sk, TCP_MAX_QUICKACKS);
+ sk_mem_reclaim(sk);
+ }
+ }
+ icsk->icsk_ack.lrcvtime = now;
+
+- tcp_ecn_check_ce(tp, skb);
++ tcp_ecn_check_ce(sk, skb);
+
+ if (skb->len >= 128)
+ tcp_grow_window(sk, skb);
+@@ -4136,7 +4147,7 @@ static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb)
+ if (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
+ before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
+- tcp_enter_quickack_mode(sk);
++ tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
+
+ if (tcp_is_sack(tp) && sock_net(sk)->ipv4.sysctl_tcp_dsack) {
+ u32 end_seq = TCP_SKB_CB(skb)->end_seq;
+@@ -4404,7 +4415,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
+ u32 seq, end_seq;
+ bool fragstolen;
+
+- tcp_ecn_check_ce(tp, skb);
++ tcp_ecn_check_ce(sk, skb);
+
+ if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);
+@@ -4667,7 +4678,7 @@ queue_and_out:
+ tcp_dsack_set(sk, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq);
+
+ out_of_window:
+- tcp_enter_quickack_mode(sk);
++ tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
+ inet_csk_schedule_ack(sk);
+ drop:
+ tcp_drop(sk, skb);
+@@ -4678,8 +4689,6 @@ drop:
+ if (!before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt + tcp_receive_window(tp)))
+ goto out_of_window;
+
+- tcp_enter_quickack_mode(sk);
+-
+ if (before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
+ /* Partial packet, seq < rcv_next < end_seq */
+ SOCK_DEBUG(sk, "partial packet: rcv_next %X seq %X - %X\n",
+@@ -5746,7 +5755,7 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,
+ * to stand against the temptation 8) --ANK
+ */
+ inet_csk_schedule_ack(sk);
+- tcp_enter_quickack_mode(sk);
++ tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
+ inet_csk_reset_xmit_timer(sk, ICSK_TIME_DACK,
+ TCP_DELACK_MAX, TCP_RTO_MAX);
+
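The tcp_input.c changes thread a max_quickacks cap through tcp_incr_quickack()/tcp_enter_quickack_mode(), so the ECN paths can ask for just one or two immediate ACKs while loss and out-of-window paths still allow the full TCP_MAX_QUICKACKS burst. A minimal model of the capped computation; the window and MSS values are illustrative:

#include <stdio.h>

static unsigned int incr_quickack(unsigned int rcv_wnd, unsigned int rcv_mss,
				  unsigned int max_quickacks)
{
	unsigned int quickacks = rcv_wnd / (2 * rcv_mss);

	if (quickacks == 0)
		quickacks = 2;
	/* new behaviour: clamp to the caller-supplied cap */
	return quickacks < max_quickacks ? quickacks : max_quickacks;
}

int main(void)
{
	/* DCTCP-style caller asks for at most 1 quick ACK ... */
	printf("%u\n", incr_quickack(65535, 1460, 1));
	/* ... while loss paths allow the full burst (TCP_MAX_QUICKACKS). */
	printf("%u\n", incr_quickack(65535, 1460, 16));
	return 0;
}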
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index bbad940c0137..8a33dac4e805 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -1234,7 +1234,10 @@ IPSET_TOKEN(HTYPE, _create)(struct net *net, struct ip_set *set,
+ pr_debug("Create set %s with family %s\n",
+ set->name, set->family == NFPROTO_IPV4 ? "inet" : "inet6");
+
+-#ifndef IP_SET_PROTO_UNDEF
++#ifdef IP_SET_PROTO_UNDEF
++ if (set->family != NFPROTO_UNSPEC)
++ return -IPSET_ERR_INVALID_FAMILY;
++#else
+ if (!(set->family == NFPROTO_IPV4 || set->family == NFPROTO_IPV6))
+ return -IPSET_ERR_INVALID_FAMILY;
+ #endif
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 501e48a7965b..8d8dfe417014 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2728,12 +2728,13 @@ static struct nft_set *nf_tables_set_lookup_byid(const struct net *net,
+ u32 id = ntohl(nla_get_be32(nla));
+
+ list_for_each_entry(trans, &net->nft.commit_list, list) {
+- struct nft_set *set = nft_trans_set(trans);
++ if (trans->msg_type == NFT_MSG_NEWSET) {
++ struct nft_set *set = nft_trans_set(trans);
+
+- if (trans->msg_type == NFT_MSG_NEWSET &&
+- id == nft_trans_set_id(trans) &&
+- nft_active_genmask(set, genmask))
+- return set;
++ if (id == nft_trans_set_id(trans) &&
++ nft_active_genmask(set, genmask))
++ return set;
++ }
+ }
+ return ERR_PTR(-ENOENT);
+ }
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 2e2dd88fc79f..890f22f90344 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1009,6 +1009,11 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
+ return err;
+ }
+
++ if (nlk->ngroups == 0)
++ groups = 0;
++ else
++ groups &= (1ULL << nlk->ngroups) - 1;
++
+ bound = nlk->bound;
+ if (bound) {
+ /* Ensure nlk->portid is up-to-date. */
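The netlink_bind() hunk masks the requested multicast groups down to the ngroups the protocol actually registered, so stale high bits can no longer address nonexistent groups. A small sketch of that clamp; the 64-group guard is my addition to keep the shift well-defined in standalone C:

#include <stdio.h>
#include <stdint.h>

static uint64_t clamp_groups(uint64_t groups, unsigned int ngroups)
{
	if (ngroups == 0)
		return 0;
	if (ngroups >= 64)
		return groups;           /* guard: avoid UB in the shift */
	return groups & ((UINT64_C(1) << ngroups) - 1);
}

int main(void)
{
	printf("%#llx\n", (unsigned long long)clamp_groups(0xffff, 4)); /* 0xf */
	printf("%#llx\n", (unsigned long long)clamp_groups(0xffff, 0)); /* 0 */
	return 0;
}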
+diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
+index 48332a6ed738..d152e48ea371 100644
+--- a/net/rds/ib_frmr.c
++++ b/net/rds/ib_frmr.c
+@@ -344,6 +344,11 @@ struct rds_ib_mr *rds_ib_reg_frmr(struct rds_ib_device *rds_ibdev,
+ struct rds_ib_frmr *frmr;
+ int ret;
+
++ if (!ic) {
++ /* TODO: Add FRWR support for RDS_GET_MR using proxy qp */
++ return ERR_PTR(-EOPNOTSUPP);
++ }
++
+ do {
+ if (ibmr)
+ rds_ib_free_frmr(ibmr, true);
+diff --git a/net/rds/ib_mr.h b/net/rds/ib_mr.h
+index 0ea4ab017a8c..655f01d427fe 100644
+--- a/net/rds/ib_mr.h
++++ b/net/rds/ib_mr.h
+@@ -115,7 +115,8 @@ void rds_ib_get_mr_info(struct rds_ib_device *rds_ibdev,
+ struct rds_info_rdma_connection *iinfo);
+ void rds_ib_destroy_mr_pool(struct rds_ib_mr_pool *);
+ void *rds_ib_get_mr(struct scatterlist *sg, unsigned long nents,
+- struct rds_sock *rs, u32 *key_ret);
++ struct rds_sock *rs, u32 *key_ret,
++ struct rds_connection *conn);
+ void rds_ib_sync_mr(void *trans_private, int dir);
+ void rds_ib_free_mr(void *trans_private, int invalidate);
+ void rds_ib_flush_mrs(void);
+diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
+index e678699268a2..2e49a40a5e11 100644
+--- a/net/rds/ib_rdma.c
++++ b/net/rds/ib_rdma.c
+@@ -537,11 +537,12 @@ void rds_ib_flush_mrs(void)
+ }
+
+ void *rds_ib_get_mr(struct scatterlist *sg, unsigned long nents,
+- struct rds_sock *rs, u32 *key_ret)
++ struct rds_sock *rs, u32 *key_ret,
++ struct rds_connection *conn)
+ {
+ struct rds_ib_device *rds_ibdev;
+ struct rds_ib_mr *ibmr = NULL;
+- struct rds_ib_connection *ic = rs->rs_conn->c_transport_data;
++ struct rds_ib_connection *ic = NULL;
+ int ret;
+
+ rds_ibdev = rds_ib_get_device(rs->rs_bound_addr);
+@@ -550,6 +551,9 @@ void *rds_ib_get_mr(struct scatterlist *sg, unsigned long nents,
+ goto out;
+ }
+
++ if (conn)
++ ic = conn->c_transport_data;
++
+ if (!rds_ibdev->mr_8k_pool || !rds_ibdev->mr_1m_pool) {
+ ret = -ENODEV;
+ goto out;
+@@ -559,17 +563,18 @@ void *rds_ib_get_mr(struct scatterlist *sg, unsigned long nents,
+ ibmr = rds_ib_reg_frmr(rds_ibdev, ic, sg, nents, key_ret);
+ else
+ ibmr = rds_ib_reg_fmr(rds_ibdev, sg, nents, key_ret);
+- if (ibmr)
+- rds_ibdev = NULL;
+-
+- out:
+- if (!ibmr)
++ if (IS_ERR(ibmr)) {
++ ret = PTR_ERR(ibmr);
+ pr_warn("RDS/IB: rds_ib_get_mr failed (errno=%d)\n", ret);
++ } else {
++ return ibmr;
++ }
+
++ out:
+ if (rds_ibdev)
+ rds_ib_dev_put(rds_ibdev);
+
+- return ibmr;
++ return ERR_PTR(ret);
+ }
+
+ void rds_ib_destroy_mr_pool(struct rds_ib_mr_pool *pool)
+diff --git a/net/rds/rdma.c b/net/rds/rdma.c
+index 634cfcb7bba6..80920e47f2c7 100644
+--- a/net/rds/rdma.c
++++ b/net/rds/rdma.c
+@@ -170,7 +170,8 @@ static int rds_pin_pages(unsigned long user_addr, unsigned int nr_pages,
+ }
+
+ static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
+- u64 *cookie_ret, struct rds_mr **mr_ret)
++ u64 *cookie_ret, struct rds_mr **mr_ret,
++ struct rds_conn_path *cp)
+ {
+ struct rds_mr *mr = NULL, *found;
+ unsigned int nr_pages;
+@@ -269,7 +270,8 @@ static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
+ * Note that dma_map() implies that pending writes are
+ * flushed to RAM, so no dma_sync is needed here. */
+ trans_private = rs->rs_transport->get_mr(sg, nents, rs,
+- &mr->r_key);
++ &mr->r_key,
++ cp ? cp->cp_conn : NULL);
+
+ if (IS_ERR(trans_private)) {
+ for (i = 0 ; i < nents; i++)
+@@ -330,7 +332,7 @@ int rds_get_mr(struct rds_sock *rs, char __user *optval, int optlen)
+ sizeof(struct rds_get_mr_args)))
+ return -EFAULT;
+
+- return __rds_rdma_map(rs, &args, NULL, NULL);
++ return __rds_rdma_map(rs, &args, NULL, NULL, NULL);
+ }
+
+ int rds_get_mr_for_dest(struct rds_sock *rs, char __user *optval, int optlen)
+@@ -354,7 +356,7 @@ int rds_get_mr_for_dest(struct rds_sock *rs, char __user *optval, int optlen)
+ new_args.cookie_addr = args.cookie_addr;
+ new_args.flags = args.flags;
+
+- return __rds_rdma_map(rs, &new_args, NULL, NULL);
++ return __rds_rdma_map(rs, &new_args, NULL, NULL, NULL);
+ }
+
+ /*
+@@ -782,7 +784,8 @@ int rds_cmsg_rdma_map(struct rds_sock *rs, struct rds_message *rm,
+ rm->m_rdma_cookie != 0)
+ return -EINVAL;
+
+- return __rds_rdma_map(rs, CMSG_DATA(cmsg), &rm->m_rdma_cookie, &rm->rdma.op_rdma_mr);
++ return __rds_rdma_map(rs, CMSG_DATA(cmsg), &rm->m_rdma_cookie,
++ &rm->rdma.op_rdma_mr, rm->m_conn_path);
+ }
+
+ /*
+diff --git a/net/rds/rds.h b/net/rds/rds.h
+index f2272fb8cd45..60b3b787fbdb 100644
+--- a/net/rds/rds.h
++++ b/net/rds/rds.h
+@@ -464,6 +464,8 @@ struct rds_message {
+ struct scatterlist *op_sg;
+ } data;
+ };
++
++ struct rds_conn_path *m_conn_path;
+ };
+
+ /*
+@@ -544,7 +546,8 @@ struct rds_transport {
+ unsigned int avail);
+ void (*exit)(void);
+ void *(*get_mr)(struct scatterlist *sg, unsigned long nr_sg,
+- struct rds_sock *rs, u32 *key_ret);
++ struct rds_sock *rs, u32 *key_ret,
++ struct rds_connection *conn);
+ void (*sync_mr)(void *trans_private, int direction);
+ void (*free_mr)(void *trans_private, int invalidate);
+ void (*flush_mrs)(void);
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 94c7f74909be..59f17a2335f4 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -1169,6 +1169,13 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ rs->rs_conn = conn;
+ }
+
++ if (conn->c_trans->t_mp_capable)
++ cpath = &conn->c_path[rds_send_mprds_hash(rs, conn)];
++ else
++ cpath = &conn->c_path[0];
++
++ rm->m_conn_path = cpath;
++
+ /* Parse any control messages the user may have included. */
+ ret = rds_cmsg_send(rs, rm, msg, &allocated_mr);
+ if (ret) {
+@@ -1192,11 +1199,6 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ goto out;
+ }
+
+- if (conn->c_trans->t_mp_capable)
+- cpath = &conn->c_path[rds_send_mprds_hash(rs, conn)];
+- else
+- cpath = &conn->c_path[0];
+-
+ if (rds_destroy_pending(conn)) {
+ ret = -EAGAIN;
+ goto out;
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 1350f1be8037..8229a52c2acd 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -70,7 +70,7 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+ iov[2].iov_len = sizeof(ack_info);
+
+ pkt.whdr.epoch = htonl(conn->proto.epoch);
+- pkt.whdr.cid = htonl(conn->proto.cid);
++ pkt.whdr.cid = htonl(conn->proto.cid | channel);
+ pkt.whdr.callNumber = htonl(call_id);
+ pkt.whdr.seq = 0;
+ pkt.whdr.type = chan->last_type;
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 74d0bd7e76d7..1309b5509ef2 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -449,6 +449,7 @@ int ima_read_file(struct file *file, enum kernel_read_file_id read_id)
+
+ static int read_idmap[READING_MAX_ID] = {
+ [READING_FIRMWARE] = FIRMWARE_CHECK,
++ [READING_FIRMWARE_PREALLOC_BUFFER] = FIRMWARE_CHECK,
+ [READING_MODULE] = MODULE_CHECK,
+ [READING_KEXEC_IMAGE] = KEXEC_KERNEL_CHECK,
+ [READING_KEXEC_INITRAMFS] = KEXEC_INITRAMFS_CHECK,
+diff --git a/sound/pci/emu10k1/emupcm.c b/sound/pci/emu10k1/emupcm.c
+index cefe613ef7b7..a68c7554f30f 100644
+--- a/sound/pci/emu10k1/emupcm.c
++++ b/sound/pci/emu10k1/emupcm.c
+@@ -1858,7 +1858,9 @@ int snd_emu10k1_pcm_efx(struct snd_emu10k1 *emu, int device)
+ if (!kctl)
+ return -ENOMEM;
+ kctl->id.device = device;
+- snd_ctl_add(emu->card, kctl);
++ err = snd_ctl_add(emu->card, kctl);
++ if (err < 0)
++ return err;
+
+ snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV, snd_dma_pci_data(emu->pci), 64*1024, 64*1024);
+
+diff --git a/sound/pci/emu10k1/memory.c b/sound/pci/emu10k1/memory.c
+index 5865f3b90b34..dbc7d8d0e1c4 100644
+--- a/sound/pci/emu10k1/memory.c
++++ b/sound/pci/emu10k1/memory.c
+@@ -248,13 +248,13 @@ __found_pages:
+ static int is_valid_page(struct snd_emu10k1 *emu, dma_addr_t addr)
+ {
+ if (addr & ~emu->dma_mask) {
+- dev_err(emu->card->dev,
++ dev_err_ratelimited(emu->card->dev,
+ "max memory size is 0x%lx (addr = 0x%lx)!!\n",
+ emu->dma_mask, (unsigned long)addr);
+ return 0;
+ }
+ if (addr & (EMUPAGESIZE-1)) {
+- dev_err(emu->card->dev, "page is not aligned\n");
++ dev_err_ratelimited(emu->card->dev, "page is not aligned\n");
+ return 0;
+ }
+ return 1;
+@@ -345,7 +345,7 @@ snd_emu10k1_alloc_pages(struct snd_emu10k1 *emu, struct snd_pcm_substream *subst
+ else
+ addr = snd_pcm_sgbuf_get_addr(substream, ofs);
+ if (! is_valid_page(emu, addr)) {
+- dev_err(emu->card->dev,
++ dev_err_ratelimited(emu->card->dev,
+ "emu: failure page = %d\n", idx);
+ mutex_unlock(&hdr->block_mutex);
+ return NULL;
+diff --git a/sound/pci/fm801.c b/sound/pci/fm801.c
+index 73a67bc3586b..e3fb9c61017c 100644
+--- a/sound/pci/fm801.c
++++ b/sound/pci/fm801.c
+@@ -1068,11 +1068,19 @@ static int snd_fm801_mixer(struct fm801 *chip)
+ if ((err = snd_ac97_mixer(chip->ac97_bus, &ac97, &chip->ac97_sec)) < 0)
+ return err;
+ }
+- for (i = 0; i < FM801_CONTROLS; i++)
+- snd_ctl_add(chip->card, snd_ctl_new1(&snd_fm801_controls[i], chip));
++ for (i = 0; i < FM801_CONTROLS; i++) {
++ err = snd_ctl_add(chip->card,
++ snd_ctl_new1(&snd_fm801_controls[i], chip));
++ if (err < 0)
++ return err;
++ }
+ if (chip->multichannel) {
+- for (i = 0; i < FM801_CONTROLS_MULTI; i++)
+- snd_ctl_add(chip->card, snd_ctl_new1(&snd_fm801_controls_multi[i], chip));
++ for (i = 0; i < FM801_CONTROLS_MULTI; i++) {
++ err = snd_ctl_add(chip->card,
++ snd_ctl_new1(&snd_fm801_controls_multi[i], chip));
++ if (err < 0)
++ return err;
++ }
+ }
+ return 0;
+ }
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 768ea8651993..84261ef02c93 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -39,6 +39,10 @@
+ /* Enable this to see controls for tuning purpose. */
+ /*#define ENABLE_TUNING_CONTROLS*/
+
++#ifdef ENABLE_TUNING_CONTROLS
++#include <sound/tlv.h>
++#endif
++
+ #define FLOAT_ZERO 0x00000000
+ #define FLOAT_ONE 0x3f800000
+ #define FLOAT_TWO 0x40000000
+@@ -3068,8 +3072,8 @@ static int equalizer_ctl_put(struct snd_kcontrol *kcontrol,
+ return 1;
+ }
+
+-static const DECLARE_TLV_DB_SCALE(voice_focus_db_scale, 2000, 100, 0);
+-static const DECLARE_TLV_DB_SCALE(eq_db_scale, -2400, 100, 0);
++static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(voice_focus_db_scale, 2000, 100, 0);
++static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(eq_db_scale, -2400, 100, 0);
+
+ static int add_tuning_control(struct hda_codec *codec,
+ hda_nid_t pnid, hda_nid_t nid,
+diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
+index 89df2d9f63d7..1544166631e3 100644
+--- a/sound/soc/fsl/fsl_ssi.c
++++ b/sound/soc/fsl/fsl_ssi.c
+@@ -385,8 +385,7 @@ static irqreturn_t fsl_ssi_isr(int irq, void *dev_id)
+ {
+ struct fsl_ssi *ssi = dev_id;
+ struct regmap *regs = ssi->regs;
+- __be32 sisr;
+- __be32 sisr2;
++ u32 sisr, sisr2;
+
+ regmap_read(regs, REG_SSI_SISR, &sisr);
+
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index 82402688bd8e..948505f74229 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -33,7 +33,7 @@ static int soc_compr_open(struct snd_compr_stream *cstream)
+ struct snd_soc_component *component;
+ struct snd_soc_rtdcom_list *rtdcom;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+- int ret = 0, __ret;
++ int ret;
+
+ mutex_lock_nested(&rtd->pcm_mutex, rtd->pcm_subclass);
+
+@@ -68,16 +68,15 @@ static int soc_compr_open(struct snd_compr_stream *cstream)
+ !component->driver->compr_ops->open)
+ continue;
+
+- __ret = component->driver->compr_ops->open(cstream);
+- if (__ret < 0) {
++ ret = component->driver->compr_ops->open(cstream);
++ if (ret < 0) {
+ dev_err(component->dev,
+ "Compress ASoC: can't open platform %s: %d\n",
+- component->name, __ret);
+- ret = __ret;
++ component->name, ret);
++ goto machine_err;
+ }
+ }
+- if (ret < 0)
+- goto machine_err;
++ component = NULL;
+
+ if (rtd->dai_link->compr_ops && rtd->dai_link->compr_ops->startup) {
+ ret = rtd->dai_link->compr_ops->startup(cstream);
+@@ -97,17 +96,20 @@ static int soc_compr_open(struct snd_compr_stream *cstream)
+
+ machine_err:
+ for_each_rtdcom(rtd, rtdcom) {
+- component = rtdcom->component;
++ struct snd_soc_component *err_comp = rtdcom->component;
++
++ if (err_comp == component)
++ break;
+
+ /* ignore duplication for now */
+- if (platform && (component == &platform->component))
++ if (platform && (err_comp == &platform->component))
+ continue;
+
+- if (!component->driver->compr_ops ||
+- !component->driver->compr_ops->free)
++ if (!err_comp->driver->compr_ops ||
++ !err_comp->driver->compr_ops->free)
+ continue;
+
+- component->driver->compr_ops->free(cstream);
++ err_comp->driver->compr_ops->free(cstream);
+ }
+
+ if (platform && platform->driver->compr_ops && platform->driver->compr_ops->free)
+@@ -132,7 +134,7 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
+ struct snd_soc_dpcm *dpcm;
+ struct snd_soc_dapm_widget_list *list;
+ int stream;
+- int ret = 0, __ret;
++ int ret;
+
+ if (cstream->direction == SND_COMPRESS_PLAYBACK)
+ stream = SNDRV_PCM_STREAM_PLAYBACK;
+@@ -172,16 +174,15 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
+ !component->driver->compr_ops->open)
+ continue;
+
+- __ret = component->driver->compr_ops->open(cstream);
+- if (__ret < 0) {
++ ret = component->driver->compr_ops->open(cstream);
++ if (ret < 0) {
+ dev_err(component->dev,
+ "Compress ASoC: can't open platform %s: %d\n",
+- component->name, __ret);
+- ret = __ret;
++ component->name, ret);
++ goto machine_err;
+ }
+ }
+- if (ret < 0)
+- goto machine_err;
++ component = NULL;
+
+ if (fe->dai_link->compr_ops && fe->dai_link->compr_ops->startup) {
+ ret = fe->dai_link->compr_ops->startup(cstream);
+@@ -236,17 +237,20 @@ fe_err:
+ fe->dai_link->compr_ops->shutdown(cstream);
+ machine_err:
+ for_each_rtdcom(fe, rtdcom) {
+- component = rtdcom->component;
++ struct snd_soc_component *err_comp = rtdcom->component;
++
++ if (err_comp == component)
++ break;
+
+ /* ignore duplication for now */
+- if (platform && (component == &platform->component))
++ if (platform && (err_comp == &platform->component))
+ continue;
+
+- if (!component->driver->compr_ops ||
+- !component->driver->compr_ops->free)
++ if (!err_comp->driver->compr_ops ||
++ !err_comp->driver->compr_ops->free)
+ continue;
+
+- component->driver->compr_ops->free(cstream);
++ err_comp->driver->compr_ops->free(cstream);
+ }
+
+ if (platform && platform->driver->compr_ops && platform->driver->compr_ops->free)
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 68d9dc930096..d800b99ba5cc 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1965,8 +1965,10 @@ int dpcm_be_dai_shutdown(struct snd_soc_pcm_runtime *fe, int stream)
+ continue;
+
+ if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_FREE) &&
+- (be->dpcm[stream].state != SND_SOC_DPCM_STATE_OPEN))
+- continue;
++ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_OPEN)) {
++ soc_pcm_hw_free(be_substream);
++ be->dpcm[stream].state = SND_SOC_DPCM_STATE_HW_FREE;
++ }
+
+ dev_dbg(be->dev, "ASoC: close BE %s\n",
+ be->dai_link->name);
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 986b8b2f90fb..f1b4e3099513 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -2006,6 +2006,13 @@ static void set_link_hw_format(struct snd_soc_dai_link *link,
+
+ link->dai_fmt = hw_config->fmt & SND_SOC_DAIFMT_FORMAT_MASK;
+
++ /* clock gating */
++ if (hw_config->clock_gated == SND_SOC_TPLG_DAI_CLK_GATE_GATED)
++ link->dai_fmt |= SND_SOC_DAIFMT_GATED;
++ else if (hw_config->clock_gated ==
++ SND_SOC_TPLG_DAI_CLK_GATE_CONT)
++ link->dai_fmt |= SND_SOC_DAIFMT_CONT;
++
+ /* clock signal polarity */
+ invert_bclk = hw_config->invert_bclk;
+ invert_fsync = hw_config->invert_fsync;
+@@ -2019,13 +2026,15 @@ static void set_link_hw_format(struct snd_soc_dai_link *link,
+ link->dai_fmt |= SND_SOC_DAIFMT_IB_IF;
+
+ /* clock masters */
+- bclk_master = hw_config->bclk_master;
+- fsync_master = hw_config->fsync_master;
+- if (!bclk_master && !fsync_master)
++ bclk_master = (hw_config->bclk_master ==
++ SND_SOC_TPLG_BCLK_CM);
++ fsync_master = (hw_config->fsync_master ==
++ SND_SOC_TPLG_FSYNC_CM);
++ if (bclk_master && fsync_master)
+ link->dai_fmt |= SND_SOC_DAIFMT_CBM_CFM;
+- else if (bclk_master && !fsync_master)
+- link->dai_fmt |= SND_SOC_DAIFMT_CBS_CFM;
+ else if (!bclk_master && fsync_master)
++ link->dai_fmt |= SND_SOC_DAIFMT_CBS_CFM;
++ else if (bclk_master && !fsync_master)
+ link->dai_fmt |= SND_SOC_DAIFMT_CBM_CFS;
+ else
+ link->dai_fmt |= SND_SOC_DAIFMT_CBS_CFS;
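The set_link_hw_format() fix reflects the new UAPI semantics defined earlier in this patch: bclk_master/fsync_master now carry SND_SOC_TPLG_*_CM/CS codes, where 0 means the codec is master, so the old truthiness tests selected the wrong DAI format. A truth-table model of the corrected mapping; a userspace sketch, not the ASoC API:

#include <stdio.h>

enum { TPLG_CM = 0, TPLG_CS = 1 }; /* 0 = codec is master */

static const char *dai_fmt(int bclk, int fsync)
{
	int bclk_master  = (bclk  == TPLG_CM);
	int fsync_master = (fsync == TPLG_CM);

	if (bclk_master && fsync_master)  return "CBM_CFM";
	if (!bclk_master && fsync_master) return "CBS_CFM";
	if (bclk_master && !fsync_master) return "CBM_CFS";
	return "CBS_CFS";
}

int main(void)
{
	printf("%s\n", dai_fmt(TPLG_CM, TPLG_CM)); /* codec masters both clocks */
	printf("%s\n", dai_fmt(TPLG_CS, TPLG_CS)); /* codec slave on both */
	return 0;
}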
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 3cbfae6604f9..d8a46d46bcd2 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -1311,7 +1311,7 @@ static void retire_capture_urb(struct snd_usb_substream *subs,
+ if (bytes % (runtime->sample_bits >> 3) != 0) {
+ int oldbytes = bytes;
+ bytes = frames * stride;
+- dev_warn(&subs->dev->dev,
++ dev_warn_ratelimited(&subs->dev->dev,
+ "Corrected urb data len. %d->%d\n",
+ oldbytes, bytes);
+ }
+diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
+index e37608a87dba..155d2570274f 100644
+--- a/tools/perf/util/parse-events.y
++++ b/tools/perf/util/parse-events.y
+@@ -73,6 +73,7 @@ static void inc_group_count(struct list_head *list,
+ %type <num> value_sym
+ %type <head> event_config
+ %type <head> opt_event_config
++%type <head> opt_pmu_config
+ %type <term> event_term
+ %type <head> event_pmu
+ %type <head> event_legacy_symbol
+@@ -224,7 +225,7 @@ event_def: event_pmu |
+ event_bpf_file
+
+ event_pmu:
+-PE_NAME opt_event_config
++PE_NAME opt_pmu_config
+ {
+ struct list_head *list, *orig_terms, *terms;
+
+@@ -496,6 +497,17 @@ opt_event_config:
+ $$ = NULL;
+ }
+
++opt_pmu_config:
++'/' event_config '/'
++{
++ $$ = $2;
++}
++|
++'/' '/'
++{
++ $$ = NULL;
++}
++
+ start_terms: event_config
+ {
+ struct parse_events_state *parse_state = _parse_state;
+diff --git a/tools/testing/selftests/filesystems/Makefile b/tools/testing/selftests/filesystems/Makefile
+index 5c7d7001ad37..129880fb42d3 100644
+--- a/tools/testing/selftests/filesystems/Makefile
++++ b/tools/testing/selftests/filesystems/Makefile
+@@ -1,5 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+
++CFLAGS += -I../../../../usr/include/
+ TEST_GEN_PROGS := devpts_pts
+ TEST_GEN_PROGS_EXTENDED := dnotify_test
+
+diff --git a/tools/testing/selftests/filesystems/devpts_pts.c b/tools/testing/selftests/filesystems/devpts_pts.c
+index b9055e974289..a425840dc30c 100644
+--- a/tools/testing/selftests/filesystems/devpts_pts.c
++++ b/tools/testing/selftests/filesystems/devpts_pts.c
+@@ -8,9 +8,10 @@
+ #include <stdlib.h>
+ #include <string.h>
+ #include <unistd.h>
+-#include <sys/ioctl.h>
++#include <asm/ioctls.h>
+ #include <sys/mount.h>
+ #include <sys/wait.h>
++#include "../kselftest.h"
+
+ static bool terminal_dup2(int duplicate, int original)
+ {
+@@ -125,10 +126,12 @@ static int do_tiocgptpeer(char *ptmx, char *expected_procfd_contents)
+ if (errno == EINVAL) {
+ fprintf(stderr, "TIOCGPTPEER is not supported. "
+ "Skipping test.\n");
+- fret = EXIT_SUCCESS;
++ fret = KSFT_SKIP;
++ } else {
++ fprintf(stderr,
++ "Failed to perform TIOCGPTPEER ioctl\n");
++ fret = EXIT_FAILURE;
+ }
+-
+- fprintf(stderr, "Failed to perform TIOCGPTPEER ioctl\n");
+ goto do_cleanup;
+ }
+
+@@ -281,7 +284,7 @@ int main(int argc, char *argv[])
+ if (!isatty(STDIN_FILENO)) {
+ fprintf(stderr, "Standard input file desciptor is not attached "
+ "to a terminal. Skipping test\n");
+- exit(EXIT_FAILURE);
++ exit(KSFT_SKIP);
+ }
+
+ ret = unshare(CLONE_NEWNS);
+diff --git a/tools/testing/selftests/intel_pstate/run.sh b/tools/testing/selftests/intel_pstate/run.sh
+index c670359becc6..928978804342 100755
+--- a/tools/testing/selftests/intel_pstate/run.sh
++++ b/tools/testing/selftests/intel_pstate/run.sh
+@@ -30,9 +30,12 @@
+
+ EVALUATE_ONLY=0
+
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
+ if ! uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ | grep -q x86; then
+ echo "$0 # Skipped: Test can only run on x86 architectures."
+- exit 0
++ exit $ksft_skip
+ fi
+
+ max_cpus=$(($(nproc)-1))
+@@ -48,11 +51,12 @@ function run_test () {
+
+ echo "sleeping for 5 seconds"
+ sleep 5
+- num_freqs=$(cat /proc/cpuinfo | grep MHz | sort -u | wc -l)
+- if [ $num_freqs -le 2 ]; then
+- cat /proc/cpuinfo | grep MHz | sort -u | tail -1 > /tmp/result.$1
++ grep MHz /proc/cpuinfo | sort -u > /tmp/result.freqs
++ num_freqs=$(wc -l /tmp/result.freqs | awk ' { print $1 } ')
++ if [ $num_freqs -ge 2 ]; then
++ tail -n 1 /tmp/result.freqs > /tmp/result.$1
+ else
+- cat /proc/cpuinfo | grep MHz | sort -u > /tmp/result.$1
++ cp /tmp/result.freqs /tmp/result.$1
+ fi
+ ./msr 0 >> /tmp/result.$1
+
+@@ -82,21 +86,20 @@ _max_freq=$(cpupower frequency-info -l | tail -1 | awk ' { print $2 } ')
+ max_freq=$(($_max_freq / 1000))
+
+
+-for freq in `seq $max_freq -100 $min_freq`
++[ $EVALUATE_ONLY -eq 0 ] && for freq in `seq $max_freq -100 $min_freq`
+ do
+ echo "Setting maximum frequency to $freq"
+ cpupower frequency-set -g powersave --max=${freq}MHz >& /dev/null
+- [ $EVALUATE_ONLY -eq 0 ] && run_test $freq
++ run_test $freq
+ done
+
+-echo "=============================================================================="
++[ $EVALUATE_ONLY -eq 0 ] && cpupower frequency-set -g powersave --max=${max_freq}MHz >& /dev/null
+
++echo "=============================================================================="
+ echo "The marketing frequency of the cpu is $mkt_freq MHz"
+ echo "The maximum frequency of the cpu is $max_freq MHz"
+ echo "The minimum frequency of the cpu is $min_freq MHz"
+
+-cpupower frequency-set -g powersave --max=${max_freq}MHz >& /dev/null
+-
+ # make a pretty table
+ echo "Target Actual Difference MSR(0x199) max_perf_pct"
+ for freq in `seq $max_freq -100 $min_freq`
+@@ -104,10 +107,6 @@ do
+ result_freq=$(cat /tmp/result.${freq} | grep "cpu MHz" | awk ' { print $4 } ' | awk -F "." ' { print $1 } ')
+ msr=$(cat /tmp/result.${freq} | grep "msr" | awk ' { print $3 } ')
+ max_perf_pct=$(cat /tmp/result.${freq} | grep "max_perf_pct" | awk ' { print $2 } ' )
+- if [ $result_freq -eq $freq ]; then
+- echo " $freq $result_freq 0 $msr $(($max_perf_pct*3300))"
+- else
+- echo " $freq $result_freq $(($result_freq-$freq)) $msr $(($max_perf_pct*$max_freq))"
+- fi
++ echo " $freq $result_freq $(($result_freq-$freq)) $msr $(($max_perf_pct*$max_freq))"
+ done
+ exit 0
+diff --git a/tools/testing/selftests/kvm/lib/assert.c b/tools/testing/selftests/kvm/lib/assert.c
+index c9f5b7d4ce38..cd01144d27c8 100644
+--- a/tools/testing/selftests/kvm/lib/assert.c
++++ b/tools/testing/selftests/kvm/lib/assert.c
+@@ -13,6 +13,8 @@
+ #include <execinfo.h>
+ #include <sys/syscall.h>
+
++#include "../../kselftest.h"
++
+ /* Dumps the current stack trace to stderr. */
+ static void __attribute__((noinline)) test_dump_stack(void);
+ static void test_dump_stack(void)
+@@ -70,8 +72,9 @@ test_assert(bool exp, const char *exp_str,
+
+ fprintf(stderr, "==== Test Assertion Failure ====\n"
+ " %s:%u: %s\n"
+- " pid=%d tid=%d\n",
+- file, line, exp_str, getpid(), gettid());
++ " pid=%d tid=%d - %s\n",
++ file, line, exp_str, getpid(), gettid(),
++ strerror(errno));
+ test_dump_stack();
+ if (fmt) {
+ fputs(" ", stderr);
+@@ -80,6 +83,8 @@ test_assert(bool exp, const char *exp_str,
+ }
+ va_end(ap);
+
++ if (errno == EACCES)
++ ksft_exit_skip("Access denied - Exiting.\n");
+ exit(254);
+ }
+
+diff --git a/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c b/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
+index aaa633263b2c..d7cb7944a42e 100644
+--- a/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
++++ b/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
+@@ -28,6 +28,8 @@
+ #include <string.h>
+ #include <sys/ioctl.h>
+
++#include "../kselftest.h"
++
+ #ifndef MSR_IA32_TSC_ADJUST
+ #define MSR_IA32_TSC_ADJUST 0x3b
+ #endif
+diff --git a/tools/testing/selftests/memfd/run_tests.sh b/tools/testing/selftests/memfd/run_tests.sh
+index c2d41ed81b24..2013f195e623 100755
+--- a/tools/testing/selftests/memfd/run_tests.sh
++++ b/tools/testing/selftests/memfd/run_tests.sh
+@@ -1,6 +1,9 @@
+ #!/bin/bash
+ # please run as root
+
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
+ #
+ # Normal tests requiring no special resources
+ #
+@@ -29,12 +32,13 @@ if [ -n "$freepgs" ] && [ $freepgs -lt $hpages_test ]; then
+ nr_hugepgs=`cat /proc/sys/vm/nr_hugepages`
+ hpages_needed=`expr $hpages_test - $freepgs`
+
++ if [ $UID != 0 ]; then
++ echo "Please run memfd with hugetlbfs test as root"
++ exit $ksft_skip
++ fi
++
+ echo 3 > /proc/sys/vm/drop_caches
+ echo $(( $hpages_needed + $nr_hugepgs )) > /proc/sys/vm/nr_hugepages
+- if [ $? -ne 0 ]; then
+- echo "Please run this test as root"
+- exit 1
+- fi
+ while read name size unit; do
+ if [ "$name" = "HugePages_Free:" ]; then
+ freepgs=$size
+@@ -53,7 +57,7 @@ if [ $freepgs -lt $hpages_test ]; then
+ fi
+ printf "Not enough huge pages available (%d < %d)\n" \
+ $freepgs $needpgs
+- exit 1
++ exit $ksft_skip
+ fi
+
+ #
+diff --git a/tools/usb/usbip/libsrc/vhci_driver.c b/tools/usb/usbip/libsrc/vhci_driver.c
+index c9c81614a66a..4204359c9fee 100644
+--- a/tools/usb/usbip/libsrc/vhci_driver.c
++++ b/tools/usb/usbip/libsrc/vhci_driver.c
+@@ -135,11 +135,11 @@ static int refresh_imported_device_list(void)
+ return 0;
+ }
+
+-static int get_nports(void)
++static int get_nports(struct udev_device *hc_device)
+ {
+ const char *attr_nports;
+
+- attr_nports = udev_device_get_sysattr_value(vhci_driver->hc_device, "nports");
++ attr_nports = udev_device_get_sysattr_value(hc_device, "nports");
+ if (!attr_nports) {
+ err("udev_device_get_sysattr_value nports failed");
+ return -1;
+@@ -242,35 +242,41 @@ static int read_record(int rhport, char *host, unsigned long host_len,
+
+ int usbip_vhci_driver_open(void)
+ {
++ int nports;
++ struct udev_device *hc_device;
++
+ udev_context = udev_new();
+ if (!udev_context) {
+ err("udev_new failed");
+ return -1;
+ }
+
+- vhci_driver = calloc(1, sizeof(struct usbip_vhci_driver));
+-
+ /* will be freed in usbip_driver_close() */
+- vhci_driver->hc_device =
++ hc_device =
+ udev_device_new_from_subsystem_sysname(udev_context,
+ USBIP_VHCI_BUS_TYPE,
+ USBIP_VHCI_DEVICE_NAME);
+- if (!vhci_driver->hc_device) {
++ if (!hc_device) {
+ err("udev_device_new_from_subsystem_sysname failed");
+ goto err;
+ }
+
+- vhci_driver->nports = get_nports();
+- dbg("available ports: %d", vhci_driver->nports);
+-
+- if (vhci_driver->nports <= 0) {
++ nports = get_nports(hc_device);
++ if (nports <= 0) {
+ err("no available ports");
+ goto err;
+- } else if (vhci_driver->nports > MAXNPORT) {
+- err("port number exceeds %d", MAXNPORT);
++ }
++ dbg("available ports: %d", nports);
++
++ vhci_driver = calloc(1, sizeof(struct usbip_vhci_driver) +
++ nports * sizeof(struct usbip_imported_device));
++ if (!vhci_driver) {
++ err("vhci_driver allocation failed");
+ goto err;
+ }
+
++ vhci_driver->nports = nports;
++ vhci_driver->hc_device = hc_device;
+ vhci_driver->ncontrollers = get_ncontrollers();
+ dbg("available controllers: %d", vhci_driver->ncontrollers);
+
+@@ -285,7 +291,7 @@ int usbip_vhci_driver_open(void)
+ return 0;
+
+ err:
+- udev_device_unref(vhci_driver->hc_device);
++ udev_device_unref(hc_device);
+
+ if (vhci_driver)
+ free(vhci_driver);
+diff --git a/tools/usb/usbip/libsrc/vhci_driver.h b/tools/usb/usbip/libsrc/vhci_driver.h
+index 418b404d5121..6c9aca216705 100644
+--- a/tools/usb/usbip/libsrc/vhci_driver.h
++++ b/tools/usb/usbip/libsrc/vhci_driver.h
+@@ -13,7 +13,6 @@
+
+ #define USBIP_VHCI_BUS_TYPE "platform"
+ #define USBIP_VHCI_DEVICE_NAME "vhci_hcd.0"
+-#define MAXNPORT 128
+
+ enum hub_speed {
+ HUB_SPEED_HIGH = 0,
+@@ -41,7 +40,7 @@ struct usbip_vhci_driver {
+
+ int ncontrollers;
+ int nports;
+- struct usbip_imported_device idev[MAXNPORT];
++ struct usbip_imported_device idev[];
+ };
+
+
+diff --git a/tools/usb/usbip/src/usbip_detach.c b/tools/usb/usbip/src/usbip_detach.c
+index 9db9d21bb2ec..6a8db858caa5 100644
+--- a/tools/usb/usbip/src/usbip_detach.c
++++ b/tools/usb/usbip/src/usbip_detach.c
+@@ -43,7 +43,7 @@ void usbip_detach_usage(void)
+
+ static int detach_port(char *port)
+ {
+- int ret;
++ int ret = 0;
+ uint8_t portnum;
+ char path[PATH_MAX+1];
+
+@@ -73,9 +73,12 @@ static int detach_port(char *port)
+ }
+
+ ret = usbip_vhci_detach_device(portnum);
+- if (ret < 0)
+- return -1;
++ if (ret < 0) {
++ ret = -1;
++ goto call_driver_close;
++ }
+
++call_driver_close:
+ usbip_vhci_driver_close();
+
+ return ret;
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-07-28 10:41 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-07-28 10:41 UTC (permalink / raw
To: gentoo-commits
commit: 4465d65275166037e241c0f92c3d23d28c03bbe5
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jul 28 10:41:00 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jul 28 10:41:00 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4465d652
Linux patch 4.17.11
0000_README | 4 +
1010_linux-4.17.11.patch | 3321 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3325 insertions(+)
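For anyone applying this out of tree: these numbered sublevel patches are incremental, so 1010_linux-4.17.11.patch applies on top of a tree already at 4.17.10. A minimal sketch, assuming a vanilla 4.17.10 source tree unpacked at /usr/src/linux-4.17.10 (both paths here are hypothetical):

    # Dry-run first to confirm the tree matches the expected 4.17.10 baseline;
    # linux-patches diffs use a/ b/ prefixes, hence -p1.
    cd /usr/src/linux-4.17.10
    patch -p1 --dry-run < /path/to/1010_linux-4.17.11.patch

    # Apply for real once the dry run reports no failed hunks
    patch -p1 < /path/to/1010_linux-4.17.11.patch

A gentoo-sources ebuild handles this automatically; the manual steps above are only for working directly from this repository.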
diff --git a/0000_README b/0000_README
index f2abee1..a0836f2 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-4.17.10.patch
From: http://www.kernel.org
Desc: Linux 4.17.10
+Patch: 1010_linux-4.17.11.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.11
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1010_linux-4.17.11.patch b/1010_linux-4.17.11.patch
new file mode 100644
index 0000000..a9e2e1a
--- /dev/null
+++ b/1010_linux-4.17.11.patch
@@ -0,0 +1,3321 @@
+diff --git a/Makefile b/Makefile
+index 0ab689c38e82..e2664c641109 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/mips/ath79/common.c b/arch/mips/ath79/common.c
+index 10a405d593df..c782b10ddf50 100644
+--- a/arch/mips/ath79/common.c
++++ b/arch/mips/ath79/common.c
+@@ -58,7 +58,7 @@ EXPORT_SYMBOL_GPL(ath79_ddr_ctrl_init);
+
+ void ath79_ddr_wb_flush(u32 reg)
+ {
+- void __iomem *flush_reg = ath79_ddr_wb_flush_base + reg;
++ void __iomem *flush_reg = ath79_ddr_wb_flush_base + (reg * 4);
+
+ /* Flush the DDR write buffer. */
+ __raw_writel(0x1, flush_reg);
+diff --git a/arch/mips/pci/pci.c b/arch/mips/pci/pci.c
+index 9632436d74d7..c2e94cf5ecda 100644
+--- a/arch/mips/pci/pci.c
++++ b/arch/mips/pci/pci.c
+@@ -54,5 +54,5 @@ void pci_resource_to_user(const struct pci_dev *dev, int bar,
+ phys_addr_t size = resource_size(rsrc);
+
+ *start = fixup_bigphys_addr(rsrc->start, size);
+- *end = rsrc->start + size;
++ *end = rsrc->start + size - 1;
+ }
+diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
+index 1835ca1505d6..7472ffa76fd0 100644
+--- a/arch/powerpc/include/asm/mmu_context.h
++++ b/arch/powerpc/include/asm/mmu_context.h
+@@ -35,9 +35,9 @@ extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm(
+ extern struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm,
+ unsigned long ua, unsigned long entries);
+ extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
+- unsigned long ua, unsigned long *hpa);
++ unsigned long ua, unsigned int pageshift, unsigned long *hpa);
+ extern long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
+- unsigned long ua, unsigned long *hpa);
++ unsigned long ua, unsigned int pageshift, unsigned long *hpa);
+ extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
+ extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem);
+ #endif
+diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
+index 4dffa611376d..e14cec6bc339 100644
+--- a/arch/powerpc/kvm/book3s_64_vio.c
++++ b/arch/powerpc/kvm/book3s_64_vio.c
+@@ -433,7 +433,7 @@ long kvmppc_tce_iommu_map(struct kvm *kvm, struct iommu_table *tbl,
+ /* This only handles v2 IOMMU type, v1 is handled via ioctl() */
+ return H_TOO_HARD;
+
+- if (WARN_ON_ONCE(mm_iommu_ua_to_hpa(mem, ua, &hpa)))
++ if (WARN_ON_ONCE(mm_iommu_ua_to_hpa(mem, ua, tbl->it_page_shift, &hpa)))
+ return H_HARDWARE;
+
+ if (mm_iommu_mapped_inc(mem))
+diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
+index 6651f736a0b1..eeb9e6651cc4 100644
+--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
++++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
+@@ -262,7 +262,8 @@ static long kvmppc_rm_tce_iommu_map(struct kvm *kvm, struct iommu_table *tbl,
+ if (!mem)
+ return H_TOO_HARD;
+
+- if (WARN_ON_ONCE_RM(mm_iommu_ua_to_hpa_rm(mem, ua, &hpa)))
++ if (WARN_ON_ONCE_RM(mm_iommu_ua_to_hpa_rm(mem, ua, tbl->it_page_shift,
++ &hpa)))
+ return H_HARDWARE;
+
+ pua = (void *) vmalloc_to_phys(pua);
+@@ -431,7 +432,8 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
+
+ mem = mm_iommu_lookup_rm(vcpu->kvm->mm, ua, IOMMU_PAGE_SIZE_4K);
+ if (mem)
+- prereg = mm_iommu_ua_to_hpa_rm(mem, ua, &tces) == 0;
++ prereg = mm_iommu_ua_to_hpa_rm(mem, ua,
++ IOMMU_PAGE_SHIFT_4K, &tces) == 0;
+ }
+
+ if (!prereg) {
+diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
+index 4c615fcb0cf0..4205ce92ee86 100644
+--- a/arch/powerpc/mm/mmu_context_iommu.c
++++ b/arch/powerpc/mm/mmu_context_iommu.c
+@@ -19,6 +19,7 @@
+ #include <linux/hugetlb.h>
+ #include <linux/swap.h>
+ #include <asm/mmu_context.h>
++#include <asm/pte-walk.h>
+
+ static DEFINE_MUTEX(mem_list_mutex);
+
+@@ -27,6 +28,7 @@ struct mm_iommu_table_group_mem_t {
+ struct rcu_head rcu;
+ unsigned long used;
+ atomic64_t mapped;
++ unsigned int pageshift;
+ u64 ua; /* userspace address */
+ u64 entries; /* number of entries in hpas[] */
+ u64 *hpas; /* vmalloc'ed */
+@@ -125,6 +127,8 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ {
+ struct mm_iommu_table_group_mem_t *mem;
+ long i, j, ret = 0, locked_entries = 0;
++ unsigned int pageshift;
++ unsigned long flags;
+ struct page *page = NULL;
+
+ mutex_lock(&mem_list_mutex);
+@@ -159,6 +163,12 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ goto unlock_exit;
+ }
+
++ /*
++ * For a starting point for a maximum page size calculation
++ * we use @ua and @entries natural alignment to allow IOMMU pages
++ * smaller than huge pages but still bigger than PAGE_SIZE.
++ */
++ mem->pageshift = __ffs(ua | (entries << PAGE_SHIFT));
+ mem->hpas = vzalloc(entries * sizeof(mem->hpas[0]));
+ if (!mem->hpas) {
+ kfree(mem);
+@@ -199,6 +209,23 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ }
+ }
+ populate:
++ pageshift = PAGE_SHIFT;
++ if (PageCompound(page)) {
++ pte_t *pte;
++ struct page *head = compound_head(page);
++ unsigned int compshift = compound_order(head);
++
++ local_irq_save(flags); /* disables as well */
++ pte = find_linux_pte(mm->pgd, ua, NULL, &pageshift);
++ local_irq_restore(flags);
++
++ /* Double check it is still the same pinned page */
++ if (pte && pte_page(*pte) == head &&
++ pageshift == compshift)
++ pageshift = max_t(unsigned int, pageshift,
++ PAGE_SHIFT);
++ }
++ mem->pageshift = min(mem->pageshift, pageshift);
+ mem->hpas[i] = page_to_pfn(page) << PAGE_SHIFT;
+ }
+
+@@ -349,7 +376,7 @@ struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm,
+ EXPORT_SYMBOL_GPL(mm_iommu_find);
+
+ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
+- unsigned long ua, unsigned long *hpa)
++ unsigned long ua, unsigned int pageshift, unsigned long *hpa)
+ {
+ const long entry = (ua - mem->ua) >> PAGE_SHIFT;
+ u64 *va = &mem->hpas[entry];
+@@ -357,6 +384,9 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
+ if (entry >= mem->entries)
+ return -EFAULT;
+
++ if (pageshift > mem->pageshift)
++ return -EFAULT;
++
+ *hpa = *va | (ua & ~PAGE_MASK);
+
+ return 0;
+@@ -364,7 +394,7 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
+ EXPORT_SYMBOL_GPL(mm_iommu_ua_to_hpa);
+
+ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
+- unsigned long ua, unsigned long *hpa)
++ unsigned long ua, unsigned int pageshift, unsigned long *hpa)
+ {
+ const long entry = (ua - mem->ua) >> PAGE_SHIFT;
+ void *va = &mem->hpas[entry];
+@@ -373,6 +403,9 @@ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
+ if (entry >= mem->entries)
+ return -EFAULT;
+
++ if (pageshift > mem->pageshift)
++ return -EFAULT;
++
+ pa = (void *) vmalloc_to_phys(va);
+ if (!pa)
+ return -EFAULT;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index fbc4d17e3ecc..ac01341f2d1f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1092,6 +1092,7 @@ static u32 msr_based_features[] = {
+
+ MSR_F10H_DECFG,
+ MSR_IA32_UCODE_REV,
++ MSR_IA32_ARCH_CAPABILITIES,
+ };
+
+ static unsigned int num_msr_based_features;
+@@ -1100,7 +1101,8 @@ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
+ {
+ switch (msr->index) {
+ case MSR_IA32_UCODE_REV:
+- rdmsrl(msr->index, msr->data);
++ case MSR_IA32_ARCH_CAPABILITIES:
++ rdmsrl_safe(msr->index, &msr->data);
+ break;
+ default:
+ if (kvm_x86_ops->get_msr_feature(msr))
+diff --git a/arch/x86/xen/xen-pvh.S b/arch/x86/xen/xen-pvh.S
+index e1a5fbeae08d..5d7554c025fd 100644
+--- a/arch/x86/xen/xen-pvh.S
++++ b/arch/x86/xen/xen-pvh.S
+@@ -54,6 +54,9 @@
+ * charge of setting up it's own stack, GDT and IDT.
+ */
+
++#define PVH_GDT_ENTRY_CANARY 4
++#define PVH_CANARY_SEL (PVH_GDT_ENTRY_CANARY * 8)
++
+ ENTRY(pvh_start_xen)
+ cld
+
+@@ -98,6 +101,12 @@ ENTRY(pvh_start_xen)
+ /* 64-bit entry point. */
+ .code64
+ 1:
++ /* Set base address in stack canary descriptor. */
++ mov $MSR_GS_BASE,%ecx
++ mov $_pa(canary), %eax
++ xor %edx, %edx
++ wrmsr
++
+ call xen_prepare_pvh
+
+ /* startup_64 expects boot_params in %rsi. */
+@@ -107,6 +116,17 @@ ENTRY(pvh_start_xen)
+
+ #else /* CONFIG_X86_64 */
+
++ /* Set base address in stack canary descriptor. */
++ movl $_pa(gdt_start),%eax
++ movl $_pa(canary),%ecx
++ movw %cx, (PVH_GDT_ENTRY_CANARY * 8) + 2(%eax)
++ shrl $16, %ecx
++ movb %cl, (PVH_GDT_ENTRY_CANARY * 8) + 4(%eax)
++ movb %ch, (PVH_GDT_ENTRY_CANARY * 8) + 7(%eax)
++
++ mov $PVH_CANARY_SEL,%eax
++ mov %eax,%gs
++
+ call mk_early_pgtbl_32
+
+ mov $_pa(initial_page_table), %eax
+@@ -150,9 +170,13 @@ gdt_start:
+ .quad GDT_ENTRY(0xc09a, 0, 0xfffff) /* __KERNEL_CS */
+ #endif
+ .quad GDT_ENTRY(0xc092, 0, 0xfffff) /* __KERNEL_DS */
++ .quad GDT_ENTRY(0x4090, 0, 0x18) /* PVH_CANARY_SEL */
+ gdt_end:
+
+- .balign 4
++ .balign 16
++canary:
++ .fill 48, 1, 0
++
+ early_stack:
+ .fill 256, 1, 0
+ early_stack_end:
+diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
+index bc5f05906bd1..ee840be150b5 100644
+--- a/drivers/acpi/acpica/psloop.c
++++ b/drivers/acpi/acpica/psloop.c
+@@ -497,6 +497,18 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ status =
+ acpi_ps_create_op(walk_state, aml_op_start, &op);
+ if (ACPI_FAILURE(status)) {
++ /*
++ * ACPI_PARSE_MODULE_LEVEL means that we are loading a table by
++ * executing it as a control method. However, if we encounter
++ * an error while loading the table, we need to keep trying to
++ * load the table rather than aborting the table load. Set the
++ * status to AE_OK to proceed with the table load.
++ */
++ if ((walk_state->
++ parse_flags & ACPI_PARSE_MODULE_LEVEL)
++ && status == AE_ALREADY_EXISTS) {
++ status = AE_OK;
++ }
+ if (status == AE_CTRL_PARSE_CONTINUE) {
+ continue;
+ }
+@@ -694,6 +706,20 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ acpi_ps_next_parse_state(walk_state, op, status);
+ if (status == AE_CTRL_PENDING) {
+ status = AE_OK;
++ } else
++ if ((walk_state->
++ parse_flags & ACPI_PARSE_MODULE_LEVEL)
++ && ACPI_FAILURE(status)) {
++ /*
++ * ACPI_PARSE_MODULE_LEVEL means that we are loading a table by
++ * executing it as a control method. However, if we encounter
++ * an error while loading the table, we need to keep trying to
++ * load the table rather than aborting the table load. Set the
++ * status to AE_OK to proceed with the table load. If we get a
++ * failure at this point, it means that the dispatcher got an
++ * error while processing Op (most likely an AML operand error.
++ */
++ status = AE_OK;
+ }
+ }
+
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index c9f54089429b..2cee8d0f3045 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -436,14 +436,6 @@ re_probe:
+ goto probe_failed;
+ }
+
+- /*
+- * Ensure devices are listed in devices_kset in correct order
+- * It's important to move Dev to the end of devices_kset before
+- * calling .probe, because it could be recursive and parent Dev
+- * should always go first
+- */
+- devices_kset_move_last(dev);
+-
+ if (dev->bus->probe) {
+ ret = dev->bus->probe(dev);
+ if (ret)
+diff --git a/drivers/clk/clk-aspeed.c b/drivers/clk/clk-aspeed.c
+index 2c23e7d7ba28..43e0c33ee648 100644
+--- a/drivers/clk/clk-aspeed.c
++++ b/drivers/clk/clk-aspeed.c
+@@ -22,7 +22,7 @@
+ #define ASPEED_MPLL_PARAM 0x20
+ #define ASPEED_HPLL_PARAM 0x24
+ #define AST2500_HPLL_BYPASS_EN BIT(20)
+-#define AST2400_HPLL_STRAPPED BIT(18)
++#define AST2400_HPLL_PROGRAMMED BIT(18)
+ #define AST2400_HPLL_BYPASS_EN BIT(17)
+ #define ASPEED_MISC_CTRL 0x2c
+ #define UART_DIV13_EN BIT(12)
+@@ -88,8 +88,8 @@ static const struct aspeed_gate_data aspeed_gates[] = {
+ [ASPEED_CLK_GATE_GCLK] = { 1, 7, "gclk-gate", NULL, 0 }, /* 2D engine */
+ [ASPEED_CLK_GATE_MCLK] = { 2, -1, "mclk-gate", "mpll", CLK_IS_CRITICAL }, /* SDRAM */
+ [ASPEED_CLK_GATE_VCLK] = { 3, 6, "vclk-gate", NULL, 0 }, /* Video Capture */
+- [ASPEED_CLK_GATE_BCLK] = { 4, 8, "bclk-gate", "bclk", 0 }, /* PCIe/PCI */
+- [ASPEED_CLK_GATE_DCLK] = { 5, -1, "dclk-gate", NULL, 0 }, /* DAC */
++ [ASPEED_CLK_GATE_BCLK] = { 4, 8, "bclk-gate", "bclk", CLK_IS_CRITICAL }, /* PCIe/PCI */
++ [ASPEED_CLK_GATE_DCLK] = { 5, -1, "dclk-gate", NULL, CLK_IS_CRITICAL }, /* DAC */
+ [ASPEED_CLK_GATE_REFCLK] = { 6, -1, "refclk-gate", "clkin", CLK_IS_CRITICAL },
+ [ASPEED_CLK_GATE_USBPORT2CLK] = { 7, 3, "usb-port2-gate", NULL, 0 }, /* USB2.0 Host port 2 */
+ [ASPEED_CLK_GATE_LCLK] = { 8, 5, "lclk-gate", NULL, 0 }, /* LPC */
+@@ -530,29 +530,45 @@ builtin_platform_driver(aspeed_clk_driver);
+ static void __init aspeed_ast2400_cc(struct regmap *map)
+ {
+ struct clk_hw *hw;
+- u32 val, freq, div;
++ u32 val, div, clkin, hpll;
++ const u16 hpll_rates[][4] = {
++ {384, 360, 336, 408},
++ {400, 375, 350, 425},
++ };
++ int rate;
+
+ /*
+ * CLKIN is the crystal oscillator, 24, 48 or 25MHz selected by
+ * strapping
+ */
+ regmap_read(map, ASPEED_STRAP, &val);
+- if (val & CLKIN_25MHZ_EN)
+- freq = 25000000;
+- else if (val & AST2400_CLK_SOURCE_SEL)
+- freq = 48000000;
+- else
+- freq = 24000000;
+- hw = clk_hw_register_fixed_rate(NULL, "clkin", NULL, 0, freq);
+- pr_debug("clkin @%u MHz\n", freq / 1000000);
++ rate = (val >> 8) & 3;
++ if (val & CLKIN_25MHZ_EN) {
++ clkin = 25000000;
++ hpll = hpll_rates[1][rate];
++ } else if (val & AST2400_CLK_SOURCE_SEL) {
++ clkin = 48000000;
++ hpll = hpll_rates[0][rate];
++ } else {
++ clkin = 24000000;
++ hpll = hpll_rates[0][rate];
++ }
++ hw = clk_hw_register_fixed_rate(NULL, "clkin", NULL, 0, clkin);
++ pr_debug("clkin @%u MHz\n", clkin / 1000000);
+
+ /*
+ * High-speed PLL clock derived from the crystal. This the CPU clock,
+- * and we assume that it is enabled
++ * and we assume that it is enabled. It can be configured through the
++ * HPLL_PARAM register, or set to a specified frequency by strapping.
+ */
+ regmap_read(map, ASPEED_HPLL_PARAM, &val);
+- WARN(val & AST2400_HPLL_STRAPPED, "hpll is strapped not configured");
+- aspeed_clk_data->hws[ASPEED_CLK_HPLL] = aspeed_ast2400_calc_pll("hpll", val);
++ if (val & AST2400_HPLL_PROGRAMMED)
++ hw = aspeed_ast2400_calc_pll("hpll", val);
++ else
++ hw = clk_hw_register_fixed_rate(NULL, "hpll", "clkin", 0,
++ hpll * 1000000);
++
++ aspeed_clk_data->hws[ASPEED_CLK_HPLL] = hw;
+
+ /*
+ * Strap bits 11:10 define the CPU/AHB clock frequency ratio (aka HCLK)
+diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
+index b1e4d9557610..0e053c17d8ba 100644
+--- a/drivers/clk/meson/gxbb.c
++++ b/drivers/clk/meson/gxbb.c
+@@ -511,6 +511,7 @@ static struct clk_regmap gxbb_fclk_div2 = {
+ .ops = &clk_regmap_gate_ops,
+ .parent_names = (const char *[]){ "fclk_div2_div" },
+ .num_parents = 1,
++ .flags = CLK_IS_CRITICAL,
+ },
+ };
+
+diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c
+index 87213ea7fc84..706dc80ad644 100644
+--- a/drivers/clk/mvebu/armada-37xx-periph.c
++++ b/drivers/clk/mvebu/armada-37xx-periph.c
+@@ -35,6 +35,7 @@
+ #define CLK_SEL 0x10
+ #define CLK_DIS 0x14
+
++#define ARMADA_37XX_DVFS_LOAD_1 1
+ #define LOAD_LEVEL_NR 4
+
+ #define ARMADA_37XX_NB_L0L1 0x18
+@@ -507,6 +508,40 @@ static long clk_pm_cpu_round_rate(struct clk_hw *hw, unsigned long rate,
+ return -EINVAL;
+ }
+
++/*
++ * Switching the CPU from the L2 or L3 frequencies (300 and 200 Mhz
++ * respectively) to L0 frequency (1.2 Ghz) requires a significant
++ * amount of time to let VDD stabilize to the appropriate
++ * voltage. This amount of time is large enough that it cannot be
++ * covered by the hardware countdown register. Due to this, the CPU
++ * might start operating at L0 before the voltage is stabilized,
++ * leading to CPU stalls.
++ *
++ * To work around this problem, we prevent switching directly from the
++ * L2/L3 frequencies to the L0 frequency, and instead switch to the L1
++ * frequency in-between. The sequence therefore becomes:
++ * 1. First switch from L2/L3(200/300MHz) to L1(600MHZ)
++ * 2. Sleep 20ms for stabling VDD voltage
++ * 3. Then switch from L1(600MHZ) to L0(1200Mhz).
++ */
++static void clk_pm_cpu_set_rate_wa(unsigned long rate, struct regmap *base)
++{
++ unsigned int cur_level;
++
++ if (rate != 1200 * 1000 * 1000)
++ return;
++
++ regmap_read(base, ARMADA_37XX_NB_CPU_LOAD, &cur_level);
++ cur_level &= ARMADA_37XX_NB_CPU_LOAD_MASK;
++ if (cur_level <= ARMADA_37XX_DVFS_LOAD_1)
++ return;
++
++ regmap_update_bits(base, ARMADA_37XX_NB_CPU_LOAD,
++ ARMADA_37XX_NB_CPU_LOAD_MASK,
++ ARMADA_37XX_DVFS_LOAD_1);
++ msleep(20);
++}
++
+ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+ {
+@@ -537,6 +572,9 @@ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
+ */
+ reg = ARMADA_37XX_NB_CPU_LOAD;
+ mask = ARMADA_37XX_NB_CPU_LOAD_MASK;
++
++ clk_pm_cpu_set_rate_wa(rate, base);
++
+ regmap_update_bits(base, reg, mask, load_level);
+
+ return rate;
+diff --git a/drivers/gpu/drm/nouveau/dispnv04/disp.c b/drivers/gpu/drm/nouveau/dispnv04/disp.c
+index 501d2d290e9c..70dce544984e 100644
+--- a/drivers/gpu/drm/nouveau/dispnv04/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv04/disp.c
+@@ -55,6 +55,9 @@ nv04_display_create(struct drm_device *dev)
+ nouveau_display(dev)->init = nv04_display_init;
+ nouveau_display(dev)->fini = nv04_display_fini;
+
++ /* Pre-nv50 doesn't support atomic, so don't expose the ioctls */
++ dev->driver->driver_features &= ~DRIVER_ATOMIC;
++
+ nouveau_hw_save_vga_fonts(dev, 1);
+
+ nv04_crtc_create(dev, 0);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index 0bffeb95b072..591d9c29ede7 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -79,6 +79,10 @@ MODULE_PARM_DESC(modeset, "enable driver (default: auto, "
+ int nouveau_modeset = -1;
+ module_param_named(modeset, nouveau_modeset, int, 0400);
+
++MODULE_PARM_DESC(atomic, "Expose atomic ioctl (default: disabled)");
++static int nouveau_atomic = 0;
++module_param_named(atomic, nouveau_atomic, int, 0400);
++
+ MODULE_PARM_DESC(runpm, "disable (0), force enable (1), optimus only default (-1)");
+ static int nouveau_runtime_pm = -1;
+ module_param_named(runpm, nouveau_runtime_pm, int, 0400);
+@@ -501,6 +505,9 @@ static int nouveau_drm_probe(struct pci_dev *pdev,
+
+ pci_set_master(pdev);
+
++ if (nouveau_atomic)
++ driver_pci.driver_features |= DRIVER_ATOMIC;
++
+ ret = drm_get_pci_dev(pdev, pent, &driver_pci);
+ if (ret) {
+ nvkm_device_del(&device);
+diff --git a/drivers/gpu/drm/nouveau/nv50_display.c b/drivers/gpu/drm/nouveau/nv50_display.c
+index 2b3ccd850750..abe297fda046 100644
+--- a/drivers/gpu/drm/nouveau/nv50_display.c
++++ b/drivers/gpu/drm/nouveau/nv50_display.c
+@@ -4198,7 +4198,7 @@ nv50_disp_atomic_commit(struct drm_device *dev,
+ nv50_disp_atomic_commit_tail(state);
+
+ drm_for_each_crtc(crtc, dev) {
+- if (crtc->state->enable) {
++ if (crtc->state->active) {
+ if (!drm->have_disp_power_ref) {
+ drm->have_disp_power_ref = true;
+ return 0;
+@@ -4441,10 +4441,6 @@ nv50_display_destroy(struct drm_device *dev)
+ kfree(disp);
+ }
+
+-MODULE_PARM_DESC(atomic, "Expose atomic ioctl (default: disabled)");
+-static int nouveau_atomic = 0;
+-module_param_named(atomic, nouveau_atomic, int, 0400);
+-
+ int
+ nv50_display_create(struct drm_device *dev)
+ {
+@@ -4469,8 +4465,6 @@ nv50_display_create(struct drm_device *dev)
+ disp->disp = &nouveau_display(dev)->disp;
+ dev->mode_config.funcs = &nv50_disp_func;
+ dev->driver->driver_features |= DRIVER_PREFER_XBGR_30BPP;
+- if (nouveau_atomic)
+- dev->driver->driver_features |= DRIVER_ATOMIC;
+
+ /* small shared memory area we use for notifiers and semaphores */
+ ret = nouveau_bo_new(&drm->client, 4096, 0x1000, TTM_PL_FLAG_VRAM,
+diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
+index b38798cc5288..f3a21343e636 100644
+--- a/drivers/iommu/Kconfig
++++ b/drivers/iommu/Kconfig
+@@ -142,7 +142,6 @@ config DMAR_TABLE
+ config INTEL_IOMMU
+ bool "Support for Intel IOMMU using DMA Remapping Devices"
+ depends on PCI_MSI && ACPI && (X86 || IA64_GENERIC)
+- select DMA_DIRECT_OPS
+ select IOMMU_API
+ select IOMMU_IOVA
+ select DMAR_TABLE
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 749d8f235346..6392a4964fc5 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -31,7 +31,6 @@
+ #include <linux/pci.h>
+ #include <linux/dmar.h>
+ #include <linux/dma-mapping.h>
+-#include <linux/dma-direct.h>
+ #include <linux/mempool.h>
+ #include <linux/memory.h>
+ #include <linux/cpu.h>
+@@ -3709,30 +3708,61 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flags,
+ unsigned long attrs)
+ {
+- void *vaddr;
++ struct page *page = NULL;
++ int order;
+
+- vaddr = dma_direct_alloc(dev, size, dma_handle, flags, attrs);
+- if (iommu_no_mapping(dev) || !vaddr)
+- return vaddr;
++ size = PAGE_ALIGN(size);
++ order = get_order(size);
+
+- *dma_handle = __intel_map_single(dev, virt_to_phys(vaddr),
+- PAGE_ALIGN(size), DMA_BIDIRECTIONAL,
+- dev->coherent_dma_mask);
+- if (!*dma_handle)
+- goto out_free_pages;
+- return vaddr;
++ if (!iommu_no_mapping(dev))
++ flags &= ~(GFP_DMA | GFP_DMA32);
++ else if (dev->coherent_dma_mask < dma_get_required_mask(dev)) {
++ if (dev->coherent_dma_mask < DMA_BIT_MASK(32))
++ flags |= GFP_DMA;
++ else
++ flags |= GFP_DMA32;
++ }
++
++ if (gfpflags_allow_blocking(flags)) {
++ unsigned int count = size >> PAGE_SHIFT;
++
++ page = dma_alloc_from_contiguous(dev, count, order, flags);
++ if (page && iommu_no_mapping(dev) &&
++ page_to_phys(page) + size > dev->coherent_dma_mask) {
++ dma_release_from_contiguous(dev, page, count);
++ page = NULL;
++ }
++ }
++
++ if (!page)
++ page = alloc_pages(flags, order);
++ if (!page)
++ return NULL;
++ memset(page_address(page), 0, size);
++
++ *dma_handle = __intel_map_single(dev, page_to_phys(page), size,
++ DMA_BIDIRECTIONAL,
++ dev->coherent_dma_mask);
++ if (*dma_handle)
++ return page_address(page);
++ if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
++ __free_pages(page, order);
+
+-out_free_pages:
+- dma_direct_free(dev, size, vaddr, *dma_handle, attrs);
+ return NULL;
+ }
+
+ static void intel_free_coherent(struct device *dev, size_t size, void *vaddr,
+ dma_addr_t dma_handle, unsigned long attrs)
+ {
+- if (!iommu_no_mapping(dev))
+- intel_unmap(dev, dma_handle, PAGE_ALIGN(size));
+- dma_direct_free(dev, size, vaddr, dma_handle, attrs);
++ int order;
++ struct page *page = virt_to_page(vaddr);
++
++ size = PAGE_ALIGN(size);
++ order = get_order(size);
++
++ intel_unmap(dev, dma_handle, size);
++ if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
++ __free_pages(page, order);
+ }
+
+ static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index b594bae1adbd..cdc72b7e3d26 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -743,15 +743,20 @@ const struct bond_option *bond_opt_get(unsigned int option)
+ static int bond_option_mode_set(struct bonding *bond,
+ const struct bond_opt_value *newval)
+ {
+- if (!bond_mode_uses_arp(newval->value) && bond->params.arp_interval) {
+- netdev_dbg(bond->dev, "%s mode is incompatible with arp monitoring, start mii monitoring\n",
+- newval->string);
+- /* disable arp monitoring */
+- bond->params.arp_interval = 0;
+- /* set miimon to default value */
+- bond->params.miimon = BOND_DEFAULT_MIIMON;
+- netdev_dbg(bond->dev, "Setting MII monitoring interval to %d\n",
+- bond->params.miimon);
++ if (!bond_mode_uses_arp(newval->value)) {
++ if (bond->params.arp_interval) {
++ netdev_dbg(bond->dev, "%s mode is incompatible with arp monitoring, start mii monitoring\n",
++ newval->string);
++ /* disable arp monitoring */
++ bond->params.arp_interval = 0;
++ }
++
++ if (!bond->params.miimon) {
++ /* set miimon to default value */
++ bond->params.miimon = BOND_DEFAULT_MIIMON;
++ netdev_dbg(bond->dev, "Setting MII monitoring interval to %d\n",
++ bond->params.miimon);
++ }
+ }
+
+ if (newval->value == BOND_MODE_ALB)
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index b397a33f3d32..e2f965c2e3aa 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -634,10 +634,12 @@ static int m_can_clk_start(struct m_can_priv *priv)
+ int err;
+
+ err = pm_runtime_get_sync(priv->device);
+- if (err)
++ if (err < 0) {
+ pm_runtime_put_noidle(priv->device);
++ return err;
++ }
+
+- return err;
++ return 0;
+ }
+
+ static void m_can_clk_stop(struct m_can_priv *priv)
+@@ -1109,7 +1111,8 @@ static void m_can_chip_config(struct net_device *dev)
+
+ } else {
+ /* Version 3.1.x or 3.2.x */
+- cccr &= ~(CCCR_TEST | CCCR_MON | CCCR_BRSE | CCCR_FDOE);
++ cccr &= ~(CCCR_TEST | CCCR_MON | CCCR_BRSE | CCCR_FDOE |
++ CCCR_NISO);
+
+ /* Only 3.2.x has NISO Bit implemented */
+ if (priv->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO)
+@@ -1687,8 +1690,6 @@ failed_ret:
+ return ret;
+ }
+
+-/* TODO: runtime PM with power down or sleep mode */
+-
+ static __maybe_unused int m_can_suspend(struct device *dev)
+ {
+ struct net_device *ndev = dev_get_drvdata(dev);
+diff --git a/drivers/net/can/peak_canfd/peak_pciefd_main.c b/drivers/net/can/peak_canfd/peak_pciefd_main.c
+index 3c51a884db87..fa689854f16b 100644
+--- a/drivers/net/can/peak_canfd/peak_pciefd_main.c
++++ b/drivers/net/can/peak_canfd/peak_pciefd_main.c
+@@ -58,6 +58,10 @@ MODULE_LICENSE("GPL v2");
+ #define PCIEFD_REG_SYS_VER1 0x0040 /* version reg #1 */
+ #define PCIEFD_REG_SYS_VER2 0x0044 /* version reg #2 */
+
++#define PCIEFD_FW_VERSION(x, y, z) (((u32)(x) << 24) | \
++ ((u32)(y) << 16) | \
++ ((u32)(z) << 8))
++
+ /* System Control Registers Bits */
+ #define PCIEFD_SYS_CTL_TS_RST 0x00000001 /* timestamp clock */
+ #define PCIEFD_SYS_CTL_CLK_EN 0x00000002 /* system clock */
+@@ -783,6 +787,21 @@ static int peak_pciefd_probe(struct pci_dev *pdev,
+ "%ux CAN-FD PCAN-PCIe FPGA v%u.%u.%u:\n", can_count,
+ hw_ver_major, hw_ver_minor, hw_ver_sub);
+
++#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
++ /* FW < v3.3.0 DMA logic doesn't handle correctly the mix of 32-bit and
++ * 64-bit logical addresses: this workaround forces usage of 32-bit
++ * DMA addresses only when such a fw is detected.
++ */
++ if (PCIEFD_FW_VERSION(hw_ver_major, hw_ver_minor, hw_ver_sub) <
++ PCIEFD_FW_VERSION(3, 3, 0)) {
++ err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
++ if (err)
++ dev_warn(&pdev->dev,
++ "warning: can't set DMA mask %llxh (err %d)\n",
++ DMA_BIT_MASK(32), err);
++ }
++#endif
++
+ /* stop system clock */
+ pciefd_sys_writereg(pciefd, PCIEFD_SYS_CTL_CLK_EN,
+ PCIEFD_REG_SYS_CTL_CLR);
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index 89aec07c225f..5a24039733ef 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -2,6 +2,7 @@
+ *
+ * Copyright (C) 2012 - 2014 Xilinx, Inc.
+ * Copyright (C) 2009 PetaLogix. All rights reserved.
++ * Copyright (C) 2017 Sandvik Mining and Construction Oy
+ *
+ * Description:
+ * This driver is developed for Axi CAN IP and for Zynq CANPS Controller.
+@@ -25,8 +26,10 @@
+ #include <linux/module.h>
+ #include <linux/netdevice.h>
+ #include <linux/of.h>
++#include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/skbuff.h>
++#include <linux/spinlock.h>
+ #include <linux/string.h>
+ #include <linux/types.h>
+ #include <linux/can/dev.h>
+@@ -101,7 +104,7 @@ enum xcan_reg {
+ #define XCAN_INTR_ALL (XCAN_IXR_TXOK_MASK | XCAN_IXR_BSOFF_MASK |\
+ XCAN_IXR_WKUP_MASK | XCAN_IXR_SLP_MASK | \
+ XCAN_IXR_RXNEMP_MASK | XCAN_IXR_ERROR_MASK | \
+- XCAN_IXR_ARBLST_MASK | XCAN_IXR_RXOK_MASK)
++ XCAN_IXR_RXOFLW_MASK | XCAN_IXR_ARBLST_MASK)
+
+ /* CAN register bit shift - XCAN_<REG>_<BIT>_SHIFT */
+ #define XCAN_BTR_SJW_SHIFT 7 /* Synchronous jump width */
+@@ -118,6 +121,7 @@ enum xcan_reg {
+ /**
+ * struct xcan_priv - This definition define CAN driver instance
+ * @can: CAN private data structure.
++ * @tx_lock: Lock for synchronizing TX interrupt handling
+ * @tx_head: Tx CAN packets ready to send on the queue
+ * @tx_tail: Tx CAN packets successfully sended on the queue
+ * @tx_max: Maximum number packets the driver can send
+@@ -132,6 +136,7 @@ enum xcan_reg {
+ */
+ struct xcan_priv {
+ struct can_priv can;
++ spinlock_t tx_lock;
+ unsigned int tx_head;
+ unsigned int tx_tail;
+ unsigned int tx_max;
+@@ -159,6 +164,11 @@ static const struct can_bittiming_const xcan_bittiming_const = {
+ .brp_inc = 1,
+ };
+
++#define XCAN_CAP_WATERMARK 0x0001
++struct xcan_devtype_data {
++ unsigned int caps;
++};
++
+ /**
+ * xcan_write_reg_le - Write a value to the device register little endian
+ * @priv: Driver private data structure
+@@ -238,6 +248,10 @@ static int set_reset_mode(struct net_device *ndev)
+ usleep_range(500, 10000);
+ }
+
++ /* reset clears FIFOs */
++ priv->tx_head = 0;
++ priv->tx_tail = 0;
++
+ return 0;
+ }
+
+@@ -392,6 +406,7 @@ static int xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ struct net_device_stats *stats = &ndev->stats;
+ struct can_frame *cf = (struct can_frame *)skb->data;
+ u32 id, dlc, data[2] = {0, 0};
++ unsigned long flags;
+
+ if (can_dropped_invalid_skb(ndev, skb))
+ return NETDEV_TX_OK;
+@@ -439,6 +454,9 @@ static int xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ data[1] = be32_to_cpup((__be32 *)(cf->data + 4));
+
+ can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max);
++
++ spin_lock_irqsave(&priv->tx_lock, flags);
++
+ priv->tx_head++;
+
+ /* Write the Frame to Xilinx CAN TX FIFO */
+@@ -454,10 +472,16 @@ static int xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ stats->tx_bytes += cf->can_dlc;
+ }
+
++ /* Clear TX-FIFO-empty interrupt for xcan_tx_interrupt() */
++ if (priv->tx_max > 1)
++ priv->write_reg(priv, XCAN_ICR_OFFSET, XCAN_IXR_TXFEMP_MASK);
++
+ /* Check if the TX buffer is full */
+ if ((priv->tx_head - priv->tx_tail) == priv->tx_max)
+ netif_stop_queue(ndev);
+
++ spin_unlock_irqrestore(&priv->tx_lock, flags);
++
+ return NETDEV_TX_OK;
+ }
+
+@@ -529,6 +553,123 @@ static int xcan_rx(struct net_device *ndev)
+ return 1;
+ }
+
++/**
++ * xcan_current_error_state - Get current error state from HW
++ * @ndev: Pointer to net_device structure
++ *
++ * Checks the current CAN error state from the HW. Note that this
++ * only checks for ERROR_PASSIVE and ERROR_WARNING.
++ *
++ * Return:
++ * ERROR_PASSIVE or ERROR_WARNING if either is active, ERROR_ACTIVE
++ * otherwise.
++ */
++static enum can_state xcan_current_error_state(struct net_device *ndev)
++{
++ struct xcan_priv *priv = netdev_priv(ndev);
++ u32 status = priv->read_reg(priv, XCAN_SR_OFFSET);
++
++ if ((status & XCAN_SR_ESTAT_MASK) == XCAN_SR_ESTAT_MASK)
++ return CAN_STATE_ERROR_PASSIVE;
++ else if (status & XCAN_SR_ERRWRN_MASK)
++ return CAN_STATE_ERROR_WARNING;
++ else
++ return CAN_STATE_ERROR_ACTIVE;
++}
++
++/**
++ * xcan_set_error_state - Set new CAN error state
++ * @ndev: Pointer to net_device structure
++ * @new_state: The new CAN state to be set
++ * @cf: Error frame to be populated or NULL
++ *
++ * Set new CAN error state for the device, updating statistics and
++ * populating the error frame if given.
++ */
++static void xcan_set_error_state(struct net_device *ndev,
++ enum can_state new_state,
++ struct can_frame *cf)
++{
++ struct xcan_priv *priv = netdev_priv(ndev);
++ u32 ecr = priv->read_reg(priv, XCAN_ECR_OFFSET);
++ u32 txerr = ecr & XCAN_ECR_TEC_MASK;
++ u32 rxerr = (ecr & XCAN_ECR_REC_MASK) >> XCAN_ESR_REC_SHIFT;
++
++ priv->can.state = new_state;
++
++ if (cf) {
++ cf->can_id |= CAN_ERR_CRTL;
++ cf->data[6] = txerr;
++ cf->data[7] = rxerr;
++ }
++
++ switch (new_state) {
++ case CAN_STATE_ERROR_PASSIVE:
++ priv->can.can_stats.error_passive++;
++ if (cf)
++ cf->data[1] = (rxerr > 127) ?
++ CAN_ERR_CRTL_RX_PASSIVE :
++ CAN_ERR_CRTL_TX_PASSIVE;
++ break;
++ case CAN_STATE_ERROR_WARNING:
++ priv->can.can_stats.error_warning++;
++ if (cf)
++ cf->data[1] |= (txerr > rxerr) ?
++ CAN_ERR_CRTL_TX_WARNING :
++ CAN_ERR_CRTL_RX_WARNING;
++ break;
++ case CAN_STATE_ERROR_ACTIVE:
++ if (cf)
++ cf->data[1] |= CAN_ERR_CRTL_ACTIVE;
++ break;
++ default:
++ /* non-ERROR states are handled elsewhere */
++ WARN_ON(1);
++ break;
++ }
++}
++
++/**
++ * xcan_update_error_state_after_rxtx - Update CAN error state after RX/TX
++ * @ndev: Pointer to net_device structure
++ *
++ * If the device is in a ERROR-WARNING or ERROR-PASSIVE state, check if
++ * the performed RX/TX has caused it to drop to a lesser state and set
++ * the interface state accordingly.
++ */
++static void xcan_update_error_state_after_rxtx(struct net_device *ndev)
++{
++ struct xcan_priv *priv = netdev_priv(ndev);
++ enum can_state old_state = priv->can.state;
++ enum can_state new_state;
++
++ /* changing error state due to successful frame RX/TX can only
++ * occur from these states
++ */
++ if (old_state != CAN_STATE_ERROR_WARNING &&
++ old_state != CAN_STATE_ERROR_PASSIVE)
++ return;
++
++ new_state = xcan_current_error_state(ndev);
++
++ if (new_state != old_state) {
++ struct sk_buff *skb;
++ struct can_frame *cf;
++
++ skb = alloc_can_err_skb(ndev, &cf);
++
++ xcan_set_error_state(ndev, new_state, skb ? cf : NULL);
++
++ if (skb) {
++ struct net_device_stats *stats = &ndev->stats;
++
++ stats->rx_packets++;
++ stats->rx_bytes += cf->can_dlc;
++ netif_rx(skb);
++ }
++ }
++}
++
+ /**
+ * xcan_err_interrupt - error frame Isr
+ * @ndev: net_device pointer
+@@ -544,16 +685,12 @@ static void xcan_err_interrupt(struct net_device *ndev, u32 isr)
+ struct net_device_stats *stats = &ndev->stats;
+ struct can_frame *cf;
+ struct sk_buff *skb;
+- u32 err_status, status, txerr = 0, rxerr = 0;
++ u32 err_status;
+
+ skb = alloc_can_err_skb(ndev, &cf);
+
+ err_status = priv->read_reg(priv, XCAN_ESR_OFFSET);
+ priv->write_reg(priv, XCAN_ESR_OFFSET, err_status);
+- txerr = priv->read_reg(priv, XCAN_ECR_OFFSET) & XCAN_ECR_TEC_MASK;
+- rxerr = ((priv->read_reg(priv, XCAN_ECR_OFFSET) &
+- XCAN_ECR_REC_MASK) >> XCAN_ESR_REC_SHIFT);
+- status = priv->read_reg(priv, XCAN_SR_OFFSET);
+
+ if (isr & XCAN_IXR_BSOFF_MASK) {
+ priv->can.state = CAN_STATE_BUS_OFF;
+@@ -563,28 +700,10 @@ static void xcan_err_interrupt(struct net_device *ndev, u32 isr)
+ can_bus_off(ndev);
+ if (skb)
+ cf->can_id |= CAN_ERR_BUSOFF;
+- } else if ((status & XCAN_SR_ESTAT_MASK) == XCAN_SR_ESTAT_MASK) {
+- priv->can.state = CAN_STATE_ERROR_PASSIVE;
+- priv->can.can_stats.error_passive++;
+- if (skb) {
+- cf->can_id |= CAN_ERR_CRTL;
+- cf->data[1] = (rxerr > 127) ?
+- CAN_ERR_CRTL_RX_PASSIVE :
+- CAN_ERR_CRTL_TX_PASSIVE;
+- cf->data[6] = txerr;
+- cf->data[7] = rxerr;
+- }
+- } else if (status & XCAN_SR_ERRWRN_MASK) {
+- priv->can.state = CAN_STATE_ERROR_WARNING;
+- priv->can.can_stats.error_warning++;
+- if (skb) {
+- cf->can_id |= CAN_ERR_CRTL;
+- cf->data[1] |= (txerr > rxerr) ?
+- CAN_ERR_CRTL_TX_WARNING :
+- CAN_ERR_CRTL_RX_WARNING;
+- cf->data[6] = txerr;
+- cf->data[7] = rxerr;
+- }
++ } else {
++ enum can_state new_state = xcan_current_error_state(ndev);
++
++ xcan_set_error_state(ndev, new_state, skb ? cf : NULL);
+ }
+
+ /* Check for Arbitration lost interrupt */
+@@ -600,7 +719,6 @@ static void xcan_err_interrupt(struct net_device *ndev, u32 isr)
+ if (isr & XCAN_IXR_RXOFLW_MASK) {
+ stats->rx_over_errors++;
+ stats->rx_errors++;
+- priv->write_reg(priv, XCAN_SRR_OFFSET, XCAN_SRR_RESET_MASK);
+ if (skb) {
+ cf->can_id |= CAN_ERR_CRTL;
+ cf->data[1] |= CAN_ERR_CRTL_RX_OVERFLOW;
+@@ -709,26 +827,20 @@ static int xcan_rx_poll(struct napi_struct *napi, int quota)
+
+ isr = priv->read_reg(priv, XCAN_ISR_OFFSET);
+ while ((isr & XCAN_IXR_RXNEMP_MASK) && (work_done < quota)) {
+- if (isr & XCAN_IXR_RXOK_MASK) {
+- priv->write_reg(priv, XCAN_ICR_OFFSET,
+- XCAN_IXR_RXOK_MASK);
+- work_done += xcan_rx(ndev);
+- } else {
+- priv->write_reg(priv, XCAN_ICR_OFFSET,
+- XCAN_IXR_RXNEMP_MASK);
+- break;
+- }
++ work_done += xcan_rx(ndev);
+ priv->write_reg(priv, XCAN_ICR_OFFSET, XCAN_IXR_RXNEMP_MASK);
+ isr = priv->read_reg(priv, XCAN_ISR_OFFSET);
+ }
+
+- if (work_done)
++ if (work_done) {
+ can_led_event(ndev, CAN_LED_EVENT_RX);
++ xcan_update_error_state_after_rxtx(ndev);
++ }
+
+ if (work_done < quota) {
+ napi_complete_done(napi, work_done);
+ ier = priv->read_reg(priv, XCAN_IER_OFFSET);
+- ier |= (XCAN_IXR_RXOK_MASK | XCAN_IXR_RXNEMP_MASK);
++ ier |= XCAN_IXR_RXNEMP_MASK;
+ priv->write_reg(priv, XCAN_IER_OFFSET, ier);
+ }
+ return work_done;
+@@ -743,18 +855,71 @@ static void xcan_tx_interrupt(struct net_device *ndev, u32 isr)
+ {
+ struct xcan_priv *priv = netdev_priv(ndev);
+ struct net_device_stats *stats = &ndev->stats;
++ unsigned int frames_in_fifo;
++ int frames_sent = 1; /* TXOK => at least 1 frame was sent */
++ unsigned long flags;
++ int retries = 0;
++
++ /* Synchronize with xmit as we need to know the exact number
++ * of frames in the FIFO to stay in sync due to the TXFEMP
++ * handling.
++ * This also prevents a race between netif_wake_queue() and
++ * netif_stop_queue().
++ */
++ spin_lock_irqsave(&priv->tx_lock, flags);
++
++ frames_in_fifo = priv->tx_head - priv->tx_tail;
++
++ if (WARN_ON_ONCE(frames_in_fifo == 0)) {
++ /* clear TXOK anyway to avoid getting back here */
++ priv->write_reg(priv, XCAN_ICR_OFFSET, XCAN_IXR_TXOK_MASK);
++ spin_unlock_irqrestore(&priv->tx_lock, flags);
++ return;
++ }
++
++ /* Check if 2 frames were sent (TXOK only means that at least 1
++ * frame was sent).
++ */
++ if (frames_in_fifo > 1) {
++ WARN_ON(frames_in_fifo > priv->tx_max);
++
++ /* Synchronize TXOK and isr so that after the loop:
++ * (1) isr variable is up-to-date at least up to TXOK clear
++ * time. This avoids us clearing a TXOK of a second frame
++ * but not noticing that the FIFO is now empty and thus
++ * marking only a single frame as sent.
++ * (2) No TXOK is left. Having one could mean leaving a
++ * stray TXOK as we might process the associated frame
++ * via TXFEMP handling as we read TXFEMP *after* TXOK
++ * clear to satisfy (1).
++ */
++ while ((isr & XCAN_IXR_TXOK_MASK) && !WARN_ON(++retries == 100)) {
++ priv->write_reg(priv, XCAN_ICR_OFFSET, XCAN_IXR_TXOK_MASK);
++ isr = priv->read_reg(priv, XCAN_ISR_OFFSET);
++ }
+
+- while ((priv->tx_head - priv->tx_tail > 0) &&
+- (isr & XCAN_IXR_TXOK_MASK)) {
++ if (isr & XCAN_IXR_TXFEMP_MASK) {
++ /* nothing in FIFO anymore */
++ frames_sent = frames_in_fifo;
++ }
++ } else {
++ /* single frame in fifo, just clear TXOK */
+ priv->write_reg(priv, XCAN_ICR_OFFSET, XCAN_IXR_TXOK_MASK);
++ }
++
++ while (frames_sent--) {
+ can_get_echo_skb(ndev, priv->tx_tail %
+ priv->tx_max);
+ priv->tx_tail++;
+ stats->tx_packets++;
+- isr = priv->read_reg(priv, XCAN_ISR_OFFSET);
+ }
+- can_led_event(ndev, CAN_LED_EVENT_TX);
++
+ netif_wake_queue(ndev);
++
++ spin_unlock_irqrestore(&priv->tx_lock, flags);
++
++ can_led_event(ndev, CAN_LED_EVENT_TX);
++ xcan_update_error_state_after_rxtx(ndev);
+ }
+
+ /**
+@@ -773,6 +938,7 @@ static irqreturn_t xcan_interrupt(int irq, void *dev_id)
+ struct net_device *ndev = (struct net_device *)dev_id;
+ struct xcan_priv *priv = netdev_priv(ndev);
+ u32 isr, ier;
++ u32 isr_errors;
+
+ /* Get the interrupt status from Xilinx CAN */
+ isr = priv->read_reg(priv, XCAN_ISR_OFFSET);
+@@ -791,18 +957,17 @@ static irqreturn_t xcan_interrupt(int irq, void *dev_id)
+ xcan_tx_interrupt(ndev, isr);
+
+ /* Check for the type of error interrupt and Processing it */
+- if (isr & (XCAN_IXR_ERROR_MASK | XCAN_IXR_RXOFLW_MASK |
+- XCAN_IXR_BSOFF_MASK | XCAN_IXR_ARBLST_MASK)) {
+- priv->write_reg(priv, XCAN_ICR_OFFSET, (XCAN_IXR_ERROR_MASK |
+- XCAN_IXR_RXOFLW_MASK | XCAN_IXR_BSOFF_MASK |
+- XCAN_IXR_ARBLST_MASK));
++ isr_errors = isr & (XCAN_IXR_ERROR_MASK | XCAN_IXR_RXOFLW_MASK |
++ XCAN_IXR_BSOFF_MASK | XCAN_IXR_ARBLST_MASK);
++ if (isr_errors) {
++ priv->write_reg(priv, XCAN_ICR_OFFSET, isr_errors);
+ xcan_err_interrupt(ndev, isr);
+ }
+
+ /* Check for the type of receive interrupt and Processing it */
+- if (isr & (XCAN_IXR_RXNEMP_MASK | XCAN_IXR_RXOK_MASK)) {
++ if (isr & XCAN_IXR_RXNEMP_MASK) {
+ ier = priv->read_reg(priv, XCAN_IER_OFFSET);
+- ier &= ~(XCAN_IXR_RXNEMP_MASK | XCAN_IXR_RXOK_MASK);
++ ier &= ~XCAN_IXR_RXNEMP_MASK;
+ priv->write_reg(priv, XCAN_IER_OFFSET, ier);
+ napi_schedule(&priv->napi);
+ }
+@@ -819,13 +984,9 @@ static irqreturn_t xcan_interrupt(int irq, void *dev_id)
+ static void xcan_chip_stop(struct net_device *ndev)
+ {
+ struct xcan_priv *priv = netdev_priv(ndev);
+- u32 ier;
+
+ /* Disable interrupts and leave the can in configuration mode */
+- ier = priv->read_reg(priv, XCAN_IER_OFFSET);
+- ier &= ~XCAN_INTR_ALL;
+- priv->write_reg(priv, XCAN_IER_OFFSET, ier);
+- priv->write_reg(priv, XCAN_SRR_OFFSET, XCAN_SRR_RESET_MASK);
++ set_reset_mode(ndev);
+ priv->can.state = CAN_STATE_STOPPED;
+ }
+
+@@ -958,10 +1119,15 @@ static const struct net_device_ops xcan_netdev_ops = {
+ */
+ static int __maybe_unused xcan_suspend(struct device *dev)
+ {
+- if (!device_may_wakeup(dev))
+- return pm_runtime_force_suspend(dev);
++ struct net_device *ndev = dev_get_drvdata(dev);
+
+- return 0;
++ if (netif_running(ndev)) {
++ netif_stop_queue(ndev);
++ netif_device_detach(ndev);
++ xcan_chip_stop(ndev);
++ }
++
++ return pm_runtime_force_suspend(dev);
+ }
+
+ /**
+@@ -973,11 +1139,27 @@ static int __maybe_unused xcan_suspend(struct device *dev)
+ */
+ static int __maybe_unused xcan_resume(struct device *dev)
+ {
+- if (!device_may_wakeup(dev))
+- return pm_runtime_force_resume(dev);
++ struct net_device *ndev = dev_get_drvdata(dev);
++ int ret;
+
+- return 0;
++ ret = pm_runtime_force_resume(dev);
++ if (ret) {
++ dev_err(dev, "pm_runtime_force_resume failed on resume\n");
++ return ret;
++ }
++
++ if (netif_running(ndev)) {
++ ret = xcan_chip_start(ndev);
++ if (ret) {
++ dev_err(dev, "xcan_chip_start failed on resume\n");
++ return ret;
++ }
++
++ netif_device_attach(ndev);
++ netif_start_queue(ndev);
++ }
+
++ return 0;
+ }
+
+ /**
+@@ -992,14 +1174,6 @@ static int __maybe_unused xcan_runtime_suspend(struct device *dev)
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct xcan_priv *priv = netdev_priv(ndev);
+
+- if (netif_running(ndev)) {
+- netif_stop_queue(ndev);
+- netif_device_detach(ndev);
+- }
+-
+- priv->write_reg(priv, XCAN_MSR_OFFSET, XCAN_MSR_SLEEP_MASK);
+- priv->can.state = CAN_STATE_SLEEPING;
+-
+ clk_disable_unprepare(priv->bus_clk);
+ clk_disable_unprepare(priv->can_clk);
+
+@@ -1018,7 +1192,6 @@ static int __maybe_unused xcan_runtime_resume(struct device *dev)
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct xcan_priv *priv = netdev_priv(ndev);
+ int ret;
+- u32 isr, status;
+
+ ret = clk_prepare_enable(priv->bus_clk);
+ if (ret) {
+@@ -1032,27 +1205,6 @@ static int __maybe_unused xcan_runtime_resume(struct device *dev)
+ return ret;
+ }
+
+- priv->write_reg(priv, XCAN_SRR_OFFSET, XCAN_SRR_RESET_MASK);
+- isr = priv->read_reg(priv, XCAN_ISR_OFFSET);
+- status = priv->read_reg(priv, XCAN_SR_OFFSET);
+-
+- if (netif_running(ndev)) {
+- if (isr & XCAN_IXR_BSOFF_MASK) {
+- priv->can.state = CAN_STATE_BUS_OFF;
+- priv->write_reg(priv, XCAN_SRR_OFFSET,
+- XCAN_SRR_RESET_MASK);
+- } else if ((status & XCAN_SR_ESTAT_MASK) ==
+- XCAN_SR_ESTAT_MASK) {
+- priv->can.state = CAN_STATE_ERROR_PASSIVE;
+- } else if (status & XCAN_SR_ERRWRN_MASK) {
+- priv->can.state = CAN_STATE_ERROR_WARNING;
+- } else {
+- priv->can.state = CAN_STATE_ERROR_ACTIVE;
+- }
+- netif_device_attach(ndev);
+- netif_start_queue(ndev);
+- }
+-
+ return 0;
+ }
+
+@@ -1061,6 +1213,18 @@ static const struct dev_pm_ops xcan_dev_pm_ops = {
+ SET_RUNTIME_PM_OPS(xcan_runtime_suspend, xcan_runtime_resume, NULL)
+ };
+
++static const struct xcan_devtype_data xcan_zynq_data = {
++ .caps = XCAN_CAP_WATERMARK,
++};
++
++/* Match table for OF platform binding */
++static const struct of_device_id xcan_of_match[] = {
++ { .compatible = "xlnx,zynq-can-1.0", .data = &xcan_zynq_data },
++ { .compatible = "xlnx,axi-can-1.00.a", },
++ { /* end of list */ },
++};
++MODULE_DEVICE_TABLE(of, xcan_of_match);
++
+ /**
+ * xcan_probe - Platform registration call
+ * @pdev: Handle to the platform device structure
+@@ -1075,8 +1239,10 @@ static int xcan_probe(struct platform_device *pdev)
+ struct resource *res; /* IO mem resources */
+ struct net_device *ndev;
+ struct xcan_priv *priv;
++ const struct of_device_id *of_id;
++ int caps = 0;
+ void __iomem *addr;
+- int ret, rx_max, tx_max;
++ int ret, rx_max, tx_max, tx_fifo_depth;
+
+ /* Get the virtual base address for the device */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+@@ -1086,7 +1252,8 @@ static int xcan_probe(struct platform_device *pdev)
+ goto err;
+ }
+
+- ret = of_property_read_u32(pdev->dev.of_node, "tx-fifo-depth", &tx_max);
++ ret = of_property_read_u32(pdev->dev.of_node, "tx-fifo-depth",
++ &tx_fifo_depth);
+ if (ret < 0)
+ goto err;
+
+@@ -1094,6 +1261,30 @@ static int xcan_probe(struct platform_device *pdev)
+ if (ret < 0)
+ goto err;
+
++ of_id = of_match_device(xcan_of_match, &pdev->dev);
++ if (of_id) {
++ const struct xcan_devtype_data *devtype_data = of_id->data;
++
++ if (devtype_data)
++ caps = devtype_data->caps;
++ }
++
++ /* There is no way to directly figure out how many frames have been
++ * sent when the TXOK interrupt is processed. If watermark programming
++ * is supported, we can have 2 frames in the FIFO and use TXFEMP
++ * to determine if 1 or 2 frames have been sent.
++ * Theoretically we should be able to use TXFWMEMP to determine up
++ * to 3 frames, but it seems that after putting a second frame in the
++ * FIFO, with watermark at 2 frames, it can happen that TXFWMEMP (less
++ * than 2 frames in FIFO) is set anyway with no TXOK (a frame was
++ * sent), which is not a sensible state - possibly TXFWMEMP is not
++ * completely synchronized with the rest of the bits?
++ */
++ if (caps & XCAN_CAP_WATERMARK)
++ tx_max = min(tx_fifo_depth, 2);
++ else
++ tx_max = 1;
++
+ /* Create a CAN device instance */
+ ndev = alloc_candev(sizeof(struct xcan_priv), tx_max);
+ if (!ndev)
+@@ -1108,6 +1299,7 @@ static int xcan_probe(struct platform_device *pdev)
+ CAN_CTRLMODE_BERR_REPORTING;
+ priv->reg_base = addr;
+ priv->tx_max = tx_max;
++ spin_lock_init(&priv->tx_lock);
+
+ /* Get IRQ for the device */
+ ndev->irq = platform_get_irq(pdev, 0);
+@@ -1172,9 +1364,9 @@ static int xcan_probe(struct platform_device *pdev)
+
+ pm_runtime_put(&pdev->dev);
+
+- netdev_dbg(ndev, "reg_base=0x%p irq=%d clock=%d, tx fifo depth:%d\n",
++ netdev_dbg(ndev, "reg_base=0x%p irq=%d clock=%d, tx fifo depth: actual %d, using %d\n",
+ priv->reg_base, ndev->irq, priv->can.clock.freq,
+- priv->tx_max);
++ tx_fifo_depth, priv->tx_max);
+
+ return 0;
+
+@@ -1208,14 +1400,6 @@ static int xcan_remove(struct platform_device *pdev)
+ return 0;
+ }
+
+-/* Match table for OF platform binding */
+-static const struct of_device_id xcan_of_match[] = {
+- { .compatible = "xlnx,zynq-can-1.0", },
+- { .compatible = "xlnx,axi-can-1.00.a", },
+- { /* end of list */ },
+-};
+-MODULE_DEVICE_TABLE(of, xcan_of_match);
+-
+ static struct platform_driver xcan_driver = {
+ .probe = xcan_probe,
+ .remove = xcan_remove,
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 5b4374f21d76..04371b0bba80 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -341,6 +341,7 @@ static const struct irq_domain_ops mv88e6xxx_g1_irq_domain_ops = {
+ .xlate = irq_domain_xlate_twocell,
+ };
+
++/* To be called with reg_lock held */
+ static void mv88e6xxx_g1_irq_free_common(struct mv88e6xxx_chip *chip)
+ {
+ int irq, virq;
+@@ -360,9 +361,15 @@ static void mv88e6xxx_g1_irq_free_common(struct mv88e6xxx_chip *chip)
+
+ static void mv88e6xxx_g1_irq_free(struct mv88e6xxx_chip *chip)
+ {
+- mv88e6xxx_g1_irq_free_common(chip);
+-
++ /*
++ * free_irq must be called without reg_lock taken because the irq
++ * handler takes this lock, too.
++ */
+ free_irq(chip->irq, chip);
++
++ mutex_lock(&chip->reg_lock);
++ mv88e6xxx_g1_irq_free_common(chip);
++ mutex_unlock(&chip->reg_lock);
+ }
+
+ static int mv88e6xxx_g1_irq_setup_common(struct mv88e6xxx_chip *chip)
+@@ -467,10 +474,12 @@ static int mv88e6xxx_irq_poll_setup(struct mv88e6xxx_chip *chip)
+
+ static void mv88e6xxx_irq_poll_free(struct mv88e6xxx_chip *chip)
+ {
+- mv88e6xxx_g1_irq_free_common(chip);
+-
+ kthread_cancel_delayed_work_sync(&chip->irq_poll_work);
+ kthread_destroy_worker(chip->kworker);
++
++ mutex_lock(&chip->reg_lock);
++ mv88e6xxx_g1_irq_free_common(chip);
++ mutex_unlock(&chip->reg_lock);
+ }
+
+ int mv88e6xxx_wait(struct mv88e6xxx_chip *chip, int addr, int reg, u16 mask)
+@@ -4286,12 +4295,10 @@ out_g2_irq:
+ if (chip->info->g2_irqs > 0)
+ mv88e6xxx_g2_irq_free(chip);
+ out_g1_irq:
+- mutex_lock(&chip->reg_lock);
+ if (chip->irq > 0)
+ mv88e6xxx_g1_irq_free(chip);
+ else
+ mv88e6xxx_irq_poll_free(chip);
+- mutex_unlock(&chip->reg_lock);
+ out:
+ return err;
+ }
+@@ -4316,12 +4323,10 @@ static void mv88e6xxx_remove(struct mdio_device *mdiodev)
+ if (chip->info->g2_irqs > 0)
+ mv88e6xxx_g2_irq_free(chip);
+
+- mutex_lock(&chip->reg_lock);
+ if (chip->irq > 0)
+ mv88e6xxx_g1_irq_free(chip);
+ else
+ mv88e6xxx_irq_poll_free(chip);
+- mutex_unlock(&chip->reg_lock);
+ }
+
+ static const struct of_device_id mv88e6xxx_of_match[] = {
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+index 9128858479c4..2353ec829c04 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+@@ -229,6 +229,7 @@ netdev_tx_t hinic_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
+ txq->txq_stats.tx_busy++;
+ u64_stats_update_end(&txq->txq_stats.syncp);
+ err = NETDEV_TX_BUSY;
++ wqe_size = 0;
+ goto flush_skbs;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+index 29e50f787349..db63f0ec3d01 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+@@ -2956,7 +2956,7 @@ int mlx4_RST2INIT_QP_wrapper(struct mlx4_dev *dev, int slave,
+ u32 srqn = qp_get_srqn(qpc) & 0xffffff;
+ int use_srq = (qp_get_srqn(qpc) >> 24) & 1;
+ struct res_srq *srq;
+- int local_qpn = be32_to_cpu(qpc->local_qpn) & 0xffffff;
++ int local_qpn = vhcr->in_modifier & 0xffffff;
+
+ err = adjust_qp_sched_queue(dev, slave, qpc, inbox);
+ if (err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 30cad07be2b5..065ff87f0bef 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -1092,9 +1092,6 @@ int mlx5e_ethtool_get_ts_info(struct mlx5e_priv *priv,
+ int mlx5e_ethtool_flash_device(struct mlx5e_priv *priv,
+ struct ethtool_flash *flash);
+
+-int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+- void *cb_priv);
+-
+ /* mlx5e generic netdev management API */
+ struct net_device*
+ mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *profile,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
+index 610d485c4b03..dda281cff880 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
+@@ -381,14 +381,14 @@ static void arfs_may_expire_flow(struct mlx5e_priv *priv)
+ HLIST_HEAD(del_list);
+ spin_lock_bh(&priv->fs.arfs.arfs_lock);
+ mlx5e_for_each_arfs_rule(arfs_rule, htmp, priv->fs.arfs.arfs_tables, i, j) {
+- if (quota++ > MLX5E_ARFS_EXPIRY_QUOTA)
+- break;
+ if (!work_pending(&arfs_rule->arfs_work) &&
+ rps_may_expire_flow(priv->netdev,
+ arfs_rule->rxq, arfs_rule->flow_id,
+ arfs_rule->filter_id)) {
+ hlist_del_init(&arfs_rule->hlist);
+ hlist_add_head(&arfs_rule->hlist, &del_list);
++ if (quota++ > MLX5E_ARFS_EXPIRY_QUOTA)
++ break;
+ }
+ }
+ spin_unlock_bh(&priv->fs.arfs.arfs_lock);
+@@ -711,6 +711,9 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
+ skb->protocol != htons(ETH_P_IPV6))
+ return -EPROTONOSUPPORT;
+
++ if (skb->encapsulation)
++ return -EPROTONOSUPPORT;
++
+ arfs_t = arfs_get_table(arfs, arfs_get_ip_proto(skb), skb->protocol);
+ if (!arfs_t)
+ return -EPROTONOSUPPORT;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+index c641d5656b2d..0c6015ce85fd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+@@ -272,7 +272,8 @@ int mlx5e_dcbnl_ieee_setets_core(struct mlx5e_priv *priv, struct ieee_ets *ets)
+ }
+
+ static int mlx5e_dbcnl_validate_ets(struct net_device *netdev,
+- struct ieee_ets *ets)
++ struct ieee_ets *ets,
++ bool zero_sum_allowed)
+ {
+ bool have_ets_tc = false;
+ int bw_sum = 0;
+@@ -297,8 +298,9 @@ static int mlx5e_dbcnl_validate_ets(struct net_device *netdev,
+ }
+
+ if (have_ets_tc && bw_sum != 100) {
+- netdev_err(netdev,
+- "Failed to validate ETS: BW sum is illegal\n");
++ if (bw_sum || (!bw_sum && !zero_sum_allowed))
++ netdev_err(netdev,
++ "Failed to validate ETS: BW sum is illegal\n");
+ return -EINVAL;
+ }
+ return 0;
+@@ -313,7 +315,7 @@ static int mlx5e_dcbnl_ieee_setets(struct net_device *netdev,
+ if (!MLX5_CAP_GEN(priv->mdev, ets))
+ return -EOPNOTSUPP;
+
+- err = mlx5e_dbcnl_validate_ets(netdev, ets);
++ err = mlx5e_dbcnl_validate_ets(netdev, ets, false);
+ if (err)
+ return err;
+
+@@ -613,12 +615,9 @@ static u8 mlx5e_dcbnl_setall(struct net_device *netdev)
+ ets.prio_tc[i]);
+ }
+
+- err = mlx5e_dbcnl_validate_ets(netdev, &ets);
+- if (err) {
+- netdev_err(netdev,
+- "%s, Failed to validate ETS: %d\n", __func__, err);
++ err = mlx5e_dbcnl_validate_ets(netdev, &ets, true);
++ if (err)
+ goto out;
+- }
+
+ err = mlx5e_dcbnl_ieee_setets_core(priv, &ets);
+ if (err) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index d3a1a2281e77..fdf40812a2a9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3093,22 +3093,23 @@ out:
+
+ #ifdef CONFIG_MLX5_ESWITCH
+ static int mlx5e_setup_tc_cls_flower(struct mlx5e_priv *priv,
+- struct tc_cls_flower_offload *cls_flower)
++ struct tc_cls_flower_offload *cls_flower,
++ int flags)
+ {
+ switch (cls_flower->command) {
+ case TC_CLSFLOWER_REPLACE:
+- return mlx5e_configure_flower(priv, cls_flower);
++ return mlx5e_configure_flower(priv, cls_flower, flags);
+ case TC_CLSFLOWER_DESTROY:
+- return mlx5e_delete_flower(priv, cls_flower);
++ return mlx5e_delete_flower(priv, cls_flower, flags);
+ case TC_CLSFLOWER_STATS:
+- return mlx5e_stats_flower(priv, cls_flower);
++ return mlx5e_stats_flower(priv, cls_flower, flags);
+ default:
+ return -EOPNOTSUPP;
+ }
+ }
+
+-int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+- void *cb_priv)
++static int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
++ void *cb_priv)
+ {
+ struct mlx5e_priv *priv = cb_priv;
+
+@@ -3117,7 +3118,7 @@ int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+
+ switch (type) {
+ case TC_SETUP_CLSFLOWER:
+- return mlx5e_setup_tc_cls_flower(priv, type_data);
++ return mlx5e_setup_tc_cls_flower(priv, type_data, MLX5E_TC_INGRESS);
+ default:
+ return -EOPNOTSUPP;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 286565862341..c88eb80278dd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -723,15 +723,31 @@ static int mlx5e_rep_get_phys_port_name(struct net_device *dev,
+
+ static int
+ mlx5e_rep_setup_tc_cls_flower(struct mlx5e_priv *priv,
+- struct tc_cls_flower_offload *cls_flower)
++ struct tc_cls_flower_offload *cls_flower, int flags)
+ {
+ switch (cls_flower->command) {
+ case TC_CLSFLOWER_REPLACE:
+- return mlx5e_configure_flower(priv, cls_flower);
++ return mlx5e_configure_flower(priv, cls_flower, flags);
+ case TC_CLSFLOWER_DESTROY:
+- return mlx5e_delete_flower(priv, cls_flower);
++ return mlx5e_delete_flower(priv, cls_flower, flags);
+ case TC_CLSFLOWER_STATS:
+- return mlx5e_stats_flower(priv, cls_flower);
++ return mlx5e_stats_flower(priv, cls_flower, flags);
++ default:
++ return -EOPNOTSUPP;
++ }
++}
++
++static int mlx5e_rep_setup_tc_cb_egdev(enum tc_setup_type type, void *type_data,
++ void *cb_priv)
++{
++ struct mlx5e_priv *priv = cb_priv;
++
++ if (!tc_cls_can_offload_and_chain0(priv->netdev, type_data))
++ return -EOPNOTSUPP;
++
++ switch (type) {
++ case TC_SETUP_CLSFLOWER:
++ return mlx5e_rep_setup_tc_cls_flower(priv, type_data, MLX5E_TC_EGRESS);
+ default:
+ return -EOPNOTSUPP;
+ }
+@@ -747,7 +763,7 @@ static int mlx5e_rep_setup_tc_cb(enum tc_setup_type type, void *type_data,
+
+ switch (type) {
+ case TC_SETUP_CLSFLOWER:
+- return mlx5e_rep_setup_tc_cls_flower(priv, type_data);
++ return mlx5e_rep_setup_tc_cls_flower(priv, type_data, MLX5E_TC_INGRESS);
+ default:
+ return -EOPNOTSUPP;
+ }
+@@ -1111,7 +1127,7 @@ mlx5e_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
+
+ uplink_rpriv = mlx5_eswitch_get_uplink_priv(dev->priv.eswitch, REP_ETH);
+ upriv = netdev_priv(uplink_rpriv->netdev);
+- err = tc_setup_cb_egdev_register(netdev, mlx5e_setup_tc_block_cb,
++ err = tc_setup_cb_egdev_register(netdev, mlx5e_rep_setup_tc_cb_egdev,
+ upriv);
+ if (err)
+ goto err_neigh_cleanup;
+@@ -1126,7 +1142,7 @@ mlx5e_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
+ return 0;
+
+ err_egdev_cleanup:
+- tc_setup_cb_egdev_unregister(netdev, mlx5e_setup_tc_block_cb,
++ tc_setup_cb_egdev_unregister(netdev, mlx5e_rep_setup_tc_cb_egdev,
+ upriv);
+
+ err_neigh_cleanup:
+@@ -1155,7 +1171,7 @@ mlx5e_vport_rep_unload(struct mlx5_eswitch_rep *rep)
+ uplink_rpriv = mlx5_eswitch_get_uplink_priv(priv->mdev->priv.eswitch,
+ REP_ETH);
+ upriv = netdev_priv(uplink_rpriv->netdev);
+- tc_setup_cb_egdev_unregister(netdev, mlx5e_setup_tc_block_cb,
++ tc_setup_cb_egdev_unregister(netdev, mlx5e_rep_setup_tc_cb_egdev,
+ upriv);
+ mlx5e_rep_neigh_cleanup(rpriv);
+ mlx5e_detach_netdev(priv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index b94276db3ce9..a0ba6cfc9092 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -61,12 +61,16 @@ struct mlx5_nic_flow_attr {
+ struct mlx5_flow_table *hairpin_ft;
+ };
+
++#define MLX5E_TC_FLOW_BASE (MLX5E_TC_LAST_EXPORTED_BIT + 1)
++
+ enum {
+- MLX5E_TC_FLOW_ESWITCH = BIT(0),
+- MLX5E_TC_FLOW_NIC = BIT(1),
+- MLX5E_TC_FLOW_OFFLOADED = BIT(2),
+- MLX5E_TC_FLOW_HAIRPIN = BIT(3),
+- MLX5E_TC_FLOW_HAIRPIN_RSS = BIT(4),
++ MLX5E_TC_FLOW_INGRESS = MLX5E_TC_INGRESS,
++ MLX5E_TC_FLOW_EGRESS = MLX5E_TC_EGRESS,
++ MLX5E_TC_FLOW_ESWITCH = BIT(MLX5E_TC_FLOW_BASE),
++ MLX5E_TC_FLOW_NIC = BIT(MLX5E_TC_FLOW_BASE + 1),
++ MLX5E_TC_FLOW_OFFLOADED = BIT(MLX5E_TC_FLOW_BASE + 2),
++ MLX5E_TC_FLOW_HAIRPIN = BIT(MLX5E_TC_FLOW_BASE + 3),
++ MLX5E_TC_FLOW_HAIRPIN_RSS = BIT(MLX5E_TC_FLOW_BASE + 4),
+ };
+
+ struct mlx5e_tc_flow {
+@@ -1890,6 +1894,10 @@ static bool actions_match_supported(struct mlx5e_priv *priv,
+ else
+ actions = flow->nic_attr->action;
+
++ if (flow->flags & MLX5E_TC_FLOW_EGRESS &&
++ !(actions & MLX5_FLOW_CONTEXT_ACTION_DECAP))
++ return false;
++
+ if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
+ return modify_header_match_supported(&parse_attr->spec, exts);
+
+@@ -2566,8 +2574,20 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
+ return err;
+ }
+
++static void get_flags(int flags, u8 *flow_flags)
++{
++ u8 __flow_flags = 0;
++
++ if (flags & MLX5E_TC_INGRESS)
++ __flow_flags |= MLX5E_TC_FLOW_INGRESS;
++ if (flags & MLX5E_TC_EGRESS)
++ __flow_flags |= MLX5E_TC_FLOW_EGRESS;
++
++ *flow_flags = __flow_flags;
++}
++
+ int mlx5e_configure_flower(struct mlx5e_priv *priv,
+- struct tc_cls_flower_offload *f)
++ struct tc_cls_flower_offload *f, int flags)
+ {
+ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ struct mlx5e_tc_flow_parse_attr *parse_attr;
+@@ -2576,11 +2596,13 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv,
+ int attr_size, err = 0;
+ u8 flow_flags = 0;
+
++ get_flags(flags, &flow_flags);
++
+ if (esw && esw->mode == SRIOV_OFFLOADS) {
+- flow_flags = MLX5E_TC_FLOW_ESWITCH;
++ flow_flags |= MLX5E_TC_FLOW_ESWITCH;
+ attr_size = sizeof(struct mlx5_esw_flow_attr);
+ } else {
+- flow_flags = MLX5E_TC_FLOW_NIC;
++ flow_flags |= MLX5E_TC_FLOW_NIC;
+ attr_size = sizeof(struct mlx5_nic_flow_attr);
+ }
+
+@@ -2639,7 +2661,7 @@ err_free:
+ }
+
+ int mlx5e_delete_flower(struct mlx5e_priv *priv,
+- struct tc_cls_flower_offload *f)
++ struct tc_cls_flower_offload *f, int flags)
+ {
+ struct mlx5e_tc_flow *flow;
+ struct mlx5e_tc_table *tc = &priv->fs.tc;
+@@ -2659,7 +2681,7 @@ int mlx5e_delete_flower(struct mlx5e_priv *priv,
+ }
+
+ int mlx5e_stats_flower(struct mlx5e_priv *priv,
+- struct tc_cls_flower_offload *f)
++ struct tc_cls_flower_offload *f, int flags)
+ {
+ struct mlx5e_tc_table *tc = &priv->fs.tc;
+ struct mlx5e_tc_flow *flow;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+index c14c263a739b..2255345c2e18 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+@@ -38,16 +38,23 @@
+ #define MLX5E_TC_FLOW_ID_MASK 0x0000ffff
+
+ #ifdef CONFIG_MLX5_ESWITCH
++
++enum {
++ MLX5E_TC_INGRESS = BIT(0),
++ MLX5E_TC_EGRESS = BIT(1),
++ MLX5E_TC_LAST_EXPORTED_BIT = 1,
++};
++
+ int mlx5e_tc_init(struct mlx5e_priv *priv);
+ void mlx5e_tc_cleanup(struct mlx5e_priv *priv);
+
+ int mlx5e_configure_flower(struct mlx5e_priv *priv,
+- struct tc_cls_flower_offload *f);
++ struct tc_cls_flower_offload *f, int flags);
+ int mlx5e_delete_flower(struct mlx5e_priv *priv,
+- struct tc_cls_flower_offload *f);
++ struct tc_cls_flower_offload *f, int flags);
+
+ int mlx5e_stats_flower(struct mlx5e_priv *priv,
+- struct tc_cls_flower_offload *f);
++ struct tc_cls_flower_offload *f, int flags);
+
+ struct mlx5e_encap_entry;
+ void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index c3a18ddf5dba..0a75e9d441e6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -2221,6 +2221,6 @@ free_out:
+
+ u8 mlx5_eswitch_mode(struct mlx5_eswitch *esw)
+ {
+- return esw->mode;
++ return ESW_ALLOWED(esw) ? esw->mode : SRIOV_NONE;
+ }
+ EXPORT_SYMBOL_GPL(mlx5_eswitch_mode);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index 857035583ccd..c14e7fc11d8a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -487,6 +487,7 @@ void mlx5_pps_event(struct mlx5_core_dev *mdev,
+ void mlx5_init_clock(struct mlx5_core_dev *mdev)
+ {
+ struct mlx5_clock *clock = &mdev->clock;
++ u64 overflow_cycles;
+ u64 ns;
+ u64 frac = 0;
+ u32 dev_freq;
+@@ -510,10 +511,17 @@ void mlx5_init_clock(struct mlx5_core_dev *mdev)
+
+ /* Calculate period in seconds to call the overflow watchdog - to make
+ * sure counter is checked at least once every wrap around.
++ * The period is calculated as the minimum of the max HW cycles count
++ * (the clock source mask) and the max number of cycles that can be
++ * multiplied by the clock multiplier without the result exceeding
++ * 64 bits.
+ */
+- ns = cyclecounter_cyc2ns(&clock->cycles, clock->cycles.mask,
++ overflow_cycles = div64_u64(~0ULL >> 1, clock->cycles.mult);
++ overflow_cycles = min(overflow_cycles, clock->cycles.mask >> 1);
++
++ ns = cyclecounter_cyc2ns(&clock->cycles, overflow_cycles,
+ frac, &frac);
+- do_div(ns, NSEC_PER_SEC / 2 / HZ);
++ do_div(ns, NSEC_PER_SEC / HZ);
+ clock->overflow_period = ns;
+
+ mdev->clock_info_page = alloc_page(GFP_KERNEL);
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index ec524d97869d..5ef61132604e 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -317,7 +317,7 @@ nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app,
+ payload.dst_ipv4 = flow->daddr;
+
+ /* If entry has expired send dst IP with all other fields 0. */
+- if (!(neigh->nud_state & NUD_VALID)) {
++ if (!(neigh->nud_state & NUD_VALID) || neigh->dead) {
+ nfp_tun_del_route_from_cache(app, payload.dst_ipv4);
+ /* Trigger ARP to verify invalid neighbour state. */
+ neigh_event_send(neigh, NULL);
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index c7aac1fc99e8..764b25fa470c 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -8272,8 +8272,7 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ return rc;
+ }
+
+- /* override BIOS settings, use userspace tools to enable WOL */
+- __rtl8169_set_wol(tp, 0);
++ tp->saved_wolopts = __rtl8169_get_wol(tp);
+
+ if (rtl_tbi_enabled(tp)) {
+ tp->set_speed = rtl8169_set_speed_tbi;
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 05c1e8ef15e6..69a8106b9b98 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -514,7 +514,7 @@ static int phy_start_aneg_priv(struct phy_device *phydev, bool sync)
+ * negotiation may already be done and aneg interrupt may not be
+ * generated.
+ */
+- if (phy_interrupt_is_valid(phydev) && (phydev->state == PHY_AN)) {
++ if (phydev->irq != PHY_POLL && phydev->state == PHY_AN) {
+ err = phy_aneg_done(phydev);
+ if (err > 0) {
+ trigger = true;
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 4b170599fa5e..3b050817bbda 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -636,8 +636,61 @@ static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff)
+ return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr));
+ }
+
+-/* Add new entry to forwarding table -- assumes lock held */
++static struct vxlan_fdb *vxlan_fdb_alloc(struct vxlan_dev *vxlan,
++ const u8 *mac, __u16 state,
++ __be32 src_vni, __u8 ndm_flags)
++{
++ struct vxlan_fdb *f;
++
++ f = kmalloc(sizeof(*f), GFP_ATOMIC);
++ if (!f)
++ return NULL;
++ f->state = state;
++ f->flags = ndm_flags;
++ f->updated = f->used = jiffies;
++ f->vni = src_vni;
++ INIT_LIST_HEAD(&f->remotes);
++ memcpy(f->eth_addr, mac, ETH_ALEN);
++
++ return f;
++}
++
+ static int vxlan_fdb_create(struct vxlan_dev *vxlan,
++ const u8 *mac, union vxlan_addr *ip,
++ __u16 state, __be16 port, __be32 src_vni,
++ __be32 vni, __u32 ifindex, __u8 ndm_flags,
++ struct vxlan_fdb **fdb)
++{
++ struct vxlan_rdst *rd = NULL;
++ struct vxlan_fdb *f;
++ int rc;
++
++ if (vxlan->cfg.addrmax &&
++ vxlan->addrcnt >= vxlan->cfg.addrmax)
++ return -ENOSPC;
++
++ netdev_dbg(vxlan->dev, "add %pM -> %pIS\n", mac, ip);
++ f = vxlan_fdb_alloc(vxlan, mac, state, src_vni, ndm_flags);
++ if (!f)
++ return -ENOMEM;
++
++ rc = vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);
++ if (rc < 0) {
++ kfree(f);
++ return rc;
++ }
++
++ ++vxlan->addrcnt;
++ hlist_add_head_rcu(&f->hlist,
++ vxlan_fdb_head(vxlan, mac, src_vni));
++
++ *fdb = f;
++
++ return 0;
++}
++
++/* Add new entry to forwarding table -- assumes lock held */
++static int vxlan_fdb_update(struct vxlan_dev *vxlan,
+ const u8 *mac, union vxlan_addr *ip,
+ __u16 state, __u16 flags,
+ __be16 port, __be32 src_vni, __be32 vni,
+@@ -687,37 +740,17 @@ static int vxlan_fdb_create(struct vxlan_dev *vxlan,
+ if (!(flags & NLM_F_CREATE))
+ return -ENOENT;
+
+- if (vxlan->cfg.addrmax &&
+- vxlan->addrcnt >= vxlan->cfg.addrmax)
+- return -ENOSPC;
+-
+ /* Disallow replace to add a multicast entry */
+ if ((flags & NLM_F_REPLACE) &&
+ (is_multicast_ether_addr(mac) || is_zero_ether_addr(mac)))
+ return -EOPNOTSUPP;
+
+ netdev_dbg(vxlan->dev, "add %pM -> %pIS\n", mac, ip);
+- f = kmalloc(sizeof(*f), GFP_ATOMIC);
+- if (!f)
+- return -ENOMEM;
+-
+- notify = 1;
+- f->state = state;
+- f->flags = ndm_flags;
+- f->updated = f->used = jiffies;
+- f->vni = src_vni;
+- INIT_LIST_HEAD(&f->remotes);
+- memcpy(f->eth_addr, mac, ETH_ALEN);
+-
+- rc = vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);
+- if (rc < 0) {
+- kfree(f);
++ rc = vxlan_fdb_create(vxlan, mac, ip, state, port, src_vni,
++ vni, ifindex, ndm_flags, &f);
++ if (rc < 0)
+ return rc;
+- }
+-
+- ++vxlan->addrcnt;
+- hlist_add_head_rcu(&f->hlist,
+- vxlan_fdb_head(vxlan, mac, src_vni));
++ notify = 1;
+ }
+
+ if (notify) {
+@@ -741,13 +774,15 @@ static void vxlan_fdb_free(struct rcu_head *head)
+ kfree(f);
+ }
+
+-static void vxlan_fdb_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f)
++static void vxlan_fdb_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f,
++ bool do_notify)
+ {
+ netdev_dbg(vxlan->dev,
+ "delete %pM\n", f->eth_addr);
+
+ --vxlan->addrcnt;
+- vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_DELNEIGH);
++ if (do_notify)
++ vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_DELNEIGH);
+
+ hlist_del_rcu(&f->hlist);
+ call_rcu(&f->rcu, vxlan_fdb_free);
+@@ -863,7 +898,7 @@ static int vxlan_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+ return -EAFNOSUPPORT;
+
+ spin_lock_bh(&vxlan->hash_lock);
+- err = vxlan_fdb_create(vxlan, addr, &ip, ndm->ndm_state, flags,
++ err = vxlan_fdb_update(vxlan, addr, &ip, ndm->ndm_state, flags,
+ port, src_vni, vni, ifindex, ndm->ndm_flags);
+ spin_unlock_bh(&vxlan->hash_lock);
+
+@@ -897,7 +932,7 @@ static int __vxlan_fdb_delete(struct vxlan_dev *vxlan,
+ goto out;
+ }
+
+- vxlan_fdb_destroy(vxlan, f);
++ vxlan_fdb_destroy(vxlan, f, true);
+
+ out:
+ return 0;
+@@ -1006,7 +1041,7 @@ static bool vxlan_snoop(struct net_device *dev,
+
+ /* close off race between vxlan_flush and incoming packets */
+ if (netif_running(dev))
+- vxlan_fdb_create(vxlan, src_mac, src_ip,
++ vxlan_fdb_update(vxlan, src_mac, src_ip,
+ NUD_REACHABLE,
+ NLM_F_EXCL|NLM_F_CREATE,
+ vxlan->cfg.dst_port,
+@@ -2360,7 +2395,7 @@ static void vxlan_cleanup(struct timer_list *t)
+ "garbage collect %pM\n",
+ f->eth_addr);
+ f->state = NUD_STALE;
+- vxlan_fdb_destroy(vxlan, f);
++ vxlan_fdb_destroy(vxlan, f, true);
+ } else if (time_before(timeout, next_timer))
+ next_timer = timeout;
+ }
+@@ -2411,7 +2446,7 @@ static void vxlan_fdb_delete_default(struct vxlan_dev *vxlan, __be32 vni)
+ spin_lock_bh(&vxlan->hash_lock);
+ f = __vxlan_find_mac(vxlan, all_zeros_mac, vni);
+ if (f)
+- vxlan_fdb_destroy(vxlan, f);
++ vxlan_fdb_destroy(vxlan, f, true);
+ spin_unlock_bh(&vxlan->hash_lock);
+ }
+
+@@ -2465,7 +2500,7 @@ static void vxlan_flush(struct vxlan_dev *vxlan, bool do_all)
+ continue;
+ /* the all_zeros_mac entry is deleted at vxlan_uninit */
+ if (!is_zero_ether_addr(f->eth_addr))
+- vxlan_fdb_destroy(vxlan, f);
++ vxlan_fdb_destroy(vxlan, f, true);
+ }
+ }
+ spin_unlock_bh(&vxlan->hash_lock);
+@@ -3155,6 +3190,7 @@ static int __vxlan_dev_create(struct net *net, struct net_device *dev,
+ {
+ struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+ struct vxlan_dev *vxlan = netdev_priv(dev);
++ struct vxlan_fdb *f = NULL;
+ int err;
+
+ err = vxlan_dev_configure(net, dev, conf, false, extack);
+@@ -3168,24 +3204,35 @@ static int __vxlan_dev_create(struct net *net, struct net_device *dev,
+ err = vxlan_fdb_create(vxlan, all_zeros_mac,
+ &vxlan->default_dst.remote_ip,
+ NUD_REACHABLE | NUD_PERMANENT,
+- NLM_F_EXCL | NLM_F_CREATE,
+ vxlan->cfg.dst_port,
+ vxlan->default_dst.remote_vni,
+ vxlan->default_dst.remote_vni,
+ vxlan->default_dst.remote_ifindex,
+- NTF_SELF);
++ NTF_SELF, &f);
+ if (err)
+ return err;
+ }
+
+ err = register_netdevice(dev);
++ if (err)
++ goto errout;
++
++ err = rtnl_configure_link(dev, NULL);
+ if (err) {
+- vxlan_fdb_delete_default(vxlan, vxlan->default_dst.remote_vni);
+- return err;
++ unregister_netdevice(dev);
++ goto errout;
+ }
+
++ /* notify default fdb entry */
++ if (f)
++ vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_NEWNEIGH);
++
+ list_add(&vxlan->next, &vn->vxlan_list);
+ return 0;
++errout:
++ if (f)
++ vxlan_fdb_destroy(vxlan, f, false);
++ return err;
+ }
+
+ static int vxlan_nl2conf(struct nlattr *tb[], struct nlattr *data[],
+@@ -3414,6 +3461,7 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
+ struct vxlan_rdst *dst = &vxlan->default_dst;
+ struct vxlan_rdst old_dst;
+ struct vxlan_config conf;
++ struct vxlan_fdb *f = NULL;
+ int err;
+
+ err = vxlan_nl2conf(tb, data,
+@@ -3442,16 +3490,16 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
+ err = vxlan_fdb_create(vxlan, all_zeros_mac,
+ &dst->remote_ip,
+ NUD_REACHABLE | NUD_PERMANENT,
+- NLM_F_CREATE | NLM_F_APPEND,
+ vxlan->cfg.dst_port,
+ dst->remote_vni,
+ dst->remote_vni,
+ dst->remote_ifindex,
+- NTF_SELF);
++ NTF_SELF, &f);
+ if (err) {
+ spin_unlock_bh(&vxlan->hash_lock);
+ return err;
+ }
++ vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_NEWNEIGH);
+ }
+ spin_unlock_bh(&vxlan->hash_lock);
+ }
+diff --git a/drivers/staging/rtl8188eu/Kconfig b/drivers/staging/rtl8188eu/Kconfig
+index 673fdce25530..ff7832798a77 100644
+--- a/drivers/staging/rtl8188eu/Kconfig
++++ b/drivers/staging/rtl8188eu/Kconfig
+@@ -7,7 +7,6 @@ config R8188EU
+ select LIB80211
+ select LIB80211_CRYPT_WEP
+ select LIB80211_CRYPT_CCMP
+- select LIB80211_CRYPT_TKIP
+ ---help---
+ This option adds the Realtek RTL8188EU USB device such as TP-Link TL-WN725N.
+ If built as a module, it will be called r8188eu.
+diff --git a/drivers/staging/rtl8188eu/core/rtw_recv.c b/drivers/staging/rtl8188eu/core/rtw_recv.c
+index 05936a45eb93..c6857a5be12a 100644
+--- a/drivers/staging/rtl8188eu/core/rtw_recv.c
++++ b/drivers/staging/rtl8188eu/core/rtw_recv.c
+@@ -23,7 +23,6 @@
+ #include <mon.h>
+ #include <wifi.h>
+ #include <linux/vmalloc.h>
+-#include <net/lib80211.h>
+
+ #define ETHERNET_HEADER_SIZE 14 /* Ethernet Header Length */
+ #define LLC_HEADER_SIZE 6 /* LLC Header Length */
+@@ -221,20 +220,31 @@ u32 rtw_free_uc_swdec_pending_queue(struct adapter *adapter)
+ static int recvframe_chkmic(struct adapter *adapter,
+ struct recv_frame *precvframe)
+ {
+- int res = _SUCCESS;
+- struct rx_pkt_attrib *prxattrib = &precvframe->attrib;
+- struct sta_info *stainfo = rtw_get_stainfo(&adapter->stapriv, prxattrib->ta);
++ int i, res = _SUCCESS;
++ u32 datalen;
++ u8 miccode[8];
++ u8 bmic_err = false, brpt_micerror = true;
++ u8 *pframe, *payload, *pframemic;
++ u8 *mickey;
++ struct sta_info *stainfo;
++ struct rx_pkt_attrib *prxattrib = &precvframe->attrib;
++ struct security_priv *psecuritypriv = &adapter->securitypriv;
++
++ struct mlme_ext_priv *pmlmeext = &adapter->mlmeextpriv;
++ struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info);
++
++ stainfo = rtw_get_stainfo(&adapter->stapriv, &prxattrib->ta[0]);
+
+ if (prxattrib->encrypt == _TKIP_) {
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_info_,
++ ("\n %s: prxattrib->encrypt==_TKIP_\n", __func__));
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_info_,
++ ("\n %s: da=0x%02x:0x%02x:0x%02x:0x%02x:0x%02x:0x%02x\n",
++ __func__, prxattrib->ra[0], prxattrib->ra[1], prxattrib->ra[2],
++ prxattrib->ra[3], prxattrib->ra[4], prxattrib->ra[5]));
++
++ /* calculate mic code */
+ if (stainfo) {
+- int key_idx;
+- const int iv_len = 8, icv_len = 4, key_length = 32;
+- struct sk_buff *skb = precvframe->pkt;
+- u8 key[32], iv[8], icv[4], *pframe = skb->data;
+- void *crypto_private = NULL;
+- struct lib80211_crypto_ops *crypto_ops = try_then_request_module(lib80211_get_crypto_ops("TKIP"), "lib80211_crypt_tkip");
+- struct security_priv *psecuritypriv = &adapter->securitypriv;
+-
+ if (IS_MCAST(prxattrib->ra)) {
+ if (!psecuritypriv) {
+ res = _FAIL;
+@@ -243,58 +253,115 @@ static int recvframe_chkmic(struct adapter *adapter,
+ DBG_88E("\n %s: didn't install group key!!!!!!!!!!\n", __func__);
+ goto exit;
+ }
+- key_idx = prxattrib->key_index;
+- memcpy(key, psecuritypriv->dot118021XGrpKey[key_idx].skey, 16);
+- memcpy(key + 16, psecuritypriv->dot118021XGrprxmickey[key_idx].skey, 16);
++ mickey = &psecuritypriv->dot118021XGrprxmickey[prxattrib->key_index].skey[0];
++
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_info_,
++ ("\n %s: bcmc key\n", __func__));
+ } else {
+- key_idx = 0;
+- memcpy(key, stainfo->dot118021x_UncstKey.skey, 16);
+- memcpy(key + 16, stainfo->dot11tkiprxmickey.skey, 16);
++ mickey = &stainfo->dot11tkiprxmickey.skey[0];
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_err_,
++ ("\n %s: unicast key\n", __func__));
+ }
+
+- if (!crypto_ops) {
+- res = _FAIL;
+- goto exit_lib80211_tkip;
+- }
++ /* icv_len includes the mic code */
++ datalen = precvframe->pkt->len-prxattrib->hdrlen -
++ prxattrib->iv_len-prxattrib->icv_len-8;
++ pframe = precvframe->pkt->data;
++ payload = pframe+prxattrib->hdrlen+prxattrib->iv_len;
+
+- memcpy(iv, pframe + prxattrib->hdrlen, iv_len);
+- memcpy(icv, pframe + skb->len - icv_len, icv_len);
+- memmove(pframe + iv_len, pframe, prxattrib->hdrlen);
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_info_, ("\n prxattrib->iv_len=%d prxattrib->icv_len=%d\n", prxattrib->iv_len, prxattrib->icv_len));
++ rtw_seccalctkipmic(mickey, pframe, payload, datalen, &miccode[0],
++ (unsigned char)prxattrib->priority); /* mind the length of the data */
+
+- skb_pull(skb, iv_len);
+- skb_trim(skb, skb->len - icv_len);
++ pframemic = payload+datalen;
+
+- crypto_private = crypto_ops->init(key_idx);
+- if (!crypto_private) {
+- res = _FAIL;
+- goto exit_lib80211_tkip;
+- }
+- if (crypto_ops->set_key(key, key_length, NULL, crypto_private) < 0) {
+- res = _FAIL;
+- goto exit_lib80211_tkip;
+- }
+- if (crypto_ops->decrypt_msdu(skb, key_idx, prxattrib->hdrlen, crypto_private)) {
+- res = _FAIL;
+- goto exit_lib80211_tkip;
++ bmic_err = false;
++
++ for (i = 0; i < 8; i++) {
++ if (miccode[i] != *(pframemic+i)) {
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_err_,
++ ("%s: miccode[%d](%02x)!=*(pframemic+%d)(%02x) ",
++ __func__, i, miccode[i], i, *(pframemic + i)));
++ bmic_err = true;
++ }
+ }
+
+- memmove(pframe, pframe + iv_len, prxattrib->hdrlen);
+- skb_push(skb, iv_len);
+- skb_put(skb, icv_len);
++ if (bmic_err) {
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_err_,
++ ("\n *(pframemic-8)-*(pframemic-1)=0x%02x:0x%02x:0x%02x:0x%02x:0x%02x:0x%02x:0x%02x:0x%02x\n",
++ *(pframemic-8), *(pframemic-7), *(pframemic-6),
++ *(pframemic-5), *(pframemic-4), *(pframemic-3),
++ *(pframemic-2), *(pframemic-1)));
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_err_,
++ ("\n *(pframemic-16)-*(pframemic-9)=0x%02x:0x%02x:0x%02x:0x%02x:0x%02x:0x%02x:0x%02x:0x%02x\n",
++ *(pframemic-16), *(pframemic-15), *(pframemic-14),
++ *(pframemic-13), *(pframemic-12), *(pframemic-11),
++ *(pframemic-10), *(pframemic-9)));
++ {
++ uint i;
+
+- memcpy(pframe + prxattrib->hdrlen, iv, iv_len);
+- memcpy(pframe + skb->len - icv_len, icv, icv_len);
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_err_,
++ ("\n ======demp packet (len=%d)======\n",
++ precvframe->pkt->len));
++ for (i = 0; i < precvframe->pkt->len; i += 8) {
++ RT_TRACE(_module_rtl871x_recv_c_,
++ _drv_err_,
++ ("0x%02x:0x%02x:0x%02x:0x%02x:0x%02x:0x%02x:0x%02x:0x%02x",
++ *(precvframe->pkt->data+i),
++ *(precvframe->pkt->data+i+1),
++ *(precvframe->pkt->data+i+2),
++ *(precvframe->pkt->data+i+3),
++ *(precvframe->pkt->data+i+4),
++ *(precvframe->pkt->data+i+5),
++ *(precvframe->pkt->data+i+6),
++ *(precvframe->pkt->data+i+7)));
++ }
++ RT_TRACE(_module_rtl871x_recv_c_,
++ _drv_err_,
++ ("\n ====== demp packet end [len=%d]======\n",
++ precvframe->pkt->len));
++ RT_TRACE(_module_rtl871x_recv_c_,
++ _drv_err_,
++ ("\n hrdlen=%d,\n",
++ prxattrib->hdrlen));
++ }
+
+-exit_lib80211_tkip:
+- if (crypto_ops && crypto_private)
+- crypto_ops->deinit(crypto_private);
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_err_,
++ ("ra=0x%.2x 0x%.2x 0x%.2x 0x%.2x 0x%.2x 0x%.2x psecuritypriv->binstallGrpkey=%d ",
++ prxattrib->ra[0], prxattrib->ra[1], prxattrib->ra[2],
++ prxattrib->ra[3], prxattrib->ra[4], prxattrib->ra[5], psecuritypriv->binstallGrpkey));
++
++ /* double-check key_index because of a possible timing issue; */
++ /* comparing with psecuritypriv->dot118021XGrpKeyid would also hit the timing issue */
++ if ((IS_MCAST(prxattrib->ra) == true) && (prxattrib->key_index != pmlmeinfo->key_index))
++ brpt_micerror = false;
++
++ if ((prxattrib->bdecrypted) && (brpt_micerror)) {
++ rtw_handle_tkip_mic_err(adapter, (u8)IS_MCAST(prxattrib->ra));
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_err_, (" mic error :prxattrib->bdecrypted=%d ", prxattrib->bdecrypted));
++ DBG_88E(" mic error :prxattrib->bdecrypted=%d\n", prxattrib->bdecrypted);
++ } else {
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_err_, (" mic error :prxattrib->bdecrypted=%d ", prxattrib->bdecrypted));
++ DBG_88E(" mic error :prxattrib->bdecrypted=%d\n", prxattrib->bdecrypted);
++ }
++ res = _FAIL;
++ } else {
++ /* mic checked ok */
++ if ((!psecuritypriv->bcheck_grpkey) && (IS_MCAST(prxattrib->ra))) {
++ psecuritypriv->bcheck_grpkey = true;
++ RT_TRACE(_module_rtl871x_recv_c_, _drv_err_, ("psecuritypriv->bcheck_grpkey = true"));
++ }
++ }
+ } else {
+ RT_TRACE(_module_rtl871x_recv_c_, _drv_err_,
+ ("%s: rtw_get_stainfo==NULL!!!\n", __func__));
+ }
++
++ skb_trim(precvframe->pkt, precvframe->pkt->len - 8);
+ }
+
+ exit:
++
+ return res;
+ }
+
+diff --git a/drivers/staging/rtl8188eu/core/rtw_security.c b/drivers/staging/rtl8188eu/core/rtw_security.c
+index bfe0b217e679..67a2490f055e 100644
+--- a/drivers/staging/rtl8188eu/core/rtw_security.c
++++ b/drivers/staging/rtl8188eu/core/rtw_security.c
+@@ -650,71 +650,71 @@ u32 rtw_tkip_encrypt(struct adapter *padapter, u8 *pxmitframe)
+ return res;
+ }
+
++/* The hlen doesn't include the IV */
+ u32 rtw_tkip_decrypt(struct adapter *padapter, u8 *precvframe)
+-{
+- struct rx_pkt_attrib *prxattrib = &((struct recv_frame *)precvframe)->attrib;
+- u32 res = _SUCCESS;
++{ /* exclude ICV */
++ u16 pnl;
++ u32 pnh;
++ u8 rc4key[16];
++ u8 ttkey[16];
++ u8 crc[4];
++ struct arc4context mycontext;
++ int length;
++
++ u8 *pframe, *payload, *iv, *prwskey;
++ union pn48 dot11txpn;
++ struct sta_info *stainfo;
++ struct rx_pkt_attrib *prxattrib = &((struct recv_frame *)precvframe)->attrib;
++ struct security_priv *psecuritypriv = &padapter->securitypriv;
++ u32 res = _SUCCESS;
++
++
++ pframe = (unsigned char *)((struct recv_frame *)precvframe)->pkt->data;
+
+ /* 4 start to decrypt recvframe */
+ if (prxattrib->encrypt == _TKIP_) {
+- struct sta_info *stainfo = rtw_get_stainfo(&padapter->stapriv, prxattrib->ta);
+-
++ stainfo = rtw_get_stainfo(&padapter->stapriv, &prxattrib->ta[0]);
+ if (stainfo) {
+- int key_idx;
+- const int iv_len = 8, icv_len = 4, key_length = 32;
+- void *crypto_private = NULL;
+- struct sk_buff *skb = ((struct recv_frame *)precvframe)->pkt;
+- u8 key[32], iv[8], icv[4], *pframe = skb->data;
+- struct lib80211_crypto_ops *crypto_ops = try_then_request_module(lib80211_get_crypto_ops("TKIP"), "lib80211_crypt_tkip");
+- struct security_priv *psecuritypriv = &padapter->securitypriv;
+-
+ if (IS_MCAST(prxattrib->ra)) {
+ if (!psecuritypriv->binstallGrpkey) {
+ res = _FAIL;
+ DBG_88E("%s:rx bc/mc packets, but didn't install group key!!!!!!!!!!\n", __func__);
+ goto exit;
+ }
+- key_idx = prxattrib->key_index;
+- memcpy(key, psecuritypriv->dot118021XGrpKey[key_idx].skey, 16);
+- memcpy(key + 16, psecuritypriv->dot118021XGrprxmickey[key_idx].skey, 16);
++ prwskey = psecuritypriv->dot118021XGrpKey[prxattrib->key_index].skey;
+ } else {
+- key_idx = 0;
+- memcpy(key, stainfo->dot118021x_UncstKey.skey, 16);
+- memcpy(key + 16, stainfo->dot11tkiprxmickey.skey, 16);
++ RT_TRACE(_module_rtl871x_security_c_, _drv_err_, ("%s: stainfo!= NULL!!!\n", __func__));
++ prwskey = &stainfo->dot118021x_UncstKey.skey[0];
+ }
+
+- if (!crypto_ops) {
+- res = _FAIL;
+- goto exit_lib80211_tkip;
+- }
++ iv = pframe+prxattrib->hdrlen;
++ payload = pframe+prxattrib->iv_len+prxattrib->hdrlen;
++ length = ((struct recv_frame *)precvframe)->pkt->len-prxattrib->hdrlen-prxattrib->iv_len;
+
+- memcpy(iv, pframe + prxattrib->hdrlen, iv_len);
+- memcpy(icv, pframe + skb->len - icv_len, icv_len);
++ GET_TKIP_PN(iv, dot11txpn);
+
+- crypto_private = crypto_ops->init(key_idx);
+- if (!crypto_private) {
+- res = _FAIL;
+- goto exit_lib80211_tkip;
+- }
+- if (crypto_ops->set_key(key, key_length, NULL, crypto_private) < 0) {
+- res = _FAIL;
+- goto exit_lib80211_tkip;
+- }
+- if (crypto_ops->decrypt_mpdu(skb, prxattrib->hdrlen, crypto_private)) {
+- res = _FAIL;
+- goto exit_lib80211_tkip;
+- }
++ pnl = (u16)(dot11txpn.val);
++ pnh = (u32)(dot11txpn.val>>16);
+
+- memmove(pframe, pframe + iv_len, prxattrib->hdrlen);
+- skb_push(skb, iv_len);
+- skb_put(skb, icv_len);
++ phase1((u16 *)&ttkey[0], prwskey, &prxattrib->ta[0], pnh);
++ phase2(&rc4key[0], prwskey, (unsigned short *)&ttkey[0], pnl);
+
+- memcpy(pframe + prxattrib->hdrlen, iv, iv_len);
+- memcpy(pframe + skb->len - icv_len, icv, icv_len);
++ /* 4 decrypt payload include icv */
+
+-exit_lib80211_tkip:
+- if (crypto_ops && crypto_private)
+- crypto_ops->deinit(crypto_private);
++ arcfour_init(&mycontext, rc4key, 16);
++ arcfour_encrypt(&mycontext, payload, payload, length);
++
++ *((__le32 *)crc) = getcrc32(payload, length-4);
++
++ if (crc[3] != payload[length-1] ||
++ crc[2] != payload[length-2] ||
++ crc[1] != payload[length-3] ||
++ crc[0] != payload[length-4]) {
++ RT_TRACE(_module_rtl871x_security_c_, _drv_err_,
++ ("rtw_wep_decrypt:icv error crc (%4ph)!=payload (%4ph)\n",
++ &crc, &payload[length-4]));
++ res = _FAIL;
++ }
+ } else {
+ RT_TRACE(_module_rtl871x_security_c_, _drv_err_, ("rtw_tkip_decrypt: stainfo==NULL!!!\n"));
+ res = _FAIL;
+diff --git a/drivers/staging/speakup/speakup_soft.c b/drivers/staging/speakup/speakup_soft.c
+index 0a1a7c259ab0..2f8f4ed62e40 100644
+--- a/drivers/staging/speakup/speakup_soft.c
++++ b/drivers/staging/speakup/speakup_soft.c
+@@ -197,11 +197,15 @@ static ssize_t softsynthx_read(struct file *fp, char __user *buf, size_t count,
+ int chars_sent = 0;
+ char __user *cp;
+ char *init;
++ size_t bytes_per_ch = unicode ? 3 : 1;
+ u16 ch;
+ int empty;
+ unsigned long flags;
+ DEFINE_WAIT(wait);
+
++ if (count < bytes_per_ch)
++ return -EINVAL;
++
+ spin_lock_irqsave(&speakup_info.spinlock, flags);
+ while (1) {
+ prepare_to_wait(&speakup_event, &wait, TASK_INTERRUPTIBLE);
+@@ -227,7 +231,7 @@ static ssize_t softsynthx_read(struct file *fp, char __user *buf, size_t count,
+ init = get_initstring();
+
+ /* Keep 3 bytes available for a 16bit UTF-8-encoded character */
+- while (chars_sent <= count - 3) {
++ while (chars_sent <= count - bytes_per_ch) {
+ if (speakup_info.flushing) {
+ speakup_info.flushing = 0;
+ ch = '\x18';
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 998b32d0167e..75c4623ad779 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1831,6 +1831,9 @@ static const struct usb_device_id acm_ids[] = {
+ { USB_DEVICE(0x09d8, 0x0320), /* Elatec GmbH TWN3 */
+ .driver_info = NO_UNION_NORMAL, /* has misplaced union descriptor */
+ },
++ { USB_DEVICE(0x0ca6, 0xa050), /* Castles VEGA3000 */
++ .driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */
++ },
+
+ { USB_DEVICE(0x2912, 0x0001), /* ATOL FPrint */
+ .driver_info = CLEAR_HALT_CONDITIONS,
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index e3bf65e213cd..40c2d9878190 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -1142,10 +1142,14 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+
+ if (!udev || udev->state == USB_STATE_NOTATTACHED) {
+ /* Tell hub_wq to disconnect the device or
+- * check for a new connection
++ * check for a new connection or over-current condition.
++ * Based on the USB 2.0 Spec, Section 11.12.5,
++ * C_PORT_OVER_CURRENT could be set while
++ * PORT_OVER_CURRENT is not. So check for either of them.
+ */
+ if (udev || (portstatus & USB_PORT_STAT_CONNECTION) ||
+- (portstatus & USB_PORT_STAT_OVERCURRENT))
++ (portstatus & USB_PORT_STAT_OVERCURRENT) ||
++ (portchange & USB_PORT_STAT_C_OVERCURRENT))
+ set_bit(port1, hub->change_bits);
+
+ } else if (portstatus & USB_PORT_STAT_ENABLE) {
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index c51b73b3e048..3a5f0005fae5 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -2627,34 +2627,29 @@ static void dwc2_hc_init_xfer(struct dwc2_hsotg *hsotg,
+
+ #define DWC2_USB_DMA_ALIGN 4
+
+-struct dma_aligned_buffer {
+- void *kmalloc_ptr;
+- void *old_xfer_buffer;
+- u8 data[0];
+-};
+-
+ static void dwc2_free_dma_aligned_buffer(struct urb *urb)
+ {
+- struct dma_aligned_buffer *temp;
++ void *stored_xfer_buffer;
+
+ if (!(urb->transfer_flags & URB_ALIGNED_TEMP_BUFFER))
+ return;
+
+- temp = container_of(urb->transfer_buffer,
+- struct dma_aligned_buffer, data);
++ /* Restore urb->transfer_buffer from the end of the allocated area */
++ memcpy(&stored_xfer_buffer, urb->transfer_buffer +
++ urb->transfer_buffer_length, sizeof(urb->transfer_buffer));
+
+ if (usb_urb_dir_in(urb))
+- memcpy(temp->old_xfer_buffer, temp->data,
++ memcpy(stored_xfer_buffer, urb->transfer_buffer,
+ urb->transfer_buffer_length);
+- urb->transfer_buffer = temp->old_xfer_buffer;
+- kfree(temp->kmalloc_ptr);
++ kfree(urb->transfer_buffer);
++ urb->transfer_buffer = stored_xfer_buffer;
+
+ urb->transfer_flags &= ~URB_ALIGNED_TEMP_BUFFER;
+ }
+
+ static int dwc2_alloc_dma_aligned_buffer(struct urb *urb, gfp_t mem_flags)
+ {
+- struct dma_aligned_buffer *temp, *kmalloc_ptr;
++ void *kmalloc_ptr;
+ size_t kmalloc_size;
+
+ if (urb->num_sgs || urb->sg ||
+@@ -2662,22 +2657,29 @@ static int dwc2_alloc_dma_aligned_buffer(struct urb *urb, gfp_t mem_flags)
+ !((uintptr_t)urb->transfer_buffer & (DWC2_USB_DMA_ALIGN - 1)))
+ return 0;
+
+- /* Allocate a buffer with enough padding for alignment */
++ /*
++ * Allocate a buffer with enough padding for the original transfer_buffer
++ * pointer. This allocation is guaranteed to be aligned properly for
++ * DMA.
++ */
+ kmalloc_size = urb->transfer_buffer_length +
+- sizeof(struct dma_aligned_buffer) + DWC2_USB_DMA_ALIGN - 1;
++ sizeof(urb->transfer_buffer);
+
+ kmalloc_ptr = kmalloc(kmalloc_size, mem_flags);
+ if (!kmalloc_ptr)
+ return -ENOMEM;
+
+- /* Position our struct dma_aligned_buffer such that data is aligned */
+- temp = PTR_ALIGN(kmalloc_ptr + 1, DWC2_USB_DMA_ALIGN) - 1;
+- temp->kmalloc_ptr = kmalloc_ptr;
+- temp->old_xfer_buffer = urb->transfer_buffer;
++ /*
++ * Store the value of the original urb->transfer_buffer pointer at the
++ * end of the allocation for later referencing
++ */
++ memcpy(kmalloc_ptr + urb->transfer_buffer_length,
++ &urb->transfer_buffer, sizeof(urb->transfer_buffer));
++
+ if (usb_urb_dir_out(urb))
+- memcpy(temp->data, urb->transfer_buffer,
++ memcpy(kmalloc_ptr, urb->transfer_buffer,
+ urb->transfer_buffer_length);
+- urb->transfer_buffer = temp->data;
++ urb->transfer_buffer = kmalloc_ptr;
+
+ urb->transfer_flags |= URB_ALIGNED_TEMP_BUFFER;
+
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 63a7cb87514a..330c591fd7d6 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -1816,7 +1816,6 @@ unknown:
+ if (cdev->use_os_string && cdev->os_desc_config &&
+ (ctrl->bRequestType & USB_TYPE_VENDOR) &&
+ ctrl->bRequest == cdev->b_vendor_code) {
+- struct usb_request *req;
+ struct usb_configuration *os_desc_cfg;
+ u8 *buf;
+ int interface;
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 0294e4f18873..7e57439ac282 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -3242,7 +3242,7 @@ static int ffs_func_setup(struct usb_function *f,
+ __ffs_event_add(ffs, FUNCTIONFS_SETUP);
+ spin_unlock_irqrestore(&ffs->ev.waitq.lock, flags);
+
+- return USB_GADGET_DELAYED_STATUS;
++ return creq->wLength == 0 ? USB_GADGET_DELAYED_STATUS : 0;
+ }
+
+ static bool ffs_func_req_match(struct usb_function *f,
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 61c3dc2f3be5..5fb4319d7fd1 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -2981,6 +2981,7 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
+ if (!list_empty(&ep->ring->td_list)) {
+ dev_err(&udev->dev, "EP not empty, refuse reset\n");
+ spin_unlock_irqrestore(&xhci->lock, flags);
++ xhci_free_command(xhci, cfg_cmd);
+ goto cleanup;
+ }
+ xhci_queue_stop_endpoint(xhci, stop_cmd, udev->slot_id, ep_index, 0);
+diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
+index 2da5f054257a..7cd63b0c1a46 100644
+--- a/drivers/vfio/vfio_iommu_spapr_tce.c
++++ b/drivers/vfio/vfio_iommu_spapr_tce.c
+@@ -467,7 +467,7 @@ static int tce_iommu_prereg_ua_to_hpa(struct tce_container *container,
+ if (!mem)
+ return -EINVAL;
+
+- ret = mm_iommu_ua_to_hpa(mem, tce, phpa);
++ ret = mm_iommu_ua_to_hpa(mem, tce, shift, phpa);
+ if (ret)
+ return -EINVAL;
+
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 9c9b3768b350..9cf770150539 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -342,6 +342,7 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
+ struct pipe_inode_info *pipe, size_t len,
+ unsigned int flags);
+
++void tcp_enter_quickack_mode(struct sock *sk);
+ static inline void tcp_dec_quickack_mode(struct sock *sk,
+ const unsigned int pkts)
+ {
+@@ -535,6 +536,7 @@ void tcp_send_fin(struct sock *sk);
+ void tcp_send_active_reset(struct sock *sk, gfp_t priority);
+ int tcp_send_synack(struct sock *);
+ void tcp_push_one(struct sock *, unsigned int mss_now);
++void __tcp_send_ack(struct sock *sk, u32 rcv_nxt);
+ void tcp_send_ack(struct sock *sk);
+ void tcp_send_delayed_ack(struct sock *sk);
+ void tcp_send_loss_probe(struct sock *sk);
+@@ -826,6 +828,11 @@ struct tcp_skb_cb {
+ * as TCP moves IP6CB into a different location in skb->cb[]
+ */
+ static inline int tcp_v6_iif(const struct sk_buff *skb)
++{
++ return TCP_SKB_CB(skb)->header.h6.iif;
++}
++
++static inline int tcp_v6_iif_l3_slave(const struct sk_buff *skb)
+ {
+ bool l3_slave = ipv6_l3mdev_skb(TCP_SKB_CB(skb)->header.h6.flags);
+
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 19f6ab5de6e1..3dab3c7b6831 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2749,9 +2749,12 @@ int rtnl_configure_link(struct net_device *dev, const struct ifinfomsg *ifm)
+ return err;
+ }
+
+- dev->rtnl_link_state = RTNL_LINK_INITIALIZED;
+-
+- __dev_notify_flags(dev, old_flags, ~0U);
++ if (dev->rtnl_link_state == RTNL_LINK_INITIALIZED) {
++ __dev_notify_flags(dev, old_flags, 0U);
++ } else {
++ dev->rtnl_link_state = RTNL_LINK_INITIALIZED;
++ __dev_notify_flags(dev, old_flags, ~0U);
++ }
+ return 0;
+ }
+ EXPORT_SYMBOL(rtnl_configure_link);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index a84d69c047ac..b2d457df7d86 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3705,6 +3705,7 @@ normal:
+ net_warn_ratelimited(
+ "skb_segment: too many frags: %u %u\n",
+ pos, mss);
++ err = -EINVAL;
+ goto err;
+ }
+
+@@ -3738,11 +3739,10 @@ skip_fraglist:
+
+ perform_csum_check:
+ if (!csum) {
+- if (skb_has_shared_frag(nskb)) {
+- err = __skb_linearize(nskb);
+- if (err)
+- goto err;
+- }
++ if (skb_has_shared_frag(nskb) &&
++ __skb_linearize(nskb))
++ goto err;
++
+ if (!nskb->remcsum_offload)
+ nskb->ip_summed = CHECKSUM_NONE;
+ SKB_GSO_CB(nskb)->csum =
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 3b6d02854e57..f82843756534 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2270,9 +2270,9 @@ int sk_alloc_sg(struct sock *sk, int len, struct scatterlist *sg,
+ pfrag->offset += use;
+
+ sge = sg + sg_curr - 1;
+- if (sg_curr > first_coalesce && sg_page(sg) == pfrag->page &&
+- sg->offset + sg->length == orig_offset) {
+- sg->length += use;
++ if (sg_curr > first_coalesce && sg_page(sge) == pfrag->page &&
++ sge->offset + sge->length == orig_offset) {
++ sge->length += use;
+ } else {
+ sge = sg + sg_curr;
+ sg_unmark_end(sge);
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index b26a81a7de42..4af0625344a0 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -1201,8 +1201,7 @@ static void igmpv3_del_delrec(struct in_device *in_dev, struct ip_mc_list *im)
+ if (pmc) {
+ im->interface = pmc->interface;
+ im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
+- im->sfmode = pmc->sfmode;
+- if (pmc->sfmode == MCAST_INCLUDE) {
++ if (im->sfmode == MCAST_INCLUDE) {
+ im->tomb = pmc->tomb;
+ im->sources = pmc->sources;
+ for (psf = im->sources; psf; psf = psf->sf_next)
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index d54abc097800..267b69cfea71 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -523,6 +523,8 @@ static void ip_copy_metadata(struct sk_buff *to, struct sk_buff *from)
+ to->dev = from->dev;
+ to->mark = from->mark;
+
++ skb_copy_hash(to, from);
++
+ /* Copy the flags to each fragment. */
+ IPCB(to)->flags = IPCB(from)->flags;
+
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index 57bbb060faaf..7c14c7818ead 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -148,15 +148,18 @@ static void ip_cmsg_recv_dstaddr(struct msghdr *msg, struct sk_buff *skb)
+ {
+ struct sockaddr_in sin;
+ const struct iphdr *iph = ip_hdr(skb);
+- __be16 *ports = (__be16 *)skb_transport_header(skb);
++ __be16 *ports;
++ int end;
+
+- if (skb_transport_offset(skb) + 4 > (int)skb->len)
++ end = skb_transport_offset(skb) + 4;
++ if (end > 0 && !pskb_may_pull(skb, end))
+ return;
+
+ /* All current transport protocols have the port numbers in the
+ * first four bytes of the transport header and this function is
+ * written with this assumption in mind.
+ */
++ ports = (__be16 *)skb_transport_header(skb);
+
+ sin.sin_family = AF_INET;
+ sin.sin_addr.s_addr = iph->daddr;
+diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
+index 5f5e5936760e..c78fb53988a1 100644
+--- a/net/ipv4/tcp_dctcp.c
++++ b/net/ipv4/tcp_dctcp.c
+@@ -131,23 +131,14 @@ static void dctcp_ce_state_0_to_1(struct sock *sk)
+ struct dctcp *ca = inet_csk_ca(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+
+- /* State has changed from CE=0 to CE=1 and delayed
+- * ACK has not sent yet.
+- */
+- if (!ca->ce_state && ca->delayed_ack_reserved) {
+- u32 tmp_rcv_nxt;
+-
+- /* Save current rcv_nxt. */
+- tmp_rcv_nxt = tp->rcv_nxt;
+-
+- /* Generate previous ack with CE=0. */
+- tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
+- tp->rcv_nxt = ca->prior_rcv_nxt;
+-
+- tcp_send_ack(sk);
+-
+- /* Recover current rcv_nxt. */
+- tp->rcv_nxt = tmp_rcv_nxt;
++ if (!ca->ce_state) {
++ /* State has changed from CE=0 to CE=1, force an immediate
++ * ACK to reflect the new CE state. If an ACK was delayed,
++ * send that first to reflect the prior CE state.
++ */
++ if (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER)
++ __tcp_send_ack(sk, ca->prior_rcv_nxt);
++ tcp_enter_quickack_mode(sk);
+ }
+
+ ca->prior_rcv_nxt = tp->rcv_nxt;
+@@ -161,23 +152,14 @@ static void dctcp_ce_state_1_to_0(struct sock *sk)
+ struct dctcp *ca = inet_csk_ca(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+
+- /* State has changed from CE=1 to CE=0 and delayed
+- * ACK has not sent yet.
+- */
+- if (ca->ce_state && ca->delayed_ack_reserved) {
+- u32 tmp_rcv_nxt;
+-
+- /* Save current rcv_nxt. */
+- tmp_rcv_nxt = tp->rcv_nxt;
+-
+- /* Generate previous ack with CE=1. */
+- tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
+- tp->rcv_nxt = ca->prior_rcv_nxt;
+-
+- tcp_send_ack(sk);
+-
+- /* Recover current rcv_nxt. */
+- tp->rcv_nxt = tmp_rcv_nxt;
++ if (ca->ce_state) {
++ /* State has changed from CE=1 to CE=0, force an immediate
++ * ACK to reflect the new CE state. If an ACK was delayed,
++ * send that first to reflect the prior CE state.
++ */
++ if (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER)
++ __tcp_send_ack(sk, ca->prior_rcv_nxt);
++ tcp_enter_quickack_mode(sk);
+ }
+
+ ca->prior_rcv_nxt = tp->rcv_nxt;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 1f25ebab25d2..0f5e9510c3fa 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -195,13 +195,14 @@ static void tcp_incr_quickack(struct sock *sk)
+ icsk->icsk_ack.quick = min(quickacks, TCP_MAX_QUICKACKS);
+ }
+
+-static void tcp_enter_quickack_mode(struct sock *sk)
++void tcp_enter_quickack_mode(struct sock *sk)
+ {
+ struct inet_connection_sock *icsk = inet_csk(sk);
+ tcp_incr_quickack(sk);
+ icsk->icsk_ack.pingpong = 0;
+ icsk->icsk_ack.ato = TCP_ATO_MIN;
+ }
++EXPORT_SYMBOL(tcp_enter_quickack_mode);
+
+ /* Send ACKs quickly, if "quick" count is not exhausted
+ * and the session is not interactive.
+@@ -4298,6 +4299,23 @@ static bool tcp_try_coalesce(struct sock *sk,
+ return true;
+ }
+
++static bool tcp_ooo_try_coalesce(struct sock *sk,
++ struct sk_buff *to,
++ struct sk_buff *from,
++ bool *fragstolen)
++{
++ bool res = tcp_try_coalesce(sk, to, from, fragstolen);
++
++ /* In case tcp_drop() is called later, update to->gso_segs */
++ if (res) {
++ u32 gso_segs = max_t(u16, 1, skb_shinfo(to)->gso_segs) +
++ max_t(u16, 1, skb_shinfo(from)->gso_segs);
++
++ skb_shinfo(to)->gso_segs = min_t(u32, gso_segs, 0xFFFF);
++ }
++ return res;
++}
++
+ static void tcp_drop(struct sock *sk, struct sk_buff *skb)
+ {
+ sk_drops_add(sk, skb);
+@@ -4421,8 +4439,8 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
+ /* In the typical case, we are adding an skb to the end of the list.
+ * Use of ooo_last_skb avoids the O(Log(N)) rbtree lookup.
+ */
+- if (tcp_try_coalesce(sk, tp->ooo_last_skb,
+- skb, &fragstolen)) {
++ if (tcp_ooo_try_coalesce(sk, tp->ooo_last_skb,
++ skb, &fragstolen)) {
+ coalesce_done:
+ tcp_grow_window(sk, skb);
+ kfree_skb_partial(skb, fragstolen);
+@@ -4450,7 +4468,7 @@ coalesce_done:
+ /* All the bits are present. Drop. */
+ NET_INC_STATS(sock_net(sk),
+ LINUX_MIB_TCPOFOMERGE);
+- __kfree_skb(skb);
++ tcp_drop(sk, skb);
+ skb = NULL;
+ tcp_dsack_set(sk, seq, end_seq);
+ goto add_sack;
+@@ -4469,11 +4487,11 @@ coalesce_done:
+ TCP_SKB_CB(skb1)->end_seq);
+ NET_INC_STATS(sock_net(sk),
+ LINUX_MIB_TCPOFOMERGE);
+- __kfree_skb(skb1);
++ tcp_drop(sk, skb1);
+ goto merge_right;
+ }
+- } else if (tcp_try_coalesce(sk, skb1,
+- skb, &fragstolen)) {
++ } else if (tcp_ooo_try_coalesce(sk, skb1,
++ skb, &fragstolen)) {
+ goto coalesce_done;
+ }
+ p = &parent->rb_right;
+@@ -4833,6 +4851,7 @@ end:
+ static void tcp_collapse_ofo_queue(struct sock *sk)
+ {
+ struct tcp_sock *tp = tcp_sk(sk);
++ u32 range_truesize, sum_tiny = 0;
+ struct sk_buff *skb, *head;
+ u32 start, end;
+
+@@ -4844,6 +4863,7 @@ new_range:
+ }
+ start = TCP_SKB_CB(skb)->seq;
+ end = TCP_SKB_CB(skb)->end_seq;
++ range_truesize = skb->truesize;
+
+ for (head = skb;;) {
+ skb = skb_rb_next(skb);
+@@ -4854,11 +4874,20 @@ new_range:
+ if (!skb ||
+ after(TCP_SKB_CB(skb)->seq, end) ||
+ before(TCP_SKB_CB(skb)->end_seq, start)) {
+- tcp_collapse(sk, NULL, &tp->out_of_order_queue,
+- head, skb, start, end);
++ /* Do not attempt collapsing tiny skbs */
++ if (range_truesize != head->truesize ||
++ end - start >= SKB_WITH_OVERHEAD(SK_MEM_QUANTUM)) {
++ tcp_collapse(sk, NULL, &tp->out_of_order_queue,
++ head, skb, start, end);
++ } else {
++ sum_tiny += range_truesize;
++ if (sum_tiny > sk->sk_rcvbuf >> 3)
++ return;
++ }
+ goto new_range;
+ }
+
++ range_truesize += skb->truesize;
+ if (unlikely(before(TCP_SKB_CB(skb)->seq, start)))
+ start = TCP_SKB_CB(skb)->seq;
+ if (after(TCP_SKB_CB(skb)->end_seq, end))
+@@ -4873,6 +4902,7 @@ new_range:
+ * 2) not add too big latencies if thousands of packets sit there.
+ * (But if application shrinks SO_RCVBUF, we could still end up
+ * freeing whole queue here)
++ * 3) Drop at least 12.5 % of sk_rcvbuf to avoid malicious attacks.
+ *
+ * Return true if queue has shrunk.
+ */
+@@ -4880,20 +4910,26 @@ static bool tcp_prune_ofo_queue(struct sock *sk)
+ {
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct rb_node *node, *prev;
++ int goal;
+
+ if (RB_EMPTY_ROOT(&tp->out_of_order_queue))
+ return false;
+
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_OFOPRUNED);
++ goal = sk->sk_rcvbuf >> 3;
+ node = &tp->ooo_last_skb->rbnode;
+ do {
+ prev = rb_prev(node);
+ rb_erase(node, &tp->out_of_order_queue);
++ goal -= rb_to_skb(node)->truesize;
+ tcp_drop(sk, rb_to_skb(node));
+- sk_mem_reclaim(sk);
+- if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf &&
+- !tcp_under_memory_pressure(sk))
+- break;
++ if (!prev || goal <= 0) {
++ sk_mem_reclaim(sk);
++ if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf &&
++ !tcp_under_memory_pressure(sk))
++ break;
++ goal = sk->sk_rcvbuf >> 3;
++ }
+ node = prev;
+ } while (node);
+ tp->ooo_last_skb = rb_to_skb(prev);
+@@ -4928,6 +4964,9 @@ static int tcp_prune_queue(struct sock *sk)
+ else if (tcp_under_memory_pressure(sk))
+ tp->rcv_ssthresh = min(tp->rcv_ssthresh, 4U * tp->advmss);
+
++ if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf)
++ return 0;
++
+ tcp_collapse_ofo_queue(sk);
+ if (!skb_queue_empty(&sk->sk_receive_queue))
+ tcp_collapse(sk, &sk->sk_receive_queue, NULL,
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index d07e34f8e309..3049d10a1476 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -160,8 +160,13 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
+ }
+
+ /* Account for an ACK we sent. */
+-static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts)
++static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
++ u32 rcv_nxt)
+ {
++ struct tcp_sock *tp = tcp_sk(sk);
++
++ if (unlikely(rcv_nxt != tp->rcv_nxt))
++ return; /* Special ACK sent by DCTCP to reflect ECN */
+ tcp_dec_quickack_mode(sk, pkts);
+ inet_csk_clear_xmit_timer(sk, ICSK_TIME_DACK);
+ }
+@@ -1031,8 +1036,8 @@ static void tcp_update_skb_after_send(struct tcp_sock *tp, struct sk_buff *skb)
+ * We are working here with either a clone of the original
+ * SKB, or a fresh unique copy made by the retransmit engine.
+ */
+-static int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb, int clone_it,
+- gfp_t gfp_mask)
++static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
++ int clone_it, gfp_t gfp_mask, u32 rcv_nxt)
+ {
+ const struct inet_connection_sock *icsk = inet_csk(sk);
+ struct inet_sock *inet;
+@@ -1108,7 +1113,7 @@ static int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb, int clone_it,
+ th->source = inet->inet_sport;
+ th->dest = inet->inet_dport;
+ th->seq = htonl(tcb->seq);
+- th->ack_seq = htonl(tp->rcv_nxt);
++ th->ack_seq = htonl(rcv_nxt);
+ *(((__be16 *)th) + 6) = htons(((tcp_header_size >> 2) << 12) |
+ tcb->tcp_flags);
+
+@@ -1149,7 +1154,7 @@ static int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb, int clone_it,
+ icsk->icsk_af_ops->send_check(sk, skb);
+
+ if (likely(tcb->tcp_flags & TCPHDR_ACK))
+- tcp_event_ack_sent(sk, tcp_skb_pcount(skb));
++ tcp_event_ack_sent(sk, tcp_skb_pcount(skb), rcv_nxt);
+
+ if (skb->len != tcp_header_size) {
+ tcp_event_data_sent(tp, sk);
+@@ -1186,6 +1191,13 @@ static int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb, int clone_it,
+ return err;
+ }
+
++static int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb, int clone_it,
++ gfp_t gfp_mask)
++{
++ return __tcp_transmit_skb(sk, skb, clone_it, gfp_mask,
++ tcp_sk(sk)->rcv_nxt);
++}
++
+ /* This routine just queues the buffer for sending.
+ *
+ * NOTE: probe0 timer is not checked, do not forget tcp_push_pending_frames,
+@@ -3583,7 +3595,7 @@ void tcp_send_delayed_ack(struct sock *sk)
+ }
+
+ /* This routine sends an ack and also updates the window. */
+-void tcp_send_ack(struct sock *sk)
++void __tcp_send_ack(struct sock *sk, u32 rcv_nxt)
+ {
+ struct sk_buff *buff;
+
+@@ -3618,9 +3630,14 @@ void tcp_send_ack(struct sock *sk)
+ skb_set_tcp_pure_ack(buff);
+
+ /* Send it off, this clears delayed acks for us. */
+- tcp_transmit_skb(sk, buff, 0, (__force gfp_t)0);
++ __tcp_transmit_skb(sk, buff, 0, (__force gfp_t)0, rcv_nxt);
++}
++EXPORT_SYMBOL_GPL(__tcp_send_ack);
++
++void tcp_send_ack(struct sock *sk)
++{
++ __tcp_send_ack(sk, tcp_sk(sk)->rcv_nxt);
+ }
+-EXPORT_SYMBOL_GPL(tcp_send_ack);
+
+ /* This routine sends a packet with an out of date sequence
+ * number. It assumes the other end will try to ack it.
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index 2ee08b6a86a4..1a1f876f8e28 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -700,13 +700,16 @@ void ip6_datagram_recv_specific_ctl(struct sock *sk, struct msghdr *msg,
+ }
+ if (np->rxopt.bits.rxorigdstaddr) {
+ struct sockaddr_in6 sin6;
+- __be16 *ports = (__be16 *) skb_transport_header(skb);
++ __be16 *ports;
++ int end;
+
+- if (skb_transport_offset(skb) + 4 <= (int)skb->len) {
++ end = skb_transport_offset(skb) + 4;
++ if (end <= 0 || pskb_may_pull(skb, end)) {
+ /* All current transport protocols have the port numbers in the
+ * first four bytes of the transport header and this function is
+ * written with this assumption in mind.
+ */
++ ports = (__be16 *)skb_transport_header(skb);
+
+ sin6.sin6_family = AF_INET6;
+ sin6.sin6_addr = ipv6_hdr(skb)->daddr;
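The net/ipv6/datagram.c hunk swaps a plain length comparison for pskb_may_pull(), which guarantees the four port bytes are actually present in the linear skb area before they are dereferenced. A freestanding sketch of the same guard; the struct and may_pull() below are invented stand-ins for sk_buff/pskb_may_pull(), and the end <= 0 case is simply rejected here, which is stricter than the kernel code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct fake_skb {
    uint8_t data[64];
    int len;                  /* bytes present in the linear area */
    int transport_offset;
};

static bool may_pull(const struct fake_skb *skb, int end)
{
    return end <= skb->len;   /* real pskb_may_pull() may also linearize */
}

static bool read_ports(const struct fake_skb *skb,
                       uint16_t *sport, uint16_t *dport)
{
    int end = skb->transport_offset + 4;

    if (end <= 0 || !may_pull(skb, end))
        return false;         /* ports not provably readable */
    memcpy(sport, skb->data + skb->transport_offset, 2);
    memcpy(dport, skb->data + skb->transport_offset + 2, 2);
    return true;
}

int main(void)
{
    struct fake_skb skb = { .len = 8, .transport_offset = 6 };
    uint16_t s, d;

    /* 6 + 4 > 8: rejected instead of reading past the linear area */
    printf("ok=%d\n", read_ports(&skb, &s, &d));
    return 0;
}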
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index d8c4b6374377..ca893a798d8a 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -402,9 +402,10 @@ static int icmp6_iif(const struct sk_buff *skb)
+
+ /* for local traffic to local address, skb dev is the loopback
+ * device. Check if there is a dst attached to the skb and if so
+- * get the real device index.
++ * get the real device index. Same is needed for replies to a link
++ * local address on a device enslaved to an L3 master device
+ */
+- if (unlikely(iif == LOOPBACK_IFINDEX)) {
++ if (unlikely(iif == LOOPBACK_IFINDEX || netif_is_l3_master(skb->dev))) {
+ const struct rt6_info *rt6 = skb_rt6_info(skb);
+
+ if (rt6)
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index af49f6cb5d3e..8f4c596a683d 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -596,6 +596,8 @@ static void ip6_copy_metadata(struct sk_buff *to, struct sk_buff *from)
+ to->dev = from->dev;
+ to->mark = from->mark;
+
++ skb_copy_hash(to, from);
++
+ #ifdef CONFIG_NET_SCHED
+ to->tc_index = from->tc_index;
+ #endif
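The one-line ip6_copy_metadata() change makes every fragment inherit the original skb's flow hash, so fragments of one flow keep steering to the same queue instead of being hashed differently, or not at all. A stand-in illustration with an invented metadata struct:

#include <stdint.h>
#include <stdio.h>

struct meta { uint32_t mark; uint32_t hash; };

static void copy_metadata(struct meta *to, const struct meta *from)
{
    to->mark = from->mark;
    to->hash = from->hash;    /* the fix: previously left at zero */
}

int main(void)
{
    struct meta from = { .mark = 7, .hash = 0xabcd1234 }, to = { 0 };

    copy_metadata(&to, &from);
    printf("fragment hash=%#x\n", to.hash);
    return 0;
}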
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index 793159d77d8a..0604a737eecf 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -771,8 +771,7 @@ static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im)
+ if (pmc) {
+ im->idev = pmc->idev;
+ im->mca_crcount = idev->mc_qrv;
+- im->mca_sfmode = pmc->mca_sfmode;
+- if (pmc->mca_sfmode == MCAST_INCLUDE) {
++ if (im->mca_sfmode == MCAST_INCLUDE) {
+ im->mca_tomb = pmc->mca_tomb;
+ im->mca_sources = pmc->mca_sources;
+ for (psf = im->mca_sources; psf; psf = psf->sf_next)
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 5d4eb9d2c3a7..1adf7eb80d03 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -934,7 +934,8 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
+ &tcp_hashinfo, NULL, 0,
+ &ipv6h->saddr,
+ th->source, &ipv6h->daddr,
+- ntohs(th->source), tcp_v6_iif(skb),
++ ntohs(th->source),
++ tcp_v6_iif_l3_slave(skb),
+ tcp_v6_sdif(skb));
+ if (!sk1)
+ goto out;
+@@ -1605,7 +1606,8 @@ do_time_wait:
+ skb, __tcp_hdrlen(th),
+ &ipv6_hdr(skb)->saddr, th->source,
+ &ipv6_hdr(skb)->daddr,
+- ntohs(th->dest), tcp_v6_iif(skb),
++ ntohs(th->dest),
++ tcp_v6_iif_l3_slave(skb),
+ sdif);
+ if (sk2) {
+ struct inet_timewait_sock *tw = inet_twsk(sk);
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 9a7f91232de8..60708a4ebed4 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -646,6 +646,9 @@ static struct sk_buff *tls_wait_data(struct sock *sk, int flags,
+ return NULL;
+ }
+
++ if (sk->sk_shutdown & RCV_SHUTDOWN)
++ return NULL;
++
+ if (sock_flag(sk, SOCK_DONE))
+ return NULL;
+
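The tcp_input.c hunks above are hardening against cheap out-of-order floods: tcp_collapse_ofo_queue() now skips ranges of tiny skbs unless they add up to more than an eighth of sk_rcvbuf, tcp_prune_ofo_queue() frees packets in batches of at least sk_rcvbuf >> 3 before re-checking memory, and tcp_prune_queue() returns early once sk_rmem_alloc is back under sk_rcvbuf. A minimal userspace model of the batched pruning; the linked list and rmem counter are invented stand-ins for the skb rb-tree and sk_rmem_alloc, so treat this as a sketch of the logic only:

#include <stdio.h>
#include <stdlib.h>

struct node {
    int truesize;
    struct node *prev;
};

/* Drop newest-first, but only stop to re-check memory after freeing
 * at least rcvbuf >> 3 bytes (12.5%), mirroring the patched
 * tcp_prune_ofo_queue(). Returns the surviving tail, if any. */
static struct node *prune(struct node *last, int rcvbuf, int *rmem)
{
    int goal = rcvbuf >> 3;
    struct node *node = last, *prev;

    while (node) {
        prev = node->prev;
        *rmem -= node->truesize;      /* stand-in for tcp_drop() */
        goal -= node->truesize;
        free(node);
        if (!prev || goal <= 0) {
            if (*rmem <= rcvbuf)      /* sk_mem_reclaim() checkpoint */
                return prev;
            goal = rcvbuf >> 3;       /* start the next batch */
        }
        node = prev;
    }
    return NULL;
}

int main(void)
{
    struct node *last = NULL;
    int rmem = 0, rcvbuf = 4096;

    for (int i = 0; i < 16; i++) {
        struct node *n = malloc(sizeof(*n));
        if (!n)
            break;
        n->truesize = 512;
        n->prev = last;
        last = n;
        rmem += n->truesize;
    }
    last = prune(last, rcvbuf, &rmem);
    printf("rmem after prune: %d\n", rmem);
    while (last) {                    /* free the survivors */
        struct node *p = last->prev;
        free(last);
        last = p;
    }
    return 0;
}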
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-07-25 12:19 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-07-25 12:19 UTC (permalink / raw
To: gentoo-commits
commit: 860315db25abb93d3480d577b4b51876b52ed8cb
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 25 12:19:23 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 25 12:19:23 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=860315db
Removal of redundant patch: 1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
0000_README | 4 --
...ne-pvclock-pvti-cpu0-va-setter-for-X86-32.patch | 55 ----------------------
2 files changed, 59 deletions(-)
diff --git a/0000_README b/0000_README
index 148c985..f2abee1 100644
--- a/0000_README
+++ b/0000_README
@@ -91,10 +91,6 @@ Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
-Patch: 1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
-From: https://marc.info/?l=kvm&m=152960320011592&w=2
-Desc: kvmclock: Define pvclock_pvti_cpu0_va setter for X86_32. See bug #658544.
-
Patch: 2300_enable-poweroff-on-Mac-Pro-11.patch
From: http://kernel.ubuntu.com/git/ubuntu/ubuntu-xenial.git/patch/drivers/pci/quirks.c?id=5080ff61a438f3dd80b88b423e1a20791d8a774c
Desc: Workaround to enable poweroff on Mac Pro 11. See bug #601964.
diff --git a/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch b/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
deleted file mode 100644
index 0732c51..0000000
--- a/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
+++ /dev/null
@@ -1,55 +0,0 @@
-pvti_cpu0_va is the address of shared kvmclock data structure.
-
-pvti_cpu0_va is currently kept unset (1) on 32 bit systems, (2) when
-kvmclock vsyscall is disabled, and (3) if kvmclock is not stable.
-This poses a problem, because kvm_ptp needs pvti_cpu0_va, but (1) can
-work on 32 bit, (2) has little relation to the vsyscall, and (3) does
-not need stable kvmclock (although kvmclock won't be used for system
-clock if it's not stable, so kvm_ptp is pointless in that case).
-
-Expose pvti_cpu0_va whenever kvmclock is enabled to allow all users to
-work with it.
-
-This fixes a regression found on Gentoo: https://bugs.gentoo.org/658544.
-
-Fixes: 9f08890ab906 ("x86/pvclock: add setter for pvclock_pvti_cpu0_va")
-Reported-by: Andreas Steinmetz <ast@domdv.de>
-Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
----
- arch/x86/kernel/kvmclock.c | 11 +++++------
- 1 file changed, 5 insertions(+), 6 deletions(-)
-
-diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
-index bf8d1eb7fca3..46ffa8327563 100644
---- a/arch/x86/kernel/kvmclock.c
-+++ b/arch/x86/kernel/kvmclock.c
-@@ -319,6 +319,8 @@ void __init kvmclock_init(void)
- printk(KERN_INFO "kvm-clock: Using msrs %x and %x",
- msr_kvm_system_time, msr_kvm_wall_clock);
-
-+ pvclock_set_pvti_cpu0_va(hv_clock);
-+
- if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE_STABLE_BIT))
- pvclock_set_flags(PVCLOCK_TSC_STABLE_BIT);
-
-@@ -366,14 +368,11 @@ int __init kvm_setup_vsyscall_timeinfo(void)
- vcpu_time = &hv_clock[cpu].pvti;
- flags = pvclock_read_flags(vcpu_time);
-
-- if (!(flags & PVCLOCK_TSC_STABLE_BIT)) {
-- put_cpu();
-- return 1;
-- }
--
-- pvclock_set_pvti_cpu0_va(hv_clock);
- put_cpu();
-
-+ if (!(flags & PVCLOCK_TSC_STABLE_BIT))
-+ return 1;
-+
- kvm_clock.archdata.vclock_mode = VCLOCK_PVCLOCK;
- #endif
- return 0;
---
-2.18.0.rc2
-
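For the record, this removal is safe because the same reordering landed upstream and is included in the 4.17.10 patch later in this thread (see the arch/x86/kernel/kvmclock.c hunk there): pvclock_set_pvti_cpu0_va() runs whenever kvmclock initializes, and the TSC-stability check only gates the vsyscall clock mode. A toy model of that control-flow change; every name below is an invented stand-in, not kernel API:

#include <stdbool.h>
#include <stdio.h>

static void *pvti_cpu0_va;    /* consumed by e.g. a PTP-style user */

static int setup_timeinfo(void *hv_clock, bool tsc_stable)
{
    pvti_cpu0_va = hv_clock;  /* moved before the stability check */
    if (!tsc_stable)
        return 1;             /* now only skips the vsyscall mode */
    /* ... would switch the clocksource to the fast path here ... */
    return 0;
}

int main(void)
{
    int fake_page;

    setup_timeinfo(&fake_page, false);
    printf("pvti exposed despite unstable TSC: %s\n",
           pvti_cpu0_va ? "yes" : "no");
    return 0;
}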
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-07-25 10:28 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-07-25 10:28 UTC (permalink / raw
To: gentoo-commits
commit: 003f68bb0343eec1d19eec6a761fae1405ddad26
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 25 10:28:04 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 25 10:28:04 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=003f68bb
Linux patch 4.17.10
0000_README | 4 +
1009_linux-4.17.10.patch | 2457 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2461 insertions(+)
diff --git a/0000_README b/0000_README
index 378d9da..148c985 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-4.17.9.patch
From: http://www.kernel.org
Desc: Linux 4.17.9
+Patch: 1009_linux-4.17.10.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.10
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1009_linux-4.17.10.patch b/1009_linux-4.17.10.patch
new file mode 100644
index 0000000..dc09395
--- /dev/null
+++ b/1009_linux-4.17.10.patch
@@ -0,0 +1,2457 @@
+diff --git a/Makefile b/Makefile
+index 693fde3aa317..0ab689c38e82 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
+index 89faa6f4de47..6a92843c0699 100644
+--- a/arch/alpha/kernel/osf_sys.c
++++ b/arch/alpha/kernel/osf_sys.c
+@@ -1183,13 +1183,10 @@ SYSCALL_DEFINE2(osf_getrusage, int, who, struct rusage32 __user *, ru)
+ SYSCALL_DEFINE4(osf_wait4, pid_t, pid, int __user *, ustatus, int, options,
+ struct rusage32 __user *, ur)
+ {
+- unsigned int status = 0;
+ struct rusage r;
+- long err = kernel_wait4(pid, &status, options, &r);
++ long err = kernel_wait4(pid, ustatus, options, &r);
+ if (err <= 0)
+ return err;
+- if (put_user(status, ustatus))
+- return -EFAULT;
+ if (!ur)
+ return err;
+ if (put_tv_to_tv32(&ur->ru_utime, &r.ru_utime))
+diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
+index d76bf4a83740..bc0bcf01ec98 100644
+--- a/arch/arc/Kconfig
++++ b/arch/arc/Kconfig
+@@ -408,7 +408,7 @@ config ARC_HAS_DIV_REM
+
+ config ARC_HAS_ACCL_REGS
+ bool "Reg Pair ACCL:ACCH (FPU and/or MPY > 6)"
+- default n
++ default y
+ help
+ Depending on the configuration, CPU can contain accumulator reg-pair
+ (also referred to as r58:r59). These can also be used by gcc as GPR so
+diff --git a/arch/arc/configs/axs101_defconfig b/arch/arc/configs/axs101_defconfig
+index 09f85154c5a4..a635ea972304 100644
+--- a/arch/arc/configs/axs101_defconfig
++++ b/arch/arc/configs/axs101_defconfig
+@@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../arc_initramfs/"
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+ # CONFIG_VM_EVENT_COUNTERS is not set
+diff --git a/arch/arc/configs/axs103_defconfig b/arch/arc/configs/axs103_defconfig
+index 09fed3ef22b6..aa507e423075 100644
+--- a/arch/arc/configs/axs103_defconfig
++++ b/arch/arc/configs/axs103_defconfig
+@@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/"
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+ # CONFIG_VM_EVENT_COUNTERS is not set
+diff --git a/arch/arc/configs/axs103_smp_defconfig b/arch/arc/configs/axs103_smp_defconfig
+index ea2f6d817d1a..eba07f468654 100644
+--- a/arch/arc/configs/axs103_smp_defconfig
++++ b/arch/arc/configs/axs103_smp_defconfig
+@@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/"
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+ # CONFIG_VM_EVENT_COUNTERS is not set
+diff --git a/arch/arc/configs/haps_hs_defconfig b/arch/arc/configs/haps_hs_defconfig
+index ab231c040efe..098b19fbaa51 100644
+--- a/arch/arc/configs/haps_hs_defconfig
++++ b/arch/arc/configs/haps_hs_defconfig
+@@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/"
+ CONFIG_EXPERT=y
+ CONFIG_PERF_EVENTS=y
+ # CONFIG_COMPAT_BRK is not set
+diff --git a/arch/arc/configs/haps_hs_smp_defconfig b/arch/arc/configs/haps_hs_smp_defconfig
+index cf449cbf440d..0104c404d897 100644
+--- a/arch/arc/configs/haps_hs_smp_defconfig
++++ b/arch/arc/configs/haps_hs_smp_defconfig
+@@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/"
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+ # CONFIG_VM_EVENT_COUNTERS is not set
+diff --git a/arch/arc/configs/hsdk_defconfig b/arch/arc/configs/hsdk_defconfig
+index 1b54c72f4296..6491be0ddbc9 100644
+--- a/arch/arc/configs/hsdk_defconfig
++++ b/arch/arc/configs/hsdk_defconfig
+@@ -9,7 +9,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/"
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+ # CONFIG_VM_EVENT_COUNTERS is not set
+diff --git a/arch/arc/configs/nsim_700_defconfig b/arch/arc/configs/nsim_700_defconfig
+index 31c2c70b34a1..99e05cf63fca 100644
+--- a/arch/arc/configs/nsim_700_defconfig
++++ b/arch/arc/configs/nsim_700_defconfig
+@@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../arc_initramfs/"
+ CONFIG_KALLSYMS_ALL=y
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+diff --git a/arch/arc/configs/nsim_hs_defconfig b/arch/arc/configs/nsim_hs_defconfig
+index a578c721d50f..0dc4f9b737e7 100644
+--- a/arch/arc/configs/nsim_hs_defconfig
++++ b/arch/arc/configs/nsim_hs_defconfig
+@@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/"
+ CONFIG_KALLSYMS_ALL=y
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+diff --git a/arch/arc/configs/nsim_hs_smp_defconfig b/arch/arc/configs/nsim_hs_smp_defconfig
+index 37d7395f3272..be3c30a15e54 100644
+--- a/arch/arc/configs/nsim_hs_smp_defconfig
++++ b/arch/arc/configs/nsim_hs_smp_defconfig
+@@ -9,7 +9,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/"
+ CONFIG_KALLSYMS_ALL=y
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+diff --git a/arch/arc/configs/nsimosci_defconfig b/arch/arc/configs/nsimosci_defconfig
+index 1e1470e2a7f0..3a74b9b21772 100644
+--- a/arch/arc/configs/nsimosci_defconfig
++++ b/arch/arc/configs/nsimosci_defconfig
+@@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../arc_initramfs/"
+ CONFIG_KALLSYMS_ALL=y
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+diff --git a/arch/arc/configs/nsimosci_hs_defconfig b/arch/arc/configs/nsimosci_hs_defconfig
+index 084a6e42685b..ea2834b4dc1d 100644
+--- a/arch/arc/configs/nsimosci_hs_defconfig
++++ b/arch/arc/configs/nsimosci_hs_defconfig
+@@ -11,7 +11,6 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/"
+ CONFIG_KALLSYMS_ALL=y
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+diff --git a/arch/arc/configs/nsimosci_hs_smp_defconfig b/arch/arc/configs/nsimosci_hs_smp_defconfig
+index f36d47990415..80a5a1b4924b 100644
+--- a/arch/arc/configs/nsimosci_hs_smp_defconfig
++++ b/arch/arc/configs/nsimosci_hs_smp_defconfig
+@@ -9,7 +9,6 @@ CONFIG_IKCONFIG_PROC=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/"
+ CONFIG_PERF_EVENTS=y
+ # CONFIG_COMPAT_BRK is not set
+ CONFIG_KPROBES=y
+diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
+index 109baa06831c..09ddddf71cc5 100644
+--- a/arch/arc/include/asm/page.h
++++ b/arch/arc/include/asm/page.h
+@@ -105,7 +105,7 @@ typedef pte_t * pgtable_t;
+ #define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
+
+ /* Default Permissions for stack/heaps pages (Non Executable) */
+-#define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE)
++#define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+
+ #define WANT_PAGE_VIRTUAL 1
+
+diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
+index 08fe33830d4b..77676e18da69 100644
+--- a/arch/arc/include/asm/pgtable.h
++++ b/arch/arc/include/asm/pgtable.h
+@@ -379,7 +379,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
+
+ /* Decode a PTE containing swap "identifier "into constituents */
+ #define __swp_type(pte_lookalike) (((pte_lookalike).val) & 0x1f)
+-#define __swp_offset(pte_lookalike) ((pte_lookalike).val << 13)
++#define __swp_offset(pte_lookalike) ((pte_lookalike).val >> 13)
+
+ /* NOPs, to keep generic kernel happy */
+ #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
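The arch/arc swap fix is a one-character encode/decode asymmetry: the offset is stored shifted left by 13 bits, so extraction must shift right, while shifting left again produced garbage offsets for swapped-out pages. A self-checking round trip using the same 5-bit type and 13-bit shift:

#include <assert.h>
#include <stdio.h>

#define ENCODE(type, off)  (((unsigned long)(off) << 13) | ((type) & 0x1f))
#define DECODE_TYPE(v)     ((v) & 0x1f)
#define DECODE_OFF(v)      ((v) >> 13)  /* was "<< 13": never round-tripped */

int main(void)
{
    unsigned long v = ENCODE(3, 42);

    assert(DECODE_TYPE(v) == 3);
    assert(DECODE_OFF(v) == 42);
    printf("type=%lu off=%lu\n", DECODE_TYPE(v), DECODE_OFF(v));
    return 0;
}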
+diff --git a/arch/arc/plat-hsdk/Kconfig b/arch/arc/plat-hsdk/Kconfig
+index 19ab3cf98f0f..fcc9a9e27e9c 100644
+--- a/arch/arc/plat-hsdk/Kconfig
++++ b/arch/arc/plat-hsdk/Kconfig
+@@ -7,5 +7,7 @@
+
+ menuconfig ARC_SOC_HSDK
+ bool "ARC HS Development Kit SOC"
++ depends on ISA_ARCV2
++ select ARC_HAS_ACCL_REGS
+ select CLK_HSDK
+ select RESET_HSDK
+diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
+index e734f6e45abc..689306118b48 100644
+--- a/arch/powerpc/kernel/idle_book3s.S
++++ b/arch/powerpc/kernel/idle_book3s.S
+@@ -144,7 +144,9 @@ power9_restore_additional_sprs:
+ mtspr SPRN_MMCR1, r4
+
+ ld r3, STOP_MMCR2(r13)
++ ld r4, PACA_SPRG_VDSO(r13)
+ mtspr SPRN_MMCR2, r3
++ mtspr SPRN_SPRG3, r4
+ blr
+
+ /*
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 8a10a045b57b..8cf03f101938 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -408,9 +408,11 @@ static int alloc_bts_buffer(int cpu)
+ ds->bts_buffer_base = (unsigned long) cea;
+ ds_update_cea(cea, buffer, BTS_BUFFER_SIZE, PAGE_KERNEL);
+ ds->bts_index = ds->bts_buffer_base;
+- max = BTS_RECORD_SIZE * (BTS_BUFFER_SIZE / BTS_RECORD_SIZE);
+- ds->bts_absolute_maximum = ds->bts_buffer_base + max;
+- ds->bts_interrupt_threshold = ds->bts_absolute_maximum - (max / 16);
++ max = BTS_BUFFER_SIZE / BTS_RECORD_SIZE;
++ ds->bts_absolute_maximum = ds->bts_buffer_base +
++ max * BTS_RECORD_SIZE;
++ ds->bts_interrupt_threshold = ds->bts_absolute_maximum -
++ (max / 16) * BTS_RECORD_SIZE;
+ return 0;
+ }
+
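The alloc_bts_buffer() change keeps max in records instead of bytes: the old threshold backed off max / 16 bytes from the end of the buffer, which need not be a whole number of BTS records, while the new one backs off (max / 16) records and so stays record-aligned. A worked version of the arithmetic; the buffer size, base address and 24-byte record size below are assumptions for illustration only:

#include <stdio.h>

#define BUFFER_SIZE (16 * 1024)
#define RECORD_SIZE 24

int main(void)
{
    unsigned long base = 0x1000;                    /* fake buffer base */
    unsigned long max = BUFFER_SIZE / RECORD_SIZE;  /* whole records */
    unsigned long abs_max = base + max * RECORD_SIZE;
    /* back off a 16th of the buffer, in whole records, so the
     * threshold stays record-aligned */
    unsigned long thresh = abs_max - (max / 16) * RECORD_SIZE;

    printf("abs_max=%#lx thresh=%#lx aligned=%d\n",
           abs_max, thresh, (thresh - base) % RECORD_SIZE == 0);
    return 0;
}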
+diff --git a/arch/x86/include/asm/apm.h b/arch/x86/include/asm/apm.h
+index c356098b6fb9..4d4015ddcf26 100644
+--- a/arch/x86/include/asm/apm.h
++++ b/arch/x86/include/asm/apm.h
+@@ -7,8 +7,6 @@
+ #ifndef _ASM_X86_MACH_DEFAULT_APM_H
+ #define _ASM_X86_MACH_DEFAULT_APM_H
+
+-#include <asm/nospec-branch.h>
+-
+ #ifdef APM_ZERO_SEGS
+ # define APM_DO_ZERO_SEGS \
+ "pushl %%ds\n\t" \
+@@ -34,7 +32,6 @@ static inline void apm_bios_call_asm(u32 func, u32 ebx_in, u32 ecx_in,
+ * N.B. We do NOT need a cld after the BIOS call
+ * because we always save and restore the flags.
+ */
+- firmware_restrict_branch_speculation_start();
+ __asm__ __volatile__(APM_DO_ZERO_SEGS
+ "pushl %%edi\n\t"
+ "pushl %%ebp\n\t"
+@@ -47,7 +44,6 @@ static inline void apm_bios_call_asm(u32 func, u32 ebx_in, u32 ecx_in,
+ "=S" (*esi)
+ : "a" (func), "b" (ebx_in), "c" (ecx_in)
+ : "memory", "cc");
+- firmware_restrict_branch_speculation_end();
+ }
+
+ static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
+@@ -60,7 +56,6 @@ static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
+ * N.B. We do NOT need a cld after the BIOS call
+ * because we always save and restore the flags.
+ */
+- firmware_restrict_branch_speculation_start();
+ __asm__ __volatile__(APM_DO_ZERO_SEGS
+ "pushl %%edi\n\t"
+ "pushl %%ebp\n\t"
+@@ -73,7 +68,6 @@ static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
+ "=S" (si)
+ : "a" (func), "b" (ebx_in), "c" (ecx_in)
+ : "memory", "cc");
+- firmware_restrict_branch_speculation_end();
+ return error;
+ }
+
+diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
+index dfcbe6924eaf..3dd661dcc3f7 100644
+--- a/arch/x86/kernel/apm_32.c
++++ b/arch/x86/kernel/apm_32.c
+@@ -240,6 +240,7 @@
+ #include <asm/olpc.h>
+ #include <asm/paravirt.h>
+ #include <asm/reboot.h>
++#include <asm/nospec-branch.h>
+
+ #if defined(CONFIG_APM_DISPLAY_BLANK) && defined(CONFIG_VT)
+ extern int (*console_blank_hook)(int);
+@@ -614,11 +615,13 @@ static long __apm_bios_call(void *_call)
+ gdt[0x40 / 8] = bad_bios_desc;
+
+ apm_irq_save(flags);
++ firmware_restrict_branch_speculation_start();
+ APM_DO_SAVE_SEGS;
+ apm_bios_call_asm(call->func, call->ebx, call->ecx,
+ &call->eax, &call->ebx, &call->ecx, &call->edx,
+ &call->esi);
+ APM_DO_RESTORE_SEGS;
++ firmware_restrict_branch_speculation_end();
+ apm_irq_restore(flags);
+ gdt[0x40 / 8] = save_desc_40;
+ put_cpu();
+@@ -690,10 +693,12 @@ static long __apm_bios_call_simple(void *_call)
+ gdt[0x40 / 8] = bad_bios_desc;
+
+ apm_irq_save(flags);
++ firmware_restrict_branch_speculation_start();
+ APM_DO_SAVE_SEGS;
+ error = apm_bios_call_simple_asm(call->func, call->ebx, call->ecx,
+ &call->eax);
+ APM_DO_RESTORE_SEGS;
++ firmware_restrict_branch_speculation_end();
+ apm_irq_restore(flags);
+ gdt[0x40 / 8] = save_desc_40;
+ put_cpu();
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 6f7eda9d5297..79ae1423b619 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -2147,9 +2147,6 @@ static ssize_t store_int_with_restart(struct device *s,
+ if (check_interval == old_check_interval)
+ return ret;
+
+- if (check_interval < 1)
+- check_interval = 1;
+-
+ mutex_lock(&mce_sysfs_mutex);
+ mce_restart();
+ mutex_unlock(&mce_sysfs_mutex);
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index 8b26c9e01cc4..d79a18b4cf9d 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -319,6 +319,8 @@ void __init kvmclock_init(void)
+ printk(KERN_INFO "kvm-clock: Using msrs %x and %x",
+ msr_kvm_system_time, msr_kvm_wall_clock);
+
++ pvclock_set_pvti_cpu0_va(hv_clock);
++
+ if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE_STABLE_BIT))
+ pvclock_set_flags(PVCLOCK_TSC_STABLE_BIT);
+
+@@ -366,14 +368,11 @@ int __init kvm_setup_vsyscall_timeinfo(void)
+ vcpu_time = &hv_clock[cpu].pvti;
+ flags = pvclock_read_flags(vcpu_time);
+
+- if (!(flags & PVCLOCK_TSC_STABLE_BIT)) {
+- put_cpu();
+- return 1;
+- }
+-
+- pvclock_set_pvti_cpu0_va(hv_clock);
+ put_cpu();
+
++ if (!(flags & PVCLOCK_TSC_STABLE_BIT))
++ return 1;
++
+ kvm_clock.archdata.vclock_mode = VCLOCK_PVCLOCK;
+ #endif
+ return 0;
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index dd4366edc200..a3bbac8ef4d0 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -2376,6 +2376,7 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ #ifdef CONFIG_X86_64
+ int cpu = raw_smp_processor_id();
++ unsigned long fs_base, kernel_gs_base;
+ #endif
+ int i;
+
+@@ -2391,12 +2392,20 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
+ vmx->host_state.gs_ldt_reload_needed = vmx->host_state.ldt_sel;
+
+ #ifdef CONFIG_X86_64
+- save_fsgs_for_kvm();
+- vmx->host_state.fs_sel = current->thread.fsindex;
+- vmx->host_state.gs_sel = current->thread.gsindex;
+-#else
+- savesegment(fs, vmx->host_state.fs_sel);
+- savesegment(gs, vmx->host_state.gs_sel);
++ if (likely(is_64bit_mm(current->mm))) {
++ save_fsgs_for_kvm();
++ vmx->host_state.fs_sel = current->thread.fsindex;
++ vmx->host_state.gs_sel = current->thread.gsindex;
++ fs_base = current->thread.fsbase;
++ kernel_gs_base = current->thread.gsbase;
++ } else {
++#endif
++ savesegment(fs, vmx->host_state.fs_sel);
++ savesegment(gs, vmx->host_state.gs_sel);
++#ifdef CONFIG_X86_64
++ fs_base = read_msr(MSR_FS_BASE);
++ kernel_gs_base = read_msr(MSR_KERNEL_GS_BASE);
++ }
+ #endif
+ if (!(vmx->host_state.fs_sel & 7)) {
+ vmcs_write16(HOST_FS_SELECTOR, vmx->host_state.fs_sel);
+@@ -2416,10 +2425,10 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
+ savesegment(ds, vmx->host_state.ds_sel);
+ savesegment(es, vmx->host_state.es_sel);
+
+- vmcs_writel(HOST_FS_BASE, current->thread.fsbase);
++ vmcs_writel(HOST_FS_BASE, fs_base);
+ vmcs_writel(HOST_GS_BASE, cpu_kernelmode_gs_base(cpu));
+
+- vmx->msr_host_kernel_gs_base = current->thread.gsbase;
++ vmx->msr_host_kernel_gs_base = kernel_gs_base;
+ if (is_long_mode(&vmx->vcpu))
+ wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
+ #else
+@@ -4110,11 +4119,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
+ vmcs_conf->order = get_order(vmcs_conf->size);
+ vmcs_conf->basic_cap = vmx_msr_high & ~0x1fff;
+
+- /* KVM supports Enlightened VMCS v1 only */
+- if (static_branch_unlikely(&enable_evmcs))
+- vmcs_conf->revision_id = KVM_EVMCS_VERSION;
+- else
+- vmcs_conf->revision_id = vmx_msr_low;
++ vmcs_conf->revision_id = vmx_msr_low;
+
+ vmcs_conf->pin_based_exec_ctrl = _pin_based_exec_control;
+ vmcs_conf->cpu_based_exec_ctrl = _cpu_based_exec_control;
+@@ -4184,7 +4189,13 @@ static struct vmcs *alloc_vmcs_cpu(int cpu)
+ return NULL;
+ vmcs = page_address(pages);
+ memset(vmcs, 0, vmcs_config.size);
+- vmcs->revision_id = vmcs_config.revision_id; /* vmcs revision id */
++
++ /* KVM supports Enlightened VMCS v1 only */
++ if (static_branch_unlikely(&enable_evmcs))
++ vmcs->revision_id = KVM_EVMCS_VERSION;
++ else
++ vmcs->revision_id = vmcs_config.revision_id;
++
+ return vmcs;
+ }
+
+@@ -4343,6 +4354,19 @@ static __init int alloc_kvm_area(void)
+ return -ENOMEM;
+ }
+
++ /*
++ * When eVMCS is enabled, alloc_vmcs_cpu() sets
++ * vmcs->revision_id to KVM_EVMCS_VERSION instead of
++ * revision_id reported by MSR_IA32_VMX_BASIC.
++ *
++ * However, even though not explicitly documented by
++ * TLFS, VMXArea passed as VMXON argument should
++ * still be marked with revision_id reported by
++ * physical CPU.
++ */
++ if (static_branch_unlikely(&enable_evmcs))
++ vmcs->revision_id = vmcs_config.revision_id;
++
+ per_cpu(vmxarea, cpu) = vmcs;
+ }
+ return 0;
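The two KVM hunks split where the eVMCS revision override happens: alloc_vmcs_cpu() may stamp per-vCPU VMCS pages with KVM_EVMCS_VERSION, but the per-CPU region handed to VMXON is stamped back with the revision id from MSR_IA32_VMX_BASIC, as the comment explains. A toy allocator keeping the two stampings apart; both constants are invented:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define HW_REVISION   0x12    /* would come from MSR_IA32_VMX_BASIC */
#define EVMCS_VERSION 0x01    /* KVM_EVMCS_VERSION stand-in */

struct vmcs { uint32_t revision_id; };

/* per-vCPU VMCS pages: may carry the enlightened version */
static struct vmcs *alloc_vmcs(bool evmcs)
{
    struct vmcs *v = calloc(1, sizeof(*v));

    if (v)
        v->revision_id = evmcs ? EVMCS_VERSION : HW_REVISION;
    return v;
}

int main(void)
{
    bool evmcs = true;
    struct vmcs *vmxon = alloc_vmcs(evmcs);

    if (!vmxon)
        return 1;
    /* the fix: the VMXON region must carry the CPU revision even when
     * eVMCS rewrites the per-vCPU pages */
    if (evmcs)
        vmxon->revision_id = HW_REVISION;
    printf("vmxon revision: %#x\n", vmxon->revision_id);
    free(vmxon);
    return 0;
}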
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index bd3f0a9d5e60..b357f81bfba6 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -2179,6 +2179,18 @@ static bool __init intel_pstate_no_acpi_pss(void)
+ return true;
+ }
+
++static bool __init intel_pstate_no_acpi_pcch(void)
++{
++ acpi_status status;
++ acpi_handle handle;
++
++ status = acpi_get_handle(NULL, "\\_SB", &handle);
++ if (ACPI_FAILURE(status))
++ return true;
++
++ return !acpi_has_method(handle, "PCCH");
++}
++
+ static bool __init intel_pstate_has_acpi_ppc(void)
+ {
+ int i;
+@@ -2238,7 +2250,10 @@ static bool __init intel_pstate_platform_pwr_mgmt_exists(void)
+
+ switch (plat_info[idx].data) {
+ case PSS:
+- return intel_pstate_no_acpi_pss();
++ if (!intel_pstate_no_acpi_pss())
++ return false;
++
++ return intel_pstate_no_acpi_pcch();
+ case PPC:
+ return intel_pstate_has_acpi_ppc() && !force_load;
+ }
+diff --git a/drivers/cpufreq/pcc-cpufreq.c b/drivers/cpufreq/pcc-cpufreq.c
+index 3f0ce2ae35ee..0c56c9759672 100644
+--- a/drivers/cpufreq/pcc-cpufreq.c
++++ b/drivers/cpufreq/pcc-cpufreq.c
+@@ -580,6 +580,10 @@ static int __init pcc_cpufreq_init(void)
+ {
+ int ret;
+
++ /* Skip initialization if another cpufreq driver is there. */
++ if (cpufreq_get_current_driver())
++ return 0;
++
+ if (acpi_disabled)
+ return 0;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index dc34b50e6b29..b11e9659e312 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -925,6 +925,10 @@ static int amdgpu_cs_ib_vm_chunk(struct amdgpu_device *adev,
+ r = amdgpu_bo_vm_update_pte(p);
+ if (r)
+ return r;
++
++ r = reservation_object_reserve_shared(vm->root.base.bo->tbo.resv);
++ if (r)
++ return r;
+ }
+
+ return amdgpu_cs_sync_rings(p);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 4304d9e408b8..ace9ad578ca0 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -83,22 +83,21 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ enum i2c_mot_mode mot = (msg->request & DP_AUX_I2C_MOT) ?
+ I2C_MOT_TRUE : I2C_MOT_FALSE;
+ enum ddc_result res;
+- uint32_t read_bytes = msg->size;
++ ssize_t read_bytes;
+
+ if (WARN_ON(msg->size > 16))
+ return -E2BIG;
+
+ switch (msg->request & ~DP_AUX_I2C_MOT) {
+ case DP_AUX_NATIVE_READ:
+- res = dal_ddc_service_read_dpcd_data(
++ read_bytes = dal_ddc_service_read_dpcd_data(
+ TO_DM_AUX(aux)->ddc_service,
+ false,
+ I2C_MOT_UNDEF,
+ msg->address,
+ msg->buffer,
+- msg->size,
+- &read_bytes);
+- break;
++ msg->size);
++ return read_bytes;
+ case DP_AUX_NATIVE_WRITE:
+ res = dal_ddc_service_write_dpcd_data(
+ TO_DM_AUX(aux)->ddc_service,
+@@ -109,15 +108,14 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ msg->size);
+ break;
+ case DP_AUX_I2C_READ:
+- res = dal_ddc_service_read_dpcd_data(
++ read_bytes = dal_ddc_service_read_dpcd_data(
+ TO_DM_AUX(aux)->ddc_service,
+ true,
+ mot,
+ msg->address,
+ msg->buffer,
+- msg->size,
+- &read_bytes);
+- break;
++ msg->size);
++ return read_bytes;
+ case DP_AUX_I2C_WRITE:
+ res = dal_ddc_service_write_dpcd_data(
+ TO_DM_AUX(aux)->ddc_service,
+@@ -139,9 +137,7 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ r == DDC_RESULT_SUCESSFULL);
+ #endif
+
+- if (res != DDC_RESULT_SUCESSFULL)
+- return -EIO;
+- return read_bytes;
++ return msg->size;
+ }
+
+ static enum drm_connector_status
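The AUX transfer rework above collapses the enum ddc_result plus out-parameter pair into a single ssize_t: non-negative values carry the byte count and negative values carry the error, which is the convention a drm_dp_aux transfer hook is expected to follow. The shape of that convention, with an invented DPCD-style backing store:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* toy backing store standing in for a DPCD register file */
static unsigned char dpcd[256] = { [0x00] = 0x14 };

/* returns bytes read on success, -errno on failure: one value carries
 * both the status and the count, no out-parameter needed */
static ssize_t dpcd_read(unsigned int addr, void *buf, size_t len)
{
    if (addr + len > sizeof(dpcd))
        return -EINVAL;
    memcpy(buf, dpcd + addr, len);
    return (ssize_t)len;
}

int main(void)
{
    unsigned char rev;
    ssize_t ret = dpcd_read(0x00, &rev, 1);

    if (ret < 0)
        fprintf(stderr, "read failed: %zd\n", ret);
    else
        printf("read %zd byte(s), rev=%#x\n", ret, rev);
    return 0;
}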
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
+index ae48d603ebd6..49c2face1e7a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
+@@ -629,14 +629,13 @@ bool dal_ddc_service_query_ddc_data(
+ return ret;
+ }
+
+-enum ddc_result dal_ddc_service_read_dpcd_data(
++ssize_t dal_ddc_service_read_dpcd_data(
+ struct ddc_service *ddc,
+ bool i2c,
+ enum i2c_mot_mode mot,
+ uint32_t address,
+ uint8_t *data,
+- uint32_t len,
+- uint32_t *read)
++ uint32_t len)
+ {
+ struct aux_payload read_payload = {
+ .i2c_over_aux = i2c,
+@@ -653,8 +652,6 @@ enum ddc_result dal_ddc_service_read_dpcd_data(
+ .mot = mot
+ };
+
+- *read = 0;
+-
+ if (len > DEFAULT_AUX_MAX_DATA_SIZE) {
+ BREAK_TO_DEBUGGER();
+ return DDC_RESULT_FAILED_INVALID_OPERATION;
+@@ -664,8 +661,7 @@ enum ddc_result dal_ddc_service_read_dpcd_data(
+ ddc->ctx->i2caux,
+ ddc->ddc_pin,
+ &command)) {
+- *read = command.payloads->length;
+- return DDC_RESULT_SUCESSFULL;
++ return (ssize_t)command.payloads->length;
+ }
+
+ return DDC_RESULT_FAILED_OPERATION;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h b/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h
+index 30b3a08b91be..090b7a8dd67b 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h
+@@ -102,14 +102,13 @@ bool dal_ddc_service_query_ddc_data(
+ uint8_t *read_buf,
+ uint32_t read_size);
+
+-enum ddc_result dal_ddc_service_read_dpcd_data(
++ssize_t dal_ddc_service_read_dpcd_data(
+ struct ddc_service *ddc,
+ bool i2c,
+ enum i2c_mot_mode mot,
+ uint32_t address,
+ uint8_t *data,
+- uint32_t len,
+- uint32_t *read);
++ uint32_t len);
+
+ enum ddc_result dal_ddc_service_write_dpcd_data(
+ struct ddc_service *ddc,
+diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c
+index d345563fdff3..ce281d651ae8 100644
+--- a/drivers/gpu/drm/drm_lease.c
++++ b/drivers/gpu/drm/drm_lease.c
+@@ -553,24 +553,13 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+
+ /* Clone the lessor file to create a new file for us */
+ DRM_DEBUG_LEASE("Allocating lease file\n");
+- path_get(&lessor_file->f_path);
+- lessee_file = alloc_file(&lessor_file->f_path,
+- lessor_file->f_mode,
+- fops_get(lessor_file->f_inode->i_fop));
+-
++ lessee_file = filp_clone_open(lessor_file);
+ if (IS_ERR(lessee_file)) {
+ ret = PTR_ERR(lessee_file);
+ goto out_lessee;
+ }
+
+- /* Initialize the new file for DRM */
+- DRM_DEBUG_LEASE("Initializing the file with %p\n", lessee_file->f_op->open);
+- ret = lessee_file->f_op->open(lessee_file->f_inode, lessee_file);
+- if (ret)
+- goto out_lessee_file;
+-
+ lessee_priv = lessee_file->private_data;
+-
+ /* Change the file to a master one */
+ drm_master_put(&lessee_priv->master);
+ lessee_priv->master = lessee;
+@@ -588,9 +577,6 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl succeeded\n");
+ return 0;
+
+-out_lessee_file:
+- fput(lessee_file);
+-
+ out_lessee:
+ drm_master_put(&lessee);
+
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index b25cc5aa8fbe..d793b2bbd6c2 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -1967,10 +1967,38 @@ static void valleyview_pipestat_irq_handler(struct drm_i915_private *dev_priv,
+
+ static u32 i9xx_hpd_irq_ack(struct drm_i915_private *dev_priv)
+ {
+- u32 hotplug_status = I915_READ(PORT_HOTPLUG_STAT);
++ u32 hotplug_status = 0, hotplug_status_mask;
++ int i;
++
++ if (IS_G4X(dev_priv) ||
++ IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
++ hotplug_status_mask = HOTPLUG_INT_STATUS_G4X |
++ DP_AUX_CHANNEL_MASK_INT_STATUS_G4X;
++ else
++ hotplug_status_mask = HOTPLUG_INT_STATUS_I915;
+
+- if (hotplug_status)
++ /*
++ * We absolutely have to clear all the pending interrupt
++ * bits in PORT_HOTPLUG_STAT. Otherwise the ISR port
++ * interrupt bit won't have an edge, and the i965/g4x
++ * edge triggered IIR will not notice that an interrupt
++ * is still pending. We can't use PORT_HOTPLUG_EN to
++ * guarantee the edge as the act of toggling the enable
++ * bits can itself generate a new hotplug interrupt :(
++ */
++ for (i = 0; i < 10; i++) {
++ u32 tmp = I915_READ(PORT_HOTPLUG_STAT) & hotplug_status_mask;
++
++ if (tmp == 0)
++ return hotplug_status;
++
++ hotplug_status |= tmp;
+ I915_WRITE(PORT_HOTPLUG_STAT, hotplug_status);
++ }
++
++ WARN_ONCE(1,
++ "PORT_HOTPLUG_STAT did not clear (0x%08x)\n",
++ I915_READ(PORT_HOTPLUG_STAT));
+
+ return hotplug_status;
+ }
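The i915 comment above spells out the trap: PORT_HOTPLUG_STAT latches, and the IIR bit is edge triggered, so every pending bit must be acked until the register reads zero or no further edge will fire. A userspace model of that bounded read-and-clear loop; the simulated register and all names are invented:

#include <stdint.h>
#include <stdio.h>

static uint32_t fake_reg = 0x5;    /* pending hotplug bits */

static uint32_t reg_read(void) { return fake_reg; }
static void reg_write(uint32_t v) { fake_reg &= ~v; }  /* write-1-to-clear */

static uint32_t ack_hotplug(uint32_t mask)
{
    uint32_t status = 0;

    for (int i = 0; i < 10; i++) {
        uint32_t tmp = reg_read() & mask;

        if (tmp == 0)
            return status;         /* edge guaranteed, nothing pending */

        status |= tmp;
        reg_write(tmp);            /* ack what we saw; may re-latch */
    }
    fprintf(stderr, "status did not clear (0x%08x)\n", reg_read());
    return status;
}

int main(void)
{
    printf("acked 0x%08x\n", ack_hotplug(0xff));
    return 0;
}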
+diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+index debbbf0fd4bd..408b955e5c39 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_backlight.c
++++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+@@ -267,6 +267,7 @@ nouveau_backlight_init(struct drm_device *dev)
+ struct nouveau_drm *drm = nouveau_drm(dev);
+ struct nvif_device *device = &drm->client.device;
+ struct drm_connector *connector;
++ struct drm_connector_list_iter conn_iter;
+
+ INIT_LIST_HEAD(&drm->bl_connectors);
+
+@@ -275,7 +276,8 @@ nouveau_backlight_init(struct drm_device *dev)
+ return 0;
+ }
+
+- list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
++ drm_connector_list_iter_begin(dev, &conn_iter);
++ drm_for_each_connector_iter(connector, &conn_iter) {
+ if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS &&
+ connector->connector_type != DRM_MODE_CONNECTOR_eDP)
+ continue;
+@@ -292,7 +294,7 @@ nouveau_backlight_init(struct drm_device *dev)
+ break;
+ }
+ }
+-
++ drm_connector_list_iter_end(&conn_iter);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index 6ed9cb053dfa..359fecce8cc0 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -1208,14 +1208,19 @@ nouveau_connector_create(struct drm_device *dev, int index)
+ struct nouveau_display *disp = nouveau_display(dev);
+ struct nouveau_connector *nv_connector = NULL;
+ struct drm_connector *connector;
++ struct drm_connector_list_iter conn_iter;
+ int type, ret = 0;
+ bool dummy;
+
+- list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
++ drm_connector_list_iter_begin(dev, &conn_iter);
++ nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
+ nv_connector = nouveau_connector(connector);
+- if (nv_connector->index == index)
++ if (nv_connector->index == index) {
++ drm_connector_list_iter_end(&conn_iter);
+ return connector;
++ }
+ }
++ drm_connector_list_iter_end(&conn_iter);
+
+ nv_connector = kzalloc(sizeof(*nv_connector), GFP_KERNEL);
+ if (!nv_connector)
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.h b/drivers/gpu/drm/nouveau/nouveau_connector.h
+index a4d1a059bd3d..dc7454e7f19a 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.h
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.h
+@@ -33,6 +33,7 @@
+ #include <drm/drm_encoder.h>
+ #include <drm/drm_dp_helper.h>
+ #include "nouveau_crtc.h"
++#include "nouveau_encoder.h"
+
+ struct nvkm_i2c_port;
+
+@@ -60,19 +61,46 @@ static inline struct nouveau_connector *nouveau_connector(
+ return container_of(con, struct nouveau_connector, base);
+ }
+
++static inline bool
++nouveau_connector_is_mst(struct drm_connector *connector)
++{
++ const struct nouveau_encoder *nv_encoder;
++ const struct drm_encoder *encoder;
++
++ if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort)
++ return false;
++
++ nv_encoder = find_encoder(connector, DCB_OUTPUT_ANY);
++ if (!nv_encoder)
++ return false;
++
++ encoder = &nv_encoder->base.base;
++ return encoder->encoder_type == DRM_MODE_ENCODER_DPMST;
++}
++
++#define nouveau_for_each_non_mst_connector_iter(connector, iter) \
++ drm_for_each_connector_iter(connector, iter) \
++ for_each_if(!nouveau_connector_is_mst(connector))
++
+ static inline struct nouveau_connector *
+ nouveau_crtc_connector_get(struct nouveau_crtc *nv_crtc)
+ {
+ struct drm_device *dev = nv_crtc->base.dev;
+ struct drm_connector *connector;
++ struct drm_connector_list_iter conn_iter;
++ struct nouveau_connector *nv_connector = NULL;
+ struct drm_crtc *crtc = to_drm_crtc(nv_crtc);
+
+- list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+- if (connector->encoder && connector->encoder->crtc == crtc)
+- return nouveau_connector(connector);
++ drm_connector_list_iter_begin(dev, &conn_iter);
++ nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
++ if (connector->encoder && connector->encoder->crtc == crtc) {
++ nv_connector = nouveau_connector(connector);
++ break;
++ }
+ }
++ drm_connector_list_iter_end(&conn_iter);
+
+- return NULL;
++ return nv_connector;
+ }
+
+ struct drm_connector *
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
+index 009713404cc4..4cba117e81fc 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.c
++++ b/drivers/gpu/drm/nouveau/nouveau_display.c
+@@ -406,6 +406,7 @@ nouveau_display_init(struct drm_device *dev)
+ struct nouveau_display *disp = nouveau_display(dev);
+ struct nouveau_drm *drm = nouveau_drm(dev);
+ struct drm_connector *connector;
++ struct drm_connector_list_iter conn_iter;
+ int ret;
+
+ ret = disp->init(dev);
+@@ -413,10 +414,12 @@ nouveau_display_init(struct drm_device *dev)
+ return ret;
+
+ /* enable hotplug interrupts */
+- list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
++ drm_connector_list_iter_begin(dev, &conn_iter);
++ nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
+ struct nouveau_connector *conn = nouveau_connector(connector);
+ nvif_notify_get(&conn->hpd);
+ }
++ drm_connector_list_iter_end(&conn_iter);
+
+ /* enable flip completion events */
+ nvif_notify_get(&drm->flip);
+@@ -429,6 +432,7 @@ nouveau_display_fini(struct drm_device *dev, bool suspend)
+ struct nouveau_display *disp = nouveau_display(dev);
+ struct nouveau_drm *drm = nouveau_drm(dev);
+ struct drm_connector *connector;
++ struct drm_connector_list_iter conn_iter;
+
+ if (!suspend) {
+ if (drm_drv_uses_atomic_modeset(dev))
+@@ -441,10 +445,12 @@ nouveau_display_fini(struct drm_device *dev, bool suspend)
+ nvif_notify_put(&drm->flip);
+
+ /* disable hotplug interrupts */
+- list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
++ drm_connector_list_iter_begin(dev, &conn_iter);
++ nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
+ struct nouveau_connector *conn = nouveau_connector(connector);
+ nvif_notify_put(&conn->hpd);
+ }
++ drm_connector_list_iter_end(&conn_iter);
+
+ drm_kms_helper_poll_disable(dev);
+ disp->fini(dev);
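The nouveau hunks all convert open-coded walks of mode_config.connector_list to the drm_connector_list_iter API (which handles locking and reference counting internally) plus a filter that skips transient MST connectors. The invariant worth copying is that every exit path still reaches the _end() call, as the early return in nouveau_connector_create() shows. A compact model of that begin/end discipline, with a pthread mutex standing in for the real protection (link with -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static int items[4] = { 10, 20, 30, 40 };

/* begin/end discipline: every path out of the walk releases the lock */
static int find_item(int wanted)
{
    int found = -1;

    pthread_mutex_lock(&list_lock);       /* iter_begin */
    for (int i = 0; i < 4; i++) {
        if (items[i] == wanted) {
            found = i;
            break;                        /* the end call still runs */
        }
    }
    pthread_mutex_unlock(&list_lock);     /* iter_end */
    return found;
}

int main(void)
{
    printf("found at index %d\n", find_item(30));
    return 0;
}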
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index bbbf353682e1..0bffeb95b072 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -866,22 +866,11 @@ nouveau_pmops_runtime_resume(struct device *dev)
+ static int
+ nouveau_pmops_runtime_idle(struct device *dev)
+ {
+- struct pci_dev *pdev = to_pci_dev(dev);
+- struct drm_device *drm_dev = pci_get_drvdata(pdev);
+- struct nouveau_drm *drm = nouveau_drm(drm_dev);
+- struct drm_crtc *crtc;
+-
+ if (!nouveau_pmops_runtime()) {
+ pm_runtime_forbid(dev);
+ return -EBUSY;
+ }
+
+- list_for_each_entry(crtc, &drm->dev->mode_config.crtc_list, head) {
+- if (crtc->enabled) {
+- DRM_DEBUG_DRIVER("failing to power off - crtc active\n");
+- return -EBUSY;
+- }
+- }
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_autosuspend(dev);
+ /* we don't want the main rpm_idle to call suspend - we want to autosuspend */
+diff --git a/drivers/misc/cxl/api.c b/drivers/misc/cxl/api.c
+index 753b1a698fc4..6b16946f9b05 100644
+--- a/drivers/misc/cxl/api.c
++++ b/drivers/misc/cxl/api.c
+@@ -103,15 +103,15 @@ static struct file *cxl_getfile(const char *name,
+ d_instantiate(path.dentry, inode);
+
+ file = alloc_file(&path, OPEN_FMODE(flags), fops);
+- if (IS_ERR(file))
+- goto err_dput;
++ if (IS_ERR(file)) {
++ path_put(&path);
++ goto err_fs;
++ }
+ file->f_flags = flags & (O_ACCMODE | O_NONBLOCK);
+ file->private_data = priv;
+
+ return file;
+
+-err_dput:
+- path_put(&path);
+ err_inode:
+ iput(inode);
+ err_fs:
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h b/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h
+index fc7383106946..91eb8910b1c9 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h
+@@ -63,8 +63,6 @@
+
+ #define AQ_CFG_NAPI_WEIGHT 64U
+
+-#define AQ_CFG_MULTICAST_ADDRESS_MAX 32U
+-
+ /*#define AQ_CFG_MAC_ADDR_PERMANENT {0x30, 0x0E, 0xE3, 0x12, 0x34, 0x56}*/
+
+ #define AQ_NIC_FC_OFF 0U
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
+index a2d416b24ffc..2c6ebd91a9f2 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
+@@ -98,6 +98,8 @@ struct aq_stats_s {
+ #define AQ_HW_MEDIA_TYPE_TP 1U
+ #define AQ_HW_MEDIA_TYPE_FIBRE 2U
+
++#define AQ_HW_MULTICAST_ADDRESS_MAX 32U
++
+ struct aq_hw_s {
+ atomic_t flags;
+ u8 rbl_enabled:1;
+@@ -177,7 +179,7 @@ struct aq_hw_ops {
+ unsigned int packet_filter);
+
+ int (*hw_multicast_list_set)(struct aq_hw_s *self,
+- u8 ar_mac[AQ_CFG_MULTICAST_ADDRESS_MAX]
++ u8 ar_mac[AQ_HW_MULTICAST_ADDRESS_MAX]
+ [ETH_ALEN],
+ u32 count);
+
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+index ba5fe8c4125d..e3ae29e523f0 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+@@ -135,17 +135,10 @@ static int aq_ndev_set_mac_address(struct net_device *ndev, void *addr)
+ static void aq_ndev_set_multicast_settings(struct net_device *ndev)
+ {
+ struct aq_nic_s *aq_nic = netdev_priv(ndev);
+- int err = 0;
+
+- err = aq_nic_set_packet_filter(aq_nic, ndev->flags);
+- if (err < 0)
+- return;
++ aq_nic_set_packet_filter(aq_nic, ndev->flags);
+
+- if (netdev_mc_count(ndev)) {
+- err = aq_nic_set_multicast_list(aq_nic, ndev);
+- if (err < 0)
+- return;
+- }
++ aq_nic_set_multicast_list(aq_nic, ndev);
+ }
+
+ static const struct net_device_ops aq_ndev_ops = {
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index 1a1a6380c128..7a22d0257e04 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -563,34 +563,41 @@ int aq_nic_set_packet_filter(struct aq_nic_s *self, unsigned int flags)
+
+ int aq_nic_set_multicast_list(struct aq_nic_s *self, struct net_device *ndev)
+ {
++ unsigned int packet_filter = self->packet_filter;
+ struct netdev_hw_addr *ha = NULL;
+ unsigned int i = 0U;
+
+- self->mc_list.count = 0U;
+-
+- netdev_for_each_mc_addr(ha, ndev) {
+- ether_addr_copy(self->mc_list.ar[i++], ha->addr);
+- ++self->mc_list.count;
++ self->mc_list.count = 0;
++ if (netdev_uc_count(ndev) > AQ_HW_MULTICAST_ADDRESS_MAX) {
++ packet_filter |= IFF_PROMISC;
++ } else {
++ netdev_for_each_uc_addr(ha, ndev) {
++ ether_addr_copy(self->mc_list.ar[i++], ha->addr);
+
+- if (i >= AQ_CFG_MULTICAST_ADDRESS_MAX)
+- break;
++ if (i >= AQ_HW_MULTICAST_ADDRESS_MAX)
++ break;
++ }
+ }
+
+- if (i >= AQ_CFG_MULTICAST_ADDRESS_MAX) {
+- /* Number of filters is too big: atlantic does not support this.
+- * Force all multi filter to support this.
+- * With this we disable all UC filters and setup "all pass"
+- * multicast mask
+- */
+- self->packet_filter |= IFF_ALLMULTI;
+- self->aq_nic_cfg.mc_list_count = 0;
+- return self->aq_hw_ops->hw_packet_filter_set(self->aq_hw,
+- self->packet_filter);
++ if (i + netdev_mc_count(ndev) > AQ_HW_MULTICAST_ADDRESS_MAX) {
++ packet_filter |= IFF_ALLMULTI;
+ } else {
+- return self->aq_hw_ops->hw_multicast_list_set(self->aq_hw,
+- self->mc_list.ar,
+- self->mc_list.count);
++ netdev_for_each_mc_addr(ha, ndev) {
++ ether_addr_copy(self->mc_list.ar[i++], ha->addr);
++
++ if (i >= AQ_HW_MULTICAST_ADDRESS_MAX)
++ break;
++ }
++ }
++
++ if (i > 0 && i < AQ_HW_MULTICAST_ADDRESS_MAX) {
++ packet_filter |= IFF_MULTICAST;
++ self->mc_list.count = i;
++ self->aq_hw_ops->hw_multicast_list_set(self->aq_hw,
++ self->mc_list.ar,
++ self->mc_list.count);
+ }
++ return aq_nic_set_packet_filter(self, packet_filter);
+ }
+
+ int aq_nic_set_mtu(struct aq_nic_s *self, int new_mtu)
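The rewritten aq_nic_set_multicast_list() programs unicast entries first and degrades gracefully: too many unicast addresses forces promiscuous mode, and unicast plus multicast overflowing the table forces all-multicast, instead of the old code silently losing unicast filtering. The fallback logic in isolation; the limit and flag values are invented:

#include <stdio.h>

#define MAX_FILTERS 32
#define F_PROMISC   0x1
#define F_ALLMULTI  0x2
#define F_MULTICAST 0x4

static unsigned build_filter(int uc_count, int mc_count, int *programmed)
{
    unsigned flags = 0;
    int i = 0;

    if (uc_count > MAX_FILTERS)
        flags |= F_PROMISC;       /* can't filter unicast in hardware */
    else
        i += uc_count;            /* program the unicast entries */

    if (i + mc_count > MAX_FILTERS)
        flags |= F_ALLMULTI;      /* pass all multicast instead */
    else
        i += mc_count;            /* program the multicast entries */

    if (i > 0 && i < MAX_FILTERS)
        flags |= F_MULTICAST;
    *programmed = i;
    return flags;
}

int main(void)
{
    int n;
    unsigned f = build_filter(4, 40, &n);

    printf("flags=0x%x entries=%d\n", f, n);
    return 0;
}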
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
+index faa533a0ec47..fecfc401f95d 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
+@@ -75,7 +75,7 @@ struct aq_nic_s {
+ struct aq_hw_link_status_s link_status;
+ struct {
+ u32 count;
+- u8 ar[AQ_CFG_MULTICAST_ADDRESS_MAX][ETH_ALEN];
++ u8 ar[AQ_HW_MULTICAST_ADDRESS_MAX][ETH_ALEN];
+ } mc_list;
+
+ struct pci_dev *pdev;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
+index 67e2f9fb9402..8cc6abadc03b 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
+@@ -765,7 +765,7 @@ static int hw_atl_a0_hw_packet_filter_set(struct aq_hw_s *self,
+
+ static int hw_atl_a0_hw_multicast_list_set(struct aq_hw_s *self,
+ u8 ar_mac
+- [AQ_CFG_MULTICAST_ADDRESS_MAX]
++ [AQ_HW_MULTICAST_ADDRESS_MAX]
+ [ETH_ALEN],
+ u32 count)
+ {
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+index 819f6bcf9b4e..956860a69797 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+@@ -784,7 +784,7 @@ static int hw_atl_b0_hw_packet_filter_set(struct aq_hw_s *self,
+
+ static int hw_atl_b0_hw_multicast_list_set(struct aq_hw_s *self,
+ u8 ar_mac
+- [AQ_CFG_MULTICAST_ADDRESS_MAX]
++ [AQ_HW_MULTICAST_ADDRESS_MAX]
+ [ETH_ALEN],
+ u32 count)
+ {
+@@ -812,7 +812,7 @@ static int hw_atl_b0_hw_multicast_list_set(struct aq_hw_s *self,
+
+ hw_atl_rpfl2_uc_flr_en_set(self,
+ (self->aq_nic_cfg->is_mc_list_enabled),
+- HW_ATL_B0_MAC_MIN + i);
++ HW_ATL_B0_MAC_MIN + i);
+ }
+
+ err = aq_hw_err_from_flags(self);
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index f33b25fbca63..7db072fe5f22 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -1946,8 +1946,8 @@ static int bcm_sysport_open(struct net_device *dev)
+ if (!priv->is_lite)
+ priv->crc_fwd = !!(umac_readl(priv, UMAC_CMD) & CMD_CRC_FWD);
+ else
+- priv->crc_fwd = !!(gib_readl(priv, GIB_CONTROL) &
+- GIB_FCS_STRIP);
++ priv->crc_fwd = !((gib_readl(priv, GIB_CONTROL) &
++ GIB_FCS_STRIP) >> GIB_FCS_STRIP_SHIFT);
+
+ phydev = of_phy_connect(dev, priv->phy_dn, bcm_sysport_adj_link,
+ 0, priv->phy_interface);
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.h b/drivers/net/ethernet/broadcom/bcmsysport.h
+index d6e5d0cbf3a3..cf440b91fd04 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.h
++++ b/drivers/net/ethernet/broadcom/bcmsysport.h
+@@ -278,7 +278,8 @@ struct bcm_rsb {
+ #define GIB_GTX_CLK_EXT_CLK (0 << GIB_GTX_CLK_SEL_SHIFT)
+ #define GIB_GTX_CLK_125MHZ (1 << GIB_GTX_CLK_SEL_SHIFT)
+ #define GIB_GTX_CLK_250MHZ (2 << GIB_GTX_CLK_SEL_SHIFT)
+-#define GIB_FCS_STRIP (1 << 6)
++#define GIB_FCS_STRIP_SHIFT 6
++#define GIB_FCS_STRIP (1 << GIB_FCS_STRIP_SHIFT)
+ #define GIB_LCL_LOOP_EN (1 << 7)
+ #define GIB_LCL_LOOP_TXEN (1 << 8)
+ #define GIB_RMT_LOOP_EN (1 << 9)
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index 9f59b1270a7c..3e0e7f18ecf9 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -9289,6 +9289,15 @@ static int tg3_chip_reset(struct tg3 *tp)
+
+ tg3_restore_clk(tp);
+
++ /* Increase the core clock speed to fix tx timeout issue for 5762
++ * with 100Mbps link speed.
++ */
++ if (tg3_asic_rev(tp) == ASIC_REV_5762) {
++ val = tr32(TG3_CPMU_CLCK_ORIDE_ENABLE);
++ tw32(TG3_CPMU_CLCK_ORIDE_ENABLE, val |
++ TG3_CPMU_MAC_ORIDE_ENABLE);
++ }
++
+ /* Reprobe ASF enable state. */
+ tg3_flag_clear(tp, ENABLE_ASF);
+ tp->phy_flags &= ~(TG3_PHYFLG_1G_ON_VAUX_OK |
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+index 5c613c6663da..2ca0f1dad54c 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+@@ -474,10 +474,10 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
+ {
+ const struct mlx4_en_frag_info *frag_info = priv->frag_info;
+ unsigned int truesize = 0;
++ bool release = true;
+ int nr, frag_size;
+ struct page *page;
+ dma_addr_t dma;
+- bool release;
+
+ /* Collect used fragments while replacing them in the HW descriptors */
+ for (nr = 0;; frags++) {
+@@ -500,7 +500,11 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
+ release = page_count(page) != 1 ||
+ page_is_pfmemalloc(page) ||
+ page_to_nid(page) != numa_mem_id();
+- } else {
++ } else if (!priv->rx_headroom) {
++ /* rx_headroom for non XDP setup is always 0.
++ * When XDP is set, the above condition will
++ * guarantee page is always released.
++ */
+ u32 sz_align = ALIGN(frag_size, SMP_CACHE_BYTES);
+
+ frags->page_offset += sz_align;
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index c418113c6b20..c10ca3c20b36 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -1291,6 +1291,7 @@ int netvsc_poll(struct napi_struct *napi, int budget)
+ struct hv_device *device = netvsc_channel_to_device(channel);
+ struct net_device *ndev = hv_get_drvdata(device);
+ int work_done = 0;
++ int ret;
+
+ /* If starting a new interval */
+ if (!nvchan->desc)
+@@ -1302,16 +1303,18 @@ int netvsc_poll(struct napi_struct *napi, int budget)
+ nvchan->desc = hv_pkt_iter_next(channel, nvchan->desc);
+ }
+
+- /* If send of pending receive completions suceeded
+- * and did not exhaust NAPI budget this time
+- * and not doing busy poll
++ /* Send any pending receive completions */
++ ret = send_recv_completions(ndev, net_device, nvchan);
++
++ /* If it did not exhaust NAPI budget this time
++ * and not doing busy poll
+ * then re-enable host interrupts
+- * and reschedule if ring is not empty.
++ * and reschedule if ring is not empty
++ * or sending receive completion failed.
+ */
+- if (send_recv_completions(ndev, net_device, nvchan) == 0 &&
+- work_done < budget &&
++ if (work_done < budget &&
+ napi_complete_done(napi, work_done) &&
+- hv_end_read(&channel->inbound) &&
++ (ret || hv_end_read(&channel->inbound)) &&
+ napi_schedule_prep(napi)) {
+ hv_begin_read(&channel->inbound);
+ __napi_schedule(napi);
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 9e4ba8e80a18..5aa081fda447 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1720,11 +1720,8 @@ EXPORT_SYMBOL(genphy_loopback);
+
+ static int __set_phy_supported(struct phy_device *phydev, u32 max_speed)
+ {
+- /* The default values for phydev->supported are provided by the PHY
+- * driver "features" member, we want to reset to sane defaults first
+- * before supporting higher speeds.
+- */
+- phydev->supported &= PHY_DEFAULT_FEATURES;
++ phydev->supported &= ~(PHY_1000BT_FEATURES | PHY_100BT_FEATURES |
++ PHY_10BT_FEATURES);
+
+ switch (max_speed) {
+ default:
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 3d4f7959dabb..b1b3d8f7e67d 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -642,10 +642,12 @@ static void ax88772_restore_phy(struct usbnet *dev)
+ priv->presvd_phy_advertise);
+
+ /* Restore BMCR */
++ if (priv->presvd_phy_bmcr & BMCR_ANENABLE)
++ priv->presvd_phy_bmcr |= BMCR_ANRESTART;
++
+ asix_mdio_write_nopm(dev->net, dev->mii.phy_id, MII_BMCR,
+ priv->presvd_phy_bmcr);
+
+- mii_nway_restart(&dev->mii);
+ priv->presvd_phy_advertise = 0;
+ priv->presvd_phy_bmcr = 0;
+ }
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 04c22f508ed9..f8f90d77cf0f 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1253,6 +1253,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x1e0e, 0x9001, 5)}, /* SIMCom 7230E */
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0125, 4)}, /* Quectel EC25, EC20 R2.0 Mini PCIe */
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)}, /* Quectel EC21 Mini PCIe */
++ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)}, /* Quectel EG91 */
+ {QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0306, 4)}, /* Quectel EP06 Mini PCIe */
+
+diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
+index 767c485af59b..522719b494f3 100644
+--- a/drivers/ptp/ptp_chardev.c
++++ b/drivers/ptp/ptp_chardev.c
+@@ -89,6 +89,7 @@ int ptp_set_pinfunc(struct ptp_clock *ptp, unsigned int pin,
+ case PTP_PF_PHYSYNC:
+ if (chan != 0)
+ return -EINVAL;
++ break;
+ default:
+ return -EINVAL;
+ }
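+
+The one-line ptp_chardev.c fix above adds a missing break: without it, a valid PTP_PF_PHYSYNC request on channel 0 falls through into the default case and is rejected with -EINVAL. A standalone illustration of the fall-through pattern (enum and values here are illustrative, not the kernel's):
+
+ #include <stdio.h>
+
+ enum pin_func { PF_NONE, PF_EXTTS, PF_PHYSYNC };
+
+ static int set_pinfunc(enum pin_func func, unsigned int chan)
+ {
+ 	switch (func) {
+ 	case PF_NONE:
+ 	case PF_EXTTS:
+ 		break;
+ 	case PF_PHYSYNC:
+ 		if (chan != 0)
+ 			return -22;	/* -EINVAL */
+ 		break;	/* the patch adds this break; without it we fall into default */
+ 	default:
+ 		return -22;	/* unknown function */
+ 	}
+ 	return 0;	/* pin setting accepted */
+ }
+
+ int main(void)
+ {
+ 	printf("PHYSYNC on chan 0 -> %d (expect 0)\n",
+ 	       set_pinfunc(PF_PHYSYNC, 0));
+ 	return 0;
+ }
+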
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index eb2ec1fb07cb..209de7cd9358 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -361,6 +361,8 @@ struct ct_arg {
+ dma_addr_t rsp_dma;
+ u32 req_size;
+ u32 rsp_size;
++ u32 req_allocated_size;
++ u32 rsp_allocated_size;
+ void *req;
+ void *rsp;
+ port_id_t id;
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 05abe5aaab7f..cbfbab5d9a59 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -556,7 +556,7 @@ static void qla2x00_async_sns_sp_done(void *s, int rc)
+ /* please ignore kernel warning. otherwise, we have mem leak. */
+ if (sp->u.iocb_cmd.u.ctarg.req) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.req,
+ sp->u.iocb_cmd.u.ctarg.req_dma);
+ sp->u.iocb_cmd.u.ctarg.req = NULL;
+@@ -564,7 +564,7 @@ static void qla2x00_async_sns_sp_done(void *s, int rc)
+
+ if (sp->u.iocb_cmd.u.ctarg.rsp) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.rsp,
+ sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+@@ -617,6 +617,7 @@ static int qla_async_rftid(scsi_qla_host_t *vha, port_id_t *d_id)
+ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+ GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.req) {
+ ql_log(ql_log_warn, vha, 0xd041,
+ "%s: Failed to allocate ct_sns request.\n",
+@@ -627,6 +628,7 @@ static int qla_async_rftid(scsi_qla_host_t *vha, port_id_t *d_id)
+ sp->u.iocb_cmd.u.ctarg.rsp = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.rsp_dma,
+ GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.rsp) {
+ ql_log(ql_log_warn, vha, 0xd042,
+ "%s: Failed to allocate ct_sns request.\n",
+@@ -712,6 +714,7 @@ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+ GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.req) {
+ ql_log(ql_log_warn, vha, 0xd041,
+ "%s: Failed to allocate ct_sns request.\n",
+@@ -722,6 +725,7 @@ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+ sp->u.iocb_cmd.u.ctarg.rsp = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.rsp_dma,
+ GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.rsp) {
+ ql_log(ql_log_warn, vha, 0xd042,
+ "%s: Failed to allocate ct_sns request.\n",
+@@ -802,6 +806,7 @@ static int qla_async_rnnid(scsi_qla_host_t *vha, port_id_t *d_id,
+ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+ GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.req) {
+ ql_log(ql_log_warn, vha, 0xd041,
+ "%s: Failed to allocate ct_sns request.\n",
+@@ -812,6 +817,7 @@ static int qla_async_rnnid(scsi_qla_host_t *vha, port_id_t *d_id,
+ sp->u.iocb_cmd.u.ctarg.rsp = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.rsp_dma,
+ GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.rsp) {
+ ql_log(ql_log_warn, vha, 0xd042,
+ "%s: Failed to allocate ct_sns request.\n",
+@@ -909,6 +915,7 @@ static int qla_async_rsnn_nn(scsi_qla_host_t *vha)
+ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+ GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.req) {
+ ql_log(ql_log_warn, vha, 0xd041,
+ "%s: Failed to allocate ct_sns request.\n",
+@@ -919,6 +926,7 @@ static int qla_async_rsnn_nn(scsi_qla_host_t *vha)
+ sp->u.iocb_cmd.u.ctarg.rsp = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.rsp_dma,
+ GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.rsp) {
+ ql_log(ql_log_warn, vha, 0xd042,
+ "%s: Failed to allocate ct_sns request.\n",
+@@ -3392,14 +3400,14 @@ void qla24xx_sp_unmap(scsi_qla_host_t *vha, srb_t *sp)
+ {
+ if (sp->u.iocb_cmd.u.ctarg.req) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.req,
+ sp->u.iocb_cmd.u.ctarg.req_dma);
+ sp->u.iocb_cmd.u.ctarg.req = NULL;
+ }
+ if (sp->u.iocb_cmd.u.ctarg.rsp) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.rsp,
+ sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+@@ -3600,14 +3608,14 @@ static void qla2x00_async_gpnid_sp_done(void *s, int res)
+ /* please ignore kernel warning. otherwise, we have mem leak. */
+ if (sp->u.iocb_cmd.u.ctarg.req) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.req,
+ sp->u.iocb_cmd.u.ctarg.req_dma);
+ sp->u.iocb_cmd.u.ctarg.req = NULL;
+ }
+ if (sp->u.iocb_cmd.u.ctarg.rsp) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.rsp,
+ sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+@@ -3658,6 +3666,7 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+ GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.req) {
+ ql_log(ql_log_warn, vha, 0xd041,
+ "Failed to allocate ct_sns request.\n");
+@@ -3667,6 +3676,7 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ sp->u.iocb_cmd.u.ctarg.rsp = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.rsp_dma,
+ GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.rsp) {
+ ql_log(ql_log_warn, vha, 0xd042,
+ "Failed to allocate ct_sns request.\n");
+@@ -4125,14 +4135,14 @@ static void qla2x00_async_gpnft_gnnft_sp_done(void *s, int res)
+ */
+ if (sp->u.iocb_cmd.u.ctarg.req) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.req,
+ sp->u.iocb_cmd.u.ctarg.req_dma);
+ sp->u.iocb_cmd.u.ctarg.req = NULL;
+ }
+ if (sp->u.iocb_cmd.u.ctarg.rsp) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.rsp,
+ sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+@@ -4162,14 +4172,14 @@ static void qla2x00_async_gpnft_gnnft_sp_done(void *s, int res)
+ /* please ignore kernel warning. Otherwise, we have mem leak. */
+ if (sp->u.iocb_cmd.u.ctarg.req) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.req,
+ sp->u.iocb_cmd.u.ctarg.req_dma);
+ sp->u.iocb_cmd.u.ctarg.req = NULL;
+ }
+ if (sp->u.iocb_cmd.u.ctarg.rsp) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.rsp,
+ sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+@@ -4264,14 +4274,14 @@ static int qla24xx_async_gnnft(scsi_qla_host_t *vha, struct srb *sp,
+ done_free_sp:
+ if (sp->u.iocb_cmd.u.ctarg.req) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.req,
+ sp->u.iocb_cmd.u.ctarg.req_dma);
+ sp->u.iocb_cmd.u.ctarg.req = NULL;
+ }
+ if (sp->u.iocb_cmd.u.ctarg.rsp) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.rsp,
+ sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+@@ -4332,6 +4342,7 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ sp->u.iocb_cmd.u.ctarg.req = dma_zalloc_coherent(
+ &vha->hw->pdev->dev, sizeof(struct ct_sns_pkt),
+ &sp->u.iocb_cmd.u.ctarg.req_dma, GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.req) {
+ ql_log(ql_log_warn, vha, 0xffff,
+ "Failed to allocate ct_sns request.\n");
+@@ -4349,6 +4360,7 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ sp->u.iocb_cmd.u.ctarg.rsp = dma_zalloc_coherent(
+ &vha->hw->pdev->dev, rspsz,
+ &sp->u.iocb_cmd.u.ctarg.rsp_dma, GFP_KERNEL);
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = sizeof(struct ct_sns_pkt);
+ if (!sp->u.iocb_cmd.u.ctarg.rsp) {
+ ql_log(ql_log_warn, vha, 0xffff,
+ "Failed to allocate ct_sns request.\n");
+@@ -4408,14 +4420,14 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ done_free_sp:
+ if (sp->u.iocb_cmd.u.ctarg.req) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.req_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.req,
+ sp->u.iocb_cmd.u.ctarg.req_dma);
+ sp->u.iocb_cmd.u.ctarg.req = NULL;
+ }
+ if (sp->u.iocb_cmd.u.ctarg.rsp) {
+ dma_free_coherent(&vha->hw->pdev->dev,
+- sizeof(struct ct_sns_pkt),
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
+ sp->u.iocb_cmd.u.ctarg.rsp,
+ sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
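+
+All of the qla2xxx changes above follow one pattern: remember the size handed to dma_alloc_coherent() in req_allocated_size/rsp_allocated_size and free with that recorded size, since some requests (the GPN_FT response, for example, is rspsz bytes) are larger than sizeof(struct ct_sns_pkt). A sketch of the bookkeeping with plain malloc()/free() standing in for the DMA API; struct and function names are illustrative:
+
+ #include <stdlib.h>
+ #include <string.h>
+
+ struct ct_target {
+ 	void   *req;
+ 	size_t  req_allocated_size;	/* remembered at allocation time */
+ };
+
+ static int ct_alloc_req(struct ct_target *t, size_t size)
+ {
+ 	t->req = malloc(size);
+ 	if (!t->req)
+ 		return -1;
+ 	memset(t->req, 0, size);
+ 	/* The free path must use exactly this size (the DMA API requires it). */
+ 	t->req_allocated_size = size;
+ 	return 0;
+ }
+
+ static void ct_free_req(struct ct_target *t)
+ {
+ 	if (t->req) {
+ 		/* dma_free_coherent() would take t->req_allocated_size here. */
+ 		free(t->req);
+ 		t->req = NULL;
+ 		t->req_allocated_size = 0;
+ 	}
+ }
+
+ int main(void)
+ {
+ 	struct ct_target t = { 0, 0 };
+
+ 	if (ct_alloc_req(&t, 4096) == 0)
+ 		ct_free_req(&t);
+ 	return 0;
+ }
+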
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 636960ad029a..0cb552268be3 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -591,12 +591,14 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
+ conflict_fcport =
+ qla2x00_find_fcport_by_wwpn(vha,
+ e->port_name, 0);
+- ql_dbg(ql_dbg_disc, vha, 0x20e6,
+- "%s %d %8phC post del sess\n",
+- __func__, __LINE__,
+- conflict_fcport->port_name);
+- qlt_schedule_sess_for_deletion
+- (conflict_fcport);
++ if (conflict_fcport) {
++ qlt_schedule_sess_for_deletion
++ (conflict_fcport);
++ ql_dbg(ql_dbg_disc, vha, 0x20e6,
++ "%s %d %8phC post del sess\n",
++ __func__, __LINE__,
++ conflict_fcport->port_name);
++ }
+ }
+
+ /* FW already picked this loop id for another fcport */
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 15eaa6dded04..2b0816dfe9bd 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -3180,6 +3180,8 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ "req->req_q_in=%p req->req_q_out=%p rsp->rsp_q_in=%p rsp->rsp_q_out=%p.\n",
+ req->req_q_in, req->req_q_out, rsp->rsp_q_in, rsp->rsp_q_out);
+
++ ha->wq = alloc_workqueue("qla2xxx_wq", 0, 0);
++
+ if (ha->isp_ops->initialize_adapter(base_vha)) {
+ ql_log(ql_log_fatal, base_vha, 0x00d6,
+ "Failed to initialize adapter - Adapter flags %x.\n",
+@@ -3216,8 +3218,6 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ host->can_queue, base_vha->req,
+ base_vha->mgmt_svr_loop_id, host->sg_tablesize);
+
+- ha->wq = alloc_workqueue("qla2xxx_wq", 0, 0);
+-
+ if (ha->mqenable) {
+ bool mq = false;
+ bool startit = false;
+diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
+index 210407cd2341..da868f6c9638 100644
+--- a/drivers/scsi/sd_zbc.c
++++ b/drivers/scsi/sd_zbc.c
+@@ -401,7 +401,8 @@ static int sd_zbc_check_capacity(struct scsi_disk *sdkp, unsigned char *buf)
+ * Check that all zones of the device are equal. The last zone can however
+ * be smaller. The zone size must also be a power of two number of LBAs.
+ *
+- * Returns the zone size in bytes upon success or an error code upon failure.
++ * Returns the zone size in number of blocks upon success or an error code
++ * upon failure.
+ */
+ static s64 sd_zbc_check_zone_size(struct scsi_disk *sdkp)
+ {
+@@ -411,7 +412,7 @@ static s64 sd_zbc_check_zone_size(struct scsi_disk *sdkp)
+ unsigned char *rec;
+ unsigned int buf_len;
+ unsigned int list_length;
+- int ret;
++ s64 ret;
+ u8 same;
+
+ /* Get a buffer */
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 711da3306b14..61c3dc2f3be5 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -844,6 +844,41 @@ static void xhci_disable_port_wake_on_bits(struct xhci_hcd *xhci)
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ }
+
++static bool xhci_pending_portevent(struct xhci_hcd *xhci)
++{
++ __le32 __iomem **port_array;
++ int port_index;
++ u32 status;
++ u32 portsc;
++
++ status = readl(&xhci->op_regs->status);
++ if (status & STS_EINT)
++ return true;
++ /*
++ * Checking STS_EINT is not enough as there is a lag between a change
++ * bit being set and the Port Status Change Event that it generated
++ * being written to the Event Ring. See note in xhci 1.1 section 4.19.2.
++ */
++
++ port_index = xhci->num_usb2_ports;
++ port_array = xhci->usb2_ports;
++ while (port_index--) {
++ portsc = readl(port_array[port_index]);
++ if (portsc & PORT_CHANGE_MASK ||
++ (portsc & PORT_PLS_MASK) == XDEV_RESUME)
++ return true;
++ }
++ port_index = xhci->num_usb3_ports;
++ port_array = xhci->usb3_ports;
++ while (port_index--) {
++ portsc = readl(port_array[port_index]);
++ if (portsc & PORT_CHANGE_MASK ||
++ (portsc & PORT_PLS_MASK) == XDEV_RESUME)
++ return true;
++ }
++ return false;
++}
++
+ /*
+ * Stop HC (not bus-specific)
+ *
+@@ -945,7 +980,7 @@ EXPORT_SYMBOL_GPL(xhci_suspend);
+ */
+ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ {
+- u32 command, temp = 0, status;
++ u32 command, temp = 0;
+ struct usb_hcd *hcd = xhci_to_hcd(xhci);
+ struct usb_hcd *secondary_hcd;
+ int retval = 0;
+@@ -1069,8 +1104,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ done:
+ if (retval == 0) {
+ /* Resume root hubs only when have pending events. */
+- status = readl(&xhci->op_regs->status);
+- if (status & STS_EINT) {
++ if (xhci_pending_portevent(xhci)) {
+ usb_hcd_resume_root_hub(xhci->shared_hcd);
+ usb_hcd_resume_root_hub(hcd);
+ }
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 6dfc4867dbcf..9751c1373fbb 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -382,6 +382,10 @@ struct xhci_op_regs {
+ #define PORT_PLC (1 << 22)
+ /* port configure error change - port failed to configure its link partner */
+ #define PORT_CEC (1 << 23)
++#define PORT_CHANGE_MASK (PORT_CSC | PORT_PEC | PORT_WRC | PORT_OCC | \
++ PORT_RC | PORT_PLC | PORT_CEC)
++
++
+ /* Cold Attach Status - xHC can set this bit to report device attached during
+ * Sx state. Warm port reset should be performed to clear this bit and move port
+ * to connected state.
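+
+xhci_pending_portevent(), added above, compensates for the lag between STS_EINT and the Port Status Change Event by also scanning every PORTSC register for a change bit or a resume link state. A userspace sketch of that scan; the mask values mirror the xHCI PORTSC bit layout but are hard-coded here for illustration only:
+
+ #include <stdbool.h>
+ #include <stdint.h>
+ #include <stdio.h>
+
+ /* PORTSC change bits 17-23 (CSC..CEC) and the link-state field, per xHCI. */
+ #define PORT_CHANGE_MASK 0x00fe0000u
+ #define PORT_PLS_MASK    0x000001e0u
+ #define XDEV_RESUME      0x000001e0u	/* PLS field value 15 = Resume */
+
+ static bool pending_portevent(const uint32_t *portsc, int nports)
+ {
+ 	while (nports--) {
+ 		uint32_t v = portsc[nports];
+
+ 		if ((v & PORT_CHANGE_MASK) ||
+ 		    (v & PORT_PLS_MASK) == XDEV_RESUME)
+ 			return true;
+ 	}
+ 	return false;
+ }
+
+ int main(void)
+ {
+ 	uint32_t ports[2] = { 0, 1u << 22 };	/* second port has PLC set */
+
+ 	printf("%s\n", pending_portevent(ports, 2) ?
+ 	       "resume root hubs" : "no pending port event");
+ 	return 0;
+ }
+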
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index b423a309a6e0..125b58eff936 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -28,6 +28,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/vfio.h>
+ #include <linux/vgaarb.h>
++#include <linux/nospec.h>
+
+ #include "vfio_pci_private.h"
+
+@@ -727,6 +728,9 @@ static long vfio_pci_ioctl(void *device_data,
+ if (info.index >=
+ VFIO_PCI_NUM_REGIONS + vdev->num_regions)
+ return -EINVAL;
++ info.index = array_index_nospec(info.index,
++ VFIO_PCI_NUM_REGIONS +
++ vdev->num_regions);
+
+ i = info.index - VFIO_PCI_NUM_REGIONS;
+
+diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
+index 759a5bdd40e1..2da5f054257a 100644
+--- a/drivers/vfio/vfio_iommu_spapr_tce.c
++++ b/drivers/vfio/vfio_iommu_spapr_tce.c
+@@ -457,13 +457,13 @@ static void tce_iommu_unuse_page(struct tce_container *container,
+ }
+
+ static int tce_iommu_prereg_ua_to_hpa(struct tce_container *container,
+- unsigned long tce, unsigned long size,
++ unsigned long tce, unsigned long shift,
+ unsigned long *phpa, struct mm_iommu_table_group_mem_t **pmem)
+ {
+ long ret = 0;
+ struct mm_iommu_table_group_mem_t *mem;
+
+- mem = mm_iommu_lookup(container->mm, tce, size);
++ mem = mm_iommu_lookup(container->mm, tce, 1ULL << shift);
+ if (!mem)
+ return -EINVAL;
+
+@@ -487,7 +487,7 @@ static void tce_iommu_unuse_page_v2(struct tce_container *container,
+ if (!pua)
+ return;
+
+- ret = tce_iommu_prereg_ua_to_hpa(container, *pua, IOMMU_PAGE_SIZE(tbl),
++ ret = tce_iommu_prereg_ua_to_hpa(container, *pua, tbl->it_page_shift,
+ &hpa, &mem);
+ if (ret)
+ pr_debug("%s: tce %lx at #%lx was not cached, ret=%d\n",
+@@ -611,7 +611,7 @@ static long tce_iommu_build_v2(struct tce_container *container,
+ entry + i);
+
+ ret = tce_iommu_prereg_ua_to_hpa(container,
+- tce, IOMMU_PAGE_SIZE(tbl), &hpa, &mem);
++ tce, tbl->it_page_shift, &hpa, &mem);
+ if (ret)
+ break;
+
+diff --git a/fs/fat/inode.c b/fs/fat/inode.c
+index ffbbf0520d9e..6aa49dcaa938 100644
+--- a/fs/fat/inode.c
++++ b/fs/fat/inode.c
+@@ -697,13 +697,21 @@ static void fat_set_state(struct super_block *sb,
+ brelse(bh);
+ }
+
++static void fat_reset_iocharset(struct fat_mount_options *opts)
++{
++ if (opts->iocharset != fat_default_iocharset) {
++ /* Note: opts->iocharset can be NULL here */
++ kfree(opts->iocharset);
++ opts->iocharset = fat_default_iocharset;
++ }
++}
++
+ static void delayed_free(struct rcu_head *p)
+ {
+ struct msdos_sb_info *sbi = container_of(p, struct msdos_sb_info, rcu);
+ unload_nls(sbi->nls_disk);
+ unload_nls(sbi->nls_io);
+- if (sbi->options.iocharset != fat_default_iocharset)
+- kfree(sbi->options.iocharset);
++ fat_reset_iocharset(&sbi->options);
+ kfree(sbi);
+ }
+
+@@ -1118,7 +1126,7 @@ static int parse_options(struct super_block *sb, char *options, int is_vfat,
+ opts->fs_fmask = opts->fs_dmask = current_umask();
+ opts->allow_utime = -1;
+ opts->codepage = fat_default_codepage;
+- opts->iocharset = fat_default_iocharset;
++ fat_reset_iocharset(opts);
+ if (is_vfat) {
+ opts->shortname = VFAT_SFN_DISPLAY_WINNT|VFAT_SFN_CREATE_WIN95;
+ opts->rodir = 0;
+@@ -1275,8 +1283,7 @@ static int parse_options(struct super_block *sb, char *options, int is_vfat,
+
+ /* vfat specific */
+ case Opt_charset:
+- if (opts->iocharset != fat_default_iocharset)
+- kfree(opts->iocharset);
++ fat_reset_iocharset(opts);
+ iocharset = match_strdup(&args[0]);
+ if (!iocharset)
+ return -ENOMEM;
+@@ -1867,8 +1874,7 @@ int fat_fill_super(struct super_block *sb, void *data, int silent, int isvfat,
+ iput(fat_inode);
+ unload_nls(sbi->nls_io);
+ unload_nls(sbi->nls_disk);
+- if (sbi->options.iocharset != fat_default_iocharset)
+- kfree(sbi->options.iocharset);
++ fat_reset_iocharset(&sbi->options);
+ sb->s_fs_info = NULL;
+ kfree(sbi);
+ return error;
+diff --git a/fs/internal.h b/fs/internal.h
+index 980d005b21b4..5645b4ebf494 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -127,7 +127,6 @@ int do_fchownat(int dfd, const char __user *filename, uid_t user, gid_t group,
+
+ extern int open_check_o_direct(struct file *f);
+ extern int vfs_open(const struct path *, struct file *, const struct cred *);
+-extern struct file *filp_clone_open(struct file *);
+
+ /*
+ * inode.c
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 760d8da1b6c7..81fe0292a7ac 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2401,6 +2401,7 @@ extern struct file *filp_open(const char *, int, umode_t);
+ extern struct file *file_open_root(struct dentry *, struct vfsmount *,
+ const char *, int, umode_t);
+ extern struct file * dentry_open(const struct path *, int, const struct cred *);
++extern struct file *filp_clone_open(struct file *);
+ extern int filp_close(struct file *, fl_owner_t id);
+
+ extern struct filename *getname_flags(const char __user *, int, int *);
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index 5be31eb7b266..108ede99e533 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -75,7 +75,7 @@ extern long _do_fork(unsigned long, unsigned long, unsigned long, int __user *,
+ extern long do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *);
+ struct task_struct *fork_idle(int);
+ extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
+-extern long kernel_wait4(pid_t, int *, int, struct rusage *);
++extern long kernel_wait4(pid_t, int __user *, int, struct rusage *);
+
+ extern void free_task(struct task_struct *tsk);
+
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 9065477ed255..15d8f9c84ca5 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -628,6 +628,7 @@ typedef unsigned char *sk_buff_data_t;
+ * @hash: the packet hash
+ * @queue_mapping: Queue mapping for multiqueue devices
+ * @xmit_more: More SKBs are pending for this queue
++ * @pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
+ * @ndisc_nodetype: router type (from link layer)
+ * @ooo_okay: allow the mapping of a socket to a queue to be changed
+ * @l4_hash: indicate hash is a canonical 4-tuple hash over transport
+@@ -733,7 +734,7 @@ struct sk_buff {
+ peeked:1,
+ head_frag:1,
+ xmit_more:1,
+- __unused:1; /* one bit hole */
++ pfmemalloc:1;
+
+ /* fields enclosed in headers_start/headers_end are copied
+ * using a single memcpy() in __copy_skb_header()
+@@ -752,31 +753,30 @@ struct sk_buff {
+
+ __u8 __pkt_type_offset[0];
+ __u8 pkt_type:3;
+- __u8 pfmemalloc:1;
+ __u8 ignore_df:1;
+-
+ __u8 nf_trace:1;
+ __u8 ip_summed:2;
+ __u8 ooo_okay:1;
++
+ __u8 l4_hash:1;
+ __u8 sw_hash:1;
+ __u8 wifi_acked_valid:1;
+ __u8 wifi_acked:1;
+-
+ __u8 no_fcs:1;
+ /* Indicates the inner headers are valid in the skbuff. */
+ __u8 encapsulation:1;
+ __u8 encap_hdr_csum:1;
+ __u8 csum_valid:1;
++
+ __u8 csum_complete_sw:1;
+ __u8 csum_level:2;
+ __u8 csum_not_inet:1;
+-
+ __u8 dst_pending_confirm:1;
+ #ifdef CONFIG_IPV6_NDISC_NODETYPE
+ __u8 ndisc_nodetype:2;
+ #endif
+ __u8 ipvs_property:1;
++
+ __u8 inner_protocol_type:1;
+ __u8 remcsum_offload:1;
+ #ifdef CONFIG_NET_SWITCHDEV
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index a406f2e8680a..aeebbbb9e0bd 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -829,7 +829,7 @@ static inline __be32 ip6_make_flowlabel(struct net *net, struct sk_buff *skb,
+ * to minimize possibility that any useful information to an
+ * attacker is leaked. Only lower 20 bits are relevant.
+ */
+- rol32(hash, 16);
++ hash = rol32(hash, 16);
+
+ flowlabel = (__force __be32)hash & IPV6_FLOWLABEL_MASK;
+
+diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h
+index 35498e613ff5..edfa9d0f6005 100644
+--- a/include/net/sctp/sctp.h
++++ b/include/net/sctp/sctp.h
+@@ -609,10 +609,15 @@ static inline struct dst_entry *sctp_transport_dst_check(struct sctp_transport *
+ return t->dst;
+ }
+
++static inline __u32 sctp_dst_mtu(const struct dst_entry *dst)
++{
++ return SCTP_TRUNC4(max_t(__u32, dst_mtu(dst),
++ SCTP_DEFAULT_MINSEGMENT));
++}
++
+ static inline bool sctp_transport_pmtu_check(struct sctp_transport *t)
+ {
+- __u32 pmtu = max_t(size_t, SCTP_TRUNC4(dst_mtu(t->dst)),
+- SCTP_DEFAULT_MINSEGMENT);
++ __u32 pmtu = sctp_dst_mtu(t->dst);
+
+ if (t->pathmtu == pmtu)
+ return true;
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index 64c0291b579c..2f6fa95de2d8 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -270,7 +270,11 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ goto retry;
+ }
+
+- wake_up_q(&wakeq);
++ if (!err) {
++ preempt_disable();
++ wake_up_q(&wakeq);
++ preempt_enable();
++ }
+
+ return err;
+ }
+diff --git a/lib/rhashtable.c b/lib/rhashtable.c
+index 2b2b79974b61..240a8b864d5b 100644
+--- a/lib/rhashtable.c
++++ b/lib/rhashtable.c
+@@ -923,8 +923,16 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_stop);
+
+ static size_t rounded_hashtable_size(const struct rhashtable_params *params)
+ {
+- return max(roundup_pow_of_two(params->nelem_hint * 4 / 3),
+- (unsigned long)params->min_size);
++ size_t retsize;
++
++ if (params->nelem_hint)
++ retsize = max(roundup_pow_of_two(params->nelem_hint * 4 / 3),
++ (unsigned long)params->min_size);
++ else
++ retsize = max(HASH_DEFAULT_SIZE,
++ (unsigned long)params->min_size);
++
++ return retsize;
+ }
+
+ static u32 rhashtable_jhash2(const void *key, u32 length, u32 seed)
+@@ -981,8 +989,6 @@ int rhashtable_init(struct rhashtable *ht,
+ struct bucket_table *tbl;
+ size_t size;
+
+- size = HASH_DEFAULT_SIZE;
+-
+ if ((!params->key_len && !params->obj_hashfn) ||
+ (params->obj_hashfn && !params->obj_cmpfn))
+ return -EINVAL;
+@@ -1009,8 +1015,7 @@ int rhashtable_init(struct rhashtable *ht,
+
+ ht->p.min_size = max_t(u16, ht->p.min_size, HASH_MIN_SIZE);
+
+- if (params->nelem_hint)
+- size = rounded_hashtable_size(&ht->p);
++ size = rounded_hashtable_size(&ht->p);
+
+ if (params->locks_mul)
+ ht->p.locks_mul = roundup_pow_of_two(params->locks_mul);
+@@ -1102,13 +1107,14 @@ void rhashtable_free_and_destroy(struct rhashtable *ht,
+ void (*free_fn)(void *ptr, void *arg),
+ void *arg)
+ {
+- struct bucket_table *tbl;
++ struct bucket_table *tbl, *next_tbl;
+ unsigned int i;
+
+ cancel_work_sync(&ht->run_work);
+
+ mutex_lock(&ht->mutex);
+ tbl = rht_dereference(ht->tbl, ht);
++restart:
+ if (free_fn) {
+ for (i = 0; i < tbl->size; i++) {
+ struct rhash_head *pos, *next;
+@@ -1125,7 +1131,12 @@ void rhashtable_free_and_destroy(struct rhashtable *ht,
+ }
+ }
+
++ next_tbl = rht_dereference(tbl->future_tbl, ht);
+ bucket_table_free(tbl);
++ if (next_tbl) {
++ tbl = next_tbl;
++ goto restart;
++ }
+ mutex_unlock(&ht->mutex);
+ }
+ EXPORT_SYMBOL_GPL(rhashtable_free_and_destroy);
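+
+The rhashtable_free_and_destroy() fix above walks the future_tbl chain so that tables left behind by an in-flight resize are freed too, instead of only the first one. The core of that walk, reduced to a userspace sketch:
+
+ #include <stdio.h>
+ #include <stdlib.h>
+
+ struct bucket_table {
+ 	struct bucket_table *future_tbl;	/* next table during a resize */
+ 	/* ... buckets would live here ... */
+ };
+
+ /* Free every table in the chain, not just the first one. */
+ static void free_table_chain(struct bucket_table *tbl)
+ {
+ 	while (tbl) {
+ 		struct bucket_table *next = tbl->future_tbl;
+
+ 		free(tbl);
+ 		tbl = next;
+ 	}
+ }
+
+ int main(void)
+ {
+ 	struct bucket_table *a = calloc(1, sizeof(*a));
+ 	struct bucket_table *b = calloc(1, sizeof(*b));
+
+ 	if (!a || !b)
+ 		return 1;
+ 	a->future_tbl = b;	/* simulate a resize caught in flight */
+ 	free_table_chain(a);	/* frees both a and b */
+ 	puts("chain freed");
+ 	return 0;
+ }
+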
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index b9f3dbd885bd..327e12679dd5 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2087,6 +2087,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+ if (vma_is_dax(vma))
+ return;
+ page = pmd_page(_pmd);
++ if (!PageDirty(page) && pmd_dirty(_pmd))
++ set_page_dirty(page);
+ if (!PageReferenced(page) && pmd_young(_pmd))
+ SetPageReferenced(page);
+ page_remove_rmap(page, true);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 2bd3df3d101a..95c0980a6f7e 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -850,7 +850,7 @@ static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
+ int nid;
+ int i;
+
+- while ((memcg = parent_mem_cgroup(memcg))) {
++ for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+ for_each_node(nid) {
+ mz = mem_cgroup_nodeinfo(memcg, nid);
+ for (i = 0; i <= DEF_PRIORITY; i++) {
+diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
+index b2b2323bdc84..188d693cb251 100644
+--- a/net/core/gen_stats.c
++++ b/net/core/gen_stats.c
+@@ -77,8 +77,20 @@ gnet_stats_start_copy_compat(struct sk_buff *skb, int type, int tc_stats_type,
+ d->lock = lock;
+ spin_lock_bh(lock);
+ }
+- if (d->tail)
+- return gnet_stats_copy(d, type, NULL, 0, padattr);
++ if (d->tail) {
++ int ret = gnet_stats_copy(d, type, NULL, 0, padattr);
++
++ /* The initial attribute added in gnet_stats_copy() may be
++ * preceded by a padding attribute, in which case d->tail will
++ * end up pointing at the padding instead of the real attribute.
++ * Fix this so gnet_stats_finish_copy() adjusts the length of
++ * the right attribute.
++ */
++ if (ret == 0 && d->tail->nla_type == padattr)
++ d->tail = (struct nlattr *)((char *)d->tail +
++ NLA_ALIGN(d->tail->nla_len));
++ return ret;
++ }
+
+ return 0;
+ }
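+
+The gen_stats.c comment above explains the fix: when the attribute written by gnet_stats_copy() is preceded by a padding attribute, d->tail must be advanced past the padding so that gnet_stats_finish_copy() later adjusts the length of the real attribute. A sketch of stepping over a netlink attribute using NLA_ALIGN; the struct layout matches netlink's TLV header, but the padattr value is illustrative:
+
+ #include <stdio.h>
+ #include <stdint.h>
+
+ #define NLA_ALIGNTO	4
+ #define NLA_ALIGN(len)	(((len) + NLA_ALIGNTO - 1) & ~(NLA_ALIGNTO - 1))
+
+ struct nlattr {
+ 	uint16_t nla_len;
+ 	uint16_t nla_type;
+ };
+
+ /* If tail landed on the padding attribute, step over it to the real one. */
+ static struct nlattr *skip_padding(struct nlattr *tail, int padattr)
+ {
+ 	if (tail->nla_type == padattr)
+ 		tail = (struct nlattr *)((char *)tail +
+ 					 NLA_ALIGN(tail->nla_len));
+ 	return tail;
+ }
+
+ int main(void)
+ {
+ 	unsigned char buf[16] = { 0 };
+ 	struct nlattr *pad = (struct nlattr *)buf;
+ 	struct nlattr *real;
+
+ 	pad->nla_len = 5;	/* 4-byte header plus 1 byte of padding payload */
+ 	pad->nla_type = 99;	/* hypothetical padattr type */
+ 	real = skip_padding(pad, 99);
+ 	printf("stepped over %td bytes\n", (char *)real - (char *)buf); /* 8 */
+ 	return 0;
+ }
+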
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 345b51837ca8..a84d69c047ac 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -858,6 +858,7 @@ static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
+ n->cloned = 1;
+ n->nohdr = 0;
+ n->peeked = 0;
++ C(pfmemalloc);
+ n->destructor = NULL;
+ C(tail);
+ C(end);
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index e66172aaf241..511d6748ea5f 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -300,6 +300,7 @@ __be32 fib_compute_spec_dst(struct sk_buff *skb)
+ if (!ipv4_is_zeronet(ip_hdr(skb)->saddr)) {
+ struct flowi4 fl4 = {
+ .flowi4_iif = LOOPBACK_IFINDEX,
++ .flowi4_oif = l3mdev_master_ifindex_rcu(dev),
+ .daddr = ip_hdr(skb)->saddr,
+ .flowi4_tos = RT_TOS(ip_hdr(skb)->tos),
+ .flowi4_scope = scope,
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 2f600f261690..61e42a3390ba 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -187,8 +187,9 @@ static int ipv4_ping_group_range(struct ctl_table *table, int write,
+ if (write && ret == 0) {
+ low = make_kgid(user_ns, urange[0]);
+ high = make_kgid(user_ns, urange[1]);
+- if (!gid_valid(low) || !gid_valid(high) ||
+- (urange[1] < urange[0]) || gid_lt(high, low)) {
++ if (!gid_valid(low) || !gid_valid(high))
++ return -EINVAL;
++ if (urange[1] < urange[0] || gid_lt(high, low)) {
+ low = make_kgid(&init_user_ns, 1);
+ high = make_kgid(&init_user_ns, 0);
+ }
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index c9d00ef54dec..58e316cf6607 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3524,8 +3524,7 @@ int tcp_abort(struct sock *sk, int err)
+ struct request_sock *req = inet_reqsk(sk);
+
+ local_bh_disable();
+- inet_csk_reqsk_queue_drop_and_put(req->rsk_listener,
+- req);
++ inet_csk_reqsk_queue_drop(req->rsk_listener, req);
+ local_bh_enable();
+ return 0;
+ }
+diff --git a/net/ipv6/Kconfig b/net/ipv6/Kconfig
+index 11e4e80cf7e9..0efb914695ac 100644
+--- a/net/ipv6/Kconfig
++++ b/net/ipv6/Kconfig
+@@ -108,6 +108,7 @@ config IPV6_MIP6
+ config IPV6_ILA
+ tristate "IPv6: Identifier Locator Addressing (ILA)"
+ depends on NETFILTER
++ select DST_CACHE
+ select LWTUNNEL
+ ---help---
+ Support for IPv6 Identifier Locator Addressing (ILA).
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 458de353f5d9..1a4d6897d17f 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -927,7 +927,6 @@ static netdev_tx_t ip6gre_tunnel_xmit(struct sk_buff *skb,
+ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+ {
+- struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+ struct ip6_tnl *t = netdev_priv(dev);
+ struct dst_entry *dst = skb_dst(skb);
+ struct net_device_stats *stats;
+@@ -998,6 +997,8 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ goto tx_err;
+ }
+ } else {
++ struct ipv6hdr *ipv6h = ipv6_hdr(skb);
++
+ switch (skb->protocol) {
+ case htons(ETH_P_IP):
+ memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index 525051a886bc..3ff9316616d8 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -811,7 +811,7 @@ static void ndisc_recv_ns(struct sk_buff *skb)
+ return;
+ }
+ }
+- if (ndopts.nd_opts_nonce)
++ if (ndopts.nd_opts_nonce && ndopts.nd_opts_nonce->nd_opt_len == 1)
+ memcpy(&nonce, (u8 *)(ndopts.nd_opts_nonce + 1), 6);
+
+ inc = ipv6_addr_is_multicast(daddr);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index b94345e657f7..3ed4de230830 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -4274,6 +4274,13 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
+ err_nh = nh;
+ goto add_errout;
+ }
++ if (!rt6_qualify_for_ecmp(rt)) {
++ err = -EINVAL;
++ NL_SET_ERR_MSG(extack,
++ "Device only routes can not be added for IPv6 using the multipath API.");
++ dst_release_immediate(&rt->dst);
++ goto cleanup;
++ }
+
+ /* Because each route is added like a single route we remove
+ * these flags after the first nexthop: if there is a collision,
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index 22fa13cf5d8b..846883907cd4 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -479,23 +479,27 @@ static int fq_codel_init(struct Qdisc *sch, struct nlattr *opt,
+ q->cparams.mtu = psched_mtu(qdisc_dev(sch));
+
+ if (opt) {
+- int err = fq_codel_change(sch, opt, extack);
++ err = fq_codel_change(sch, opt, extack);
+ if (err)
+- return err;
++ goto init_failure;
+ }
+
+ err = tcf_block_get(&q->block, &q->filter_list, sch, extack);
+ if (err)
+- return err;
++ goto init_failure;
+
+ if (!q->flows) {
+ q->flows = kvzalloc(q->flows_cnt *
+ sizeof(struct fq_codel_flow), GFP_KERNEL);
+- if (!q->flows)
+- return -ENOMEM;
++ if (!q->flows) {
++ err = -ENOMEM;
++ goto init_failure;
++ }
+ q->backlogs = kvzalloc(q->flows_cnt * sizeof(u32), GFP_KERNEL);
+- if (!q->backlogs)
+- return -ENOMEM;
++ if (!q->backlogs) {
++ err = -ENOMEM;
++ goto alloc_failure;
++ }
+ for (i = 0; i < q->flows_cnt; i++) {
+ struct fq_codel_flow *flow = q->flows + i;
+
+@@ -508,6 +512,13 @@ static int fq_codel_init(struct Qdisc *sch, struct nlattr *opt,
+ else
+ sch->flags &= ~TCQ_F_CAN_BYPASS;
+ return 0;
++
++alloc_failure:
++ kvfree(q->flows);
++ q->flows = NULL;
++init_failure:
++ q->flows_cnt = 0;
++ return err;
+ }
+
+ static int fq_codel_dump(struct Qdisc *sch, struct sk_buff *skb)
+diff --git a/net/sctp/associola.c b/net/sctp/associola.c
+index a47179da24e6..ef8adac1be83 100644
+--- a/net/sctp/associola.c
++++ b/net/sctp/associola.c
+@@ -1446,11 +1446,9 @@ void sctp_assoc_sync_pmtu(struct sctp_association *asoc)
+ return;
+
+ /* Get the lowest pmtu of all the transports. */
+- list_for_each_entry(t, &asoc->peer.transport_addr_list,
+- transports) {
++ list_for_each_entry(t, &asoc->peer.transport_addr_list, transports) {
+ if (t->pmtu_pending && t->dst) {
+- sctp_transport_update_pmtu(
+- t, SCTP_TRUNC4(dst_mtu(t->dst)));
++ sctp_transport_update_pmtu(t, sctp_dst_mtu(t->dst));
+ t->pmtu_pending = 0;
+ }
+ if (!pmtu || (t->pathmtu < pmtu))
+diff --git a/net/sctp/transport.c b/net/sctp/transport.c
+index 03fc2c427aca..e890ceb55939 100644
+--- a/net/sctp/transport.c
++++ b/net/sctp/transport.c
+@@ -242,9 +242,9 @@ void sctp_transport_pmtu(struct sctp_transport *transport, struct sock *sk)
+ &transport->fl, sk);
+ }
+
+- if (transport->dst) {
+- transport->pathmtu = SCTP_TRUNC4(dst_mtu(transport->dst));
+- } else
++ if (transport->dst)
++ transport->pathmtu = sctp_dst_mtu(transport->dst);
++ else
+ transport->pathmtu = SCTP_DEFAULT_MAXSEGMENT;
+ }
+
+@@ -273,7 +273,7 @@ bool sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu)
+
+ if (dst) {
+ /* Re-fetch, as under layers may have a higher minimum size */
+- pmtu = SCTP_TRUNC4(dst_mtu(dst));
++ pmtu = sctp_dst_mtu(dst);
+ change = t->pathmtu != pmtu;
+ }
+ t->pathmtu = pmtu;
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index 69616d00481c..b53026a72e73 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -635,7 +635,7 @@ static int snd_rawmidi_info_select_user(struct snd_card *card,
+ int snd_rawmidi_output_params(struct snd_rawmidi_substream *substream,
+ struct snd_rawmidi_params * params)
+ {
+- char *newbuf;
++ char *newbuf, *oldbuf;
+ struct snd_rawmidi_runtime *runtime = substream->runtime;
+
+ if (substream->append && substream->use_count > 1)
+@@ -648,13 +648,17 @@ int snd_rawmidi_output_params(struct snd_rawmidi_substream *substream,
+ return -EINVAL;
+ }
+ if (params->buffer_size != runtime->buffer_size) {
+- newbuf = krealloc(runtime->buffer, params->buffer_size,
+- GFP_KERNEL);
++ newbuf = kmalloc(params->buffer_size, GFP_KERNEL);
+ if (!newbuf)
+ return -ENOMEM;
++ spin_lock_irq(&runtime->lock);
++ oldbuf = runtime->buffer;
+ runtime->buffer = newbuf;
+ runtime->buffer_size = params->buffer_size;
+ runtime->avail = runtime->buffer_size;
++ runtime->appl_ptr = runtime->hw_ptr = 0;
++ spin_unlock_irq(&runtime->lock);
++ kfree(oldbuf);
+ }
+ runtime->avail_min = params->avail_min;
+ substream->active_sensing = !params->no_active_sensing;
+@@ -665,7 +669,7 @@ EXPORT_SYMBOL(snd_rawmidi_output_params);
+ int snd_rawmidi_input_params(struct snd_rawmidi_substream *substream,
+ struct snd_rawmidi_params * params)
+ {
+- char *newbuf;
++ char *newbuf, *oldbuf;
+ struct snd_rawmidi_runtime *runtime = substream->runtime;
+
+ snd_rawmidi_drain_input(substream);
+@@ -676,12 +680,16 @@ int snd_rawmidi_input_params(struct snd_rawmidi_substream *substream,
+ return -EINVAL;
+ }
+ if (params->buffer_size != runtime->buffer_size) {
+- newbuf = krealloc(runtime->buffer, params->buffer_size,
+- GFP_KERNEL);
++ newbuf = kmalloc(params->buffer_size, GFP_KERNEL);
+ if (!newbuf)
+ return -ENOMEM;
++ spin_lock_irq(&runtime->lock);
++ oldbuf = runtime->buffer;
+ runtime->buffer = newbuf;
+ runtime->buffer_size = params->buffer_size;
++ runtime->appl_ptr = runtime->hw_ptr = 0;
++ spin_unlock_irq(&runtime->lock);
++ kfree(oldbuf);
+ }
+ runtime->avail_min = params->avail_min;
+ return 0;
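+
+The rawmidi changes above replace krealloc() with the classic allocate/swap/free pattern: the new buffer is allocated outside the lock, swapped in with the ring pointers reset under runtime->lock, and only then is the old buffer freed, so concurrent readers never see a stale pointer or out-of-range offsets. A userspace sketch with a pthread mutex standing in for the spinlock:
+
+ #include <pthread.h>
+ #include <stdlib.h>
+
+ static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
+ static char *buffer;
+ static size_t buffer_size, appl_ptr, hw_ptr;
+
+ /* Allocate outside the lock, swap under it, free the old buffer after. */
+ static int resize_buffer(size_t new_size)
+ {
+ 	char *newbuf = malloc(new_size);
+ 	char *oldbuf;
+
+ 	if (!newbuf)
+ 		return -1;
+ 	pthread_mutex_lock(&lock);
+ 	oldbuf = buffer;
+ 	buffer = newbuf;
+ 	buffer_size = new_size;
+ 	appl_ptr = hw_ptr = 0;	/* old ring offsets are meaningless now */
+ 	pthread_mutex_unlock(&lock);
+ 	free(oldbuf);	/* never freed while a reader could still hold it */
+ 	return 0;
+ }
+
+ int main(void)
+ {
+ 	return resize_buffer(4096) ? 1 : 0;
+ }
+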
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index ba9a7e552183..88ce2f1022e1 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -965,6 +965,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC),
+ SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO),
++ SND_PCI_QUIRK(0x103c, 0x836e, "HP ProBook 455 G5", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 066efe783fe8..7bba415cb850 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2363,6 +2363,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
+ SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
+ SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
++ SND_PCI_QUIRK(0x1558, 0x95e1, "Clevo P95xER", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1558, 0x95e2, "Clevo P950ER", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
+ SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
+@@ -6543,6 +6544,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x10cf, 0x1629, "Lifebook U7x7", ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC),
+ SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
++ SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
+ SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
+index 6e865e8b5b10..fe6eb0fe07f6 100644
+--- a/virt/kvm/eventfd.c
++++ b/virt/kvm/eventfd.c
+@@ -119,8 +119,12 @@ irqfd_shutdown(struct work_struct *work)
+ {
+ struct kvm_kernel_irqfd *irqfd =
+ container_of(work, struct kvm_kernel_irqfd, shutdown);
++ struct kvm *kvm = irqfd->kvm;
+ u64 cnt;
+
++ /* Make sure irqfd has been initialized in assign path. */
++ synchronize_srcu(&kvm->irq_srcu);
++
+ /*
+ * Synchronize with the wait-queue and unhook ourselves to prevent
+ * further events.
+@@ -387,7 +391,6 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
+
+ idx = srcu_read_lock(&kvm->irq_srcu);
+ irqfd_update(kvm, irqfd);
+- srcu_read_unlock(&kvm->irq_srcu, idx);
+
+ list_add_tail(&irqfd->list, &kvm->irqfds.items);
+
+@@ -402,11 +405,6 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
+ if (events & EPOLLIN)
+ schedule_work(&irqfd->inject);
+
+- /*
+- * do not drop the file until the irqfd is fully initialized, otherwise
+- * we might race against the EPOLLHUP
+- */
+- fdput(f);
+ #ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
+ if (kvm_arch_has_irq_bypass()) {
+ irqfd->consumer.token = (void *)irqfd->eventfd;
+@@ -421,6 +419,13 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
+ }
+ #endif
+
++ srcu_read_unlock(&kvm->irq_srcu, idx);
++
++ /*
++ * do not drop the file until the irqfd is fully initialized, otherwise
++ * we might race against the EPOLLHUP
++ */
++ fdput(f);
+ return 0;
+
+ fail:
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-07-22 15:12 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-07-22 15:12 UTC (permalink / raw
To: gentoo-commits
commit: 0ba4a5bcdee011391109471f17906e321ef9edde
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 22 15:12:26 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jul 22 15:12:26 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0ba4a5bc
Linux patch 4.17.9
0000_README | 4 +
1008_linux-4.17.9.patch | 4495 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4499 insertions(+)
diff --git a/0000_README b/0000_README
index 5c3b875..378d9da 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-4.17.8.patch
From: http://www.kernel.org
Desc: Linux 4.17.8
+Patch: 1008_linux-4.17.9.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.9
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1008_linux-4.17.9.patch b/1008_linux-4.17.9.patch
new file mode 100644
index 0000000..7bb42e7
--- /dev/null
+++ b/1008_linux-4.17.9.patch
@@ -0,0 +1,4495 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index f2040d46f095..ff4ba249a26f 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4092,6 +4092,23 @@
+ expediting. Set to zero to disable automatic
+ expediting.
+
++ ssbd= [ARM64,HW]
++ Speculative Store Bypass Disable control
++
++ On CPUs that are vulnerable to the Speculative
++ Store Bypass vulnerability and offer a
++ firmware based mitigation, this parameter
++ indicates how the mitigation should be used:
++
++ force-on: Unconditionally enable mitigation for
++ both kernel and userspace
++ force-off: Unconditionally disable mitigation for
++ both kernel and userspace
++ kernel: Always enable mitigation in the
++ kernel, and offer a prctl interface
++ to allow userspace to register its
++ interest in being mitigated too.
++
+ stack_guard_gap= [MM]
+ override the default stack gap protection. The value
+ is in page units and it defines how many pages prior
+diff --git a/Makefile b/Makefile
+index 7cc36fe18dbb..693fde3aa317 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
+index c7c28c885a19..7001fb871429 100644
+--- a/arch/arm/include/asm/kvm_host.h
++++ b/arch/arm/include/asm/kvm_host.h
+@@ -315,6 +315,18 @@ static inline bool kvm_arm_harden_branch_predictor(void)
+ return false;
+ }
+
++#define KVM_SSBD_UNKNOWN -1
++#define KVM_SSBD_FORCE_DISABLE 0
++#define KVM_SSBD_KERNEL 1
++#define KVM_SSBD_FORCE_ENABLE 2
++#define KVM_SSBD_MITIGATED 3
++
++static inline int kvm_arm_have_ssbd(void)
++{
++ /* No way to detect it yet, pretend it is not there. */
++ return KVM_SSBD_UNKNOWN;
++}
++
+ static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
+ static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
+
+diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
+index f675162663f0..d2eb24eccf8f 100644
+--- a/arch/arm/include/asm/kvm_mmu.h
++++ b/arch/arm/include/asm/kvm_mmu.h
+@@ -335,6 +335,11 @@ static inline int kvm_map_vectors(void)
+ return 0;
+ }
+
++static inline int hyp_map_aux_data(void)
++{
++ return 0;
++}
++
+ #define kvm_phys_to_vttbr(addr) (addr)
+
+ #endif /* !__ASSEMBLY__ */
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index b5030e1a41d8..5539fba892ce 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -1928,7 +1928,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ /* there are 2 passes here */
+ bpf_jit_dump(prog->len, image_size, 2, ctx.target);
+
+- set_memory_ro((unsigned long)header, header->pages);
++ bpf_jit_binary_lock_ro(header);
+ prog->bpf_func = (void *)ctx.target;
+ prog->jited = 1;
+ prog->jited_len = image_size;
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index eb2cf4938f6d..b2103b4df467 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -938,6 +938,15 @@ config HARDEN_EL2_VECTORS
+
+ If unsure, say Y.
+
++config ARM64_SSBD
++ bool "Speculative Store Bypass Disable" if EXPERT
++ default y
++ help
++ This enables mitigation of the bypassing of previous stores
++ by speculative loads.
++
++ If unsure, say Y.
++
+ menuconfig ARMV8_DEPRECATED
+ bool "Emulate deprecated/obsolete ARMv8 instructions"
+ depends on COMPAT
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index bc51b72fafd4..8a699c708fc9 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -48,7 +48,8 @@
+ #define ARM64_HAS_CACHE_IDC 27
+ #define ARM64_HAS_CACHE_DIC 28
+ #define ARM64_HW_DBM 29
++#define ARM64_SSBD 30
+
+-#define ARM64_NCAPS 30
++#define ARM64_NCAPS 31
+
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index 09b0f2a80c8f..55bc1f073bfb 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -537,6 +537,28 @@ static inline u64 read_zcr_features(void)
+ return zcr;
+ }
+
++#define ARM64_SSBD_UNKNOWN -1
++#define ARM64_SSBD_FORCE_DISABLE 0
++#define ARM64_SSBD_KERNEL 1
++#define ARM64_SSBD_FORCE_ENABLE 2
++#define ARM64_SSBD_MITIGATED 3
++
++static inline int arm64_get_ssbd_state(void)
++{
++#ifdef CONFIG_ARM64_SSBD
++ extern int ssbd_state;
++ return ssbd_state;
++#else
++ return ARM64_SSBD_UNKNOWN;
++#endif
++}
++
++#ifdef CONFIG_ARM64_SSBD
++void arm64_set_ssbd_mitigation(bool state);
++#else
++static inline void arm64_set_ssbd_mitigation(bool state) {}
++#endif
++
+ #endif /* __ASSEMBLY__ */
+
+ #endif
+diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
+index f6648a3e4152..d4fbb1356c4c 100644
+--- a/arch/arm64/include/asm/kvm_asm.h
++++ b/arch/arm64/include/asm/kvm_asm.h
+@@ -33,6 +33,9 @@
+ #define KVM_ARM64_DEBUG_DIRTY_SHIFT 0
+ #define KVM_ARM64_DEBUG_DIRTY (1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
+
++#define VCPU_WORKAROUND_2_FLAG_SHIFT 0
++#define VCPU_WORKAROUND_2_FLAG (_AC(1, UL) << VCPU_WORKAROUND_2_FLAG_SHIFT)
++
+ /* Translate a kernel address of @sym into its equivalent linear mapping */
+ #define kvm_ksym_ref(sym) \
+ ({ \
+@@ -71,14 +74,37 @@ extern u32 __kvm_get_mdcr_el2(void);
+
+ extern u32 __init_stage2_translation(void);
+
++/* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
++#define __hyp_this_cpu_ptr(sym) \
++ ({ \
++ void *__ptr = hyp_symbol_addr(sym); \
++ __ptr += read_sysreg(tpidr_el2); \
++ (typeof(&sym))__ptr; \
++ })
++
++#define __hyp_this_cpu_read(sym) \
++ ({ \
++ *__hyp_this_cpu_ptr(sym); \
++ })
++
+ #else /* __ASSEMBLY__ */
+
+-.macro get_host_ctxt reg, tmp
+- adr_l \reg, kvm_host_cpu_state
++.macro hyp_adr_this_cpu reg, sym, tmp
++ adr_l \reg, \sym
+ mrs \tmp, tpidr_el2
+ add \reg, \reg, \tmp
+ .endm
+
++.macro hyp_ldr_this_cpu reg, sym, tmp
++ adr_l \reg, \sym
++ mrs \tmp, tpidr_el2
++ ldr \reg, [\reg, \tmp]
++.endm
++
++.macro get_host_ctxt reg, tmp
++ hyp_adr_this_cpu \reg, kvm_host_cpu_state, \tmp
++.endm
++
+ .macro get_vcpu_ptr vcpu, ctxt
+ get_host_ctxt \ctxt, \vcpu
+ ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 469de8acd06f..95d8a0e15b5f 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -216,6 +216,9 @@ struct kvm_vcpu_arch {
+ /* Exception Information */
+ struct kvm_vcpu_fault_info fault;
+
++ /* State of various workarounds, see kvm_asm.h for bit assignment */
++ u64 workaround_flags;
++
+ /* Guest debug state */
+ u64 debug_flags;
+
+@@ -452,6 +455,29 @@ static inline bool kvm_arm_harden_branch_predictor(void)
+ return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
+ }
+
++#define KVM_SSBD_UNKNOWN -1
++#define KVM_SSBD_FORCE_DISABLE 0
++#define KVM_SSBD_KERNEL 1
++#define KVM_SSBD_FORCE_ENABLE 2
++#define KVM_SSBD_MITIGATED 3
++
++static inline int kvm_arm_have_ssbd(void)
++{
++ switch (arm64_get_ssbd_state()) {
++ case ARM64_SSBD_FORCE_DISABLE:
++ return KVM_SSBD_FORCE_DISABLE;
++ case ARM64_SSBD_KERNEL:
++ return KVM_SSBD_KERNEL;
++ case ARM64_SSBD_FORCE_ENABLE:
++ return KVM_SSBD_FORCE_ENABLE;
++ case ARM64_SSBD_MITIGATED:
++ return KVM_SSBD_MITIGATED;
++ case ARM64_SSBD_UNKNOWN:
++ default:
++ return KVM_SSBD_UNKNOWN;
++ }
++}
++
+ void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu);
+ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu);
+
+diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
+index 6128992c2ded..e3b2ad7dd40a 100644
+--- a/arch/arm64/include/asm/kvm_mmu.h
++++ b/arch/arm64/include/asm/kvm_mmu.h
+@@ -473,6 +473,30 @@ static inline int kvm_map_vectors(void)
+ }
+ #endif
+
++#ifdef CONFIG_ARM64_SSBD
++DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
++
++static inline int hyp_map_aux_data(void)
++{
++ int cpu, err;
++
++ for_each_possible_cpu(cpu) {
++ u64 *ptr;
++
++ ptr = per_cpu_ptr(&arm64_ssbd_callback_required, cpu);
++ err = create_hyp_mappings(ptr, ptr + 1, PAGE_HYP);
++ if (err)
++ return err;
++ }
++ return 0;
++}
++#else
++static inline int hyp_map_aux_data(void)
++{
++ return 0;
++}
++#endif
++
+ #define kvm_phys_to_vttbr(addr) phys_to_ttbr(addr)
+
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
+index 740aa03c5f0d..cbcf11b5e637 100644
+--- a/arch/arm64/include/asm/thread_info.h
++++ b/arch/arm64/include/asm/thread_info.h
+@@ -94,6 +94,7 @@ void arch_release_task_struct(struct task_struct *tsk);
+ #define TIF_32BIT 22 /* 32bit process */
+ #define TIF_SVE 23 /* Scalable Vector Extension in use */
+ #define TIF_SVE_VL_INHERIT 24 /* Inherit sve_vl_onexec across exec */
++#define TIF_SSBD 25 /* Wants SSB mitigation */
+
+ #define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
+ #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
+diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
+index bf825f38d206..0025f8691046 100644
+--- a/arch/arm64/kernel/Makefile
++++ b/arch/arm64/kernel/Makefile
+@@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST) += arm64-reloc-test.o
+ arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
+ arm64-obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
+ arm64-obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
++arm64-obj-$(CONFIG_ARM64_SSBD) += ssbd.o
+
+ obj-y += $(arm64-obj-y) vdso/ probes/
+ obj-m += $(arm64-obj-m)
+diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
+index 5bdda651bd05..323aeb5f2fe6 100644
+--- a/arch/arm64/kernel/asm-offsets.c
++++ b/arch/arm64/kernel/asm-offsets.c
+@@ -136,6 +136,7 @@ int main(void)
+ #ifdef CONFIG_KVM_ARM_HOST
+ DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt));
+ DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1));
++ DEFINE(VCPU_WORKAROUND_FLAGS, offsetof(struct kvm_vcpu, arch.workaround_flags));
+ DEFINE(CPU_GP_REGS, offsetof(struct kvm_cpu_context, gp_regs));
+ DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_regs, regs));
+ DEFINE(CPU_FP_REGS, offsetof(struct kvm_regs, fp_regs));
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index e4a1182deff7..2b9a31a6a16a 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -232,6 +232,178 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
+ }
+ #endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
+
++#ifdef CONFIG_ARM64_SSBD
++DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
++
++int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
++
++static const struct ssbd_options {
++ const char *str;
++ int state;
++} ssbd_options[] = {
++ { "force-on", ARM64_SSBD_FORCE_ENABLE, },
++ { "force-off", ARM64_SSBD_FORCE_DISABLE, },
++ { "kernel", ARM64_SSBD_KERNEL, },
++};
++
++static int __init ssbd_cfg(char *buf)
++{
++ int i;
++
++ if (!buf || !buf[0])
++ return -EINVAL;
++
++ for (i = 0; i < ARRAY_SIZE(ssbd_options); i++) {
++ int len = strlen(ssbd_options[i].str);
++
++ if (strncmp(buf, ssbd_options[i].str, len))
++ continue;
++
++ ssbd_state = ssbd_options[i].state;
++ return 0;
++ }
++
++ return -EINVAL;
++}
++early_param("ssbd", ssbd_cfg);
++
++void __init arm64_update_smccc_conduit(struct alt_instr *alt,
++ __le32 *origptr, __le32 *updptr,
++ int nr_inst)
++{
++ u32 insn;
++
++ BUG_ON(nr_inst != 1);
++
++ switch (psci_ops.conduit) {
++ case PSCI_CONDUIT_HVC:
++ insn = aarch64_insn_get_hvc_value();
++ break;
++ case PSCI_CONDUIT_SMC:
++ insn = aarch64_insn_get_smc_value();
++ break;
++ default:
++ return;
++ }
++
++ *updptr = cpu_to_le32(insn);
++}
++
++void __init arm64_enable_wa2_handling(struct alt_instr *alt,
++ __le32 *origptr, __le32 *updptr,
++ int nr_inst)
++{
++ BUG_ON(nr_inst != 1);
++ /*
++ * Only allow mitigation on EL1 entry/exit and guest
++ * ARCH_WORKAROUND_2 handling if the SSBD state allows it to
++ * be flipped.
++ */
++ if (arm64_get_ssbd_state() == ARM64_SSBD_KERNEL)
++ *updptr = cpu_to_le32(aarch64_insn_gen_nop());
++}
++
++void arm64_set_ssbd_mitigation(bool state)
++{
++ switch (psci_ops.conduit) {
++ case PSCI_CONDUIT_HVC:
++ arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
++ break;
++
++ case PSCI_CONDUIT_SMC:
++ arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
++ break;
++
++ default:
++ WARN_ON_ONCE(1);
++ break;
++ }
++}
++
++static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
++ int scope)
++{
++ struct arm_smccc_res res;
++ bool required = true;
++ s32 val;
++
++ WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
++
++ if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
++ ssbd_state = ARM64_SSBD_UNKNOWN;
++ return false;
++ }
++
++ switch (psci_ops.conduit) {
++ case PSCI_CONDUIT_HVC:
++ arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
++ ARM_SMCCC_ARCH_WORKAROUND_2, &res);
++ break;
++
++ case PSCI_CONDUIT_SMC:
++ arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
++ ARM_SMCCC_ARCH_WORKAROUND_2, &res);
++ break;
++
++ default:
++ ssbd_state = ARM64_SSBD_UNKNOWN;
++ return false;
++ }
++
++ val = (s32)res.a0;
++
++ switch (val) {
++ case SMCCC_RET_NOT_SUPPORTED:
++ ssbd_state = ARM64_SSBD_UNKNOWN;
++ return false;
++
++ case SMCCC_RET_NOT_REQUIRED:
++ pr_info_once("%s mitigation not required\n", entry->desc);
++ ssbd_state = ARM64_SSBD_MITIGATED;
++ return false;
++
++ case SMCCC_RET_SUCCESS:
++ required = true;
++ break;
++
++ case 1: /* Mitigation not required on this CPU */
++ required = false;
++ break;
++
++ default:
++ WARN_ON(1);
++ return false;
++ }
++
++ switch (ssbd_state) {
++ case ARM64_SSBD_FORCE_DISABLE:
++ pr_info_once("%s disabled from command-line\n", entry->desc);
++ arm64_set_ssbd_mitigation(false);
++ required = false;
++ break;
++
++ case ARM64_SSBD_KERNEL:
++ if (required) {
++ __this_cpu_write(arm64_ssbd_callback_required, 1);
++ arm64_set_ssbd_mitigation(true);
++ }
++ break;
++
++ case ARM64_SSBD_FORCE_ENABLE:
++ pr_info_once("%s forced from command-line\n", entry->desc);
++ arm64_set_ssbd_mitigation(true);
++ required = true;
++ break;
++
++ default:
++ WARN_ON(1);
++ break;
++ }
++
++ return required;
++}
++#endif /* CONFIG_ARM64_SSBD */
++
+ #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max) \
+ .matches = is_affected_midr_range, \
+ .midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
+@@ -487,6 +659,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
+ },
++#endif
++#ifdef CONFIG_ARM64_SSBD
++ {
++ .desc = "Speculative Store Bypass Disable",
++ .capability = ARM64_SSBD,
++ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++ .matches = has_ssbd_mitigation,
++ },
+ #endif
+ {
+ }
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index ec2ee720e33e..28ad8799406f 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -18,6 +18,7 @@
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
++#include <linux/arm-smccc.h>
+ #include <linux/init.h>
+ #include <linux/linkage.h>
+
+@@ -137,6 +138,25 @@ alternative_else_nop_endif
+ add \dst, \dst, #(\sym - .entry.tramp.text)
+ .endm
+
++ // This macro corrupts x0-x3. It is the caller's duty
++ // to save/restore them if required.
++ .macro apply_ssbd, state, targ, tmp1, tmp2
++#ifdef CONFIG_ARM64_SSBD
++alternative_cb arm64_enable_wa2_handling
++ b \targ
++alternative_cb_end
++ ldr_this_cpu \tmp2, arm64_ssbd_callback_required, \tmp1
++ cbz \tmp2, \targ
++ ldr \tmp2, [tsk, #TSK_TI_FLAGS]
++ tbnz \tmp2, #TIF_SSBD, \targ
++ mov w0, #ARM_SMCCC_ARCH_WORKAROUND_2
++ mov w1, #\state
++alternative_cb arm64_update_smccc_conduit
++ nop // Patched to SMC/HVC #0
++alternative_cb_end
++#endif
++ .endm
++
+ .macro kernel_entry, el, regsize = 64
+ .if \regsize == 32
+ mov w0, w0 // zero upper 32 bits of x0
+@@ -163,6 +183,14 @@ alternative_else_nop_endif
+ ldr x19, [tsk, #TSK_TI_FLAGS] // since we can unmask debug
+ disable_step_tsk x19, x20 // exceptions when scheduling.
+
++ apply_ssbd 1, 1f, x22, x23
++
++#ifdef CONFIG_ARM64_SSBD
++ ldp x0, x1, [sp, #16 * 0]
++ ldp x2, x3, [sp, #16 * 1]
++#endif
++1:
++
+ mov x29, xzr // fp pointed to user-space
+ .else
+ add x21, sp, #S_FRAME_SIZE
+@@ -303,6 +331,8 @@ alternative_if ARM64_WORKAROUND_845719
+ alternative_else_nop_endif
+ #endif
+ 3:
++ apply_ssbd 0, 5f, x0, x1
++5:
+ .endif
+
+ msr elr_el1, x21 // set up the return data
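
For readers tracing the assembly: apply_ssbd is roughly the following C, sketched here purely for illustration (the per-CPU variable and flag names are the ones used in the hunk; the C framing itself is hypothetical, and firmware_call() stands in for the single patched SMC/HVC instruction):

        /* On kernel entry state == 1 (turn the mitigation on), on exit
         * state == 0.  The whole block is branched around unless the
         * dynamic ARM64_SSBD_KERNEL mode is active, in which case
         * arm64_enable_wa2_handling NOPs out the early branch. */
        if (__this_cpu_read(arm64_ssbd_callback_required) &&
            !test_thread_flag(TIF_SSBD))
                firmware_call(ARM_SMCCC_ARCH_WORKAROUND_2, state);

The TIF_SSBD test is what makes the prctl added later in this patch work: a task that forced SSBD on keeps the mitigation across kernel entry/exit instead of having it toggled.
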
+diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
+index 1ec5f28c39fc..6b2686d54411 100644
+--- a/arch/arm64/kernel/hibernate.c
++++ b/arch/arm64/kernel/hibernate.c
+@@ -313,6 +313,17 @@ int swsusp_arch_suspend(void)
+
+ sleep_cpu = -EINVAL;
+ __cpu_suspend_exit();
++
++ /*
++ * Just in case the boot kernel did turn the SSBD
++ * mitigation off behind our back, let's set the state
++ * to what we expect it to be.
++ */
++ switch (arm64_get_ssbd_state()) {
++ case ARM64_SSBD_FORCE_ENABLE:
++ case ARM64_SSBD_KERNEL:
++ arm64_set_ssbd_mitigation(true);
++ }
+ }
+
+ local_daif_restore(flags);
+diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
+new file mode 100644
+index 000000000000..3432e5ef9f41
+--- /dev/null
++++ b/arch/arm64/kernel/ssbd.c
+@@ -0,0 +1,110 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
++ */
++
++#include <linux/errno.h>
++#include <linux/sched.h>
++#include <linux/thread_info.h>
++
++#include <asm/cpufeature.h>
++
++/*
++ * prctl interface for SSBD
++ * FIXME: Drop the below ifdefery once merged in 4.18.
++ */
++#ifdef PR_SPEC_STORE_BYPASS
++static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
++{
++ int state = arm64_get_ssbd_state();
++
++ /* Unsupported */
++ if (state == ARM64_SSBD_UNKNOWN)
++ return -EINVAL;
++
++ /* Treat the unaffected/mitigated state separately */
++ if (state == ARM64_SSBD_MITIGATED) {
++ switch (ctrl) {
++ case PR_SPEC_ENABLE:
++ return -EPERM;
++ case PR_SPEC_DISABLE:
++ case PR_SPEC_FORCE_DISABLE:
++ return 0;
++ }
++ }
++
++ /*
++ * Things are a bit backward here: the arm64 internal API
++ * *enables the mitigation* when the userspace API *disables
++ * speculation*. So much fun.
++ */
++ switch (ctrl) {
++ case PR_SPEC_ENABLE:
++ /* If speculation is force disabled, enable is not allowed */
++ if (state == ARM64_SSBD_FORCE_ENABLE ||
++ task_spec_ssb_force_disable(task))
++ return -EPERM;
++ task_clear_spec_ssb_disable(task);
++ clear_tsk_thread_flag(task, TIF_SSBD);
++ break;
++ case PR_SPEC_DISABLE:
++ if (state == ARM64_SSBD_FORCE_DISABLE)
++ return -EPERM;
++ task_set_spec_ssb_disable(task);
++ set_tsk_thread_flag(task, TIF_SSBD);
++ break;
++ case PR_SPEC_FORCE_DISABLE:
++ if (state == ARM64_SSBD_FORCE_DISABLE)
++ return -EPERM;
++ task_set_spec_ssb_disable(task);
++ task_set_spec_ssb_force_disable(task);
++ set_tsk_thread_flag(task, TIF_SSBD);
++ break;
++ default:
++ return -ERANGE;
++ }
++
++ return 0;
++}
++
++int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
++ unsigned long ctrl)
++{
++ switch (which) {
++ case PR_SPEC_STORE_BYPASS:
++ return ssbd_prctl_set(task, ctrl);
++ default:
++ return -ENODEV;
++ }
++}
++
++static int ssbd_prctl_get(struct task_struct *task)
++{
++ switch (arm64_get_ssbd_state()) {
++ case ARM64_SSBD_UNKNOWN:
++ return -EINVAL;
++ case ARM64_SSBD_FORCE_ENABLE:
++ return PR_SPEC_DISABLE;
++ case ARM64_SSBD_KERNEL:
++ if (task_spec_ssb_force_disable(task))
++ return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
++ if (task_spec_ssb_disable(task))
++ return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
++ return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
++ case ARM64_SSBD_FORCE_DISABLE:
++ return PR_SPEC_ENABLE;
++ default:
++ return PR_SPEC_NOT_AFFECTED;
++ }
++}
++
++int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
++{
++ switch (which) {
++ case PR_SPEC_STORE_BYPASS:
++ return ssbd_prctl_get(task);
++ default:
++ return -ENODEV;
++ }
++}
++#endif /* PR_SPEC_STORE_BYPASS */
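
Since the comments above only describe the kernel side, here is a minimal userspace sketch of the interface these hooks implement; the constants are the standard <linux/prctl.h> ones, and the program itself is hypothetical:

        #include <stdio.h>
        #include <sys/prctl.h>
        #include <linux/prctl.h>

        int main(void)
        {
                /* Ask for Speculative Store Bypass to be disabled for
                 * this task; on arm64 this sets TIF_SSBD, which the
                 * apply_ssbd entry/exit macro honours. */
                if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                          PR_SPEC_DISABLE, 0, 0))
                        perror("PR_SET_SPECULATION_CTRL");

                /* In the dynamic (ARM64_SSBD_KERNEL) mode this should
                 * now report PR_SPEC_PRCTL | PR_SPEC_DISABLE. */
                printf("ssb: 0x%x\n",
                       (unsigned)prctl(PR_GET_SPECULATION_CTRL,
                                       PR_SPEC_STORE_BYPASS, 0, 0, 0));
                return 0;
        }
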
+diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
+index a307b9e13392..70c283368b64 100644
+--- a/arch/arm64/kernel/suspend.c
++++ b/arch/arm64/kernel/suspend.c
+@@ -62,6 +62,14 @@ void notrace __cpu_suspend_exit(void)
+ */
+ if (hw_breakpoint_restore)
+ hw_breakpoint_restore(cpu);
++
++ /*
++ * On resume, firmware implementing dynamic mitigation will
++ * have turned the mitigation on. If the user has forcefully
++ * disabled it, make sure their wishes are obeyed.
++ */
++ if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
++ arm64_set_ssbd_mitigation(false);
+ }
+
+ /*
+diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
+index bffece27b5c1..05d836979032 100644
+--- a/arch/arm64/kvm/hyp/hyp-entry.S
++++ b/arch/arm64/kvm/hyp/hyp-entry.S
+@@ -106,8 +106,44 @@ el1_hvc_guest:
+ */
+ ldr x1, [sp] // Guest's x0
+ eor w1, w1, #ARM_SMCCC_ARCH_WORKAROUND_1
++ cbz w1, wa_epilogue
++
++ /* ARM_SMCCC_ARCH_WORKAROUND_2 handling */
++ eor w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_1 ^ \
++ ARM_SMCCC_ARCH_WORKAROUND_2)
+ cbnz w1, el1_trap
+- mov x0, x1
++
++#ifdef CONFIG_ARM64_SSBD
++alternative_cb arm64_enable_wa2_handling
++ b wa2_end
++alternative_cb_end
++ get_vcpu_ptr x2, x0
++ ldr x0, [x2, #VCPU_WORKAROUND_FLAGS]
++
++ // Sanitize the argument and update the guest flags
++ ldr x1, [sp, #8] // Guest's x1
++ clz w1, w1 // Murphy's device:
++ lsr w1, w1, #5 // w1 = !!w1 without using
++ eor w1, w1, #1 // the flags...
++ bfi x0, x1, #VCPU_WORKAROUND_2_FLAG_SHIFT, #1
++ str x0, [x2, #VCPU_WORKAROUND_FLAGS]
++
++ /* Check that we actually need to perform the call */
++ hyp_ldr_this_cpu x0, arm64_ssbd_callback_required, x2
++ cbz x0, wa2_end
++
++ mov w0, #ARM_SMCCC_ARCH_WORKAROUND_2
++ smc #0
++
++ /* Don't leak data from the SMC call */
++ mov x3, xzr
++wa2_end:
++ mov x2, xzr
++ mov x1, xzr
++#endif
++
++wa_epilogue:
++ mov x0, xzr
+ add sp, sp, #16
+ eret
+
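
The three instructions commented as "Murphy's device" normalize the guest's x1 to exactly 0 or 1 without touching the condition flags, which the surrounding code still relies on. In C terms (illustrative only; clz() here is a hypothetical wrapper for the AArch64 CLZ instruction, which, unlike __builtin_clz, is defined to return 32 for a zero input):

        /* w1 nonzero: clz <= 31, >> 5 gives 0, ^ 1 gives 1
         * w1 zero:    clz == 32, >> 5 gives 1, ^ 1 gives 0
         * i.e. the sequence computes !!w1 flag-free. */
        u32 normalized = (clz(w1) >> 5) ^ 1;
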
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index d9645236e474..c50cedc447f1 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -15,6 +15,7 @@
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
++#include <linux/arm-smccc.h>
+ #include <linux/types.h>
+ #include <linux/jump_label.h>
+ #include <uapi/linux/psci.h>
+@@ -389,6 +390,39 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+ return false;
+ }
+
++static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
++{
++ if (!cpus_have_const_cap(ARM64_SSBD))
++ return false;
++
++ return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG);
++}
++
++static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
++{
++#ifdef CONFIG_ARM64_SSBD
++ /*
++ * The host runs with the workaround always present. If the
++ * guest wants it disabled, so be it...
++ */
++ if (__needs_ssbd_off(vcpu) &&
++ __hyp_this_cpu_read(arm64_ssbd_callback_required))
++ arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL);
++#endif
++}
++
++static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu)
++{
++#ifdef CONFIG_ARM64_SSBD
++ /*
++ * If the guest has disabled the workaround, bring it back on.
++ */
++ if (__needs_ssbd_off(vcpu) &&
++ __hyp_this_cpu_read(arm64_ssbd_callback_required))
++ arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL);
++#endif
++}
++
+ /* Switch to the guest for VHE systems running in EL2 */
+ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
+ {
+@@ -409,6 +443,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
+ sysreg_restore_guest_state_vhe(guest_ctxt);
+ __debug_switch_to_guest(vcpu);
+
++ __set_guest_arch_workaround_state(vcpu);
++
+ do {
+ /* Jump in the fire! */
+ exit_code = __guest_enter(vcpu, host_ctxt);
+@@ -416,6 +452,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
+ /* And we're baaack! */
+ } while (fixup_guest_exit(vcpu, &exit_code));
+
++ __set_host_arch_workaround_state(vcpu);
++
+ fp_enabled = fpsimd_enabled_vhe();
+
+ sysreg_save_guest_state_vhe(guest_ctxt);
+@@ -465,6 +503,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
+ __sysreg_restore_state_nvhe(guest_ctxt);
+ __debug_switch_to_guest(vcpu);
+
++ __set_guest_arch_workaround_state(vcpu);
++
+ do {
+ /* Jump in the fire! */
+ exit_code = __guest_enter(vcpu, host_ctxt);
+@@ -472,6 +512,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
+ /* And we're baaack! */
+ } while (fixup_guest_exit(vcpu, &exit_code));
+
++ __set_host_arch_workaround_state(vcpu);
++
+ fp_enabled = __fpsimd_enabled_nvhe();
+
+ __sysreg_save_state_nvhe(guest_ctxt);
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index 3256b9228e75..a74311beda35 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -122,6 +122,10 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ /* Reset PMU */
+ kvm_pmu_vcpu_reset(vcpu);
+
++ /* Default workaround setup is enabled (if supported) */
++ if (kvm_arm_have_ssbd() == KVM_SSBD_KERNEL)
++ vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
++
+ /* Reset timer */
+ return kvm_timer_vcpu_reset(vcpu);
+ }
+diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
+index 219faaec51df..990770f9e76b 100644
+--- a/arch/x86/include/asm/asm.h
++++ b/arch/x86/include/asm/asm.h
+@@ -46,6 +46,65 @@
+ #define _ASM_SI __ASM_REG(si)
+ #define _ASM_DI __ASM_REG(di)
+
++#ifndef __x86_64__
++/* 32 bit */
++
++#define _ASM_ARG1 _ASM_AX
++#define _ASM_ARG2 _ASM_DX
++#define _ASM_ARG3 _ASM_CX
++
++#define _ASM_ARG1L eax
++#define _ASM_ARG2L edx
++#define _ASM_ARG3L ecx
++
++#define _ASM_ARG1W ax
++#define _ASM_ARG2W dx
++#define _ASM_ARG3W cx
++
++#define _ASM_ARG1B al
++#define _ASM_ARG2B dl
++#define _ASM_ARG3B cl
++
++#else
++/* 64 bit */
++
++#define _ASM_ARG1 _ASM_DI
++#define _ASM_ARG2 _ASM_SI
++#define _ASM_ARG3 _ASM_DX
++#define _ASM_ARG4 _ASM_CX
++#define _ASM_ARG5 r8
++#define _ASM_ARG6 r9
++
++#define _ASM_ARG1Q rdi
++#define _ASM_ARG2Q rsi
++#define _ASM_ARG3Q rdx
++#define _ASM_ARG4Q rcx
++#define _ASM_ARG5Q r8
++#define _ASM_ARG6Q r9
++
++#define _ASM_ARG1L edi
++#define _ASM_ARG2L esi
++#define _ASM_ARG3L edx
++#define _ASM_ARG4L ecx
++#define _ASM_ARG5L r8d
++#define _ASM_ARG6L r9d
++
++#define _ASM_ARG1W di
++#define _ASM_ARG2W si
++#define _ASM_ARG3W dx
++#define _ASM_ARG4W cx
++#define _ASM_ARG5W r8w
++#define _ASM_ARG6W r9w
++
++#define _ASM_ARG1B dil
++#define _ASM_ARG2B sil
++#define _ASM_ARG3B dl
++#define _ASM_ARG4B cl
++#define _ASM_ARG5B r8b
++#define _ASM_ARG6B r9b
++
++#endif
++
+ /*
+ * Macros to generate condition code outputs from inline assembly,
+ * The output operand must be type "bool".
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index 89f08955fff7..c4fc17220df9 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -13,7 +13,7 @@
+ * Interrupt control:
+ */
+
+-static inline unsigned long native_save_fl(void)
++extern inline unsigned long native_save_fl(void)
+ {
+ unsigned long flags;
+
+diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
+index 02d6f5cf4e70..8824d01c0c35 100644
+--- a/arch/x86/kernel/Makefile
++++ b/arch/x86/kernel/Makefile
+@@ -61,6 +61,7 @@ obj-y += alternative.o i8253.o hw_breakpoint.o
+ obj-y += tsc.o tsc_msr.o io_delay.o rtc.o
+ obj-y += pci-iommu_table.o
+ obj-y += resource.o
++obj-y += irqflags.o
+
+ obj-y += process.o
+ obj-y += fpu/
+diff --git a/arch/x86/kernel/irqflags.S b/arch/x86/kernel/irqflags.S
+new file mode 100644
+index 000000000000..ddeeaac8adda
+--- /dev/null
++++ b/arch/x86/kernel/irqflags.S
+@@ -0,0 +1,26 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++
++#include <asm/asm.h>
++#include <asm/export.h>
++#include <linux/linkage.h>
++
++/*
++ * unsigned long native_save_fl(void)
++ */
++ENTRY(native_save_fl)
++ pushf
++ pop %_ASM_AX
++ ret
++ENDPROC(native_save_fl)
++EXPORT_SYMBOL(native_save_fl)
++
++/*
++ * void native_restore_fl(unsigned long flags)
++ * %eax/%rdi: flags
++ */
++ENTRY(native_restore_fl)
++ push %_ASM_ARG1
++ popf
++ ret
++ENDPROC(native_restore_fl)
++EXPORT_SYMBOL(native_restore_fl)
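
The new irqflags.S pairs with the extern inline change in irqflags.h above: paravirt takes the address of native_save_fl(), so an out-of-line copy with a pinned-down ABI must exist, and hand-writing it in assembly keeps the compiler (stack protector, instrumentation) from emitting one that clobbers more than the paravirt convention allows. The _ASM_ARG* macros added to asm.h earlier in this patch let the same file pick the correct argument register on 32-bit (regparm) and 64-bit (SysV) builds. For reference, a C sketch of what the pushf/pop pair computes, matching the inline version:

        /* Illustrative equivalent of ENTRY(native_save_fl) above. */
        static unsigned long save_fl_equiv(void)
        {
                unsigned long flags;

                asm volatile("pushf ; pop %0"
                             : "=rm" (flags)
                             : /* no inputs */
                             : "memory");
                return flags;
        }
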
+diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
+index 92fd433c50b9..1bbec387d289 100644
+--- a/arch/x86/kvm/Kconfig
++++ b/arch/x86/kvm/Kconfig
+@@ -85,7 +85,7 @@ config KVM_AMD_SEV
+ def_bool y
+ bool "AMD Secure Encrypted Virtualization (SEV) support"
+ depends on KVM_AMD && X86_64
+- depends on CRYPTO_DEV_CCP && CRYPTO_DEV_CCP_DD && CRYPTO_DEV_SP_PSP
++ depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
+ ---help---
+ Provides support for launching Encrypted VMs on AMD processors.
+
+diff --git a/block/blk-core.c b/block/blk-core.c
+index b559b9d4f1a2..47ab2d9d02d9 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2392,7 +2392,9 @@ blk_qc_t generic_make_request(struct bio *bio)
+
+ if (bio->bi_opf & REQ_NOWAIT)
+ flags = BLK_MQ_REQ_NOWAIT;
+- if (blk_queue_enter(q, flags) < 0) {
++ if (bio_flagged(bio, BIO_QUEUE_ENTERED))
++ blk_queue_enter_live(q);
++ else if (blk_queue_enter(q, flags) < 0) {
+ if (!blk_queue_dying(q) && (bio->bi_opf & REQ_NOWAIT))
+ bio_wouldblock_error(bio);
+ else
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 782940c65d8a..481dc02668f9 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -210,6 +210,16 @@ void blk_queue_split(struct request_queue *q, struct bio **bio)
+ /* there isn't chance to merge the splitted bio */
+ split->bi_opf |= REQ_NOMERGE;
+
++ /*
++ * Since we're recursing into make_request here, ensure
++ * that we mark this bio as already having entered the queue.
++ * If not, and the queue is going away, we can get stuck
++	 * forever waiting for the queue reference to drop. But
++ * that will never happen, as we're already holding a
++ * reference to it.
++ */
++ bio_set_flag(*bio, BIO_QUEUE_ENTERED);
++
+ bio_chain(split, *bio);
+ trace_block_split(q, split, (*bio)->bi_iter.bi_sector);
+ generic_make_request(*bio);
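
The two block-layer hunks are one logical fix and read best together; schematically (names exactly as in the hunks, condensed for illustration):

        /* blk-merge.c: we already hold a reference to q, so mark the
         * bio before recursing ... */
        bio_set_flag(*bio, BIO_QUEUE_ENTERED);
        generic_make_request(*bio);

        /* blk-core.c: ... and take the live reference instead of
         * blk_queue_enter(), which could wait forever on a dying queue
         * whose last reference the caller itself is holding. */
        if (bio_flagged(bio, BIO_QUEUE_ENTERED))
                blk_queue_enter_live(q);
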
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 7846c0c20cfe..b52a14fc3bae 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -1156,8 +1156,10 @@ int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, int flags,
+
+ /* make one iovec available as scatterlist */
+ err = af_alg_make_sg(&rsgl->sgl, &msg->msg_iter, seglen);
+- if (err < 0)
++ if (err < 0) {
++ rsgl->sg_num_bytes = 0;
+ return err;
++ }
+
+ /* chain the new scatterlist with previous one */
+ if (areq->last_rsgl)
+diff --git a/drivers/atm/zatm.c b/drivers/atm/zatm.c
+index a8d2eb0ceb8d..2c288d1f42bb 100644
+--- a/drivers/atm/zatm.c
++++ b/drivers/atm/zatm.c
+@@ -1483,6 +1483,8 @@ static int zatm_ioctl(struct atm_dev *dev,unsigned int cmd,void __user *arg)
+ return -EFAULT;
+ if (pool < 0 || pool > ZATM_LAST_POOL)
+ return -EINVAL;
++ pool = array_index_nospec(pool,
++ ZATM_LAST_POOL + 1);
+ if (copy_from_user(&info,
+ &((struct zatm_pool_req __user *) arg)->info,
+ sizeof(info))) return -EFAULT;
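
This hunk, like the cxgb3 one further down, applies the standard Spectre-v1 hardening recipe: bounds-check the user-supplied index, then clamp it under speculation before it indexes anything. The general shape, with idx, LAST_VALID and table as placeholders:

        #include <linux/nospec.h>

        if (idx < 0 || idx > LAST_VALID)
                return -EINVAL;
        /* Even on a mispredicted bounds check the CPU can no longer
         * speculatively load table[] out of range past this point. */
        idx = array_index_nospec(idx, LAST_VALID + 1);
        val = table[idx];
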
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 69716a7ea993..95a516ac6c39 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -5736,7 +5736,7 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
+ dev->num_ports = max(MLX5_CAP_GEN(mdev, num_ports),
+ MLX5_CAP_GEN(mdev, num_vhca_ports));
+
+- if (MLX5_VPORT_MANAGER(mdev) &&
++ if (MLX5_ESWITCH_MANAGER(mdev) &&
+ mlx5_ib_eswitch_mode(mdev->priv.eswitch) == SRIOV_OFFLOADS) {
+ dev->rep = mlx5_ib_vport_rep(mdev->priv.eswitch, 0);
+
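
This is the first of many hunks in this release that swap MLX5_VPORT_MANAGER() or raw eswitch_flow_table capability checks for MLX5_ESWITCH_MANAGER(), so that only a function the firmware actually designates as eswitch manager performs eswitch operations (VFs and non-manager PFs now skip them cleanly). If I read the series this was backported from correctly, the helper is simply a capability-bit test; verify against include/linux/mlx5/eswitch.h in the 4.17.19 tree:

        #define MLX5_ESWITCH_MANAGER(mdev) MLX5_CAP_GEN(mdev, eswitch_manager)
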
+diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
+index 567ee54504bc..5e5022fa1d04 100644
+--- a/drivers/net/ethernet/atheros/alx/main.c
++++ b/drivers/net/ethernet/atheros/alx/main.c
+@@ -1897,13 +1897,19 @@ static int alx_resume(struct device *dev)
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct alx_priv *alx = pci_get_drvdata(pdev);
+ struct alx_hw *hw = &alx->hw;
++ int err;
+
+ alx_reset_phy(hw);
+
+ if (!netif_running(alx->dev))
+ return 0;
+ netif_device_attach(alx->dev);
+- return __alx_open(alx, true);
++
++ rtnl_lock();
++ err = __alx_open(alx, true);
++ rtnl_unlock();
++
++ return err;
+ }
+
+ static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index b4c9268100bb..068f991395dc 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -3732,6 +3732,8 @@ static int at91ether_init(struct platform_device *pdev)
+ int err;
+ u32 reg;
+
++ bp->queues[0].bp = bp;
++
+ dev->netdev_ops = &at91ether_netdev_ops;
+ dev->ethtool_ops = &macb_ethtool_ops;
+
+diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c
+index 2220c771092b..678835136bf8 100644
+--- a/drivers/net/ethernet/cadence/macb_ptp.c
++++ b/drivers/net/ethernet/cadence/macb_ptp.c
+@@ -170,10 +170,7 @@ static int gem_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+
+ if (delta > TSU_NSEC_MAX_VAL) {
+ gem_tsu_get_time(&bp->ptp_clock_info, &now);
+- if (sign)
+- now = timespec64_sub(now, then);
+- else
+- now = timespec64_add(now, then);
++ now = timespec64_add(now, then);
+
+ gem_tsu_set_time(&bp->ptp_clock_info,
+ (const struct timespec64 *)&now);
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+index 2edfdbdaae48..b25fd543b6f0 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+@@ -51,6 +51,7 @@
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
++#include <linux/nospec.h>
+
+ #include "common.h"
+ #include "cxgb3_ioctl.h"
+@@ -2268,6 +2269,7 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+
+ if (t.qset_idx >= nqsets)
+ return -EINVAL;
++ t.qset_idx = array_index_nospec(t.qset_idx, nqsets);
+
+ q = &adapter->params.sge.qset[q1 + t.qset_idx];
+ t.rspq_size = q->rspq_size;
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index 8a8b12b720ef..454e57ef047a 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -1920,7 +1920,7 @@ static int enic_open(struct net_device *netdev)
+ {
+ struct enic *enic = netdev_priv(netdev);
+ unsigned int i;
+- int err;
++ int err, ret;
+
+ err = enic_request_intr(enic);
+ if (err) {
+@@ -1977,10 +1977,9 @@ static int enic_open(struct net_device *netdev)
+
+ err_out_free_rq:
+ for (i = 0; i < enic->rq_count; i++) {
+- err = vnic_rq_disable(&enic->rq[i]);
+- if (err)
+- return err;
+- vnic_rq_clean(&enic->rq[i], enic_free_rq_buf);
++ ret = vnic_rq_disable(&enic->rq[i]);
++ if (!ret)
++ vnic_rq_clean(&enic->rq[i], enic_free_rq_buf);
+ }
+ enic_dev_notify_unset(enic);
+ err_out_free_intr:
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.c b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+index e2e5cdc7119c..4c0f7eda1166 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_rx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+@@ -439,6 +439,7 @@ static void rx_free_irq(struct hinic_rxq *rxq)
+ {
+ struct hinic_rq *rq = rxq->rq;
+
++ irq_set_affinity_hint(rq->irq, NULL);
+ free_irq(rq->irq, rxq);
+ rx_del_napi(rxq);
+ }
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index f174c72480ab..8d3522c94c3f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -2199,9 +2199,10 @@ static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
+ return true;
+ }
+
+-#define I40E_XDP_PASS 0
+-#define I40E_XDP_CONSUMED 1
+-#define I40E_XDP_TX 2
++#define I40E_XDP_PASS 0
++#define I40E_XDP_CONSUMED BIT(0)
++#define I40E_XDP_TX BIT(1)
++#define I40E_XDP_REDIR BIT(2)
+
+ static int i40e_xmit_xdp_ring(struct xdp_buff *xdp,
+ struct i40e_ring *xdp_ring);
+@@ -2235,7 +2236,7 @@ static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring,
+ break;
+ case XDP_REDIRECT:
+ err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
+- result = !err ? I40E_XDP_TX : I40E_XDP_CONSUMED;
++ result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED;
+ break;
+ default:
+ bpf_warn_invalid_xdp_action(act);
+@@ -2298,7 +2299,8 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
+ unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+ struct sk_buff *skb = rx_ring->skb;
+ u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
+- bool failure = false, xdp_xmit = false;
++ unsigned int xdp_xmit = 0;
++ bool failure = false;
+ struct xdp_buff xdp;
+
+ xdp.rxq = &rx_ring->xdp_rxq;
+@@ -2359,8 +2361,10 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
+ }
+
+ if (IS_ERR(skb)) {
+- if (PTR_ERR(skb) == -I40E_XDP_TX) {
+- xdp_xmit = true;
++ unsigned int xdp_res = -PTR_ERR(skb);
++
++ if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) {
++ xdp_xmit |= xdp_res;
+ i40e_rx_buffer_flip(rx_ring, rx_buffer, size);
+ } else {
+ rx_buffer->pagecnt_bias++;
+@@ -2414,12 +2418,14 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
+ total_rx_packets++;
+ }
+
+- if (xdp_xmit) {
++ if (xdp_xmit & I40E_XDP_REDIR)
++ xdp_do_flush_map();
++
++ if (xdp_xmit & I40E_XDP_TX) {
+ struct i40e_ring *xdp_ring =
+ rx_ring->vsi->xdp_rings[rx_ring->queue_index];
+
+ i40e_xdp_ring_update_tail(xdp_ring);
+- xdp_do_flush_map();
+ }
+
+ rx_ring->skb = skb;
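
The same transformation lands in ixgbe and virtio_net below: the per-poll xdp_xmit bool becomes a verdict bitmask, so XDP_REDIRECT gets its map flush and XDP_TX gets its tail bump only when each actually happened, instead of a redirect flush being issued for plain XDP_TX traffic. Condensed shape, using the i40e names from the hunk:

        unsigned int xdp_xmit = 0;

        /* per received frame: xdp_xmit |= xdp_res; */

        if (xdp_xmit & I40E_XDP_REDIR)
                xdp_do_flush_map();                     /* flush redirect maps once */
        if (xdp_xmit & I40E_XDP_TX)
                i40e_xdp_ring_update_tail(xdp_ring);    /* one doorbell per poll */
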
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 2ecd55856c50..a820a6cd831a 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -2257,9 +2257,10 @@ static struct sk_buff *ixgbe_build_skb(struct ixgbe_ring *rx_ring,
+ return skb;
+ }
+
+-#define IXGBE_XDP_PASS 0
+-#define IXGBE_XDP_CONSUMED 1
+-#define IXGBE_XDP_TX 2
++#define IXGBE_XDP_PASS 0
++#define IXGBE_XDP_CONSUMED BIT(0)
++#define IXGBE_XDP_TX BIT(1)
++#define IXGBE_XDP_REDIR BIT(2)
+
+ static int ixgbe_xmit_xdp_ring(struct ixgbe_adapter *adapter,
+ struct xdp_buff *xdp);
+@@ -2288,7 +2289,7 @@ static struct sk_buff *ixgbe_run_xdp(struct ixgbe_adapter *adapter,
+ case XDP_REDIRECT:
+ err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog);
+ if (!err)
+- result = IXGBE_XDP_TX;
++ result = IXGBE_XDP_REDIR;
+ else
+ result = IXGBE_XDP_CONSUMED;
+ break;
+@@ -2348,7 +2349,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
+ unsigned int mss = 0;
+ #endif /* IXGBE_FCOE */
+ u16 cleaned_count = ixgbe_desc_unused(rx_ring);
+- bool xdp_xmit = false;
++ unsigned int xdp_xmit = 0;
+ struct xdp_buff xdp;
+
+ xdp.rxq = &rx_ring->xdp_rxq;
+@@ -2391,8 +2392,10 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
+ }
+
+ if (IS_ERR(skb)) {
+- if (PTR_ERR(skb) == -IXGBE_XDP_TX) {
+- xdp_xmit = true;
++ unsigned int xdp_res = -PTR_ERR(skb);
++
++ if (xdp_res & (IXGBE_XDP_TX | IXGBE_XDP_REDIR)) {
++ xdp_xmit |= xdp_res;
+ ixgbe_rx_buffer_flip(rx_ring, rx_buffer, size);
+ } else {
+ rx_buffer->pagecnt_bias++;
+@@ -2464,7 +2467,10 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
+ total_rx_packets++;
+ }
+
+- if (xdp_xmit) {
++ if (xdp_xmit & IXGBE_XDP_REDIR)
++ xdp_do_flush_map();
++
++ if (xdp_xmit & IXGBE_XDP_TX) {
+ struct ixgbe_ring *ring = adapter->xdp_ring[smp_processor_id()];
+
+ /* Force memory writes to complete before letting h/w
+@@ -2472,8 +2478,6 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
+ */
+ wmb();
+ writel(ring->next_to_use, ring->tail);
+-
+- xdp_do_flush_map();
+ }
+
+ u64_stats_update_begin(&rx_ring->syncp);
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 17a904cc6a5e..0ad2f3f7da85 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -1932,7 +1932,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
+ rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
+ index = rx_desc - rxq->descs;
+ data = rxq->buf_virt_addr[index];
+- phys_addr = rx_desc->buf_phys_addr;
++ phys_addr = rx_desc->buf_phys_addr - pp->rx_offset_correction;
+
+ if (!mvneta_rxq_desc_is_first_last(rx_status) ||
+ (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 21cd1703a862..33ab34dc6d96 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -803,6 +803,7 @@ static void cmd_work_handler(struct work_struct *work)
+ unsigned long flags;
+ bool poll_cmd = ent->polling;
+ int alloc_ret;
++ int cmd_mode;
+
+ sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;
+ down(sem);
+@@ -849,6 +850,7 @@ static void cmd_work_handler(struct work_struct *work)
+ set_signature(ent, !cmd->checksum_disabled);
+ dump_command(dev, ent, 1);
+ ent->ts1 = ktime_get_ns();
++ cmd_mode = cmd->mode;
+
+ if (ent->callback)
+ schedule_delayed_work(&ent->cb_timeout_work, cb_timeout);
+@@ -873,7 +875,7 @@ static void cmd_work_handler(struct work_struct *work)
+ iowrite32be(1 << ent->idx, &dev->iseg->cmd_dbell);
+ mmiowb();
+ /* if not in polling don't use ent after this point */
+- if (cmd->mode == CMD_MODE_POLLING || poll_cmd) {
++ if (cmd_mode == CMD_MODE_POLLING || poll_cmd) {
+ poll_timeout(ent);
+ /* make sure we read the descriptor after ownership is SW */
+ rmb();
+@@ -1274,7 +1276,7 @@ static ssize_t outlen_write(struct file *filp, const char __user *buf,
+ {
+ struct mlx5_core_dev *dev = filp->private_data;
+ struct mlx5_cmd_debug *dbg = &dev->cmd.dbg;
+- char outlen_str[8];
++ char outlen_str[8] = {0};
+ int outlen;
+ void *ptr;
+ int err;
+@@ -1289,8 +1291,6 @@ static ssize_t outlen_write(struct file *filp, const char __user *buf,
+ if (copy_from_user(outlen_str, buf, count))
+ return -EFAULT;
+
+- outlen_str[7] = 0;
+-
+ err = sscanf(outlen_str, "%d", &outlen);
+ if (err < 0)
+ return err;
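
The outlen_write() change fixes a classic pattern: copy_from_user() does not NUL-terminate, and forcing a terminator only at index 7 leaves uninitialized stack bytes between a short user write and that terminator, which sscanf() will happily parse. Zero-initializing the buffer terminates the string right after the copied bytes. Reduced to its essentials (hypothetical names):

        char buf[8];                    /* uninitialized stack         */
        copy_from_user(buf, ubuf, 2);   /* user wrote just "42"        */
        buf[7] = 0;                     /* buf[2..6] is still garbage  */
        sscanf(buf, "%d", &v);          /* may parse "42<junk...>"     */

        char buf2[8] = {0};             /* fixed: "42" ends at buf2[2] */
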
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index b29c1d93f058..d3a1a2281e77 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2612,7 +2612,7 @@ void mlx5e_activate_priv_channels(struct mlx5e_priv *priv)
+ mlx5e_activate_channels(&priv->channels);
+ netif_tx_start_all_queues(priv->netdev);
+
+- if (MLX5_VPORT_MANAGER(priv->mdev))
++ if (MLX5_ESWITCH_MANAGER(priv->mdev))
+ mlx5e_add_sqs_fwd_rules(priv);
+
+ mlx5e_wait_channels_min_rx_wqes(&priv->channels);
+@@ -2623,7 +2623,7 @@ void mlx5e_deactivate_priv_channels(struct mlx5e_priv *priv)
+ {
+ mlx5e_redirect_rqts_to_drop(priv);
+
+- if (MLX5_VPORT_MANAGER(priv->mdev))
++ if (MLX5_ESWITCH_MANAGER(priv->mdev))
+ mlx5e_remove_sqs_fwd_rules(priv);
+
+ /* FIXME: This is a W/A only for tx timeout watch dog false alarm when
+@@ -4315,7 +4315,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ mlx5e_set_netdev_dev_addr(netdev);
+
+ #if IS_ENABLED(CONFIG_MLX5_ESWITCH)
+- if (MLX5_VPORT_MANAGER(mdev))
++ if (MLX5_ESWITCH_MANAGER(mdev))
+ netdev->switchdev_ops = &mlx5e_switchdev_ops;
+ #endif
+
+@@ -4465,7 +4465,7 @@ static void mlx5e_nic_enable(struct mlx5e_priv *priv)
+
+ mlx5e_enable_async_events(priv);
+
+- if (MLX5_VPORT_MANAGER(priv->mdev))
++ if (MLX5_ESWITCH_MANAGER(priv->mdev))
+ mlx5e_register_vport_reps(priv);
+
+ if (netdev->reg_state != NETREG_REGISTERED)
+@@ -4500,7 +4500,7 @@ static void mlx5e_nic_disable(struct mlx5e_priv *priv)
+
+ queue_work(priv->wq, &priv->set_rx_mode_work);
+
+- if (MLX5_VPORT_MANAGER(priv->mdev))
++ if (MLX5_ESWITCH_MANAGER(priv->mdev))
+ mlx5e_unregister_vport_reps(priv);
+
+ mlx5e_disable_async_events(priv);
+@@ -4684,7 +4684,7 @@ static void *mlx5e_add(struct mlx5_core_dev *mdev)
+ return NULL;
+
+ #ifdef CONFIG_MLX5_ESWITCH
+- if (MLX5_VPORT_MANAGER(mdev)) {
++ if (MLX5_ESWITCH_MANAGER(mdev)) {
+ rpriv = mlx5e_alloc_nic_rep_priv(mdev);
+ if (!rpriv) {
+ mlx5_core_warn(mdev, "Failed to alloc NIC rep priv data\n");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 876c3e4c6193..286565862341 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -790,7 +790,7 @@ bool mlx5e_is_uplink_rep(struct mlx5e_priv *priv)
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
+ struct mlx5_eswitch_rep *rep;
+
+- if (!MLX5_CAP_GEN(priv->mdev, vport_group_manager))
++ if (!MLX5_ESWITCH_MANAGER(priv->mdev))
+ return false;
+
+ rep = rpriv->rep;
+@@ -804,8 +804,12 @@ bool mlx5e_is_uplink_rep(struct mlx5e_priv *priv)
+ static bool mlx5e_is_vf_vport_rep(struct mlx5e_priv *priv)
+ {
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
+- struct mlx5_eswitch_rep *rep = rpriv->rep;
++ struct mlx5_eswitch_rep *rep;
+
++ if (!MLX5_ESWITCH_MANAGER(priv->mdev))
++ return false;
++
++ rep = rpriv->rep;
+ if (rep && rep->vport != FDB_UPLINK_VPORT)
+ return true;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 1352d13eedb3..c3a18ddf5dba 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1604,7 +1604,7 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode)
+ if (!ESW_ALLOWED(esw))
+ return 0;
+
+- if (!MLX5_CAP_GEN(esw->dev, eswitch_flow_table) ||
++ if (!MLX5_ESWITCH_MANAGER(esw->dev) ||
+ !MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ft_support)) {
+ esw_warn(esw->dev, "E-Switch FDB is not supported, aborting ...\n");
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 35e256eb2f6e..2feb33dcad2f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -983,8 +983,8 @@ static int mlx5_devlink_eswitch_check(struct devlink *devlink)
+ if (MLX5_CAP_GEN(dev, port_type) != MLX5_CAP_PORT_TYPE_ETH)
+ return -EOPNOTSUPP;
+
+- if (!MLX5_CAP_GEN(dev, vport_group_manager))
+- return -EOPNOTSUPP;
++	if (!MLX5_ESWITCH_MANAGER(dev))
++ return -EPERM;
+
+ if (dev->priv.eswitch->mode == SRIOV_NONE)
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index c39c1692e674..bd0ffc347bd7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -32,6 +32,7 @@
+
+ #include <linux/mutex.h>
+ #include <linux/mlx5/driver.h>
++#include <linux/mlx5/eswitch.h>
+
+ #include "mlx5_core.h"
+ #include "fs_core.h"
+@@ -2631,7 +2632,7 @@ int mlx5_init_fs(struct mlx5_core_dev *dev)
+ goto err;
+ }
+
+- if (MLX5_CAP_GEN(dev, eswitch_flow_table)) {
++ if (MLX5_ESWITCH_MANAGER(dev)) {
+ if (MLX5_CAP_ESW_FLOWTABLE_FDB(dev, ft_support)) {
+ err = init_fdb_root_ns(steering);
+ if (err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
+index afd9f4fa22f4..41ad24f0de2c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
+@@ -32,6 +32,7 @@
+
+ #include <linux/mlx5/driver.h>
+ #include <linux/mlx5/cmd.h>
++#include <linux/mlx5/eswitch.h>
+ #include <linux/module.h>
+ #include "mlx5_core.h"
+ #include "../../mlxfw/mlxfw.h"
+@@ -159,13 +160,13 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev)
+ }
+
+ if (MLX5_CAP_GEN(dev, vport_group_manager) &&
+- MLX5_CAP_GEN(dev, eswitch_flow_table)) {
++ MLX5_ESWITCH_MANAGER(dev)) {
+ err = mlx5_core_get_caps(dev, MLX5_CAP_ESWITCH_FLOW_TABLE);
+ if (err)
+ return err;
+ }
+
+- if (MLX5_CAP_GEN(dev, eswitch_flow_table)) {
++ if (MLX5_ESWITCH_MANAGER(dev)) {
+ err = mlx5_core_get_caps(dev, MLX5_CAP_ESWITCH);
+ if (err)
+ return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
+index 7cb67122e8b5..98359559c77e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
+@@ -33,6 +33,7 @@
+ #include <linux/etherdevice.h>
+ #include <linux/mlx5/driver.h>
+ #include <linux/mlx5/mlx5_ifc.h>
++#include <linux/mlx5/eswitch.h>
+ #include "mlx5_core.h"
+ #include "lib/mpfs.h"
+
+@@ -98,7 +99,7 @@ int mlx5_mpfs_init(struct mlx5_core_dev *dev)
+ int l2table_size = 1 << MLX5_CAP_GEN(dev, log_max_l2_table);
+ struct mlx5_mpfs *mpfs;
+
+- if (!MLX5_VPORT_MANAGER(dev))
++ if (!MLX5_ESWITCH_MANAGER(dev))
+ return 0;
+
+ mpfs = kzalloc(sizeof(*mpfs), GFP_KERNEL);
+@@ -122,7 +123,7 @@ void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_mpfs *mpfs = dev->priv.mpfs;
+
+- if (!MLX5_VPORT_MANAGER(dev))
++ if (!MLX5_ESWITCH_MANAGER(dev))
+ return;
+
+ WARN_ON(!hlist_empty(mpfs->hash));
+@@ -137,7 +138,7 @@ int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac)
+ u32 index;
+ int err;
+
+- if (!MLX5_VPORT_MANAGER(dev))
++ if (!MLX5_ESWITCH_MANAGER(dev))
+ return 0;
+
+ mutex_lock(&mpfs->lock);
+@@ -179,7 +180,7 @@ int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac)
+ int err = 0;
+ u32 index;
+
+- if (!MLX5_VPORT_MANAGER(dev))
++ if (!MLX5_ESWITCH_MANAGER(dev))
+ return 0;
+
+ mutex_lock(&mpfs->lock);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+index fa9d0760dd36..31a9cbd85689 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+@@ -701,7 +701,7 @@ EXPORT_SYMBOL_GPL(mlx5_query_port_prio_tc);
+ static int mlx5_set_port_qetcr_reg(struct mlx5_core_dev *mdev, u32 *in,
+ int inlen)
+ {
+- u32 out[MLX5_ST_SZ_DW(qtct_reg)];
++ u32 out[MLX5_ST_SZ_DW(qetc_reg)];
+
+ if (!MLX5_CAP_GEN(mdev, ets))
+ return -EOPNOTSUPP;
+@@ -713,7 +713,7 @@ static int mlx5_set_port_qetcr_reg(struct mlx5_core_dev *mdev, u32 *in,
+ static int mlx5_query_port_qetcr_reg(struct mlx5_core_dev *mdev, u32 *out,
+ int outlen)
+ {
+- u32 in[MLX5_ST_SZ_DW(qtct_reg)];
++ u32 in[MLX5_ST_SZ_DW(qetc_reg)];
+
+ if (!MLX5_CAP_GEN(mdev, ets))
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+index 2a8b529ce6dd..a0674962f02c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+@@ -88,6 +88,9 @@ static int mlx5_device_enable_sriov(struct mlx5_core_dev *dev, int num_vfs)
+ return -EBUSY;
+ }
+
++ if (!MLX5_ESWITCH_MANAGER(dev))
++ goto enable_vfs_hca;
++
+ err = mlx5_eswitch_enable_sriov(dev->priv.eswitch, num_vfs, SRIOV_LEGACY);
+ if (err) {
+ mlx5_core_warn(dev,
+@@ -95,6 +98,7 @@ static int mlx5_device_enable_sriov(struct mlx5_core_dev *dev, int num_vfs)
+ return err;
+ }
+
++enable_vfs_hca:
+ for (vf = 0; vf < num_vfs; vf++) {
+ err = mlx5_core_enable_hca(dev, vf + 1);
+ if (err) {
+@@ -140,7 +144,8 @@ static void mlx5_device_disable_sriov(struct mlx5_core_dev *dev)
+ }
+
+ out:
+- mlx5_eswitch_disable_sriov(dev->priv.eswitch);
++ if (MLX5_ESWITCH_MANAGER(dev))
++ mlx5_eswitch_disable_sriov(dev->priv.eswitch);
+
+ if (mlx5_wait_for_vf_pages(dev))
+ mlx5_core_warn(dev, "timeout reclaiming VFs pages\n");
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.c b/drivers/net/ethernet/netronome/nfp/bpf/main.c
+index 35fb31f682af..1a781281c57a 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/main.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/main.c
+@@ -194,6 +194,9 @@ static int nfp_bpf_setup_tc_block(struct net_device *netdev,
+ if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+ return -EOPNOTSUPP;
+
++ if (tcf_block_shared(f->block))
++ return -EOPNOTSUPP;
++
+ switch (f->command) {
+ case TC_BLOCK_BIND:
+ return tcf_block_cb_register(f->block,
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/match.c b/drivers/net/ethernet/netronome/nfp/flower/match.c
+index 91935405f586..84f7a5dbea9d 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/match.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/match.c
+@@ -123,6 +123,20 @@ nfp_flower_compile_mac(struct nfp_flower_mac_mpls *frame,
+ NFP_FLOWER_MASK_MPLS_Q;
+
+ frame->mpls_lse = cpu_to_be32(t_mpls);
++ } else if (dissector_uses_key(flow->dissector,
++ FLOW_DISSECTOR_KEY_BASIC)) {
++ /* Check for mpls ether type and set NFP_FLOWER_MASK_MPLS_Q
++		 * bit, which indicates an mpls ether type without any
++ * mpls fields.
++ */
++ struct flow_dissector_key_basic *key_basic;
++
++ key_basic = skb_flow_dissector_target(flow->dissector,
++ FLOW_DISSECTOR_KEY_BASIC,
++ flow->key);
++ if (key_basic->n_proto == cpu_to_be16(ETH_P_MPLS_UC) ||
++ key_basic->n_proto == cpu_to_be16(ETH_P_MPLS_MC))
++ frame->mpls_lse = cpu_to_be32(NFP_FLOWER_MASK_MPLS_Q);
+ }
+ }
+
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+index 114d2ab02a38..4de30d0f9491 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+@@ -264,6 +264,14 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
+ case cpu_to_be16(ETH_P_ARP):
+ return -EOPNOTSUPP;
+
++ case cpu_to_be16(ETH_P_MPLS_UC):
++ case cpu_to_be16(ETH_P_MPLS_MC):
++ if (!(key_layer & NFP_FLOWER_LAYER_MAC)) {
++ key_layer |= NFP_FLOWER_LAYER_MAC;
++ key_size += sizeof(struct nfp_flower_mac_mpls);
++ }
++ break;
++
+ /* Will be included in layer 2. */
+ case cpu_to_be16(ETH_P_8021Q):
+ break;
+@@ -593,6 +601,9 @@ static int nfp_flower_setup_tc_block(struct net_device *netdev,
+ if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+ return -EOPNOTSUPP;
+
++ if (tcf_block_shared(f->block))
++ return -EOPNOTSUPP;
++
+ switch (f->command) {
+ case TC_BLOCK_BIND:
+ return tcf_block_cb_register(f->block,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+index 449777f21237..e82986df9b8e 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+@@ -700,9 +700,9 @@ qed_dcbx_get_local_lldp_params(struct qed_hwfn *p_hwfn,
+ p_local = &p_hwfn->p_dcbx_info->lldp_local[LLDP_NEAREST_BRIDGE];
+
+ memcpy(params->lldp_local.local_chassis_id, p_local->local_chassis_id,
+- ARRAY_SIZE(p_local->local_chassis_id));
++ sizeof(p_local->local_chassis_id));
+ memcpy(params->lldp_local.local_port_id, p_local->local_port_id,
+- ARRAY_SIZE(p_local->local_port_id));
++ sizeof(p_local->local_port_id));
+ }
+
+ static void
+@@ -714,9 +714,9 @@ qed_dcbx_get_remote_lldp_params(struct qed_hwfn *p_hwfn,
+ p_remote = &p_hwfn->p_dcbx_info->lldp_remote[LLDP_NEAREST_BRIDGE];
+
+ memcpy(params->lldp_remote.peer_chassis_id, p_remote->peer_chassis_id,
+- ARRAY_SIZE(p_remote->peer_chassis_id));
++ sizeof(p_remote->peer_chassis_id));
+ memcpy(params->lldp_remote.peer_port_id, p_remote->peer_port_id,
+- ARRAY_SIZE(p_remote->peer_port_id));
++ sizeof(p_remote->peer_port_id));
+ }
+
+ static int
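
ARRAY_SIZE() is an element count, not a byte count, so for any array whose elements are wider than one byte the old memcpy() calls above copied only a fraction of the data. Reduced illustration (placeholder names):

        u32 id[4];      /* ARRAY_SIZE(id) == 4, sizeof(id) == 16 */

        memcpy(dst, id, ARRAY_SIZE(id));        /* BUG: copies 4 bytes  */
        memcpy(dst, id, sizeof(id));            /* ok: copies 16 bytes  */
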
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index d2ad5e92c74f..5644b24d85b0 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -1789,7 +1789,7 @@ int qed_hw_init(struct qed_dev *cdev, struct qed_hw_init_params *p_params)
+ DP_INFO(p_hwfn, "Failed to update driver state\n");
+
+ rc = qed_mcp_ov_update_eswitch(p_hwfn, p_hwfn->p_main_ptt,
+- QED_OV_ESWITCH_VEB);
++ QED_OV_ESWITCH_NONE);
+ if (rc)
+ DP_INFO(p_hwfn, "Failed to update eswitch mode\n");
+ }
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
+index 7870ae2a6f7e..261f21d6b0b0 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
+@@ -780,6 +780,14 @@ static int qed_slowpath_setup_int(struct qed_dev *cdev,
+ /* We want a minimum of one slowpath and one fastpath vector per hwfn */
+ cdev->int_params.in.min_msix_cnt = cdev->num_hwfns * 2;
+
++ if (is_kdump_kernel()) {
++ DP_INFO(cdev,
++ "Kdump kernel: Limit the max number of requested MSI-X vectors to %hd\n",
++ cdev->int_params.in.min_msix_cnt);
++ cdev->int_params.in.num_vectors =
++ cdev->int_params.in.min_msix_cnt;
++ }
++
+ rc = qed_set_int_mode(cdev, false);
+ if (rc) {
+ DP_ERR(cdev, "qed_slowpath_setup_int ERR\n");
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+index 5acb91b3564c..419c681ea2be 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+@@ -4400,6 +4400,8 @@ static void qed_sriov_enable_qid_config(struct qed_hwfn *hwfn,
+ static int qed_sriov_enable(struct qed_dev *cdev, int num)
+ {
+ struct qed_iov_vf_init_params params;
++ struct qed_hwfn *hwfn;
++ struct qed_ptt *ptt;
+ int i, j, rc;
+
+ if (num >= RESC_NUM(&cdev->hwfns[0], QED_VPORT)) {
+@@ -4412,8 +4414,8 @@ static int qed_sriov_enable(struct qed_dev *cdev, int num)
+
+ /* Initialize HW for VF access */
+ for_each_hwfn(cdev, j) {
+- struct qed_hwfn *hwfn = &cdev->hwfns[j];
+- struct qed_ptt *ptt = qed_ptt_acquire(hwfn);
++ hwfn = &cdev->hwfns[j];
++ ptt = qed_ptt_acquire(hwfn);
+
+ /* Make sure not to use more than 16 queues per VF */
+ params.num_queues = min_t(int,
+@@ -4449,6 +4451,19 @@ static int qed_sriov_enable(struct qed_dev *cdev, int num)
+ goto err;
+ }
+
++ hwfn = QED_LEADING_HWFN(cdev);
++ ptt = qed_ptt_acquire(hwfn);
++ if (!ptt) {
++ DP_ERR(hwfn, "Failed to acquire ptt\n");
++ rc = -EBUSY;
++ goto err;
++ }
++
++ rc = qed_mcp_ov_update_eswitch(hwfn, ptt, QED_OV_ESWITCH_VEB);
++ if (rc)
++ DP_INFO(cdev, "Failed to update eswitch mode\n");
++ qed_ptt_release(hwfn, ptt);
++
+ return num;
+
+ err:
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_ptp.c b/drivers/net/ethernet/qlogic/qede/qede_ptp.c
+index 02adb513f475..013ff567283c 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_ptp.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_ptp.c
+@@ -337,8 +337,14 @@ int qede_ptp_get_ts_info(struct qede_dev *edev, struct ethtool_ts_info *info)
+ {
+ struct qede_ptp *ptp = edev->ptp;
+
+- if (!ptp)
+- return -EIO;
++ if (!ptp) {
++ info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE |
++ SOF_TIMESTAMPING_RX_SOFTWARE |
++ SOF_TIMESTAMPING_SOFTWARE;
++ info->phc_index = -1;
++
++ return 0;
++ }
+
+ info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE |
+ SOF_TIMESTAMPING_RX_SOFTWARE |
+diff --git a/drivers/net/ethernet/sfc/farch.c b/drivers/net/ethernet/sfc/farch.c
+index c72adf8b52ea..9165e2b0c590 100644
+--- a/drivers/net/ethernet/sfc/farch.c
++++ b/drivers/net/ethernet/sfc/farch.c
+@@ -2794,6 +2794,7 @@ int efx_farch_filter_table_probe(struct efx_nic *efx)
+ if (!state)
+ return -ENOMEM;
+ efx->filter_state = state;
++ init_rwsem(&state->lock);
+
+ table = &state->table[EFX_FARCH_FILTER_TABLE_RX_IP];
+ table->id = EFX_FARCH_FILTER_TABLE_RX_IP;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index b65e2d144698..1e1cc5256eca 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -927,6 +927,7 @@ static void stmmac_check_pcs_mode(struct stmmac_priv *priv)
+ static int stmmac_init_phy(struct net_device *dev)
+ {
+ struct stmmac_priv *priv = netdev_priv(dev);
++ u32 tx_cnt = priv->plat->tx_queues_to_use;
+ struct phy_device *phydev;
+ char phy_id_fmt[MII_BUS_ID_SIZE + 3];
+ char bus_id[MII_BUS_ID_SIZE];
+@@ -967,6 +968,15 @@ static int stmmac_init_phy(struct net_device *dev)
+ phydev->advertising &= ~(SUPPORTED_1000baseT_Half |
+ SUPPORTED_1000baseT_Full);
+
++ /*
++	 * Half-duplex mode is not supported with multiqueue:
++	 * half-duplex only works with a single queue.
++ */
++ if (tx_cnt > 1)
++ phydev->supported &= ~(SUPPORTED_1000baseT_Half |
++ SUPPORTED_100baseT_Half |
++ SUPPORTED_10baseT_Half);
++
+ /*
+ * Broken HW is sometimes missing the pull-up resistor on the
+ * MDIO line, which results in reads to non-existent devices returning
+diff --git a/drivers/net/ethernet/sun/sungem.c b/drivers/net/ethernet/sun/sungem.c
+index 7a16d40a72d1..b9221fc1674d 100644
+--- a/drivers/net/ethernet/sun/sungem.c
++++ b/drivers/net/ethernet/sun/sungem.c
+@@ -60,8 +60,7 @@
+ #include <linux/sungem_phy.h>
+ #include "sungem.h"
+
+-/* Stripping FCS is causing problems, disabled for now */
+-#undef STRIP_FCS
++#define STRIP_FCS
+
+ #define DEFAULT_MSG (NETIF_MSG_DRV | \
+ NETIF_MSG_PROBE | \
+@@ -435,7 +434,7 @@ static int gem_rxmac_reset(struct gem *gp)
+ writel(desc_dma & 0xffffffff, gp->regs + RXDMA_DBLOW);
+ writel(RX_RING_SIZE - 4, gp->regs + RXDMA_KICK);
+ val = (RXDMA_CFG_BASE | (RX_OFFSET << 10) |
+- ((14 / 2) << 13) | RXDMA_CFG_FTHRESH_128);
++ (ETH_HLEN << 13) | RXDMA_CFG_FTHRESH_128);
+ writel(val, gp->regs + RXDMA_CFG);
+ if (readl(gp->regs + GREG_BIFCFG) & GREG_BIFCFG_M66EN)
+ writel(((5 & RXDMA_BLANK_IPKTS) |
+@@ -760,7 +759,6 @@ static int gem_rx(struct gem *gp, int work_to_do)
+ struct net_device *dev = gp->dev;
+ int entry, drops, work_done = 0;
+ u32 done;
+- __sum16 csum;
+
+ if (netif_msg_rx_status(gp))
+ printk(KERN_DEBUG "%s: rx interrupt, done: %d, rx_new: %d\n",
+@@ -855,9 +853,13 @@ static int gem_rx(struct gem *gp, int work_to_do)
+ skb = copy_skb;
+ }
+
+- csum = (__force __sum16)htons((status & RXDCTRL_TCPCSUM) ^ 0xffff);
+- skb->csum = csum_unfold(csum);
+- skb->ip_summed = CHECKSUM_COMPLETE;
++ if (likely(dev->features & NETIF_F_RXCSUM)) {
++ __sum16 csum;
++
++ csum = (__force __sum16)htons((status & RXDCTRL_TCPCSUM) ^ 0xffff);
++ skb->csum = csum_unfold(csum);
++ skb->ip_summed = CHECKSUM_COMPLETE;
++ }
+ skb->protocol = eth_type_trans(skb, gp->dev);
+
+ napi_gro_receive(&gp->napi, skb);
+@@ -1761,7 +1763,7 @@ static void gem_init_dma(struct gem *gp)
+ writel(0, gp->regs + TXDMA_KICK);
+
+ val = (RXDMA_CFG_BASE | (RX_OFFSET << 10) |
+- ((14 / 2) << 13) | RXDMA_CFG_FTHRESH_128);
++ (ETH_HLEN << 13) | RXDMA_CFG_FTHRESH_128);
+ writel(val, gp->regs + RXDMA_CFG);
+
+ writel(desc_dma >> 32, gp->regs + RXDMA_DBHI);
+@@ -2985,8 +2987,8 @@ static int gem_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ pci_set_drvdata(pdev, dev);
+
+ /* We can do scatter/gather and HW checksum */
+- dev->hw_features = NETIF_F_SG | NETIF_F_HW_CSUM;
+- dev->features |= dev->hw_features | NETIF_F_RXCSUM;
++ dev->hw_features = NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
++ dev->features = dev->hw_features;
+ if (pci_using_dac)
+ dev->features |= NETIF_F_HIGHDMA;
+
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index b919e89a9b93..4b3986dda52e 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -474,7 +474,7 @@ static struct sk_buff **geneve_gro_receive(struct sock *sk,
+ out_unlock:
+ rcu_read_unlock();
+ out:
+- NAPI_GRO_CB(skb)->flush |= flush;
++ skb_gro_flush_final(skb, pp, flush);
+
+ return pp;
+ }
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index 960f06141472..eaeee3201e8f 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -211,7 +211,7 @@ int netvsc_recv_callback(struct net_device *net,
+ void netvsc_channel_cb(void *context);
+ int netvsc_poll(struct napi_struct *napi, int budget);
+
+-void rndis_set_subchannel(struct work_struct *w);
++int rndis_set_subchannel(struct net_device *ndev, struct netvsc_device *nvdev);
+ int rndis_filter_open(struct netvsc_device *nvdev);
+ int rndis_filter_close(struct netvsc_device *nvdev);
+ struct netvsc_device *rndis_filter_device_add(struct hv_device *dev,
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 04f611e6f678..c418113c6b20 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -66,6 +66,41 @@ void netvsc_switch_datapath(struct net_device *ndev, bool vf)
+ VM_PKT_DATA_INBAND, 0);
+ }
+
++/* Worker to setup sub channels on initial setup
++ * Initial hotplug event occurs in softirq context
++ * and can't wait for channels.
++ */
++static void netvsc_subchan_work(struct work_struct *w)
++{
++ struct netvsc_device *nvdev =
++ container_of(w, struct netvsc_device, subchan_work);
++ struct rndis_device *rdev;
++ int i, ret;
++
++ /* Avoid deadlock with device removal already under RTNL */
++ if (!rtnl_trylock()) {
++ schedule_work(w);
++ return;
++ }
++
++ rdev = nvdev->extension;
++ if (rdev) {
++ ret = rndis_set_subchannel(rdev->ndev, nvdev);
++ if (ret == 0) {
++ netif_device_attach(rdev->ndev);
++ } else {
++ /* fallback to only primary channel */
++ for (i = 1; i < nvdev->num_chn; i++)
++ netif_napi_del(&nvdev->chan_table[i].napi);
++
++ nvdev->max_chn = 1;
++ nvdev->num_chn = 1;
++ }
++ }
++
++ rtnl_unlock();
++}
++
+ static struct netvsc_device *alloc_net_device(void)
+ {
+ struct netvsc_device *net_device;
+@@ -82,7 +117,7 @@ static struct netvsc_device *alloc_net_device(void)
+
+ init_completion(&net_device->channel_init_wait);
+ init_waitqueue_head(&net_device->subchan_open);
+- INIT_WORK(&net_device->subchan_work, rndis_set_subchannel);
++ INIT_WORK(&net_device->subchan_work, netvsc_subchan_work);
+
+ return net_device;
+ }
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index eb8dccd24abf..82c3c8e200f0 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -905,8 +905,20 @@ static int netvsc_attach(struct net_device *ndev,
+ if (IS_ERR(nvdev))
+ return PTR_ERR(nvdev);
+
+- /* Note: enable and attach happen when sub-channels setup */
++ if (nvdev->num_chn > 1) {
++ ret = rndis_set_subchannel(ndev, nvdev);
++
++ /* if unavailable, just proceed with one queue */
++ if (ret) {
++ nvdev->max_chn = 1;
++ nvdev->num_chn = 1;
++ }
++ }
++
++ /* In any case device is now ready */
++ netif_device_attach(ndev);
+
++ /* Note: enable and attach happen when sub-channels setup */
+ netif_carrier_off(ndev);
+
+ if (netif_running(ndev)) {
+@@ -2064,6 +2076,9 @@ static int netvsc_probe(struct hv_device *dev,
+
+ memcpy(net->dev_addr, device_info.mac_adr, ETH_ALEN);
+
++ if (nvdev->num_chn > 1)
++ schedule_work(&nvdev->subchan_work);
++
+ /* hw_features computed in rndis_netdev_set_hwcaps() */
+ net->features = net->hw_features |
+ NETIF_F_HIGHDMA | NETIF_F_SG |
+diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
+index e7ca5b5f39ed..f362cda85425 100644
+--- a/drivers/net/hyperv/rndis_filter.c
++++ b/drivers/net/hyperv/rndis_filter.c
+@@ -1061,29 +1061,15 @@ static void netvsc_sc_open(struct vmbus_channel *new_sc)
+ * This breaks overlap of processing the host message for the
+ * new primary channel with the initialization of sub-channels.
+ */
+-void rndis_set_subchannel(struct work_struct *w)
++int rndis_set_subchannel(struct net_device *ndev, struct netvsc_device *nvdev)
+ {
+- struct netvsc_device *nvdev
+- = container_of(w, struct netvsc_device, subchan_work);
+ struct nvsp_message *init_packet = &nvdev->channel_init_pkt;
+- struct net_device_context *ndev_ctx;
+- struct rndis_device *rdev;
+- struct net_device *ndev;
+- struct hv_device *hv_dev;
++ struct net_device_context *ndev_ctx = netdev_priv(ndev);
++ struct hv_device *hv_dev = ndev_ctx->device_ctx;
++ struct rndis_device *rdev = nvdev->extension;
+ int i, ret;
+
+- if (!rtnl_trylock()) {
+- schedule_work(w);
+- return;
+- }
+-
+- rdev = nvdev->extension;
+- if (!rdev)
+- goto unlock; /* device was removed */
+-
+- ndev = rdev->ndev;
+- ndev_ctx = netdev_priv(ndev);
+- hv_dev = ndev_ctx->device_ctx;
++ ASSERT_RTNL();
+
+ memset(init_packet, 0, sizeof(struct nvsp_message));
+ init_packet->hdr.msg_type = NVSP_MSG5_TYPE_SUBCHANNEL;
+@@ -1099,13 +1085,13 @@ void rndis_set_subchannel(struct work_struct *w)
+ VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
+ if (ret) {
+ netdev_err(ndev, "sub channel allocate send failed: %d\n", ret);
+- goto failed;
++ return ret;
+ }
+
+ wait_for_completion(&nvdev->channel_init_wait);
+ if (init_packet->msg.v5_msg.subchn_comp.status != NVSP_STAT_SUCCESS) {
+ netdev_err(ndev, "sub channel request failed\n");
+- goto failed;
++ return -EIO;
+ }
+
+ nvdev->num_chn = 1 +
+@@ -1124,21 +1110,7 @@ void rndis_set_subchannel(struct work_struct *w)
+ for (i = 0; i < VRSS_SEND_TAB_SIZE; i++)
+ ndev_ctx->tx_table[i] = i % nvdev->num_chn;
+
+- netif_device_attach(ndev);
+- rtnl_unlock();
+- return;
+-
+-failed:
+- /* fallback to only primary channel */
+- for (i = 1; i < nvdev->num_chn; i++)
+- netif_napi_del(&nvdev->chan_table[i].napi);
+-
+- nvdev->max_chn = 1;
+- nvdev->num_chn = 1;
+-
+- netif_device_attach(ndev);
+-unlock:
+- rtnl_unlock();
++ return 0;
+ }
+
+ static int rndis_netdev_set_hwcaps(struct rndis_device *rndis_device,
+@@ -1329,21 +1301,12 @@ struct netvsc_device *rndis_filter_device_add(struct hv_device *dev,
+ netif_napi_add(net, &net_device->chan_table[i].napi,
+ netvsc_poll, NAPI_POLL_WEIGHT);
+
+- if (net_device->num_chn > 1)
+- schedule_work(&net_device->subchan_work);
++ return net_device;
+
+ out:
+- /* if unavailable, just proceed with one queue */
+- if (ret) {
+- net_device->max_chn = 1;
+- net_device->num_chn = 1;
+- }
+-
+- /* No sub channels, device is ready */
+- if (net_device->num_chn == 1)
+- netif_device_attach(net);
+-
+- return net_device;
++ /* setting up multiple channels failed */
++ net_device->max_chn = 1;
++ net_device->num_chn = 1;
+
+ err_dev_remv:
+ rndis_filter_device_remove(dev, net_device);
+diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c
+index 4377c26f714d..6641fd5355e0 100644
+--- a/drivers/net/ipvlan/ipvlan_main.c
++++ b/drivers/net/ipvlan/ipvlan_main.c
+@@ -594,7 +594,8 @@ int ipvlan_link_new(struct net *src_net, struct net_device *dev,
+ ipvlan->phy_dev = phy_dev;
+ ipvlan->dev = dev;
+ ipvlan->sfeatures = IPVLAN_FEATURES;
+- ipvlan_adjust_mtu(ipvlan, phy_dev);
++ if (!tb[IFLA_MTU])
++ ipvlan_adjust_mtu(ipvlan, phy_dev);
+ INIT_LIST_HEAD(&ipvlan->addrs);
+ spin_lock_init(&ipvlan->addrs_lock);
+
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 0867f7275852..8a76c1e5de8d 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -3193,6 +3193,7 @@ static void lan78xx_tx_bh(struct lan78xx_net *dev)
+ pkt_cnt = 0;
+ count = 0;
+ length = 0;
++ spin_lock_irqsave(&tqp->lock, flags);
+ for (skb = tqp->next; pkt_cnt < tqp->qlen; skb = skb->next) {
+ if (skb_is_gso(skb)) {
+ if (pkt_cnt) {
+@@ -3201,7 +3202,8 @@ static void lan78xx_tx_bh(struct lan78xx_net *dev)
+ }
+ count = 1;
+ length = skb->len - TX_OVERHEAD;
+- skb2 = skb_dequeue(tqp);
++ __skb_unlink(skb, tqp);
++ spin_unlock_irqrestore(&tqp->lock, flags);
+ goto gso_skb;
+ }
+
+@@ -3210,6 +3212,7 @@ static void lan78xx_tx_bh(struct lan78xx_net *dev)
+ skb_totallen = skb->len + roundup(skb_totallen, sizeof(u32));
+ pkt_cnt++;
+ }
++ spin_unlock_irqrestore(&tqp->lock, flags);
+
+ /* copy to a single skb */
+ skb = alloc_skb(skb_totallen, GFP_ATOMIC);
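
The lan78xx change fixes unlocked queue walking: the traversal now runs with tqp->lock held, and the dequeue inside the walk becomes __skb_unlink(), the caller-holds-the-lock variant, rather than skb_dequeue(), which takes the lock itself. The canonical shape of that pattern, with q and is_wanted() as placeholders:

        unsigned long flags;
        struct sk_buff *skb, *tmp;

        spin_lock_irqsave(&q->lock, flags);
        skb_queue_walk_safe(q, skb, tmp) {      /* safe: we may unlink */
                if (is_wanted(skb)) {
                        __skb_unlink(skb, q);   /* lock already held */
                        break;
                }
        }
        spin_unlock_irqrestore(&q->lock, flags);
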
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 094680871687..04c22f508ed9 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1246,6 +1246,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x413c, 0x81b3, 8)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
+ {QMI_FIXED_INTF(0x413c, 0x81b6, 8)}, /* Dell Wireless 5811e */
+ {QMI_FIXED_INTF(0x413c, 0x81b6, 10)}, /* Dell Wireless 5811e */
++ {QMI_FIXED_INTF(0x413c, 0x81d7, 1)}, /* Dell Wireless 5821e */
+ {QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)}, /* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */
+ {QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)}, /* HP lt4120 Snapdragon X5 LTE */
+ {QMI_FIXED_INTF(0x22de, 0x9061, 3)}, /* WeTelecom WPD-600N */
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 86f7196f9d91..2a58607a6aea 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -3962,7 +3962,8 @@ static int rtl8152_close(struct net_device *netdev)
+ #ifdef CONFIG_PM_SLEEP
+ unregister_pm_notifier(&tp->pm_notifier);
+ #endif
+- napi_disable(&tp->napi);
++ if (!test_bit(RTL8152_UNPLUG, &tp->flags))
++ napi_disable(&tp->napi);
+ clear_bit(WORK_ENABLE, &tp->flags);
+ usb_kill_urb(tp->intr_urb);
+ cancel_delayed_work_sync(&tp->schedule);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 8c7207535179..11a3915e92e9 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -50,6 +50,10 @@ module_param(napi_tx, bool, 0644);
+ /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */
+ #define VIRTIO_XDP_HEADROOM 256
+
++/* Separating two types of XDP xmit */
++#define VIRTIO_XDP_TX BIT(0)
++#define VIRTIO_XDP_REDIR BIT(1)
++
+ /* RX packet size EWMA. The average packet size is used to determine the packet
+ * buffer size when refilling RX rings. As the entire RX ring may be refilled
+ * at once, the weight is chosen so that the EWMA will be insensitive to short-
+@@ -547,7 +551,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ struct receive_queue *rq,
+ void *buf, void *ctx,
+ unsigned int len,
+- bool *xdp_xmit)
++ unsigned int *xdp_xmit)
+ {
+ struct sk_buff *skb;
+ struct bpf_prog *xdp_prog;
+@@ -615,14 +619,14 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ trace_xdp_exception(vi->dev, xdp_prog, act);
+ goto err_xdp;
+ }
+- *xdp_xmit = true;
++ *xdp_xmit |= VIRTIO_XDP_TX;
+ rcu_read_unlock();
+ goto xdp_xmit;
+ case XDP_REDIRECT:
+ err = xdp_do_redirect(dev, &xdp, xdp_prog);
+ if (err)
+ goto err_xdp;
+- *xdp_xmit = true;
++ *xdp_xmit |= VIRTIO_XDP_REDIR;
+ rcu_read_unlock();
+ goto xdp_xmit;
+ default:
+@@ -684,7 +688,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ void *buf,
+ void *ctx,
+ unsigned int len,
+- bool *xdp_xmit)
++ unsigned int *xdp_xmit)
+ {
+ struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
+ u16 num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
+@@ -772,7 +776,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ put_page(xdp_page);
+ goto err_xdp;
+ }
+- *xdp_xmit = true;
++ *xdp_xmit |= VIRTIO_XDP_REDIR;
+ if (unlikely(xdp_page != page))
+ put_page(page);
+ rcu_read_unlock();
+@@ -784,7 +788,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ put_page(xdp_page);
+ goto err_xdp;
+ }
+- *xdp_xmit = true;
++ *xdp_xmit |= VIRTIO_XDP_TX;
+ if (unlikely(xdp_page != page))
+ put_page(page);
+ rcu_read_unlock();
+@@ -893,7 +897,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ }
+
+ static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+- void *buf, unsigned int len, void **ctx, bool *xdp_xmit)
++ void *buf, unsigned int len, void **ctx,
++ unsigned int *xdp_xmit)
+ {
+ struct net_device *dev = vi->dev;
+ struct sk_buff *skb;
+@@ -1186,7 +1191,8 @@ static void refill_work(struct work_struct *work)
+ }
+ }
+
+-static int virtnet_receive(struct receive_queue *rq, int budget, bool *xdp_xmit)
++static int virtnet_receive(struct receive_queue *rq, int budget,
++ unsigned int *xdp_xmit)
+ {
+ struct virtnet_info *vi = rq->vq->vdev->priv;
+ unsigned int len, received = 0, bytes = 0;
+@@ -1275,7 +1281,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
+ struct virtnet_info *vi = rq->vq->vdev->priv;
+ struct send_queue *sq;
+ unsigned int received, qp;
+- bool xdp_xmit = false;
++ unsigned int xdp_xmit = 0;
+
+ virtnet_poll_cleantx(rq);
+
+@@ -1285,12 +1291,14 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
+ if (received < budget)
+ virtqueue_napi_complete(napi, rq->vq, received);
+
+- if (xdp_xmit) {
++ if (xdp_xmit & VIRTIO_XDP_REDIR)
++ xdp_do_flush_map();
++
++ if (xdp_xmit & VIRTIO_XDP_TX) {
+ qp = vi->curr_queue_pairs - vi->xdp_queue_pairs +
+ smp_processor_id();
+ sq = &vi->sq[qp];
+ virtqueue_kick(sq->vq);
+- xdp_do_flush_map();
+ }
+
+ return received;
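For reference, a minimal sketch of the flag scheme the virtio_net hunks above introduce, with hypothetical XDP_FLAG_* names standing in for VIRTIO_XDP_TX/VIRTIO_XDP_REDIR: the per-poll boolean becomes a bitmask, so the poll loop can kick the send virtqueue and flush the redirect maps independently, each exactly once and only when needed.

#include <stdio.h>

/* Hypothetical stand-ins for the two XDP completion actions. */
#define XDP_FLAG_TX	(1u << 0)	/* packets queued locally, kick the TX ring */
#define XDP_FLAG_REDIR	(1u << 1)	/* packets redirected, flush the redirect map */

/* Each receive path ORs in the work it produced instead of setting a bool. */
static void receive_one(unsigned int *xdp_flags, int redirected)
{
	*xdp_flags |= redirected ? XDP_FLAG_REDIR : XDP_FLAG_TX;
}

int main(void)
{
	unsigned int xdp_flags = 0;

	receive_one(&xdp_flags, 0);
	receive_one(&xdp_flags, 1);

	/* The poll loop can now perform each completion step exactly once. */
	if (xdp_flags & XDP_FLAG_REDIR)
		printf("flush redirect map\n");
	if (xdp_flags & XDP_FLAG_TX)
		printf("kick TX virtqueue\n");
	return 0;
}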
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index fab7a4db249e..4b170599fa5e 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -623,9 +623,7 @@ static struct sk_buff **vxlan_gro_receive(struct sock *sk,
+ flush = 0;
+
+ out:
+- skb_gro_remcsum_cleanup(skb, &grc);
+- skb->remcsum_offload = 0;
+- NAPI_GRO_CB(skb)->flush |= flush;
++ skb_gro_flush_final_remcsum(skb, pp, flush, &grc);
+
+ return pp;
+ }
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
+index 762a29cdf7ad..b23983737011 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.c
++++ b/drivers/net/wireless/realtek/rtlwifi/base.c
+@@ -485,18 +485,21 @@ static void _rtl_init_deferred_work(struct ieee80211_hw *hw)
+
+ }
+
+-void rtl_deinit_deferred_work(struct ieee80211_hw *hw)
++void rtl_deinit_deferred_work(struct ieee80211_hw *hw, bool ips_wq)
+ {
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+
+ del_timer_sync(&rtlpriv->works.watchdog_timer);
+
+- cancel_delayed_work(&rtlpriv->works.watchdog_wq);
+- cancel_delayed_work(&rtlpriv->works.ips_nic_off_wq);
+- cancel_delayed_work(&rtlpriv->works.ps_work);
+- cancel_delayed_work(&rtlpriv->works.ps_rfon_wq);
+- cancel_delayed_work(&rtlpriv->works.fwevt_wq);
+- cancel_delayed_work(&rtlpriv->works.c2hcmd_wq);
++ cancel_delayed_work_sync(&rtlpriv->works.watchdog_wq);
++ if (ips_wq)
++ cancel_delayed_work(&rtlpriv->works.ips_nic_off_wq);
++ else
++ cancel_delayed_work_sync(&rtlpriv->works.ips_nic_off_wq);
++ cancel_delayed_work_sync(&rtlpriv->works.ps_work);
++ cancel_delayed_work_sync(&rtlpriv->works.ps_rfon_wq);
++ cancel_delayed_work_sync(&rtlpriv->works.fwevt_wq);
++ cancel_delayed_work_sync(&rtlpriv->works.c2hcmd_wq);
+ }
+ EXPORT_SYMBOL_GPL(rtl_deinit_deferred_work);
+
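For reference, a schematic of the ips_wq argument added above, with cancel()/cancel_sync() standing in for cancel_delayed_work()/cancel_delayed_work_sync(): when teardown is reached from the IPS work item itself, waiting synchronously on that same work would self-deadlock, so only it is cancelled without waiting while every other work is now cancelled synchronously.

#include <stdbool.h>
#include <stdio.h>

static void cancel(const char *wq) { printf("cancel %s (no wait)\n", wq); }
static void cancel_sync(const char *wq) { printf("cancel %s (wait)\n", wq); }

static void deinit_deferred_work(bool ips_wq)
{
	cancel_sync("watchdog_wq");
	if (ips_wq)
		cancel("ips_nic_off_wq");	/* may be our own context */
	else
		cancel_sync("ips_nic_off_wq");
	cancel_sync("ps_work");
}

int main(void)
{
	deinit_deferred_work(true);	/* called from the IPS path */
	deinit_deferred_work(false);	/* normal stop/disconnect */
	return 0;
}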
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.h b/drivers/net/wireless/realtek/rtlwifi/base.h
+index acc924635818..92b8cad6b563 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.h
++++ b/drivers/net/wireless/realtek/rtlwifi/base.h
+@@ -121,7 +121,7 @@ void rtl_init_rfkill(struct ieee80211_hw *hw);
+ void rtl_deinit_rfkill(struct ieee80211_hw *hw);
+
+ void rtl_watch_dog_timer_callback(struct timer_list *t);
+-void rtl_deinit_deferred_work(struct ieee80211_hw *hw);
++void rtl_deinit_deferred_work(struct ieee80211_hw *hw, bool ips_wq);
+
+ bool rtl_action_proc(struct ieee80211_hw *hw, struct sk_buff *skb, u8 is_tx);
+ int rtlwifi_rate_mapping(struct ieee80211_hw *hw, bool isht,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/core.c b/drivers/net/wireless/realtek/rtlwifi/core.c
+index cfea57efa7f4..4bf7967590ca 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/core.c
++++ b/drivers/net/wireless/realtek/rtlwifi/core.c
+@@ -130,7 +130,6 @@ static void rtl_fw_do_work(const struct firmware *firmware, void *context,
+ firmware->size);
+ rtlpriv->rtlhal.wowlan_fwsize = firmware->size;
+ }
+- rtlpriv->rtlhal.fwsize = firmware->size;
+ release_firmware(firmware);
+ }
+
+@@ -196,7 +195,7 @@ static void rtl_op_stop(struct ieee80211_hw *hw)
+ /* reset sec info */
+ rtl_cam_reset_sec_info(hw);
+
+- rtl_deinit_deferred_work(hw);
++ rtl_deinit_deferred_work(hw, false);
+ }
+ rtlpriv->intf_ops->adapter_stop(hw);
+
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index 57bb8f049e59..4dc3e3122f5d 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -2375,7 +2375,7 @@ void rtl_pci_disconnect(struct pci_dev *pdev)
+ ieee80211_unregister_hw(hw);
+ rtlmac->mac80211_registered = 0;
+ } else {
+- rtl_deinit_deferred_work(hw);
++ rtl_deinit_deferred_work(hw, false);
+ rtlpriv->intf_ops->adapter_stop(hw);
+ }
+ rtlpriv->cfg->ops->disable_interrupt(hw);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/ps.c b/drivers/net/wireless/realtek/rtlwifi/ps.c
+index 71af24e2e051..479a4cfc245d 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/ps.c
++++ b/drivers/net/wireless/realtek/rtlwifi/ps.c
+@@ -71,7 +71,7 @@ bool rtl_ps_disable_nic(struct ieee80211_hw *hw)
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+
+ /*<1> Stop all timer */
+- rtl_deinit_deferred_work(hw);
++ rtl_deinit_deferred_work(hw, true);
+
+ /*<2> Disable Interrupt */
+ rtlpriv->cfg->ops->disable_interrupt(hw);
+@@ -292,7 +292,7 @@ void rtl_ips_nic_on(struct ieee80211_hw *hw)
+ struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));
+ enum rf_pwrstate rtstate;
+
+- cancel_delayed_work(&rtlpriv->works.ips_nic_off_wq);
++ cancel_delayed_work_sync(&rtlpriv->works.ips_nic_off_wq);
+
+ mutex_lock(&rtlpriv->locks.ips_mutex);
+ if (ppsc->inactiveps) {
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index ce3103bb8ebb..6771b2742b78 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -1132,7 +1132,7 @@ void rtl_usb_disconnect(struct usb_interface *intf)
+ ieee80211_unregister_hw(hw);
+ rtlmac->mac80211_registered = 0;
+ } else {
+- rtl_deinit_deferred_work(hw);
++ rtl_deinit_deferred_work(hw, false);
+ rtlpriv->intf_ops->adapter_stop(hw);
+ }
+ /*deinit rfkill */
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 4dd0668003e7..1d5082d30187 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -1810,7 +1810,7 @@ static int talk_to_netback(struct xenbus_device *dev,
+ err = xen_net_read_mac(dev, info->netdev->dev_addr);
+ if (err) {
+ xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
+- goto out;
++ goto out_unlocked;
+ }
+
+ rtnl_lock();
+@@ -1925,6 +1925,7 @@ static int talk_to_netback(struct xenbus_device *dev,
+ xennet_destroy_queues(info);
+ out:
+ rtnl_unlock();
++out_unlocked:
+ device_unregister(&dev->dev);
+ return err;
+ }
+@@ -1950,10 +1951,6 @@ static int xennet_connect(struct net_device *dev)
+ /* talk_to_netback() sets the correct number of queues */
+ num_queues = dev->real_num_tx_queues;
+
+- rtnl_lock();
+- netdev_update_features(dev);
+- rtnl_unlock();
+-
+ if (dev->reg_state == NETREG_UNINITIALIZED) {
+ err = register_netdev(dev);
+ if (err) {
+@@ -1963,6 +1960,10 @@ static int xennet_connect(struct net_device *dev)
+ }
+ }
+
++ rtnl_lock();
++ netdev_update_features(dev);
++ rtnl_unlock();
++
+ /*
+ * All public and private state should now be sane. Get
+ * ready to start sending and receiving packets and give the driver
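For reference, a small sketch of the two-label error-path shape the xen-netfront hunk above arrives at, with lock()/unlock() standing in for rtnl_lock()/rtnl_unlock(): a failure before the lock is taken must branch past the unlock, hence the extra out_unlocked label.

#include <stdio.h>

static void lock(void) { printf("rtnl_lock\n"); }
static void unlock(void) { printf("rtnl_unlock\n"); }

static int talk_to_backend(int fail_before_lock, int fail_after_lock)
{
	int err = 0;

	if (fail_before_lock) {
		err = -1;
		goto out_unlocked;	/* lock not held: skip the unlock */
	}
	lock();
	if (fail_after_lock) {
		err = -2;
		goto out;		/* lock held: unlock on the way out */
	}
	unlock();
	return 0;

out:
	unlock();
out_unlocked:
	printf("cleanup, err=%d\n", err);
	return err;
}

int main(void)
{
	talk_to_backend(1, 0);
	talk_to_backend(0, 1);
	talk_to_backend(0, 0);
	return 0;
}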
+diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
+index da4b457a14e0..4690814cfc51 100644
+--- a/drivers/pci/host/pci-hyperv.c
++++ b/drivers/pci/host/pci-hyperv.c
+@@ -1077,6 +1077,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ struct pci_bus *pbus;
+ struct pci_dev *pdev;
+ struct cpumask *dest;
++ unsigned long flags;
+ struct compose_comp_ctxt comp;
+ struct tran_int_desc *int_desc;
+ struct {
+@@ -1168,14 +1169,15 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ * the channel callback directly when channel->target_cpu is
+ * the current CPU. When the higher level interrupt code
+ * calls us with interrupt enabled, let's add the
+- * local_bh_disable()/enable() to avoid race.
++ * local_irq_save()/restore() to avoid race:
++ * hv_pci_onchannelcallback() can also run in tasklet.
+ */
+- local_bh_disable();
++ local_irq_save(flags);
+
+ if (hbus->hdev->channel->target_cpu == smp_processor_id())
+ hv_pci_onchannelcallback(hbus);
+
+- local_bh_enable();
++ local_irq_restore(flags);
+
+ if (hpdev->state == hv_pcichild_ejecting) {
+ dev_err_once(&hbus->hdev->device,
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mt7622.c b/drivers/pinctrl/mediatek/pinctrl-mt7622.c
+index 06e8406c4440..9dc7cf211da0 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mt7622.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mt7622.c
+@@ -1411,7 +1411,7 @@ static struct pinctrl_desc mtk_desc = {
+
+ static int mtk_gpio_get(struct gpio_chip *chip, unsigned int gpio)
+ {
+- struct mtk_pinctrl *hw = dev_get_drvdata(chip->parent);
++ struct mtk_pinctrl *hw = gpiochip_get_data(chip);
+ int value, err;
+
+ err = mtk_hw_get_value(hw, gpio, PINCTRL_PIN_REG_DI, &value);
+@@ -1423,7 +1423,7 @@ static int mtk_gpio_get(struct gpio_chip *chip, unsigned int gpio)
+
+ static void mtk_gpio_set(struct gpio_chip *chip, unsigned int gpio, int value)
+ {
+- struct mtk_pinctrl *hw = dev_get_drvdata(chip->parent);
++ struct mtk_pinctrl *hw = gpiochip_get_data(chip);
+
+ mtk_hw_set_value(hw, gpio, PINCTRL_PIN_REG_DO, !!value);
+ }
+@@ -1463,11 +1463,20 @@ static int mtk_build_gpiochip(struct mtk_pinctrl *hw, struct device_node *np)
+ if (ret < 0)
+ return ret;
+
+- ret = gpiochip_add_pin_range(chip, dev_name(hw->dev), 0, 0,
+- chip->ngpio);
+- if (ret < 0) {
+- gpiochip_remove(chip);
+- return ret;
++	/* Kept only for backward compatibility with old pinctrl nodes
++	 * that lack the "gpio-ranges" property; calling this directly
++	 * from a DeviceTree-supported pinctrl driver is DEPRECATED.
++	 * Please see Section 2.1 of
++	 * Documentation/devicetree/bindings/gpio/gpio.txt on how to
++	 * bind pinctrl and gpio drivers via the "gpio-ranges" property.
++	 */
++ if (!of_find_property(np, "gpio-ranges", NULL)) {
++ ret = gpiochip_add_pin_range(chip, dev_name(hw->dev), 0, 0,
++ chip->ngpio);
++ if (ret < 0) {
++ gpiochip_remove(chip);
++ return ret;
++ }
+ }
+
+ return 0;
+@@ -1561,7 +1570,7 @@ static int mtk_pinctrl_probe(struct platform_device *pdev)
+ err = mtk_build_groups(hw);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to build groups\n");
+- return 0;
++ return err;
+ }
+
+ /* Setup functions descriptions per SoC types */
+diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a77970.c b/drivers/pinctrl/sh-pfc/pfc-r8a77970.c
+index b1bb7263532b..049b374aa4ae 100644
+--- a/drivers/pinctrl/sh-pfc/pfc-r8a77970.c
++++ b/drivers/pinctrl/sh-pfc/pfc-r8a77970.c
+@@ -22,12 +22,12 @@
+ #include "sh_pfc.h"
+
+ #define CPU_ALL_PORT(fn, sfx) \
+- PORT_GP_CFG_22(0, fn, sfx, SH_PFC_PIN_CFG_DRIVE_STRENGTH), \
+- PORT_GP_CFG_28(1, fn, sfx, SH_PFC_PIN_CFG_DRIVE_STRENGTH), \
+- PORT_GP_CFG_17(2, fn, sfx, SH_PFC_PIN_CFG_DRIVE_STRENGTH), \
+- PORT_GP_CFG_17(3, fn, sfx, SH_PFC_PIN_CFG_DRIVE_STRENGTH), \
+- PORT_GP_CFG_6(4, fn, sfx, SH_PFC_PIN_CFG_DRIVE_STRENGTH), \
+- PORT_GP_CFG_15(5, fn, sfx, SH_PFC_PIN_CFG_DRIVE_STRENGTH)
++ PORT_GP_22(0, fn, sfx), \
++ PORT_GP_28(1, fn, sfx), \
++ PORT_GP_17(2, fn, sfx), \
++ PORT_GP_17(3, fn, sfx), \
++ PORT_GP_6(4, fn, sfx), \
++ PORT_GP_15(5, fn, sfx)
+ /*
+ * F_() : just information
+ * FM() : macro for FN_xxx / xxx_MARK
+diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
+index 78b98b3e7efa..b7f75339683e 100644
+--- a/drivers/s390/net/qeth_core.h
++++ b/drivers/s390/net/qeth_core.h
+@@ -831,6 +831,17 @@ struct qeth_trap_id {
+ /*some helper functions*/
+ #define QETH_CARD_IFNAME(card) (((card)->dev)? (card)->dev->name : "")
+
++static inline void qeth_scrub_qdio_buffer(struct qdio_buffer *buf,
++ unsigned int elements)
++{
++ unsigned int i;
++
++ for (i = 0; i < elements; i++)
++ memset(&buf->element[i], 0, sizeof(struct qdio_buffer_element));
++ buf->element[14].sflags = 0;
++ buf->element[15].sflags = 0;
++}
++
+ /**
+ * qeth_get_elements_for_range() - find number of SBALEs to cover range.
+ * @start: Start of the address range.
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index dffd820731f2..b2eebcffd502 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -73,9 +73,6 @@ static void qeth_notify_skbs(struct qeth_qdio_out_q *queue,
+ struct qeth_qdio_out_buffer *buf,
+ enum iucv_tx_notify notification);
+ static void qeth_release_skbs(struct qeth_qdio_out_buffer *buf);
+-static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue,
+- struct qeth_qdio_out_buffer *buf,
+- enum qeth_qdio_buffer_states newbufstate);
+ static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *, int);
+
+ struct workqueue_struct *qeth_wq;
+@@ -488,6 +485,7 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
+ struct qaob *aob;
+ struct qeth_qdio_out_buffer *buffer;
+ enum iucv_tx_notify notification;
++ unsigned int i;
+
+ aob = (struct qaob *) phys_to_virt(phys_aob_addr);
+ QETH_CARD_TEXT(card, 5, "haob");
+@@ -512,10 +510,18 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
+ qeth_notify_skbs(buffer->q, buffer, notification);
+
+ buffer->aob = NULL;
+- qeth_clear_output_buffer(buffer->q, buffer,
+- QETH_QDIO_BUF_HANDLED_DELAYED);
++ /* Free dangling allocations. The attached skbs are handled by
++ * qeth_cleanup_handled_pending().
++ */
++ for (i = 0;
++ i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card);
++ i++) {
++ if (aob->sba[i] && buffer->is_header[i])
++ kmem_cache_free(qeth_core_header_cache,
++ (void *) aob->sba[i]);
++ }
++ atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);
+
+- /* from here on: do not touch buffer anymore */
+ qdio_release_aob(aob);
+ }
+
+@@ -3759,6 +3765,10 @@ void qeth_qdio_output_handler(struct ccw_device *ccwdev,
+ QETH_CARD_TEXT(queue->card, 5, "aob");
+ QETH_CARD_TEXT_(queue->card, 5, "%lx",
+ virt_to_phys(buffer->aob));
++
++ /* prepare the queue slot for re-use: */
++ qeth_scrub_qdio_buffer(buffer->buffer,
++ QETH_MAX_BUFFER_ELEMENTS(card));
+ if (qeth_init_qdio_out_buf(queue, bidx)) {
+ QETH_CARD_TEXT(card, 2, "outofbuf");
+ qeth_schedule_recovery(card);
+@@ -4835,7 +4845,7 @@ int qeth_vm_request_mac(struct qeth_card *card)
+ goto out;
+ }
+
+- ccw_device_get_id(CARD_RDEV(card), &id);
++ ccw_device_get_id(CARD_DDEV(card), &id);
+ request->resp_buf_len = sizeof(*response);
+ request->resp_version = DIAG26C_VERSION2;
+ request->op_code = DIAG26C_GET_MAC;
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index b8079f2a65b3..16dc8b83ca6f 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -141,7 +141,7 @@ static int qeth_l2_send_setmac(struct qeth_card *card, __u8 *mac)
+
+ static int qeth_l2_write_mac(struct qeth_card *card, u8 *mac)
+ {
+- enum qeth_ipa_cmds cmd = is_multicast_ether_addr_64bits(mac) ?
++ enum qeth_ipa_cmds cmd = is_multicast_ether_addr(mac) ?
+ IPA_CMD_SETGMAC : IPA_CMD_SETVMAC;
+ int rc;
+
+@@ -158,7 +158,7 @@ static int qeth_l2_write_mac(struct qeth_card *card, u8 *mac)
+
+ static int qeth_l2_remove_mac(struct qeth_card *card, u8 *mac)
+ {
+- enum qeth_ipa_cmds cmd = is_multicast_ether_addr_64bits(mac) ?
++ enum qeth_ipa_cmds cmd = is_multicast_ether_addr(mac) ?
+ IPA_CMD_DELGMAC : IPA_CMD_DELVMAC;
+ int rc;
+
+@@ -523,27 +523,34 @@ static int qeth_l2_set_mac_address(struct net_device *dev, void *p)
+ return -ERESTARTSYS;
+ }
+
++ /* avoid racing against concurrent state change: */
++ if (!mutex_trylock(&card->conf_mutex))
++ return -EAGAIN;
++
+ if (!qeth_card_hw_is_reachable(card)) {
+ ether_addr_copy(dev->dev_addr, addr->sa_data);
+- return 0;
++ goto out_unlock;
+ }
+
+ /* don't register the same address twice */
+ if (ether_addr_equal_64bits(dev->dev_addr, addr->sa_data) &&
+ (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED))
+- return 0;
++ goto out_unlock;
+
+ /* add the new address, switch over, drop the old */
+ rc = qeth_l2_send_setmac(card, addr->sa_data);
+ if (rc)
+- return rc;
++ goto out_unlock;
+ ether_addr_copy(old_addr, dev->dev_addr);
+ ether_addr_copy(dev->dev_addr, addr->sa_data);
+
+ if (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED)
+ qeth_l2_remove_mac(card, old_addr);
+ card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED;
+- return 0;
++
++out_unlock:
++ mutex_unlock(&card->conf_mutex);
++ return rc;
+ }
+
+ static void qeth_promisc_to_bridge(struct qeth_card *card)
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index eeaf6739215f..dd4eb986f693 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -1219,7 +1219,8 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
+ if (ubufs)
+ vhost_net_ubuf_put_wait_and_free(ubufs);
+ err_ubufs:
+- sockfd_put(sock);
++ if (sock)
++ sockfd_put(sock);
+ err_vq:
+ mutex_unlock(&vq->mutex);
+ err:
+diff --git a/fs/autofs4/dev-ioctl.c b/fs/autofs4/dev-ioctl.c
+index 26f6b4f41ce6..00458e985cc3 100644
+--- a/fs/autofs4/dev-ioctl.c
++++ b/fs/autofs4/dev-ioctl.c
+@@ -148,6 +148,15 @@ static int validate_dev_ioctl(int cmd, struct autofs_dev_ioctl *param)
+ cmd);
+ goto out;
+ }
++ } else {
++ unsigned int inr = _IOC_NR(cmd);
++
++ if (inr == AUTOFS_DEV_IOCTL_OPENMOUNT_CMD ||
++ inr == AUTOFS_DEV_IOCTL_REQUESTER_CMD ||
++ inr == AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD) {
++ err = -EINVAL;
++ goto out;
++ }
+ }
+
+ err = 0;
+@@ -284,7 +293,8 @@ static int autofs_dev_ioctl_openmount(struct file *fp,
+ dev_t devid;
+ int err, fd;
+
+- /* param->path has already been checked */
++ /* param->path has been checked in validate_dev_ioctl() */
++
+ if (!param->openmount.devid)
+ return -EINVAL;
+
+@@ -446,10 +456,7 @@ static int autofs_dev_ioctl_requester(struct file *fp,
+ dev_t devid;
+ int err = -ENOENT;
+
+- if (param->size <= AUTOFS_DEV_IOCTL_SIZE) {
+- err = -EINVAL;
+- goto out;
+- }
++ /* param->path has been checked in validate_dev_ioctl() */
+
+ devid = sbi->sb->s_dev;
+
+@@ -534,10 +541,7 @@ static int autofs_dev_ioctl_ismountpoint(struct file *fp,
+ unsigned int devid, magic;
+ int err = -ENOENT;
+
+- if (param->size <= AUTOFS_DEV_IOCTL_SIZE) {
+- err = -EINVAL;
+- goto out;
+- }
++ /* param->path has been checked in validate_dev_ioctl() */
+
+ name = param->path;
+ type = param->ismountpoint.in.type;
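For reference, a tiny sketch of the centralized validation the autofs hunks above add, assuming made-up command numbers in place of the real _IOC_NR() values: the three commands that require a path are rejected in one place when none is supplied, which is what lets the per-handler size checks be dropped.

#include <stdio.h>

/* Hypothetical command numbers, not the real autofs ioctl layout. */
enum {
	CMD_OPENMOUNT = 0x74,
	CMD_REQUESTER = 0x7b,
	CMD_ISMOUNTPOINT = 0x7e,
};

static int validate(int inr, int has_path)
{
	if (has_path)
		return 0;	/* path-carrying requests validated elsewhere */
	/* these three require a path; reject early, in one place */
	if (inr == CMD_OPENMOUNT || inr == CMD_REQUESTER ||
	    inr == CMD_ISMOUNTPOINT)
		return -1;
	return 0;
}

int main(void)
{
	printf("openmount without path: %d\n", validate(CMD_OPENMOUNT, 0));
	printf("openmount with path: %d\n", validate(CMD_OPENMOUNT, 1));
	return 0;
}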
+diff --git a/fs/reiserfs/prints.c b/fs/reiserfs/prints.c
+index 7e288d97adcb..9fed1c05f1f4 100644
+--- a/fs/reiserfs/prints.c
++++ b/fs/reiserfs/prints.c
+@@ -76,83 +76,99 @@ static char *le_type(struct reiserfs_key *key)
+ }
+
+ /* %k */
+-static void sprintf_le_key(char *buf, struct reiserfs_key *key)
++static int scnprintf_le_key(char *buf, size_t size, struct reiserfs_key *key)
+ {
+ if (key)
+- sprintf(buf, "[%d %d %s %s]", le32_to_cpu(key->k_dir_id),
+- le32_to_cpu(key->k_objectid), le_offset(key),
+- le_type(key));
++ return scnprintf(buf, size, "[%d %d %s %s]",
++ le32_to_cpu(key->k_dir_id),
++ le32_to_cpu(key->k_objectid), le_offset(key),
++ le_type(key));
+ else
+- sprintf(buf, "[NULL]");
++ return scnprintf(buf, size, "[NULL]");
+ }
+
+ /* %K */
+-static void sprintf_cpu_key(char *buf, struct cpu_key *key)
++static int scnprintf_cpu_key(char *buf, size_t size, struct cpu_key *key)
+ {
+ if (key)
+- sprintf(buf, "[%d %d %s %s]", key->on_disk_key.k_dir_id,
+- key->on_disk_key.k_objectid, reiserfs_cpu_offset(key),
+- cpu_type(key));
++ return scnprintf(buf, size, "[%d %d %s %s]",
++ key->on_disk_key.k_dir_id,
++ key->on_disk_key.k_objectid,
++ reiserfs_cpu_offset(key), cpu_type(key));
+ else
+- sprintf(buf, "[NULL]");
++ return scnprintf(buf, size, "[NULL]");
+ }
+
+-static void sprintf_de_head(char *buf, struct reiserfs_de_head *deh)
++static int scnprintf_de_head(char *buf, size_t size,
++ struct reiserfs_de_head *deh)
+ {
+ if (deh)
+- sprintf(buf,
+- "[offset=%d dir_id=%d objectid=%d location=%d state=%04x]",
+- deh_offset(deh), deh_dir_id(deh), deh_objectid(deh),
+- deh_location(deh), deh_state(deh));
++ return scnprintf(buf, size,
++ "[offset=%d dir_id=%d objectid=%d location=%d state=%04x]",
++ deh_offset(deh), deh_dir_id(deh),
++ deh_objectid(deh), deh_location(deh),
++ deh_state(deh));
+ else
+- sprintf(buf, "[NULL]");
++ return scnprintf(buf, size, "[NULL]");
+
+ }
+
+-static void sprintf_item_head(char *buf, struct item_head *ih)
++static int scnprintf_item_head(char *buf, size_t size, struct item_head *ih)
+ {
+ if (ih) {
+- strcpy(buf,
+- (ih_version(ih) == KEY_FORMAT_3_6) ? "*3.6* " : "*3.5*");
+- sprintf_le_key(buf + strlen(buf), &(ih->ih_key));
+- sprintf(buf + strlen(buf), ", item_len %d, item_location %d, "
+- "free_space(entry_count) %d",
+- ih_item_len(ih), ih_location(ih), ih_free_space(ih));
++ char *p = buf;
++ char * const end = buf + size;
++
++ p += scnprintf(p, end - p, "%s",
++ (ih_version(ih) == KEY_FORMAT_3_6) ?
++ "*3.6* " : "*3.5*");
++
++ p += scnprintf_le_key(p, end - p, &ih->ih_key);
++
++ p += scnprintf(p, end - p,
++ ", item_len %d, item_location %d, free_space(entry_count) %d",
++ ih_item_len(ih), ih_location(ih),
++ ih_free_space(ih));
++ return p - buf;
+ } else
+- sprintf(buf, "[NULL]");
++ return scnprintf(buf, size, "[NULL]");
+ }
+
+-static void sprintf_direntry(char *buf, struct reiserfs_dir_entry *de)
++static int scnprintf_direntry(char *buf, size_t size,
++ struct reiserfs_dir_entry *de)
+ {
+ char name[20];
+
+ memcpy(name, de->de_name, de->de_namelen > 19 ? 19 : de->de_namelen);
+ name[de->de_namelen > 19 ? 19 : de->de_namelen] = 0;
+- sprintf(buf, "\"%s\"==>[%d %d]", name, de->de_dir_id, de->de_objectid);
++ return scnprintf(buf, size, "\"%s\"==>[%d %d]",
++ name, de->de_dir_id, de->de_objectid);
+ }
+
+-static void sprintf_block_head(char *buf, struct buffer_head *bh)
++static int scnprintf_block_head(char *buf, size_t size, struct buffer_head *bh)
+ {
+- sprintf(buf, "level=%d, nr_items=%d, free_space=%d rdkey ",
+- B_LEVEL(bh), B_NR_ITEMS(bh), B_FREE_SPACE(bh));
++ return scnprintf(buf, size,
++ "level=%d, nr_items=%d, free_space=%d rdkey ",
++ B_LEVEL(bh), B_NR_ITEMS(bh), B_FREE_SPACE(bh));
+ }
+
+-static void sprintf_buffer_head(char *buf, struct buffer_head *bh)
++static int scnprintf_buffer_head(char *buf, size_t size, struct buffer_head *bh)
+ {
+- sprintf(buf,
+- "dev %pg, size %zd, blocknr %llu, count %d, state 0x%lx, page %p, (%s, %s, %s)",
+- bh->b_bdev, bh->b_size,
+- (unsigned long long)bh->b_blocknr, atomic_read(&(bh->b_count)),
+- bh->b_state, bh->b_page,
+- buffer_uptodate(bh) ? "UPTODATE" : "!UPTODATE",
+- buffer_dirty(bh) ? "DIRTY" : "CLEAN",
+- buffer_locked(bh) ? "LOCKED" : "UNLOCKED");
++ return scnprintf(buf, size,
++ "dev %pg, size %zd, blocknr %llu, count %d, state 0x%lx, page %p, (%s, %s, %s)",
++ bh->b_bdev, bh->b_size,
++ (unsigned long long)bh->b_blocknr,
++ atomic_read(&(bh->b_count)),
++ bh->b_state, bh->b_page,
++ buffer_uptodate(bh) ? "UPTODATE" : "!UPTODATE",
++ buffer_dirty(bh) ? "DIRTY" : "CLEAN",
++ buffer_locked(bh) ? "LOCKED" : "UNLOCKED");
+ }
+
+-static void sprintf_disk_child(char *buf, struct disk_child *dc)
++static int scnprintf_disk_child(char *buf, size_t size, struct disk_child *dc)
+ {
+- sprintf(buf, "[dc_number=%d, dc_size=%u]", dc_block_number(dc),
+- dc_size(dc));
++ return scnprintf(buf, size, "[dc_number=%d, dc_size=%u]",
++ dc_block_number(dc), dc_size(dc));
+ }
+
+ static char *is_there_reiserfs_struct(char *fmt, int *what)
+@@ -189,55 +205,60 @@ static void prepare_error_buf(const char *fmt, va_list args)
+ char *fmt1 = fmt_buf;
+ char *k;
+ char *p = error_buf;
++ char * const end = &error_buf[sizeof(error_buf)];
+ int what;
+
+ spin_lock(&error_lock);
+
+- strcpy(fmt1, fmt);
++ if (WARN_ON(strscpy(fmt_buf, fmt, sizeof(fmt_buf)) < 0)) {
++ strscpy(error_buf, "format string too long", end - error_buf);
++ goto out_unlock;
++ }
+
+ while ((k = is_there_reiserfs_struct(fmt1, &what)) != NULL) {
+ *k = 0;
+
+- p += vsprintf(p, fmt1, args);
++ p += vscnprintf(p, end - p, fmt1, args);
+
+ switch (what) {
+ case 'k':
+- sprintf_le_key(p, va_arg(args, struct reiserfs_key *));
++ p += scnprintf_le_key(p, end - p,
++ va_arg(args, struct reiserfs_key *));
+ break;
+ case 'K':
+- sprintf_cpu_key(p, va_arg(args, struct cpu_key *));
++ p += scnprintf_cpu_key(p, end - p,
++ va_arg(args, struct cpu_key *));
+ break;
+ case 'h':
+- sprintf_item_head(p, va_arg(args, struct item_head *));
++ p += scnprintf_item_head(p, end - p,
++ va_arg(args, struct item_head *));
+ break;
+ case 't':
+- sprintf_direntry(p,
+- va_arg(args,
+- struct reiserfs_dir_entry *));
++ p += scnprintf_direntry(p, end - p,
++ va_arg(args, struct reiserfs_dir_entry *));
+ break;
+ case 'y':
+- sprintf_disk_child(p,
+- va_arg(args, struct disk_child *));
++ p += scnprintf_disk_child(p, end - p,
++ va_arg(args, struct disk_child *));
+ break;
+ case 'z':
+- sprintf_block_head(p,
+- va_arg(args, struct buffer_head *));
++ p += scnprintf_block_head(p, end - p,
++ va_arg(args, struct buffer_head *));
+ break;
+ case 'b':
+- sprintf_buffer_head(p,
+- va_arg(args, struct buffer_head *));
++ p += scnprintf_buffer_head(p, end - p,
++ va_arg(args, struct buffer_head *));
+ break;
+ case 'a':
+- sprintf_de_head(p,
+- va_arg(args,
+- struct reiserfs_de_head *));
++ p += scnprintf_de_head(p, end - p,
++ va_arg(args, struct reiserfs_de_head *));
+ break;
+ }
+
+- p += strlen(p);
+ fmt1 = k + 2;
+ }
+- vsprintf(p, fmt1, args);
++ p += vscnprintf(p, end - p, fmt1, args);
++out_unlock:
+ spin_unlock(&error_lock);
+
+ }
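For reference, a userspace sketch of the bounded-append pattern the reiserfs conversion above uses. scnprintf() is kernel-only, so a small analog is built on vsnprintf(), which (unlike scnprintf()) returns the length it would have written, hence the clamp; scnprintf_like() is a hypothetical name.

#include <stdarg.h>
#include <stdio.h>

/*
 * Analog of the kernel's scnprintf(): returns the number of characters
 * actually stored, so "p += ..." can never advance past the buffer end.
 */
static int scnprintf_like(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int n;

	if (size == 0)
		return 0;
	va_start(args, fmt);
	n = vsnprintf(buf, size, fmt, args);
	va_end(args);
	if (n < 0)
		return 0;
	return (size_t)n >= size ? (int)(size - 1) : n;
}

int main(void)
{
	char buf[32];
	char *p = buf;
	char * const end = buf + sizeof(buf);

	/* The chained-append pattern from the patch: always bounded. */
	p += scnprintf_like(p, end - p, "[%d %d]", 4, 2);
	p += scnprintf_like(p, end - p, ", item_len %d", 4096);
	printf("%s\n", buf);
	return 0;
}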
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index a031897fca76..ca1d2cc2cdfa 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -80,6 +80,11 @@
+ ARM_SMCCC_SMC_32, \
+ 0, 0x8000)
+
++#define ARM_SMCCC_ARCH_WORKAROUND_2 \
++ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
++ ARM_SMCCC_SMC_32, \
++ 0, 0x7fff)
++
+ #ifndef __ASSEMBLY__
+
+ #include <linux/linkage.h>
+@@ -291,5 +296,10 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
+ */
+ #define arm_smccc_1_1_hvc(...) __arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
+
++/* Return codes defined in ARM DEN 0070A */
++#define SMCCC_RET_SUCCESS 0
++#define SMCCC_RET_NOT_SUPPORTED -1
++#define SMCCC_RET_NOT_REQUIRED -2
++
+ #endif /*__ASSEMBLY__*/
+ #endif /*__LINUX_ARM_SMCCC_H*/
+diff --git a/include/linux/atmdev.h b/include/linux/atmdev.h
+index 0c27515d2cf6..8124815eb121 100644
+--- a/include/linux/atmdev.h
++++ b/include/linux/atmdev.h
+@@ -214,6 +214,7 @@ struct atmphy_ops {
+ struct atm_skb_data {
+ struct atm_vcc *vcc; /* ATM VCC */
+ unsigned long atm_options; /* ATM layer options */
++ unsigned int acct_truesize; /* truesize accounted to vcc */
+ };
+
+ #define VCC_HTABLE_SIZE 32
+@@ -241,6 +242,20 @@ void vcc_insert_socket(struct sock *sk);
+
+ void atm_dev_release_vccs(struct atm_dev *dev);
+
++static inline void atm_account_tx(struct atm_vcc *vcc, struct sk_buff *skb)
++{
++ /*
++	 * Because ATM skbs may not belong to a sock (and we don't
++	 * necessarily want them to), skb->truesize may be adjusted,
++	 * escaping the hack in pskb_expand_head() which avoids
++	 * doing so in some cases. So stash the value of truesize
++	 * at the time we accounted it, so that atm_pop_raw() can use
++	 * that value later, in case it changes.
++ */
++ refcount_add(skb->truesize, &sk_atm(vcc)->sk_wmem_alloc);
++ ATM_SKB(skb)->acct_truesize = skb->truesize;
++ ATM_SKB(skb)->atm_options = vcc->atm_options;
++}
+
+ static inline void atm_force_charge(struct atm_vcc *vcc,int truesize)
+ {
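For reference, a miniature of the charge/uncharge pairing that atm_account_tx() establishes, with simplified stand-in types: the value charged to the socket is stashed so the pop side stays balanced even if the truesize estimate changes while the buffer is in flight.

#include <assert.h>
#include <stdio.h>

struct buf {
	unsigned int truesize;		/* may be re-estimated in flight */
	unsigned int acct_truesize;	/* value actually charged */
};

static unsigned int wmem_alloc;	/* stand-in for sk->sk_wmem_alloc */

static void account_tx(struct buf *b)
{
	wmem_alloc += b->truesize;
	b->acct_truesize = b->truesize;	/* remember what we charged */
}

static void pop(struct buf *b)
{
	/* Uncharge the stashed value, not the possibly changed truesize. */
	wmem_alloc -= b->acct_truesize;
}

int main(void)
{
	struct buf b = { .truesize = 512 };

	account_tx(&b);
	b.truesize = 768;	/* e.g. head reallocation grew the estimate */
	pop(&b);
	assert(wmem_alloc == 0);	/* accounting stays balanced */
	printf("balanced: %u\n", wmem_alloc);
	return 0;
}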
+diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
+index 0bd432a4d7bd..24251762c20c 100644
+--- a/include/linux/backing-dev-defs.h
++++ b/include/linux/backing-dev-defs.h
+@@ -22,7 +22,6 @@ struct dentry;
+ */
+ enum wb_state {
+ WB_registered, /* bdi_register() was done */
+- WB_shutting_down, /* wb_shutdown() in progress */
+ WB_writeback_running, /* Writeback is in progress */
+ WB_has_dirty_io, /* Dirty inodes on ->b_{dirty|io|more_io} */
+ WB_start_all, /* nr_pages == 0 (all) work pending */
+@@ -189,6 +188,7 @@ struct backing_dev_info {
+ #ifdef CONFIG_CGROUP_WRITEBACK
+ struct radix_tree_root cgwb_tree; /* radix tree of active cgroup wbs */
+ struct rb_root cgwb_congested_tree; /* their congested states */
++ struct mutex cgwb_release_mutex; /* protect shutdown of wb structs */
+ #else
+ struct bdi_writeback_congested *wb_congested;
+ #endif
+diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
+index 17b18b91ebac..1602bf4ab4cd 100644
+--- a/include/linux/blk_types.h
++++ b/include/linux/blk_types.h
+@@ -186,6 +186,8 @@ struct bio {
+ * throttling rules. Don't do it again. */
+ #define BIO_TRACE_COMPLETION 10 /* bio_endio() should trace the final completion
+ * of this bio. */
++#define BIO_QUEUE_ENTERED 11 /* can use blk_queue_enter_live() */
++
+ /* See BVEC_POOL_OFFSET below before adding new flags */
+
+ /*
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index b4bf73f5e38f..f1fa516bcf51 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -65,6 +65,18 @@
+ #define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
+ #endif
+
++/*
++ * Feature detection for gnu_inline (gnu89 extern inline semantics). Either
++ * __GNUC_STDC_INLINE__ is defined (not using gnu89 extern inline semantics,
++ * and we opt in to the gnu89 semantics), or __GNUC_STDC_INLINE__ is not
++ * defined so the gnu89 semantics are the default.
++ */
++#ifdef __GNUC_STDC_INLINE__
++# define __gnu_inline __attribute__((gnu_inline))
++#else
++# define __gnu_inline
++#endif
++
+ /*
+ * Force always-inline if the user requests it so via the .config,
+ * or if gcc is too old.
+@@ -72,19 +84,22 @@
+ * -Wunused-function. This turns out to avoid the need for complex #ifdef
+ * directives. Suppress the warning in clang as well by using "unused"
+ * function attribute, which is redundant but not harmful for gcc.
++ * Prefer gnu_inline, so that extern inline functions do not emit an
++ * externally visible function. This makes extern inline behave as per gnu89
++ * semantics rather than c99. This prevents multiple symbol definition errors
++ * of extern inline functions at link time.
++ * A lot of inline functions can cause havoc with function tracing.
+ */
+ #if !defined(CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING) || \
+ !defined(CONFIG_OPTIMIZE_INLINING) || (__GNUC__ < 4)
+-#define inline inline __attribute__((always_inline,unused)) notrace
+-#define __inline__ __inline__ __attribute__((always_inline,unused)) notrace
+-#define __inline __inline __attribute__((always_inline,unused)) notrace
++#define inline \
++ inline __attribute__((always_inline, unused)) notrace __gnu_inline
+ #else
+-/* A lot of inline functions can cause havoc with function tracing */
+-#define inline inline __attribute__((unused)) notrace
+-#define __inline__ __inline__ __attribute__((unused)) notrace
+-#define __inline __inline __attribute__((unused)) notrace
++#define inline inline __attribute__((unused)) notrace __gnu_inline
+ #endif
+
++#define __inline__ inline
++#define __inline inline
+ #define __always_inline inline __attribute__((always_inline))
+ #define noinline __attribute__((noinline))
+
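For reference, a self-contained sketch mirroring the new definition of "inline" above; my_inline is a hypothetical name. always_inline substitutes the body even at -O0, and gnu_inline selects gnu89 semantics, which is what stops C99 "extern inline" definitions in widely included headers from each emitting an externally visible symbol and colliding at link time.

#include <stdio.h>

#define my_inline inline __attribute__((always_inline, unused, gnu_inline))

static my_inline int add2(int x)
{
	return x + 2;
}

int main(void)
{
	printf("%d\n", add2(40));
	return 0;
}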
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index fc4e8f91b03d..b49658f9001e 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -453,15 +453,16 @@ struct sock_fprog_kern {
+ };
+
+ struct bpf_binary_header {
+- unsigned int pages;
+- u8 image[];
++ u32 pages;
++ /* Some arches need word alignment for their instructions */
++ u8 image[] __aligned(4);
+ };
+
+ struct bpf_prog {
+ u16 pages; /* Number of allocated pages */
+ u16 jited:1, /* Is our filter JIT'ed? */
+ jit_requested:1,/* archs need to JIT the prog */
+- locked:1, /* Program image locked? */
++ undo_set_mem:1, /* Passed set_memory_ro() checkpoint */
+ gpl_compatible:1, /* Is filter GPL compatible? */
+ cb_access:1, /* Is control block accessed? */
+ dst_needed:1, /* Do we need dst entry? */
+@@ -644,50 +645,27 @@ bpf_ctx_narrow_access_ok(u32 off, u32 size, const u32 size_default)
+
+ #define bpf_classic_proglen(fprog) (fprog->len * sizeof(fprog->filter[0]))
+
+-#ifdef CONFIG_ARCH_HAS_SET_MEMORY
+-static inline void bpf_prog_lock_ro(struct bpf_prog *fp)
+-{
+- fp->locked = 1;
+- WARN_ON_ONCE(set_memory_ro((unsigned long)fp, fp->pages));
+-}
+-
+-static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
+-{
+- if (fp->locked) {
+- WARN_ON_ONCE(set_memory_rw((unsigned long)fp, fp->pages));
+- /* In case set_memory_rw() fails, we want to be the first
+- * to crash here instead of some random place later on.
+- */
+- fp->locked = 0;
+- }
+-}
+-
+-static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
+-{
+- WARN_ON_ONCE(set_memory_ro((unsigned long)hdr, hdr->pages));
+-}
+-
+-static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
+-{
+- WARN_ON_ONCE(set_memory_rw((unsigned long)hdr, hdr->pages));
+-}
+-#else
+ static inline void bpf_prog_lock_ro(struct bpf_prog *fp)
+ {
++ fp->undo_set_mem = 1;
++ set_memory_ro((unsigned long)fp, fp->pages);
+ }
+
+ static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
+ {
++ if (fp->undo_set_mem)
++ set_memory_rw((unsigned long)fp, fp->pages);
+ }
+
+ static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
+ {
++ set_memory_ro((unsigned long)hdr, hdr->pages);
+ }
+
+ static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
+ {
++ set_memory_rw((unsigned long)hdr, hdr->pages);
+ }
+-#endif /* CONFIG_ARCH_HAS_SET_MEMORY */
+
+ static inline struct bpf_binary_header *
+ bpf_jit_binary_hdr(const struct bpf_prog *fp)
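For reference, a schematic of the undo_set_mem flag introduced above, with set_ro()/set_rw() standing in for set_memory_ro()/set_memory_rw(): the protection toggles are now issued unconditionally on arches that provide them, and the flag only guards the undo path, so an image is never flipped back to RW unless it was actually made RO.

#include <stdbool.h>
#include <stdio.h>

struct prog {
	bool undo_set_mem;
};

static void set_ro(void) { printf("set_memory_ro\n"); }
static void set_rw(void) { printf("set_memory_rw\n"); }

static void prog_lock_ro(struct prog *p)
{
	p->undo_set_mem = true;
	set_ro();
}

static void prog_unlock_ro(struct prog *p)
{
	if (p->undo_set_mem)
		set_rw();
}

int main(void)
{
	struct prog fresh = { 0 }, locked = { 0 };

	prog_lock_ro(&locked);
	prog_unlock_ro(&locked);	/* undone */
	prog_unlock_ro(&fresh);		/* never locked: no-op */
	return 0;
}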
+diff --git a/include/linux/mlx5/eswitch.h b/include/linux/mlx5/eswitch.h
+index d3c9db492b30..fab5121ffb8f 100644
+--- a/include/linux/mlx5/eswitch.h
++++ b/include/linux/mlx5/eswitch.h
+@@ -8,6 +8,8 @@
+
+ #include <linux/mlx5/driver.h>
+
++#define MLX5_ESWITCH_MANAGER(mdev) MLX5_CAP_GEN(mdev, eswitch_manager)
++
+ enum {
+ SRIOV_NONE,
+ SRIOV_LEGACY,
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 1aad455538f4..5b662ea2e32a 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -905,7 +905,7 @@ struct mlx5_ifc_cmd_hca_cap_bits {
+ u8 vnic_env_queue_counters[0x1];
+ u8 ets[0x1];
+ u8 nic_flow_table[0x1];
+- u8 eswitch_flow_table[0x1];
++ u8 eswitch_manager[0x1];
+ u8 device_memory[0x1];
+ u8 mcam_reg[0x1];
+ u8 pcam_reg[0x1];
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index cf44503ea81a..5ad916d31471 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2735,11 +2735,31 @@ static inline void skb_gro_flush_final(struct sk_buff *skb, struct sk_buff **pp,
+ if (PTR_ERR(pp) != -EINPROGRESS)
+ NAPI_GRO_CB(skb)->flush |= flush;
+ }
++static inline void skb_gro_flush_final_remcsum(struct sk_buff *skb,
++ struct sk_buff **pp,
++ int flush,
++ struct gro_remcsum *grc)
++{
++ if (PTR_ERR(pp) != -EINPROGRESS) {
++ NAPI_GRO_CB(skb)->flush |= flush;
++ skb_gro_remcsum_cleanup(skb, grc);
++ skb->remcsum_offload = 0;
++ }
++}
+ #else
+ static inline void skb_gro_flush_final(struct sk_buff *skb, struct sk_buff **pp, int flush)
+ {
+ NAPI_GRO_CB(skb)->flush |= flush;
+ }
++static inline void skb_gro_flush_final_remcsum(struct sk_buff *skb,
++ struct sk_buff **pp,
++ int flush,
++ struct gro_remcsum *grc)
++{
++ NAPI_GRO_CB(skb)->flush |= flush;
++ skb_gro_remcsum_cleanup(skb, grc);
++ skb->remcsum_offload = 0;
++}
+ #endif
+
+ static inline int dev_hard_header(struct sk_buff *skb, struct net_device *dev,
+diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
+index e828d31be5da..3b4fbf690957 100644
+--- a/include/net/pkt_cls.h
++++ b/include/net/pkt_cls.h
+@@ -111,6 +111,11 @@ void tcf_block_put_ext(struct tcf_block *block, struct Qdisc *q,
+ {
+ }
+
++static inline bool tcf_block_shared(struct tcf_block *block)
++{
++ return false;
++}
++
+ static inline struct Qdisc *tcf_block_q(struct tcf_block *block)
+ {
+ return NULL;
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 6ef6746a7871..78509e3f68da 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1513,6 +1513,17 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
+ return 0;
+ }
+
++static void bpf_prog_select_func(struct bpf_prog *fp)
++{
++#ifndef CONFIG_BPF_JIT_ALWAYS_ON
++ u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
++
++ fp->bpf_func = interpreters[(round_up(stack_depth, 32) / 32) - 1];
++#else
++ fp->bpf_func = __bpf_prog_ret0_warn;
++#endif
++}
++
+ /**
+ * bpf_prog_select_runtime - select exec runtime for BPF program
+ * @fp: bpf_prog populated with internal BPF program
+@@ -1523,13 +1534,13 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
+ */
+ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+ {
+-#ifndef CONFIG_BPF_JIT_ALWAYS_ON
+- u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
++ /* In case of BPF to BPF calls, verifier did all the prep
++ * work with regards to JITing, etc.
++ */
++ if (fp->bpf_func)
++ goto finalize;
+
+- fp->bpf_func = interpreters[(round_up(stack_depth, 32) / 32) - 1];
+-#else
+- fp->bpf_func = __bpf_prog_ret0_warn;
+-#endif
++ bpf_prog_select_func(fp);
+
+ /* eBPF JITs can rewrite the program in case constant
+ * blinding is active. However, in case of error during
+@@ -1550,6 +1561,8 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+ if (*err)
+ return fp;
+ }
++
++finalize:
+ bpf_prog_lock_ro(fp);
+
+ /* The tail call compatibility check can only be done at
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index 95a84b2f10ce..fc7ee4357381 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -112,6 +112,7 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
+ static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags);
++static void bpf_tcp_close(struct sock *sk, long timeout);
+
+ static inline struct smap_psock *smap_psock_sk(const struct sock *sk)
+ {
+@@ -133,7 +134,42 @@ static bool bpf_tcp_stream_read(const struct sock *sk)
+ return !empty;
+ }
+
+-static struct proto tcp_bpf_proto;
++enum {
++ SOCKMAP_IPV4,
++ SOCKMAP_IPV6,
++ SOCKMAP_NUM_PROTS,
++};
++
++enum {
++ SOCKMAP_BASE,
++ SOCKMAP_TX,
++ SOCKMAP_NUM_CONFIGS,
++};
++
++static struct proto *saved_tcpv6_prot __read_mostly;
++static DEFINE_SPINLOCK(tcpv6_prot_lock);
++static struct proto bpf_tcp_prots[SOCKMAP_NUM_PROTS][SOCKMAP_NUM_CONFIGS];
++static void build_protos(struct proto prot[SOCKMAP_NUM_CONFIGS],
++ struct proto *base)
++{
++ prot[SOCKMAP_BASE] = *base;
++ prot[SOCKMAP_BASE].close = bpf_tcp_close;
++ prot[SOCKMAP_BASE].recvmsg = bpf_tcp_recvmsg;
++ prot[SOCKMAP_BASE].stream_memory_read = bpf_tcp_stream_read;
++
++ prot[SOCKMAP_TX] = prot[SOCKMAP_BASE];
++ prot[SOCKMAP_TX].sendmsg = bpf_tcp_sendmsg;
++ prot[SOCKMAP_TX].sendpage = bpf_tcp_sendpage;
++}
++
++static void update_sk_prot(struct sock *sk, struct smap_psock *psock)
++{
++ int family = sk->sk_family == AF_INET6 ? SOCKMAP_IPV6 : SOCKMAP_IPV4;
++ int conf = psock->bpf_tx_msg ? SOCKMAP_TX : SOCKMAP_BASE;
++
++ sk->sk_prot = &bpf_tcp_prots[family][conf];
++}
++
+ static int bpf_tcp_init(struct sock *sk)
+ {
+ struct smap_psock *psock;
+@@ -153,14 +189,17 @@ static int bpf_tcp_init(struct sock *sk)
+ psock->save_close = sk->sk_prot->close;
+ psock->sk_proto = sk->sk_prot;
+
+- if (psock->bpf_tx_msg) {
+- tcp_bpf_proto.sendmsg = bpf_tcp_sendmsg;
+- tcp_bpf_proto.sendpage = bpf_tcp_sendpage;
+- tcp_bpf_proto.recvmsg = bpf_tcp_recvmsg;
+- tcp_bpf_proto.stream_memory_read = bpf_tcp_stream_read;
++ /* Build IPv6 sockmap whenever the address of tcpv6_prot changes */
++ if (sk->sk_family == AF_INET6 &&
++ unlikely(sk->sk_prot != smp_load_acquire(&saved_tcpv6_prot))) {
++ spin_lock_bh(&tcpv6_prot_lock);
++ if (likely(sk->sk_prot != saved_tcpv6_prot)) {
++ build_protos(bpf_tcp_prots[SOCKMAP_IPV6], sk->sk_prot);
++ smp_store_release(&saved_tcpv6_prot, sk->sk_prot);
++ }
++ spin_unlock_bh(&tcpv6_prot_lock);
+ }
+-
+- sk->sk_prot = &tcp_bpf_proto;
++ update_sk_prot(sk, psock);
+ rcu_read_unlock();
+ return 0;
+ }
+@@ -432,7 +471,8 @@ static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
+ while (sg[i].length) {
+ free += sg[i].length;
+ sk_mem_uncharge(sk, sg[i].length);
+- put_page(sg_page(&sg[i]));
++ if (!md->skb)
++ put_page(sg_page(&sg[i]));
+ sg[i].length = 0;
+ sg[i].page_link = 0;
+ sg[i].offset = 0;
+@@ -441,6 +481,8 @@ static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
+ if (i == MAX_SKB_FRAGS)
+ i = 0;
+ }
++ if (md->skb)
++ consume_skb(md->skb);
+
+ return free;
+ }
+@@ -1070,8 +1112,7 @@ static void bpf_tcp_msg_add(struct smap_psock *psock,
+
+ static int bpf_tcp_ulp_register(void)
+ {
+- tcp_bpf_proto = tcp_prot;
+- tcp_bpf_proto.close = bpf_tcp_close;
++ build_protos(bpf_tcp_prots[SOCKMAP_IPV4], &tcp_prot);
+ /* Once BPF TX ULP is registered it is never unregistered. It
+ * will be in the ULP list for the lifetime of the system. Doing
+ * duplicate registers is not a problem.
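For reference, a small sketch of the per-(family, config) proto table built by the sockmap hunks above, using simplified stand-ins: each variant is constructed once from a base prototype instead of mutating a single shared struct proto at runtime.

#include <stdio.h>

enum { FAM_V4, FAM_V6, NUM_FAMS };
enum { CFG_BASE, CFG_TX, NUM_CFGS };

struct proto {
	const char *sendmsg;
};

static struct proto protos[NUM_FAMS][NUM_CFGS];

static void build_protos(struct proto prot[NUM_CFGS], const struct proto *base)
{
	prot[CFG_BASE] = *base;
	prot[CFG_TX] = prot[CFG_BASE];
	prot[CFG_TX].sendmsg = "bpf_tcp_sendmsg";	/* TX-only override */
}

int main(void)
{
	struct proto tcp_base = { .sendmsg = "tcp_sendmsg" };

	build_protos(protos[FAM_V4], &tcp_base);
	printf("v4 base: %s, v4 tx: %s\n",
	       protos[FAM_V4][CFG_BASE].sendmsg,
	       protos[FAM_V4][CFG_TX].sendmsg);
	return 0;
}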
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 016ef9025827..74fa60b4b438 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1328,9 +1328,7 @@ static int bpf_prog_load(union bpf_attr *attr)
+ if (err < 0)
+ goto free_used_maps;
+
+- /* eBPF program is ready to be JITed */
+- if (!prog->bpf_func)
+- prog = bpf_prog_select_runtime(prog, &err);
++ prog = bpf_prog_select_runtime(prog, &err);
+ if (err < 0)
+ goto free_used_maps;
+
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 56212edd6f23..1b586f31cbfd 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5349,6 +5349,10 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ if (insn->code != (BPF_JMP | BPF_CALL) ||
+ insn->src_reg != BPF_PSEUDO_CALL)
+ continue;
++		/* Upon error here we cannot fall back to the interpreter
++		 * but need a hard reject of the program. Thus -EFAULT is
++		 * propagated in any case.
++ */
+ subprog = find_subprog(env, i + insn->imm + 1);
+ if (subprog < 0) {
+ WARN_ONCE(1, "verifier bug. No program starts at insn %d\n",
+@@ -5369,7 +5373,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+
+ func = kzalloc(sizeof(prog) * (env->subprog_cnt + 1), GFP_KERNEL);
+ if (!func)
+- return -ENOMEM;
++ goto out_undo_insn;
+
+ for (i = 0; i <= env->subprog_cnt; i++) {
+ subprog_start = subprog_end;
+@@ -5424,7 +5428,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ tmp = bpf_int_jit_compile(func[i]);
+ if (tmp != func[i] || func[i]->bpf_func != old_bpf_func) {
+ verbose(env, "JIT doesn't support bpf-to-bpf calls\n");
+- err = -EFAULT;
++ err = -ENOTSUPP;
+ goto out_free;
+ }
+ cond_resched();
+@@ -5466,6 +5470,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ if (func[i])
+ bpf_jit_free(func[i]);
+ kfree(func);
++out_undo_insn:
+ /* cleanup main prog to be interpreted */
+ prog->jit_requested = 0;
+ for (i = 0, insn = prog->insnsi; i < prog->len; i++, insn++) {
+@@ -5492,6 +5497,8 @@ static int fixup_call_args(struct bpf_verifier_env *env)
+ err = jit_subprogs(env);
+ if (err == 0)
+ return 0;
++ if (err == -EFAULT)
++ return err;
+ }
+ #ifndef CONFIG_BPF_JIT_ALWAYS_ON
+ for (i = 0; i < prog->len; i++, insn++) {
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index 8fe3ebd6ac00..048d0651aa98 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -359,15 +359,8 @@ static void wb_shutdown(struct bdi_writeback *wb)
+ spin_lock_bh(&wb->work_lock);
+ if (!test_and_clear_bit(WB_registered, &wb->state)) {
+ spin_unlock_bh(&wb->work_lock);
+- /*
+- * Wait for wb shutdown to finish if someone else is just
+- * running wb_shutdown(). Otherwise we could proceed to wb /
+- * bdi destruction before wb_shutdown() is finished.
+- */
+- wait_on_bit(&wb->state, WB_shutting_down, TASK_UNINTERRUPTIBLE);
+ return;
+ }
+- set_bit(WB_shutting_down, &wb->state);
+ spin_unlock_bh(&wb->work_lock);
+
+ cgwb_remove_from_bdi_list(wb);
+@@ -379,12 +372,6 @@ static void wb_shutdown(struct bdi_writeback *wb)
+ mod_delayed_work(bdi_wq, &wb->dwork, 0);
+ flush_delayed_work(&wb->dwork);
+ WARN_ON(!list_empty(&wb->work_list));
+- /*
+- * Make sure bit gets cleared after shutdown is finished. Matches with
+- * the barrier provided by test_and_clear_bit() above.
+- */
+- smp_wmb();
+- clear_and_wake_up_bit(WB_shutting_down, &wb->state);
+ }
+
+ static void wb_exit(struct bdi_writeback *wb)
+@@ -508,10 +495,12 @@ static void cgwb_release_workfn(struct work_struct *work)
+ struct bdi_writeback *wb = container_of(work, struct bdi_writeback,
+ release_work);
+
++ mutex_lock(&wb->bdi->cgwb_release_mutex);
+ wb_shutdown(wb);
+
+ css_put(wb->memcg_css);
+ css_put(wb->blkcg_css);
++ mutex_unlock(&wb->bdi->cgwb_release_mutex);
+
+ fprop_local_destroy_percpu(&wb->memcg_completions);
+ percpu_ref_exit(&wb->refcnt);
+@@ -697,6 +686,7 @@ static int cgwb_bdi_init(struct backing_dev_info *bdi)
+
+ INIT_RADIX_TREE(&bdi->cgwb_tree, GFP_ATOMIC);
+ bdi->cgwb_congested_tree = RB_ROOT;
++ mutex_init(&bdi->cgwb_release_mutex);
+
+ ret = wb_init(&bdi->wb, bdi, 1, GFP_KERNEL);
+ if (!ret) {
+@@ -717,7 +707,10 @@ static void cgwb_bdi_unregister(struct backing_dev_info *bdi)
+ spin_lock_irq(&cgwb_lock);
+ radix_tree_for_each_slot(slot, &bdi->cgwb_tree, &iter, 0)
+ cgwb_kill(*slot);
++ spin_unlock_irq(&cgwb_lock);
+
++ mutex_lock(&bdi->cgwb_release_mutex);
++ spin_lock_irq(&cgwb_lock);
+ while (!list_empty(&bdi->wb_list)) {
+ wb = list_first_entry(&bdi->wb_list, struct bdi_writeback,
+ bdi_node);
+@@ -726,6 +719,7 @@ static void cgwb_bdi_unregister(struct backing_dev_info *bdi)
+ spin_lock_irq(&cgwb_lock);
+ }
+ spin_unlock_irq(&cgwb_lock);
++ mutex_unlock(&bdi->cgwb_release_mutex);
+ }
+
+ /**
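For reference, a heavily simplified pthread-based schematic of the cgwb_release_mutex scheme above: per-wb release work and the bdi unregister path may run concurrently, and the mutex ensures teardown proceeds only after any in-flight release work finishes, replacing the removed WB_shutting_down bit-wait. Function names are stand-ins for cgwb_release_workfn()/cgwb_bdi_unregister().

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t release_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *release_workfn(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&release_mutex);
	printf("release work: shutting down one wb\n");
	pthread_mutex_unlock(&release_mutex);
	return NULL;
}

static void bdi_unregister(void)
{
	/* all writebacks were already killed/unhashed (not shown) */
	pthread_mutex_lock(&release_mutex);
	printf("unregister: no release work can still be running\n");
	pthread_mutex_unlock(&release_mutex);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, release_workfn, NULL);
	bdi_unregister();	/* serialized against the work by the mutex */
	pthread_join(t, NULL);
	return 0;
}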
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index 5505ee6ebdbe..d3a5ec02e64c 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -688,7 +688,7 @@ static struct sk_buff **vlan_gro_receive(struct sk_buff **head,
+ out_unlock:
+ rcu_read_unlock();
+ out:
+- NAPI_GRO_CB(skb)->flush |= flush;
++ skb_gro_flush_final(skb, pp, flush);
+
+ return pp;
+ }
+diff --git a/net/atm/br2684.c b/net/atm/br2684.c
+index fd94bea36ee8..82c8d33bd8ba 100644
+--- a/net/atm/br2684.c
++++ b/net/atm/br2684.c
+@@ -252,8 +252,7 @@ static int br2684_xmit_vcc(struct sk_buff *skb, struct net_device *dev,
+
+ ATM_SKB(skb)->vcc = atmvcc = brvcc->atmvcc;
+ pr_debug("atm_skb(%p)->vcc(%p)->dev(%p)\n", skb, atmvcc, atmvcc->dev);
+- refcount_add(skb->truesize, &sk_atm(atmvcc)->sk_wmem_alloc);
+- ATM_SKB(skb)->atm_options = atmvcc->atm_options;
++ atm_account_tx(atmvcc, skb);
+ dev->stats.tx_packets++;
+ dev->stats.tx_bytes += skb->len;
+
+diff --git a/net/atm/clip.c b/net/atm/clip.c
+index f07dbc632222..0edebf8decc0 100644
+--- a/net/atm/clip.c
++++ b/net/atm/clip.c
+@@ -381,8 +381,7 @@ static netdev_tx_t clip_start_xmit(struct sk_buff *skb,
+ memcpy(here, llc_oui, sizeof(llc_oui));
+ ((__be16 *) here)[3] = skb->protocol;
+ }
+- refcount_add(skb->truesize, &sk_atm(vcc)->sk_wmem_alloc);
+- ATM_SKB(skb)->atm_options = vcc->atm_options;
++ atm_account_tx(vcc, skb);
+ entry->vccs->last_use = jiffies;
+ pr_debug("atm_skb(%p)->vcc(%p)->dev(%p)\n", skb, vcc, vcc->dev);
+ old = xchg(&entry->vccs->xoff, 1); /* assume XOFF ... */
+diff --git a/net/atm/common.c b/net/atm/common.c
+index fc78a0508ae1..a7a68e509628 100644
+--- a/net/atm/common.c
++++ b/net/atm/common.c
+@@ -630,10 +630,9 @@ int vcc_sendmsg(struct socket *sock, struct msghdr *m, size_t size)
+ goto out;
+ }
+ pr_debug("%d += %d\n", sk_wmem_alloc_get(sk), skb->truesize);
+- refcount_add(skb->truesize, &sk->sk_wmem_alloc);
++ atm_account_tx(vcc, skb);
+
+ skb->dev = NULL; /* for paths shared with net_device interfaces */
+- ATM_SKB(skb)->atm_options = vcc->atm_options;
+ if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) {
+ kfree_skb(skb);
+ error = -EFAULT;
+diff --git a/net/atm/lec.c b/net/atm/lec.c
+index 3138a869b5c0..19ad2fd04983 100644
+--- a/net/atm/lec.c
++++ b/net/atm/lec.c
+@@ -182,9 +182,8 @@ lec_send(struct atm_vcc *vcc, struct sk_buff *skb)
+ struct net_device *dev = skb->dev;
+
+ ATM_SKB(skb)->vcc = vcc;
+- ATM_SKB(skb)->atm_options = vcc->atm_options;
++ atm_account_tx(vcc, skb);
+
+- refcount_add(skb->truesize, &sk_atm(vcc)->sk_wmem_alloc);
+ if (vcc->send(vcc, skb) < 0) {
+ dev->stats.tx_dropped++;
+ return;
+diff --git a/net/atm/mpc.c b/net/atm/mpc.c
+index 31e0dcb970f8..44ddcdd5fd35 100644
+--- a/net/atm/mpc.c
++++ b/net/atm/mpc.c
+@@ -555,8 +555,7 @@ static int send_via_shortcut(struct sk_buff *skb, struct mpoa_client *mpc)
+ sizeof(struct llc_snap_hdr));
+ }
+
+- refcount_add(skb->truesize, &sk_atm(entry->shortcut)->sk_wmem_alloc);
+- ATM_SKB(skb)->atm_options = entry->shortcut->atm_options;
++ atm_account_tx(entry->shortcut, skb);
+ entry->shortcut->send(entry->shortcut, skb);
+ entry->packets_fwded++;
+ mpc->in_ops->put(entry);
+diff --git a/net/atm/pppoatm.c b/net/atm/pppoatm.c
+index 21d9d341a619..af8c4b38b746 100644
+--- a/net/atm/pppoatm.c
++++ b/net/atm/pppoatm.c
+@@ -350,8 +350,7 @@ static int pppoatm_send(struct ppp_channel *chan, struct sk_buff *skb)
+ return 1;
+ }
+
+- refcount_add(skb->truesize, &sk_atm(ATM_SKB(skb)->vcc)->sk_wmem_alloc);
+- ATM_SKB(skb)->atm_options = ATM_SKB(skb)->vcc->atm_options;
++ atm_account_tx(vcc, skb);
+ pr_debug("atm_skb(%p)->vcc(%p)->dev(%p)\n",
+ skb, ATM_SKB(skb)->vcc, ATM_SKB(skb)->vcc->dev);
+ ret = ATM_SKB(skb)->vcc->send(ATM_SKB(skb)->vcc, skb)
+diff --git a/net/atm/raw.c b/net/atm/raw.c
+index ee10e8d46185..b3ba44aab0ee 100644
+--- a/net/atm/raw.c
++++ b/net/atm/raw.c
+@@ -35,8 +35,8 @@ static void atm_pop_raw(struct atm_vcc *vcc, struct sk_buff *skb)
+ struct sock *sk = sk_atm(vcc);
+
+ pr_debug("(%d) %d -= %d\n",
+- vcc->vci, sk_wmem_alloc_get(sk), skb->truesize);
+- WARN_ON(refcount_sub_and_test(skb->truesize, &sk->sk_wmem_alloc));
++ vcc->vci, sk_wmem_alloc_get(sk), ATM_SKB(skb)->acct_truesize);
++ WARN_ON(refcount_sub_and_test(ATM_SKB(skb)->acct_truesize, &sk->sk_wmem_alloc));
+ dev_kfree_skb_any(skb);
+ sk->sk_write_space(sk);
+ }
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index 499123afcab5..9d37d91b34e5 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -396,6 +396,12 @@ ebt_check_watcher(struct ebt_entry_watcher *w, struct xt_tgchk_param *par,
+ watcher = xt_request_find_target(NFPROTO_BRIDGE, w->u.name, 0);
+ if (IS_ERR(watcher))
+ return PTR_ERR(watcher);
++
++ if (watcher->family != NFPROTO_BRIDGE) {
++ module_put(watcher->me);
++ return -ENOENT;
++ }
++
+ w->u.watcher = watcher;
+
+ par->target = watcher;
+@@ -717,6 +723,13 @@ ebt_check_entry(struct ebt_entry *e, struct net *net,
+ goto cleanup_watchers;
+ }
+
++ /* Reject UNSPEC, xtables verdicts/return values are incompatible */
++ if (target->family != NFPROTO_BRIDGE) {
++ module_put(target->me);
++ ret = -ENOENT;
++ goto cleanup_watchers;
++ }
++
+ t->u.target = target;
+ if (t->u.target == &ebt_standard_target) {
+ if (gap < sizeof(struct ebt_standard_target)) {
+diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
+index a04e1e88bf3a..50537ff961a7 100644
+--- a/net/core/dev_ioctl.c
++++ b/net/core/dev_ioctl.c
+@@ -285,16 +285,9 @@ static int dev_ifsioc(struct net *net, struct ifreq *ifr, unsigned int cmd)
+ if (ifr->ifr_qlen < 0)
+ return -EINVAL;
+ if (dev->tx_queue_len ^ ifr->ifr_qlen) {
+- unsigned int orig_len = dev->tx_queue_len;
+-
+- dev->tx_queue_len = ifr->ifr_qlen;
+- err = call_netdevice_notifiers(
+- NETDEV_CHANGE_TX_QUEUE_LEN, dev);
+- err = notifier_to_errno(err);
+- if (err) {
+- dev->tx_queue_len = orig_len;
++ err = dev_change_tx_queue_len(dev, ifr->ifr_qlen);
++ if (err)
+ return err;
+- }
+ }
+ return 0;
+
+diff --git a/net/dccp/ccids/ccid3.c b/net/dccp/ccids/ccid3.c
+index 8b5ba6dffac7..12877a1514e7 100644
+--- a/net/dccp/ccids/ccid3.c
++++ b/net/dccp/ccids/ccid3.c
+@@ -600,7 +600,7 @@ static void ccid3_hc_rx_send_feedback(struct sock *sk,
+ {
+ struct ccid3_hc_rx_sock *hc = ccid3_hc_rx_sk(sk);
+ struct dccp_sock *dp = dccp_sk(sk);
+- ktime_t now = ktime_get_real();
++ ktime_t now = ktime_get();
+ s64 delta = 0;
+
+ switch (fbtype) {
+@@ -625,15 +625,14 @@ static void ccid3_hc_rx_send_feedback(struct sock *sk,
+ case CCID3_FBACK_PERIODIC:
+ delta = ktime_us_delta(now, hc->rx_tstamp_last_feedback);
+ if (delta <= 0)
+- DCCP_BUG("delta (%ld) <= 0", (long)delta);
+- else
+- hc->rx_x_recv = scaled_div32(hc->rx_bytes_recv, delta);
++ delta = 1;
++ hc->rx_x_recv = scaled_div32(hc->rx_bytes_recv, delta);
+ break;
+ default:
+ return;
+ }
+
+- ccid3_pr_debug("Interval %ldusec, X_recv=%u, 1/p=%u\n", (long)delta,
++ ccid3_pr_debug("Interval %lldusec, X_recv=%u, 1/p=%u\n", delta,
+ hc->rx_x_recv, hc->rx_pinv);
+
+ hc->rx_tstamp_last_feedback = now;
+@@ -680,7 +679,8 @@ static int ccid3_hc_rx_insert_options(struct sock *sk, struct sk_buff *skb)
+ static u32 ccid3_first_li(struct sock *sk)
+ {
+ struct ccid3_hc_rx_sock *hc = ccid3_hc_rx_sk(sk);
+- u32 x_recv, p, delta;
++ u32 x_recv, p;
++ s64 delta;
+ u64 fval;
+
+ if (hc->rx_rtt == 0) {
+@@ -688,7 +688,9 @@ static u32 ccid3_first_li(struct sock *sk)
+ hc->rx_rtt = DCCP_FALLBACK_RTT;
+ }
+
+- delta = ktime_to_us(net_timedelta(hc->rx_tstamp_last_feedback));
++ delta = ktime_us_delta(ktime_get(), hc->rx_tstamp_last_feedback);
++ if (delta <= 0)
++ delta = 1;
+ x_recv = scaled_div32(hc->rx_bytes_recv, delta);
+ if (x_recv == 0) { /* would also trigger divide-by-zero */
+ DCCP_WARN("X_recv==0\n");
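
Both ccid3 hunks clamp a possibly non-positive microsecond delta to 1 before dividing, rather than logging a bug and skipping the rate update. A plain-C equivalent of the guarded division, with ordinary 64-bit arithmetic standing in for the kernel-internal scaled_div32():

#include <stdint.h>
#include <stdio.h>

/* Bytes/sec from a byte count and a microsecond delta; the clamp makes
 * a zero or negative interval (clock step, same-tick feedback) safe. */
static uint32_t rx_rate(uint32_t bytes, int64_t delta_us)
{
    if (delta_us <= 0)
        delta_us = 1;
    return (uint32_t)(((uint64_t)bytes * 1000000) / (uint64_t)delta_us);
}

int main(void)
{
    printf("%u\n", (unsigned)rx_rate(1500, 1000)); /* 1.5 MB/s */
    printf("%u\n", (unsigned)rx_rate(1500, 0));    /* clamped, no div 0 */
    return 0;
}
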
+diff --git a/net/dns_resolver/dns_key.c b/net/dns_resolver/dns_key.c
+index 40c851693f77..0c9478b91fa5 100644
+--- a/net/dns_resolver/dns_key.c
++++ b/net/dns_resolver/dns_key.c
+@@ -86,35 +86,39 @@ dns_resolver_preparse(struct key_preparsed_payload *prep)
+ opt++;
+ kdebug("options: '%s'", opt);
+ do {
++ int opt_len, opt_nlen;
+ const char *eq;
+- int opt_len, opt_nlen, opt_vlen, tmp;
++ char optval[128];
+
+ next_opt = memchr(opt, '#', end - opt) ?: end;
+ opt_len = next_opt - opt;
+- if (opt_len <= 0 || opt_len > 128) {
++ if (opt_len <= 0 || opt_len > sizeof(optval)) {
+ pr_warn_ratelimited("Invalid option length (%d) for dns_resolver key\n",
+ opt_len);
+ return -EINVAL;
+ }
+
+- eq = memchr(opt, '=', opt_len) ?: end;
+- opt_nlen = eq - opt;
+- eq++;
+- opt_vlen = next_opt - eq; /* will be -1 if no value */
++ eq = memchr(opt, '=', opt_len);
++ if (eq) {
++ opt_nlen = eq - opt;
++ eq++;
++ memcpy(optval, eq, next_opt - eq);
++ optval[next_opt - eq] = '\0';
++ } else {
++ opt_nlen = opt_len;
++ optval[0] = '\0';
++ }
+
+- tmp = opt_vlen >= 0 ? opt_vlen : 0;
+- kdebug("option '%*.*s' val '%*.*s'",
+- opt_nlen, opt_nlen, opt, tmp, tmp, eq);
++ kdebug("option '%*.*s' val '%s'",
++ opt_nlen, opt_nlen, opt, optval);
+
+ /* see if it's an error number representing a DNS error
+ * that's to be recorded as the result in this key */
+ if (opt_nlen == sizeof(DNS_ERRORNO_OPTION) - 1 &&
+ memcmp(opt, DNS_ERRORNO_OPTION, opt_nlen) == 0) {
+ kdebug("dns error number option");
+- if (opt_vlen <= 0)
+- goto bad_option_value;
+
+- ret = kstrtoul(eq, 10, &derrno);
++ ret = kstrtoul(optval, 10, &derrno);
+ if (ret < 0)
+ goto bad_option_value;
+
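
The dns_key hunk copies the option value into a bounded, NUL-terminated buffer before handing it to kstrtoul(), instead of pointing into the middle of the payload. A standalone sketch of the same parse step; names are illustrative, not the kernel's:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse one "name=value" option into a bounded, NUL-terminated value
 * buffer, returning the name length through *name_len. */
static int parse_opt(const char *opt, size_t opt_len,
                     char *val, size_t val_sz, size_t *name_len)
{
    const char *eq = memchr(opt, '=', opt_len);

    if (opt_len == 0 || opt_len > val_sz)
        return -1;                      /* mirrors the length check */
    if (eq) {
        size_t vlen = opt_len - (size_t)(eq - opt) - 1;

        *name_len = (size_t)(eq - opt);
        memcpy(val, eq + 1, vlen);
        val[vlen] = '\0';               /* a proper string for strtol */
    } else {
        *name_len = opt_len;
        val[0] = '\0';                  /* option with no value */
    }
    return 0;
}

int main(void)
{
    char val[128];
    size_t nlen;
    const char *opt = "dnserror=6";

    if (parse_opt(opt, strlen(opt), val, sizeof(val), &nlen) == 0)
        printf("name len %zu, value %ld\n", nlen, strtol(val, NULL, 10));
    return 0;
}
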
+diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c
+index 1540db65241a..c9ec1603666b 100644
+--- a/net/ipv4/fou.c
++++ b/net/ipv4/fou.c
+@@ -448,9 +448,7 @@ static struct sk_buff **gue_gro_receive(struct sock *sk,
+ out_unlock:
+ rcu_read_unlock();
+ out:
+- NAPI_GRO_CB(skb)->flush |= flush;
+- skb_gro_remcsum_cleanup(skb, &grc);
+- skb->remcsum_offload = 0;
++ skb_gro_flush_final_remcsum(skb, pp, flush, &grc);
+
+ return pp;
+ }
+diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
+index 1859c473b21a..6a7d980105f6 100644
+--- a/net/ipv4/gre_offload.c
++++ b/net/ipv4/gre_offload.c
+@@ -223,7 +223,7 @@ static struct sk_buff **gre_gro_receive(struct sk_buff **head,
+ out_unlock:
+ rcu_read_unlock();
+ out:
+- NAPI_GRO_CB(skb)->flush |= flush;
++ skb_gro_flush_final(skb, pp, flush);
+
+ return pp;
+ }
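
This hunk and the fou/udp_offload hunks in this patch route the gro_receive exit paths through skb_gro_flush_final(). As merged upstream, the helper exists so the skb is not dereferenced after an inner handler (the ESP GRO path) has taken it over, which it signals by returning ERR_PTR(-EINPROGRESS). A paraphrased sketch of that shape, not the exact kernel source:

#include <errno.h>
#include <stdio.h>

struct gro_cb { int flush; };

static void flush_final(struct gro_cb *cb, long pp_err, int flush)
{
    if (pp_err != -EINPROGRESS)
        cb->flush |= flush;     /* safe: the skb is still ours */
    /* else: the skb may already be freed; don't dereference it */
}

int main(void)
{
    struct gro_cb cb = { 0 };

    flush_final(&cb, 0, 1);
    printf("flush=%d\n", cb.flush);
    flush_final(&cb, -EINPROGRESS, 1);  /* leaves cb untouched */
    return 0;
}
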
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 31ff46daae97..3647167c8fa3 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -243,9 +243,9 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ bool dev_match = (sk->sk_bound_dev_if == dif ||
+ sk->sk_bound_dev_if == sdif);
+
+- if (exact_dif && !dev_match)
++ if (!dev_match)
+ return -1;
+- if (sk->sk_bound_dev_if && dev_match)
++ if (sk->sk_bound_dev_if)
+ score += 4;
+ }
+ if (sk->sk_incoming_cpu == raw_smp_processor_id())
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 4b195bac8ac0..2f600f261690 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -263,8 +263,9 @@ static int proc_tcp_fastopen_key(struct ctl_table *table, int write,
+ ipv4.sysctl_tcp_fastopen);
+ struct ctl_table tbl = { .maxlen = (TCP_FASTOPEN_KEY_LENGTH * 2 + 10) };
+ struct tcp_fastopen_context *ctxt;
+- int ret;
+ u32 user_key[4]; /* 16 bytes, matching TCP_FASTOPEN_KEY_LENGTH */
++ __le32 key[4];
++ int ret, i;
+
+ tbl.data = kmalloc(tbl.maxlen, GFP_KERNEL);
+ if (!tbl.data)
+@@ -273,11 +274,14 @@ static int proc_tcp_fastopen_key(struct ctl_table *table, int write,
+ rcu_read_lock();
+ ctxt = rcu_dereference(net->ipv4.tcp_fastopen_ctx);
+ if (ctxt)
+- memcpy(user_key, ctxt->key, TCP_FASTOPEN_KEY_LENGTH);
++ memcpy(key, ctxt->key, TCP_FASTOPEN_KEY_LENGTH);
+ else
+- memset(user_key, 0, sizeof(user_key));
++ memset(key, 0, sizeof(key));
+ rcu_read_unlock();
+
++ for (i = 0; i < ARRAY_SIZE(key); i++)
++ user_key[i] = le32_to_cpu(key[i]);
++
+ snprintf(tbl.data, tbl.maxlen, "%08x-%08x-%08x-%08x",
+ user_key[0], user_key[1], user_key[2], user_key[3]);
+ ret = proc_dostring(&tbl, write, buffer, lenp, ppos);
+@@ -288,13 +292,17 @@ static int proc_tcp_fastopen_key(struct ctl_table *table, int write,
+ ret = -EINVAL;
+ goto bad_key;
+ }
+- tcp_fastopen_reset_cipher(net, NULL, user_key,
++
++ for (i = 0; i < ARRAY_SIZE(user_key); i++)
++ key[i] = cpu_to_le32(user_key[i]);
++
++ tcp_fastopen_reset_cipher(net, NULL, key,
+ TCP_FASTOPEN_KEY_LENGTH);
+ }
+
+ bad_key:
+ pr_debug("proc FO key set 0x%x-%x-%x-%x <- 0x%s: %u\n",
+- user_key[0], user_key[1], user_key[2], user_key[3],
++ user_key[0], user_key[1], user_key[2], user_key[3],
+ (char *)tbl.data, ret);
+ kfree(tbl.data);
+ return ret;
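
The fastopen hunk treats the stored key as little-endian 32-bit words and converts at the proc boundary, so the hex string written on one endianness reads back identically on the other. A portable sketch of the two conversions, with plain byte operations in place of le32_to_cpu()/cpu_to_le32():

#include <stdint.h>
#include <stdio.h>

static uint32_t le32_to_host(const uint8_t b[4])
{
    return (uint32_t)b[0] | (uint32_t)b[1] << 8 |
           (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
}

static void host_to_le32(uint32_t v, uint8_t b[4])
{
    b[0] = (uint8_t)v;
    b[1] = (uint8_t)(v >> 8);
    b[2] = (uint8_t)(v >> 16);
    b[3] = (uint8_t)(v >> 24);
}

int main(void)
{
    /* 16-byte key as stored: a fixed byte order regardless of CPU. */
    uint8_t key[16] = { 0x01, 0x02, 0x03, 0x04 };
    uint32_t host[4];
    int i;

    for (i = 0; i < 4; i++)
        host[i] = le32_to_host(key + 4 * i);
    printf("%08x-%08x-%08x-%08x\n", (unsigned)host[0], (unsigned)host[1],
           (unsigned)host[2], (unsigned)host[3]);

    for (i = 0; i < 4; i++)
        host_to_le32(host[i], key + 4 * i);     /* round-trips exactly */
    return 0;
}
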
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index e51c644484dc..1f25ebab25d2 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -3149,6 +3149,15 @@ static int tcp_clean_rtx_queue(struct sock *sk, u32 prior_fack,
+
+ if (tcp_is_reno(tp)) {
+ tcp_remove_reno_sacks(sk, pkts_acked);
++
++ /* If any of the cumulatively ACKed segments was
++ * retransmitted, non-SACK case cannot confirm that
++ * progress was due to original transmission due to
++ * lack of TCPCB_SACKED_ACKED bits even if some of
++ * the packets may have been never retransmitted.
++ */
++ if (flag & FLAG_RETRANS_DATA_ACKED)
++ flag &= ~FLAG_ORIG_SACK_ACKED;
+ } else {
+ int delta;
+
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index ea6e6e7df0ee..cde2719fcb89 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -295,7 +295,7 @@ struct sk_buff **udp_gro_receive(struct sk_buff **head, struct sk_buff *skb,
+ out_unlock:
+ rcu_read_unlock();
+ out:
+- NAPI_GRO_CB(skb)->flush |= flush;
++ skb_gro_flush_final(skb, pp, flush);
+ return pp;
+ }
+ EXPORT_SYMBOL(udp_gro_receive);
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index 2febe26de6a1..595ad408dba0 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -113,9 +113,9 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ bool dev_match = (sk->sk_bound_dev_if == dif ||
+ sk->sk_bound_dev_if == sdif);
+
+- if (exact_dif && !dev_match)
++ if (!dev_match)
+ return -1;
+- if (sk->sk_bound_dev_if && dev_match)
++ if (sk->sk_bound_dev_if)
+ score++;
+ }
+ if (sk->sk_incoming_cpu == raw_smp_processor_id())
+diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
+index 5e0332014c17..eeb4d3098ff4 100644
+--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
+@@ -585,6 +585,8 @@ int nf_ct_frag6_gather(struct net *net, struct sk_buff *skb, u32 user)
+ fq->q.meat == fq->q.len &&
+ nf_ct_frag6_reasm(fq, skb, dev))
+ ret = 0;
++ else
++ skb_dst_drop(skb);
+
+ out_unlock:
+ spin_unlock_bh(&fq->q.lock);
+diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c
+index 33fb35cbfac1..558fe8cc6d43 100644
+--- a/net/ipv6/seg6_hmac.c
++++ b/net/ipv6/seg6_hmac.c
+@@ -373,7 +373,7 @@ static int seg6_hmac_init_algo(void)
+ return -ENOMEM;
+
+ for_each_possible_cpu(cpu) {
+- tfm = crypto_alloc_shash(algo->name, 0, GFP_KERNEL);
++ tfm = crypto_alloc_shash(algo->name, 0, 0);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+ p_tfm = per_cpu_ptr(algo->tfms, cpu);
+diff --git a/net/netfilter/ipvs/ip_vs_lblc.c b/net/netfilter/ipvs/ip_vs_lblc.c
+index 3057e453bf31..83918119ceb8 100644
+--- a/net/netfilter/ipvs/ip_vs_lblc.c
++++ b/net/netfilter/ipvs/ip_vs_lblc.c
+@@ -371,6 +371,7 @@ static int ip_vs_lblc_init_svc(struct ip_vs_service *svc)
+ tbl->counter = 1;
+ tbl->dead = false;
+ tbl->svc = svc;
++ atomic_set(&tbl->entries, 0);
+
+ /*
+ * Hook periodic timer for garbage collection
+diff --git a/net/netfilter/ipvs/ip_vs_lblcr.c b/net/netfilter/ipvs/ip_vs_lblcr.c
+index 92adc04557ed..bc2bc5eebcb8 100644
+--- a/net/netfilter/ipvs/ip_vs_lblcr.c
++++ b/net/netfilter/ipvs/ip_vs_lblcr.c
+@@ -534,6 +534,7 @@ static int ip_vs_lblcr_init_svc(struct ip_vs_service *svc)
+ tbl->counter = 1;
+ tbl->dead = false;
+ tbl->svc = svc;
++ atomic_set(&tbl->entries, 0);
+
+ /*
+ * Hook periodic timer for garbage collection
+diff --git a/net/nfc/llcp_commands.c b/net/nfc/llcp_commands.c
+index 2ceefa183cee..6a196e438b6c 100644
+--- a/net/nfc/llcp_commands.c
++++ b/net/nfc/llcp_commands.c
+@@ -752,11 +752,14 @@ int nfc_llcp_send_ui_frame(struct nfc_llcp_sock *sock, u8 ssap, u8 dsap,
+ pr_debug("Fragment %zd bytes remaining %zd",
+ frag_len, remaining_len);
+
+- pdu = nfc_alloc_send_skb(sock->dev, &sock->sk, MSG_DONTWAIT,
++ pdu = nfc_alloc_send_skb(sock->dev, &sock->sk, 0,
+ frag_len + LLCP_HEADER_SIZE, &err);
+ if (pdu == NULL) {
+- pr_err("Could not allocate PDU\n");
+- continue;
++ pr_err("Could not allocate PDU (error=%d)\n", err);
++ len -= remaining_len;
++ if (len == 0)
++ len = err;
++ break;
+ }
+
+ pdu = llcp_add_header(pdu, dsap, ssap, LLCP_PDU_UI);
+diff --git a/net/nsh/nsh.c b/net/nsh/nsh.c
+index 9696ef96b719..1a30e165eeb4 100644
+--- a/net/nsh/nsh.c
++++ b/net/nsh/nsh.c
+@@ -104,7 +104,7 @@ static struct sk_buff *nsh_gso_segment(struct sk_buff *skb,
+ __skb_pull(skb, nsh_len);
+
+ skb_reset_mac_header(skb);
+- skb_reset_mac_len(skb);
++ skb->mac_len = proto == htons(ETH_P_TEB) ? ETH_HLEN : 0;
+ skb->protocol = proto;
+
+ features &= NETIF_F_SG;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 38d132d007ba..cb0f02785749 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2294,6 +2294,13 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ if (po->stats.stats1.tp_drops)
+ status |= TP_STATUS_LOSING;
+ }
++
++ if (do_vnet &&
++ virtio_net_hdr_from_skb(skb, h.raw + macoff -
++ sizeof(struct virtio_net_hdr),
++ vio_le(), true, 0))
++ goto drop_n_account;
++
+ po->stats.stats1.tp_packets++;
+ if (copy_skb) {
+ status |= TP_STATUS_COPY;
+@@ -2301,15 +2308,6 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ }
+ spin_unlock(&sk->sk_receive_queue.lock);
+
+- if (do_vnet) {
+- if (virtio_net_hdr_from_skb(skb, h.raw + macoff -
+- sizeof(struct virtio_net_hdr),
+- vio_le(), true, 0)) {
+- spin_lock(&sk->sk_receive_queue.lock);
+- goto drop_n_account;
+- }
+- }
+-
+ skb_copy_bits(skb, 0, h.raw + macoff, snaplen);
+
+ if (!(ts_status = tpacket_get_timestamp(skb, &ts, po->tp_tstamp)))
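
The two af_packet hunks move the fallible virtio_net_hdr_from_skb() call ahead of the ring-slot accounting and keep it under sk_receive_queue.lock, so the drop_n_account path no longer has to re-take the lock. A toy illustration of validate-then-commit under one lock, with a pthread mutex standing in for the socket spinlock:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long packets, drops;

static int deliver(int hdr_ok)
{
    pthread_mutex_lock(&lock);
    if (!hdr_ok) {
        drops++;                 /* error path: lock already held */
        pthread_mutex_unlock(&lock);
        return -1;
    }
    packets++;                   /* commit only after the step succeeded */
    pthread_mutex_unlock(&lock);
    return 0;
}

int main(void)
{
    deliver(1);
    deliver(0);
    printf("packets=%lu drops=%lu\n", packets, drops);
    return 0;
}
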
+diff --git a/net/rds/loop.c b/net/rds/loop.c
+index f2bf78de5688..dac6218a460e 100644
+--- a/net/rds/loop.c
++++ b/net/rds/loop.c
+@@ -193,4 +193,5 @@ struct rds_transport rds_loop_transport = {
+ .inc_copy_to_user = rds_message_inc_copy_to_user,
+ .inc_free = rds_loop_inc_free,
+ .t_name = "loopback",
++ .t_type = RDS_TRANS_LOOP,
+ };
+diff --git a/net/rds/rds.h b/net/rds/rds.h
+index b04c333d9d1c..f2272fb8cd45 100644
+--- a/net/rds/rds.h
++++ b/net/rds/rds.h
+@@ -479,6 +479,11 @@ struct rds_notifier {
+ int n_status;
+ };
+
++/* Available as part of RDS core, so doesn't need to participate
++ * in get_preferred transport etc
++ */
++#define RDS_TRANS_LOOP 3
++
+ /**
+ * struct rds_transport - transport specific behavioural hooks
+ *
+diff --git a/net/rds/recv.c b/net/rds/recv.c
+index dc67458b52f0..192ac6f78ded 100644
+--- a/net/rds/recv.c
++++ b/net/rds/recv.c
+@@ -103,6 +103,11 @@ static void rds_recv_rcvbuf_delta(struct rds_sock *rs, struct sock *sk,
+ rds_stats_add(s_recv_bytes_added_to_socket, delta);
+ else
+ rds_stats_add(s_recv_bytes_removed_from_socket, -delta);
++
++ /* loop transport doesn't send/recv congestion updates */
++ if (rs->rs_transport->t_type == RDS_TRANS_LOOP)
++ return;
++
+ now_congested = rs->rs_rcv_bytes > rds_sk_rcvbuf(rs);
+
+ rdsdebug("rs %p (%pI4:%u) recv bytes %d buf %d "
+diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c
+index 8527cfdc446d..20d7d36b2fc9 100644
+--- a/net/sched/act_ife.c
++++ b/net/sched/act_ife.c
+@@ -415,7 +415,8 @@ static void tcf_ife_cleanup(struct tc_action *a)
+ spin_unlock_bh(&ife->tcf_lock);
+
+ p = rcu_dereference_protected(ife->params, 1);
+- kfree_rcu(p, rcu);
++ if (p)
++ kfree_rcu(p, rcu);
+ }
+
+ /* under ife->tcf_lock for existing action */
+@@ -516,8 +517,6 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ saddr = nla_data(tb[TCA_IFE_SMAC]);
+ }
+
+- ife->tcf_action = parm->action;
+-
+ if (parm->flags & IFE_ENCODE) {
+ if (daddr)
+ ether_addr_copy(p->eth_dst, daddr);
+@@ -543,10 +542,8 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ NULL, NULL);
+ if (err) {
+ metadata_parse_err:
+- if (exists)
+- tcf_idr_release(*a, bind);
+ if (ret == ACT_P_CREATED)
+- _tcf_ife_cleanup(*a);
++ tcf_idr_release(*a, bind);
+
+ if (exists)
+ spin_unlock_bh(&ife->tcf_lock);
+@@ -567,7 +564,7 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ err = use_all_metadata(ife);
+ if (err) {
+ if (ret == ACT_P_CREATED)
+- _tcf_ife_cleanup(*a);
++ tcf_idr_release(*a, bind);
+
+ if (exists)
+ spin_unlock_bh(&ife->tcf_lock);
+@@ -576,6 +573,7 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ }
+ }
+
++ ife->tcf_action = parm->action;
+ if (exists)
+ spin_unlock_bh(&ife->tcf_lock);
+
+diff --git a/net/sched/sch_blackhole.c b/net/sched/sch_blackhole.c
+index c98a61e980ba..9c4c2bb547d7 100644
+--- a/net/sched/sch_blackhole.c
++++ b/net/sched/sch_blackhole.c
+@@ -21,7 +21,7 @@ static int blackhole_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ struct sk_buff **to_free)
+ {
+ qdisc_drop(skb, sch, to_free);
+- return NET_XMIT_SUCCESS;
++ return NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
+ }
+
+ static struct sk_buff *blackhole_dequeue(struct Qdisc *sch)
+diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
+index 092bebc70048..7afd66949a91 100644
+--- a/net/strparser/strparser.c
++++ b/net/strparser/strparser.c
+@@ -35,7 +35,6 @@ struct _strp_msg {
+ */
+ struct strp_msg strp;
+ int accum_len;
+- int early_eaten;
+ };
+
+ static inline struct _strp_msg *_strp_msg(struct sk_buff *skb)
+@@ -115,20 +114,6 @@ static int __strp_recv(read_descriptor_t *desc, struct sk_buff *orig_skb,
+ head = strp->skb_head;
+ if (head) {
+ /* Message already in progress */
+-
+- stm = _strp_msg(head);
+- if (unlikely(stm->early_eaten)) {
+- /* Already some number of bytes on the receive sock
+- * data saved in skb_head, just indicate they
+- * are consumed.
+- */
+- eaten = orig_len <= stm->early_eaten ?
+- orig_len : stm->early_eaten;
+- stm->early_eaten -= eaten;
+-
+- return eaten;
+- }
+-
+ if (unlikely(orig_offset)) {
+ /* Getting data with a non-zero offset when a message is
+ * in progress is not expected. If it does happen, we
+@@ -297,9 +282,9 @@ static int __strp_recv(read_descriptor_t *desc, struct sk_buff *orig_skb,
+ }
+
+ stm->accum_len += cand_len;
++ eaten += cand_len;
+ strp->need_bytes = stm->strp.full_len -
+ stm->accum_len;
+- stm->early_eaten = cand_len;
+ STRP_STATS_ADD(strp->stats.bytes, cand_len);
+ desc->count = 0; /* Stop reading socket */
+ break;
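
The strparser hunk retires the deferred early_eaten bookkeeping: bytes are added to 'eaten' and reported to the caller in the same call that buffered them. A minimal accumulator showing that report-immediately contract; fixed 5-byte messages, all names illustrative:

#include <stdio.h>
#include <string.h>

struct accum { char buf[64]; size_t have, need; };

/* Returns how many input bytes were consumed *now*, instead of
 * remembering a debt to be repaid on the next call. */
static size_t feed(struct accum *a, const char *p, size_t len)
{
    size_t take = a->need - a->have;

    if (take > len)
        take = len;
    memcpy(a->buf + a->have, p, take);
    a->have += take;
    if (a->have == a->need) {
        printf("message: %.*s\n", (int)a->need, a->buf);
        a->have = 0;
    }
    return take;                 /* consumed bytes reported immediately */
}

int main(void)
{
    struct accum a = { .need = 5 };
    const char *in = "helloworld";
    size_t off = 0, n = strlen(in);

    while (off < n)
        off += feed(&a, in + off, n - off);
    return 0;
}
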
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 5fe29121b9a8..9a7f91232de8 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -440,7 +440,7 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ ret = tls_push_record(sk, msg->msg_flags, record_type);
+ if (!ret)
+ continue;
+- if (ret == -EAGAIN)
++ if (ret < 0)
+ goto send_end;
+
+ copied -= try_to_copy;
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index 8e03bd3f3668..5d3cce9e8744 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -201,7 +201,7 @@ virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
+ return -ENODEV;
+ }
+
+- if (le32_to_cpu(pkt->hdr.dst_cid) == vsock->guest_cid)
++ if (le64_to_cpu(pkt->hdr.dst_cid) == vsock->guest_cid)
+ return virtio_transport_send_pkt_loopback(vsock, pkt);
+
+ if (pkt->reply)
+diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
+index a4c1b76240df..2d9b4795edb2 100644
+--- a/virt/kvm/arm/arm.c
++++ b/virt/kvm/arm/arm.c
+@@ -1490,6 +1490,10 @@ static int init_hyp_mode(void)
+ }
+ }
+
++ err = hyp_map_aux_data();
++ if (err)
++ kvm_err("Cannot map host auxilary data: %d\n", err);
++
+ return 0;
+
+ out_err:
+diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
+index c4762bef13c6..c95ab4c5a475 100644
+--- a/virt/kvm/arm/psci.c
++++ b/virt/kvm/arm/psci.c
+@@ -405,7 +405,7 @@ static int kvm_psci_call(struct kvm_vcpu *vcpu)
+ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
+ {
+ u32 func_id = smccc_get_function(vcpu);
+- u32 val = PSCI_RET_NOT_SUPPORTED;
++ u32 val = SMCCC_RET_NOT_SUPPORTED;
+ u32 feature;
+
+ switch (func_id) {
+@@ -417,7 +417,21 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
+ switch(feature) {
+ case ARM_SMCCC_ARCH_WORKAROUND_1:
+ if (kvm_arm_harden_branch_predictor())
+- val = 0;
++ val = SMCCC_RET_SUCCESS;
++ break;
++ case ARM_SMCCC_ARCH_WORKAROUND_2:
++ switch (kvm_arm_have_ssbd()) {
++ case KVM_SSBD_FORCE_DISABLE:
++ case KVM_SSBD_UNKNOWN:
++ break;
++ case KVM_SSBD_KERNEL:
++ val = SMCCC_RET_SUCCESS;
++ break;
++ case KVM_SSBD_FORCE_ENABLE:
++ case KVM_SSBD_MITIGATED:
++ val = SMCCC_RET_NOT_REQUIRED;
++ break;
++ }
+ break;
+ }
+ break;
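
For reference, a compact model of the new ARM_SMCCC_ARCH_WORKAROUND_2 discovery logic above: the host's SSBD state decides whether a guest is told the workaround is available, unnecessary, or unsupported. The values 0/-1/-2 stand in for SMCCC_RET_SUCCESS, SMCCC_RET_NOT_SUPPORTED and SMCCC_RET_NOT_REQUIRED:

#include <stdio.h>

enum ssbd_state { FORCE_DISABLE, UNKNOWN, KERNEL, FORCE_ENABLE, MITIGATED };

static int workaround_2_discovery(enum ssbd_state s)
{
    switch (s) {
    case KERNEL:
        return 0;    /* success: per-VCPU mitigation control available */
    case FORCE_ENABLE:
    case MITIGATED:
        return -2;   /* not required: always safe, guest need not call */
    default:
        return -1;   /* not supported */
    }
}

int main(void)
{
    printf("%d %d %d\n",
           workaround_2_discovery(KERNEL),
           workaround_2_discovery(MITIGATED),
           workaround_2_discovery(UNKNOWN));
    return 0;
}
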
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-07-18 11:18 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-07-18 11:18 UTC (permalink / raw
To: gentoo-commits
commit: f918c66c6091276f7ffd7c8aae48471c08334cdb
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 18 11:18:25 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 18 11:18:25 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f918c66c
Linux patch 4.17.8
0000_README | 4 ++++
1007_linux-4.17.8.patch | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 52 insertions(+)
diff --git a/0000_README b/0000_README
index a165e9a..5c3b875 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-4.17.7.patch
From: http://www.kernel.org
Desc: Linux 4.17.7
+Patch: 1007_linux-4.17.8.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.8
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1007_linux-4.17.8.patch b/1007_linux-4.17.8.patch
new file mode 100644
index 0000000..bfe8221
--- /dev/null
+++ b/1007_linux-4.17.8.patch
@@ -0,0 +1,48 @@
+diff --git a/Makefile b/Makefile
+index 5c9f331f29c0..7cc36fe18dbb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 02a616e2f17d..d14261d6b213 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2081,7 +2081,7 @@ extern int __meminit __early_pfn_to_nid(unsigned long pfn,
+ struct mminit_pfnnid_cache *state);
+ #endif
+
+-#ifdef CONFIG_HAVE_MEMBLOCK
++#if defined(CONFIG_HAVE_MEMBLOCK) && !defined(CONFIG_FLAT_NODE_MEM_MAP)
+ void zero_resv_unavail(void);
+ #else
+ static inline void zero_resv_unavail(void) {}
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 322cb12a142f..7b841a764dd0 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6377,7 +6377,7 @@ void __paginginit free_area_init_node(int nid, unsigned long *zones_size,
+ free_area_init_core(pgdat);
+ }
+
+-#ifdef CONFIG_HAVE_MEMBLOCK
++#if defined(CONFIG_HAVE_MEMBLOCK) && !defined(CONFIG_FLAT_NODE_MEM_MAP)
+ /*
+ * Only struct pages that are backed by physical memory are zeroed and
+ * initialized by going through __init_single_page(). But, there are some
+@@ -6415,7 +6415,7 @@ void __paginginit zero_resv_unavail(void)
+ if (pgcnt)
+ pr_info("Reserved but unavailable: %lld pages", pgcnt);
+ }
+-#endif /* CONFIG_HAVE_MEMBLOCK */
++#endif /* CONFIG_HAVE_MEMBLOCK && !CONFIG_FLAT_NODE_MEM_MAP */
+
+ #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
+
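
The fix keeps the declaration guard in mm.h and the definition guard in page_alloc.c on the same condition, with an empty inline stub for every other configuration so callers never need their own #ifdefs. The pattern in miniature; the macro names here are illustrative:

#include <stdio.h>

#define HAVE_FEATURE 1
/* #define FLAT_LAYOUT 1 */

#if defined(HAVE_FEATURE) && !defined(FLAT_LAYOUT)
static void zero_unavailable(void) { printf("doing real work\n"); }
#else
static inline void zero_unavailable(void) { }   /* no-op stub */
#endif

int main(void)
{
    zero_unavailable();   /* compiles in every configuration */
    return 0;
}
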
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-07-17 16:18 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-07-17 16:18 UTC (permalink / raw
To: gentoo-commits
commit: c054eed8e07c74be300906a57710035d78899d71
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 17 16:18:10 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jul 17 16:18:10 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c054eed8
Linux patch 4.17.7
0000_README | 4 +
1006_linux-4.17.7.patch | 4347 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4351 insertions(+)
diff --git a/0000_README b/0000_README
index b414442..a165e9a 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-4.17.6.patch
From: http://www.kernel.org
Desc: Linux 4.17.6
+Patch: 1006_linux-4.17.7.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.7
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1006_linux-4.17.7.patch b/1006_linux-4.17.7.patch
new file mode 100644
index 0000000..2f4965d
--- /dev/null
+++ b/1006_linux-4.17.7.patch
@@ -0,0 +1,4347 @@
+diff --git a/Documentation/kbuild/kbuild.txt b/Documentation/kbuild/kbuild.txt
+index 6c9c69ec3986..0fa16b461256 100644
+--- a/Documentation/kbuild/kbuild.txt
++++ b/Documentation/kbuild/kbuild.txt
+@@ -148,15 +148,6 @@ stripped after they are installed. If INSTALL_MOD_STRIP is '1', then
+ the default option --strip-debug will be used. Otherwise,
+ INSTALL_MOD_STRIP value will be used as the options to the strip command.
+
+-INSTALL_FW_PATH
+---------------------------------------------------
+-INSTALL_FW_PATH specifies where to install the firmware blobs.
+-The default value is:
+-
+- $(INSTALL_MOD_PATH)/lib/firmware
+-
+-The value can be overridden in which case the default value is ignored.
+-
+ INSTALL_HDR_PATH
+ --------------------------------------------------
+ INSTALL_HDR_PATH specifies where to install user space headers when
+diff --git a/Makefile b/Makefile
+index 1a885c8f82ef..5c9f331f29c0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/arm/boot/dts/armada-38x.dtsi b/arch/arm/boot/dts/armada-38x.dtsi
+index 4cc09e43eea2..e8309ecc32b9 100644
+--- a/arch/arm/boot/dts/armada-38x.dtsi
++++ b/arch/arm/boot/dts/armada-38x.dtsi
+@@ -547,7 +547,7 @@
+
+ thermal: thermal@e8078 {
+ compatible = "marvell,armada380-thermal";
+- reg = <0xe4078 0x4>, <0xe4074 0x4>;
++ reg = <0xe4078 0x4>, <0xe4070 0x8>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
+index fa8b3fe932e6..6495cc51246f 100644
+--- a/arch/arm64/include/asm/simd.h
++++ b/arch/arm64/include/asm/simd.h
+@@ -29,20 +29,15 @@ DECLARE_PER_CPU(bool, kernel_neon_busy);
+ static __must_check inline bool may_use_simd(void)
+ {
+ /*
+- * The raw_cpu_read() is racy if called with preemption enabled.
+- * This is not a bug: kernel_neon_busy is only set when
+- * preemption is disabled, so we cannot migrate to another CPU
+- * while it is set, nor can we migrate to a CPU where it is set.
+- * So, if we find it clear on some CPU then we're guaranteed to
+- * find it clear on any CPU we could migrate to.
+- *
+- * If we are in between kernel_neon_begin()...kernel_neon_end(),
+- * the flag will be set, but preemption is also disabled, so we
+- * can't migrate to another CPU and spuriously see it become
+- * false.
++ * kernel_neon_busy is only set while preemption is disabled,
++ * and is clear whenever preemption is enabled. Since
++ * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
++ * cannot change under our feet -- if it's set we cannot be
++ * migrated, and if it's clear we cannot be migrated to a CPU
++ * where it is set.
+ */
+ return !in_irq() && !irqs_disabled() && !in_nmi() &&
+- !raw_cpu_read(kernel_neon_busy);
++ !this_cpu_read(kernel_neon_busy);
+ }
+
+ #else /* ! CONFIG_KERNEL_MODE_NEON */
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index 3775a8d694fb..d4c59d9dca5b 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -29,6 +29,7 @@
+ #include <linux/kallsyms.h>
+ #include <linux/random.h>
+ #include <linux/prctl.h>
++#include <linux/nmi.h>
+
+ #include <asm/asm.h>
+ #include <asm/bootinfo.h>
+@@ -655,28 +656,42 @@ unsigned long arch_align_stack(unsigned long sp)
+ return sp & ALMASK;
+ }
+
+-static void arch_dump_stack(void *info)
++static DEFINE_PER_CPU(call_single_data_t, backtrace_csd);
++static struct cpumask backtrace_csd_busy;
++
++static void handle_backtrace(void *info)
+ {
+- struct pt_regs *regs;
++ nmi_cpu_backtrace(get_irq_regs());
++ cpumask_clear_cpu(smp_processor_id(), &backtrace_csd_busy);
++}
+
+- regs = get_irq_regs();
++static void raise_backtrace(cpumask_t *mask)
++{
++ call_single_data_t *csd;
++ int cpu;
+
+- if (regs)
+- show_regs(regs);
++ for_each_cpu(cpu, mask) {
++ /*
++ * If we previously sent an IPI to the target CPU & it hasn't
++ * cleared its bit in the busy cpumask then it didn't handle
++ * our previous IPI & it's not safe for us to reuse the
++ * call_single_data_t.
++ */
++ if (cpumask_test_and_set_cpu(cpu, &backtrace_csd_busy)) {
++ pr_warn("Unable to send backtrace IPI to CPU%u - perhaps it hung?\n",
++ cpu);
++ continue;
++ }
+
+- dump_stack();
++ csd = &per_cpu(backtrace_csd, cpu);
++ csd->func = handle_backtrace;
++ smp_call_function_single_async(cpu, csd);
++ }
+ }
+
+ void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
+ {
+- long this_cpu = get_cpu();
+-
+- if (cpumask_test_cpu(this_cpu, mask) && !exclude_self)
+- dump_stack();
+-
+- smp_call_function_many(mask, arch_dump_stack, NULL, 1);
+-
+- put_cpu();
++ nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace);
+ }
+
+ int mips_get_process_fp_mode(struct task_struct *task)
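
The raise_backtrace() rework guards each per-CPU call_single_data_t with a busy bit: a sender may not reuse the slot until the target CPU has cleared it. A standalone C11 sketch of that handshake, with atomic_flag standing in for the backtrace_csd_busy cpumask:

#include <stdatomic.h>
#include <stdio.h>

static atomic_flag busy[4] = {
    ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT
};

static int send_request(int cpu)
{
    if (atomic_flag_test_and_set(&busy[cpu])) {
        printf("cpu%d busy - previous request not handled yet\n", cpu);
        return -1;                     /* never reuse an in-flight slot */
    }
    printf("request queued for cpu%d\n", cpu);
    return 0;
}

static void handle_request(int cpu)
{
    /* ... take the backtrace ... */
    atomic_flag_clear(&busy[cpu]);     /* slot may be reused now */
}

int main(void)
{
    send_request(1);
    send_request(1);       /* refused: handler has not run */
    handle_request(1);
    send_request(1);       /* accepted again */
    return 0;
}
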
+diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
+index 967e9e4e795e..a91777630045 100644
+--- a/arch/mips/kernel/traps.c
++++ b/arch/mips/kernel/traps.c
+@@ -351,6 +351,7 @@ static void __show_regs(const struct pt_regs *regs)
+ void show_regs(struct pt_regs *regs)
+ {
+ __show_regs((struct pt_regs *)regs);
++ dump_stack();
+ }
+
+ void show_registers(struct pt_regs *regs)
+diff --git a/arch/mips/mm/ioremap.c b/arch/mips/mm/ioremap.c
+index 1986e09fb457..1601d90b087b 100644
+--- a/arch/mips/mm/ioremap.c
++++ b/arch/mips/mm/ioremap.c
+@@ -9,6 +9,7 @@
+ #include <linux/export.h>
+ #include <asm/addrspace.h>
+ #include <asm/byteorder.h>
++#include <linux/ioport.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/vmalloc.h>
+@@ -98,6 +99,20 @@ static int remap_area_pages(unsigned long address, phys_addr_t phys_addr,
+ return error;
+ }
+
++static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages,
++ void *arg)
++{
++ unsigned long i;
++
++ for (i = 0; i < nr_pages; i++) {
++ if (pfn_valid(start_pfn + i) &&
++ !PageReserved(pfn_to_page(start_pfn + i)))
++ return 1;
++ }
++
++ return 0;
++}
++
+ /*
+ * Generic mapping function (not visible outside):
+ */
+@@ -116,8 +131,8 @@ static int remap_area_pages(unsigned long address, phys_addr_t phys_addr,
+
+ void __iomem * __ioremap(phys_addr_t phys_addr, phys_addr_t size, unsigned long flags)
+ {
++ unsigned long offset, pfn, last_pfn;
+ struct vm_struct * area;
+- unsigned long offset;
+ phys_addr_t last_addr;
+ void * addr;
+
+@@ -137,18 +152,16 @@ void __iomem * __ioremap(phys_addr_t phys_addr, phys_addr_t size, unsigned long
+ return (void __iomem *) CKSEG1ADDR(phys_addr);
+
+ /*
+- * Don't allow anybody to remap normal RAM that we're using..
++ * Don't allow anybody to remap RAM that may be allocated by the page
++ * allocator, since that could lead to races & data clobbering.
+ */
+- if (phys_addr < virt_to_phys(high_memory)) {
+- char *t_addr, *t_end;
+- struct page *page;
+-
+- t_addr = __va(phys_addr);
+- t_end = t_addr + (size - 1);
+-
+- for(page = virt_to_page(t_addr); page <= virt_to_page(t_end); page++)
+- if(!PageReserved(page))
+- return NULL;
++ pfn = PFN_DOWN(phys_addr);
++ last_pfn = PFN_DOWN(last_addr);
++ if (walk_system_ram_range(pfn, last_pfn - pfn + 1, NULL,
++ __ioremap_check_ram) == 1) {
++ WARN_ONCE(1, "ioremap on RAM at %pa - %pa\n",
++ &phys_addr, &last_addr);
++ return NULL;
+ }
+
+ /*
+diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
+index 5f07333bb224..9c903a420cda 100644
+--- a/arch/x86/crypto/Makefile
++++ b/arch/x86/crypto/Makefile
+@@ -15,7 +15,6 @@ obj-$(CONFIG_CRYPTO_GLUE_HELPER_X86) += glue_helper.o
+
+ obj-$(CONFIG_CRYPTO_AES_586) += aes-i586.o
+ obj-$(CONFIG_CRYPTO_TWOFISH_586) += twofish-i586.o
+-obj-$(CONFIG_CRYPTO_SALSA20_586) += salsa20-i586.o
+ obj-$(CONFIG_CRYPTO_SERPENT_SSE2_586) += serpent-sse2-i586.o
+
+ obj-$(CONFIG_CRYPTO_AES_X86_64) += aes-x86_64.o
+@@ -24,7 +23,6 @@ obj-$(CONFIG_CRYPTO_CAMELLIA_X86_64) += camellia-x86_64.o
+ obj-$(CONFIG_CRYPTO_BLOWFISH_X86_64) += blowfish-x86_64.o
+ obj-$(CONFIG_CRYPTO_TWOFISH_X86_64) += twofish-x86_64.o
+ obj-$(CONFIG_CRYPTO_TWOFISH_X86_64_3WAY) += twofish-x86_64-3way.o
+-obj-$(CONFIG_CRYPTO_SALSA20_X86_64) += salsa20-x86_64.o
+ obj-$(CONFIG_CRYPTO_CHACHA20_X86_64) += chacha20-x86_64.o
+ obj-$(CONFIG_CRYPTO_SERPENT_SSE2_X86_64) += serpent-sse2-x86_64.o
+ obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o
+@@ -59,7 +57,6 @@ endif
+
+ aes-i586-y := aes-i586-asm_32.o aes_glue.o
+ twofish-i586-y := twofish-i586-asm_32.o twofish_glue.o
+-salsa20-i586-y := salsa20-i586-asm_32.o salsa20_glue.o
+ serpent-sse2-i586-y := serpent-sse2-i586-asm_32.o serpent_sse2_glue.o
+
+ aes-x86_64-y := aes-x86_64-asm_64.o aes_glue.o
+@@ -68,7 +65,6 @@ camellia-x86_64-y := camellia-x86_64-asm_64.o camellia_glue.o
+ blowfish-x86_64-y := blowfish-x86_64-asm_64.o blowfish_glue.o
+ twofish-x86_64-y := twofish-x86_64-asm_64.o twofish_glue.o
+ twofish-x86_64-3way-y := twofish-x86_64-asm_64-3way.o twofish_glue_3way.o
+-salsa20-x86_64-y := salsa20-x86_64-asm_64.o salsa20_glue.o
+ chacha20-x86_64-y := chacha20-ssse3-x86_64.o chacha20_glue.o
+ serpent-sse2-x86_64-y := serpent-sse2-x86_64-asm_64.o serpent_sse2_glue.o
+
+diff --git a/arch/x86/crypto/salsa20-i586-asm_32.S b/arch/x86/crypto/salsa20-i586-asm_32.S
+deleted file mode 100644
+index 6014b7b9e52a..000000000000
+--- a/arch/x86/crypto/salsa20-i586-asm_32.S
++++ /dev/null
+@@ -1,938 +0,0 @@
+-# Derived from:
+-# salsa20_pm.s version 20051229
+-# D. J. Bernstein
+-# Public domain.
+-
+-#include <linux/linkage.h>
+-
+-.text
+-
+-# enter salsa20_encrypt_bytes
+-ENTRY(salsa20_encrypt_bytes)
+- mov %esp,%eax
+- and $31,%eax
+- add $256,%eax
+- sub %eax,%esp
+- # eax_stack = eax
+- movl %eax,80(%esp)
+- # ebx_stack = ebx
+- movl %ebx,84(%esp)
+- # esi_stack = esi
+- movl %esi,88(%esp)
+- # edi_stack = edi
+- movl %edi,92(%esp)
+- # ebp_stack = ebp
+- movl %ebp,96(%esp)
+- # x = arg1
+- movl 4(%esp,%eax),%edx
+- # m = arg2
+- movl 8(%esp,%eax),%esi
+- # out = arg3
+- movl 12(%esp,%eax),%edi
+- # bytes = arg4
+- movl 16(%esp,%eax),%ebx
+- # bytes -= 0
+- sub $0,%ebx
+- # goto done if unsigned<=
+- jbe ._done
+-._start:
+- # in0 = *(uint32 *) (x + 0)
+- movl 0(%edx),%eax
+- # in1 = *(uint32 *) (x + 4)
+- movl 4(%edx),%ecx
+- # in2 = *(uint32 *) (x + 8)
+- movl 8(%edx),%ebp
+- # j0 = in0
+- movl %eax,164(%esp)
+- # in3 = *(uint32 *) (x + 12)
+- movl 12(%edx),%eax
+- # j1 = in1
+- movl %ecx,168(%esp)
+- # in4 = *(uint32 *) (x + 16)
+- movl 16(%edx),%ecx
+- # j2 = in2
+- movl %ebp,172(%esp)
+- # in5 = *(uint32 *) (x + 20)
+- movl 20(%edx),%ebp
+- # j3 = in3
+- movl %eax,176(%esp)
+- # in6 = *(uint32 *) (x + 24)
+- movl 24(%edx),%eax
+- # j4 = in4
+- movl %ecx,180(%esp)
+- # in7 = *(uint32 *) (x + 28)
+- movl 28(%edx),%ecx
+- # j5 = in5
+- movl %ebp,184(%esp)
+- # in8 = *(uint32 *) (x + 32)
+- movl 32(%edx),%ebp
+- # j6 = in6
+- movl %eax,188(%esp)
+- # in9 = *(uint32 *) (x + 36)
+- movl 36(%edx),%eax
+- # j7 = in7
+- movl %ecx,192(%esp)
+- # in10 = *(uint32 *) (x + 40)
+- movl 40(%edx),%ecx
+- # j8 = in8
+- movl %ebp,196(%esp)
+- # in11 = *(uint32 *) (x + 44)
+- movl 44(%edx),%ebp
+- # j9 = in9
+- movl %eax,200(%esp)
+- # in12 = *(uint32 *) (x + 48)
+- movl 48(%edx),%eax
+- # j10 = in10
+- movl %ecx,204(%esp)
+- # in13 = *(uint32 *) (x + 52)
+- movl 52(%edx),%ecx
+- # j11 = in11
+- movl %ebp,208(%esp)
+- # in14 = *(uint32 *) (x + 56)
+- movl 56(%edx),%ebp
+- # j12 = in12
+- movl %eax,212(%esp)
+- # in15 = *(uint32 *) (x + 60)
+- movl 60(%edx),%eax
+- # j13 = in13
+- movl %ecx,216(%esp)
+- # j14 = in14
+- movl %ebp,220(%esp)
+- # j15 = in15
+- movl %eax,224(%esp)
+- # x_backup = x
+- movl %edx,64(%esp)
+-._bytesatleast1:
+- # bytes - 64
+- cmp $64,%ebx
+- # goto nocopy if unsigned>=
+- jae ._nocopy
+- # ctarget = out
+- movl %edi,228(%esp)
+- # out = &tmp
+- leal 0(%esp),%edi
+- # i = bytes
+- mov %ebx,%ecx
+- # while (i) { *out++ = *m++; --i }
+- rep movsb
+- # out = &tmp
+- leal 0(%esp),%edi
+- # m = &tmp
+- leal 0(%esp),%esi
+-._nocopy:
+- # out_backup = out
+- movl %edi,72(%esp)
+- # m_backup = m
+- movl %esi,68(%esp)
+- # bytes_backup = bytes
+- movl %ebx,76(%esp)
+- # in0 = j0
+- movl 164(%esp),%eax
+- # in1 = j1
+- movl 168(%esp),%ecx
+- # in2 = j2
+- movl 172(%esp),%edx
+- # in3 = j3
+- movl 176(%esp),%ebx
+- # x0 = in0
+- movl %eax,100(%esp)
+- # x1 = in1
+- movl %ecx,104(%esp)
+- # x2 = in2
+- movl %edx,108(%esp)
+- # x3 = in3
+- movl %ebx,112(%esp)
+- # in4 = j4
+- movl 180(%esp),%eax
+- # in5 = j5
+- movl 184(%esp),%ecx
+- # in6 = j6
+- movl 188(%esp),%edx
+- # in7 = j7
+- movl 192(%esp),%ebx
+- # x4 = in4
+- movl %eax,116(%esp)
+- # x5 = in5
+- movl %ecx,120(%esp)
+- # x6 = in6
+- movl %edx,124(%esp)
+- # x7 = in7
+- movl %ebx,128(%esp)
+- # in8 = j8
+- movl 196(%esp),%eax
+- # in9 = j9
+- movl 200(%esp),%ecx
+- # in10 = j10
+- movl 204(%esp),%edx
+- # in11 = j11
+- movl 208(%esp),%ebx
+- # x8 = in8
+- movl %eax,132(%esp)
+- # x9 = in9
+- movl %ecx,136(%esp)
+- # x10 = in10
+- movl %edx,140(%esp)
+- # x11 = in11
+- movl %ebx,144(%esp)
+- # in12 = j12
+- movl 212(%esp),%eax
+- # in13 = j13
+- movl 216(%esp),%ecx
+- # in14 = j14
+- movl 220(%esp),%edx
+- # in15 = j15
+- movl 224(%esp),%ebx
+- # x12 = in12
+- movl %eax,148(%esp)
+- # x13 = in13
+- movl %ecx,152(%esp)
+- # x14 = in14
+- movl %edx,156(%esp)
+- # x15 = in15
+- movl %ebx,160(%esp)
+- # i = 20
+- mov $20,%ebp
+- # p = x0
+- movl 100(%esp),%eax
+- # s = x5
+- movl 120(%esp),%ecx
+- # t = x10
+- movl 140(%esp),%edx
+- # w = x15
+- movl 160(%esp),%ebx
+-._mainloop:
+- # x0 = p
+- movl %eax,100(%esp)
+- # x10 = t
+- movl %edx,140(%esp)
+- # p += x12
+- addl 148(%esp),%eax
+- # x5 = s
+- movl %ecx,120(%esp)
+- # t += x6
+- addl 124(%esp),%edx
+- # x15 = w
+- movl %ebx,160(%esp)
+- # r = x1
+- movl 104(%esp),%esi
+- # r += s
+- add %ecx,%esi
+- # v = x11
+- movl 144(%esp),%edi
+- # v += w
+- add %ebx,%edi
+- # p <<<= 7
+- rol $7,%eax
+- # p ^= x4
+- xorl 116(%esp),%eax
+- # t <<<= 7
+- rol $7,%edx
+- # t ^= x14
+- xorl 156(%esp),%edx
+- # r <<<= 7
+- rol $7,%esi
+- # r ^= x9
+- xorl 136(%esp),%esi
+- # v <<<= 7
+- rol $7,%edi
+- # v ^= x3
+- xorl 112(%esp),%edi
+- # x4 = p
+- movl %eax,116(%esp)
+- # x14 = t
+- movl %edx,156(%esp)
+- # p += x0
+- addl 100(%esp),%eax
+- # x9 = r
+- movl %esi,136(%esp)
+- # t += x10
+- addl 140(%esp),%edx
+- # x3 = v
+- movl %edi,112(%esp)
+- # p <<<= 9
+- rol $9,%eax
+- # p ^= x8
+- xorl 132(%esp),%eax
+- # t <<<= 9
+- rol $9,%edx
+- # t ^= x2
+- xorl 108(%esp),%edx
+- # s += r
+- add %esi,%ecx
+- # s <<<= 9
+- rol $9,%ecx
+- # s ^= x13
+- xorl 152(%esp),%ecx
+- # w += v
+- add %edi,%ebx
+- # w <<<= 9
+- rol $9,%ebx
+- # w ^= x7
+- xorl 128(%esp),%ebx
+- # x8 = p
+- movl %eax,132(%esp)
+- # x2 = t
+- movl %edx,108(%esp)
+- # p += x4
+- addl 116(%esp),%eax
+- # x13 = s
+- movl %ecx,152(%esp)
+- # t += x14
+- addl 156(%esp),%edx
+- # x7 = w
+- movl %ebx,128(%esp)
+- # p <<<= 13
+- rol $13,%eax
+- # p ^= x12
+- xorl 148(%esp),%eax
+- # t <<<= 13
+- rol $13,%edx
+- # t ^= x6
+- xorl 124(%esp),%edx
+- # r += s
+- add %ecx,%esi
+- # r <<<= 13
+- rol $13,%esi
+- # r ^= x1
+- xorl 104(%esp),%esi
+- # v += w
+- add %ebx,%edi
+- # v <<<= 13
+- rol $13,%edi
+- # v ^= x11
+- xorl 144(%esp),%edi
+- # x12 = p
+- movl %eax,148(%esp)
+- # x6 = t
+- movl %edx,124(%esp)
+- # p += x8
+- addl 132(%esp),%eax
+- # x1 = r
+- movl %esi,104(%esp)
+- # t += x2
+- addl 108(%esp),%edx
+- # x11 = v
+- movl %edi,144(%esp)
+- # p <<<= 18
+- rol $18,%eax
+- # p ^= x0
+- xorl 100(%esp),%eax
+- # t <<<= 18
+- rol $18,%edx
+- # t ^= x10
+- xorl 140(%esp),%edx
+- # s += r
+- add %esi,%ecx
+- # s <<<= 18
+- rol $18,%ecx
+- # s ^= x5
+- xorl 120(%esp),%ecx
+- # w += v
+- add %edi,%ebx
+- # w <<<= 18
+- rol $18,%ebx
+- # w ^= x15
+- xorl 160(%esp),%ebx
+- # x0 = p
+- movl %eax,100(%esp)
+- # x10 = t
+- movl %edx,140(%esp)
+- # p += x3
+- addl 112(%esp),%eax
+- # p <<<= 7
+- rol $7,%eax
+- # x5 = s
+- movl %ecx,120(%esp)
+- # t += x9
+- addl 136(%esp),%edx
+- # x15 = w
+- movl %ebx,160(%esp)
+- # r = x4
+- movl 116(%esp),%esi
+- # r += s
+- add %ecx,%esi
+- # v = x14
+- movl 156(%esp),%edi
+- # v += w
+- add %ebx,%edi
+- # p ^= x1
+- xorl 104(%esp),%eax
+- # t <<<= 7
+- rol $7,%edx
+- # t ^= x11
+- xorl 144(%esp),%edx
+- # r <<<= 7
+- rol $7,%esi
+- # r ^= x6
+- xorl 124(%esp),%esi
+- # v <<<= 7
+- rol $7,%edi
+- # v ^= x12
+- xorl 148(%esp),%edi
+- # x1 = p
+- movl %eax,104(%esp)
+- # x11 = t
+- movl %edx,144(%esp)
+- # p += x0
+- addl 100(%esp),%eax
+- # x6 = r
+- movl %esi,124(%esp)
+- # t += x10
+- addl 140(%esp),%edx
+- # x12 = v
+- movl %edi,148(%esp)
+- # p <<<= 9
+- rol $9,%eax
+- # p ^= x2
+- xorl 108(%esp),%eax
+- # t <<<= 9
+- rol $9,%edx
+- # t ^= x8
+- xorl 132(%esp),%edx
+- # s += r
+- add %esi,%ecx
+- # s <<<= 9
+- rol $9,%ecx
+- # s ^= x7
+- xorl 128(%esp),%ecx
+- # w += v
+- add %edi,%ebx
+- # w <<<= 9
+- rol $9,%ebx
+- # w ^= x13
+- xorl 152(%esp),%ebx
+- # x2 = p
+- movl %eax,108(%esp)
+- # x8 = t
+- movl %edx,132(%esp)
+- # p += x1
+- addl 104(%esp),%eax
+- # x7 = s
+- movl %ecx,128(%esp)
+- # t += x11
+- addl 144(%esp),%edx
+- # x13 = w
+- movl %ebx,152(%esp)
+- # p <<<= 13
+- rol $13,%eax
+- # p ^= x3
+- xorl 112(%esp),%eax
+- # t <<<= 13
+- rol $13,%edx
+- # t ^= x9
+- xorl 136(%esp),%edx
+- # r += s
+- add %ecx,%esi
+- # r <<<= 13
+- rol $13,%esi
+- # r ^= x4
+- xorl 116(%esp),%esi
+- # v += w
+- add %ebx,%edi
+- # v <<<= 13
+- rol $13,%edi
+- # v ^= x14
+- xorl 156(%esp),%edi
+- # x3 = p
+- movl %eax,112(%esp)
+- # x9 = t
+- movl %edx,136(%esp)
+- # p += x2
+- addl 108(%esp),%eax
+- # x4 = r
+- movl %esi,116(%esp)
+- # t += x8
+- addl 132(%esp),%edx
+- # x14 = v
+- movl %edi,156(%esp)
+- # p <<<= 18
+- rol $18,%eax
+- # p ^= x0
+- xorl 100(%esp),%eax
+- # t <<<= 18
+- rol $18,%edx
+- # t ^= x10
+- xorl 140(%esp),%edx
+- # s += r
+- add %esi,%ecx
+- # s <<<= 18
+- rol $18,%ecx
+- # s ^= x5
+- xorl 120(%esp),%ecx
+- # w += v
+- add %edi,%ebx
+- # w <<<= 18
+- rol $18,%ebx
+- # w ^= x15
+- xorl 160(%esp),%ebx
+- # x0 = p
+- movl %eax,100(%esp)
+- # x10 = t
+- movl %edx,140(%esp)
+- # p += x12
+- addl 148(%esp),%eax
+- # x5 = s
+- movl %ecx,120(%esp)
+- # t += x6
+- addl 124(%esp),%edx
+- # x15 = w
+- movl %ebx,160(%esp)
+- # r = x1
+- movl 104(%esp),%esi
+- # r += s
+- add %ecx,%esi
+- # v = x11
+- movl 144(%esp),%edi
+- # v += w
+- add %ebx,%edi
+- # p <<<= 7
+- rol $7,%eax
+- # p ^= x4
+- xorl 116(%esp),%eax
+- # t <<<= 7
+- rol $7,%edx
+- # t ^= x14
+- xorl 156(%esp),%edx
+- # r <<<= 7
+- rol $7,%esi
+- # r ^= x9
+- xorl 136(%esp),%esi
+- # v <<<= 7
+- rol $7,%edi
+- # v ^= x3
+- xorl 112(%esp),%edi
+- # x4 = p
+- movl %eax,116(%esp)
+- # x14 = t
+- movl %edx,156(%esp)
+- # p += x0
+- addl 100(%esp),%eax
+- # x9 = r
+- movl %esi,136(%esp)
+- # t += x10
+- addl 140(%esp),%edx
+- # x3 = v
+- movl %edi,112(%esp)
+- # p <<<= 9
+- rol $9,%eax
+- # p ^= x8
+- xorl 132(%esp),%eax
+- # t <<<= 9
+- rol $9,%edx
+- # t ^= x2
+- xorl 108(%esp),%edx
+- # s += r
+- add %esi,%ecx
+- # s <<<= 9
+- rol $9,%ecx
+- # s ^= x13
+- xorl 152(%esp),%ecx
+- # w += v
+- add %edi,%ebx
+- # w <<<= 9
+- rol $9,%ebx
+- # w ^= x7
+- xorl 128(%esp),%ebx
+- # x8 = p
+- movl %eax,132(%esp)
+- # x2 = t
+- movl %edx,108(%esp)
+- # p += x4
+- addl 116(%esp),%eax
+- # x13 = s
+- movl %ecx,152(%esp)
+- # t += x14
+- addl 156(%esp),%edx
+- # x7 = w
+- movl %ebx,128(%esp)
+- # p <<<= 13
+- rol $13,%eax
+- # p ^= x12
+- xorl 148(%esp),%eax
+- # t <<<= 13
+- rol $13,%edx
+- # t ^= x6
+- xorl 124(%esp),%edx
+- # r += s
+- add %ecx,%esi
+- # r <<<= 13
+- rol $13,%esi
+- # r ^= x1
+- xorl 104(%esp),%esi
+- # v += w
+- add %ebx,%edi
+- # v <<<= 13
+- rol $13,%edi
+- # v ^= x11
+- xorl 144(%esp),%edi
+- # x12 = p
+- movl %eax,148(%esp)
+- # x6 = t
+- movl %edx,124(%esp)
+- # p += x8
+- addl 132(%esp),%eax
+- # x1 = r
+- movl %esi,104(%esp)
+- # t += x2
+- addl 108(%esp),%edx
+- # x11 = v
+- movl %edi,144(%esp)
+- # p <<<= 18
+- rol $18,%eax
+- # p ^= x0
+- xorl 100(%esp),%eax
+- # t <<<= 18
+- rol $18,%edx
+- # t ^= x10
+- xorl 140(%esp),%edx
+- # s += r
+- add %esi,%ecx
+- # s <<<= 18
+- rol $18,%ecx
+- # s ^= x5
+- xorl 120(%esp),%ecx
+- # w += v
+- add %edi,%ebx
+- # w <<<= 18
+- rol $18,%ebx
+- # w ^= x15
+- xorl 160(%esp),%ebx
+- # x0 = p
+- movl %eax,100(%esp)
+- # x10 = t
+- movl %edx,140(%esp)
+- # p += x3
+- addl 112(%esp),%eax
+- # p <<<= 7
+- rol $7,%eax
+- # x5 = s
+- movl %ecx,120(%esp)
+- # t += x9
+- addl 136(%esp),%edx
+- # x15 = w
+- movl %ebx,160(%esp)
+- # r = x4
+- movl 116(%esp),%esi
+- # r += s
+- add %ecx,%esi
+- # v = x14
+- movl 156(%esp),%edi
+- # v += w
+- add %ebx,%edi
+- # p ^= x1
+- xorl 104(%esp),%eax
+- # t <<<= 7
+- rol $7,%edx
+- # t ^= x11
+- xorl 144(%esp),%edx
+- # r <<<= 7
+- rol $7,%esi
+- # r ^= x6
+- xorl 124(%esp),%esi
+- # v <<<= 7
+- rol $7,%edi
+- # v ^= x12
+- xorl 148(%esp),%edi
+- # x1 = p
+- movl %eax,104(%esp)
+- # x11 = t
+- movl %edx,144(%esp)
+- # p += x0
+- addl 100(%esp),%eax
+- # x6 = r
+- movl %esi,124(%esp)
+- # t += x10
+- addl 140(%esp),%edx
+- # x12 = v
+- movl %edi,148(%esp)
+- # p <<<= 9
+- rol $9,%eax
+- # p ^= x2
+- xorl 108(%esp),%eax
+- # t <<<= 9
+- rol $9,%edx
+- # t ^= x8
+- xorl 132(%esp),%edx
+- # s += r
+- add %esi,%ecx
+- # s <<<= 9
+- rol $9,%ecx
+- # s ^= x7
+- xorl 128(%esp),%ecx
+- # w += v
+- add %edi,%ebx
+- # w <<<= 9
+- rol $9,%ebx
+- # w ^= x13
+- xorl 152(%esp),%ebx
+- # x2 = p
+- movl %eax,108(%esp)
+- # x8 = t
+- movl %edx,132(%esp)
+- # p += x1
+- addl 104(%esp),%eax
+- # x7 = s
+- movl %ecx,128(%esp)
+- # t += x11
+- addl 144(%esp),%edx
+- # x13 = w
+- movl %ebx,152(%esp)
+- # p <<<= 13
+- rol $13,%eax
+- # p ^= x3
+- xorl 112(%esp),%eax
+- # t <<<= 13
+- rol $13,%edx
+- # t ^= x9
+- xorl 136(%esp),%edx
+- # r += s
+- add %ecx,%esi
+- # r <<<= 13
+- rol $13,%esi
+- # r ^= x4
+- xorl 116(%esp),%esi
+- # v += w
+- add %ebx,%edi
+- # v <<<= 13
+- rol $13,%edi
+- # v ^= x14
+- xorl 156(%esp),%edi
+- # x3 = p
+- movl %eax,112(%esp)
+- # x9 = t
+- movl %edx,136(%esp)
+- # p += x2
+- addl 108(%esp),%eax
+- # x4 = r
+- movl %esi,116(%esp)
+- # t += x8
+- addl 132(%esp),%edx
+- # x14 = v
+- movl %edi,156(%esp)
+- # p <<<= 18
+- rol $18,%eax
+- # p ^= x0
+- xorl 100(%esp),%eax
+- # t <<<= 18
+- rol $18,%edx
+- # t ^= x10
+- xorl 140(%esp),%edx
+- # s += r
+- add %esi,%ecx
+- # s <<<= 18
+- rol $18,%ecx
+- # s ^= x5
+- xorl 120(%esp),%ecx
+- # w += v
+- add %edi,%ebx
+- # w <<<= 18
+- rol $18,%ebx
+- # w ^= x15
+- xorl 160(%esp),%ebx
+- # i -= 4
+- sub $4,%ebp
+- # goto mainloop if unsigned >
+- ja ._mainloop
+- # x0 = p
+- movl %eax,100(%esp)
+- # x5 = s
+- movl %ecx,120(%esp)
+- # x10 = t
+- movl %edx,140(%esp)
+- # x15 = w
+- movl %ebx,160(%esp)
+- # out = out_backup
+- movl 72(%esp),%edi
+- # m = m_backup
+- movl 68(%esp),%esi
+- # in0 = x0
+- movl 100(%esp),%eax
+- # in1 = x1
+- movl 104(%esp),%ecx
+- # in0 += j0
+- addl 164(%esp),%eax
+- # in1 += j1
+- addl 168(%esp),%ecx
+- # in0 ^= *(uint32 *) (m + 0)
+- xorl 0(%esi),%eax
+- # in1 ^= *(uint32 *) (m + 4)
+- xorl 4(%esi),%ecx
+- # *(uint32 *) (out + 0) = in0
+- movl %eax,0(%edi)
+- # *(uint32 *) (out + 4) = in1
+- movl %ecx,4(%edi)
+- # in2 = x2
+- movl 108(%esp),%eax
+- # in3 = x3
+- movl 112(%esp),%ecx
+- # in2 += j2
+- addl 172(%esp),%eax
+- # in3 += j3
+- addl 176(%esp),%ecx
+- # in2 ^= *(uint32 *) (m + 8)
+- xorl 8(%esi),%eax
+- # in3 ^= *(uint32 *) (m + 12)
+- xorl 12(%esi),%ecx
+- # *(uint32 *) (out + 8) = in2
+- movl %eax,8(%edi)
+- # *(uint32 *) (out + 12) = in3
+- movl %ecx,12(%edi)
+- # in4 = x4
+- movl 116(%esp),%eax
+- # in5 = x5
+- movl 120(%esp),%ecx
+- # in4 += j4
+- addl 180(%esp),%eax
+- # in5 += j5
+- addl 184(%esp),%ecx
+- # in4 ^= *(uint32 *) (m + 16)
+- xorl 16(%esi),%eax
+- # in5 ^= *(uint32 *) (m + 20)
+- xorl 20(%esi),%ecx
+- # *(uint32 *) (out + 16) = in4
+- movl %eax,16(%edi)
+- # *(uint32 *) (out + 20) = in5
+- movl %ecx,20(%edi)
+- # in6 = x6
+- movl 124(%esp),%eax
+- # in7 = x7
+- movl 128(%esp),%ecx
+- # in6 += j6
+- addl 188(%esp),%eax
+- # in7 += j7
+- addl 192(%esp),%ecx
+- # in6 ^= *(uint32 *) (m + 24)
+- xorl 24(%esi),%eax
+- # in7 ^= *(uint32 *) (m + 28)
+- xorl 28(%esi),%ecx
+- # *(uint32 *) (out + 24) = in6
+- movl %eax,24(%edi)
+- # *(uint32 *) (out + 28) = in7
+- movl %ecx,28(%edi)
+- # in8 = x8
+- movl 132(%esp),%eax
+- # in9 = x9
+- movl 136(%esp),%ecx
+- # in8 += j8
+- addl 196(%esp),%eax
+- # in9 += j9
+- addl 200(%esp),%ecx
+- # in8 ^= *(uint32 *) (m + 32)
+- xorl 32(%esi),%eax
+- # in9 ^= *(uint32 *) (m + 36)
+- xorl 36(%esi),%ecx
+- # *(uint32 *) (out + 32) = in8
+- movl %eax,32(%edi)
+- # *(uint32 *) (out + 36) = in9
+- movl %ecx,36(%edi)
+- # in10 = x10
+- movl 140(%esp),%eax
+- # in11 = x11
+- movl 144(%esp),%ecx
+- # in10 += j10
+- addl 204(%esp),%eax
+- # in11 += j11
+- addl 208(%esp),%ecx
+- # in10 ^= *(uint32 *) (m + 40)
+- xorl 40(%esi),%eax
+- # in11 ^= *(uint32 *) (m + 44)
+- xorl 44(%esi),%ecx
+- # *(uint32 *) (out + 40) = in10
+- movl %eax,40(%edi)
+- # *(uint32 *) (out + 44) = in11
+- movl %ecx,44(%edi)
+- # in12 = x12
+- movl 148(%esp),%eax
+- # in13 = x13
+- movl 152(%esp),%ecx
+- # in12 += j12
+- addl 212(%esp),%eax
+- # in13 += j13
+- addl 216(%esp),%ecx
+- # in12 ^= *(uint32 *) (m + 48)
+- xorl 48(%esi),%eax
+- # in13 ^= *(uint32 *) (m + 52)
+- xorl 52(%esi),%ecx
+- # *(uint32 *) (out + 48) = in12
+- movl %eax,48(%edi)
+- # *(uint32 *) (out + 52) = in13
+- movl %ecx,52(%edi)
+- # in14 = x14
+- movl 156(%esp),%eax
+- # in15 = x15
+- movl 160(%esp),%ecx
+- # in14 += j14
+- addl 220(%esp),%eax
+- # in15 += j15
+- addl 224(%esp),%ecx
+- # in14 ^= *(uint32 *) (m + 56)
+- xorl 56(%esi),%eax
+- # in15 ^= *(uint32 *) (m + 60)
+- xorl 60(%esi),%ecx
+- # *(uint32 *) (out + 56) = in14
+- movl %eax,56(%edi)
+- # *(uint32 *) (out + 60) = in15
+- movl %ecx,60(%edi)
+- # bytes = bytes_backup
+- movl 76(%esp),%ebx
+- # in8 = j8
+- movl 196(%esp),%eax
+- # in9 = j9
+- movl 200(%esp),%ecx
+- # in8 += 1
+- add $1,%eax
+- # in9 += 0 + carry
+- adc $0,%ecx
+- # j8 = in8
+- movl %eax,196(%esp)
+- # j9 = in9
+- movl %ecx,200(%esp)
+- # bytes - 64
+- cmp $64,%ebx
+- # goto bytesatleast65 if unsigned>
+- ja ._bytesatleast65
+- # goto bytesatleast64 if unsigned>=
+- jae ._bytesatleast64
+- # m = out
+- mov %edi,%esi
+- # out = ctarget
+- movl 228(%esp),%edi
+- # i = bytes
+- mov %ebx,%ecx
+- # while (i) { *out++ = *m++; --i }
+- rep movsb
+-._bytesatleast64:
+- # x = x_backup
+- movl 64(%esp),%eax
+- # in8 = j8
+- movl 196(%esp),%ecx
+- # in9 = j9
+- movl 200(%esp),%edx
+- # *(uint32 *) (x + 32) = in8
+- movl %ecx,32(%eax)
+- # *(uint32 *) (x + 36) = in9
+- movl %edx,36(%eax)
+-._done:
+- # eax = eax_stack
+- movl 80(%esp),%eax
+- # ebx = ebx_stack
+- movl 84(%esp),%ebx
+- # esi = esi_stack
+- movl 88(%esp),%esi
+- # edi = edi_stack
+- movl 92(%esp),%edi
+- # ebp = ebp_stack
+- movl 96(%esp),%ebp
+- # leave
+- add %eax,%esp
+- ret
+-._bytesatleast65:
+- # bytes -= 64
+- sub $64,%ebx
+- # out += 64
+- add $64,%edi
+- # m += 64
+- add $64,%esi
+- # goto bytesatleast1
+- jmp ._bytesatleast1
+-ENDPROC(salsa20_encrypt_bytes)
+diff --git a/arch/x86/crypto/salsa20-x86_64-asm_64.S b/arch/x86/crypto/salsa20-x86_64-asm_64.S
+deleted file mode 100644
+index 03a4918f41ee..000000000000
+--- a/arch/x86/crypto/salsa20-x86_64-asm_64.S
++++ /dev/null
+@@ -1,805 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#include <linux/linkage.h>
+-
+-# enter salsa20_encrypt_bytes
+-ENTRY(salsa20_encrypt_bytes)
+- mov %rsp,%r11
+- and $31,%r11
+- add $256,%r11
+- sub %r11,%rsp
+- # x = arg1
+- mov %rdi,%r8
+- # m = arg2
+- mov %rsi,%rsi
+- # out = arg3
+- mov %rdx,%rdi
+- # bytes = arg4
+- mov %rcx,%rdx
+- # unsigned>? bytes - 0
+- cmp $0,%rdx
+- # comment:fp stack unchanged by jump
+- # goto done if !unsigned>
+- jbe ._done
+- # comment:fp stack unchanged by fallthrough
+-# start:
+-._start:
+- # r11_stack = r11
+- movq %r11,0(%rsp)
+- # r12_stack = r12
+- movq %r12,8(%rsp)
+- # r13_stack = r13
+- movq %r13,16(%rsp)
+- # r14_stack = r14
+- movq %r14,24(%rsp)
+- # r15_stack = r15
+- movq %r15,32(%rsp)
+- # rbx_stack = rbx
+- movq %rbx,40(%rsp)
+- # rbp_stack = rbp
+- movq %rbp,48(%rsp)
+- # in0 = *(uint64 *) (x + 0)
+- movq 0(%r8),%rcx
+- # in2 = *(uint64 *) (x + 8)
+- movq 8(%r8),%r9
+- # in4 = *(uint64 *) (x + 16)
+- movq 16(%r8),%rax
+- # in6 = *(uint64 *) (x + 24)
+- movq 24(%r8),%r10
+- # in8 = *(uint64 *) (x + 32)
+- movq 32(%r8),%r11
+- # in10 = *(uint64 *) (x + 40)
+- movq 40(%r8),%r12
+- # in12 = *(uint64 *) (x + 48)
+- movq 48(%r8),%r13
+- # in14 = *(uint64 *) (x + 56)
+- movq 56(%r8),%r14
+- # j0 = in0
+- movq %rcx,56(%rsp)
+- # j2 = in2
+- movq %r9,64(%rsp)
+- # j4 = in4
+- movq %rax,72(%rsp)
+- # j6 = in6
+- movq %r10,80(%rsp)
+- # j8 = in8
+- movq %r11,88(%rsp)
+- # j10 = in10
+- movq %r12,96(%rsp)
+- # j12 = in12
+- movq %r13,104(%rsp)
+- # j14 = in14
+- movq %r14,112(%rsp)
+- # x_backup = x
+- movq %r8,120(%rsp)
+-# bytesatleast1:
+-._bytesatleast1:
+- # unsigned<? bytes - 64
+- cmp $64,%rdx
+- # comment:fp stack unchanged by jump
+- # goto nocopy if !unsigned<
+- jae ._nocopy
+- # ctarget = out
+- movq %rdi,128(%rsp)
+- # out = &tmp
+- leaq 192(%rsp),%rdi
+- # i = bytes
+- mov %rdx,%rcx
+- # while (i) { *out++ = *m++; --i }
+- rep movsb
+- # out = &tmp
+- leaq 192(%rsp),%rdi
+- # m = &tmp
+- leaq 192(%rsp),%rsi
+- # comment:fp stack unchanged by fallthrough
+-# nocopy:
+-._nocopy:
+- # out_backup = out
+- movq %rdi,136(%rsp)
+- # m_backup = m
+- movq %rsi,144(%rsp)
+- # bytes_backup = bytes
+- movq %rdx,152(%rsp)
+- # x1 = j0
+- movq 56(%rsp),%rdi
+- # x0 = x1
+- mov %rdi,%rdx
+- # (uint64) x1 >>= 32
+- shr $32,%rdi
+- # x3 = j2
+- movq 64(%rsp),%rsi
+- # x2 = x3
+- mov %rsi,%rcx
+- # (uint64) x3 >>= 32
+- shr $32,%rsi
+- # x5 = j4
+- movq 72(%rsp),%r8
+- # x4 = x5
+- mov %r8,%r9
+- # (uint64) x5 >>= 32
+- shr $32,%r8
+- # x5_stack = x5
+- movq %r8,160(%rsp)
+- # x7 = j6
+- movq 80(%rsp),%r8
+- # x6 = x7
+- mov %r8,%rax
+- # (uint64) x7 >>= 32
+- shr $32,%r8
+- # x9 = j8
+- movq 88(%rsp),%r10
+- # x8 = x9
+- mov %r10,%r11
+- # (uint64) x9 >>= 32
+- shr $32,%r10
+- # x11 = j10
+- movq 96(%rsp),%r12
+- # x10 = x11
+- mov %r12,%r13
+- # x10_stack = x10
+- movq %r13,168(%rsp)
+- # (uint64) x11 >>= 32
+- shr $32,%r12
+- # x13 = j12
+- movq 104(%rsp),%r13
+- # x12 = x13
+- mov %r13,%r14
+- # (uint64) x13 >>= 32
+- shr $32,%r13
+- # x15 = j14
+- movq 112(%rsp),%r15
+- # x14 = x15
+- mov %r15,%rbx
+- # (uint64) x15 >>= 32
+- shr $32,%r15
+- # x15_stack = x15
+- movq %r15,176(%rsp)
+- # i = 20
+- mov $20,%r15
+-# mainloop:
+-._mainloop:
+- # i_backup = i
+- movq %r15,184(%rsp)
+- # x5 = x5_stack
+- movq 160(%rsp),%r15
+- # a = x12 + x0
+- lea (%r14,%rdx),%rbp
+- # (uint32) a <<<= 7
+- rol $7,%ebp
+- # x4 ^= a
+- xor %rbp,%r9
+- # b = x1 + x5
+- lea (%rdi,%r15),%rbp
+- # (uint32) b <<<= 7
+- rol $7,%ebp
+- # x9 ^= b
+- xor %rbp,%r10
+- # a = x0 + x4
+- lea (%rdx,%r9),%rbp
+- # (uint32) a <<<= 9
+- rol $9,%ebp
+- # x8 ^= a
+- xor %rbp,%r11
+- # b = x5 + x9
+- lea (%r15,%r10),%rbp
+- # (uint32) b <<<= 9
+- rol $9,%ebp
+- # x13 ^= b
+- xor %rbp,%r13
+- # a = x4 + x8
+- lea (%r9,%r11),%rbp
+- # (uint32) a <<<= 13
+- rol $13,%ebp
+- # x12 ^= a
+- xor %rbp,%r14
+- # b = x9 + x13
+- lea (%r10,%r13),%rbp
+- # (uint32) b <<<= 13
+- rol $13,%ebp
+- # x1 ^= b
+- xor %rbp,%rdi
+- # a = x8 + x12
+- lea (%r11,%r14),%rbp
+- # (uint32) a <<<= 18
+- rol $18,%ebp
+- # x0 ^= a
+- xor %rbp,%rdx
+- # b = x13 + x1
+- lea (%r13,%rdi),%rbp
+- # (uint32) b <<<= 18
+- rol $18,%ebp
+- # x5 ^= b
+- xor %rbp,%r15
+- # x10 = x10_stack
+- movq 168(%rsp),%rbp
+- # x5_stack = x5
+- movq %r15,160(%rsp)
+- # c = x6 + x10
+- lea (%rax,%rbp),%r15
+- # (uint32) c <<<= 7
+- rol $7,%r15d
+- # x14 ^= c
+- xor %r15,%rbx
+- # c = x10 + x14
+- lea (%rbp,%rbx),%r15
+- # (uint32) c <<<= 9
+- rol $9,%r15d
+- # x2 ^= c
+- xor %r15,%rcx
+- # c = x14 + x2
+- lea (%rbx,%rcx),%r15
+- # (uint32) c <<<= 13
+- rol $13,%r15d
+- # x6 ^= c
+- xor %r15,%rax
+- # c = x2 + x6
+- lea (%rcx,%rax),%r15
+- # (uint32) c <<<= 18
+- rol $18,%r15d
+- # x10 ^= c
+- xor %r15,%rbp
+- # x15 = x15_stack
+- movq 176(%rsp),%r15
+- # x10_stack = x10
+- movq %rbp,168(%rsp)
+- # d = x11 + x15
+- lea (%r12,%r15),%rbp
+- # (uint32) d <<<= 7
+- rol $7,%ebp
+- # x3 ^= d
+- xor %rbp,%rsi
+- # d = x15 + x3
+- lea (%r15,%rsi),%rbp
+- # (uint32) d <<<= 9
+- rol $9,%ebp
+- # x7 ^= d
+- xor %rbp,%r8
+- # d = x3 + x7
+- lea (%rsi,%r8),%rbp
+- # (uint32) d <<<= 13
+- rol $13,%ebp
+- # x11 ^= d
+- xor %rbp,%r12
+- # d = x7 + x11
+- lea (%r8,%r12),%rbp
+- # (uint32) d <<<= 18
+- rol $18,%ebp
+- # x15 ^= d
+- xor %rbp,%r15
+- # x15_stack = x15
+- movq %r15,176(%rsp)
+- # x5 = x5_stack
+- movq 160(%rsp),%r15
+- # a = x3 + x0
+- lea (%rsi,%rdx),%rbp
+- # (uint32) a <<<= 7
+- rol $7,%ebp
+- # x1 ^= a
+- xor %rbp,%rdi
+- # b = x4 + x5
+- lea (%r9,%r15),%rbp
+- # (uint32) b <<<= 7
+- rol $7,%ebp
+- # x6 ^= b
+- xor %rbp,%rax
+- # a = x0 + x1
+- lea (%rdx,%rdi),%rbp
+- # (uint32) a <<<= 9
+- rol $9,%ebp
+- # x2 ^= a
+- xor %rbp,%rcx
+- # b = x5 + x6
+- lea (%r15,%rax),%rbp
+- # (uint32) b <<<= 9
+- rol $9,%ebp
+- # x7 ^= b
+- xor %rbp,%r8
+- # a = x1 + x2
+- lea (%rdi,%rcx),%rbp
+- # (uint32) a <<<= 13
+- rol $13,%ebp
+- # x3 ^= a
+- xor %rbp,%rsi
+- # b = x6 + x7
+- lea (%rax,%r8),%rbp
+- # (uint32) b <<<= 13
+- rol $13,%ebp
+- # x4 ^= b
+- xor %rbp,%r9
+- # a = x2 + x3
+- lea (%rcx,%rsi),%rbp
+- # (uint32) a <<<= 18
+- rol $18,%ebp
+- # x0 ^= a
+- xor %rbp,%rdx
+- # b = x7 + x4
+- lea (%r8,%r9),%rbp
+- # (uint32) b <<<= 18
+- rol $18,%ebp
+- # x5 ^= b
+- xor %rbp,%r15
+- # x10 = x10_stack
+- movq 168(%rsp),%rbp
+- # x5_stack = x5
+- movq %r15,160(%rsp)
+- # c = x9 + x10
+- lea (%r10,%rbp),%r15
+- # (uint32) c <<<= 7
+- rol $7,%r15d
+- # x11 ^= c
+- xor %r15,%r12
+- # c = x10 + x11
+- lea (%rbp,%r12),%r15
+- # (uint32) c <<<= 9
+- rol $9,%r15d
+- # x8 ^= c
+- xor %r15,%r11
+- # c = x11 + x8
+- lea (%r12,%r11),%r15
+- # (uint32) c <<<= 13
+- rol $13,%r15d
+- # x9 ^= c
+- xor %r15,%r10
+- # c = x8 + x9
+- lea (%r11,%r10),%r15
+- # (uint32) c <<<= 18
+- rol $18,%r15d
+- # x10 ^= c
+- xor %r15,%rbp
+- # x15 = x15_stack
+- movq 176(%rsp),%r15
+- # x10_stack = x10
+- movq %rbp,168(%rsp)
+- # d = x14 + x15
+- lea (%rbx,%r15),%rbp
+- # (uint32) d <<<= 7
+- rol $7,%ebp
+- # x12 ^= d
+- xor %rbp,%r14
+- # d = x15 + x12
+- lea (%r15,%r14),%rbp
+- # (uint32) d <<<= 9
+- rol $9,%ebp
+- # x13 ^= d
+- xor %rbp,%r13
+- # d = x12 + x13
+- lea (%r14,%r13),%rbp
+- # (uint32) d <<<= 13
+- rol $13,%ebp
+- # x14 ^= d
+- xor %rbp,%rbx
+- # d = x13 + x14
+- lea (%r13,%rbx),%rbp
+- # (uint32) d <<<= 18
+- rol $18,%ebp
+- # x15 ^= d
+- xor %rbp,%r15
+- # x15_stack = x15
+- movq %r15,176(%rsp)
+- # x5 = x5_stack
+- movq 160(%rsp),%r15
+- # a = x12 + x0
+- lea (%r14,%rdx),%rbp
+- # (uint32) a <<<= 7
+- rol $7,%ebp
+- # x4 ^= a
+- xor %rbp,%r9
+- # b = x1 + x5
+- lea (%rdi,%r15),%rbp
+- # (uint32) b <<<= 7
+- rol $7,%ebp
+- # x9 ^= b
+- xor %rbp,%r10
+- # a = x0 + x4
+- lea (%rdx,%r9),%rbp
+- # (uint32) a <<<= 9
+- rol $9,%ebp
+- # x8 ^= a
+- xor %rbp,%r11
+- # b = x5 + x9
+- lea (%r15,%r10),%rbp
+- # (uint32) b <<<= 9
+- rol $9,%ebp
+- # x13 ^= b
+- xor %rbp,%r13
+- # a = x4 + x8
+- lea (%r9,%r11),%rbp
+- # (uint32) a <<<= 13
+- rol $13,%ebp
+- # x12 ^= a
+- xor %rbp,%r14
+- # b = x9 + x13
+- lea (%r10,%r13),%rbp
+- # (uint32) b <<<= 13
+- rol $13,%ebp
+- # x1 ^= b
+- xor %rbp,%rdi
+- # a = x8 + x12
+- lea (%r11,%r14),%rbp
+- # (uint32) a <<<= 18
+- rol $18,%ebp
+- # x0 ^= a
+- xor %rbp,%rdx
+- # b = x13 + x1
+- lea (%r13,%rdi),%rbp
+- # (uint32) b <<<= 18
+- rol $18,%ebp
+- # x5 ^= b
+- xor %rbp,%r15
+- # x10 = x10_stack
+- movq 168(%rsp),%rbp
+- # x5_stack = x5
+- movq %r15,160(%rsp)
+- # c = x6 + x10
+- lea (%rax,%rbp),%r15
+- # (uint32) c <<<= 7
+- rol $7,%r15d
+- # x14 ^= c
+- xor %r15,%rbx
+- # c = x10 + x14
+- lea (%rbp,%rbx),%r15
+- # (uint32) c <<<= 9
+- rol $9,%r15d
+- # x2 ^= c
+- xor %r15,%rcx
+- # c = x14 + x2
+- lea (%rbx,%rcx),%r15
+- # (uint32) c <<<= 13
+- rol $13,%r15d
+- # x6 ^= c
+- xor %r15,%rax
+- # c = x2 + x6
+- lea (%rcx,%rax),%r15
+- # (uint32) c <<<= 18
+- rol $18,%r15d
+- # x10 ^= c
+- xor %r15,%rbp
+- # x15 = x15_stack
+- movq 176(%rsp),%r15
+- # x10_stack = x10
+- movq %rbp,168(%rsp)
+- # d = x11 + x15
+- lea (%r12,%r15),%rbp
+- # (uint32) d <<<= 7
+- rol $7,%ebp
+- # x3 ^= d
+- xor %rbp,%rsi
+- # d = x15 + x3
+- lea (%r15,%rsi),%rbp
+- # (uint32) d <<<= 9
+- rol $9,%ebp
+- # x7 ^= d
+- xor %rbp,%r8
+- # d = x3 + x7
+- lea (%rsi,%r8),%rbp
+- # (uint32) d <<<= 13
+- rol $13,%ebp
+- # x11 ^= d
+- xor %rbp,%r12
+- # d = x7 + x11
+- lea (%r8,%r12),%rbp
+- # (uint32) d <<<= 18
+- rol $18,%ebp
+- # x15 ^= d
+- xor %rbp,%r15
+- # x15_stack = x15
+- movq %r15,176(%rsp)
+- # x5 = x5_stack
+- movq 160(%rsp),%r15
+- # a = x3 + x0
+- lea (%rsi,%rdx),%rbp
+- # (uint32) a <<<= 7
+- rol $7,%ebp
+- # x1 ^= a
+- xor %rbp,%rdi
+- # b = x4 + x5
+- lea (%r9,%r15),%rbp
+- # (uint32) b <<<= 7
+- rol $7,%ebp
+- # x6 ^= b
+- xor %rbp,%rax
+- # a = x0 + x1
+- lea (%rdx,%rdi),%rbp
+- # (uint32) a <<<= 9
+- rol $9,%ebp
+- # x2 ^= a
+- xor %rbp,%rcx
+- # b = x5 + x6
+- lea (%r15,%rax),%rbp
+- # (uint32) b <<<= 9
+- rol $9,%ebp
+- # x7 ^= b
+- xor %rbp,%r8
+- # a = x1 + x2
+- lea (%rdi,%rcx),%rbp
+- # (uint32) a <<<= 13
+- rol $13,%ebp
+- # x3 ^= a
+- xor %rbp,%rsi
+- # b = x6 + x7
+- lea (%rax,%r8),%rbp
+- # (uint32) b <<<= 13
+- rol $13,%ebp
+- # x4 ^= b
+- xor %rbp,%r9
+- # a = x2 + x3
+- lea (%rcx,%rsi),%rbp
+- # (uint32) a <<<= 18
+- rol $18,%ebp
+- # x0 ^= a
+- xor %rbp,%rdx
+- # b = x7 + x4
+- lea (%r8,%r9),%rbp
+- # (uint32) b <<<= 18
+- rol $18,%ebp
+- # x5 ^= b
+- xor %rbp,%r15
+- # x10 = x10_stack
+- movq 168(%rsp),%rbp
+- # x5_stack = x5
+- movq %r15,160(%rsp)
+- # c = x9 + x10
+- lea (%r10,%rbp),%r15
+- # (uint32) c <<<= 7
+- rol $7,%r15d
+- # x11 ^= c
+- xor %r15,%r12
+- # c = x10 + x11
+- lea (%rbp,%r12),%r15
+- # (uint32) c <<<= 9
+- rol $9,%r15d
+- # x8 ^= c
+- xor %r15,%r11
+- # c = x11 + x8
+- lea (%r12,%r11),%r15
+- # (uint32) c <<<= 13
+- rol $13,%r15d
+- # x9 ^= c
+- xor %r15,%r10
+- # c = x8 + x9
+- lea (%r11,%r10),%r15
+- # (uint32) c <<<= 18
+- rol $18,%r15d
+- # x10 ^= c
+- xor %r15,%rbp
+- # x15 = x15_stack
+- movq 176(%rsp),%r15
+- # x10_stack = x10
+- movq %rbp,168(%rsp)
+- # d = x14 + x15
+- lea (%rbx,%r15),%rbp
+- # (uint32) d <<<= 7
+- rol $7,%ebp
+- # x12 ^= d
+- xor %rbp,%r14
+- # d = x15 + x12
+- lea (%r15,%r14),%rbp
+- # (uint32) d <<<= 9
+- rol $9,%ebp
+- # x13 ^= d
+- xor %rbp,%r13
+- # d = x12 + x13
+- lea (%r14,%r13),%rbp
+- # (uint32) d <<<= 13
+- rol $13,%ebp
+- # x14 ^= d
+- xor %rbp,%rbx
+- # d = x13 + x14
+- lea (%r13,%rbx),%rbp
+- # (uint32) d <<<= 18
+- rol $18,%ebp
+- # x15 ^= d
+- xor %rbp,%r15
+- # x15_stack = x15
+- movq %r15,176(%rsp)
+- # i = i_backup
+- movq 184(%rsp),%r15
+- # unsigned>? i -= 4
+- sub $4,%r15
+- # comment:fp stack unchanged by jump
+- # goto mainloop if unsigned>
+- ja ._mainloop
+- # (uint32) x2 += j2
+- addl 64(%rsp),%ecx
+- # x3 <<= 32
+- shl $32,%rsi
+- # x3 += j2
+- addq 64(%rsp),%rsi
+- # (uint64) x3 >>= 32
+- shr $32,%rsi
+- # x3 <<= 32
+- shl $32,%rsi
+- # x2 += x3
+- add %rsi,%rcx
+- # (uint32) x6 += j6
+- addl 80(%rsp),%eax
+- # x7 <<= 32
+- shl $32,%r8
+- # x7 += j6
+- addq 80(%rsp),%r8
+- # (uint64) x7 >>= 32
+- shr $32,%r8
+- # x7 <<= 32
+- shl $32,%r8
+- # x6 += x7
+- add %r8,%rax
+- # (uint32) x8 += j8
+- addl 88(%rsp),%r11d
+- # x9 <<= 32
+- shl $32,%r10
+- # x9 += j8
+- addq 88(%rsp),%r10
+- # (uint64) x9 >>= 32
+- shr $32,%r10
+- # x9 <<= 32
+- shl $32,%r10
+- # x8 += x9
+- add %r10,%r11
+- # (uint32) x12 += j12
+- addl 104(%rsp),%r14d
+- # x13 <<= 32
+- shl $32,%r13
+- # x13 += j12
+- addq 104(%rsp),%r13
+- # (uint64) x13 >>= 32
+- shr $32,%r13
+- # x13 <<= 32
+- shl $32,%r13
+- # x12 += x13
+- add %r13,%r14
+- # (uint32) x0 += j0
+- addl 56(%rsp),%edx
+- # x1 <<= 32
+- shl $32,%rdi
+- # x1 += j0
+- addq 56(%rsp),%rdi
+- # (uint64) x1 >>= 32
+- shr $32,%rdi
+- # x1 <<= 32
+- shl $32,%rdi
+- # x0 += x1
+- add %rdi,%rdx
+- # x5 = x5_stack
+- movq 160(%rsp),%rdi
+- # (uint32) x4 += j4
+- addl 72(%rsp),%r9d
+- # x5 <<= 32
+- shl $32,%rdi
+- # x5 += j4
+- addq 72(%rsp),%rdi
+- # (uint64) x5 >>= 32
+- shr $32,%rdi
+- # x5 <<= 32
+- shl $32,%rdi
+- # x4 += x5
+- add %rdi,%r9
+- # x10 = x10_stack
+- movq 168(%rsp),%r8
+- # (uint32) x10 += j10
+- addl 96(%rsp),%r8d
+- # x11 <<= 32
+- shl $32,%r12
+- # x11 += j10
+- addq 96(%rsp),%r12
+- # (uint64) x11 >>= 32
+- shr $32,%r12
+- # x11 <<= 32
+- shl $32,%r12
+- # x10 += x11
+- add %r12,%r8
+- # x15 = x15_stack
+- movq 176(%rsp),%rdi
+- # (uint32) x14 += j14
+- addl 112(%rsp),%ebx
+- # x15 <<= 32
+- shl $32,%rdi
+- # x15 += j14
+- addq 112(%rsp),%rdi
+- # (uint64) x15 >>= 32
+- shr $32,%rdi
+- # x15 <<= 32
+- shl $32,%rdi
+- # x14 += x15
+- add %rdi,%rbx
+- # out = out_backup
+- movq 136(%rsp),%rdi
+- # m = m_backup
+- movq 144(%rsp),%rsi
+- # x0 ^= *(uint64 *) (m + 0)
+- xorq 0(%rsi),%rdx
+- # *(uint64 *) (out + 0) = x0
+- movq %rdx,0(%rdi)
+- # x2 ^= *(uint64 *) (m + 8)
+- xorq 8(%rsi),%rcx
+- # *(uint64 *) (out + 8) = x2
+- movq %rcx,8(%rdi)
+- # x4 ^= *(uint64 *) (m + 16)
+- xorq 16(%rsi),%r9
+- # *(uint64 *) (out + 16) = x4
+- movq %r9,16(%rdi)
+- # x6 ^= *(uint64 *) (m + 24)
+- xorq 24(%rsi),%rax
+- # *(uint64 *) (out + 24) = x6
+- movq %rax,24(%rdi)
+- # x8 ^= *(uint64 *) (m + 32)
+- xorq 32(%rsi),%r11
+- # *(uint64 *) (out + 32) = x8
+- movq %r11,32(%rdi)
+- # x10 ^= *(uint64 *) (m + 40)
+- xorq 40(%rsi),%r8
+- # *(uint64 *) (out + 40) = x10
+- movq %r8,40(%rdi)
+- # x12 ^= *(uint64 *) (m + 48)
+- xorq 48(%rsi),%r14
+- # *(uint64 *) (out + 48) = x12
+- movq %r14,48(%rdi)
+- # x14 ^= *(uint64 *) (m + 56)
+- xorq 56(%rsi),%rbx
+- # *(uint64 *) (out + 56) = x14
+- movq %rbx,56(%rdi)
+- # bytes = bytes_backup
+- movq 152(%rsp),%rdx
+- # in8 = j8
+- movq 88(%rsp),%rcx
+- # in8 += 1
+- add $1,%rcx
+- # j8 = in8
+- movq %rcx,88(%rsp)
+- # unsigned>? unsigned<? bytes - 64
+- cmp $64,%rdx
+- # comment:fp stack unchanged by jump
+- # goto bytesatleast65 if unsigned>
+- ja ._bytesatleast65
+- # comment:fp stack unchanged by jump
+- # goto bytesatleast64 if !unsigned<
+- jae ._bytesatleast64
+- # m = out
+- mov %rdi,%rsi
+- # out = ctarget
+- movq 128(%rsp),%rdi
+- # i = bytes
+- mov %rdx,%rcx
+- # while (i) { *out++ = *m++; --i }
+- rep movsb
+- # comment:fp stack unchanged by fallthrough
+-# bytesatleast64:
+-._bytesatleast64:
+- # x = x_backup
+- movq 120(%rsp),%rdi
+- # in8 = j8
+- movq 88(%rsp),%rsi
+- # *(uint64 *) (x + 32) = in8
+- movq %rsi,32(%rdi)
+- # r11 = r11_stack
+- movq 0(%rsp),%r11
+- # r12 = r12_stack
+- movq 8(%rsp),%r12
+- # r13 = r13_stack
+- movq 16(%rsp),%r13
+- # r14 = r14_stack
+- movq 24(%rsp),%r14
+- # r15 = r15_stack
+- movq 32(%rsp),%r15
+- # rbx = rbx_stack
+- movq 40(%rsp),%rbx
+- # rbp = rbp_stack
+- movq 48(%rsp),%rbp
+- # comment:fp stack unchanged by fallthrough
+-# done:
+-._done:
+- # leave
+- add %r11,%rsp
+- mov %rdi,%rax
+- mov %rsi,%rdx
+- ret
+-# bytesatleast65:
+-._bytesatleast65:
+- # bytes -= 64
+- sub $64,%rdx
+- # out += 64
+- add $64,%rdi
+- # m += 64
+- add $64,%rsi
+- # comment:fp stack unchanged by jump
+- # goto bytesatleast1
+- jmp ._bytesatleast1
+-ENDPROC(salsa20_encrypt_bytes)
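+
+[The stretch of lea/rol/xor steps above is qhasm-generated code unrolling the
+Salsa20 double-round: every step is the same add-rotate-xor primitive
+x ^= rotl32(a + b, k) with k in {7, 9, 13, 18}. A minimal standalone C sketch
+of one quarter-round, with illustrative names (none of this is kernel code):
+
+	#include <stdint.h>
+	#include <stdio.h>
+
+	/* Rotate left; k is always 7, 9, 13 or 18 here, never 0 or 32. */
+	static uint32_t rotl32(uint32_t v, int k)
+	{
+		return (v << k) | (v >> (32 - k));
+	}
+
+	/* One Salsa20 quarter-round over four state words. */
+	static void quarter_round(uint32_t *a, uint32_t *b,
+				  uint32_t *c, uint32_t *d)
+	{
+		*b ^= rotl32(*a + *d, 7);
+		*c ^= rotl32(*b + *a, 9);
+		*d ^= rotl32(*c + *b, 13);
+		*a ^= rotl32(*d + *c, 18);
+	}
+
+	int main(void)
+	{
+		uint32_t x0 = 0x61707865, x1 = 1, x2 = 2, x3 = 3;
+
+		quarter_round(&x0, &x1, &x2, &x3);
+		printf("%08x %08x %08x %08x\n", x0, x1, x2, x3);
+		return 0;
+	}
+
+The assembly keeps the 16-word state in registers and spills x5/x10/x15 to
+the stack, which is why the x5_stack/x10_stack/x15_stack moves recur between
+rounds.]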
+diff --git a/arch/x86/crypto/salsa20_glue.c b/arch/x86/crypto/salsa20_glue.c
+deleted file mode 100644
+index b07d7d959806..000000000000
+--- a/arch/x86/crypto/salsa20_glue.c
++++ /dev/null
+@@ -1,91 +0,0 @@
+-/*
+- * Glue code for optimized assembly version of Salsa20.
+- *
+- * Copyright (c) 2007 Tan Swee Heng <thesweeheng@gmail.com>
+- *
+- * The assembly codes are public domain assembly codes written by Daniel J.
+- * Bernstein <djb@cr.yp.to>. The codes are modified to include indentation
+- * and to remove extraneous comments and functions that are not needed.
+- * - i586 version, renamed as salsa20-i586-asm_32.S
+- * available from <http://cr.yp.to/snuffle/salsa20/x86-pm/salsa20.s>
+- * - x86-64 version, renamed as salsa20-x86_64-asm_64.S
+- * available from <http://cr.yp.to/snuffle/salsa20/amd64-3/salsa20.s>
+- *
+- * Also modified to set up the initial state using the generic C code rather
+- * than in assembly.
+- *
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms of the GNU General Public License as published by the Free
+- * Software Foundation; either version 2 of the License, or (at your option)
+- * any later version.
+- *
+- */
+-
+-#include <asm/unaligned.h>
+-#include <crypto/internal/skcipher.h>
+-#include <crypto/salsa20.h>
+-#include <linux/module.h>
+-
+-asmlinkage void salsa20_encrypt_bytes(u32 state[16], const u8 *src, u8 *dst,
+- u32 bytes);
+-
+-static int salsa20_asm_crypt(struct skcipher_request *req)
+-{
+- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+- const struct salsa20_ctx *ctx = crypto_skcipher_ctx(tfm);
+- struct skcipher_walk walk;
+- u32 state[16];
+- int err;
+-
+- err = skcipher_walk_virt(&walk, req, true);
+-
+- crypto_salsa20_init(state, ctx, walk.iv);
+-
+- while (walk.nbytes > 0) {
+- unsigned int nbytes = walk.nbytes;
+-
+- if (nbytes < walk.total)
+- nbytes = round_down(nbytes, walk.stride);
+-
+- salsa20_encrypt_bytes(state, walk.src.virt.addr,
+- walk.dst.virt.addr, nbytes);
+- err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+- }
+-
+- return err;
+-}
+-
+-static struct skcipher_alg alg = {
+- .base.cra_name = "salsa20",
+- .base.cra_driver_name = "salsa20-asm",
+- .base.cra_priority = 200,
+- .base.cra_blocksize = 1,
+- .base.cra_ctxsize = sizeof(struct salsa20_ctx),
+- .base.cra_module = THIS_MODULE,
+-
+- .min_keysize = SALSA20_MIN_KEY_SIZE,
+- .max_keysize = SALSA20_MAX_KEY_SIZE,
+- .ivsize = SALSA20_IV_SIZE,
+- .chunksize = SALSA20_BLOCK_SIZE,
+- .setkey = crypto_salsa20_setkey,
+- .encrypt = salsa20_asm_crypt,
+- .decrypt = salsa20_asm_crypt,
+-};
+-
+-static int __init init(void)
+-{
+- return crypto_register_skcipher(&alg);
+-}
+-
+-static void __exit fini(void)
+-{
+- crypto_unregister_skcipher(&alg);
+-}
+-
+-module_init(init);
+-module_exit(fini);
+-
+-MODULE_LICENSE("GPL");
+-MODULE_DESCRIPTION ("Salsa20 stream cipher algorithm (optimized assembly version)");
+-MODULE_ALIAS_CRYPTO("salsa20");
+-MODULE_ALIAS_CRYPTO("salsa20-asm");
+diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
+index 5db8b0b10766..7cc81c586d71 100644
+--- a/arch/x86/include/asm/vmx.h
++++ b/arch/x86/include/asm/vmx.h
+@@ -114,6 +114,7 @@
+ #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK 0x0000001f
+ #define VMX_MISC_SAVE_EFER_LMA 0x00000020
+ #define VMX_MISC_ACTIVITY_HLT 0x00000040
++#define VMX_MISC_ZERO_LEN_INS 0x40000000
+
+ /* VMFUNC functions */
+ #define VMX_VMFUNC_EPTP_SWITCHING 0x00000001
+@@ -349,11 +350,13 @@ enum vmcs_field {
+ #define VECTORING_INFO_VALID_MASK INTR_INFO_VALID_MASK
+
+ #define INTR_TYPE_EXT_INTR (0 << 8) /* external interrupt */
++#define INTR_TYPE_RESERVED (1 << 8) /* reserved */
+ #define INTR_TYPE_NMI_INTR (2 << 8) /* NMI */
+ #define INTR_TYPE_HARD_EXCEPTION (3 << 8) /* processor exception */
+ #define INTR_TYPE_SOFT_INTR (4 << 8) /* software interrupt */
+ #define INTR_TYPE_PRIV_SW_EXCEPTION (5 << 8) /* ICE breakpoint - undocumented */
+ #define INTR_TYPE_SOFT_EXCEPTION (6 << 8) /* software exception */
++#define INTR_TYPE_OTHER_EVENT (7 << 8) /* other event */
+
+ /* GUEST_INTERRUPTIBILITY_INFO flags. */
+ #define GUEST_INTR_STATE_STI 0x00000001
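+
+[For orientation, the word these INTR_TYPE_* values belong to packs the
+vector into bits 7:0, the event type into bits 10:8, a deliver-error-code
+flag into bit 11, and a valid flag into bit 31 (Intel SDM; the kernel uses
+INTR_INFO_* masks for these). A small decoding sketch with the masks written
+out numerically:
+
+	#include <stdint.h>
+	#include <stdio.h>
+
+	int main(void)
+	{
+		/* valid=1, type=7 ("other event"), vector=0: a pending MTF */
+		uint32_t intr_info = (1u << 31) | (7u << 8);
+
+		unsigned int vector = intr_info & 0xff;            /* bits 7:0  */
+		unsigned int type = (intr_info >> 8) & 0x7;        /* bits 10:8 */
+		unsigned int has_err = !!(intr_info & (1u << 11)); /* bit 11    */
+		unsigned int valid = intr_info >> 31;              /* bit 31    */
+
+		printf("valid=%u type=%u vector=%u err=%u\n",
+		       valid, type, vector, has_err);
+		return 0;
+	}]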
+diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
+index c84bb5396958..442fae7b8b61 100644
+--- a/arch/x86/kernel/uprobes.c
++++ b/arch/x86/kernel/uprobes.c
+@@ -293,7 +293,7 @@ static int uprobe_init_insn(struct arch_uprobe *auprobe, struct insn *insn, bool
+ insn_init(insn, auprobe->insn, sizeof(auprobe->insn), x86_64);
+ /* has the side-effect of processing the entire instruction */
+ insn_get_length(insn);
+- if (WARN_ON_ONCE(!insn_complete(insn)))
++ if (!insn_complete(insn))
+ return -ENOEXEC;
+
+ if (is_prefix_bad(insn))
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 82f5e915e568..dd4366edc200 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -1510,6 +1510,17 @@ static inline unsigned nested_cpu_vmx_misc_cr3_count(struct kvm_vcpu *vcpu)
+ return vmx_misc_cr3_count(to_vmx(vcpu)->nested.msrs.misc_low);
+ }
+
++static inline bool nested_cpu_has_zero_length_injection(struct kvm_vcpu *vcpu)
++{
++ return to_vmx(vcpu)->nested.msrs.misc_low & VMX_MISC_ZERO_LEN_INS;
++}
++
++static inline bool nested_cpu_supports_monitor_trap_flag(struct kvm_vcpu *vcpu)
++{
++ return to_vmx(vcpu)->nested.msrs.procbased_ctls_high &
++ CPU_BASED_MONITOR_TRAP_FLAG;
++}
++
+ static inline bool nested_cpu_has(struct vmcs12 *vmcs12, u32 bit)
+ {
+ return vmcs12->cpu_based_vm_exec_control & bit;
+@@ -11364,6 +11375,62 @@ static int check_vmentry_prereqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ !nested_cr3_valid(vcpu, vmcs12->host_cr3))
+ return VMXERR_ENTRY_INVALID_HOST_STATE_FIELD;
+
++ /*
++ * From the Intel SDM, volume 3:
++ * Fields relevant to VM-entry event injection must be set properly.
++ * These fields are the VM-entry interruption-information field, the
++ * VM-entry exception error code, and the VM-entry instruction length.
++ */
++ if (vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK) {
++ u32 intr_info = vmcs12->vm_entry_intr_info_field;
++ u8 vector = intr_info & INTR_INFO_VECTOR_MASK;
++ u32 intr_type = intr_info & INTR_INFO_INTR_TYPE_MASK;
++ bool has_error_code = intr_info & INTR_INFO_DELIVER_CODE_MASK;
++ bool should_have_error_code;
++ bool urg = nested_cpu_has2(vmcs12,
++ SECONDARY_EXEC_UNRESTRICTED_GUEST);
++ bool prot_mode = !urg || vmcs12->guest_cr0 & X86_CR0_PE;
++
++ /* VM-entry interruption-info field: interruption type */
++ if (intr_type == INTR_TYPE_RESERVED ||
++ (intr_type == INTR_TYPE_OTHER_EVENT &&
++ !nested_cpu_supports_monitor_trap_flag(vcpu)))
++ return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
++
++ /* VM-entry interruption-info field: vector */
++ if ((intr_type == INTR_TYPE_NMI_INTR && vector != NMI_VECTOR) ||
++ (intr_type == INTR_TYPE_HARD_EXCEPTION && vector > 31) ||
++ (intr_type == INTR_TYPE_OTHER_EVENT && vector != 0))
++ return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
++
++ /* VM-entry interruption-info field: deliver error code */
++ should_have_error_code =
++ intr_type == INTR_TYPE_HARD_EXCEPTION && prot_mode &&
++ x86_exception_has_error_code(vector);
++ if (has_error_code != should_have_error_code)
++ return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
++
++ /* VM-entry exception error code */
++ if (has_error_code &&
++ vmcs12->vm_entry_exception_error_code & GENMASK(31, 15))
++ return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
++
++ /* VM-entry interruption-info field: reserved bits */
++ if (intr_info & INTR_INFO_RESVD_BITS_MASK)
++ return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
++
++ /* VM-entry instruction length */
++ switch (intr_type) {
++ case INTR_TYPE_SOFT_EXCEPTION:
++ case INTR_TYPE_SOFT_INTR:
++ case INTR_TYPE_PRIV_SW_EXCEPTION:
++ if ((vmcs12->vm_entry_instruction_len > 15) ||
++ (vmcs12->vm_entry_instruction_len == 0 &&
++ !nested_cpu_has_zero_length_injection(vcpu)))
++ return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
++ }
++ }
++
+ return 0;
+ }
+
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index 331993c49dae..257f27620bc2 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -110,6 +110,15 @@ static inline bool is_la57_mode(struct kvm_vcpu *vcpu)
+ #endif
+ }
+
++static inline bool x86_exception_has_error_code(unsigned int vector)
++{
++ static u32 exception_has_error_code = BIT(DF_VECTOR) | BIT(TS_VECTOR) |
++ BIT(NP_VECTOR) | BIT(SS_VECTOR) | BIT(GP_VECTOR) |
++ BIT(PF_VECTOR) | BIT(AC_VECTOR);
++
++ return (1U << vector) & exception_has_error_code;
++}
++
+ static inline bool mmu_is_nested(struct kvm_vcpu *vcpu)
+ {
+ return vcpu->arch.walk_mmu == &vcpu->arch.nested_mmu;
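+
+[The new x86_exception_has_error_code() helper is a vector-indexed bitmask
+over the exceptions that push an error code: #DF(8), #TS(10), #NP(11),
+#SS(12), #GP(13), #PF(14) and #AC(17). A standalone sketch of the same test,
+with an explicit vector < 32 guard added for illustration:
+
+	#include <stdio.h>
+
+	static int exception_has_error_code(unsigned int vector)
+	{
+		const unsigned int mask = (1u << 8) | (1u << 10) | (1u << 11) |
+					  (1u << 12) | (1u << 13) | (1u << 14) |
+					  (1u << 17);
+
+		return vector < 32 && ((1u << vector) & mask) != 0;
+	}
+
+	int main(void)
+	{
+		printf("#GP(13)=%d #UD(6)=%d #PF(14)=%d\n",
+		       exception_has_error_code(13),
+		       exception_has_error_code(6),
+		       exception_has_error_code(14));   /* prints 1 0 1 */
+		return 0;
+	}]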
+diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
+index 2e9ee023e6bc..81a8e33115ad 100644
+--- a/arch/x86/purgatory/Makefile
++++ b/arch/x86/purgatory/Makefile
+@@ -6,7 +6,7 @@ purgatory-y := purgatory.o stack.o setup-x86_$(BITS).o sha256.o entry64.o string
+ targets += $(purgatory-y)
+ PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y))
+
+-$(obj)/sha256.o: $(srctree)/lib/sha256.c
++$(obj)/sha256.o: $(srctree)/lib/sha256.c FORCE
+ $(call if_changed_rule,cc_o_c)
+
+ LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib -z nodefaultlib
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 357969a3697c..ce8f6a8d4ac9 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1206,12 +1206,20 @@ asmlinkage __visible void __init xen_start_kernel(void)
+
+ xen_setup_features();
+
+- xen_setup_machphys_mapping();
+-
+ /* Install Xen paravirt ops */
+ pv_info = xen_info;
+ pv_init_ops.patch = paravirt_patch_default;
+ pv_cpu_ops = xen_cpu_ops;
++ xen_init_irq_ops();
++
++ /*
++ * Setup xen_vcpu early because it is needed for
++ * local_irq_disable(), irqs_disabled(), e.g. in printk().
++ *
++ * Don't do the full vcpu_info placement stuff until we have
++ * the cpu_possible_mask and a non-dummy shared_info.
++ */
++ xen_vcpu_info_reset(0);
+
+ x86_platform.get_nmi_reason = xen_get_nmi_reason;
+
+@@ -1224,10 +1232,12 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ * Set up some pagetable state before starting to set any ptes.
+ */
+
++ xen_setup_machphys_mapping();
+ xen_init_mmu_ops();
+
+ /* Prevent unwanted bits from being set in PTEs. */
+ __supported_pte_mask &= ~_PAGE_GLOBAL;
++ __default_kernel_pte_mask &= ~_PAGE_GLOBAL;
+
+ /*
+ * Prevent page tables from being allocated in highmem, even
+@@ -1248,20 +1258,9 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ get_cpu_cap(&boot_cpu_data);
+ x86_configure_nx();
+
+- xen_init_irq_ops();
+-
+ /* Let's presume PV guests always boot on vCPU with id 0. */
+ per_cpu(xen_vcpu_id, 0) = 0;
+
+- /*
+- * Setup xen_vcpu early because idt_setup_early_handler needs it for
+- * local_irq_disable(), irqs_disabled().
+- *
+- * Don't do the full vcpu_info placement stuff until we have
+- * the cpu_possible_mask and a non-dummy shared_info.
+- */
+- xen_vcpu_info_reset(0);
+-
+ idt_setup_early_handler();
+
+ xen_init_capabilities();
+diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
+index 74179852e46c..7515a19fd324 100644
+--- a/arch/x86/xen/irq.c
++++ b/arch/x86/xen/irq.c
+@@ -128,8 +128,6 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
+
+ void __init xen_init_irq_ops(void)
+ {
+- /* For PVH we use default pv_irq_ops settings. */
+- if (!xen_feature(XENFEAT_hvm_callback_vector))
+- pv_irq_ops = xen_irq_ops;
++ pv_irq_ops = xen_irq_ops;
+ x86_init.irqs.intr_init = xen_init_IRQ;
+ }
+diff --git a/block/bsg.c b/block/bsg.c
+index defa06c11858..7a54b645d469 100644
+--- a/block/bsg.c
++++ b/block/bsg.c
+@@ -268,8 +268,6 @@ bsg_map_hdr(struct request_queue *q, struct sg_io_v4 *hdr, fmode_t mode)
+ } else if (hdr->din_xfer_len) {
+ ret = blk_rq_map_user(q, rq, NULL, uptr64(hdr->din_xferp),
+ hdr->din_xfer_len, GFP_KERNEL);
+- } else {
+- ret = blk_rq_map_user(q, rq, NULL, NULL, 0, GFP_KERNEL);
+ }
+
+ if (ret)
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 76e8c88c97b4..2f4d4f711950 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -1335,34 +1335,6 @@ config CRYPTO_SALSA20
+ The Salsa20 stream cipher algorithm is designed by Daniel J.
+ Bernstein <djb@cr.yp.to>. See <http://cr.yp.to/snuffle.html>
+
+-config CRYPTO_SALSA20_586
+- tristate "Salsa20 stream cipher algorithm (i586)"
+- depends on (X86 || UML_X86) && !64BIT
+- select CRYPTO_BLKCIPHER
+- select CRYPTO_SALSA20
+- help
+- Salsa20 stream cipher algorithm.
+-
+- Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
+- Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
+-
+- The Salsa20 stream cipher algorithm is designed by Daniel J.
+- Bernstein <djb@cr.yp.to>. See <http://cr.yp.to/snuffle.html>
+-
+-config CRYPTO_SALSA20_X86_64
+- tristate "Salsa20 stream cipher algorithm (x86_64)"
+- depends on (X86 || UML_X86) && 64BIT
+- select CRYPTO_BLKCIPHER
+- select CRYPTO_SALSA20
+- help
+- Salsa20 stream cipher algorithm.
+-
+- Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
+- Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
+-
+- The Salsa20 stream cipher algorithm is designed by Daniel J.
+- Bernstein <djb@cr.yp.to>. See <http://cr.yp.to/snuffle.html>
+-
+ config CRYPTO_CHACHA20
+ tristate "ChaCha20 cipher algorithm"
+ select CRYPTO_BLKCIPHER
+diff --git a/crypto/sha3_generic.c b/crypto/sha3_generic.c
+index 264ec12c0b9c..7f6735d9003f 100644
+--- a/crypto/sha3_generic.c
++++ b/crypto/sha3_generic.c
+@@ -152,7 +152,7 @@ static SHA3_INLINE void keccakf_round(u64 st[25])
+ st[24] ^= bc[ 4];
+ }
+
+-static void __optimize("O3") keccakf(u64 st[25])
++static void keccakf(u64 st[25])
+ {
+ int round;
+
+diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
+index fc0c2e2328cd..fe9d46d81750 100644
+--- a/drivers/acpi/acpica/hwsleep.c
++++ b/drivers/acpi/acpica/hwsleep.c
+@@ -51,16 +51,23 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state)
+ return_ACPI_STATUS(status);
+ }
+
+- /*
+- * 1) Disable all GPEs
+- * 2) Enable all wakeup GPEs
+- */
++ /* Disable all GPEs */
+ status = acpi_hw_disable_all_gpes();
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+ }
++ /*
++ * If the target sleep state is S5, clear all GPEs and fixed events too
++ */
++ if (sleep_state == ACPI_STATE_S5) {
++ status = acpi_hw_clear_acpi_status();
++ if (ACPI_FAILURE(status)) {
++ return_ACPI_STATUS(status);
++ }
++ }
+ acpi_gbl_system_awake_and_running = FALSE;
+
++ /* Enable all wakeup GPEs */
+ status = acpi_hw_enable_all_wakeup_gpes();
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index e2235ed3e4be..964106d173bd 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -1272,7 +1272,7 @@ static ssize_t scrub_show(struct device *dev,
+
+ mutex_lock(&acpi_desc->init_mutex);
+ rc = sprintf(buf, "%d%s", acpi_desc->scrub_count,
+- work_busy(&acpi_desc->dwork.work)
++ acpi_desc->scrub_busy
+ && !acpi_desc->cancel ? "+\n" : "\n");
+ mutex_unlock(&acpi_desc->init_mutex);
+ }
+@@ -2949,6 +2949,32 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
+ return 0;
+ }
+
++static void __sched_ars(struct acpi_nfit_desc *acpi_desc, unsigned int tmo)
++{
++ lockdep_assert_held(&acpi_desc->init_mutex);
++
++ acpi_desc->scrub_busy = 1;
++ /* note this should only be set from within the workqueue */
++ if (tmo)
++ acpi_desc->scrub_tmo = tmo;
++ queue_delayed_work(nfit_wq, &acpi_desc->dwork, tmo * HZ);
++}
++
++static void sched_ars(struct acpi_nfit_desc *acpi_desc)
++{
++ __sched_ars(acpi_desc, 0);
++}
++
++static void notify_ars_done(struct acpi_nfit_desc *acpi_desc)
++{
++ lockdep_assert_held(&acpi_desc->init_mutex);
++
++ acpi_desc->scrub_busy = 0;
++ acpi_desc->scrub_count++;
++ if (acpi_desc->scrub_count_state)
++ sysfs_notify_dirent(acpi_desc->scrub_count_state);
++}
++
+ static void acpi_nfit_scrub(struct work_struct *work)
+ {
+ struct acpi_nfit_desc *acpi_desc;
+@@ -2959,14 +2985,10 @@ static void acpi_nfit_scrub(struct work_struct *work)
+ mutex_lock(&acpi_desc->init_mutex);
+ query_rc = acpi_nfit_query_poison(acpi_desc);
+ tmo = __acpi_nfit_scrub(acpi_desc, query_rc);
+- if (tmo) {
+- queue_delayed_work(nfit_wq, &acpi_desc->dwork, tmo * HZ);
+- acpi_desc->scrub_tmo = tmo;
+- } else {
+- acpi_desc->scrub_count++;
+- if (acpi_desc->scrub_count_state)
+- sysfs_notify_dirent(acpi_desc->scrub_count_state);
+- }
++ if (tmo)
++ __sched_ars(acpi_desc, tmo);
++ else
++ notify_ars_done(acpi_desc);
+ memset(acpi_desc->ars_status, 0, acpi_desc->max_ars);
+ mutex_unlock(&acpi_desc->init_mutex);
+ }
+@@ -3047,7 +3069,7 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
+ break;
+ }
+
+- queue_delayed_work(nfit_wq, &acpi_desc->dwork, 0);
++ sched_ars(acpi_desc);
+ return 0;
+ }
+
+@@ -3249,7 +3271,7 @@ int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
+ }
+ }
+ if (scheduled) {
+- queue_delayed_work(nfit_wq, &acpi_desc->dwork, 0);
++ sched_ars(acpi_desc);
+ dev_dbg(dev, "ars_scan triggered\n");
+ }
+ mutex_unlock(&acpi_desc->init_mutex);
+diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h
+index 7d15856a739f..a97ff42fe311 100644
+--- a/drivers/acpi/nfit/nfit.h
++++ b/drivers/acpi/nfit/nfit.h
+@@ -203,6 +203,7 @@ struct acpi_nfit_desc {
+ unsigned int max_ars;
+ unsigned int scrub_count;
+ unsigned int scrub_mode;
++ unsigned int scrub_busy:1;
+ unsigned int cancel:1;
+ unsigned long dimm_cmd_force_en;
+ unsigned long bus_cmd_force_en;
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 738fb22978dd..b2b9eba1d214 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -400,6 +400,7 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x0f23), board_ahci_mobile }, /* Bay Trail AHCI */
+ { PCI_VDEVICE(INTEL, 0x22a3), board_ahci_mobile }, /* Cherry Tr. AHCI */
+ { PCI_VDEVICE(INTEL, 0x5ae3), board_ahci_mobile }, /* ApolloLake AHCI */
++ { PCI_VDEVICE(INTEL, 0x34d3), board_ahci_mobile }, /* Ice Lake LP AHCI */
+
+ /* JMicron 360/1/3/5/6, match class to avoid IDE function */
+ { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+@@ -1280,6 +1281,59 @@ static bool ahci_broken_suspend(struct pci_dev *pdev)
+ return strcmp(buf, dmi->driver_data) < 0;
+ }
+
++static bool ahci_broken_lpm(struct pci_dev *pdev)
++{
++ static const struct dmi_system_id sysids[] = {
++ /* Various Lenovo 50 series have LPM issues with older BIOSen */
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X250"),
++ },
++ .driver_data = "20180406", /* 1.31 */
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L450"),
++ },
++ .driver_data = "20180420", /* 1.28 */
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T450s"),
++ },
++ .driver_data = "20180315", /* 1.33 */
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W541"),
++ },
++ /*
++			 * Note: date based on release notes; 2.35 has been
++			 * reported to be good, but I've been unable to get
++			 * hold of the reporter to confirm the DMI BIOS date.
++ * TODO: fix this.
++ */
++ .driver_data = "20180310", /* 2.35 */
++ },
++ { } /* terminate list */
++ };
++ const struct dmi_system_id *dmi = dmi_first_match(sysids);
++ int year, month, date;
++ char buf[9];
++
++ if (!dmi)
++ return false;
++
++ dmi_get_date(DMI_BIOS_DATE, &year, &month, &date);
++ snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, date);
++
++ return strcmp(buf, dmi->driver_data) < 0;
++}
++
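+
+[The strcmp() in ahci_broken_lpm() is a valid date comparison because the
+BIOS date is rendered zero-padded as "%04d%02d%02d": fixed-width decimal
+strings sort lexicographically in chronological order. A quick standalone
+check:
+
+	#include <stdio.h>
+	#include <string.h>
+
+	int main(void)
+	{
+		char buf[9];
+
+		/* A 2018-03-15 BIOS vs. the 2018-04-06 cutoff for the X250 */
+		snprintf(buf, sizeof(buf), "%04d%02d%02d", 2018, 3, 15);
+		printf("needs BIOS update: %d\n",
+		       strcmp(buf, "20180406") < 0);   /* prints 1 */
+		return 0;
+	}]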
+ static bool ahci_broken_online(struct pci_dev *pdev)
+ {
+ #define ENCODE_BUSDEVFN(bus, slot, func) \
+@@ -1694,6 +1748,12 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ "quirky BIOS, skipping spindown on poweroff\n");
+ }
+
++ if (ahci_broken_lpm(pdev)) {
++ pi.flags |= ATA_FLAG_NO_LPM;
++ dev_warn(&pdev->dev,
++ "BIOS update required for Link Power Management support\n");
++ }
++
+ if (ahci_broken_suspend(pdev)) {
+ hpriv->flags |= AHCI_HFLAG_NO_SUSPEND;
+ dev_warn(&pdev->dev,
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 9bfd2f7e4542..55cbaab4bc20 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2502,6 +2502,9 @@ int ata_dev_configure(struct ata_device *dev)
+ (id[ATA_ID_SATA_CAPABILITY] & 0xe) == 0x2)
+ dev->horkage |= ATA_HORKAGE_NOLPM;
+
++ if (ap->flags & ATA_FLAG_NO_LPM)
++ dev->horkage |= ATA_HORKAGE_NOLPM;
++
+ if (dev->horkage & ATA_HORKAGE_NOLPM) {
+ ata_dev_warn(dev, "LPM support broken, forcing max_power\n");
+ dev->link->ap->target_lpm_policy = ATA_LPM_MAX_POWER;
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 89a9d4a2efc8..c0ac1ea3e7a3 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -3802,10 +3802,20 @@ static unsigned int ata_scsi_zbc_out_xlat(struct ata_queued_cmd *qc)
+ */
+ goto invalid_param_len;
+ }
+- if (block > dev->n_sectors)
+- goto out_of_range;
+
+ all = cdb[14] & 0x1;
++ if (all) {
++ /*
++ * Ignore the block address (zone ID) as defined by ZBC.
++ */
++ block = 0;
++ } else if (block >= dev->n_sectors) {
++ /*
++ * Block must be a valid zone ID (a zone start LBA).
++ */
++ fp = 2;
++ goto invalid_fld;
++ }
+
+ if (ata_ncq_enabled(qc->dev) &&
+ ata_fpdma_zac_mgmt_out_supported(qc->dev)) {
+@@ -3834,10 +3844,6 @@ static unsigned int ata_scsi_zbc_out_xlat(struct ata_queued_cmd *qc)
+ invalid_fld:
+ ata_scsi_set_invalid_field(qc->dev, scmd, fp, 0xff);
+ return 1;
+- out_of_range:
+- /* "Logical Block Address out of range" */
+- ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x21, 0x00);
+- return 1;
+ invalid_param_len:
+ /* "Parameter list length error" */
+ ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x1a, 0x0);
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 55cf554bc914..1a2777bc5a57 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -644,6 +644,36 @@ static void loop_reread_partitions(struct loop_device *lo,
+ __func__, lo->lo_number, lo->lo_file_name, rc);
+ }
+
++static inline int is_loop_device(struct file *file)
++{
++ struct inode *i = file->f_mapping->host;
++
++ return i && S_ISBLK(i->i_mode) && MAJOR(i->i_rdev) == LOOP_MAJOR;
++}
++
++static int loop_validate_file(struct file *file, struct block_device *bdev)
++{
++ struct inode *inode = file->f_mapping->host;
++ struct file *f = file;
++
++ /* Avoid recursion */
++ while (is_loop_device(f)) {
++ struct loop_device *l;
++
++ if (f->f_mapping->host->i_bdev == bdev)
++ return -EBADF;
++
++ l = f->f_mapping->host->i_bdev->bd_disk->private_data;
++ if (l->lo_state == Lo_unbound) {
++ return -EINVAL;
++ }
++ f = l->lo_backing_file;
++ }
++ if (!S_ISREG(inode->i_mode) && !S_ISBLK(inode->i_mode))
++ return -EINVAL;
++ return 0;
++}
++
+ /*
+ * loop_change_fd switched the backing store of a loopback device to
+ * a new file. This is useful for operating system installers to free up
+@@ -673,14 +703,15 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ if (!file)
+ goto out;
+
++ error = loop_validate_file(file, bdev);
++ if (error)
++ goto out_putf;
++
+ inode = file->f_mapping->host;
+ old_file = lo->lo_backing_file;
+
+ error = -EINVAL;
+
+- if (!S_ISREG(inode->i_mode) && !S_ISBLK(inode->i_mode))
+- goto out_putf;
+-
+ /* size of the new backing store needs to be the same */
+ if (get_loop_size(lo, file) != get_loop_size(lo, old_file))
+ goto out_putf;
+@@ -706,13 +737,6 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ return error;
+ }
+
+-static inline int is_loop_device(struct file *file)
+-{
+- struct inode *i = file->f_mapping->host;
+-
+- return i && S_ISBLK(i->i_mode) && MAJOR(i->i_rdev) == LOOP_MAJOR;
+-}
+-
+ /* loop sysfs attributes */
+
+ static ssize_t loop_attr_show(struct device *dev, char *page,
+@@ -809,16 +833,17 @@ static struct attribute_group loop_attribute_group = {
+ .attrs= loop_attrs,
+ };
+
+-static int loop_sysfs_init(struct loop_device *lo)
++static void loop_sysfs_init(struct loop_device *lo)
+ {
+- return sysfs_create_group(&disk_to_dev(lo->lo_disk)->kobj,
+- &loop_attribute_group);
++ lo->sysfs_inited = !sysfs_create_group(&disk_to_dev(lo->lo_disk)->kobj,
++ &loop_attribute_group);
+ }
+
+ static void loop_sysfs_exit(struct loop_device *lo)
+ {
+- sysfs_remove_group(&disk_to_dev(lo->lo_disk)->kobj,
+- &loop_attribute_group);
++ if (lo->sysfs_inited)
++ sysfs_remove_group(&disk_to_dev(lo->lo_disk)->kobj,
++ &loop_attribute_group);
+ }
+
+ static void loop_config_discard(struct loop_device *lo)
+@@ -877,7 +902,7 @@ static int loop_prepare_queue(struct loop_device *lo)
+ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
+ struct block_device *bdev, unsigned int arg)
+ {
+- struct file *file, *f;
++ struct file *file;
+ struct inode *inode;
+ struct address_space *mapping;
+ int lo_flags = 0;
+@@ -896,29 +921,13 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
+ if (lo->lo_state != Lo_unbound)
+ goto out_putf;
+
+- /* Avoid recursion */
+- f = file;
+- while (is_loop_device(f)) {
+- struct loop_device *l;
+-
+- if (f->f_mapping->host->i_bdev == bdev)
+- goto out_putf;
+-
+- l = f->f_mapping->host->i_bdev->bd_disk->private_data;
+- if (l->lo_state == Lo_unbound) {
+- error = -EINVAL;
+- goto out_putf;
+- }
+- f = l->lo_backing_file;
+- }
++ error = loop_validate_file(file, bdev);
++ if (error)
++ goto out_putf;
+
+ mapping = file->f_mapping;
+ inode = mapping->host;
+
+- error = -EINVAL;
+- if (!S_ISREG(inode->i_mode) && !S_ISBLK(inode->i_mode))
+- goto out_putf;
+-
+ if (!(file->f_mode & FMODE_WRITE) || !(mode & FMODE_WRITE) ||
+ !file->f_op->write_iter)
+ lo_flags |= LO_FLAGS_READ_ONLY;
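+
+[loop_validate_file() hoists the old anti-recursion walk out of
+loop_set_fd() so that loop_change_fd() gets the same check. Reduced to its
+essence, the walk follows each device's backing file and rejects any chain
+that leads back to the device being configured (the real code also restricts
+the walk to loop devices and compares against the bound bdev):
+
+	#include <stdbool.h>
+	#include <stdio.h>
+
+	struct dev {
+		struct dev *backing;  /* next loop device in the chain, or NULL */
+	};
+
+	static bool would_recurse(const struct dev *file_dev,
+				  const struct dev *target)
+	{
+		for (const struct dev *d = file_dev; d; d = d->backing)
+			if (d == target)
+				return true;
+		return false;
+	}
+
+	int main(void)
+	{
+		struct dev a = { 0 }, b = { &a };
+
+		printf("%d %d\n", would_recurse(&b, &a),
+		       would_recurse(&a, &b));   /* prints 1 0 */
+		return 0;
+	}]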
+diff --git a/drivers/block/loop.h b/drivers/block/loop.h
+index b78de9879f4f..4d42c7af7de7 100644
+--- a/drivers/block/loop.h
++++ b/drivers/block/loop.h
+@@ -58,6 +58,7 @@ struct loop_device {
+ struct kthread_worker worker;
+ struct task_struct *worker_task;
+ bool use_dio;
++ bool sysfs_inited;
+
+ struct request_queue *lo_queue;
+ struct blk_mq_tag_set tag_set;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+index ab50090d066c..ff1a9ffbf17f 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+@@ -693,8 +693,11 @@ static struct platform_driver etnaviv_platform_driver = {
+ },
+ };
+
++static struct platform_device *etnaviv_drm;
++
+ static int __init etnaviv_init(void)
+ {
++ struct platform_device *pdev;
+ int ret;
+ struct device_node *np;
+
+@@ -706,7 +709,7 @@ static int __init etnaviv_init(void)
+
+ ret = platform_driver_register(&etnaviv_platform_driver);
+ if (ret != 0)
+- platform_driver_unregister(&etnaviv_gpu_driver);
++ goto unregister_gpu_driver;
+
+ /*
+ * If the DT contains at least one available GPU device, instantiate
+@@ -715,20 +718,33 @@ static int __init etnaviv_init(void)
+ for_each_compatible_node(np, NULL, "vivante,gc") {
+ if (!of_device_is_available(np))
+ continue;
+-
+- platform_device_register_simple("etnaviv", -1, NULL, 0);
++ pdev = platform_device_register_simple("etnaviv", -1,
++ NULL, 0);
++ if (IS_ERR(pdev)) {
++ ret = PTR_ERR(pdev);
++ of_node_put(np);
++ goto unregister_platform_driver;
++ }
++ etnaviv_drm = pdev;
+ of_node_put(np);
+ break;
+ }
+
++ return 0;
++
++unregister_platform_driver:
++ platform_driver_unregister(&etnaviv_platform_driver);
++unregister_gpu_driver:
++ platform_driver_unregister(&etnaviv_gpu_driver);
+ return ret;
+ }
+ module_init(etnaviv_init);
+
+ static void __exit etnaviv_exit(void)
+ {
+- platform_driver_unregister(&etnaviv_gpu_driver);
++ platform_device_unregister(etnaviv_drm);
+ platform_driver_unregister(&etnaviv_platform_driver);
++ platform_driver_unregister(&etnaviv_gpu_driver);
+ }
+ module_exit(etnaviv_exit);
+
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
+index 3c3005501846..feb3c6fab382 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
+@@ -142,6 +142,9 @@ struct etnaviv_gpu {
+ struct work_struct sync_point_work;
+ int sync_point_event;
+
++ /* hang detection */
++ u32 hangcheck_dma_addr;
++
+ void __iomem *mmio;
+ int irq;
+
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+index 6cf0775dbcd7..506b05a67dfe 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+@@ -21,6 +21,7 @@
+ #include "etnaviv_gem.h"
+ #include "etnaviv_gpu.h"
+ #include "etnaviv_sched.h"
++#include "state.xml.h"
+
+ static int etnaviv_job_hang_limit = 0;
+ module_param_named(job_hang_limit, etnaviv_job_hang_limit, int , 0444);
+@@ -96,6 +97,29 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
+ {
+ struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job);
+ struct etnaviv_gpu *gpu = submit->gpu;
++ u32 dma_addr;
++ int change;
++
++ /*
++	 * If the GPU managed to complete this job's fence, the timeout is
++ * spurious. Bail out.
++ */
++ if (fence_completed(gpu, submit->out_fence->seqno))
++ return;
++
++ /*
++ * If the GPU is still making forward progress on the front-end (which
++ * should never loop) we shift out the timeout to give it a chance to
++ * finish the job.
++ */
++ dma_addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS);
++ change = dma_addr - gpu->hangcheck_dma_addr;
++ if (change < 0 || change > 16) {
++ gpu->hangcheck_dma_addr = dma_addr;
++ schedule_delayed_work(&sched_job->work_tdr,
++ sched_job->sched->timeout);
++ return;
++ }
+
+ /* block scheduler */
+ kthread_park(gpu->sched.thread);
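+
+[The new timeout handler's forward-progress test can be read on its own:
+remember the last observed front-end DMA address, and treat a wrapped
+(negative) or greater-than-16-byte delta as progress worth re-arming the
+timer for. A userspace model of just that predicate (the
+VIVS_FE_DMA_ADDRESS read is stubbed out):
+
+	#include <stdint.h>
+	#include <stdbool.h>
+	#include <stdio.h>
+
+	static uint32_t hangcheck_dma_addr;   /* per-GPU in the driver */
+
+	static bool fe_made_progress(uint32_t dma_addr)
+	{
+		int change = (int)(dma_addr - hangcheck_dma_addr);
+
+		if (change < 0 || change > 16) {
+			hangcheck_dma_addr = dma_addr;
+			return true;   /* driver re-arms the timeout */
+		}
+		return false;          /* genuinely stuck: recover the GPU */
+	}
+
+	int main(void)
+	{
+		hangcheck_dma_addr = 0x1000;
+		printf("%d %d\n", fe_made_progress(0x1400),
+		       fe_made_progress(0x1404));
+		/* 1 0: a 1 KiB jump is progress, a 4-byte wiggle is not */
+		return 0;
+	}]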
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index 60292d243e24..ec2d11af6c78 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -547,6 +547,14 @@ static int tegra_i2c_disable_packet_mode(struct tegra_i2c_dev *i2c_dev)
+ {
+ u32 cnfg;
+
++ /*
++ * NACK interrupt is generated before the I2C controller generates
++ * the STOP condition on the bus. So wait for 2 clock periods
++ * before disabling the controller so that the STOP condition has
++ * been delivered properly.
++ */
++ udelay(DIV_ROUND_UP(2 * 1000000, i2c_dev->bus_clk_rate));
++
+ cnfg = i2c_readl(i2c_dev, I2C_CNFG);
+ if (cnfg & I2C_CNFG_PACKET_MODE_EN)
+ i2c_writel(i2c_dev, cnfg & ~I2C_CNFG_PACKET_MODE_EN, I2C_CNFG);
+@@ -708,15 +716,6 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev,
+ if (likely(i2c_dev->msg_err == I2C_ERR_NONE))
+ return 0;
+
+- /*
+- * NACK interrupt is generated before the I2C controller generates
+- * the STOP condition on the bus. So wait for 2 clock periods
+- * before resetting the controller so that the STOP condition has
+- * been delivered properly.
+- */
+- if (i2c_dev->msg_err == I2C_ERR_NO_ACK)
+- udelay(DIV_ROUND_UP(2 * 1000000, i2c_dev->bus_clk_rate));
+-
+ tegra_i2c_init(i2c_dev);
+ if (i2c_dev->msg_err == I2C_ERR_NO_ACK) {
+ if (msg->flags & I2C_M_IGNORE_NAK)
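+
+[The udelay() moved into tegra_i2c_disable_packet_mode() converts two I2C
+clock periods into microseconds, rounding up:
+ceil(2 * 1000000 / bus_clk_rate). Checking the arithmetic for common bus
+rates:
+
+	#include <stdio.h>
+
+	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+
+	int main(void)
+	{
+		const unsigned long rates[] = { 100000, 400000, 1000000 };
+
+		for (int i = 0; i < 3; i++)   /* prints 20, 5 and 2 us */
+			printf("%lu Hz -> %lu us\n", rates[i],
+			       DIV_ROUND_UP(2 * 1000000UL, rates[i]));
+		return 0;
+	}]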
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 1ba40bb2b966..1b6e80123fba 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -198,7 +198,16 @@ int i2c_generic_scl_recovery(struct i2c_adapter *adap)
+
+ val = !val;
+ bri->set_scl(adap, val);
+- ndelay(RECOVERY_NDELAY);
++
++ /*
++ * If we can set SDA, we will always create STOP here to ensure
++ * the additional pulses will do no harm. This is achieved by
++ * letting SDA follow SCL half a cycle later.
++ */
++ ndelay(RECOVERY_NDELAY / 2);
++ if (bri->set_sda)
++ bri->set_sda(adap, val);
++ ndelay(RECOVERY_NDELAY / 2);
+ }
+
+ /* check if recovery actually succeeded */
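+
+[After this change SDA trails SCL by half a cycle, so whenever SCL goes high
+SDA rises while SCL is already high, which reads as a STOP on the bus. A
+sketch of the resulting loop shape, with the adapter's GPIO callbacks and
+delays stubbed out (names here are illustrative):
+
+	#include <stdbool.h>
+	#include <stdio.h>
+
+	static void set_scl(bool v) { printf("SCL=%d ", v); }
+	static void set_sda(bool v) { printf("SDA=%d\n", v); }
+	static void half_period(void) { /* ndelay(RECOVERY_NDELAY / 2) */ }
+
+	int main(void)
+	{
+		bool val = true;
+
+		for (int i = 0; i < 18; i++) {   /* 9 full SCL pulses */
+			val = !val;
+			set_scl(val);
+			half_period();
+			set_sda(val);  /* SDA rising while SCL is high: STOP */
+			half_period();
+		}
+		return 0;
+	}]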
+diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
+index 2a972ed6851b..b03af54367c0 100644
+--- a/drivers/infiniband/Kconfig
++++ b/drivers/infiniband/Kconfig
+@@ -35,6 +35,17 @@ config INFINIBAND_USER_ACCESS
+ libibverbs, libibcm and a hardware driver library from
+ rdma-core <https://github.com/linux-rdma/rdma-core>.
+
++config INFINIBAND_USER_ACCESS_UCM
++ bool "Userspace CM (UCM, DEPRECATED)"
++ depends on BROKEN
++ depends on INFINIBAND_USER_ACCESS
++ help
++	  The UCM module has known security flaws, which no one is
++	  interested in fixing. The user-space part of this code was
++	  dropped from upstream a long time ago.
++
++ This option is DEPRECATED and planned to be removed.
++
+ config INFINIBAND_EXP_LEGACY_VERBS_NEW_UAPI
+ bool "Allow experimental legacy verbs in new ioctl uAPI (EXPERIMENTAL)"
+ depends on INFINIBAND_USER_ACCESS
+diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile
+index dda9e856e3fa..e1acbf5f87f1 100644
+--- a/drivers/infiniband/core/Makefile
++++ b/drivers/infiniband/core/Makefile
+@@ -5,8 +5,8 @@ user_access-$(CONFIG_INFINIBAND_ADDR_TRANS) := rdma_ucm.o
+ obj-$(CONFIG_INFINIBAND) += ib_core.o ib_cm.o iw_cm.o \
+ $(infiniband-y)
+ obj-$(CONFIG_INFINIBAND_USER_MAD) += ib_umad.o
+-obj-$(CONFIG_INFINIBAND_USER_ACCESS) += ib_uverbs.o ib_ucm.o \
+- $(user_access-y)
++obj-$(CONFIG_INFINIBAND_USER_ACCESS) += ib_uverbs.o $(user_access-y)
++obj-$(CONFIG_INFINIBAND_USER_ACCESS_UCM) += ib_ucm.o $(user_access-y)
+
+ ib_core-y := packer.o ud_header.o verbs.o cq.o rw.o sysfs.o \
+ device.o fmr_pool.o cache.o netlink.o \
+diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
+index 1445918e3239..7b76e6f81aeb 100644
+--- a/drivers/infiniband/hw/cxgb4/mem.c
++++ b/drivers/infiniband/hw/cxgb4/mem.c
+@@ -774,7 +774,7 @@ static int c4iw_set_page(struct ib_mr *ibmr, u64 addr)
+ {
+ struct c4iw_mr *mhp = to_c4iw_mr(ibmr);
+
+- if (unlikely(mhp->mpl_len == mhp->max_mpl_len))
++ if (unlikely(mhp->mpl_len == mhp->attr.pbl_size))
+ return -ENOMEM;
+
+ mhp->mpl[mhp->mpl_len++] = addr;
+diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c
+index da58046a02ea..295d3c40a994 100644
+--- a/drivers/infiniband/hw/hfi1/rc.c
++++ b/drivers/infiniband/hw/hfi1/rc.c
+@@ -271,7 +271,7 @@ int hfi1_make_rc_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
+
+ lockdep_assert_held(&qp->s_lock);
+ ps->s_txreq = get_txreq(ps->dev, qp);
+- if (IS_ERR(ps->s_txreq))
++ if (!ps->s_txreq)
+ goto bail_no_tx;
+
+ if (priv->hdr_type == HFI1_PKT_TYPE_9B) {
+diff --git a/drivers/infiniband/hw/hfi1/uc.c b/drivers/infiniband/hw/hfi1/uc.c
+index 9d7a3110c14c..d140aaccdf14 100644
+--- a/drivers/infiniband/hw/hfi1/uc.c
++++ b/drivers/infiniband/hw/hfi1/uc.c
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright(c) 2015, 2016 Intel Corporation.
++ * Copyright(c) 2015 - 2018 Intel Corporation.
+ *
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+@@ -72,7 +72,7 @@ int hfi1_make_uc_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
+ int middle = 0;
+
+ ps->s_txreq = get_txreq(ps->dev, qp);
+- if (IS_ERR(ps->s_txreq))
++ if (!ps->s_txreq)
+ goto bail_no_tx;
+
+ if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_SEND_OK)) {
+diff --git a/drivers/infiniband/hw/hfi1/ud.c b/drivers/infiniband/hw/hfi1/ud.c
+index 69c17a5ef038..3ae10530f754 100644
+--- a/drivers/infiniband/hw/hfi1/ud.c
++++ b/drivers/infiniband/hw/hfi1/ud.c
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright(c) 2015, 2016 Intel Corporation.
++ * Copyright(c) 2015 - 2018 Intel Corporation.
+ *
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+@@ -482,7 +482,7 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
+ u32 lid;
+
+ ps->s_txreq = get_txreq(ps->dev, qp);
+- if (IS_ERR(ps->s_txreq))
++ if (!ps->s_txreq)
+ goto bail_no_tx;
+
+ if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_NEXT_SEND_OK)) {
+diff --git a/drivers/infiniband/hw/hfi1/verbs_txreq.c b/drivers/infiniband/hw/hfi1/verbs_txreq.c
+index 873e48ea923f..c4ab2d5b4502 100644
+--- a/drivers/infiniband/hw/hfi1/verbs_txreq.c
++++ b/drivers/infiniband/hw/hfi1/verbs_txreq.c
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright(c) 2016 - 2017 Intel Corporation.
++ * Copyright(c) 2016 - 2018 Intel Corporation.
+ *
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+@@ -94,7 +94,7 @@ struct verbs_txreq *__get_txreq(struct hfi1_ibdev *dev,
+ struct rvt_qp *qp)
+ __must_hold(&qp->s_lock)
+ {
+- struct verbs_txreq *tx = ERR_PTR(-EBUSY);
++ struct verbs_txreq *tx = NULL;
+
+ write_seqlock(&dev->txwait_lock);
+ if (ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) {
+diff --git a/drivers/infiniband/hw/hfi1/verbs_txreq.h b/drivers/infiniband/hw/hfi1/verbs_txreq.h
+index 729244c3086c..1c19bbc764b2 100644
+--- a/drivers/infiniband/hw/hfi1/verbs_txreq.h
++++ b/drivers/infiniband/hw/hfi1/verbs_txreq.h
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright(c) 2016 Intel Corporation.
++ * Copyright(c) 2016 - 2018 Intel Corporation.
+ *
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+@@ -83,7 +83,7 @@ static inline struct verbs_txreq *get_txreq(struct hfi1_ibdev *dev,
+ if (unlikely(!tx)) {
+ /* call slow path to get the lock */
+ tx = __get_txreq(dev, qp);
+- if (IS_ERR(tx))
++ if (!tx)
+ return tx;
+ }
+ tx->qp = qp;
+diff --git a/drivers/misc/ibmasm/ibmasmfs.c b/drivers/misc/ibmasm/ibmasmfs.c
+index e05c3245930a..fa840666bdd1 100644
+--- a/drivers/misc/ibmasm/ibmasmfs.c
++++ b/drivers/misc/ibmasm/ibmasmfs.c
+@@ -507,35 +507,14 @@ static int remote_settings_file_close(struct inode *inode, struct file *file)
+ static ssize_t remote_settings_file_read(struct file *file, char __user *buf, size_t count, loff_t *offset)
+ {
+ void __iomem *address = (void __iomem *)file->private_data;
+- unsigned char *page;
+- int retval;
+ int len = 0;
+ unsigned int value;
+-
+- if (*offset < 0)
+- return -EINVAL;
+- if (count == 0 || count > 1024)
+- return 0;
+- if (*offset != 0)
+- return 0;
+-
+- page = (unsigned char *)__get_free_page(GFP_KERNEL);
+- if (!page)
+- return -ENOMEM;
++ char lbuf[20];
+
+ value = readl(address);
+- len = sprintf(page, "%d\n", value);
+-
+- if (copy_to_user(buf, page, len)) {
+- retval = -EFAULT;
+- goto exit;
+- }
+- *offset += len;
+- retval = len;
++ len = snprintf(lbuf, sizeof(lbuf), "%d\n", value);
+
+-exit:
+- free_page((unsigned long)page);
+- return retval;
++ return simple_read_from_buffer(buf, count, offset, lbuf, len);
+ }
+
+ static ssize_t remote_settings_file_write(struct file *file, const char __user *ubuff, size_t count, loff_t *offset)
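+
+[simple_read_from_buffer() absorbs the offset, bounds and copy_to_user()
+handling that the old read handler open-coded. An approximate userspace
+model of its semantics, with memcpy() standing in for copy_to_user():
+
+	#include <string.h>
+	#include <stdio.h>
+
+	static long simple_read(char *to, size_t count, long *ppos,
+				const char *from, size_t available)
+	{
+		long pos = *ppos;
+
+		if (pos < 0)
+			return -1;                        /* kernel: -EINVAL */
+		if ((size_t)pos >= available || !count)
+			return 0;                         /* EOF */
+		if (count > available - (size_t)pos)
+			count = available - (size_t)pos;
+		memcpy(to, from + pos, count);            /* copy_to_user() */
+		*ppos = pos + (long)count;
+		return (long)count;
+	}
+
+	int main(void)
+	{
+		char out[8];
+		long pos = 0, n;
+
+		n = simple_read(out, sizeof(out), &pos, "42\n", 3);
+		printf("read %ld bytes, pos now %ld\n", n, pos);  /* 3, 3 */
+		return 0;
+	}]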
+diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
+index b0b8f18a85e3..6649f0d56d2f 100644
+--- a/drivers/misc/mei/interrupt.c
++++ b/drivers/misc/mei/interrupt.c
+@@ -310,8 +310,11 @@ int mei_irq_read_handler(struct mei_device *dev,
+ if (&cl->link == &dev->file_list) {
+ /* A message for not connected fixed address clients
+ * should be silently discarded
++	 * On power down, clients may be force-cleaned;
++	 * silently discard such messages.
+ */
+- if (hdr_is_fixed(mei_hdr)) {
++ if (hdr_is_fixed(mei_hdr) ||
++ dev->dev_state == MEI_DEV_POWER_DOWN) {
+ mei_irq_discard_msg(dev, mei_hdr);
+ ret = 0;
+ goto reset_slots;
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index efd733472a35..56c6f79a5c5a 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -467,7 +467,7 @@ static int vmballoon_send_batched_lock(struct vmballoon *b,
+ unsigned int num_pages, bool is_2m_pages, unsigned int *target)
+ {
+ unsigned long status;
+- unsigned long pfn = page_to_pfn(b->page);
++ unsigned long pfn = PHYS_PFN(virt_to_phys(b->batch_page));
+
+ STATS_INC(b->stats.lock[is_2m_pages]);
+
+@@ -515,7 +515,7 @@ static bool vmballoon_send_batched_unlock(struct vmballoon *b,
+ unsigned int num_pages, bool is_2m_pages, unsigned int *target)
+ {
+ unsigned long status;
+- unsigned long pfn = page_to_pfn(b->page);
++ unsigned long pfn = PHYS_PFN(virt_to_phys(b->batch_page));
+
+ STATS_INC(b->stats.unlock[is_2m_pages]);
+
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 29a1afa81f66..3ee8f57fd612 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -1065,8 +1065,8 @@ static void dw_mci_ctrl_thld(struct dw_mci *host, struct mmc_data *data)
+ * It's used when HS400 mode is enabled.
+ */
+ if (data->flags & MMC_DATA_WRITE &&
+- !(host->timing != MMC_TIMING_MMC_HS400))
+- return;
++ host->timing != MMC_TIMING_MMC_HS400)
++ goto disable;
+
+ if (data->flags & MMC_DATA_WRITE)
+ enable = SDMMC_CARD_WR_THR_EN;
+@@ -1074,7 +1074,8 @@ static void dw_mci_ctrl_thld(struct dw_mci *host, struct mmc_data *data)
+ enable = SDMMC_CARD_RD_THR_EN;
+
+ if (host->timing != MMC_TIMING_MMC_HS200 &&
+- host->timing != MMC_TIMING_UHS_SDR104)
++ host->timing != MMC_TIMING_UHS_SDR104 &&
++ host->timing != MMC_TIMING_MMC_HS400)
+ goto disable;
+
+ blksz_depth = blksz / (1 << host->data_shift);
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index eb027cdc8f24..8d19b5903fd1 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -139,8 +139,7 @@ renesas_sdhi_internal_dmac_abort_dma(struct tmio_mmc_host *host) {
+ renesas_sdhi_internal_dmac_dm_write(host, DM_CM_RST,
+ RST_RESERVED_BITS | val);
+
+- if (host->data && host->data->flags & MMC_DATA_READ)
+- clear_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags);
++ clear_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags);
+
+ renesas_sdhi_internal_dmac_enable_dma(host, true);
+ }
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index cd2b5f643a15..6891be4ff9f1 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -306,6 +306,15 @@ static u32 esdhc_readl_le(struct sdhci_host *host, int reg)
+
+ if (imx_data->socdata->flags & ESDHC_FLAG_HS400)
+ val |= SDHCI_SUPPORT_HS400;
++
++ /*
++ * Do not advertise faster UHS modes if there are no
++ * pinctrl states for 100MHz/200MHz.
++ */
++ if (IS_ERR_OR_NULL(imx_data->pins_100mhz) ||
++ IS_ERR_OR_NULL(imx_data->pins_200mhz))
++ val &= ~(SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_DDR50
++ | SDHCI_SUPPORT_SDR104 | SDHCI_SUPPORT_HS400);
+ }
+ }
+
+@@ -1136,18 +1145,6 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
+ ESDHC_PINCTRL_STATE_100MHZ);
+ imx_data->pins_200mhz = pinctrl_lookup_state(imx_data->pinctrl,
+ ESDHC_PINCTRL_STATE_200MHZ);
+- if (IS_ERR(imx_data->pins_100mhz) ||
+- IS_ERR(imx_data->pins_200mhz)) {
+- dev_warn(mmc_dev(host->mmc),
+- "could not get ultra high speed state, work on normal mode\n");
+- /*
+- * fall back to not supporting uhs by specifying no
+- * 1.8v quirk
+- */
+- host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
+- }
+- } else {
+- host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
+ }
+
+ /* call to generic mmc_of_parse to support additional capabilities */
+diff --git a/drivers/mtd/spi-nor/cadence-quadspi.c b/drivers/mtd/spi-nor/cadence-quadspi.c
+index 5872f31eaa60..5cf9ef6cf259 100644
+--- a/drivers/mtd/spi-nor/cadence-quadspi.c
++++ b/drivers/mtd/spi-nor/cadence-quadspi.c
+@@ -920,10 +920,12 @@ static ssize_t cqspi_write(struct spi_nor *nor, loff_t to,
+ if (ret)
+ return ret;
+
+- if (f_pdata->use_direct_mode)
++ if (f_pdata->use_direct_mode) {
+ memcpy_toio(cqspi->ahb_base + to, buf, len);
+- else
++ ret = cqspi_wait_idle(cqspi);
++ } else {
+ ret = cqspi_indirect_write_execute(nor, to, buf, len);
++ }
+ if (ret)
+ return ret;
+
+diff --git a/drivers/staging/rtl8723bs/core/rtw_ap.c b/drivers/staging/rtl8723bs/core/rtw_ap.c
+index 0b530ea7fd81..4b39b7ddf506 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_ap.c
++++ b/drivers/staging/rtl8723bs/core/rtw_ap.c
+@@ -1059,7 +1059,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
+ return _FAIL;
+
+
+- if (len > MAX_IE_SZ)
++ if (len < 0 || len > MAX_IE_SZ)
+ return _FAIL;
+
+ pbss_network->IELength = len;
+diff --git a/drivers/staging/rtlwifi/rtl8822be/hw.c b/drivers/staging/rtlwifi/rtl8822be/hw.c
+index 74386003044f..c6db2bd20594 100644
+--- a/drivers/staging/rtlwifi/rtl8822be/hw.c
++++ b/drivers/staging/rtlwifi/rtl8822be/hw.c
+@@ -814,7 +814,7 @@ static void _rtl8822be_enable_aspm_back_door(struct ieee80211_hw *hw)
+ return;
+
+ pci_read_config_byte(rtlpci->pdev, 0x70f, &tmp);
+- pci_write_config_byte(rtlpci->pdev, 0x70f, tmp | BIT(7));
++ pci_write_config_byte(rtlpci->pdev, 0x70f, tmp | ASPM_L1_LATENCY << 3);
+
+ pci_read_config_byte(rtlpci->pdev, 0x719, &tmp);
+ pci_write_config_byte(rtlpci->pdev, 0x719, tmp | BIT(3) | BIT(4));
+diff --git a/drivers/staging/rtlwifi/wifi.h b/drivers/staging/rtlwifi/wifi.h
+index a23bb1719e35..0ab1e2d50535 100644
+--- a/drivers/staging/rtlwifi/wifi.h
++++ b/drivers/staging/rtlwifi/wifi.h
+@@ -99,6 +99,7 @@
+ #define RTL_USB_MAX_RX_COUNT 100
+ #define QBSS_LOAD_SIZE 5
+ #define MAX_WMMELE_LENGTH 64
++#define ASPM_L1_LATENCY 7
+
+ #define TOTAL_CAM_ENTRY 32
+
+diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
+index 6281266b8ec0..a923ebdeb73c 100644
+--- a/drivers/thunderbolt/domain.c
++++ b/drivers/thunderbolt/domain.c
+@@ -213,6 +213,10 @@ static ssize_t boot_acl_store(struct device *dev, struct device_attribute *attr,
+ goto err_free_acl;
+ }
+ ret = tb->cm_ops->set_boot_acl(tb, acl, tb->nboot_acl);
++ if (!ret) {
++ /* Notify userspace about the change */
++ kobject_uevent(&tb->dev.kobj, KOBJ_CHANGE);
++ }
+ mutex_unlock(&tb->lock);
+
+ err_free_acl:
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index c55def2f1320..097057d2eacf 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -378,6 +378,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ /* Corsair K70 RGB */
+ { USB_DEVICE(0x1b1c, 0x1b13), .driver_info = USB_QUIRK_DELAY_INIT },
+
++ /* Corsair Strafe */
++ { USB_DEVICE(0x1b1c, 0x1b15), .driver_info = USB_QUIRK_DELAY_INIT |
++ USB_QUIRK_DELAY_CTRL_MSG },
++
+ /* Corsair Strafe RGB */
+ { USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT |
+ USB_QUIRK_DELAY_CTRL_MSG },
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 99e7547f234f..0b424a4b58ec 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -591,7 +591,7 @@ struct xhci_ring *xhci_stream_id_to_ring(
+ if (!ep->stream_info)
+ return NULL;
+
+- if (stream_id > ep->stream_info->num_streams)
++ if (stream_id >= ep->stream_info->num_streams)
+ return NULL;
+ return ep->stream_info->stream_rings[stream_id];
+ }
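+
+[The xhci change above is a textbook off-by-one: an array of num_streams
+rings has valid indices 0..num_streams-1, so the guard needs >=. In
+miniature:
+
+	#include <stdio.h>
+	#include <stddef.h>
+
+	static const char *lookup(const char **rings, size_t num, size_t id)
+	{
+		if (id >= num)   /* '>' would let id == num through */
+			return NULL;
+		return rings[id];
+	}
+
+	int main(void)
+	{
+		const char *rings[2] = { "ring0", "ring1" };
+
+		/* NULL, not an out-of-bounds rings[2] dereference */
+		printf("%p\n", (void *)lookup(rings, 2, 2));
+		return 0;
+	}]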
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 8abb6cbbd98a..3be40eaa1ac9 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -396,8 +396,7 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ loff_t *ppos)
+ {
+ struct usb_yurex *dev;
+- int retval = 0;
+- int bytes_read = 0;
++ int len = 0;
+ char in_buffer[20];
+ unsigned long flags;
+
+@@ -405,26 +404,16 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+
+ mutex_lock(&dev->io_mutex);
+ if (!dev->interface) { /* already disconnected */
+- retval = -ENODEV;
+- goto exit;
++ mutex_unlock(&dev->io_mutex);
++ return -ENODEV;
+ }
+
+ spin_lock_irqsave(&dev->lock, flags);
+- bytes_read = snprintf(in_buffer, 20, "%lld\n", dev->bbu);
++ len = snprintf(in_buffer, 20, "%lld\n", dev->bbu);
+ spin_unlock_irqrestore(&dev->lock, flags);
+-
+- if (*ppos < bytes_read) {
+- if (copy_to_user(buffer, in_buffer + *ppos, bytes_read - *ppos))
+- retval = -EFAULT;
+- else {
+- retval = bytes_read - *ppos;
+- *ppos += bytes_read;
+- }
+- }
+-
+-exit:
+ mutex_unlock(&dev->io_mutex);
+- return retval;
++
++ return simple_read_from_buffer(buffer, count, ppos, in_buffer, len);
+ }
+
+ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index bdd7a5ad3bf1..3bb1fff02bed 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -128,7 +128,7 @@ static int ch341_control_in(struct usb_device *dev,
+ r = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), request,
+ USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+ value, index, buf, bufsize, DEFAULT_TIMEOUT);
+- if (r < bufsize) {
++ if (r < (int)bufsize) {
+ if (r >= 0) {
+ dev_err(&dev->dev,
+ "short control message received (%d < %u)\n",
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index ee0cc1d90b51..626a29d9aa58 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -149,6 +149,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x10C4, 0x8977) }, /* CEL MeshWorks DevKit Device */
+ { USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */
+ { USB_DEVICE(0x10C4, 0x89A4) }, /* CESINEL FTBC Flexible Thyristor Bridge Controller */
++ { USB_DEVICE(0x10C4, 0x89FB) }, /* Qivicon ZigBee USB Radio Stick */
+ { USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
+ { USB_DEVICE(0x10C4, 0x8A5E) }, /* CEL EM3588 ZigBee USB Stick Long Range */
+ { USB_DEVICE(0x10C4, 0x8B34) }, /* Qivicon ZigBee USB Radio Stick */
+diff --git a/drivers/usb/serial/keyspan_pda.c b/drivers/usb/serial/keyspan_pda.c
+index 5169624d8b11..38d43c4b7ce5 100644
+--- a/drivers/usb/serial/keyspan_pda.c
++++ b/drivers/usb/serial/keyspan_pda.c
+@@ -369,8 +369,10 @@ static int keyspan_pda_get_modem_info(struct usb_serial *serial,
+ 3, /* get pins */
+ USB_TYPE_VENDOR|USB_RECIP_INTERFACE|USB_DIR_IN,
+ 0, 0, data, 1, 2000);
+- if (rc >= 0)
++ if (rc == 1)
+ *value = *data;
++ else if (rc >= 0)
++ rc = -EIO;
+
+ kfree(data);
+ return rc;
+diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
+index fdceb46d9fc6..b580b4c7fa48 100644
+--- a/drivers/usb/serial/mos7840.c
++++ b/drivers/usb/serial/mos7840.c
+@@ -468,6 +468,9 @@ static void mos7840_control_callback(struct urb *urb)
+ }
+
+ dev_dbg(dev, "%s urb buffer size is %d\n", __func__, urb->actual_length);
++ if (urb->actual_length < 1)
++ goto out;
++
+ dev_dbg(dev, "%s mos7840_port->MsrLsr is %d port %d\n", __func__,
+ mos7840_port->MsrLsr, mos7840_port->port_num);
+ data = urb->transfer_buffer;
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 4ad6f669fe34..39d3a724a4b4 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1259,9 +1259,8 @@ static int load_elf_library(struct file *file)
+ goto out_free_ph;
+ }
+
+- len = ELF_PAGESTART(eppnt->p_filesz + eppnt->p_vaddr +
+- ELF_MIN_ALIGN - 1);
+- bss = eppnt->p_memsz + eppnt->p_vaddr;
++ len = ELF_PAGEALIGN(eppnt->p_filesz + eppnt->p_vaddr);
++ bss = ELF_PAGEALIGN(eppnt->p_memsz + eppnt->p_vaddr);
+ if (bss > len) {
+ error = vm_brk(len, bss - len);
+ if (error)
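+
The binfmt_elf.c hunk swaps a round-down macro applied to a pre-biased value for an explicit round-up, and page-aligns bss as well, so the "bss > len" comparison is between like quantities. The two roundings, with hypothetical PAGE_* macros standing in for the ELF_PAGE* ones:

    #include <assert.h>

    #define PAGE_SIZE      4096UL
    #define PAGE_START(x)  ((x) & ~(PAGE_SIZE - 1))                   /* down */
    #define PAGE_ALIGN(x)  (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1)) /* up   */

    int main(void)
    {
            assert(PAGE_START(5000UL) == 4096UL);
            assert(PAGE_ALIGN(5000UL) == 8192UL);
            assert(PAGE_ALIGN(4096UL) == 4096UL); /* already aligned: same */
            return 0;
    }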
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 1df7f10476d6..20149b8771d9 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1586,18 +1586,6 @@ static inline bool __exist_node_summaries(struct f2fs_sb_info *sbi)
+ is_set_ckpt_flags(sbi, CP_FASTBOOT_FLAG));
+ }
+
+-/*
+- * Check whether the given nid is within node id range.
+- */
+-static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid)
+-{
+- if (unlikely(nid < F2FS_ROOT_INO(sbi)))
+- return -EINVAL;
+- if (unlikely(nid >= NM_I(sbi)->max_nid))
+- return -EINVAL;
+- return 0;
+-}
+-
+ /*
+ * Check whether the inode has blocks or not
+ */
+@@ -2720,6 +2708,7 @@ f2fs_hash_t f2fs_dentry_hash(const struct qstr *name_info,
+ struct dnode_of_data;
+ struct node_info;
+
++int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid);
+ bool available_free_memory(struct f2fs_sb_info *sbi, int type);
+ int need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid);
+ bool is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index f8ef04c9f69d..3d91bea4ec90 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -185,6 +185,21 @@ void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page)
+ ri->i_inode_checksum = cpu_to_le32(f2fs_inode_chksum(sbi, page));
+ }
+
++static bool sanity_check_inode(struct inode *inode)
++{
++ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
++
++ if (f2fs_sb_has_flexible_inline_xattr(sbi->sb)
++ && !f2fs_has_extra_attr(inode)) {
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ f2fs_msg(sbi->sb, KERN_WARNING,
++ "%s: corrupted inode ino=%lx, run fsck to fix.",
++ __func__, inode->i_ino);
++ return false;
++ }
++ return true;
++}
++
+ static int do_read_inode(struct inode *inode)
+ {
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+@@ -194,12 +209,8 @@ static int do_read_inode(struct inode *inode)
+ projid_t i_projid;
+
+ /* Check if ino is within scope */
+- if (check_nid_range(sbi, inode->i_ino)) {
+- f2fs_msg(inode->i_sb, KERN_ERR, "bad inode number: %lu",
+- (unsigned long) inode->i_ino);
+- WARN_ON(1);
++ if (check_nid_range(sbi, inode->i_ino))
+ return -EINVAL;
+- }
+
+ node_page = get_node_page(sbi, inode->i_ino);
+ if (IS_ERR(node_page))
+@@ -239,7 +250,6 @@ static int do_read_inode(struct inode *inode)
+ le16_to_cpu(ri->i_extra_isize) : 0;
+
+ if (f2fs_sb_has_flexible_inline_xattr(sbi->sb)) {
+- f2fs_bug_on(sbi, !f2fs_has_extra_attr(inode));
+ fi->i_inline_xattr_size = le16_to_cpu(ri->i_inline_xattr_size);
+ } else if (f2fs_has_inline_xattr(inode) ||
+ f2fs_has_inline_dentry(inode)) {
+@@ -317,6 +327,10 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
+ ret = do_read_inode(inode);
+ if (ret)
+ goto bad_inode;
++ if (!sanity_check_inode(inode)) {
++ ret = -EINVAL;
++ goto bad_inode;
++ }
+ make_now:
+ if (ino == F2FS_NODE_INO(sbi)) {
+ inode->i_mapping->a_ops = &f2fs_node_aops;
+@@ -588,8 +602,11 @@ void f2fs_evict_inode(struct inode *inode)
+ alloc_nid_failed(sbi, inode->i_ino);
+ clear_inode_flag(inode, FI_FREE_NID);
+ } else {
+- f2fs_bug_on(sbi, err &&
+- !exist_written_data(sbi, inode->i_ino, ORPHAN_INO));
++ /*
++ * If xattr nid is corrupted, we can reach out error condition,
++ * err & !exist_written_data(sbi, inode->i_ino, ORPHAN_INO)).
++ * In that case, check_nid_range() is enough to give a clue.
++ */
+ }
+ out_clear:
+ fscrypt_put_encryption_info(inode);
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index f202398e20ea..de48222f48e3 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -29,6 +29,21 @@ static struct kmem_cache *nat_entry_slab;
+ static struct kmem_cache *free_nid_slab;
+ static struct kmem_cache *nat_entry_set_slab;
+
++/*
++ * Check whether the given nid is within node id range.
++ */
++int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid)
++{
++ if (unlikely(nid < F2FS_ROOT_INO(sbi) || nid >= NM_I(sbi)->max_nid)) {
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ f2fs_msg(sbi->sb, KERN_WARNING,
++ "%s: out-of-range nid=%x, run fsck to fix.",
++ __func__, nid);
++ return -EINVAL;
++ }
++ return 0;
++}
++
+ bool available_free_memory(struct f2fs_sb_info *sbi, int type)
+ {
+ struct f2fs_nm_info *nm_i = NM_I(sbi);
+@@ -1158,7 +1173,8 @@ void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
+
+ if (!nid)
+ return;
+- f2fs_bug_on(sbi, check_nid_range(sbi, nid));
++ if (check_nid_range(sbi, nid))
++ return;
+
+ rcu_read_lock();
+ apage = radix_tree_lookup(&NODE_MAPPING(sbi)->i_pages, nid);
+@@ -1182,7 +1198,8 @@ static struct page *__get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid,
+
+ if (!nid)
+ return ERR_PTR(-ENOENT);
+- f2fs_bug_on(sbi, check_nid_range(sbi, nid));
++ if (check_nid_range(sbi, nid))
++ return ERR_PTR(-EINVAL);
+ repeat:
+ page = f2fs_grab_cache_page(NODE_MAPPING(sbi), nid, false);
+ if (!page)
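+
The node.c side of this f2fs series converts f2fs_bug_on() assertions on nids read from disk into ordinary error returns: on-disk data is untrusted input, so a corrupted value should flag the filesystem for fsck rather than crash the kernel. The shape of the check, heavily simplified:

    #include <errno.h>

    /* Untrusted on-disk value: reject it and let the caller degrade
     * gracefully (the real code also sets SBI_NEED_FSCK and logs). */
    static int check_id_range(unsigned int id, unsigned int min, unsigned int max)
    {
            if (id < min || id >= max)
                    return -EINVAL;
            return 0;
    }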
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index be8d1b16b8d1..cffaf842f4e7 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -3600,6 +3600,7 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ unsigned int i, start, end;
+ unsigned int readed, start_blk = 0;
+ int err = 0;
++ block_t total_node_blocks = 0;
+
+ do {
+ readed = ra_meta_pages(sbi, start_blk, BIO_MAX_PAGES,
+@@ -3622,6 +3623,8 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ if (err)
+ return err;
+ seg_info_from_raw_sit(se, &sit);
++ if (IS_NODESEG(se->type))
++ total_node_blocks += se->valid_blocks;
+
+ /* build discard map only one time */
+ if (f2fs_discard_en(sbi)) {
+@@ -3650,15 +3653,28 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ unsigned int old_valid_blocks;
+
+ start = le32_to_cpu(segno_in_journal(journal, i));
++ if (start >= MAIN_SEGS(sbi)) {
++ f2fs_msg(sbi->sb, KERN_ERR,
++ "Wrong journal entry on segno %u",
++ start);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ err = -EINVAL;
++ break;
++ }
++
+ se = &sit_i->sentries[start];
+ sit = sit_in_journal(journal, i);
+
+ old_valid_blocks = se->valid_blocks;
++ if (IS_NODESEG(se->type))
++ total_node_blocks -= old_valid_blocks;
+
+ err = check_block_count(sbi, start, &sit);
+ if (err)
+ break;
+ seg_info_from_raw_sit(se, &sit);
++ if (IS_NODESEG(se->type))
++ total_node_blocks += se->valid_blocks;
+
+ if (f2fs_discard_en(sbi)) {
+ if (is_set_ckpt_flags(sbi, CP_TRIMMED_FLAG)) {
+@@ -3677,6 +3693,15 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ se->valid_blocks - old_valid_blocks;
+ }
+ up_read(&curseg->journal_rwsem);
++
++ if (!err && total_node_blocks != valid_node_count(sbi)) {
++ f2fs_msg(sbi->sb, KERN_ERR,
++ "SIT is corrupted node# %u vs %u",
++ total_node_blocks, valid_node_count(sbi));
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ err = -EINVAL;
++ }
++
+ return err;
+ }
+
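+
The build_sit_entries() change keeps a running count of node blocks while the SIT and its journal are replayed, then compares that total with the checkpoint's independently stored valid_node_count(); disagreement means the SIT is corrupted. The cross-check pattern in miniature, with illustrative names:

    #include <errno.h>
    #include <stddef.h>

    /* Recompute a total from the authoritative entries and compare it with
     * an independently recorded count; a mismatch signals corruption. */
    static int verify_totals(const unsigned int *valid_blocks, size_t n,
                             unsigned int expected)
    {
            unsigned int total = 0;
            size_t i;

            for (i = 0; i < n; i++)
                    total += valid_blocks[i];
            return total == expected ? 0 : -EINVAL;
    }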
+diff --git a/fs/inode.c b/fs/inode.c
+index 3b55391072f3..6c6fef8c7c27 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -2013,8 +2013,14 @@ void inode_init_owner(struct inode *inode, const struct inode *dir,
+ inode->i_uid = current_fsuid();
+ if (dir && dir->i_mode & S_ISGID) {
+ inode->i_gid = dir->i_gid;
++
++ /* Directories are special, and always inherit S_ISGID */
+ if (S_ISDIR(mode))
+ mode |= S_ISGID;
++ else if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP) &&
++ !in_group_p(inode->i_gid) &&
++ !capable_wrt_inode_uidgid(dir, CAP_FSETID))
++ mode &= ~S_ISGID;
+ } else
+ inode->i_gid = current_fsgid();
+ inode->i_mode = mode;
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index c486ad4b43f0..b0abfe02beab 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -831,7 +831,8 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
+ SEQ_PUT_DEC(" kB\nSwap: ", mss->swap);
+ SEQ_PUT_DEC(" kB\nSwapPss: ",
+ mss->swap_pss >> PSS_SHIFT);
+- SEQ_PUT_DEC(" kB\nLocked: ", mss->pss >> PSS_SHIFT);
++ SEQ_PUT_DEC(" kB\nLocked: ",
++ mss->pss_locked >> PSS_SHIFT);
+ seq_puts(m, " kB\n");
+ }
+ if (!rollup_mode) {
+diff --git a/fs/xfs/libxfs/xfs_ialloc_btree.c b/fs/xfs/libxfs/xfs_ialloc_btree.c
+index 367e9a0726e6..ead8c4842c29 100644
+--- a/fs/xfs/libxfs/xfs_ialloc_btree.c
++++ b/fs/xfs/libxfs/xfs_ialloc_btree.c
+@@ -296,7 +296,7 @@ xfs_inobt_verify(
+ case cpu_to_be32(XFS_FIBT_MAGIC):
+ break;
+ default:
+- return NULL;
++ return __this_address;
+ }
+
+ /* level verification */
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 1795fecdea17..6fed495af95c 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -211,6 +211,7 @@ enum {
+ ATA_FLAG_SLAVE_POSS = (1 << 0), /* host supports slave dev */
+ /* (doesn't imply presence) */
+ ATA_FLAG_SATA = (1 << 1),
++ ATA_FLAG_NO_LPM = (1 << 2), /* host not happy with LPM */
+ ATA_FLAG_NO_LOG_PAGE = (1 << 5), /* do not issue log page read */
+ ATA_FLAG_NO_ATAPI = (1 << 6), /* No ATAPI support */
+ ATA_FLAG_PIO_DMA = (1 << 7), /* PIO cmds via DMA */
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 1904e814f282..56212edd6f23 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1610,6 +1610,30 @@ static int get_callee_stack_depth(struct bpf_verifier_env *env,
+ }
+ #endif
+
++static int check_ctx_reg(struct bpf_verifier_env *env,
++ const struct bpf_reg_state *reg, int regno)
++{
++ /* Access to ctx or passing it to a helper is only allowed in
++ * its original, unmodified form.
++ */
++
++ if (reg->off) {
++ verbose(env, "dereference of modified ctx ptr R%d off=%d disallowed\n",
++ regno, reg->off);
++ return -EACCES;
++ }
++
++ if (!tnum_is_const(reg->var_off) || reg->var_off.value) {
++ char tn_buf[48];
++
++ tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
++ verbose(env, "variable ctx access var_off=%s disallowed\n", tn_buf);
++ return -EACCES;
++ }
++
++ return 0;
++}
++
+ /* truncate register to smaller size (in bytes)
+ * must be called with size < BPF_REG_SIZE
+ */
+@@ -1679,24 +1703,11 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ verbose(env, "R%d leaks addr into ctx\n", value_regno);
+ return -EACCES;
+ }
+- /* ctx accesses must be at a fixed offset, so that we can
+- * determine what type of data were returned.
+- */
+- if (reg->off) {
+- verbose(env,
+- "dereference of modified ctx ptr R%d off=%d+%d, ctx+const is allowed, ctx+const+const is not\n",
+- regno, reg->off, off - reg->off);
+- return -EACCES;
+- }
+- if (!tnum_is_const(reg->var_off) || reg->var_off.value) {
+- char tn_buf[48];
+
+- tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+- verbose(env,
+- "variable ctx access var_off=%s off=%d size=%d",
+- tn_buf, off, size);
+- return -EACCES;
+- }
++ err = check_ctx_reg(env, reg, regno);
++ if (err < 0)
++ return err;
++
+ err = check_ctx_access(env, insn_idx, off, size, t, &reg_type);
+ if (!err && t == BPF_READ && value_regno >= 0) {
+ /* ctx access returns either a scalar, or a
+@@ -1977,6 +1988,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
+ expected_type = PTR_TO_CTX;
+ if (type != expected_type)
+ goto err_type;
++ err = check_ctx_reg(env, reg, regno);
++ if (err < 0)
++ return err;
+ } else if (arg_type_is_mem_ptr(arg_type)) {
+ expected_type = PTR_TO_STACK;
+ /* One exception here. In case function allows for NULL to be
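+
Factoring the two tests into check_ctx_reg() matters because the old code only ran them on direct ctx loads; a program could add an offset to the ctx register and then pass it to a helper expecting PTR_TO_CTX, bypassing the check (the new selftests further down in this patch exercise exactly that). A simplified analogue of the invariant, using plain fields where the verifier uses struct bpf_reg_state and tnums:

    #include <errno.h>

    struct reg_state {
            int off;                      /* constant offset added so far */
            unsigned long long var_mask;  /* unknown bits (tnum mask)     */
            unsigned long long var_value; /* known bits (tnum value)      */
    };

    /* A ctx pointer must be in its original form: no constant offset
     * and a variable part that is known to be exactly zero. */
    static int check_ctx_reg(const struct reg_state *reg)
    {
            if (reg->off)
                    return -EACCES;
            if (reg->var_mask != 0 || reg->var_value != 0)
                    return -EACCES;
            return 0;
    }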
+diff --git a/kernel/power/user.c b/kernel/power/user.c
+index 75c959de4b29..abd225550271 100644
+--- a/kernel/power/user.c
++++ b/kernel/power/user.c
+@@ -186,6 +186,11 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
+ res = PAGE_SIZE - pg_offp;
+ }
+
++ if (!data_of(data->handle)) {
++ res = -EINVAL;
++ goto unlock;
++ }
++
+ res = simple_write_to_buffer(data_of(data->handle), res, &pg_offp,
+ buf, count);
+ if (res > 0)
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index bcd93031d042..4e67d0020337 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3363,8 +3363,8 @@ static void print_func_help_header(struct trace_buffer *buf, struct seq_file *m,
+
+ print_event_info(buf, m);
+
+- seq_printf(m, "# TASK-PID CPU# %s TIMESTAMP FUNCTION\n", tgid ? "TGID " : "");
+- seq_printf(m, "# | | | %s | |\n", tgid ? " | " : "");
++ seq_printf(m, "# TASK-PID %s CPU# TIMESTAMP FUNCTION\n", tgid ? "TGID " : "");
++ seq_printf(m, "# | | %s | | |\n", tgid ? " | " : "");
+ }
+
+ static void print_func_help_header_irq(struct trace_buffer *buf, struct seq_file *m,
+@@ -3384,9 +3384,9 @@ static void print_func_help_header_irq(struct trace_buffer *buf, struct seq_file
+ tgid ? tgid_space : space);
+ seq_printf(m, "# %s||| / delay\n",
+ tgid ? tgid_space : space);
+- seq_printf(m, "# TASK-PID CPU#%s|||| TIMESTAMP FUNCTION\n",
++ seq_printf(m, "# TASK-PID %sCPU# |||| TIMESTAMP FUNCTION\n",
+ tgid ? " TGID " : space);
+- seq_printf(m, "# | | | %s|||| | |\n",
++ seq_printf(m, "# | | %s | |||| | |\n",
+ tgid ? " | " : space);
+ }
+
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 02aed76e0978..eebc7c92f6d0 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1451,8 +1451,10 @@ create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
+ }
+
+ ret = __register_trace_kprobe(tk);
+- if (ret < 0)
++ if (ret < 0) {
++ kfree(tk->tp.call.print_fmt);
+ goto error;
++ }
+
+ return &tk->tp.call;
+ error:
+@@ -1472,6 +1474,8 @@ void destroy_local_trace_kprobe(struct trace_event_call *event_call)
+ }
+
+ __unregister_trace_kprobe(tk);
++
++ kfree(tk->tp.call.print_fmt);
+ free_trace_kprobe(tk);
+ }
+ #endif /* CONFIG_PERF_EVENTS */
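+
Both trace_kprobe.c hunks plug the same leak: print_fmt is allocated while the event is built, but was never freed when registration failed or when the local event was torn down. The usual shape of the fix, with illustrative names:

    #include <errno.h>
    #include <stdlib.h>

    struct event {
            char *print_fmt;
    };

    static int register_event(struct event *ev)
    {
            (void)ev;
            return -EINVAL;             /* stand-in for a failing register */
    }

    static int create_event(struct event *ev)
    {
            int ret;

            ev->print_fmt = malloc(64);
            if (!ev->print_fmt)
                    return -ENOMEM;

            ret = register_event(ev);
            if (ret < 0) {
                    free(ev->print_fmt); /* undo the allocation on failure */
                    ev->print_fmt = NULL;
            }
            return ret;
    }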
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index 90db994ac900..1c8e30fda46a 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -594,8 +594,7 @@ int trace_print_context(struct trace_iterator *iter)
+
+ trace_find_cmdline(entry->pid, comm);
+
+- trace_seq_printf(s, "%16s-%-5d [%03d] ",
+- comm, entry->pid, iter->cpu);
++ trace_seq_printf(s, "%16s-%-5d ", comm, entry->pid);
+
+ if (tr->trace_flags & TRACE_ITER_RECORD_TGID) {
+ unsigned int tgid = trace_find_tgid(entry->pid);
+@@ -606,6 +605,8 @@ int trace_print_context(struct trace_iterator *iter)
+ trace_seq_printf(s, "(%5d) ", tgid);
+ }
+
++ trace_seq_printf(s, "[%03d] ", iter->cpu);
++
+ if (tr->trace_flags & TRACE_ITER_IRQ_INFO)
+ trace_print_lat_fmt(s, entry);
+
+diff --git a/mm/gup.c b/mm/gup.c
+index 3d8472d48a0b..e608e6650d60 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1222,8 +1222,6 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
+ int locked = 0;
+ long ret = 0;
+
+- VM_BUG_ON(start & ~PAGE_MASK);
+- VM_BUG_ON(len != PAGE_ALIGN(len));
+ end = start + len;
+
+ for (nstart = start; nstart < end; nstart = nend) {
+diff --git a/mm/mmap.c b/mm/mmap.c
+index fc41c0543d7f..540cfab8c2c4 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -186,8 +186,8 @@ static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
+ return next;
+ }
+
+-static int do_brk(unsigned long addr, unsigned long len, struct list_head *uf);
+-
++static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long flags,
++ struct list_head *uf);
+ SYSCALL_DEFINE1(brk, unsigned long, brk)
+ {
+ unsigned long retval;
+@@ -245,7 +245,7 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
+ goto out;
+
+ /* Ok, looks good - let it rip. */
+- if (do_brk(oldbrk, newbrk-oldbrk, &uf) < 0)
++ if (do_brk_flags(oldbrk, newbrk-oldbrk, 0, &uf) < 0)
+ goto out;
+
+ set_brk:
+@@ -2929,21 +2929,14 @@ static inline void verify_mm_writelocked(struct mm_struct *mm)
+ * anonymous maps. eventually we may be able to do some
+ * brk-specific accounting here.
+ */
+-static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long flags, struct list_head *uf)
++static int do_brk_flags(unsigned long addr, unsigned long len, unsigned long flags, struct list_head *uf)
+ {
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma, *prev;
+- unsigned long len;
+ struct rb_node **rb_link, *rb_parent;
+ pgoff_t pgoff = addr >> PAGE_SHIFT;
+ int error;
+
+- len = PAGE_ALIGN(request);
+- if (len < request)
+- return -ENOMEM;
+- if (!len)
+- return 0;
+-
+ /* Until we need other flags, refuse anything except VM_EXEC. */
+ if ((flags & (~VM_EXEC)) != 0)
+ return -EINVAL;
+@@ -3015,18 +3008,20 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
+ return 0;
+ }
+
+-static int do_brk(unsigned long addr, unsigned long len, struct list_head *uf)
+-{
+- return do_brk_flags(addr, len, 0, uf);
+-}
+-
+-int vm_brk_flags(unsigned long addr, unsigned long len, unsigned long flags)
++int vm_brk_flags(unsigned long addr, unsigned long request, unsigned long flags)
+ {
+ struct mm_struct *mm = current->mm;
++ unsigned long len;
+ int ret;
+ bool populate;
+ LIST_HEAD(uf);
+
++ len = PAGE_ALIGN(request);
++ if (len < request)
++ return -ENOMEM;
++ if (!len)
++ return 0;
++
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
+
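+
Moving the length validation into vm_brk_flags() preserves one subtlety worth noting: PAGE_ALIGN() rounds up, and for a request within a page of ULONG_MAX the addition wraps, leaving a result *smaller* than the input - so "len < request" is the overflow test. Demonstrated:

    #include <stdio.h>

    #define PAGE_SIZE     4096UL
    #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

    int main(void)
    {
            unsigned long request = ~0UL - 100; /* within a page of ULONG_MAX */
            unsigned long len = PAGE_ALIGN(request);

            if (len < request)                  /* this branch is taken */
                    puts("wrapped past zero: return -ENOMEM");
            return 0;
    }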
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index d2d0eb9536a3..322cb12a142f 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6841,6 +6841,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
+ /* Initialise every node */
+ mminit_verify_pageflags_layout();
+ setup_nr_node_ids();
++ zero_resv_unavail();
+ for_each_online_node(nid) {
+ pg_data_t *pgdat = NODE_DATA(nid);
+ free_area_init_node(nid, NULL,
+@@ -6851,7 +6852,6 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
+ node_set_state(nid, N_MEMORY);
+ check_for_memory(pgdat, nid);
+ }
+- zero_resv_unavail();
+ }
+
+ static int __init cmdline_parse_core(char *p, unsigned long *core,
+@@ -7027,9 +7027,9 @@ void __init set_dma_reserve(unsigned long new_dma_reserve)
+
+ void __init free_area_init(unsigned long *zones_size)
+ {
++ zero_resv_unavail();
+ free_area_init_node(0, zones_size,
+ __pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
+- zero_resv_unavail();
+ }
+
+ static int page_alloc_cpu_dead(unsigned int cpu)
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 8d5337fed37b..09a799c9aebd 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -64,6 +64,7 @@
+ #include <linux/backing-dev.h>
+ #include <linux/page_idle.h>
+ #include <linux/memremap.h>
++#include <linux/userfaultfd_k.h>
+
+ #include <asm/tlbflush.h>
+
+@@ -1481,11 +1482,16 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ set_pte_at(mm, address, pvmw.pte, pteval);
+ }
+
+- } else if (pte_unused(pteval)) {
++ } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
+ /*
+ * The guest indicated that the page content is of no
+ * interest anymore. Simply discard the pte, vmscan
+ * will take care of the rest.
++ * A future reference will then fault in a new zero
++ * page. When userfaultfd is active, we must not drop
++ * this page though, as its main user (postcopy
++ * migration) will not expect userfaults on already
++ * copied pages.
+ */
+ dec_mm_counter(mm, mm_counter(page));
+ /* We have to invalidate as we cleared the pte */
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index 6ba639f6c51d..499123afcab5 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -694,6 +694,8 @@ ebt_check_entry(struct ebt_entry *e, struct net *net,
+ }
+ i = 0;
+
++ memset(&mtpar, 0, sizeof(mtpar));
++ memset(&tgpar, 0, sizeof(tgpar));
+ mtpar.net = tgpar.net = net;
+ mtpar.table = tgpar.table = name;
+ mtpar.entryinfo = tgpar.entryinfo = e;
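+
All three netfilter hunks in this patch fix the same pattern: a check-parameter struct on the stack had only some members assigned, leaving the rest (and any padding) as stack garbage that match/target checkentry hooks could read. Zeroing first is the cheap cure. An illustrative struct, not the real xtables one:

    #include <string.h>

    struct match_params {
            const void *net;
            const char *table;
            const void *entryinfo;
            unsigned int hook_mask;    /* previously left uninitialized */
    };

    static void setup_params(struct match_params *p,
                             const void *net, const char *table)
    {
            memset(p, 0, sizeof(*p));  /* zero every field and the padding */
            p->net = net;
            p->table = table;
            /* remaining members now have a defined (zero) value */
    }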
+diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
+index e85f35b89c49..f6130704f052 100644
+--- a/net/ipv4/netfilter/ip_tables.c
++++ b/net/ipv4/netfilter/ip_tables.c
+@@ -531,6 +531,7 @@ find_check_entry(struct ipt_entry *e, struct net *net, const char *name,
+ return -ENOMEM;
+
+ j = 0;
++ memset(&mtpar, 0, sizeof(mtpar));
+ mtpar.net = net;
+ mtpar.table = name;
+ mtpar.entryinfo = &e->ip;
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index 97f79dc943d7..685c2168f524 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -551,6 +551,7 @@ find_check_entry(struct ip6t_entry *e, struct net *net, const char *name,
+ return -ENOMEM;
+
+ j = 0;
++ memset(&mtpar, 0, sizeof(mtpar));
+ mtpar.net = net;
+ mtpar.table = name;
+ mtpar.entryinfo = &e->ipv6;
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index 74a04638ef03..93b3e4b86870 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -1223,6 +1223,9 @@ static int nfqnl_recv_unsupp(struct net *net, struct sock *ctnl,
+ static const struct nla_policy nfqa_cfg_policy[NFQA_CFG_MAX+1] = {
+ [NFQA_CFG_CMD] = { .len = sizeof(struct nfqnl_msg_config_cmd) },
+ [NFQA_CFG_PARAMS] = { .len = sizeof(struct nfqnl_msg_config_params) },
++ [NFQA_CFG_QUEUE_MAXLEN] = { .type = NLA_U32 },
++ [NFQA_CFG_MASK] = { .type = NLA_U32 },
++ [NFQA_CFG_FLAGS] = { .type = NLA_U32 },
+ };
+
+ static const struct nf_queue_handler nfqh = {
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index ed39a77f9253..76d82eebeef2 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -33,6 +33,7 @@
+ #include <linux/delay.h>
+ #include <linux/slab.h>
+ #include <linux/module.h>
++#include <linux/pm_runtime.h>
+ #include <sound/core.h>
+ #include <sound/jack.h>
+ #include <sound/asoundef.h>
+@@ -764,8 +765,10 @@ static void check_presence_and_report(struct hda_codec *codec, hda_nid_t nid,
+
+ if (pin_idx < 0)
+ return;
++ mutex_lock(&spec->pcm_lock);
+ if (hdmi_present_sense(get_pin(spec, pin_idx), 1))
+ snd_hda_jack_report_sync(codec);
++ mutex_unlock(&spec->pcm_lock);
+ }
+
+ static void jack_callback(struct hda_codec *codec,
+@@ -1628,21 +1631,23 @@ static void sync_eld_via_acomp(struct hda_codec *codec,
+ static bool hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll)
+ {
+ struct hda_codec *codec = per_pin->codec;
+- struct hdmi_spec *spec = codec->spec;
+ int ret;
+
+ /* no temporary power up/down needed for component notifier */
+- if (!codec_has_acomp(codec))
+- snd_hda_power_up_pm(codec);
++ if (!codec_has_acomp(codec)) {
++ ret = snd_hda_power_up_pm(codec);
++ if (ret < 0 && pm_runtime_suspended(hda_codec_dev(codec))) {
++ snd_hda_power_down_pm(codec);
++ return false;
++ }
++ }
+
+- mutex_lock(&spec->pcm_lock);
+ if (codec_has_acomp(codec)) {
+ sync_eld_via_acomp(codec, per_pin);
+ ret = false; /* don't call snd_hda_jack_report_sync() */
+ } else {
+ ret = hdmi_present_sense_via_verbs(per_pin, repoll);
+ }
+- mutex_unlock(&spec->pcm_lock);
+
+ if (!codec_has_acomp(codec))
+ snd_hda_power_down_pm(codec);
+@@ -1654,12 +1659,16 @@ static void hdmi_repoll_eld(struct work_struct *work)
+ {
+ struct hdmi_spec_per_pin *per_pin =
+ container_of(to_delayed_work(work), struct hdmi_spec_per_pin, work);
++ struct hda_codec *codec = per_pin->codec;
++ struct hdmi_spec *spec = codec->spec;
+
+ if (per_pin->repoll_count++ > 6)
+ per_pin->repoll_count = 0;
+
++ mutex_lock(&spec->pcm_lock);
+ if (hdmi_present_sense(per_pin, per_pin->repoll_count))
+ snd_hda_jack_report_sync(per_pin->codec);
++ mutex_unlock(&spec->pcm_lock);
+ }
+
+ static void intel_haswell_fixup_connect_list(struct hda_codec *codec,
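+
The patch_hdmi.c rework lifts pcm_lock out of hdmi_present_sense() into its callers, so the whole sense-and-report sequence runs under one critical section (and the leaf can bail out cleanly when runtime resume fails). Widening a lock's scope by moving it to the caller, in miniature with pthreads:

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared_state;

    static int sense(void)             /* leaf: caller must hold the lock */
    {
            return shared_state != 0;
    }

    static void check_and_report(void)
    {
            pthread_mutex_lock(&lock); /* one section covers both steps */
            if (sense())
                    shared_state = 0;  /* report/consume under the same lock */
            pthread_mutex_unlock(&lock);
    }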
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index cb9a977bf188..066efe783fe8 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6586,7 +6586,6 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x312a, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+- SND_PCI_QUIRK(0x17aa, 0x3136, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+@@ -6770,6 +6769,11 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ {0x1a, 0x02a11040},
+ {0x1b, 0x01014020},
+ {0x21, 0x0221101f}),
++ SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
++ {0x14, 0x90170110},
++ {0x19, 0x02a11020},
++ {0x1a, 0x02a11030},
++ {0x21, 0x0221101f}),
+ SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ {0x12, 0x90a60140},
+ {0x14, 0x90170110},
+diff --git a/tools/build/Build.include b/tools/build/Build.include
+index a4bbb984941d..d9048f145f97 100644
+--- a/tools/build/Build.include
++++ b/tools/build/Build.include
+@@ -63,8 +63,8 @@ dep-cmd = $(if $(wildcard $(fixdep)),
+ $(fixdep) $(depfile) $@ '$(make-cmd)' > $(dot-target).tmp; \
+ rm -f $(depfile); \
+ mv -f $(dot-target).tmp $(dot-target).cmd, \
+- printf '\# cannot find fixdep (%s)\n' $(fixdep) > $(dot-target).cmd; \
+- printf '\# using basic dep data\n\n' >> $(dot-target).cmd; \
++ printf '$(pound) cannot find fixdep (%s)\n' $(fixdep) > $(dot-target).cmd; \
++ printf '$(pound) using basic dep data\n\n' >> $(dot-target).cmd; \
+ cat $(depfile) >> $(dot-target).cmd; \
+ printf '\n%s\n' 'cmd_$@ := $(make-cmd)' >> $(dot-target).cmd)
+
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index fd7de7eb329e..30025d7e75e4 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -8190,7 +8190,7 @@ static struct bpf_test tests[] = {
+ offsetof(struct __sk_buff, mark)),
+ BPF_EXIT_INSN(),
+ },
+- .errstr = "dereference of modified ctx ptr R1 off=68+8, ctx+const is allowed, ctx+const+const is not",
++ .errstr = "dereference of modified ctx ptr",
+ .result = REJECT,
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ },
+@@ -11423,6 +11423,62 @@ static struct bpf_test tests[] = {
+ .errstr = "BPF_XADD stores into R2 packet",
+ .prog_type = BPF_PROG_TYPE_XDP,
+ },
++ {
++ "pass unmodified ctx pointer to helper",
++ .insns = {
++ BPF_MOV64_IMM(BPF_REG_2, 0),
++ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
++ BPF_FUNC_csum_update),
++ BPF_MOV64_IMM(BPF_REG_0, 0),
++ BPF_EXIT_INSN(),
++ },
++ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
++ .result = ACCEPT,
++ },
++ {
++ "pass modified ctx pointer to helper, 1",
++ .insns = {
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
++ BPF_MOV64_IMM(BPF_REG_2, 0),
++ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
++ BPF_FUNC_csum_update),
++ BPF_MOV64_IMM(BPF_REG_0, 0),
++ BPF_EXIT_INSN(),
++ },
++ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
++ .result = REJECT,
++ .errstr = "dereference of modified ctx ptr",
++ },
++ {
++ "pass modified ctx pointer to helper, 2",
++ .insns = {
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
++ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
++ BPF_FUNC_get_socket_cookie),
++ BPF_MOV64_IMM(BPF_REG_0, 0),
++ BPF_EXIT_INSN(),
++ },
++ .result_unpriv = REJECT,
++ .result = REJECT,
++ .errstr_unpriv = "dereference of modified ctx ptr",
++ .errstr = "dereference of modified ctx ptr",
++ },
++ {
++ "pass modified ctx pointer to helper, 3",
++ .insns = {
++ BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 0),
++ BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 4),
++ BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
++ BPF_MOV64_IMM(BPF_REG_2, 0),
++ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
++ BPF_FUNC_csum_update),
++ BPF_MOV64_IMM(BPF_REG_0, 0),
++ BPF_EXIT_INSN(),
++ },
++ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
++ .result = REJECT,
++ .errstr = "variable ctx access var_off=(0x0; 0x4)",
++ },
+ };
+
+ static int probe_filter_length(const struct bpf_insn *fp)
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-07-12 15:15 Alice Ferrazzi
0 siblings, 0 replies; 30+ messages in thread
From: Alice Ferrazzi @ 2018-07-12 15:15 UTC (permalink / raw
To: gentoo-commits
commit: 8d4733995b6d3df95909b5a116fae17c658d9555
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Jul 12 15:14:05 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Jul 12 15:14:05 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8d473399
Update to Linux kernel 4.17.6
0000_README | 4 +
1005_linux-4.17.6.patch | 2386 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2390 insertions(+)
diff --git a/0000_README b/0000_README
index 33f7bd8..b414442 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-4.17.5.patch
From: http://www.kernel.org
Desc: Linux 4.17.5
+Patch: 1005_linux-4.17.6.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.6
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1005_linux-4.17.6.patch b/1005_linux-4.17.6.patch
new file mode 100644
index 0000000..7f17226
--- /dev/null
+++ b/1005_linux-4.17.6.patch
@@ -0,0 +1,2386 @@
+diff --git a/Makefile b/Makefile
+index e4ddbad49636..1a885c8f82ef 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/arm/boot/dts/am3517.dtsi b/arch/arm/boot/dts/am3517.dtsi
+index ca294914bbb1..4b6062b631b1 100644
+--- a/arch/arm/boot/dts/am3517.dtsi
++++ b/arch/arm/boot/dts/am3517.dtsi
+@@ -39,6 +39,8 @@
+ ti,davinci-ctrl-ram-size = <0x2000>;
+ ti,davinci-rmii-en = /bits/ 8 <1>;
+ local-mac-address = [ 00 00 00 00 00 00 ];
++ clocks = <&emac_ick>;
++ clock-names = "ick";
+ };
+
+ davinci_mdio: ethernet@5c030000 {
+@@ -49,6 +51,8 @@
+ bus_freq = <1000000>;
+ #address-cells = <1>;
+ #size-cells = <0>;
++ clocks = <&emac_fck>;
++ clock-names = "fck";
+ };
+
+ uart4: serial@4809e000 {
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index f4ddd86f2c77..9cace9f3dd15 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -1582,7 +1582,6 @@
+ dr_mode = "otg";
+ snps,dis_u3_susphy_quirk;
+ snps,dis_u2_susphy_quirk;
+- snps,dis_metastability_quirk;
+ };
+ };
+
+@@ -1610,6 +1609,7 @@
+ dr_mode = "otg";
+ snps,dis_u3_susphy_quirk;
+ snps,dis_u2_susphy_quirk;
++ snps,dis_metastability_quirk;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/imx51-zii-rdu1.dts b/arch/arm/boot/dts/imx51-zii-rdu1.dts
+index 6464f2560e06..0662217751dc 100644
+--- a/arch/arm/boot/dts/imx51-zii-rdu1.dts
++++ b/arch/arm/boot/dts/imx51-zii-rdu1.dts
+@@ -768,7 +768,7 @@
+
+ pinctrl_ts: tsgrp {
+ fsl,pins = <
+- MX51_PAD_CSI1_D8__GPIO3_12 0x85
++ MX51_PAD_CSI1_D8__GPIO3_12 0x04
+ MX51_PAD_CSI1_D9__GPIO3_13 0x85
+ >;
+ };
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index f03402efab4b..3891805bfcdd 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -1265,7 +1265,7 @@ cleanup_critical:
+ jl 0f
+ clg %r9,BASED(.Lcleanup_table+104) # .Lload_fpu_regs_end
+ jl .Lcleanup_load_fpu_regs
+-0: BR_EX %r14
++0: BR_EX %r14,%r11
+
+ .align 8
+ .Lcleanup_table:
+@@ -1301,7 +1301,7 @@ cleanup_critical:
+ ni __SIE_PROG0C+3(%r9),0xfe # no longer in SIE
+ lctlg %c1,%c1,__LC_USER_ASCE # load primary asce
+ larl %r9,sie_exit # skip forward to sie_exit
+- BR_EX %r14
++ BR_EX %r14,%r11
+ #endif
+
+ .Lcleanup_system_call:
+diff --git a/drivers/acpi/acpica/uterror.c b/drivers/acpi/acpica/uterror.c
+index 5a64ddaed8a3..e47430272692 100644
+--- a/drivers/acpi/acpica/uterror.c
++++ b/drivers/acpi/acpica/uterror.c
+@@ -182,19 +182,19 @@ acpi_ut_prefixed_namespace_error(const char *module_name,
+ switch (lookup_status) {
+ case AE_ALREADY_EXISTS:
+
+- acpi_os_printf("\n" ACPI_MSG_BIOS_ERROR);
++ acpi_os_printf(ACPI_MSG_BIOS_ERROR);
+ message = "Failure creating";
+ break;
+
+ case AE_NOT_FOUND:
+
+- acpi_os_printf("\n" ACPI_MSG_BIOS_ERROR);
++ acpi_os_printf(ACPI_MSG_BIOS_ERROR);
+ message = "Could not resolve";
+ break;
+
+ default:
+
+- acpi_os_printf("\n" ACPI_MSG_ERROR);
++ acpi_os_printf(ACPI_MSG_ERROR);
+ message = "Failure resolving";
+ break;
+ }
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index bdb24d636d9a..4cc7bfec76ff 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -709,10 +709,11 @@ void battery_hook_register(struct acpi_battery_hook *hook)
+ */
+ pr_err("extension failed to load: %s", hook->name);
+ __battery_hook_unregister(hook, 0);
+- return;
++ goto end;
+ }
+ }
+ pr_info("new extension: %s\n", hook->name);
++end:
+ mutex_unlock(&hook_mutex);
+ }
+ EXPORT_SYMBOL_GPL(battery_hook_register);
+@@ -724,7 +725,7 @@ EXPORT_SYMBOL_GPL(battery_hook_register);
+ */
+ static void battery_hook_add_battery(struct acpi_battery *battery)
+ {
+- struct acpi_battery_hook *hook_node;
++ struct acpi_battery_hook *hook_node, *tmp;
+
+ mutex_lock(&hook_mutex);
+ INIT_LIST_HEAD(&battery->list);
+@@ -736,15 +737,15 @@ static void battery_hook_add_battery(struct acpi_battery *battery)
+ * when a battery gets hotplugged or initialized
+ * during the battery module initialization.
+ */
+- list_for_each_entry(hook_node, &battery_hook_list, list) {
++ list_for_each_entry_safe(hook_node, tmp, &battery_hook_list, list) {
+ if (hook_node->add_battery(battery->bat)) {
+ /*
+ * The notification of the extensions has failed, to
+ * prevent further errors we will unload the extension.
+ */
+- __battery_hook_unregister(hook_node, 0);
+ pr_err("error in extension, unloading: %s",
+ hook_node->name);
++ __battery_hook_unregister(hook_node, 0);
+ }
+ }
+ mutex_unlock(&hook_mutex);
+diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
+index 1476cb3439f4..5e793dd7adfb 100644
+--- a/drivers/block/drbd/drbd_worker.c
++++ b/drivers/block/drbd/drbd_worker.c
+@@ -282,8 +282,8 @@ void drbd_request_endio(struct bio *bio)
+ what = COMPLETED_OK;
+ }
+
+- bio_put(req->private_bio);
+ req->private_bio = ERR_PTR(blk_status_to_errno(bio->bi_status));
++ bio_put(bio);
+
+ /* not req_mod(), we need irqsave here! */
+ spin_lock_irqsave(&device->resource->req_lock, flags);
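+
The drbd_worker.c swap is a use-after-free fix: req->private_bio and bio are the same object, so the old order dropped the last bio reference and then read bio->bi_status from freed memory. Read first, put last - sketched with a toy refcount:

    #include <stdlib.h>

    struct bio {
            int refcnt;
            int status;
    };

    static void bio_put(struct bio *b)
    {
            if (--b->refcnt == 0)
                    free(b);          /* b may be gone after this */
    }

    static int complete_request(struct bio *b)
    {
            int status = b->status;   /* copy out everything needed ... */

            bio_put(b);               /* ... before dropping the ref */
            return status;            /* no access to b past this point */
    }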
+diff --git a/drivers/dax/super.c b/drivers/dax/super.c
+index 2b2332b605e4..1d2de641cabb 100644
+--- a/drivers/dax/super.c
++++ b/drivers/dax/super.c
+@@ -74,42 +74,50 @@ EXPORT_SYMBOL_GPL(fs_dax_get_by_bdev);
+
+ /**
+ * __bdev_dax_supported() - Check if the device supports dax for filesystem
+- * @sb: The superblock of the device
++ * @bdev: block device to check
+ * @blocksize: The block size of the device
+ *
+ * This is a library function for filesystems to check if the block device
+ * can be mounted with dax option.
+ *
+- * Return: negative errno if unsupported, 0 if supported.
++ * Return: true if supported, false if unsupported
+ */
+-int __bdev_dax_supported(struct super_block *sb, int blocksize)
++bool __bdev_dax_supported(struct block_device *bdev, int blocksize)
+ {
+- struct block_device *bdev = sb->s_bdev;
+ struct dax_device *dax_dev;
++ struct request_queue *q;
+ pgoff_t pgoff;
+ int err, id;
+ void *kaddr;
+ pfn_t pfn;
+ long len;
++ char buf[BDEVNAME_SIZE];
+
+ if (blocksize != PAGE_SIZE) {
+- pr_debug("VFS (%s): error: unsupported blocksize for dax\n",
+- sb->s_id);
+- return -EINVAL;
++ pr_debug("%s: error: unsupported blocksize for dax\n",
++ bdevname(bdev, buf));
++ return false;
++ }
++
++ q = bdev_get_queue(bdev);
++ if (!q || !blk_queue_dax(q)) {
++ pr_debug("%s: error: request queue doesn't support dax\n",
++ bdevname(bdev, buf));
++ return false;
+ }
+
+ err = bdev_dax_pgoff(bdev, 0, PAGE_SIZE, &pgoff);
+ if (err) {
+- pr_debug("VFS (%s): error: unaligned partition for dax\n",
+- sb->s_id);
+- return err;
++ pr_debug("%s: error: unaligned partition for dax\n",
++ bdevname(bdev, buf));
++ return false;
+ }
+
+ dax_dev = dax_get_by_host(bdev->bd_disk->disk_name);
+ if (!dax_dev) {
+- pr_debug("VFS (%s): error: device does not support dax\n",
+- sb->s_id);
+- return -EOPNOTSUPP;
++ pr_debug("%s: error: device does not support dax\n",
++ bdevname(bdev, buf));
++ return false;
+ }
+
+ id = dax_read_lock();
+@@ -119,9 +127,9 @@ int __bdev_dax_supported(struct super_block *sb, int blocksize)
+ put_dax(dax_dev);
+
+ if (len < 1) {
+- pr_debug("VFS (%s): error: dax access failed (%ld)\n",
+- sb->s_id, len);
+- return len < 0 ? len : -EIO;
++ pr_debug("%s: error: dax access failed (%ld)\n",
++ bdevname(bdev, buf), len);
++ return false;
+ }
+
+ if (IS_ENABLED(CONFIG_FS_DAX_LIMITED) && pfn_t_special(pfn)) {
+@@ -137,12 +145,12 @@ int __bdev_dax_supported(struct super_block *sb, int blocksize)
+ } else if (pfn_t_devmap(pfn)) {
+ /* pass */;
+ } else {
+- pr_debug("VFS (%s): error: dax support not enabled\n",
+- sb->s_id);
+- return -EOPNOTSUPP;
++ pr_debug("%s: error: dax support not enabled\n",
++ bdevname(bdev, buf));
++ return false;
+ }
+
+- return 0;
++ return true;
+ }
+ EXPORT_SYMBOL_GPL(__bdev_dax_supported);
+ #endif
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index c8b605f3dc05..06401f0cde6d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -188,6 +188,7 @@ struct amdgpu_job;
+ struct amdgpu_irq_src;
+ struct amdgpu_fpriv;
+ struct amdgpu_bo_va_mapping;
++struct amdgpu_atif;
+
+ enum amdgpu_cp_irq {
+ AMDGPU_CP_IRQ_GFX_EOP = 0,
+@@ -1246,43 +1247,6 @@ struct amdgpu_vram_scratch {
+ /*
+ * ACPI
+ */
+-struct amdgpu_atif_notification_cfg {
+- bool enabled;
+- int command_code;
+-};
+-
+-struct amdgpu_atif_notifications {
+- bool display_switch;
+- bool expansion_mode_change;
+- bool thermal_state;
+- bool forced_power_state;
+- bool system_power_state;
+- bool display_conf_change;
+- bool px_gfx_switch;
+- bool brightness_change;
+- bool dgpu_display_event;
+-};
+-
+-struct amdgpu_atif_functions {
+- bool system_params;
+- bool sbios_requests;
+- bool select_active_disp;
+- bool lid_state;
+- bool get_tv_standard;
+- bool set_tv_standard;
+- bool get_panel_expansion_mode;
+- bool set_panel_expansion_mode;
+- bool temperature_change;
+- bool graphics_device_types;
+-};
+-
+-struct amdgpu_atif {
+- struct amdgpu_atif_notifications notifications;
+- struct amdgpu_atif_functions functions;
+- struct amdgpu_atif_notification_cfg notification_cfg;
+- struct amdgpu_encoder *encoder_for_bl;
+-};
+-
+ struct amdgpu_atcs_functions {
+ bool get_ext_state;
+ bool pcie_perf_req;
+@@ -1430,7 +1394,7 @@ struct amdgpu_device {
+ #if defined(CONFIG_DEBUG_FS)
+ struct dentry *debugfs_regs[AMDGPU_DEBUGFS_MAX_COMPONENTS];
+ #endif
+- struct amdgpu_atif atif;
++ struct amdgpu_atif *atif;
+ struct amdgpu_atcs atcs;
+ struct mutex srbm_mutex;
+ /* GRBM index mutex. Protects concurrent access to GRBM index */
+@@ -1855,6 +1819,12 @@ static inline bool amdgpu_atpx_dgpu_req_power_for_displays(void) { return false;
+ static inline bool amdgpu_has_atpx(void) { return false; }
+ #endif
+
++#if defined(CONFIG_VGA_SWITCHEROO) && defined(CONFIG_ACPI)
++void *amdgpu_atpx_get_dhandle(void);
++#else
++static inline void *amdgpu_atpx_get_dhandle(void) { return NULL; }
++#endif
++
+ /*
+ * KMS
+ */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 8fa850a070e0..0d8c3fc6eace 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -34,6 +34,45 @@
+ #include "amd_acpi.h"
+ #include "atom.h"
+
++struct amdgpu_atif_notification_cfg {
++ bool enabled;
++ int command_code;
++};
++
++struct amdgpu_atif_notifications {
++ bool display_switch;
++ bool expansion_mode_change;
++ bool thermal_state;
++ bool forced_power_state;
++ bool system_power_state;
++ bool display_conf_change;
++ bool px_gfx_switch;
++ bool brightness_change;
++ bool dgpu_display_event;
++};
++
++struct amdgpu_atif_functions {
++ bool system_params;
++ bool sbios_requests;
++ bool select_active_disp;
++ bool lid_state;
++ bool get_tv_standard;
++ bool set_tv_standard;
++ bool get_panel_expansion_mode;
++ bool set_panel_expansion_mode;
++ bool temperature_change;
++ bool graphics_device_types;
++};
++
++struct amdgpu_atif {
++ acpi_handle handle;
++
++ struct amdgpu_atif_notifications notifications;
++ struct amdgpu_atif_functions functions;
++ struct amdgpu_atif_notification_cfg notification_cfg;
++ struct amdgpu_encoder *encoder_for_bl;
++};
++
+ /* Call the ATIF method
+ */
+ /**
+@@ -46,8 +85,9 @@
+ * Executes the requested ATIF function (all asics).
+ * Returns a pointer to the acpi output buffer.
+ */
+-static union acpi_object *amdgpu_atif_call(acpi_handle handle, int function,
+- struct acpi_buffer *params)
++static union acpi_object *amdgpu_atif_call(struct amdgpu_atif *atif,
++ int function,
++ struct acpi_buffer *params)
+ {
+ acpi_status status;
+ union acpi_object atif_arg_elements[2];
+@@ -70,7 +110,8 @@ static union acpi_object *amdgpu_atif_call(acpi_handle handle, int function,
+ atif_arg_elements[1].integer.value = 0;
+ }
+
+- status = acpi_evaluate_object(handle, "ATIF", &atif_arg, &buffer);
++ status = acpi_evaluate_object(atif->handle, NULL, &atif_arg,
++ &buffer);
+
+ /* Fail only if calling the method fails and ATIF is supported */
+ if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) {
+@@ -141,15 +182,14 @@ static void amdgpu_atif_parse_functions(struct amdgpu_atif_functions *f, u32 mas
+ * (all asics).
+ * returns 0 on success, error on failure.
+ */
+-static int amdgpu_atif_verify_interface(acpi_handle handle,
+- struct amdgpu_atif *atif)
++static int amdgpu_atif_verify_interface(struct amdgpu_atif *atif)
+ {
+ union acpi_object *info;
+ struct atif_verify_interface output;
+ size_t size;
+ int err = 0;
+
+- info = amdgpu_atif_call(handle, ATIF_FUNCTION_VERIFY_INTERFACE, NULL);
++ info = amdgpu_atif_call(atif, ATIF_FUNCTION_VERIFY_INTERFACE, NULL);
+ if (!info)
+ return -EIO;
+
+@@ -176,6 +216,35 @@ static int amdgpu_atif_verify_interface(acpi_handle handle,
+ return err;
+ }
+
++static acpi_handle amdgpu_atif_probe_handle(acpi_handle dhandle)
++{
++ acpi_handle handle = NULL;
++ char acpi_method_name[255] = { 0 };
++ struct acpi_buffer buffer = { sizeof(acpi_method_name), acpi_method_name };
++ acpi_status status;
++
++ /* For PX/HG systems, ATIF and ATPX are in the iGPU's namespace, on dGPU only
++ * systems, ATIF is in the dGPU's namespace.
++ */
++ status = acpi_get_handle(dhandle, "ATIF", &handle);
++ if (ACPI_SUCCESS(status))
++ goto out;
++
++ if (amdgpu_has_atpx()) {
++ status = acpi_get_handle(amdgpu_atpx_get_dhandle(), "ATIF",
++ &handle);
++ if (ACPI_SUCCESS(status))
++ goto out;
++ }
++
++ DRM_DEBUG_DRIVER("No ATIF handle found\n");
++ return NULL;
++out:
++ acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer);
++ DRM_DEBUG_DRIVER("Found ATIF handle %s\n", acpi_method_name);
++ return handle;
++}
++
+ /**
+ * amdgpu_atif_get_notification_params - determine notify configuration
+ *
+@@ -188,15 +257,16 @@ static int amdgpu_atif_verify_interface(acpi_handle handle,
+ * where n is specified in the result if a notifier is used.
+ * Returns 0 on success, error on failure.
+ */
+-static int amdgpu_atif_get_notification_params(acpi_handle handle,
+- struct amdgpu_atif_notification_cfg *n)
++static int amdgpu_atif_get_notification_params(struct amdgpu_atif *atif)
+ {
+ union acpi_object *info;
++ struct amdgpu_atif_notification_cfg *n = &atif->notification_cfg;
+ struct atif_system_params params;
+ size_t size;
+ int err = 0;
+
+- info = amdgpu_atif_call(handle, ATIF_FUNCTION_GET_SYSTEM_PARAMETERS, NULL);
++ info = amdgpu_atif_call(atif, ATIF_FUNCTION_GET_SYSTEM_PARAMETERS,
++ NULL);
+ if (!info) {
+ err = -EIO;
+ goto out;
+@@ -250,14 +320,15 @@ static int amdgpu_atif_get_notification_params(acpi_handle handle,
+ * (all asics).
+ * Returns 0 on success, error on failure.
+ */
+-static int amdgpu_atif_get_sbios_requests(acpi_handle handle,
+- struct atif_sbios_requests *req)
++static int amdgpu_atif_get_sbios_requests(struct amdgpu_atif *atif,
++ struct atif_sbios_requests *req)
+ {
+ union acpi_object *info;
+ size_t size;
+ int count = 0;
+
+- info = amdgpu_atif_call(handle, ATIF_FUNCTION_GET_SYSTEM_BIOS_REQUESTS, NULL);
++ info = amdgpu_atif_call(atif, ATIF_FUNCTION_GET_SYSTEM_BIOS_REQUESTS,
++ NULL);
+ if (!info)
+ return -EIO;
+
+@@ -290,11 +361,10 @@ static int amdgpu_atif_get_sbios_requests(acpi_handle handle,
+ * Returns NOTIFY code
+ */
+ static int amdgpu_atif_handler(struct amdgpu_device *adev,
+- struct acpi_bus_event *event)
++ struct acpi_bus_event *event)
+ {
+- struct amdgpu_atif *atif = &adev->atif;
++ struct amdgpu_atif *atif = adev->atif;
+ struct atif_sbios_requests req;
+- acpi_handle handle;
+ int count;
+
+ DRM_DEBUG_DRIVER("event, device_class = %s, type = %#x\n",
+@@ -303,14 +373,14 @@ static int amdgpu_atif_handler(struct amdgpu_device *adev,
+ if (strcmp(event->device_class, ACPI_VIDEO_CLASS) != 0)
+ return NOTIFY_DONE;
+
+- if (!atif->notification_cfg.enabled ||
++ if (!atif ||
++ !atif->notification_cfg.enabled ||
+ event->type != atif->notification_cfg.command_code)
+ /* Not our event */
+ return NOTIFY_DONE;
+
+ /* Check pending SBIOS requests */
+- handle = ACPI_HANDLE(&adev->pdev->dev);
+- count = amdgpu_atif_get_sbios_requests(handle, &req);
++ count = amdgpu_atif_get_sbios_requests(atif, &req);
+
+ if (count <= 0)
+ return NOTIFY_DONE;
+@@ -641,8 +711,8 @@ static int amdgpu_acpi_event(struct notifier_block *nb,
+ */
+ int amdgpu_acpi_init(struct amdgpu_device *adev)
+ {
+- acpi_handle handle;
+- struct amdgpu_atif *atif = &adev->atif;
++ acpi_handle handle, atif_handle;
++ struct amdgpu_atif *atif;
+ struct amdgpu_atcs *atcs = &adev->atcs;
+ int ret;
+
+@@ -658,12 +728,26 @@ int amdgpu_acpi_init(struct amdgpu_device *adev)
+ DRM_DEBUG_DRIVER("Call to ATCS verify_interface failed: %d\n", ret);
+ }
+
++ /* Probe for ATIF, and initialize it if found */
++ atif_handle = amdgpu_atif_probe_handle(handle);
++ if (!atif_handle)
++ goto out;
++
++ atif = kzalloc(sizeof(*atif), GFP_KERNEL);
++ if (!atif) {
++ DRM_WARN("Not enough memory to initialize ATIF\n");
++ goto out;
++ }
++ atif->handle = atif_handle;
++
+ /* Call the ATIF method */
+- ret = amdgpu_atif_verify_interface(handle, atif);
++ ret = amdgpu_atif_verify_interface(atif);
+ if (ret) {
+ DRM_DEBUG_DRIVER("Call to ATIF verify_interface failed: %d\n", ret);
++ kfree(atif);
+ goto out;
+ }
++ adev->atif = atif;
+
+ if (atif->notifications.brightness_change) {
+ struct drm_encoder *tmp;
+@@ -693,8 +777,7 @@ int amdgpu_acpi_init(struct amdgpu_device *adev)
+ }
+
+ if (atif->functions.system_params) {
+- ret = amdgpu_atif_get_notification_params(handle,
+- &atif->notification_cfg);
++ ret = amdgpu_atif_get_notification_params(atif);
+ if (ret) {
+ DRM_DEBUG_DRIVER("Call to GET_SYSTEM_PARAMS failed: %d\n",
+ ret);
+@@ -720,4 +803,6 @@ int amdgpu_acpi_init(struct amdgpu_device *adev)
+ void amdgpu_acpi_fini(struct amdgpu_device *adev)
+ {
+ unregister_acpi_notifier(&adev->acpi_nb);
++ if (adev->atif)
++ kfree(adev->atif);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
+index 1ae5ae8c45a4..2593b106d970 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
+@@ -90,6 +90,12 @@ bool amdgpu_atpx_dgpu_req_power_for_displays(void) {
+ return amdgpu_atpx_priv.atpx.dgpu_req_power_for_displays;
+ }
+
++#if defined(CONFIG_ACPI)
++void *amdgpu_atpx_get_dhandle(void) {
++ return amdgpu_atpx_priv.dhandle;
++}
++#endif
++
+ /**
+ * amdgpu_atpx_call - call an ATPX method
+ *
+diff --git a/drivers/gpu/drm/drm_property.c b/drivers/gpu/drm/drm_property.c
+index 8f4672daac7f..52174d017fb4 100644
+--- a/drivers/gpu/drm/drm_property.c
++++ b/drivers/gpu/drm/drm_property.c
+@@ -533,7 +533,7 @@ static void drm_property_free_blob(struct kref *kref)
+
+ drm_mode_object_unregister(blob->dev, &blob->base);
+
+- kfree(blob);
++ kvfree(blob);
+ }
+
+ /**
+@@ -560,7 +560,7 @@ drm_property_create_blob(struct drm_device *dev, size_t length,
+ if (!length || length > ULONG_MAX - sizeof(struct drm_property_blob))
+ return ERR_PTR(-EINVAL);
+
+- blob = kzalloc(sizeof(struct drm_property_blob)+length, GFP_KERNEL);
++ blob = kvzalloc(sizeof(struct drm_property_blob)+length, GFP_KERNEL);
+ if (!blob)
+ return ERR_PTR(-ENOMEM);
+
+@@ -577,7 +577,7 @@ drm_property_create_blob(struct drm_device *dev, size_t length,
+ ret = __drm_mode_object_add(dev, &blob->base, DRM_MODE_OBJECT_BLOB,
+ true, drm_property_free_blob);
+ if (ret) {
+- kfree(blob);
++ kvfree(blob);
+ return ERR_PTR(-EINVAL);
+ }
+
+diff --git a/drivers/gpu/drm/udl/udl_fb.c b/drivers/gpu/drm/udl/udl_fb.c
+index 2ebdc6d5a76e..d5583190f3e4 100644
+--- a/drivers/gpu/drm/udl/udl_fb.c
++++ b/drivers/gpu/drm/udl/udl_fb.c
+@@ -137,7 +137,10 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
+
+ if (cmd > (char *) urb->transfer_buffer) {
+ /* Send partial buffer remaining before exiting */
+- int len = cmd - (char *) urb->transfer_buffer;
++ int len;
++ if (cmd < (char *) urb->transfer_buffer + urb->transfer_buffer_length)
++ *cmd++ = 0xAF;
++ len = cmd - (char *) urb->transfer_buffer;
+ ret = udl_submit_urb(dev, urb, len);
+ bytes_sent += len;
+ } else
+diff --git a/drivers/gpu/drm/udl/udl_transfer.c b/drivers/gpu/drm/udl/udl_transfer.c
+index 0c87b1ac6b68..b992644c17e6 100644
+--- a/drivers/gpu/drm/udl/udl_transfer.c
++++ b/drivers/gpu/drm/udl/udl_transfer.c
+@@ -153,11 +153,11 @@ static void udl_compress_hline16(
+ raw_pixels_count_byte = cmd++; /* we'll know this later */
+ raw_pixel_start = pixel;
+
+- cmd_pixel_end = pixel + (min(MAX_CMD_PIXELS + 1,
+- min((int)(pixel_end - pixel) / bpp,
+- (int)(cmd_buffer_end - cmd) / 2))) * bpp;
++ cmd_pixel_end = pixel + min3(MAX_CMD_PIXELS + 1UL,
++ (unsigned long)(pixel_end - pixel) / bpp,
++ (unsigned long)(cmd_buffer_end - 1 - cmd) / 2) * bpp;
+
+- prefetch_range((void *) pixel, (cmd_pixel_end - pixel) * bpp);
++ prefetch_range((void *) pixel, cmd_pixel_end - pixel);
+ pixel_val16 = get_pixel_val16(pixel, bpp);
+
+ while (pixel < cmd_pixel_end) {
+@@ -193,6 +193,9 @@ static void udl_compress_hline16(
+ if (pixel > raw_pixel_start) {
+ /* finalize last RAW span */
+ *raw_pixels_count_byte = ((pixel-raw_pixel_start) / bpp) & 0xFF;
++ } else {
++ /* undo unused byte */
++ cmd--;
+ }
+
+ *cmd_pixels_count_byte = ((pixel - cmd_pixel_start) / bpp) & 0xFF;
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 5d7cc6bbbac6..c1ce4baeeaca 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1942,6 +1942,8 @@ static int hid_device_probe(struct device *dev)
+ }
+ hdev->io_started = false;
+
++ clear_bit(ffs(HID_STAT_REPROBED), &hdev->status);
++
+ if (!hdev->driver) {
+ id = hid_match_device(hdev, hdrv);
+ if (id == NULL) {
+@@ -2205,7 +2207,8 @@ static int __hid_bus_reprobe_drivers(struct device *dev, void *data)
+ struct hid_device *hdev = to_hid_device(dev);
+
+ if (hdev->driver == hdrv &&
+- !hdrv->match(hdev, hid_ignore_special_drivers))
++ !hdrv->match(hdev, hid_ignore_special_drivers) &&
++ !test_and_set_bit(ffs(HID_STAT_REPROBED), &hdev->status))
+ return device_reprobe(dev);
+
+ return 0;
+diff --git a/drivers/hid/hid-debug.c b/drivers/hid/hid-debug.c
+index 4f4e7a08a07b..4db8e140f709 100644
+--- a/drivers/hid/hid-debug.c
++++ b/drivers/hid/hid-debug.c
+@@ -1154,6 +1154,8 @@ static ssize_t hid_debug_events_read(struct file *file, char __user *buffer,
+ goto out;
+ if (list->tail > list->head) {
+ len = list->tail - list->head;
++ if (len > count)
++ len = count;
+
+ if (copy_to_user(buffer + ret, &list->hid_debug_buf[list->head], len)) {
+ ret = -EFAULT;
+@@ -1163,6 +1165,8 @@ static ssize_t hid_debug_events_read(struct file *file, char __user *buffer,
+ list->head += len;
+ } else {
+ len = HID_DEBUG_BUFSIZE - list->head;
++ if (len > count)
++ len = count;
+
+ if (copy_to_user(buffer, &list->hid_debug_buf[list->head], len)) {
+ ret = -EFAULT;
+@@ -1170,7 +1174,9 @@ static ssize_t hid_debug_events_read(struct file *file, char __user *buffer,
+ }
+ list->head = 0;
+ ret += len;
+- goto copy_rest;
++ count -= len;
++ if (count > 0)
++ goto copy_rest;
+ }
+
+ }
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index cc33622253aa..a92377285034 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -486,7 +486,7 @@ static void i2c_hid_get_input(struct i2c_hid *ihid)
+ return;
+ }
+
+- if ((ret_size > size) || (ret_size <= 2)) {
++ if ((ret_size > size) || (ret_size < 2)) {
+ dev_err(&ihid->client->dev, "%s: incomplete report (%d/%d)\n",
+ __func__, size, ret_size);
+ return;
+diff --git a/drivers/hid/usbhid/hiddev.c b/drivers/hid/usbhid/hiddev.c
+index e3ce233f8bdc..23872d08308c 100644
+--- a/drivers/hid/usbhid/hiddev.c
++++ b/drivers/hid/usbhid/hiddev.c
+@@ -36,6 +36,7 @@
+ #include <linux/hiddev.h>
+ #include <linux/compat.h>
+ #include <linux/vmalloc.h>
++#include <linux/nospec.h>
+ #include "usbhid.h"
+
+ #ifdef CONFIG_USB_DYNAMIC_MINORS
+@@ -469,10 +470,14 @@ static noinline int hiddev_ioctl_usage(struct hiddev *hiddev, unsigned int cmd,
+
+ if (uref->field_index >= report->maxfield)
+ goto inval;
++ uref->field_index = array_index_nospec(uref->field_index,
++ report->maxfield);
+
+ field = report->field[uref->field_index];
+ if (uref->usage_index >= field->maxusage)
+ goto inval;
++ uref->usage_index = array_index_nospec(uref->usage_index,
++ field->maxusage);
+
+ uref->usage_code = field->usage[uref->usage_index].hid;
+
+@@ -499,6 +504,8 @@ static noinline int hiddev_ioctl_usage(struct hiddev *hiddev, unsigned int cmd,
+
+ if (uref->field_index >= report->maxfield)
+ goto inval;
++ uref->field_index = array_index_nospec(uref->field_index,
++ report->maxfield);
+
+ field = report->field[uref->field_index];
+
+@@ -753,6 +760,8 @@ static long hiddev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+
+ if (finfo.field_index >= report->maxfield)
+ break;
++ finfo.field_index = array_index_nospec(finfo.field_index,
++ report->maxfield);
+
+ field = report->field[finfo.field_index];
+ memset(&finfo, 0, sizeof(finfo));
+@@ -797,6 +806,8 @@ static long hiddev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+
+ if (cinfo.index >= hid->maxcollection)
+ break;
++ cinfo.index = array_index_nospec(cinfo.index,
++ hid->maxcollection);
+
+ cinfo.type = hid->collection[cinfo.index].type;
+ cinfo.usage = hid->collection[cinfo.index].usage;
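+
The hiddev.c additions are Spectre-v1 hardening: the CPU can speculate past the ">= max" bounds checks, so each index is additionally clamped with array_index_nospec(), which builds an all-ones-or-zero mask without a branch. A user-space sketch modeled on the kernel's generic array_index_mask_nospec(); it assumes arithmetic right shift of negative values, as the kernel does:

    #include <stdio.h>

    /* All-ones when idx < size, zero otherwise, with no conditional branch. */
    static unsigned long index_mask(unsigned long idx, unsigned long size)
    {
            return ~(long)(idx | (size - idx - 1)) >> (sizeof(long) * 8 - 1);
    }

    int main(void)
    {
            unsigned long size = 8;

            printf("%lu\n", 5 & index_mask(5, size)); /* 5: in bounds */
            printf("%lu\n", 9 & index_mask(9, size)); /* 0: clamped   */
            return 0;
    }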
+diff --git a/drivers/i2c/i2c-core-smbus.c b/drivers/i2c/i2c-core-smbus.c
+index b5aec33002c3..51970bae3c4a 100644
+--- a/drivers/i2c/i2c-core-smbus.c
++++ b/drivers/i2c/i2c-core-smbus.c
+@@ -465,13 +465,18 @@ static s32 i2c_smbus_xfer_emulated(struct i2c_adapter *adapter, u16 addr,
+
+ status = i2c_transfer(adapter, msg, num);
+ if (status < 0)
+- return status;
++ goto cleanup;
++ if (status != num) {
++ status = -EIO;
++ goto cleanup;
++ }
++ status = 0;
+
+ /* Check PEC if last message is a read */
+ if (i && (msg[num-1].flags & I2C_M_RD)) {
+ status = i2c_smbus_check_pec(partial_pec, &msg[num-1]);
+ if (status < 0)
+- return status;
++ goto cleanup;
+ }
+
+ if (read_write == I2C_SMBUS_READ)
+@@ -497,12 +502,13 @@ static s32 i2c_smbus_xfer_emulated(struct i2c_adapter *adapter, u16 addr,
+ break;
+ }
+
++cleanup:
+ if (msg[0].flags & I2C_M_DMA_SAFE)
+ kfree(msg[0].buf);
+ if (msg[1].flags & I2C_M_DMA_SAFE)
+ kfree(msg[1].buf);
+
+- return 0;
++ return status;
+ }
+
+ /**
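
The i2c-smbus fix replaces early returns with jumps to a single cleanup: label so the DMA-safe bounce buffers are freed on every exit path and `status` carries the result out. The single-exit pattern, reduced to a self-contained sketch:

    #include <stdlib.h>
    #include <string.h>

    /* Returns 0 on success, negative on error; buffers freed on all paths. */
    static int do_transfer(const char *src, size_t n)
    {
        char *tx = NULL, *rx = NULL;
        int status = 0;

        tx = malloc(n);
        rx = malloc(n);
        if (!tx || !rx) {
            status = -12;          /* -ENOMEM */
            goto cleanup;
        }

        memcpy(tx, src, n);
        /* ... perform the transfer; on failure set status and fall through ... */

    cleanup:
        free(tx);                  /* free(NULL) is a no-op, like the flag checks */
        free(rx);
        return status;
    }
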
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 0589a4da12bb..7c8e5878446a 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -885,9 +885,7 @@ EXPORT_SYMBOL_GPL(dm_table_set_type);
+ static int device_supports_dax(struct dm_target *ti, struct dm_dev *dev,
+ sector_t start, sector_t len, void *data)
+ {
+- struct request_queue *q = bdev_get_queue(dev->bdev);
+-
+- return q && blk_queue_dax(q);
++ return bdev_dax_supported(dev->bdev, PAGE_SIZE);
+ }
+
+ static bool dm_table_supports_dax(struct dm_table *t)
+@@ -1907,6 +1905,9 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+
+ if (dm_table_supports_dax(t))
+ blk_queue_flag_set(QUEUE_FLAG_DAX, q);
++ else
++ blk_queue_flag_clear(QUEUE_FLAG_DAX, q);
++
+ if (dm_table_supports_dax_write_cache(t))
+ dax_write_cache(t->md->dax_dev, true);
+
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index cabae3e280c2..78173e137176 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1056,8 +1056,7 @@ static long dm_dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff,
+ if (len < 1)
+ goto out;
+ nr_pages = min(len, nr_pages);
+- if (ti->type->direct_access)
+- ret = ti->type->direct_access(ti, pgoff, nr_pages, kaddr, pfn);
++ ret = ti->type->direct_access(ti, pgoff, nr_pages, kaddr, pfn);
+
+ out:
+ dm_put_live_table(md, srcu_idx);
+diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c
+index 3a8a88fa06aa..a863ae4e8538 100644
+--- a/drivers/mtd/chips/cfi_cmdset_0002.c
++++ b/drivers/mtd/chips/cfi_cmdset_0002.c
+@@ -42,7 +42,7 @@
+ #define AMD_BOOTLOC_BUG
+ #define FORCE_WORD_WRITE 0
+
+-#define MAX_WORD_RETRIES 3
++#define MAX_RETRIES 3
+
+ #define SST49LF004B 0x0060
+ #define SST49LF040B 0x0050
+@@ -1647,7 +1647,7 @@ static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip,
+ map_write( map, CMD(0xF0), chip->start );
+ /* FIXME - should have reset delay before continuing */
+
+- if (++retry_cnt <= MAX_WORD_RETRIES)
++ if (++retry_cnt <= MAX_RETRIES)
+ goto retry;
+
+ ret = -EIO;
+@@ -2106,7 +2106,7 @@ static int do_panic_write_oneword(struct map_info *map, struct flchip *chip,
+ map_write(map, CMD(0xF0), chip->start);
+ /* FIXME - should have reset delay before continuing */
+
+- if (++retry_cnt <= MAX_WORD_RETRIES)
++ if (++retry_cnt <= MAX_RETRIES)
+ goto retry;
+
+ ret = -EIO;
+@@ -2241,6 +2241,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip)
+ unsigned long int adr;
+ DECLARE_WAITQUEUE(wait, current);
+ int ret = 0;
++ int retry_cnt = 0;
+
+ adr = cfi->addr_unlock1;
+
+@@ -2258,6 +2259,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip)
+ ENABLE_VPP(map);
+ xip_disable(map, chip, adr);
+
++ retry:
+ cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
+ cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, cfi->device_type, NULL);
+ cfi_send_gen_cmd(0x80, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
+@@ -2294,12 +2296,13 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip)
+ chip->erase_suspended = 0;
+ }
+
+- if (chip_ready(map, adr))
++ if (chip_good(map, adr, map_word_ff(map)))
+ break;
+
+ if (time_after(jiffies, timeo)) {
+ printk(KERN_WARNING "MTD %s(): software timeout\n",
+ __func__ );
++ ret = -EIO;
+ break;
+ }
+
+@@ -2307,12 +2310,15 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip)
+ UDELAY(map, chip, adr, 1000000/HZ);
+ }
+ /* Did we succeed? */
+- if (!chip_good(map, adr, map_word_ff(map))) {
++ if (ret) {
+ /* reset on all failures. */
+ map_write( map, CMD(0xF0), chip->start );
+ /* FIXME - should have reset delay before continuing */
+
+- ret = -EIO;
++ if (++retry_cnt <= MAX_RETRIES) {
++ ret = 0;
++ goto retry;
++ }
+ }
+
+ chip->state = FL_READY;
+@@ -2331,6 +2337,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ unsigned long timeo = jiffies + HZ;
+ DECLARE_WAITQUEUE(wait, current);
+ int ret = 0;
++ int retry_cnt = 0;
+
+ adr += chip->start;
+
+@@ -2348,6 +2355,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ ENABLE_VPP(map);
+ xip_disable(map, chip, adr);
+
++ retry:
+ cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
+ cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, cfi->device_type, NULL);
+ cfi_send_gen_cmd(0x80, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
+@@ -2384,7 +2392,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ chip->erase_suspended = 0;
+ }
+
+- if (chip_ready(map, adr)) {
++ if (chip_good(map, adr, map_word_ff(map))) {
+ xip_enable(map, chip, adr);
+ break;
+ }
+@@ -2393,6 +2401,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ xip_enable(map, chip, adr);
+ printk(KERN_WARNING "MTD %s(): software timeout\n",
+ __func__ );
++ ret = -EIO;
+ break;
+ }
+
+@@ -2400,12 +2409,15 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ UDELAY(map, chip, adr, 1000000/HZ);
+ }
+ /* Did we succeed? */
+- if (!chip_good(map, adr, map_word_ff(map))) {
++ if (ret) {
+ /* reset on all failures. */
+ map_write( map, CMD(0xF0), chip->start );
+ /* FIXME - should have reset delay before continuing */
+
+- ret = -EIO;
++ if (++retry_cnt <= MAX_RETRIES) {
++ ret = 0;
++ goto retry;
++ }
+ }
+
+ chip->state = FL_READY;
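
Both erase paths now reuse the word-write recovery scheme: treat a timeout or failed verify as an error, reset the chip, clear `ret`, and jump back to `retry:` until MAX_RETRIES is exhausted. The skeleton in plain C, with the hardware calls stubbed out:

    #include <stdbool.h>

    #define MAX_RETRIES 3

    static int fail_budget = 2;                 /* stub: fail twice, then work */
    static bool issue_erase(void) { return fail_budget-- <= 0; }
    static void reset_chip(void) { /* would send the 0xF0 reset here */ }

    static int erase_with_retries(void)
    {
        int retry_cnt = 0;
        int ret = 0;

    retry:
        if (!issue_erase()) {
            reset_chip();                       /* reset on all failures */
            ret = -5;                           /* -EIO */
            if (++retry_cnt <= MAX_RETRIES) {
                ret = 0;                        /* clear the error, try again */
                goto retry;
            }
        }
        return ret;
    }
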
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 1abdbf267c19..054974055ea4 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -598,6 +598,18 @@ static bool acpi_pci_need_resume(struct pci_dev *dev)
+ {
+ struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
+
++ /*
++ * In some cases (eg. Samsung 305V4A) leaving a bridge in suspend over
++ * system-wide suspend/resume confuses the platform firmware, so avoid
++ * doing that, unless the bridge has a driver that should take care of
++ * the PM handling. According to Section 16.1.6 of ACPI 6.2, endpoint
++ * devices are expected to be in D3 before invoking the S3 entry path
++ * from the firmware, so they should not be affected by this issue.
++ */
++ if (pci_is_bridge(dev) && !dev->driver &&
++ acpi_target_system_state() != ACPI_STATE_S0)
++ return true;
++
+ if (!adev || !acpi_device_power_manageable(adev))
+ return false;
+
+diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c
+index e7961cbd2c55..1d20aad3aa92 100644
+--- a/drivers/scsi/aacraid/aachba.c
++++ b/drivers/scsi/aacraid/aachba.c
+@@ -1974,7 +1974,6 @@ static void aac_set_safw_attr_all_targets(struct aac_dev *dev)
+ u32 lun_count, nexus;
+ u32 i, bus, target;
+ u8 expose_flag, attribs;
+- u8 devtype;
+
+ lun_count = aac_get_safw_phys_lun_count(dev);
+
+@@ -1992,23 +1991,23 @@ static void aac_set_safw_attr_all_targets(struct aac_dev *dev)
+ continue;
+
+ if (expose_flag != 0) {
+- devtype = AAC_DEVTYPE_RAID_MEMBER;
+- goto update_devtype;
++ dev->hba_map[bus][target].devtype =
++ AAC_DEVTYPE_RAID_MEMBER;
++ continue;
+ }
+
+ if (nexus != 0 && (attribs & 8)) {
+- devtype = AAC_DEVTYPE_NATIVE_RAW;
++ dev->hba_map[bus][target].devtype =
++ AAC_DEVTYPE_NATIVE_RAW;
+ dev->hba_map[bus][target].rmw_nexus =
+ nexus;
+ } else
+- devtype = AAC_DEVTYPE_ARC_RAW;
++ dev->hba_map[bus][target].devtype =
++ AAC_DEVTYPE_ARC_RAW;
+
+ dev->hba_map[bus][target].scan_counter = dev->scan_counter;
+
+ aac_set_safw_target_qd(dev, bus, target);
+-
+-update_devtype:
+- dev->hba_map[bus][target].devtype = devtype;
+ }
+ }
+
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index 5c40d809830f..ecc87a53294f 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -51,6 +51,7 @@ static int sg_version_num = 30536; /* 2 digits for each component */
+ #include <linux/atomic.h>
+ #include <linux/ratelimit.h>
+ #include <linux/uio.h>
++#include <linux/cred.h> /* for sg_check_file_access() */
+
+ #include "scsi.h"
+ #include <scsi/scsi_dbg.h>
+@@ -210,6 +211,33 @@ static void sg_device_destroy(struct kref *kref);
+ sdev_prefix_printk(prefix, (sdp)->device, \
+ (sdp)->disk->disk_name, fmt, ##a)
+
++/*
++ * The SCSI interfaces that use read() and write() as an asynchronous variant of
++ * ioctl(..., SG_IO, ...) are fundamentally unsafe, since there are lots of ways
++ * to trigger read() and write() calls from various contexts with elevated
++ * privileges. This can lead to kernel memory corruption (e.g. if these
++ * interfaces are called through splice()) and privilege escalation inside
++ * userspace (e.g. if a process with access to such a device passes a file
++ * descriptor to a SUID binary as stdin/stdout/stderr).
++ *
++ * This function provides protection for the legacy API by restricting the
++ * calling context.
++ */
++static int sg_check_file_access(struct file *filp, const char *caller)
++{
++ if (filp->f_cred != current_real_cred()) {
++ pr_err_once("%s: process %d (%s) changed security contexts after opening file descriptor, this is not allowed.\n",
++ caller, task_tgid_vnr(current), current->comm);
++ return -EPERM;
++ }
++ if (uaccess_kernel()) {
++ pr_err_once("%s: process %d (%s) called from kernel context, this is not allowed.\n",
++ caller, task_tgid_vnr(current), current->comm);
++ return -EACCES;
++ }
++ return 0;
++}
++
+ static int sg_allow_access(struct file *filp, unsigned char *cmd)
+ {
+ struct sg_fd *sfp = filp->private_data;
+@@ -394,6 +422,14 @@ sg_read(struct file *filp, char __user *buf, size_t count, loff_t * ppos)
+ struct sg_header *old_hdr = NULL;
+ int retval = 0;
+
++ /*
++ * This could cause a response to be stranded. Close the associated
++ * file descriptor to free up any resources being held.
++ */
++ retval = sg_check_file_access(filp, __func__);
++ if (retval)
++ return retval;
++
+ if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp)))
+ return -ENXIO;
+ SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp,
+@@ -581,9 +617,11 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos)
+ struct sg_header old_hdr;
+ sg_io_hdr_t *hp;
+ unsigned char cmnd[SG_MAX_CDB_SIZE];
++ int retval;
+
+- if (unlikely(uaccess_kernel()))
+- return -EINVAL;
++ retval = sg_check_file_access(filp, __func__);
++ if (retval)
++ return retval;
+
+ if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp)))
+ return -ENXIO;
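
sg_check_file_access() above rejects the legacy read()/write() paths when the caller's credentials differ from the opener's. A loose userspace analogue of that check; note the kernel compares whole `struct cred` pointers, not just an effective UID as this sketch does:

    #include <unistd.h>

    struct sg_handle {
        uid_t opener_euid;      /* identity recorded at open() time */
    };

    /* Refuse the legacy path if the caller's identity no longer matches
     * the one that opened the handle. */
    static int check_file_access(const struct sg_handle *h)
    {
        if (h->opener_euid != geteuid())
            return -1;          /* -EPERM in the kernel version */
        return 0;
    }
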
+diff --git a/drivers/staging/comedi/drivers/quatech_daqp_cs.c b/drivers/staging/comedi/drivers/quatech_daqp_cs.c
+index ea194aa01a64..257b0daff01f 100644
+--- a/drivers/staging/comedi/drivers/quatech_daqp_cs.c
++++ b/drivers/staging/comedi/drivers/quatech_daqp_cs.c
+@@ -642,7 +642,7 @@ static int daqp_ao_insn_write(struct comedi_device *dev,
+ /* Make sure D/A update mode is direct update */
+ outb(0, dev->iobase + DAQP_AUX_REG);
+
+- for (i = 0; i > insn->n; i++) {
++ for (i = 0; i < insn->n; i++) {
+ unsigned int val = data[i];
+ int ret;
+
+diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
+index 01ac306131c1..10db5656fd5d 100644
+--- a/drivers/target/target_core_pr.c
++++ b/drivers/target/target_core_pr.c
+@@ -3727,11 +3727,16 @@ core_scsi3_pri_read_keys(struct se_cmd *cmd)
+ * Check for overflow of 8byte PRI READ_KEYS payload and
+ * next reservation key list descriptor.
+ */
+- if ((add_len + 8) > (cmd->data_length - 8))
+- break;
+-
+- put_unaligned_be64(pr_reg->pr_res_key, &buf[off]);
+- off += 8;
++ if (off + 8 <= cmd->data_length) {
++ put_unaligned_be64(pr_reg->pr_res_key, &buf[off]);
++ off += 8;
++ }
++ /*
++ * SPC5r17: 6.16.2 READ KEYS service action
++ * The ADDITIONAL LENGTH field indicates the number of bytes in
++ * the Reservation key list. The contents of the ADDITIONAL
++ * LENGTH field are not altered based on the allocation length
++ */
+ add_len += 8;
+ }
+ spin_unlock(&dev->t10_pr.registration_lock);
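
The READ KEYS fix separates two lengths: `off` advances only while a key still fits in the buffer, but `add_len` counts every registered key, because SPC-5 says ADDITIONAL LENGTH must report the full list size regardless of the allocation length. The same snprintf-like contract as a sketch (memcpy stands in for the big-endian put_unaligned_be64()):

    #include <stddef.h>
    #include <string.h>

    /* Copy as many 8-byte keys as fit into 'buf', but return the size the
     * full list would need. */
    static size_t read_keys(const unsigned long long *keys, size_t nkeys,
                            unsigned char *buf, size_t buflen)
    {
        size_t off = 0, add_len = 0, i;

        for (i = 0; i < nkeys; i++) {
            if (off + 8 <= buflen) {              /* only copy what fits */
                memcpy(buf + off, &keys[i], 8);
                off += 8;
            }
            add_len += 8;                         /* but always count it */
        }
        return add_len;
    }

The old code broke out of the loop on overflow, silently under-reporting the list size to the initiator.
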
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 3c082451ab1a..0586ad5eb590 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -346,18 +346,16 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
+ struct page *page[1];
+ struct vm_area_struct *vma;
+ struct vm_area_struct *vmas[1];
++ unsigned int flags = 0;
+ int ret;
+
++ if (prot & IOMMU_WRITE)
++ flags |= FOLL_WRITE;
++
++ down_read(&mm->mmap_sem);
+ if (mm == current->mm) {
+- ret = get_user_pages_longterm(vaddr, 1, !!(prot & IOMMU_WRITE),
+- page, vmas);
++ ret = get_user_pages_longterm(vaddr, 1, flags, page, vmas);
+ } else {
+- unsigned int flags = 0;
+-
+- if (prot & IOMMU_WRITE)
+- flags |= FOLL_WRITE;
+-
+- down_read(&mm->mmap_sem);
+ ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags, page,
+ vmas, NULL);
+ /*
+@@ -371,8 +369,8 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
+ ret = -EOPNOTSUPP;
+ put_page(page[0]);
+ }
+- up_read(&mm->mmap_sem);
+ }
++ up_read(&mm->mmap_sem);
+
+ if (ret == 1) {
+ *pfn = page_to_pfn(page[0]);
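
The vfio change hoists down_read() above the branch so both the current-mm and remote-mm paths run under mmap_sem, with a single up_read() afterwards. The shape of the fix with a pthreads read lock, path bodies stubbed for illustration:

    #include <pthread.h>

    static pthread_rwlock_t mmap_lock = PTHREAD_RWLOCK_INITIALIZER;

    static int pin_current(void) { return 0; }  /* stand-ins for the two paths */
    static int pin_remote(void)  { return 0; }

    static int pin_page(int is_current_mm)
    {
        int ret;

        pthread_rwlock_rdlock(&mmap_lock);   /* taken before the branch ... */
        if (is_current_mm)
            ret = pin_current();
        else
            ret = pin_remote();
        pthread_rwlock_unlock(&mmap_lock);   /* ... dropped once, after both */
        return ret;
    }
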
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index cb950a5fa078..c7ee09d9a236 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1362,6 +1362,7 @@ typedef int (mid_handle_t)(struct TCP_Server_Info *server,
+ /* one of these for every pending CIFS request to the server */
+ struct mid_q_entry {
+ struct list_head qhead; /* mids waiting on reply from this server */
++ struct kref refcount;
+ struct TCP_Server_Info *server; /* server corresponding to this mid */
+ __u64 mid; /* multiplex id */
+ __u32 pid; /* process id */
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index 365a414a75e9..c4e5c69810f9 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -76,6 +76,7 @@ extern struct mid_q_entry *AllocMidQEntry(const struct smb_hdr *smb_buffer,
+ struct TCP_Server_Info *server);
+ extern void DeleteMidQEntry(struct mid_q_entry *midEntry);
+ extern void cifs_delete_mid(struct mid_q_entry *mid);
++extern void cifs_mid_q_entry_release(struct mid_q_entry *midEntry);
+ extern void cifs_wake_up_task(struct mid_q_entry *mid);
+ extern int cifs_handle_standard(struct TCP_Server_Info *server,
+ struct mid_q_entry *mid);
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 1529a088383d..9540699ce85a 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -151,8 +151,14 @@ cifs_reconnect_tcon(struct cifs_tcon *tcon, int smb_command)
+ * greater than cifs socket timeout which is 7 seconds
+ */
+ while (server->tcpStatus == CifsNeedReconnect) {
+- wait_event_interruptible_timeout(server->response_q,
+- (server->tcpStatus != CifsNeedReconnect), 10 * HZ);
++ rc = wait_event_interruptible_timeout(server->response_q,
++ (server->tcpStatus != CifsNeedReconnect),
++ 10 * HZ);
++ if (rc < 0) {
++ cifs_dbg(FYI, "%s: aborting reconnect due to a received"
++ " signal by the process\n", __func__);
++ return -ERESTARTSYS;
++ }
+
+ /* are we still trying to reconnect? */
+ if (server->tcpStatus != CifsNeedReconnect)
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 7a10a5d0731f..5e1c09a3e0ea 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -906,6 +906,7 @@ cifs_demultiplex_thread(void *p)
+ continue;
+ server->total_read += length;
+
++ mid_entry = NULL;
+ if (server->ops->is_transform_hdr &&
+ server->ops->receive_transform &&
+ server->ops->is_transform_hdr(buf)) {
+@@ -920,8 +921,11 @@ cifs_demultiplex_thread(void *p)
+ length = mid_entry->receive(server, mid_entry);
+ }
+
+- if (length < 0)
++ if (length < 0) {
++ if (mid_entry)
++ cifs_mid_q_entry_release(mid_entry);
+ continue;
++ }
+
+ if (server->large_buf)
+ buf = server->bigbuf;
+@@ -938,6 +942,8 @@ cifs_demultiplex_thread(void *p)
+
+ if (!mid_entry->multiRsp || mid_entry->multiEnd)
+ mid_entry->callback(mid_entry);
++
++ cifs_mid_q_entry_release(mid_entry);
+ } else if (server->ops->is_oplock_break &&
+ server->ops->is_oplock_break(buf, server)) {
+ cifs_dbg(FYI, "Received oplock break\n");
+diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c
+index aff8ce8ba34d..646dcd149de1 100644
+--- a/fs/cifs/smb1ops.c
++++ b/fs/cifs/smb1ops.c
+@@ -107,6 +107,7 @@ cifs_find_mid(struct TCP_Server_Info *server, char *buffer)
+ if (compare_mid(mid->mid, buf) &&
+ mid->mid_state == MID_REQUEST_SUBMITTED &&
+ le16_to_cpu(mid->command) == buf->Command) {
++ kref_get(&mid->refcount);
+ spin_unlock(&GlobalMid_Lock);
+ return mid;
+ }
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 4ee32488ff74..824ec1742557 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -203,6 +203,7 @@ smb2_find_mid(struct TCP_Server_Info *server, char *buf)
+ if ((mid->mid == wire_mid) &&
+ (mid->mid_state == MID_REQUEST_SUBMITTED) &&
+ (mid->command == shdr->Command)) {
++ kref_get(&mid->refcount);
+ spin_unlock(&GlobalMid_Lock);
+ return mid;
+ }
+@@ -654,6 +655,8 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+
+ rc = SMB2_set_ea(xid, tcon, fid.persistent_fid, fid.volatile_fid, ea,
+ len);
++ kfree(ea);
++
+ SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
+
+ return rc;
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 32d7fd830aae..71013c5268b9 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -154,7 +154,7 @@ smb2_hdr_assemble(struct smb2_sync_hdr *shdr, __le16 smb2_cmd,
+ static int
+ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon)
+ {
+- int rc = 0;
++ int rc;
+ struct nls_table *nls_codepage;
+ struct cifs_ses *ses;
+ struct TCP_Server_Info *server;
+@@ -165,10 +165,10 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon)
+ * for those three - in the calling routine.
+ */
+ if (tcon == NULL)
+- return rc;
++ return 0;
+
+ if (smb2_command == SMB2_TREE_CONNECT)
+- return rc;
++ return 0;
+
+ if (tcon->tidStatus == CifsExiting) {
+ /*
+@@ -211,8 +211,14 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon)
+ return -EAGAIN;
+ }
+
+- wait_event_interruptible_timeout(server->response_q,
+- (server->tcpStatus != CifsNeedReconnect), 10 * HZ);
++ rc = wait_event_interruptible_timeout(server->response_q,
++ (server->tcpStatus != CifsNeedReconnect),
++ 10 * HZ);
++ if (rc < 0) {
++ cifs_dbg(FYI, "%s: aborting reconnect due to a received"
++ " signal by the process\n", __func__);
++ return -ERESTARTSYS;
++ }
+
+ /* are we still trying to reconnect? */
+ if (server->tcpStatus != CifsNeedReconnect)
+@@ -230,7 +236,7 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon)
+ }
+
+ if (!tcon->ses->need_reconnect && !tcon->need_reconnect)
+- return rc;
++ return 0;
+
+ nls_codepage = load_nls_default();
+
+@@ -339,7 +345,10 @@ smb2_plain_req_init(__le16 smb2_command, struct cifs_tcon *tcon,
+ return rc;
+
+ /* BB eventually switch this to SMB2 specific small buf size */
+- *request_buf = cifs_small_buf_get();
++ if (smb2_command == SMB2_SET_INFO)
++ *request_buf = cifs_buf_get();
++ else
++ *request_buf = cifs_small_buf_get();
+ if (*request_buf == NULL) {
+ /* BB should we add a retry in here if not a writepage? */
+ return -ENOMEM;
+@@ -3363,7 +3372,7 @@ send_set_info(const unsigned int xid, struct cifs_tcon *tcon,
+
+ rc = smb2_send_recv(xid, ses, iov, num, &resp_buftype, flags,
+ &rsp_iov);
+- cifs_small_buf_release(req);
++ cifs_buf_release(req);
+ rsp = (struct smb2_set_info_rsp *)rsp_iov.iov_base;
+
+ if (rc != 0)
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index 8806f3f76c1d..97f24d82ae6b 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -548,6 +548,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr,
+
+ temp = mempool_alloc(cifs_mid_poolp, GFP_NOFS);
+ memset(temp, 0, sizeof(struct mid_q_entry));
++ kref_init(&temp->refcount);
+ temp->mid = le64_to_cpu(shdr->MessageId);
+ temp->pid = current->pid;
+ temp->command = shdr->Command; /* Always LE */
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 927226a2122f..60faf2fcec7f 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -61,6 +61,7 @@ AllocMidQEntry(const struct smb_hdr *smb_buffer, struct TCP_Server_Info *server)
+
+ temp = mempool_alloc(cifs_mid_poolp, GFP_NOFS);
+ memset(temp, 0, sizeof(struct mid_q_entry));
++ kref_init(&temp->refcount);
+ temp->mid = get_mid(smb_buffer);
+ temp->pid = current->pid;
+ temp->command = cpu_to_le16(smb_buffer->Command);
+@@ -82,6 +83,21 @@ AllocMidQEntry(const struct smb_hdr *smb_buffer, struct TCP_Server_Info *server)
+ return temp;
+ }
+
++static void _cifs_mid_q_entry_release(struct kref *refcount)
++{
++ struct mid_q_entry *mid = container_of(refcount, struct mid_q_entry,
++ refcount);
++
++ mempool_free(mid, cifs_mid_poolp);
++}
++
++void cifs_mid_q_entry_release(struct mid_q_entry *midEntry)
++{
++ spin_lock(&GlobalMid_Lock);
++ kref_put(&midEntry->refcount, _cifs_mid_q_entry_release);
++ spin_unlock(&GlobalMid_Lock);
++}
++
+ void
+ DeleteMidQEntry(struct mid_q_entry *midEntry)
+ {
+@@ -110,7 +126,7 @@ DeleteMidQEntry(struct mid_q_entry *midEntry)
+ }
+ }
+ #endif
+- mempool_free(midEntry, cifs_mid_poolp);
++ cifs_mid_q_entry_release(midEntry);
+ }
+
+ void
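
Taken together, the cifs hunks turn mid_q_entry into a reference-counted object: kref_init() at allocation, kref_get() when a lookup hands the entry to the demultiplex thread, and a release helper that frees it only when the last reference drops. A compact analogue with C11 atomics (illustrative, not the kernel's kref):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct mid_entry {
        atomic_int refcount;
        unsigned long long mid;
    };

    static struct mid_entry *mid_alloc(unsigned long long mid)
    {
        struct mid_entry *e = calloc(1, sizeof(*e));
        if (e) {
            atomic_init(&e->refcount, 1);   /* kref_init(): owner's reference */
            e->mid = mid;
        }
        return e;
    }

    static void mid_get(struct mid_entry *e)
    {
        atomic_fetch_add(&e->refcount, 1);  /* kref_get() at lookup time */
    }

    static void mid_put(struct mid_entry *e)
    {
        /* kref_put(): the last reference out frees the object */
        if (atomic_fetch_sub(&e->refcount, 1) == 1)
            free(e);
    }

Before this, the demultiplex thread could free a mid that a waiter still held, the use-after-free these hunks close.
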
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index de1694512f1f..c09289a42dc5 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -961,8 +961,7 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
+ blocksize = BLOCK_SIZE << le32_to_cpu(sbi->s_es->s_log_block_size);
+
+ if (sbi->s_mount_opt & EXT2_MOUNT_DAX) {
+- err = bdev_dax_supported(sb, blocksize);
+- if (err) {
++ if (!bdev_dax_supported(sb->s_bdev, blocksize)) {
+ ext2_msg(sb, KERN_ERR,
+ "DAX unsupported by block device. Turning off DAX.");
+ sbi->s_mount_opt &= ~EXT2_MOUNT_DAX;
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 508b905d744d..f8b5635f0396 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -184,7 +184,6 @@ static int ext4_init_block_bitmap(struct super_block *sb,
+ unsigned int bit, bit_max;
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ ext4_fsblk_t start, tmp;
+- int flex_bg = 0;
+ struct ext4_group_info *grp;
+
+ J_ASSERT_BH(bh, buffer_locked(bh));
+@@ -217,22 +216,19 @@ static int ext4_init_block_bitmap(struct super_block *sb,
+
+ start = ext4_group_first_block_no(sb, block_group);
+
+- if (ext4_has_feature_flex_bg(sb))
+- flex_bg = 1;
+-
+ /* Set bits for block and inode bitmaps, and inode table */
+ tmp = ext4_block_bitmap(sb, gdp);
+- if (!flex_bg || ext4_block_in_group(sb, tmp, block_group))
++ if (ext4_block_in_group(sb, tmp, block_group))
+ ext4_set_bit(EXT4_B2C(sbi, tmp - start), bh->b_data);
+
+ tmp = ext4_inode_bitmap(sb, gdp);
+- if (!flex_bg || ext4_block_in_group(sb, tmp, block_group))
++ if (ext4_block_in_group(sb, tmp, block_group))
+ ext4_set_bit(EXT4_B2C(sbi, tmp - start), bh->b_data);
+
+ tmp = ext4_inode_table(sb, gdp);
+ for (; tmp < ext4_inode_table(sb, gdp) +
+ sbi->s_itb_per_group; tmp++) {
+- if (!flex_bg || ext4_block_in_group(sb, tmp, block_group))
++ if (ext4_block_in_group(sb, tmp, block_group))
+ ext4_set_bit(EXT4_B2C(sbi, tmp - start), bh->b_data);
+ }
+
+@@ -455,7 +451,16 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group)
+ goto verify;
+ }
+ ext4_lock_group(sb, block_group);
+- if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
++ if (ext4_has_group_desc_csum(sb) &&
++ (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {
++ if (block_group == 0) {
++ ext4_unlock_group(sb, block_group);
++ unlock_buffer(bh);
++ ext4_error(sb, "Block bitmap for bg 0 marked "
++ "uninitialized");
++ err = -EFSCORRUPTED;
++ goto out;
++ }
+ err = ext4_init_block_bitmap(sb, bh, block_group, desc);
+ set_bitmap_uptodate(bh);
+ set_buffer_uptodate(bh);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index a42e71203e53..51fcfdefc3a6 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1501,11 +1501,6 @@ static inline struct ext4_inode_info *EXT4_I(struct inode *inode)
+ static inline int ext4_valid_inum(struct super_block *sb, unsigned long ino)
+ {
+ return ino == EXT4_ROOT_INO ||
+- ino == EXT4_USR_QUOTA_INO ||
+- ino == EXT4_GRP_QUOTA_INO ||
+- ino == EXT4_BOOT_LOADER_INO ||
+- ino == EXT4_JOURNAL_INO ||
+- ino == EXT4_RESIZE_INO ||
+ (ino >= EXT4_FIRST_INO(sb) &&
+ ino <= le32_to_cpu(EXT4_SB(sb)->s_es->s_inodes_count));
+ }
+@@ -3005,9 +3000,6 @@ extern int ext4_inline_data_fiemap(struct inode *inode,
+ struct iomap;
+ extern int ext4_inline_data_iomap(struct inode *inode, struct iomap *iomap);
+
+-extern int ext4_try_to_evict_inline_data(handle_t *handle,
+- struct inode *inode,
+- int needed);
+ extern int ext4_inline_data_truncate(struct inode *inode, int *has_inline);
+
+ extern int ext4_convert_inline_data(struct inode *inode);
+diff --git a/fs/ext4/ext4_extents.h b/fs/ext4/ext4_extents.h
+index 98fb0c119c68..adf6668b596f 100644
+--- a/fs/ext4/ext4_extents.h
++++ b/fs/ext4/ext4_extents.h
+@@ -91,6 +91,7 @@ struct ext4_extent_header {
+ };
+
+ #define EXT4_EXT_MAGIC cpu_to_le16(0xf30a)
++#define EXT4_MAX_EXTENT_DEPTH 5
+
+ #define EXT4_EXTENT_TAIL_OFFSET(hdr) \
+ (sizeof(struct ext4_extent_header) + \
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index c969275ce3ee..08226f72b7ee 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -869,6 +869,12 @@ ext4_find_extent(struct inode *inode, ext4_lblk_t block,
+
+ eh = ext_inode_hdr(inode);
+ depth = ext_depth(inode);
++ if (depth < 0 || depth > EXT4_MAX_EXTENT_DEPTH) {
++ EXT4_ERROR_INODE(inode, "inode has invalid extent depth: %d",
++ depth);
++ ret = -EFSCORRUPTED;
++ goto err;
++ }
+
+ if (path) {
+ ext4_ext_drop_refs(path);
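
ext4_find_extent() now refuses a depth read straight from disk unless it lies within [0, EXT4_MAX_EXTENT_DEPTH]; the general rule is that every on-disk field is untrusted until range-checked. A hedged sketch of that validation step:

    #include <stdint.h>

    #define MAX_EXTENT_DEPTH 5   /* mirrors EXT4_MAX_EXTENT_DEPTH above */

    /* On-disk header: every field is attacker-controlled until validated. */
    struct extent_header {
        uint16_t magic;
        uint16_t depth;
    };

    /* Returns 0 if safe to walk, a negative errno-style value otherwise. */
    static int check_header(const struct extent_header *eh)
    {
        if (eh->magic != 0xf30a)
            return -117;         /* -EFSCORRUPTED */
        if (eh->depth > MAX_EXTENT_DEPTH)
            return -117;
        return 0;
    }
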
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index df92e3ec9913..478b8f21c814 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -155,7 +155,16 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
+ }
+
+ ext4_lock_group(sb, block_group);
+- if (desc->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT)) {
++ if (ext4_has_group_desc_csum(sb) &&
++ (desc->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT))) {
++ if (block_group == 0) {
++ ext4_unlock_group(sb, block_group);
++ unlock_buffer(bh);
++ ext4_error(sb, "Inode bitmap for bg 0 marked "
++ "uninitialized");
++ err = -EFSCORRUPTED;
++ goto out;
++ }
+ memset(bh->b_data, 0, (EXT4_INODES_PER_GROUP(sb) + 7) / 8);
+ ext4_mark_bitmap_end(EXT4_INODES_PER_GROUP(sb),
+ sb->s_blocksize * 8, bh->b_data);
+@@ -1000,7 +1009,8 @@ struct inode *__ext4_new_inode(handle_t *handle, struct inode *dir,
+
+ /* recheck and clear flag under lock if we still need to */
+ ext4_lock_group(sb, group);
+- if (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
++ if (ext4_has_group_desc_csum(sb) &&
++ (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {
+ gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
+ ext4_free_group_clusters_set(sb, gdp,
+ ext4_free_clusters_after_init(sb, group, gdp));
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 44b4fcdc3755..851bc552d849 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -437,6 +437,7 @@ static int ext4_destroy_inline_data_nolock(handle_t *handle,
+
+ memset((void *)ext4_raw_inode(&is.iloc)->i_block,
+ 0, EXT4_MIN_INLINE_DATA_SIZE);
++ memset(ei->i_data, 0, EXT4_MIN_INLINE_DATA_SIZE);
+
+ if (ext4_has_feature_extents(inode->i_sb)) {
+ if (S_ISDIR(inode->i_mode) ||
+@@ -886,11 +887,11 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
+ flags |= AOP_FLAG_NOFS;
+
+ if (ret == -ENOSPC) {
++ ext4_journal_stop(handle);
+ ret = ext4_da_convert_inline_data_to_extent(mapping,
+ inode,
+ flags,
+ fsdata);
+- ext4_journal_stop(handle);
+ if (ret == -ENOSPC &&
+ ext4_should_retry_alloc(inode->i_sb, &retries))
+ goto retry_journal;
+@@ -1890,42 +1891,6 @@ int ext4_inline_data_fiemap(struct inode *inode,
+ return (error < 0 ? error : 0);
+ }
+
+-/*
+- * Called during xattr set, and if we can sparse space 'needed',
+- * just create the extent tree evict the data to the outer block.
+- *
+- * We use jbd2 instead of page cache to move data to the 1st block
+- * so that the whole transaction can be committed as a whole and
+- * the data isn't lost because of the delayed page cache write.
+- */
+-int ext4_try_to_evict_inline_data(handle_t *handle,
+- struct inode *inode,
+- int needed)
+-{
+- int error;
+- struct ext4_xattr_entry *entry;
+- struct ext4_inode *raw_inode;
+- struct ext4_iloc iloc;
+-
+- error = ext4_get_inode_loc(inode, &iloc);
+- if (error)
+- return error;
+-
+- raw_inode = ext4_raw_inode(&iloc);
+- entry = (struct ext4_xattr_entry *)((void *)raw_inode +
+- EXT4_I(inode)->i_inline_off);
+- if (EXT4_XATTR_LEN(entry->e_name_len) +
+- EXT4_XATTR_SIZE(le32_to_cpu(entry->e_value_size)) < needed) {
+- error = -ENOSPC;
+- goto out;
+- }
+-
+- error = ext4_convert_inline_data_nolock(handle, inode, &iloc);
+-out:
+- brelse(iloc.bh);
+- return error;
+-}
+-
+ int ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+ {
+ handle_t *handle;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index c73cb9346aee..06b963d2fc36 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -402,9 +402,9 @@ static int __check_block_validity(struct inode *inode, const char *func,
+ if (!ext4_data_block_valid(EXT4_SB(inode->i_sb), map->m_pblk,
+ map->m_len)) {
+ ext4_error_inode(inode, func, line, map->m_pblk,
+- "lblock %lu mapped to illegal pblock "
++ "lblock %lu mapped to illegal pblock %llu "
+ "(length %d)", (unsigned long) map->m_lblk,
+- map->m_len);
++ map->m_pblk, map->m_len);
+ return -EFSCORRUPTED;
+ }
+ return 0;
+@@ -4506,7 +4506,8 @@ static int __ext4_get_inode_loc(struct inode *inode,
+ int inodes_per_block, inode_offset;
+
+ iloc->bh = NULL;
+- if (!ext4_valid_inum(sb, inode->i_ino))
++ if (inode->i_ino < EXT4_ROOT_INO ||
++ inode->i_ino > le32_to_cpu(EXT4_SB(sb)->s_es->s_inodes_count))
+ return -EFSCORRUPTED;
+
+ iloc->block_group = (inode->i_ino - 1) / EXT4_INODES_PER_GROUP(sb);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 769a62708b1c..39187e7b3748 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -2444,7 +2444,8 @@ int ext4_mb_add_groupinfo(struct super_block *sb, ext4_group_t group,
+ * initialize bb_free to be able to skip
+ * empty groups without initialization
+ */
+- if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
++ if (ext4_has_group_desc_csum(sb) &&
++ (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {
+ meta_group_info[i]->bb_free =
+ ext4_free_clusters_after_init(sb, group, desc);
+ } else {
+@@ -3011,7 +3012,8 @@ ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
+ #endif
+ ext4_set_bits(bitmap_bh->b_data, ac->ac_b_ex.fe_start,
+ ac->ac_b_ex.fe_len);
+- if (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
++ if (ext4_has_group_desc_csum(sb) &&
++ (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {
+ gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
+ ext4_free_group_clusters_set(sb, gdp,
+ ext4_free_clusters_after_init(sb,
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index eb104e8476f0..74a6d884ede4 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2307,6 +2307,7 @@ static int ext4_check_descriptors(struct super_block *sb,
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ ext4_fsblk_t first_block = le32_to_cpu(sbi->s_es->s_first_data_block);
+ ext4_fsblk_t last_block;
++ ext4_fsblk_t last_bg_block = sb_block + ext4_bg_num_gdb(sb, 0) + 1;
+ ext4_fsblk_t block_bitmap;
+ ext4_fsblk_t inode_bitmap;
+ ext4_fsblk_t inode_table;
+@@ -2339,6 +2340,14 @@ static int ext4_check_descriptors(struct super_block *sb,
+ if (!sb_rdonly(sb))
+ return 0;
+ }
++ if (block_bitmap >= sb_block + 1 &&
++ block_bitmap <= last_bg_block) {
++ ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
++ "Block bitmap for group %u overlaps "
++ "block group descriptors", i);
++ if (!sb_rdonly(sb))
++ return 0;
++ }
+ if (block_bitmap < first_block || block_bitmap > last_block) {
+ ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
+ "Block bitmap for group %u not in group "
+@@ -2353,6 +2362,14 @@ static int ext4_check_descriptors(struct super_block *sb,
+ if (!sb_rdonly(sb))
+ return 0;
+ }
++ if (inode_bitmap >= sb_block + 1 &&
++ inode_bitmap <= last_bg_block) {
++ ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
++ "Inode bitmap for group %u overlaps "
++ "block group descriptors", i);
++ if (!sb_rdonly(sb))
++ return 0;
++ }
+ if (inode_bitmap < first_block || inode_bitmap > last_block) {
+ ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
+ "Inode bitmap for group %u not in group "
+@@ -2367,6 +2384,14 @@ static int ext4_check_descriptors(struct super_block *sb,
+ if (!sb_rdonly(sb))
+ return 0;
+ }
++ if (inode_table >= sb_block + 1 &&
++ inode_table <= last_bg_block) {
++ ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
++ "Inode table for group %u overlaps "
++ "block group descriptors", i);
++ if (!sb_rdonly(sb))
++ return 0;
++ }
+ if (inode_table < first_block ||
+ inode_table + sbi->s_itb_per_group - 1 > last_block) {
+ ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
+@@ -3073,13 +3098,22 @@ static ext4_group_t ext4_has_uninit_itable(struct super_block *sb)
+ ext4_group_t group, ngroups = EXT4_SB(sb)->s_groups_count;
+ struct ext4_group_desc *gdp = NULL;
+
++ if (!ext4_has_group_desc_csum(sb))
++ return ngroups;
++
+ for (group = 0; group < ngroups; group++) {
+ gdp = ext4_get_group_desc(sb, group, NULL);
+ if (!gdp)
+ continue;
+
+- if (!(gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_ZEROED)))
++ if (gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_ZEROED))
++ continue;
++ if (group != 0)
+ break;
++ ext4_error(sb, "Inode table for bg 0 marked as "
++ "needing zeroing");
++ if (sb_rdonly(sb))
++ return ngroups;
+ }
+
+ return group;
+@@ -3718,6 +3752,13 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ le32_to_cpu(es->s_log_block_size));
+ goto failed_mount;
+ }
++ if (le32_to_cpu(es->s_log_cluster_size) >
++ (EXT4_MAX_CLUSTER_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) {
++ ext4_msg(sb, KERN_ERR,
++ "Invalid log cluster size: %u",
++ le32_to_cpu(es->s_log_cluster_size));
++ goto failed_mount;
++ }
+
+ if (le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks) > (blocksize / 4)) {
+ ext4_msg(sb, KERN_ERR,
+@@ -3732,8 +3773,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ " that may contain inline data");
+ sbi->s_mount_opt &= ~EXT4_MOUNT_DAX;
+ }
+- err = bdev_dax_supported(sb, blocksize);
+- if (err) {
++ if (!bdev_dax_supported(sb->s_bdev, blocksize)) {
+ ext4_msg(sb, KERN_ERR,
+ "DAX unsupported by block device. Turning off DAX.");
+ sbi->s_mount_opt &= ~EXT4_MOUNT_DAX;
+@@ -3783,6 +3823,11 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ } else {
+ sbi->s_inode_size = le16_to_cpu(es->s_inode_size);
+ sbi->s_first_ino = le32_to_cpu(es->s_first_ino);
++ if (sbi->s_first_ino < EXT4_GOOD_OLD_FIRST_INO) {
++ ext4_msg(sb, KERN_ERR, "invalid first ino: %u",
++ sbi->s_first_ino);
++ goto failed_mount;
++ }
+ if ((sbi->s_inode_size < EXT4_GOOD_OLD_INODE_SIZE) ||
+ (!is_power_of_2(sbi->s_inode_size)) ||
+ (sbi->s_inode_size > blocksize)) {
+@@ -3859,13 +3904,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ "block size (%d)", clustersize, blocksize);
+ goto failed_mount;
+ }
+- if (le32_to_cpu(es->s_log_cluster_size) >
+- (EXT4_MAX_CLUSTER_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) {
+- ext4_msg(sb, KERN_ERR,
+- "Invalid log cluster size: %u",
+- le32_to_cpu(es->s_log_cluster_size));
+- goto failed_mount;
+- }
+ sbi->s_cluster_bits = le32_to_cpu(es->s_log_cluster_size) -
+ le32_to_cpu(es->s_log_block_size);
+ sbi->s_clusters_per_group =
+@@ -3886,10 +3924,10 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ }
+ } else {
+ if (clustersize != blocksize) {
+- ext4_warning(sb, "fragment/cluster size (%d) != "
+- "block size (%d)", clustersize,
+- blocksize);
+- clustersize = blocksize;
++ ext4_msg(sb, KERN_ERR,
++ "fragment/cluster size (%d) != "
++ "block size (%d)", clustersize, blocksize);
++ goto failed_mount;
+ }
+ if (sbi->s_blocks_per_group > blocksize * 8) {
+ ext4_msg(sb, KERN_ERR,
+@@ -3943,6 +3981,13 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ ext4_blocks_count(es));
+ goto failed_mount;
+ }
++ if ((es->s_first_data_block == 0) && (es->s_log_block_size == 0) &&
++ (sbi->s_cluster_ratio == 1)) {
++ ext4_msg(sb, KERN_WARNING, "bad geometry: first data "
++ "block is 0 with a 1k block and cluster size");
++ goto failed_mount;
++ }
++
+ blocks_count = (ext4_blocks_count(es) -
+ le32_to_cpu(es->s_first_data_block) +
+ EXT4_BLOCKS_PER_GROUP(sb) - 1);
+@@ -3978,6 +4023,14 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ ret = -ENOMEM;
+ goto failed_mount;
+ }
++ if (((u64)sbi->s_groups_count * sbi->s_inodes_per_group) !=
++ le32_to_cpu(es->s_inodes_count)) {
++ ext4_msg(sb, KERN_ERR, "inodes count not valid: %u vs %llu",
++ le32_to_cpu(es->s_inodes_count),
++ ((u64)sbi->s_groups_count * sbi->s_inodes_per_group));
++ ret = -EINVAL;
++ goto failed_mount;
++ }
+
+ bgl_lock_init(sbi->s_blockgroup_lock);
+
+@@ -4709,6 +4762,14 @@ static int ext4_commit_super(struct super_block *sb, int sync)
+
+ if (!sbh || block_device_ejected(sb))
+ return error;
++
++ /*
++ * The superblock bh should be mapped, but it might not be if the
++ * device was hot-removed. Not much we can do but fail the I/O.
++ */
++ if (!buffer_mapped(sbh))
++ return error;
++
+ /*
+ * If the file system is mounted read-only, don't update the
+ * superblock write time. This avoids updating the superblock
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index fc4ced59c565..723df14f4084 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -230,12 +230,12 @@ __ext4_xattr_check_block(struct inode *inode, struct buffer_head *bh,
+ {
+ int error = -EFSCORRUPTED;
+
+- if (buffer_verified(bh))
+- return 0;
+-
+ if (BHDR(bh)->h_magic != cpu_to_le32(EXT4_XATTR_MAGIC) ||
+ BHDR(bh)->h_blocks != cpu_to_le32(1))
+ goto errout;
++ if (buffer_verified(bh))
++ return 0;
++
+ error = -EFSBADCRC;
+ if (!ext4_xattr_block_csum_verify(inode, bh))
+ goto errout;
+@@ -1560,7 +1560,7 @@ static int ext4_xattr_set_entry(struct ext4_xattr_info *i,
+ handle_t *handle, struct inode *inode,
+ bool is_block)
+ {
+- struct ext4_xattr_entry *last;
++ struct ext4_xattr_entry *last, *next;
+ struct ext4_xattr_entry *here = s->here;
+ size_t min_offs = s->end - s->base, name_len = strlen(i->name);
+ int in_inode = i->in_inode;
+@@ -1595,7 +1595,13 @@ static int ext4_xattr_set_entry(struct ext4_xattr_info *i,
+
+ /* Compute min_offs and last. */
+ last = s->first;
+- for (; !IS_LAST_ENTRY(last); last = EXT4_XATTR_NEXT(last)) {
++ for (; !IS_LAST_ENTRY(last); last = next) {
++ next = EXT4_XATTR_NEXT(last);
++ if ((void *)next >= s->end) {
++ EXT4_ERROR_INODE(inode, "corrupted xattr entries");
++ ret = -EFSCORRUPTED;
++ goto out;
++ }
+ if (!last->e_value_inum && last->e_value_size) {
+ size_t offs = le16_to_cpu(last->e_value_offs);
+ if (offs < min_offs)
+@@ -2206,23 +2212,8 @@ int ext4_xattr_ibody_inline_set(handle_t *handle, struct inode *inode,
+ if (EXT4_I(inode)->i_extra_isize == 0)
+ return -ENOSPC;
+ error = ext4_xattr_set_entry(i, s, handle, inode, false /* is_block */);
+- if (error) {
+- if (error == -ENOSPC &&
+- ext4_has_inline_data(inode)) {
+- error = ext4_try_to_evict_inline_data(handle, inode,
+- EXT4_XATTR_LEN(strlen(i->name) +
+- EXT4_XATTR_SIZE(i->value_len)));
+- if (error)
+- return error;
+- error = ext4_xattr_ibody_find(inode, i, is);
+- if (error)
+- return error;
+- error = ext4_xattr_set_entry(i, s, handle, inode,
+- false /* is_block */);
+- }
+- if (error)
+- return error;
+- }
++ if (error)
++ return error;
+ header = IHDR(inode, ext4_raw_inode(&is->iloc));
+ if (!IS_LAST_ENTRY(s->first)) {
+ header->h_magic = cpu_to_le32(EXT4_XATTR_MAGIC);
+@@ -2651,6 +2642,11 @@ static int ext4_xattr_make_inode_space(handle_t *handle, struct inode *inode,
+ last = IFIRST(header);
+ /* Find the entry best suited to be pushed into EA block */
+ for (; !IS_LAST_ENTRY(last); last = EXT4_XATTR_NEXT(last)) {
++ /* never move system.data out of the inode */
++ if ((last->e_name_len == 4) &&
++ (last->e_name_index == EXT4_XATTR_INDEX_SYSTEM) &&
++ !memcmp(last->e_name, "data", 4))
++ continue;
+ total_size = EXT4_XATTR_LEN(last->e_name_len);
+ if (!last->e_value_inum)
+ total_size += EXT4_XATTR_SIZE(
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 8aa453784402..c51bf0d2aa9b 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1363,6 +1363,13 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ if (jh->b_transaction == transaction &&
+ jh->b_jlist != BJ_Metadata) {
+ jbd_lock_bh_state(bh);
++ if (jh->b_transaction == transaction &&
++ jh->b_jlist != BJ_Metadata)
++ pr_err("JBD2: assertion failure: h_type=%u "
++ "h_line_no=%u block_no=%llu jlist=%u\n",
++ handle->h_type, handle->h_line_no,
++ (unsigned long long) bh->b_blocknr,
++ jh->b_jlist);
+ J_ASSERT_JH(jh, jh->b_transaction != transaction ||
+ jh->b_jlist == BJ_Metadata);
+ jbd_unlock_bh_state(bh);
+@@ -1382,11 +1389,11 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ * of the transaction. This needs to be done
+ * once a transaction -bzzz
+ */
+- jh->b_modified = 1;
+ if (handle->h_buffer_credits <= 0) {
+ ret = -ENOSPC;
+ goto out_unlock_bh;
+ }
++ jh->b_modified = 1;
+ handle->h_buffer_credits--;
+ }
+
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index cec550c8468f..1d85efacfc8e 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -220,24 +220,26 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
+ unsigned long reason)
+ {
+ struct mm_struct *mm = ctx->mm;
+- pte_t *pte;
++ pte_t *ptep, pte;
+ bool ret = true;
+
+ VM_BUG_ON(!rwsem_is_locked(&mm->mmap_sem));
+
+- pte = huge_pte_offset(mm, address, vma_mmu_pagesize(vma));
+- if (!pte)
++ ptep = huge_pte_offset(mm, address, vma_mmu_pagesize(vma));
++
++ if (!ptep)
+ goto out;
+
+ ret = false;
++ pte = huge_ptep_get(ptep);
+
+ /*
+ * Lockless access: we're in a wait_event so it's ok if it
+ * changes under us.
+ */
+- if (huge_pte_none(*pte))
++ if (huge_pte_none(pte))
+ ret = true;
+- if (!huge_pte_write(*pte) && (reason & VM_UFFD_WP))
++ if (!huge_pte_write(pte) && (reason & VM_UFFD_WP))
+ ret = true;
+ out:
+ return ret;
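
The userfaultfd fix reads the huge PTE once into a local with huge_ptep_get() and tests the snapshot, instead of dereferencing the live entry twice; under concurrent updates the two tests could otherwise see different values. A userspace analogue using a C11 atomic, with made-up PTE bit values:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* A word another thread may change at any time. */
    static _Atomic unsigned long pteval;

    #define PTE_NONE  0x0UL
    #define PTE_WRITE 0x2UL

    static bool must_wait(void)
    {
        /* Snapshot once; both tests below see the same value. */
        unsigned long pte = atomic_load_explicit(&pteval, memory_order_relaxed);

        if (pte == PTE_NONE)
            return true;
        if (!(pte & PTE_WRITE))
            return true;
        return false;
    }
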
+diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c
+index 89fb1eb80aae..2c70a0a4f59f 100644
+--- a/fs/xfs/xfs_ioctl.c
++++ b/fs/xfs/xfs_ioctl.c
+@@ -1103,7 +1103,8 @@ xfs_ioctl_setattr_dax_invalidate(
+ if (fa->fsx_xflags & FS_XFLAG_DAX) {
+ if (!(S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode)))
+ return -EINVAL;
+- if (bdev_dax_supported(sb, sb->s_blocksize) < 0)
++ if (!bdev_dax_supported(xfs_find_bdev_for_inode(VFS_I(ip)),
++ sb->s_blocksize))
+ return -EINVAL;
+ }
+
+diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
+index a3ed3c811dfa..6e83acf74a95 100644
+--- a/fs/xfs/xfs_iops.c
++++ b/fs/xfs/xfs_iops.c
+@@ -1195,6 +1195,30 @@ static const struct inode_operations xfs_inline_symlink_inode_operations = {
+ .update_time = xfs_vn_update_time,
+ };
+
++/* Figure out if this file actually supports DAX. */
++static bool
++xfs_inode_supports_dax(
++ struct xfs_inode *ip)
++{
++ struct xfs_mount *mp = ip->i_mount;
++
++ /* Only supported on non-reflinked files. */
++ if (!S_ISREG(VFS_I(ip)->i_mode) || xfs_is_reflink_inode(ip))
++ return false;
++
++ /* DAX mount option or DAX iflag must be set. */
++ if (!(mp->m_flags & XFS_MOUNT_DAX) &&
++ !(ip->i_d.di_flags2 & XFS_DIFLAG2_DAX))
++ return false;
++
++ /* Block size must match page size */
++ if (mp->m_sb.sb_blocksize != PAGE_SIZE)
++ return false;
++
++ /* Device has to support DAX too. */
++ return xfs_find_daxdev_for_inode(VFS_I(ip)) != NULL;
++}
++
+ STATIC void
+ xfs_diflags_to_iflags(
+ struct inode *inode,
+@@ -1213,11 +1237,7 @@ xfs_diflags_to_iflags(
+ inode->i_flags |= S_SYNC;
+ if (flags & XFS_DIFLAG_NOATIME)
+ inode->i_flags |= S_NOATIME;
+- if (S_ISREG(inode->i_mode) &&
+- ip->i_mount->m_sb.sb_blocksize == PAGE_SIZE &&
+- !xfs_is_reflink_inode(ip) &&
+- (ip->i_mount->m_flags & XFS_MOUNT_DAX ||
+- ip->i_d.di_flags2 & XFS_DIFLAG2_DAX))
++ if (xfs_inode_supports_dax(ip))
+ inode->i_flags |= S_DAX;
+ }
+
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index d71424052917..86915dc40eed 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -1690,11 +1690,17 @@ xfs_fs_fill_super(
+ sb->s_flags |= SB_I_VERSION;
+
+ if (mp->m_flags & XFS_MOUNT_DAX) {
++ bool rtdev_is_dax = false, datadev_is_dax;
++
+ xfs_warn(mp,
+ "DAX enabled. Warning: EXPERIMENTAL, use at your own risk");
+
+- error = bdev_dax_supported(sb, sb->s_blocksize);
+- if (error) {
++ datadev_is_dax = bdev_dax_supported(mp->m_ddev_targp->bt_bdev,
++ sb->s_blocksize);
++ if (mp->m_rtdev_targp)
++ rtdev_is_dax = bdev_dax_supported(
++ mp->m_rtdev_targp->bt_bdev, sb->s_blocksize);
++ if (!rtdev_is_dax && !datadev_is_dax) {
+ xfs_alert(mp,
+ "DAX unsupported by block device. Turning off DAX.");
+ mp->m_flags &= ~XFS_MOUNT_DAX;
+diff --git a/include/linux/dax.h b/include/linux/dax.h
+index f9eb22ad341e..c99692ddd4b5 100644
+--- a/include/linux/dax.h
++++ b/include/linux/dax.h
+@@ -64,10 +64,10 @@ static inline bool dax_write_cache_enabled(struct dax_device *dax_dev)
+ struct writeback_control;
+ int bdev_dax_pgoff(struct block_device *, sector_t, size_t, pgoff_t *pgoff);
+ #if IS_ENABLED(CONFIG_FS_DAX)
+-int __bdev_dax_supported(struct super_block *sb, int blocksize);
+-static inline int bdev_dax_supported(struct super_block *sb, int blocksize)
++bool __bdev_dax_supported(struct block_device *bdev, int blocksize);
++static inline bool bdev_dax_supported(struct block_device *bdev, int blocksize)
+ {
+- return __bdev_dax_supported(sb, blocksize);
++ return __bdev_dax_supported(bdev, blocksize);
+ }
+
+ static inline struct dax_device *fs_dax_get_by_host(const char *host)
+@@ -84,9 +84,10 @@ struct dax_device *fs_dax_get_by_bdev(struct block_device *bdev);
+ int dax_writeback_mapping_range(struct address_space *mapping,
+ struct block_device *bdev, struct writeback_control *wbc);
+ #else
+-static inline int bdev_dax_supported(struct super_block *sb, int blocksize)
++static inline bool bdev_dax_supported(struct block_device *bdev,
++ int blocksize)
+ {
+- return -EOPNOTSUPP;
++ return false;
+ }
+
+ static inline struct dax_device *fs_dax_get_by_host(const char *host)
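
With this signature change bdev_dax_supported() becomes a plain capability probe, so every caller patched above collapses to "if not supported, fall back". A self-contained sketch of the new calling convention; the stub device structure is invented for illustration, while the real helper actually test-maps the device:

    #include <stdbool.h>

    struct block_device { bool supports_dax; };   /* invented stand-in */

    /* Stub with the new boolean signature. */
    static bool bdev_dax_supported(struct block_device *bdev, int blocksize)
    {
        (void)blocksize;                          /* real code checks alignment */
        return bdev->supports_dax;
    }

    static void maybe_enable_dax(struct block_device *bdev, int blocksize,
                                 unsigned long *mount_opt, unsigned long dax_flag)
    {
        if (!bdev_dax_supported(bdev, blocksize))
            *mount_opt &= ~dax_flag;              /* "Turning off DAX", as above */
    }
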
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 26240a22978a..2a4c0900e46a 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -502,6 +502,7 @@ struct hid_output_fifo {
+
+ #define HID_STAT_ADDED BIT(0)
+ #define HID_STAT_PARSED BIT(1)
++#define HID_STAT_REPROBED BIT(3)
+
+ struct hid_input {
+ struct list_head list;
+@@ -568,7 +569,7 @@ struct hid_device { /* device report descriptor */
+ bool battery_avoid_query;
+ #endif
+
+- unsigned int status; /* see STAT flags above */
++ unsigned long status; /* see STAT flags above */
+ unsigned claimed; /* Claimed by hidinput, hiddev? */
+ unsigned quirks; /* Various quirks the device can pull on us */
+ bool io_started; /* If IO has started */
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index b9061ed59bbd..c7bbc8997db8 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -393,7 +393,7 @@ static void hist_err_event(char *str, char *system, char *event, char *var)
+ else if (system)
+ snprintf(err, MAX_FILTER_STR_VAL, "%s.%s", system, event);
+ else
+- strncpy(err, var, MAX_FILTER_STR_VAL);
++ strscpy(err, var, MAX_FILTER_STR_VAL);
+
+ hist_err(str, err);
+ }
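
strncpy() does not NUL-terminate when the source fills the buffer, while strscpy() always terminates and reports truncation, which is why the tracing fix swaps them. A simplified userspace rendering of the strscpy() contract (the kernel version copies word-at-a-time; this sketch is byte-at-a-time):

    #include <stddef.h>

    #define E2BIG 7

    /* Copy src into dst (size bytes), always NUL-terminating.
     * Returns the copied length, or -E2BIG if src had to be truncated. */
    static long my_strscpy(char *dst, const char *src, size_t size)
    {
        size_t i;

        if (size == 0)
            return -E2BIG;

        for (i = 0; i < size - 1 && src[i]; i++)
            dst[i] = src[i];
        dst[i] = '\0';

        return src[i] ? -E2BIG : (long)i;
    }
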
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index 23c0b0cb5fb9..169b3c44ee97 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -831,6 +831,7 @@ print_graph_entry_leaf(struct trace_iterator *iter,
+ struct ftrace_graph_ret *graph_ret;
+ struct ftrace_graph_ent *call;
+ unsigned long long duration;
++ int cpu = iter->cpu;
+ int i;
+
+ graph_ret = &ret_entry->ret;
+@@ -839,7 +840,6 @@ print_graph_entry_leaf(struct trace_iterator *iter,
+
+ if (data) {
+ struct fgraph_cpu_data *cpu_data;
+- int cpu = iter->cpu;
+
+ cpu_data = per_cpu_ptr(data->cpu_data, cpu);
+
+@@ -869,6 +869,9 @@ print_graph_entry_leaf(struct trace_iterator *iter,
+
+ trace_seq_printf(s, "%ps();\n", (void *)call->func);
+
++ print_graph_irq(iter, graph_ret->func, TRACE_GRAPH_RET,
++ cpu, iter->ent->pid, flags);
++
+ return trace_handle_return(s);
+ }
+
+diff --git a/mm/debug.c b/mm/debug.c
+index 56e2d9125ea5..38c926520c97 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -43,12 +43,25 @@ const struct trace_print_flags vmaflag_names[] = {
+
+ void __dump_page(struct page *page, const char *reason)
+ {
++ bool page_poisoned = PagePoisoned(page);
++ int mapcount;
++
++ /*
++ * If struct page is poisoned don't access Page*() functions as that
++ * leads to recursive loop. Page*() check for poisoned pages, and calls
++ * dump_page() when detected.
++ */
++ if (page_poisoned) {
++ pr_emerg("page:%px is uninitialized and poisoned", page);
++ goto hex_only;
++ }
++
+ /*
+ * Avoid VM_BUG_ON() in page_mapcount().
+ * page->_mapcount space in struct page is used by sl[aou]b pages to
+ * encode own info.
+ */
+- int mapcount = PageSlab(page) ? 0 : page_mapcount(page);
++ mapcount = PageSlab(page) ? 0 : page_mapcount(page);
+
+ pr_emerg("page:%px count:%d mapcount:%d mapping:%px index:%#lx",
+ page, page_ref_count(page), mapcount,
+@@ -60,6 +73,7 @@ void __dump_page(struct page *page, const char *reason)
+
+ pr_emerg("flags: %#lx(%pGp)\n", page->flags, &page->flags);
+
++hex_only:
+ print_hex_dump(KERN_ALERT, "raw: ", DUMP_PREFIX_NONE, 32,
+ sizeof(unsigned long), page,
+ sizeof(struct page), false);
+@@ -68,7 +82,7 @@ void __dump_page(struct page *page, const char *reason)
+ pr_alert("page dumped because: %s\n", reason);
+
+ #ifdef CONFIG_MEMCG
+- if (page->mem_cgroup)
++ if (!page_poisoned && page->mem_cgroup)
+ pr_alert("page->mem_cgroup:%px\n", page->mem_cgroup);
+ #endif
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 218679138255..a2d9eb6a0af9 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2163,6 +2163,7 @@ static void __init gather_bootmem_prealloc(void)
+ */
+ if (hstate_is_gigantic(h))
+ adjust_managed_page_count(page, 1 << h->order);
++ cond_resched();
+ }
+ }
+
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index a2b9518980ce..1377a89eb84c 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1844,11 +1844,9 @@ static void vmstat_update(struct work_struct *w)
+ * to occur in the future. Keep on running the
+ * update worker thread.
+ */
+- preempt_disable();
+ queue_delayed_work_on(smp_processor_id(), mm_percpu_wq,
+ this_cpu_ptr(&vmstat_work),
+ round_jiffies_relative(sysctl_stat_interval));
+- preempt_enable();
+ }
+ }
+
+diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c
+index 6d0357817cda..a82dfb8f8790 100644
+--- a/net/netfilter/nf_log.c
++++ b/net/netfilter/nf_log.c
+@@ -457,14 +457,17 @@ static int nf_log_proc_dostring(struct ctl_table *table, int write,
+ rcu_assign_pointer(net->nf.nf_loggers[tindex], logger);
+ mutex_unlock(&nf_log_mutex);
+ } else {
++ struct ctl_table tmp = *table;
++
++ tmp.data = buf;
+ mutex_lock(&nf_log_mutex);
+ logger = nft_log_dereference(net->nf.nf_loggers[tindex]);
+ if (!logger)
+- table->data = "NONE";
++ strlcpy(buf, "NONE", sizeof(buf));
+ else
+- table->data = logger->name;
+- r = proc_dostring(table, write, buffer, lenp, ppos);
++ strlcpy(buf, logger->name, sizeof(buf));
+ mutex_unlock(&nf_log_mutex);
++ r = proc_dostring(&tmp, write, buffer, lenp, ppos);
+ }
+
+ return r;
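
Rather than pointing the shared table's `data` at logger names, which races with concurrent readers of the sysctl table, the fix copies the descriptor to the stack, aims the copy at a local buffer, and hands only the copy to proc_dostring(). The pattern in miniature:

    #include <stdio.h>

    struct ctl_entry {
        const char *name;
        char *data;            /* where the handler reads the value from */
    };

    static void show_entry(const struct ctl_entry *e)
    {
        printf("%s = %s\n", e->name, e->data);
    }

    static void report(const struct ctl_entry *shared, const char *current_value)
    {
        char buf[64];
        struct ctl_entry tmp = *shared;    /* stack copy: shared entry untouched */

        snprintf(buf, sizeof(buf), "%s", current_value ? current_value : "NONE");
        tmp.data = buf;                    /* only the copy points at our buffer */
        show_entry(&tmp);
    }
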
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-07-09 15:01 Alice Ferrazzi
0 siblings, 0 replies; 30+ messages in thread
From: Alice Ferrazzi @ 2018-07-09 15:01 UTC (permalink / raw
To: gentoo-commits
commit: 6ed4528b54ca6f6a9836bb1b132e41d96885579f
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 9 15:00:04 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Mon Jul 9 15:00:04 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6ed4528b
linux kernel 4.17.5
0000_README | 4 +
1004_linux-4.17.5.patch | 1735 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1739 insertions(+)
diff --git a/0000_README b/0000_README
index 76ef096..33f7bd8 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-4.17.4.patch
From: http://www.kernel.org
Desc: Linux 4.17.4
+Patch: 1004_linux-4.17.5.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.5
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-4.17.5.patch b/1004_linux-4.17.5.patch
new file mode 100644
index 0000000..feb534b
--- /dev/null
+++ b/1004_linux-4.17.5.patch
@@ -0,0 +1,1735 @@
+diff --git a/Makefile b/Makefile
+index 1d740dbe676d..e4ddbad49636 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
+index ae7b3f107893..5185300cc11f 100644
+--- a/arch/arm/boot/dts/imx6q.dtsi
++++ b/arch/arm/boot/dts/imx6q.dtsi
+@@ -96,7 +96,7 @@
+ clocks = <&clks IMX6Q_CLK_ECSPI5>,
+ <&clks IMX6Q_CLK_ECSPI5>;
+ clock-names = "ipg", "per";
+- dmas = <&sdma 11 7 1>, <&sdma 12 7 2>;
++ dmas = <&sdma 11 8 1>, <&sdma 12 8 2>;
+ dma-names = "rx", "tx";
+ status = "disabled";
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi
+index 0cfd701809de..a1b31013ab6e 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi
+@@ -189,3 +189,10 @@
+ &usb0 {
+ status = "okay";
+ };
++
++&usb2_phy0 {
++ /*
++ * HDMI_5V is also used as supply for the USB VBUS.
++ */
++ phy-supply = <&hdmi_5v>;
++};
+diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
+index 263c142a6a6c..f65e9e1cea4c 100644
+--- a/arch/x86/include/asm/pgalloc.h
++++ b/arch/x86/include/asm/pgalloc.h
+@@ -184,6 +184,9 @@ static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
+
+ static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
+ {
++ if (!pgtable_l5_enabled)
++ return;
++
+ BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
+ free_page((unsigned long)p4d);
+ }
+diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
+index 7ca41bf023c9..8df9abfa947b 100644
+--- a/drivers/acpi/osl.c
++++ b/drivers/acpi/osl.c
+@@ -45,6 +45,8 @@
+ #include <linux/uaccess.h>
+ #include <linux/io-64-nonatomic-lo-hi.h>
+
++#include "acpica/accommon.h"
++#include "acpica/acnamesp.h"
+ #include "internal.h"
+
+ #define _COMPONENT ACPI_OS_SERVICES
+@@ -1490,6 +1492,76 @@ int acpi_check_region(resource_size_t start, resource_size_t n,
+ }
+ EXPORT_SYMBOL(acpi_check_region);
+
++static acpi_status acpi_deactivate_mem_region(acpi_handle handle, u32 level,
++ void *_res, void **return_value)
++{
++ struct acpi_mem_space_context **mem_ctx;
++ union acpi_operand_object *handler_obj;
++ union acpi_operand_object *region_obj2;
++ union acpi_operand_object *region_obj;
++ struct resource *res = _res;
++ acpi_status status;
++
++ region_obj = acpi_ns_get_attached_object(handle);
++ if (!region_obj)
++ return AE_OK;
++
++ handler_obj = region_obj->region.handler;
++ if (!handler_obj)
++ return AE_OK;
++
++ if (region_obj->region.space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY)
++ return AE_OK;
++
++ if (!(region_obj->region.flags & AOPOBJ_SETUP_COMPLETE))
++ return AE_OK;
++
++ region_obj2 = acpi_ns_get_secondary_object(region_obj);
++ if (!region_obj2)
++ return AE_OK;
++
++ mem_ctx = (void *)&region_obj2->extra.region_context;
++
++ if (!(mem_ctx[0]->address >= res->start &&
++ mem_ctx[0]->address < res->end))
++ return AE_OK;
++
++ status = handler_obj->address_space.setup(region_obj,
++ ACPI_REGION_DEACTIVATE,
++ NULL, (void **)mem_ctx);
++ if (ACPI_SUCCESS(status))
++ region_obj->region.flags &= ~(AOPOBJ_SETUP_COMPLETE);
++
++ return status;
++}
++
++/**
++ * acpi_release_memory - Release any mappings done to a memory region
++ * @handle: Handle to namespace node
++ * @res: Memory resource
++ * @level: A level that terminates the search
++ *
++ * Walks through @handle and unmaps all SystemMemory Operation Regions that
++ * overlap with @res and that have already been activated (mapped).
++ *
++ * This is a helper that allows drivers to place special requirements on memory
++ * region that may overlap with operation regions, primarily allowing them to
++ * safely map the region as non-cached memory.
++ *
++ * The unmapped Operation Regions will be automatically remapped next time they
++ * are called, so the drivers do not need to do anything else.
++ */
++acpi_status acpi_release_memory(acpi_handle handle, struct resource *res,
++ u32 level)
++{
++ if (!(res->flags & IORESOURCE_MEM))
++ return AE_TYPE;
++
++ return acpi_walk_namespace(ACPI_TYPE_REGION, handle, level,
++ acpi_deactivate_mem_region, NULL, res, NULL);
++}
++EXPORT_SYMBOL_GPL(acpi_release_memory);
++
+ /*
+ * Let drivers know whether the resource checks are effective
+ */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 34af664b9f93..6fcc537d7779 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2080,10 +2080,18 @@ bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type)
+ switch (asic_type) {
+ #if defined(CONFIG_DRM_AMD_DC)
+ case CHIP_BONAIRE:
+- case CHIP_HAWAII:
+ case CHIP_KAVERI:
+ case CHIP_KABINI:
+ case CHIP_MULLINS:
++ /*
++ * We have systems in the wild with these ASICs that require
++ * LVDS and VGA support which is not supported with DC.
++ *
++ * Fallback to the non-DC driver here by default so as not to
++ * cause regressions.
++ */
++ return amdgpu_dc > 0;
++ case CHIP_HAWAII:
+ case CHIP_CARRIZO:
+ case CHIP_STONEY:
+ case CHIP_POLARIS11:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 6d08cde8443c..b52f26e7db98 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -749,8 +749,7 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
+ domain = amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type);
+ if (domain == AMDGPU_GEM_DOMAIN_VRAM) {
+ adev->vram_pin_size += amdgpu_bo_size(bo);
+- if (bo->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS)
+- adev->invisible_pin_size += amdgpu_bo_size(bo);
++ adev->invisible_pin_size += amdgpu_vram_mgr_bo_invisible_size(bo);
+ } else if (domain == AMDGPU_GEM_DOMAIN_GTT) {
+ adev->gart_pin_size += amdgpu_bo_size(bo);
+ }
+@@ -777,25 +776,22 @@ int amdgpu_bo_unpin(struct amdgpu_bo *bo)
+ bo->pin_count--;
+ if (bo->pin_count)
+ return 0;
+- for (i = 0; i < bo->placement.num_placement; i++) {
+- bo->placements[i].lpfn = 0;
+- bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
+- }
+- r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
+- if (unlikely(r)) {
+- dev_err(adev->dev, "%p validate failed for unpin\n", bo);
+- goto error;
+- }
+
+ if (bo->tbo.mem.mem_type == TTM_PL_VRAM) {
+ adev->vram_pin_size -= amdgpu_bo_size(bo);
+- if (bo->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS)
+- adev->invisible_pin_size -= amdgpu_bo_size(bo);
++ adev->invisible_pin_size -= amdgpu_vram_mgr_bo_invisible_size(bo);
+ } else if (bo->tbo.mem.mem_type == TTM_PL_TT) {
+ adev->gart_pin_size -= amdgpu_bo_size(bo);
+ }
+
+-error:
++ for (i = 0; i < bo->placement.num_placement; i++) {
++ bo->placements[i].lpfn = 0;
++ bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
++ }
++ r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
++ if (unlikely(r))
++ dev_err(adev->dev, "%p validate failed for unpin\n", bo);
++
+ return r;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+index 6ea7de863041..379e9ff173f1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+@@ -73,6 +73,7 @@ bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_mem_reg *mem);
+ uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager *man);
+ int amdgpu_gtt_mgr_recover(struct ttm_mem_type_manager *man);
+
++u64 amdgpu_vram_mgr_bo_invisible_size(struct amdgpu_bo *bo);
+ uint64_t amdgpu_vram_mgr_usage(struct ttm_mem_type_manager *man);
+ uint64_t amdgpu_vram_mgr_vis_usage(struct ttm_mem_type_manager *man);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index 58e495330b38..87e89cc12397 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -84,6 +84,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
+ }
+
+ hdr = (const struct common_firmware_header *)adev->vcn.fw->data;
++ adev->vcn.fw_version = le32_to_cpu(hdr->ucode_version);
+ family_id = le32_to_cpu(hdr->ucode_version) & 0xff;
+ version_major = (le32_to_cpu(hdr->ucode_version) >> 24) & 0xff;
+ version_minor = (le32_to_cpu(hdr->ucode_version) >> 8) & 0xff;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index da55a78d7380..11aa36aa304b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -1442,7 +1442,9 @@ static int amdgpu_vm_bo_split_mapping(struct amdgpu_device *adev,
+ uint64_t count;
+
+ max_entries = min(max_entries, 16ull * 1024ull);
+- for (count = 1; count < max_entries; ++count) {
++ for (count = 1;
++ count < max_entries / (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE);
++ ++count) {
+ uint64_t idx = pfn + count;
+
+ if (pages_addr[idx] !=
+@@ -1455,7 +1457,7 @@ static int amdgpu_vm_bo_split_mapping(struct amdgpu_device *adev,
+ dma_addr = pages_addr;
+ } else {
+ addr = pages_addr[pfn];
+- max_entries = count;
++ max_entries = count * (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE);
+ }
+
+ } else if (flags & AMDGPU_PTE_VALID) {
+@@ -1470,7 +1472,7 @@ static int amdgpu_vm_bo_split_mapping(struct amdgpu_device *adev,
+ if (r)
+ return r;
+
+- pfn += last - start + 1;
++ pfn += (last - start + 1) / (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE);
+ if (nodes && nodes->size == pfn) {
+ pfn = 0;
+ ++nodes;
+@@ -2112,7 +2114,8 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
+ before->last = saddr - 1;
+ before->offset = tmp->offset;
+ before->flags = tmp->flags;
+- list_add(&before->list, &tmp->list);
++ before->bo_va = tmp->bo_va;
++ list_add(&before->list, &tmp->bo_va->invalids);
+ }
+
+ /* Remember mapping split at the end */
+@@ -2122,7 +2125,8 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
+ after->offset = tmp->offset;
+ after->offset += after->start - tmp->start;
+ after->flags = tmp->flags;
+- list_add(&after->list, &tmp->list);
++ after->bo_va = tmp->bo_va;
++ list_add(&after->list, &tmp->bo_va->invalids);
+ }
+
+ list_del(&tmp->list);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index 9aca653bec07..b6333f92ba45 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -96,6 +96,38 @@ static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
+ adev->gmc.visible_vram_size : end) - start;
+ }
+
++/**
++ * amdgpu_vram_mgr_bo_invisible_size - CPU invisible BO size
++ *
++ * @bo: &amdgpu_bo buffer object (must be in VRAM)
++ *
++ * Returns:
++ * How much of the given &amdgpu_bo buffer object lies in CPU invisible VRAM.
++ */
++u64 amdgpu_vram_mgr_bo_invisible_size(struct amdgpu_bo *bo)
++{
++ struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
++ struct ttm_mem_reg *mem = &bo->tbo.mem;
++ struct drm_mm_node *nodes = mem->mm_node;
++ unsigned pages = mem->num_pages;
++ u64 usage = 0;
++
++ if (adev->gmc.visible_vram_size == adev->gmc.real_vram_size)
++ return 0;
++
++ if (mem->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT)
++ return amdgpu_bo_size(bo);
++
++ while (nodes && pages) {
++ usage += nodes->size << PAGE_SHIFT;
++ usage -= amdgpu_vram_mgr_vis_size(adev, nodes);
++ pages -= nodes->size;
++ ++nodes;
++ }
++
++ return usage;
++}
++
+ /**
+ * amdgpu_vram_mgr_new - allocate new ranges
+ *
+@@ -135,7 +167,8 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man,
+ num_nodes = DIV_ROUND_UP(mem->num_pages, pages_per_node);
+ }
+
+- nodes = kcalloc(num_nodes, sizeof(*nodes), GFP_KERNEL);
++ nodes = kvmalloc_array(num_nodes, sizeof(*nodes),
++ GFP_KERNEL | __GFP_ZERO);
+ if (!nodes)
+ return -ENOMEM;
+
+@@ -190,7 +223,7 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man,
+ drm_mm_remove_node(&nodes[i]);
+ spin_unlock(&mgr->lock);
+
+- kfree(nodes);
++ kvfree(nodes);
+ return r == -ENOSPC ? 0 : r;
+ }
+
+@@ -229,7 +262,7 @@ static void amdgpu_vram_mgr_del(struct ttm_mem_type_manager *man,
+ atomic64_sub(usage, &mgr->usage);
+ atomic64_sub(vis_usage, &mgr->vis_usage);
+
+- kfree(mem->mm_node);
++ kvfree(mem->mm_node);
+ mem->mm_node = NULL;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
+index 428d1928e44e..ac9617269a2f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
+@@ -467,8 +467,8 @@ static int vce_v3_0_hw_init(void *handle)
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+ vce_v3_0_override_vce_clock_gating(adev, true);
+- if (!(adev->flags & AMD_IS_APU))
+- amdgpu_asic_set_vce_clocks(adev, 10000, 10000);
++
++ amdgpu_asic_set_vce_clocks(adev, 10000, 10000);
+
+ for (i = 0; i < adev->vce.num_rings; i++)
+ adev->vce.ring[i].ready = false;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
+index 126f1276d347..9ae350dad235 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vi.c
++++ b/drivers/gpu/drm/amd/amdgpu/vi.c
+@@ -728,33 +728,59 @@ static int vi_set_uvd_clock(struct amdgpu_device *adev, u32 clock,
+ return r;
+
+ tmp = RREG32_SMC(cntl_reg);
+- tmp &= ~(CG_DCLK_CNTL__DCLK_DIR_CNTL_EN_MASK |
+- CG_DCLK_CNTL__DCLK_DIVIDER_MASK);
++
++ if (adev->flags & AMD_IS_APU)
++ tmp &= ~CG_DCLK_CNTL__DCLK_DIVIDER_MASK;
++ else
++ tmp &= ~(CG_DCLK_CNTL__DCLK_DIR_CNTL_EN_MASK |
++ CG_DCLK_CNTL__DCLK_DIVIDER_MASK);
+ tmp |= dividers.post_divider;
+ WREG32_SMC(cntl_reg, tmp);
+
+ for (i = 0; i < 100; i++) {
+- if (RREG32_SMC(status_reg) & CG_DCLK_STATUS__DCLK_STATUS_MASK)
+- break;
++ tmp = RREG32_SMC(status_reg);
++ if (adev->flags & AMD_IS_APU) {
++ if (tmp & 0x10000)
++ break;
++ } else {
++ if (tmp & CG_DCLK_STATUS__DCLK_STATUS_MASK)
++ break;
++ }
+ mdelay(10);
+ }
+ if (i == 100)
+ return -ETIMEDOUT;
+-
+ return 0;
+ }
+
++#define ixGNB_CLK1_DFS_CNTL 0xD82200F0
++#define ixGNB_CLK1_STATUS 0xD822010C
++#define ixGNB_CLK2_DFS_CNTL 0xD8220110
++#define ixGNB_CLK2_STATUS 0xD822012C
++#define ixGNB_CLK3_DFS_CNTL 0xD8220130
++#define ixGNB_CLK3_STATUS 0xD822014C
++
+ static int vi_set_uvd_clocks(struct amdgpu_device *adev, u32 vclk, u32 dclk)
+ {
+ int r;
+
+- r = vi_set_uvd_clock(adev, vclk, ixCG_VCLK_CNTL, ixCG_VCLK_STATUS);
+- if (r)
+- return r;
++ if (adev->flags & AMD_IS_APU) {
++ r = vi_set_uvd_clock(adev, vclk, ixGNB_CLK2_DFS_CNTL, ixGNB_CLK2_STATUS);
++ if (r)
++ return r;
+
+- r = vi_set_uvd_clock(adev, dclk, ixCG_DCLK_CNTL, ixCG_DCLK_STATUS);
+- if (r)
+- return r;
++ r = vi_set_uvd_clock(adev, dclk, ixGNB_CLK1_DFS_CNTL, ixGNB_CLK1_STATUS);
++ if (r)
++ return r;
++ } else {
++ r = vi_set_uvd_clock(adev, vclk, ixCG_VCLK_CNTL, ixCG_VCLK_STATUS);
++ if (r)
++ return r;
++
++ r = vi_set_uvd_clock(adev, dclk, ixCG_DCLK_CNTL, ixCG_DCLK_STATUS);
++ if (r)
++ return r;
++ }
+
+ return 0;
+ }
+@@ -764,6 +790,22 @@ static int vi_set_vce_clocks(struct amdgpu_device *adev, u32 evclk, u32 ecclk)
+ int r, i;
+ struct atom_clock_dividers dividers;
+ u32 tmp;
++ u32 reg_ctrl;
++ u32 reg_status;
++ u32 status_mask;
++ u32 reg_mask;
++
++ if (adev->flags & AMD_IS_APU) {
++ reg_ctrl = ixGNB_CLK3_DFS_CNTL;
++ reg_status = ixGNB_CLK3_STATUS;
++ status_mask = 0x00010000;
++ reg_mask = CG_ECLK_CNTL__ECLK_DIVIDER_MASK;
++ } else {
++ reg_ctrl = ixCG_ECLK_CNTL;
++ reg_status = ixCG_ECLK_STATUS;
++ status_mask = CG_ECLK_STATUS__ECLK_STATUS_MASK;
++ reg_mask = CG_ECLK_CNTL__ECLK_DIR_CNTL_EN_MASK | CG_ECLK_CNTL__ECLK_DIVIDER_MASK;
++ }
+
+ r = amdgpu_atombios_get_clock_dividers(adev,
+ COMPUTE_GPUCLK_INPUT_FLAG_DEFAULT_GPUCLK,
+@@ -772,24 +814,25 @@ static int vi_set_vce_clocks(struct amdgpu_device *adev, u32 evclk, u32 ecclk)
+ return r;
+
+ for (i = 0; i < 100; i++) {
+- if (RREG32_SMC(ixCG_ECLK_STATUS) & CG_ECLK_STATUS__ECLK_STATUS_MASK)
++ if (RREG32_SMC(reg_status) & status_mask)
+ break;
+ mdelay(10);
+ }
++
+ if (i == 100)
+ return -ETIMEDOUT;
+
+- tmp = RREG32_SMC(ixCG_ECLK_CNTL);
+- tmp &= ~(CG_ECLK_CNTL__ECLK_DIR_CNTL_EN_MASK |
+- CG_ECLK_CNTL__ECLK_DIVIDER_MASK);
++ tmp = RREG32_SMC(reg_ctrl);
++ tmp &= ~reg_mask;
+ tmp |= dividers.post_divider;
+- WREG32_SMC(ixCG_ECLK_CNTL, tmp);
++ WREG32_SMC(reg_ctrl, tmp);
+
+ for (i = 0; i < 100; i++) {
+- if (RREG32_SMC(ixCG_ECLK_STATUS) & CG_ECLK_STATUS__ECLK_STATUS_MASK)
++ if (RREG32_SMC(reg_status) & status_mask)
+ break;
+ mdelay(10);
+ }
++
+ if (i == 100)
+ return -ETIMEDOUT;
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 27579443cdc5..79afffa00772 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -46,6 +46,7 @@
+ #include <linux/moduleparam.h>
+ #include <linux/version.h>
+ #include <linux/types.h>
++#include <linux/pm_runtime.h>
+
+ #include <drm/drmP.h>
+ #include <drm/drm_atomic.h>
+@@ -927,6 +928,7 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ drm_mode_connector_update_edid_property(connector, NULL);
+ aconnector->num_modes = 0;
+ aconnector->dc_sink = NULL;
++ aconnector->edid = NULL;
+ }
+
+ mutex_unlock(&dev->mode_config.mutex);
+@@ -3965,10 +3967,11 @@ static void amdgpu_dm_do_flip(struct drm_crtc *crtc,
+ if (acrtc->base.state->event)
+ prepare_flip_isr(acrtc);
+
++ spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
++
+ surface_updates->surface = dc_stream_get_status(acrtc_state->stream)->plane_states[0];
+ surface_updates->flip_addr = &addr;
+
+-
+ dc_commit_updates_for_stream(adev->dm.dc,
+ surface_updates,
+ 1,
+@@ -3981,9 +3984,6 @@ static void amdgpu_dm_do_flip(struct drm_crtc *crtc,
+ __func__,
+ addr.address.grph.addr.high_part,
+ addr.address.grph.addr.low_part);
+-
+-
+- spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
+ }
+
+ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+@@ -4149,6 +4149,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ struct drm_connector *connector;
+ struct drm_connector_state *old_con_state, *new_con_state;
+ struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state;
++ int crtc_disable_count = 0;
+
+ drm_atomic_helper_update_legacy_modeset_state(dev, state);
+
+@@ -4211,6 +4212,8 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ if (dm_old_crtc_state->stream)
+ remove_stream(adev, acrtc, dm_old_crtc_state->stream);
+
++ pm_runtime_get_noresume(dev->dev);
++
+ acrtc->enabled = true;
+ acrtc->hw_mode = new_crtc_state->mode;
+ crtc->hwmode = new_crtc_state->mode;
+@@ -4348,6 +4351,9 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
+ bool modeset_needed;
+
++ if (old_crtc_state->active && !new_crtc_state->active)
++ crtc_disable_count++;
++
+ dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+ dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
+ modeset_needed = modeset_required(
+@@ -4396,6 +4402,14 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ drm_atomic_helper_wait_for_flip_done(dev, state);
+
+ drm_atomic_helper_cleanup_planes(dev, state);
++
++ /* Finally, drop a runtime PM reference for each newly disabled CRTC,
++ * so we can put the GPU into runtime suspend if we're not driving any
++ * displays anymore
++ */
++ for (i = 0; i < crtc_disable_count; i++)
++ pm_runtime_put_autosuspend(dev->dev);
++ pm_runtime_mark_last_busy(dev->dev);
+ }
+
+
+diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
+index e18800ed7cd1..7b8191eae68a 100644
+--- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
++++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
+@@ -875,7 +875,7 @@ static int atmel_hlcdc_plane_init_properties(struct atmel_hlcdc_plane *plane,
+ drm_object_attach_property(&plane->base.base,
+ props->alpha, 255);
+
+- if (desc->layout.xstride && desc->layout.pstride) {
++ if (desc->layout.xstride[0] && desc->layout.pstride[0]) {
+ int ret;
+
+ ret = drm_plane_create_rotation_property(&plane->base,
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index 633c18785c1e..b25cc5aa8fbe 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -1862,9 +1862,17 @@ static void i9xx_pipestat_irq_ack(struct drm_i915_private *dev_priv,
+
+ /*
+ * Clear the PIPE*STAT regs before the IIR
++ *
++ * Toggle the enable bits to make sure we get an
++ * edge in the ISR pipe event bit if we don't clear
++ * all the enabled status bits. Otherwise the edge
++ * triggered IIR on i965/g4x wouldn't notice that
++ * an interrupt is still pending.
+ */
+- if (pipe_stats[pipe])
+- I915_WRITE(reg, enable_mask | pipe_stats[pipe]);
++ if (pipe_stats[pipe]) {
++ I915_WRITE(reg, pipe_stats[pipe]);
++ I915_WRITE(reg, enable_mask);
++ }
+ }
+ spin_unlock(&dev_priv->irq_lock);
+ }
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 8a69a9275e28..29dc0a57e466 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -2565,12 +2565,17 @@ enum i915_power_well_id {
+ #define _3D_CHICKEN _MMIO(0x2084)
+ #define _3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB (1 << 10)
+ #define _3D_CHICKEN2 _MMIO(0x208c)
++
++#define FF_SLICE_CHICKEN _MMIO(0x2088)
++#define FF_SLICE_CHICKEN_CL_PROVOKING_VERTEX_FIX (1 << 1)
++
+ /* Disables pipelining of read flushes past the SF-WIZ interface.
+ * Required on all Ironlake steppings according to the B-Spec, but the
+ * particular danger of not doing so is not specified.
+ */
+ # define _3D_CHICKEN2_WM_READ_PIPELINED (1 << 14)
+ #define _3D_CHICKEN3 _MMIO(0x2090)
++#define _3D_CHICKEN_SF_PROVOKING_VERTEX_FIX (1 << 12)
+ #define _3D_CHICKEN_SF_DISABLE_OBJEND_CULL (1 << 10)
+ #define _3D_CHICKEN3_AA_LINE_QUALITY_FIX_ENABLE (1 << 5)
+ #define _3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL (1 << 5)
+diff --git a/drivers/gpu/drm/i915/intel_crt.c b/drivers/gpu/drm/i915/intel_crt.c
+index c0a8805b277f..d26827c44fb0 100644
+--- a/drivers/gpu/drm/i915/intel_crt.c
++++ b/drivers/gpu/drm/i915/intel_crt.c
+@@ -304,6 +304,9 @@ intel_crt_mode_valid(struct drm_connector *connector,
+ int max_dotclk = dev_priv->max_dotclk_freq;
+ int max_clock;
+
++ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return MODE_NO_DBLESCAN;
++
+ if (mode->clock < 25000)
+ return MODE_CLOCK_LOW;
+
+@@ -337,6 +340,12 @@ static bool intel_crt_compute_config(struct intel_encoder *encoder,
+ struct intel_crtc_state *pipe_config,
+ struct drm_connector_state *conn_state)
+ {
++ struct drm_display_mode *adjusted_mode =
++ &pipe_config->base.adjusted_mode;
++
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
++
+ return true;
+ }
+
+@@ -344,6 +353,12 @@ static bool pch_crt_compute_config(struct intel_encoder *encoder,
+ struct intel_crtc_state *pipe_config,
+ struct drm_connector_state *conn_state)
+ {
++ struct drm_display_mode *adjusted_mode =
++ &pipe_config->base.adjusted_mode;
++
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
++
+ pipe_config->has_pch_encoder = true;
+
+ return true;
+@@ -354,6 +369,11 @@ static bool hsw_crt_compute_config(struct intel_encoder *encoder,
+ struct drm_connector_state *conn_state)
+ {
+ struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct drm_display_mode *adjusted_mode =
++ &pipe_config->base.adjusted_mode;
++
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
+
+ pipe_config->has_pch_encoder = true;
+
+diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
+index 8c2d778560f0..1d14ebc7480d 100644
+--- a/drivers/gpu/drm/i915/intel_ddi.c
++++ b/drivers/gpu/drm/i915/intel_ddi.c
+@@ -2205,7 +2205,8 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
+ intel_prepare_dp_ddi_buffers(encoder, crtc_state);
+
+ intel_ddi_init_dp_buf_reg(encoder);
+- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
++ if (!is_mst)
++ intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+ intel_dp_start_link_train(intel_dp);
+ if (port != PORT_A || INTEL_GEN(dev_priv) >= 9)
+ intel_dp_stop_link_train(intel_dp);
+@@ -2303,12 +2304,15 @@ static void intel_ddi_post_disable_dp(struct intel_encoder *encoder,
+ struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+ struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
+ struct intel_dp *intel_dp = &dig_port->dp;
++ bool is_mst = intel_crtc_has_type(old_crtc_state,
++ INTEL_OUTPUT_DP_MST);
+
+ /*
+ * Power down sink before disabling the port, otherwise we end
+ * up getting interrupts from the sink on detecting link loss.
+ */
+- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
++ if (!is_mst)
++ intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
+
+ intel_disable_ddi_buf(encoder);
+
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 56004ffbd8bb..84011e08adc3 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -14211,12 +14211,22 @@ static enum drm_mode_status
+ intel_mode_valid(struct drm_device *dev,
+ const struct drm_display_mode *mode)
+ {
++ /*
++ * Can't reject DBLSCAN here because Xorg ddxen can add piles
++ * of DBLSCAN modes to the output's mode list when they detect
++ * the scaling mode property on the connector. And they don't
++ * ask the kernel to validate those modes in any way until
++ * modeset time at which point the client gets a protocol error.
++ * So in order to not upset those clients we silently ignore the
++ * DBLSCAN flag on such connectors. For other connectors we will
++ * reject modes with the DBLSCAN flag in encoder->compute_config().
++ * And we always reject DBLSCAN modes in connector->mode_valid()
++ * as we never want such modes on the connector's mode list.
++ */
++
+ if (mode->vscan > 1)
+ return MODE_NO_VSCAN;
+
+- if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+- return MODE_NO_DBLESCAN;
+-
+ if (mode->flags & DRM_MODE_FLAG_HSKEW)
+ return MODE_H_ILLEGAL;
+
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index b7b4cfdeb974..cd6e87756509 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -423,6 +423,9 @@ intel_dp_mode_valid(struct drm_connector *connector,
+ int max_rate, mode_rate, max_lanes, max_link_clock;
+ int max_dotclk;
+
++ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return MODE_NO_DBLESCAN;
++
+ max_dotclk = intel_dp_downstream_max_dotclock(intel_dp);
+
+ if (intel_dp_is_edp(intel_dp) && fixed_mode) {
+@@ -1760,7 +1763,10 @@ intel_dp_compute_config(struct intel_encoder *encoder,
+ conn_state->scaling_mode);
+ }
+
+- if ((IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) &&
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
++
++ if (HAS_GMCH_DISPLAY(dev_priv) &&
+ adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
+ return false;
+
+@@ -2759,16 +2765,6 @@ static void intel_disable_dp(struct intel_encoder *encoder,
+ static void g4x_disable_dp(struct intel_encoder *encoder,
+ const struct intel_crtc_state *old_crtc_state,
+ const struct drm_connector_state *old_conn_state)
+-{
+- intel_disable_dp(encoder, old_crtc_state, old_conn_state);
+-
+- /* disable the port before the pipe on g4x */
+- intel_dp_link_down(encoder, old_crtc_state);
+-}
+-
+-static void ilk_disable_dp(struct intel_encoder *encoder,
+- const struct intel_crtc_state *old_crtc_state,
+- const struct drm_connector_state *old_conn_state)
+ {
+ intel_disable_dp(encoder, old_crtc_state, old_conn_state);
+ }
+@@ -2784,13 +2780,19 @@ static void vlv_disable_dp(struct intel_encoder *encoder,
+ intel_disable_dp(encoder, old_crtc_state, old_conn_state);
+ }
+
+-static void ilk_post_disable_dp(struct intel_encoder *encoder,
++static void g4x_post_disable_dp(struct intel_encoder *encoder,
+ const struct intel_crtc_state *old_crtc_state,
+ const struct drm_connector_state *old_conn_state)
+ {
+ struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
+ enum port port = encoder->port;
+
++ /*
++ * Bspec does not list a specific disable sequence for g4x DP.
++ * Follow the ilk+ sequence (disable pipe before the port) for
++ * g4x DP as it does not suffer from underruns like the normal
++ * g4x modeset sequence (disable pipe after the port).
++ */
+ intel_dp_link_down(encoder, old_crtc_state);
+
+ /* Only ilk+ has port A */
+@@ -6327,7 +6329,7 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
+ drm_connector_init(dev, connector, &intel_dp_connector_funcs, type);
+ drm_connector_helper_add(connector, &intel_dp_connector_helper_funcs);
+
+- if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv))
++ if (!HAS_GMCH_DISPLAY(dev_priv))
+ connector->interlace_allowed = true;
+ connector->doublescan_allowed = 0;
+
+@@ -6426,15 +6428,11 @@ bool intel_dp_init(struct drm_i915_private *dev_priv,
+ intel_encoder->enable = vlv_enable_dp;
+ intel_encoder->disable = vlv_disable_dp;
+ intel_encoder->post_disable = vlv_post_disable_dp;
+- } else if (INTEL_GEN(dev_priv) >= 5) {
+- intel_encoder->pre_enable = g4x_pre_enable_dp;
+- intel_encoder->enable = g4x_enable_dp;
+- intel_encoder->disable = ilk_disable_dp;
+- intel_encoder->post_disable = ilk_post_disable_dp;
+ } else {
+ intel_encoder->pre_enable = g4x_pre_enable_dp;
+ intel_encoder->enable = g4x_enable_dp;
+ intel_encoder->disable = g4x_disable_dp;
++ intel_encoder->post_disable = g4x_post_disable_dp;
+ }
+
+ intel_dig_port->dp.output_reg = output_reg;
+diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
+index c3de0918ee13..5890500a3a8b 100644
+--- a/drivers/gpu/drm/i915/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/intel_dp_mst.c
+@@ -48,6 +48,9 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
+ bool reduce_m_n = drm_dp_has_quirk(&intel_dp->desc,
+ DP_DPCD_QUIRK_LIMITED_M_N);
+
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
++
+ pipe_config->has_pch_encoder = false;
+ bpp = 24;
+ if (intel_dp->compliance.test_data.bpc) {
+@@ -180,9 +183,11 @@ static void intel_mst_post_disable_dp(struct intel_encoder *encoder,
+ intel_dp->active_mst_links--;
+
+ intel_mst->connector = NULL;
+- if (intel_dp->active_mst_links == 0)
++ if (intel_dp->active_mst_links == 0) {
++ intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
+ intel_dig_port->base.post_disable(&intel_dig_port->base,
+ old_crtc_state, NULL);
++ }
+
+ DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links);
+ }
+@@ -223,7 +228,11 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
+
+ DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links);
+
++ if (intel_dp->active_mst_links == 0)
++ intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
++
+ drm_dp_send_power_updown_phy(&intel_dp->mst_mgr, connector->port, true);
++
+ if (intel_dp->active_mst_links == 0)
+ intel_dig_port->base.pre_enable(&intel_dig_port->base,
+ pipe_config, NULL);
+@@ -360,6 +369,9 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
+ if (!intel_dp)
+ return MODE_ERROR;
+
++ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return MODE_NO_DBLESCAN;
++
+ max_link_clock = intel_dp_max_link_rate(intel_dp);
+ max_lanes = intel_dp_max_lane_count(intel_dp);
+
+diff --git a/drivers/gpu/drm/i915/intel_dsi.c b/drivers/gpu/drm/i915/intel_dsi.c
+index 51a1d6868b1e..384b37e2da70 100644
+--- a/drivers/gpu/drm/i915/intel_dsi.c
++++ b/drivers/gpu/drm/i915/intel_dsi.c
+@@ -326,6 +326,9 @@ static bool intel_dsi_compute_config(struct intel_encoder *encoder,
+ conn_state->scaling_mode);
+ }
+
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
++
+ /* DSI uses short packets for sync events, so clear mode flags for DSI */
+ adjusted_mode->flags = 0;
+
+@@ -1266,6 +1269,9 @@ intel_dsi_mode_valid(struct drm_connector *connector,
+
+ DRM_DEBUG_KMS("\n");
+
++ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return MODE_NO_DBLESCAN;
++
+ if (fixed_mode) {
+ if (mode->hdisplay > fixed_mode->hdisplay)
+ return MODE_PANEL;
+diff --git a/drivers/gpu/drm/i915/intel_dvo.c b/drivers/gpu/drm/i915/intel_dvo.c
+index eb0c559b2715..6604806f89d5 100644
+--- a/drivers/gpu/drm/i915/intel_dvo.c
++++ b/drivers/gpu/drm/i915/intel_dvo.c
+@@ -219,6 +219,9 @@ intel_dvo_mode_valid(struct drm_connector *connector,
+ int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
+ int target_clock = mode->clock;
+
++ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return MODE_NO_DBLESCAN;
++
+ /* XXX: Validate clock range */
+
+ if (fixed_mode) {
+@@ -254,6 +257,9 @@ static bool intel_dvo_compute_config(struct intel_encoder *encoder,
+ if (fixed_mode)
+ intel_fixed_panel_mode(fixed_mode, adjusted_mode);
+
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
++
+ return true;
+ }
+
+diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
+index 1baef4ac7ecb..383f9df4145e 100644
+--- a/drivers/gpu/drm/i915/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/intel_hdmi.c
+@@ -1557,6 +1557,9 @@ intel_hdmi_mode_valid(struct drm_connector *connector,
+ bool force_dvi =
+ READ_ONCE(to_intel_digital_connector_state(connector->state)->force_audio) == HDMI_AUDIO_OFF_DVI;
+
++ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return MODE_NO_DBLESCAN;
++
+ clock = mode->clock;
+
+ if ((mode->flags & DRM_MODE_FLAG_3D_MASK) == DRM_MODE_FLAG_3D_FRAME_PACKING)
+@@ -1677,6 +1680,9 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
+ int desired_bpp;
+ bool force_dvi = intel_conn_state->force_audio == HDMI_AUDIO_OFF_DVI;
+
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
++
+ pipe_config->has_hdmi_sink = !force_dvi && intel_hdmi->has_hdmi_sink;
+
+ if (pipe_config->has_hdmi_sink)
+diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
+index 8704f7f8d072..df5ba1de8aea 100644
+--- a/drivers/gpu/drm/i915/intel_lrc.c
++++ b/drivers/gpu/drm/i915/intel_lrc.c
+@@ -1386,11 +1386,21 @@ static u32 *gen9_init_indirectctx_bb(struct intel_engine_cs *engine, u32 *batch)
+ /* WaFlushCoherentL3CacheLinesAtContextSwitch:skl,bxt,glk */
+ batch = gen8_emit_flush_coherentl3_wa(engine, batch);
+
++ *batch++ = MI_LOAD_REGISTER_IMM(3);
++
+ /* WaDisableGatherAtSetShaderCommonSlice:skl,bxt,kbl,glk */
+- *batch++ = MI_LOAD_REGISTER_IMM(1);
+ *batch++ = i915_mmio_reg_offset(COMMON_SLICE_CHICKEN2);
+ *batch++ = _MASKED_BIT_DISABLE(
+ GEN9_DISABLE_GATHER_AT_SET_SHADER_COMMON_SLICE);
++
++ /* BSpec: 11391 */
++ *batch++ = i915_mmio_reg_offset(FF_SLICE_CHICKEN);
++ *batch++ = _MASKED_BIT_ENABLE(FF_SLICE_CHICKEN_CL_PROVOKING_VERTEX_FIX);
++
++ /* BSpec: 11299 */
++ *batch++ = i915_mmio_reg_offset(_3D_CHICKEN3);
++ *batch++ = _MASKED_BIT_ENABLE(_3D_CHICKEN_SF_PROVOKING_VERTEX_FIX);
++
+ *batch++ = MI_NOOP;
+
+ /* WaClearSlmSpaceAtContextSwitch:kbl */
+diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
+index e125d16a1aa7..34dd1e5233ac 100644
+--- a/drivers/gpu/drm/i915/intel_lvds.c
++++ b/drivers/gpu/drm/i915/intel_lvds.c
+@@ -380,6 +380,8 @@ intel_lvds_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *fixed_mode = intel_connector->panel.fixed_mode;
+ int max_pixclk = to_i915(connector->dev)->max_dotclk_freq;
+
++ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return MODE_NO_DBLESCAN;
+ if (mode->hdisplay > fixed_mode->hdisplay)
+ return MODE_PANEL;
+ if (mode->vdisplay > fixed_mode->vdisplay)
+@@ -429,6 +431,9 @@ static bool intel_lvds_compute_config(struct intel_encoder *intel_encoder,
+ intel_fixed_panel_mode(intel_connector->panel.fixed_mode,
+ adjusted_mode);
+
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
++
+ if (HAS_PCH_SPLIT(dev_priv)) {
+ pipe_config->has_pch_encoder = true;
+
+diff --git a/drivers/gpu/drm/i915/intel_sdvo.c b/drivers/gpu/drm/i915/intel_sdvo.c
+index 96e213ec202d..d253e3a06e30 100644
+--- a/drivers/gpu/drm/i915/intel_sdvo.c
++++ b/drivers/gpu/drm/i915/intel_sdvo.c
+@@ -1160,6 +1160,9 @@ static bool intel_sdvo_compute_config(struct intel_encoder *encoder,
+ adjusted_mode);
+ }
+
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
++
+ /*
+ * Make the CRTC code factor in the SDVO pixel multiplier. The
+ * SDVO device will factor out the multiplier during mode_set.
+@@ -1621,6 +1624,9 @@ intel_sdvo_mode_valid(struct drm_connector *connector,
+ struct intel_sdvo *intel_sdvo = intel_attached_sdvo(connector);
+ int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
+
++ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return MODE_NO_DBLESCAN;
++
+ if (intel_sdvo->pixel_clock_min > mode->clock)
+ return MODE_CLOCK_LOW;
+
+diff --git a/drivers/gpu/drm/i915/intel_tv.c b/drivers/gpu/drm/i915/intel_tv.c
+index 885fc3809f7f..b55b5c157e38 100644
+--- a/drivers/gpu/drm/i915/intel_tv.c
++++ b/drivers/gpu/drm/i915/intel_tv.c
+@@ -850,6 +850,9 @@ intel_tv_mode_valid(struct drm_connector *connector,
+ const struct tv_mode *tv_mode = intel_tv_mode_find(connector->state);
+ int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
+
++ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return MODE_NO_DBLESCAN;
++
+ if (mode->clock > max_dotclk)
+ return MODE_CLOCK_HIGH;
+
+@@ -877,16 +880,21 @@ intel_tv_compute_config(struct intel_encoder *encoder,
+ struct drm_connector_state *conn_state)
+ {
+ const struct tv_mode *tv_mode = intel_tv_mode_find(conn_state);
++ struct drm_display_mode *adjusted_mode =
++ &pipe_config->base.adjusted_mode;
+
+ if (!tv_mode)
+ return false;
+
+- pipe_config->base.adjusted_mode.crtc_clock = tv_mode->clock;
++ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
++ return false;
++
++ adjusted_mode->crtc_clock = tv_mode->clock;
+ DRM_DEBUG_KMS("forcing bpc to 8 for TV\n");
+ pipe_config->pipe_bpp = 8*3;
+
+ /* TV has it's own notion of sync and other mode flags, so clear them. */
+- pipe_config->base.adjusted_mode.flags = 0;
++ adjusted_mode->flags = 0;
+
+ /*
+ * FIXME: We don't check whether the input mode is actually what we want
+diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
+index ecb35ed0eac8..61e51516fec5 100644
+--- a/drivers/gpu/drm/qxl/qxl_display.c
++++ b/drivers/gpu/drm/qxl/qxl_display.c
+@@ -630,7 +630,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
+ struct qxl_cursor_cmd *cmd;
+ struct qxl_cursor *cursor;
+ struct drm_gem_object *obj;
+- struct qxl_bo *cursor_bo = NULL, *user_bo = NULL;
++ struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
+ int ret;
+ void *user_ptr;
+ int size = 64*64*4;
+@@ -684,7 +684,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
+ cursor_bo, 0);
+ cmd->type = QXL_CURSOR_SET;
+
+- qxl_bo_unref(&qcrtc->cursor_bo);
++ old_cursor_bo = qcrtc->cursor_bo;
+ qcrtc->cursor_bo = cursor_bo;
+ cursor_bo = NULL;
+ } else {
+@@ -704,6 +704,9 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
+ qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+ qxl_release_fence_buffer_objects(release);
+
++ if (old_cursor_bo)
++ qxl_bo_unref(&old_cursor_bo);
++
+ qxl_bo_unref(&cursor_bo);
+
+ return;
+diff --git a/drivers/gpu/drm/sti/Kconfig b/drivers/gpu/drm/sti/Kconfig
+index cca4b3c9aeb5..1963cc1b1cc5 100644
+--- a/drivers/gpu/drm/sti/Kconfig
++++ b/drivers/gpu/drm/sti/Kconfig
+@@ -1,6 +1,6 @@
+ config DRM_STI
+ tristate "DRM Support for STMicroelectronics SoC stiH4xx Series"
+- depends on DRM && (ARCH_STI || ARCH_MULTIPLATFORM)
++ depends on OF && DRM && (ARCH_STI || ARCH_MULTIPLATFORM)
+ select RESET_CONTROLLER
+ select DRM_KMS_HELPER
+ select DRM_GEM_CMA_HELPER
+@@ -8,6 +8,5 @@ config DRM_STI
+ select DRM_PANEL
+ select FW_LOADER
+ select SND_SOC_HDMI_CODEC if SND_SOC
+- select OF
+ help
+ Choose this option to enable DRM on STM stiH4xx chipset
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+index c3d92d537240..8045871335b5 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+@@ -17,7 +17,6 @@
+ #include <drm/drm_encoder.h>
+ #include <drm/drm_modes.h>
+ #include <drm/drm_of.h>
+-#include <drm/drm_panel.h>
+
+ #include <uapi/drm/drm_mode.h>
+
+@@ -350,9 +349,6 @@ static void sun4i_tcon0_mode_set_lvds(struct sun4i_tcon *tcon,
+ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon *tcon,
+ const struct drm_display_mode *mode)
+ {
+- struct drm_panel *panel = tcon->panel;
+- struct drm_connector *connector = panel->connector;
+- struct drm_display_info display_info = connector->display_info;
+ unsigned int bp, hsync, vsync;
+ u8 clk_delay;
+ u32 val = 0;
+@@ -410,27 +406,6 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon *tcon,
+ if (mode->flags & DRM_MODE_FLAG_PVSYNC)
+ val |= SUN4I_TCON0_IO_POL_VSYNC_POSITIVE;
+
+- /*
+- * On A20 and similar SoCs, the only way to achieve Positive Edge
+- * (Rising Edge), is setting dclk clock phase to 2/3(240°).
+- * By default TCON works in Negative Edge(Falling Edge),
+- * this is why phase is set to 0 in that case.
+- * Unfortunately there's no way to logically invert dclk through
+- * IO_POL register.
+- * The only acceptable way to work, triple checked with scope,
+- * is using clock phase set to 0° for Negative Edge and set to 240°
+- * for Positive Edge.
+- * On A33 and similar SoCs there would be a 90° phase option,
+- * but it divides also dclk by 2.
+- * Following code is a way to avoid quirks all around TCON
+- * and DOTCLOCK drivers.
+- */
+- if (display_info.bus_flags & DRM_BUS_FLAG_PIXDATA_POSEDGE)
+- clk_set_phase(tcon->dclk, 240);
+-
+- if (display_info.bus_flags & DRM_BUS_FLAG_PIXDATA_NEGEDGE)
+- clk_set_phase(tcon->dclk, 0);
+-
+ regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG,
+ SUN4I_TCON0_IO_POL_HSYNC_POSITIVE | SUN4I_TCON0_IO_POL_VSYNC_POSITIVE,
+ val);
+diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
+index 7a2da7f9d4dc..5485b35fe553 100644
+--- a/drivers/iio/accel/mma8452.c
++++ b/drivers/iio/accel/mma8452.c
+@@ -1034,7 +1034,7 @@ static irqreturn_t mma8452_interrupt(int irq, void *p)
+ if (src < 0)
+ return IRQ_NONE;
+
+- if (!(src & data->chip_info->enabled_events))
++ if (!(src & (data->chip_info->enabled_events | MMA8452_INT_DRDY)))
+ return IRQ_NONE;
+
+ if (src & MMA8452_INT_DRDY) {
+diff --git a/drivers/staging/android/ion/ion_heap.c b/drivers/staging/android/ion/ion_heap.c
+index 772dad65396e..f32c12439eee 100644
+--- a/drivers/staging/android/ion/ion_heap.c
++++ b/drivers/staging/android/ion/ion_heap.c
+@@ -29,7 +29,7 @@ void *ion_heap_map_kernel(struct ion_heap *heap,
+ struct page **tmp = pages;
+
+ if (!pages)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+
+ if (buffer->flags & ION_FLAG_CACHED)
+ pgprot = PAGE_KERNEL;
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index cbe98bc2b998..431742201709 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -124,6 +124,8 @@ struct n_tty_data {
+ struct mutex output_lock;
+ };
+
++#define MASK(x) ((x) & (N_TTY_BUF_SIZE - 1))
++
+ static inline size_t read_cnt(struct n_tty_data *ldata)
+ {
+ return ldata->read_head - ldata->read_tail;
+@@ -141,6 +143,7 @@ static inline unsigned char *read_buf_addr(struct n_tty_data *ldata, size_t i)
+
+ static inline unsigned char echo_buf(struct n_tty_data *ldata, size_t i)
+ {
++ smp_rmb(); /* Matches smp_wmb() in add_echo_byte(). */
+ return ldata->echo_buf[i & (N_TTY_BUF_SIZE - 1)];
+ }
+
+@@ -316,9 +319,7 @@ static inline void put_tty_queue(unsigned char c, struct n_tty_data *ldata)
+ static void reset_buffer_flags(struct n_tty_data *ldata)
+ {
+ ldata->read_head = ldata->canon_head = ldata->read_tail = 0;
+- ldata->echo_head = ldata->echo_tail = ldata->echo_commit = 0;
+ ldata->commit_head = 0;
+- ldata->echo_mark = 0;
+ ldata->line_start = 0;
+
+ ldata->erasing = 0;
+@@ -617,12 +618,19 @@ static size_t __process_echoes(struct tty_struct *tty)
+ old_space = space = tty_write_room(tty);
+
+ tail = ldata->echo_tail;
+- while (ldata->echo_commit != tail) {
++ while (MASK(ldata->echo_commit) != MASK(tail)) {
+ c = echo_buf(ldata, tail);
+ if (c == ECHO_OP_START) {
+ unsigned char op;
+ int no_space_left = 0;
+
++ /*
++ * Since add_echo_byte() is called without holding
++ * output_lock, we might see only portion of multi-byte
++ * operation.
++ */
++ if (MASK(ldata->echo_commit) == MASK(tail + 1))
++ goto not_yet_stored;
+ /*
+ * If the buffer byte is the start of a multi-byte
+ * operation, get the next byte, which is either the
+@@ -634,6 +642,8 @@ static size_t __process_echoes(struct tty_struct *tty)
+ unsigned int num_chars, num_bs;
+
+ case ECHO_OP_ERASE_TAB:
++ if (MASK(ldata->echo_commit) == MASK(tail + 2))
++ goto not_yet_stored;
+ num_chars = echo_buf(ldata, tail + 2);
+
+ /*
+@@ -728,7 +738,8 @@ static size_t __process_echoes(struct tty_struct *tty)
+ /* If the echo buffer is nearly full (so that the possibility exists
+ * of echo overrun before the next commit), then discard enough
+ * data at the tail to prevent a subsequent overrun */
+- while (ldata->echo_commit - tail >= ECHO_DISCARD_WATERMARK) {
++ while (ldata->echo_commit > tail &&
++ ldata->echo_commit - tail >= ECHO_DISCARD_WATERMARK) {
+ if (echo_buf(ldata, tail) == ECHO_OP_START) {
+ if (echo_buf(ldata, tail + 1) == ECHO_OP_ERASE_TAB)
+ tail += 3;
+@@ -738,6 +749,7 @@ static size_t __process_echoes(struct tty_struct *tty)
+ tail++;
+ }
+
++ not_yet_stored:
+ ldata->echo_tail = tail;
+ return old_space - space;
+ }
+@@ -748,6 +760,7 @@ static void commit_echoes(struct tty_struct *tty)
+ size_t nr, old, echoed;
+ size_t head;
+
++ mutex_lock(&ldata->output_lock);
+ head = ldata->echo_head;
+ ldata->echo_mark = head;
+ old = ldata->echo_commit - ldata->echo_tail;
+@@ -756,10 +769,12 @@ static void commit_echoes(struct tty_struct *tty)
+ * is over the threshold (and try again each time another
+ * block is accumulated) */
+ nr = head - ldata->echo_tail;
+- if (nr < ECHO_COMMIT_WATERMARK || (nr % ECHO_BLOCK > old % ECHO_BLOCK))
++ if (nr < ECHO_COMMIT_WATERMARK ||
++ (nr % ECHO_BLOCK > old % ECHO_BLOCK)) {
++ mutex_unlock(&ldata->output_lock);
+ return;
++ }
+
+- mutex_lock(&ldata->output_lock);
+ ldata->echo_commit = head;
+ echoed = __process_echoes(tty);
+ mutex_unlock(&ldata->output_lock);
+@@ -810,7 +825,9 @@ static void flush_echoes(struct tty_struct *tty)
+
+ static inline void add_echo_byte(unsigned char c, struct n_tty_data *ldata)
+ {
+- *echo_buf_addr(ldata, ldata->echo_head++) = c;
++ *echo_buf_addr(ldata, ldata->echo_head) = c;
++ smp_wmb(); /* Matches smp_rmb() in echo_buf(). */
++ ldata->echo_head++;
+ }
+
+ /**
+@@ -978,14 +995,15 @@ static void eraser(unsigned char c, struct tty_struct *tty)
+ }
+
+ seen_alnums = 0;
+- while (ldata->read_head != ldata->canon_head) {
++ while (MASK(ldata->read_head) != MASK(ldata->canon_head)) {
+ head = ldata->read_head;
+
+ /* erase a single possibly multibyte character */
+ do {
+ head--;
+ c = read_buf(ldata, head);
+- } while (is_continuation(c, tty) && head != ldata->canon_head);
++ } while (is_continuation(c, tty) &&
++ MASK(head) != MASK(ldata->canon_head));
+
+ /* do not partially erase */
+ if (is_continuation(c, tty))
+@@ -1027,7 +1045,7 @@ static void eraser(unsigned char c, struct tty_struct *tty)
+ * This info is used to go back the correct
+ * number of columns.
+ */
+- while (tail != ldata->canon_head) {
++ while (MASK(tail) != MASK(ldata->canon_head)) {
+ tail--;
+ c = read_buf(ldata, tail);
+ if (c == '\t') {
+@@ -1302,7 +1320,7 @@ n_tty_receive_char_special(struct tty_struct *tty, unsigned char c)
+ finish_erasing(ldata);
+ echo_char(c, tty);
+ echo_char_raw('\n', ldata);
+- while (tail != ldata->read_head) {
++ while (MASK(tail) != MASK(ldata->read_head)) {
+ echo_char(read_buf(ldata, tail), tty);
+ tail++;
+ }
+@@ -1878,30 +1896,21 @@ static int n_tty_open(struct tty_struct *tty)
+ struct n_tty_data *ldata;
+
+ /* Currently a malloc failure here can panic */
+- ldata = vmalloc(sizeof(*ldata));
++ ldata = vzalloc(sizeof(*ldata));
+ if (!ldata)
+- goto err;
++ return -ENOMEM;
+
+ ldata->overrun_time = jiffies;
+ mutex_init(&ldata->atomic_read_lock);
+ mutex_init(&ldata->output_lock);
+
+ tty->disc_data = ldata;
+- reset_buffer_flags(tty->disc_data);
+- ldata->column = 0;
+- ldata->canon_column = 0;
+- ldata->num_overrun = 0;
+- ldata->no_room = 0;
+- ldata->lnext = 0;
+ tty->closing = 0;
+ /* indicate buffer work may resume */
+ clear_bit(TTY_LDISC_HALTED, &tty->flags);
+ n_tty_set_termios(tty, NULL);
+ tty_unthrottle(tty);
+-
+ return 0;
+-err:
+- return -ENOMEM;
+ }
+
+ static inline int input_available_p(struct tty_struct *tty, int poll)
+@@ -2411,7 +2420,7 @@ static unsigned long inq_canon(struct n_tty_data *ldata)
+ tail = ldata->read_tail;
+ nr = head - tail;
+ /* Skip EOF-chars.. */
+- while (head != tail) {
++ while (MASK(head) != MASK(tail)) {
+ if (test_bit(tail & (N_TTY_BUF_SIZE - 1), ldata->read_flags) &&
+ read_buf(ldata, tail) == __DISABLED_CHAR)
+ nr--;
+diff --git a/drivers/tty/serdev/core.c b/drivers/tty/serdev/core.c
+index df93b727e984..9e59f4788589 100644
+--- a/drivers/tty/serdev/core.c
++++ b/drivers/tty/serdev/core.c
+@@ -617,6 +617,7 @@ EXPORT_SYMBOL_GPL(__serdev_device_driver_register);
+ static void __exit serdev_exit(void)
+ {
+ bus_unregister(&serdev_bus_type);
++ ida_destroy(&ctrl_ida);
+ }
+ module_exit(serdev_exit);
+
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 3296a05cda2d..f80a300b5d68 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -3339,9 +3339,7 @@ static const struct pci_device_id blacklist[] = {
+ /* multi-io cards handled by parport_serial */
+ { PCI_DEVICE(0x4348, 0x7053), }, /* WCH CH353 2S1P */
+ { PCI_DEVICE(0x4348, 0x5053), }, /* WCH CH353 1S1P */
+- { PCI_DEVICE(0x4348, 0x7173), }, /* WCH CH355 4S */
+ { PCI_DEVICE(0x1c00, 0x3250), }, /* WCH CH382 2S1P */
+- { PCI_DEVICE(0x1c00, 0x3470), }, /* WCH CH384 4S */
+
+ /* Moxa Smartio MUE boards handled by 8250_moxa */
+ { PCI_VDEVICE(MOXA, 0x1024), },
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index f97251f39c26..ec17c9fd6470 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -784,7 +784,7 @@ int vc_allocate(unsigned int currcons) /* return 0 on success */
+ if (!*vc->vc_uni_pagedir_loc)
+ con_set_default_unimap(vc);
+
+- vc->vc_screenbuf = kmalloc(vc->vc_screenbuf_size, GFP_KERNEL);
++ vc->vc_screenbuf = kzalloc(vc->vc_screenbuf_size, GFP_KERNEL);
+ if (!vc->vc_screenbuf)
+ goto err_free;
+
+@@ -871,7 +871,7 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+
+ if (new_screen_size > (4 << 20))
+ return -EINVAL;
+- newscreen = kmalloc(new_screen_size, GFP_USER);
++ newscreen = kzalloc(new_screen_size, GFP_USER);
+ if (!newscreen)
+ return -ENOMEM;
+
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 7b366a6c0b49..998b32d0167e 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1758,6 +1758,9 @@ static const struct usb_device_id acm_ids[] = {
+ { USB_DEVICE(0x11ca, 0x0201), /* VeriFone Mx870 Gadget Serial */
+ .driver_info = SINGLE_RX_URB,
+ },
++ { USB_DEVICE(0x1965, 0x0018), /* Uniden UBC125XLT */
++ .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
++ },
+ { USB_DEVICE(0x22b8, 0x7000), /* Motorola Q Phone */
+ .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
+ },
+diff --git a/drivers/usb/dwc2/hcd_queue.c b/drivers/usb/dwc2/hcd_queue.c
+index e34ad5e65350..6baa75da7907 100644
+--- a/drivers/usb/dwc2/hcd_queue.c
++++ b/drivers/usb/dwc2/hcd_queue.c
+@@ -383,7 +383,7 @@ static unsigned long *dwc2_get_ls_map(struct dwc2_hsotg *hsotg,
+ /* Get the map and adjust if this is a multi_tt hub */
+ map = qh->dwc_tt->periodic_bitmaps;
+ if (qh->dwc_tt->usb_tt->multi)
+- map += DWC2_ELEMENTS_PER_LS_BITMAP * qh->ttport;
++ map += DWC2_ELEMENTS_PER_LS_BITMAP * (qh->ttport - 1);
+
+ return map;
+ }
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index e5ace8995b3b..99e7547f234f 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -878,12 +878,12 @@ void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id)
+
+ dev = xhci->devs[slot_id];
+
+- trace_xhci_free_virt_device(dev);
+-
+ xhci->dcbaa->dev_context_ptrs[slot_id] = 0;
+ if (!dev)
+ return;
+
++ trace_xhci_free_virt_device(dev);
++
+ if (dev->tt_info)
+ old_active_eps = dev->tt_info->active_eps;
+
+diff --git a/drivers/usb/host/xhci-trace.h b/drivers/usb/host/xhci-trace.h
+index 410544ffe78f..88b427434bd8 100644
+--- a/drivers/usb/host/xhci-trace.h
++++ b/drivers/usb/host/xhci-trace.h
+@@ -171,6 +171,37 @@ DEFINE_EVENT(xhci_log_trb, xhci_dbc_gadget_ep_queue,
+ TP_ARGS(ring, trb)
+ );
+
++DECLARE_EVENT_CLASS(xhci_log_free_virt_dev,
++ TP_PROTO(struct xhci_virt_device *vdev),
++ TP_ARGS(vdev),
++ TP_STRUCT__entry(
++ __field(void *, vdev)
++ __field(unsigned long long, out_ctx)
++ __field(unsigned long long, in_ctx)
++ __field(u8, fake_port)
++ __field(u8, real_port)
++ __field(u16, current_mel)
++
++ ),
++ TP_fast_assign(
++ __entry->vdev = vdev;
++ __entry->in_ctx = (unsigned long long) vdev->in_ctx->dma;
++ __entry->out_ctx = (unsigned long long) vdev->out_ctx->dma;
++ __entry->fake_port = (u8) vdev->fake_port;
++ __entry->real_port = (u8) vdev->real_port;
++ __entry->current_mel = (u16) vdev->current_mel;
++ ),
++ TP_printk("vdev %p ctx %llx | %llx fake_port %d real_port %d current_mel %d",
++ __entry->vdev, __entry->in_ctx, __entry->out_ctx,
++ __entry->fake_port, __entry->real_port, __entry->current_mel
++ )
++);
++
++DEFINE_EVENT(xhci_log_free_virt_dev, xhci_free_virt_device,
++ TP_PROTO(struct xhci_virt_device *vdev),
++ TP_ARGS(vdev)
++);
++
+ DECLARE_EVENT_CLASS(xhci_log_virt_dev,
+ TP_PROTO(struct xhci_virt_device *vdev),
+ TP_ARGS(vdev),
+@@ -208,11 +239,6 @@ DEFINE_EVENT(xhci_log_virt_dev, xhci_alloc_virt_device,
+ TP_ARGS(vdev)
+ );
+
+-DEFINE_EVENT(xhci_log_virt_dev, xhci_free_virt_device,
+- TP_PROTO(struct xhci_virt_device *vdev),
+- TP_ARGS(vdev)
+-);
+-
+ DEFINE_EVENT(xhci_log_virt_dev, xhci_setup_device,
+ TP_PROTO(struct xhci_virt_device *vdev),
+ TP_ARGS(vdev)
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index eb6c26cbe579..ee0cc1d90b51 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -95,6 +95,9 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x10C4, 0x8156) }, /* B&G H3000 link cable */
+ { USB_DEVICE(0x10C4, 0x815E) }, /* Helicomm IP-Link 1220-DVM */
+ { USB_DEVICE(0x10C4, 0x815F) }, /* Timewave HamLinkUSB */
++ { USB_DEVICE(0x10C4, 0x817C) }, /* CESINEL MEDCAL N Power Quality Monitor */
++ { USB_DEVICE(0x10C4, 0x817D) }, /* CESINEL MEDCAL NT Power Quality Monitor */
++ { USB_DEVICE(0x10C4, 0x817E) }, /* CESINEL MEDCAL S Power Quality Monitor */
+ { USB_DEVICE(0x10C4, 0x818B) }, /* AVIT Research USB to TTL */
+ { USB_DEVICE(0x10C4, 0x819F) }, /* MJS USB Toslink Switcher */
+ { USB_DEVICE(0x10C4, 0x81A6) }, /* ThinkOptics WavIt */
+@@ -112,6 +115,9 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */
+ { USB_DEVICE(0x10C4, 0x8281) }, /* Nanotec Plug & Drive */
+ { USB_DEVICE(0x10C4, 0x8293) }, /* Telegesis ETRX2USB */
++ { USB_DEVICE(0x10C4, 0x82EF) }, /* CESINEL FALCO 6105 AC Power Supply */
++ { USB_DEVICE(0x10C4, 0x82F1) }, /* CESINEL MEDCAL EFD Earth Fault Detector */
++ { USB_DEVICE(0x10C4, 0x82F2) }, /* CESINEL MEDCAL ST Network Analyzer */
+ { USB_DEVICE(0x10C4, 0x82F4) }, /* Starizona MicroTouch */
+ { USB_DEVICE(0x10C4, 0x82F9) }, /* Procyon AVS */
+ { USB_DEVICE(0x10C4, 0x8341) }, /* Siemens MC35PU GPRS Modem */
+@@ -124,7 +130,9 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x10C4, 0x8470) }, /* Juniper Networks BX Series System Console */
+ { USB_DEVICE(0x10C4, 0x8477) }, /* Balluff RFID */
+ { USB_DEVICE(0x10C4, 0x84B6) }, /* Starizona Hyperion */
++ { USB_DEVICE(0x10C4, 0x851E) }, /* CESINEL MEDCAL PT Network Analyzer */
+ { USB_DEVICE(0x10C4, 0x85A7) }, /* LifeScan OneTouch Verio IQ */
++ { USB_DEVICE(0x10C4, 0x85B8) }, /* CESINEL ReCon T Energy Logger */
+ { USB_DEVICE(0x10C4, 0x85EA) }, /* AC-Services IBUS-IF */
+ { USB_DEVICE(0x10C4, 0x85EB) }, /* AC-Services CIS-IBUS */
+ { USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */
+@@ -134,17 +142,23 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x10C4, 0x8857) }, /* CEL EM357 ZigBee USB Stick */
+ { USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */
+ { USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */
++ { USB_DEVICE(0x10C4, 0x88FB) }, /* CESINEL MEDCAL STII Network Analyzer */
++ { USB_DEVICE(0x10C4, 0x8938) }, /* CESINEL MEDCAL S II Network Analyzer */
+ { USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
+ { USB_DEVICE(0x10C4, 0x8962) }, /* Brim Brothers charging dock */
+ { USB_DEVICE(0x10C4, 0x8977) }, /* CEL MeshWorks DevKit Device */
+ { USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */
++ { USB_DEVICE(0x10C4, 0x89A4) }, /* CESINEL FTBC Flexible Thyristor Bridge Controller */
+ { USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
+ { USB_DEVICE(0x10C4, 0x8A5E) }, /* CEL EM3588 ZigBee USB Stick Long Range */
+ { USB_DEVICE(0x10C4, 0x8B34) }, /* Qivicon ZigBee USB Radio Stick */
+ { USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
+ { USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */
++ { USB_DEVICE(0x10C4, 0xEA63) }, /* Silicon Labs Windows Update (CP2101-4/CP2102N) */
+ { USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */
+ { USB_DEVICE(0x10C4, 0xEA71) }, /* Infinity GPS-MIC-1 Radio Monophone */
++ { USB_DEVICE(0x10C4, 0xEA7A) }, /* Silicon Labs Windows Update (CP2105) */
++ { USB_DEVICE(0x10C4, 0xEA7B) }, /* Silicon Labs Windows Update (CP2108) */
+ { USB_DEVICE(0x10C4, 0xF001) }, /* Elan Digital Systems USBscope50 */
+ { USB_DEVICE(0x10C4, 0xF002) }, /* Elan Digital Systems USBwave12 */
+ { USB_DEVICE(0x10C4, 0xF003) }, /* Elan Digital Systems USBpulse100 */
+diff --git a/drivers/usb/typec/tcpm.c b/drivers/usb/typec/tcpm.c
+index ded49e3bf2b0..9b29b67191bc 100644
+--- a/drivers/usb/typec/tcpm.c
++++ b/drivers/usb/typec/tcpm.c
+@@ -388,17 +388,18 @@ static void _tcpm_log(struct tcpm_port *port, const char *fmt, va_list args)
+ u64 ts_nsec = local_clock();
+ unsigned long rem_nsec;
+
++ mutex_lock(&port->logbuffer_lock);
+ if (!port->logbuffer[port->logbuffer_head]) {
+ port->logbuffer[port->logbuffer_head] =
+ kzalloc(LOG_BUFFER_ENTRY_SIZE, GFP_KERNEL);
+- if (!port->logbuffer[port->logbuffer_head])
++ if (!port->logbuffer[port->logbuffer_head]) {
++ mutex_unlock(&port->logbuffer_lock);
+ return;
++ }
+ }
+
+ vsnprintf(tmpbuffer, sizeof(tmpbuffer), fmt, args);
+
+- mutex_lock(&port->logbuffer_lock);
+-
+ if (tcpm_log_full(port)) {
+ port->logbuffer_head = max(port->logbuffer_head - 1, 0);
+ strcpy(tmpbuffer, "overflow");
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index bd5cca5632b3..8d0a6fe748bd 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -350,6 +350,19 @@ static void ucsi_connector_change(struct work_struct *work)
+ }
+
+ if (con->status.change & UCSI_CONSTAT_CONNECT_CHANGE) {
++ typec_set_pwr_role(con->port, con->status.pwr_dir);
++
++ switch (con->status.partner_type) {
++ case UCSI_CONSTAT_PARTNER_TYPE_UFP:
++ typec_set_data_role(con->port, TYPEC_HOST);
++ break;
++ case UCSI_CONSTAT_PARTNER_TYPE_DFP:
++ typec_set_data_role(con->port, TYPEC_DEVICE);
++ break;
++ default:
++ break;
++ }
++
+ if (con->status.connected)
+ ucsi_register_partner(con);
+ else
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index 44eb4e1ea817..a18112a83fae 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -79,6 +79,11 @@ static int ucsi_acpi_probe(struct platform_device *pdev)
+ return -ENODEV;
+ }
+
++ /* This will make sure we can use ioremap_nocache() */
++ status = acpi_release_memory(ACPI_HANDLE(&pdev->dev), res, 1);
++ if (ACPI_FAILURE(status))
++ return -ENOMEM;
++
+ /*
+ * NOTE: The memory region for the data structures is used also in an
+ * operation region, which means ACPI has already reserved it. Therefore
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 15bfb15c2fa5..a6a7ae897b40 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -443,6 +443,9 @@ int acpi_check_resource_conflict(const struct resource *res);
+ int acpi_check_region(resource_size_t start, resource_size_t n,
+ const char *name);
+
++acpi_status acpi_release_memory(acpi_handle handle, struct resource *res,
++ u32 level);
++
+ int acpi_resources_are_enforced(void);
+
+ #ifdef CONFIG_HIBERNATION
+diff --git a/net/ipv6/netfilter/ip6t_rpfilter.c b/net/ipv6/netfilter/ip6t_rpfilter.c
+index d12f511929f5..0fe61ede77c6 100644
+--- a/net/ipv6/netfilter/ip6t_rpfilter.c
++++ b/net/ipv6/netfilter/ip6t_rpfilter.c
+@@ -48,6 +48,8 @@ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
+ }
+
+ fl6.flowi6_mark = flags & XT_RPFILTER_VALID_MARK ? skb->mark : 0;
++ if ((flags & XT_RPFILTER_LOOSE) == 0)
++ fl6.flowi6_oif = dev->ifindex;
+
+ rt = (void *)ip6_route_lookup(net, &fl6, skb, lookup_flags);
+ if (rt->dst.error)
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index 40e744572283..32b7896929f3 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -208,7 +208,8 @@ nft_do_chain(struct nft_pktinfo *pkt, void *priv)
+
+ switch (regs.verdict.code) {
+ case NFT_JUMP:
+- BUG_ON(stackptr >= NFT_JUMP_STACK_SIZE);
++ if (WARN_ON_ONCE(stackptr >= NFT_JUMP_STACK_SIZE))
++ return NF_DROP;
+ jumpstack[stackptr].chain = chain;
+ jumpstack[stackptr].rule = rule;
+ jumpstack[stackptr].rulenum = rulenum;
+diff --git a/net/netfilter/xt_connmark.c b/net/netfilter/xt_connmark.c
+index 94df000abb92..29c38aa7f726 100644
+--- a/net/netfilter/xt_connmark.c
++++ b/net/netfilter/xt_connmark.c
+@@ -211,7 +211,7 @@ static int __init connmark_mt_init(void)
+ static void __exit connmark_mt_exit(void)
+ {
+ xt_unregister_match(&connmark_mt_reg);
+- xt_unregister_target(connmark_tg_reg);
++ xt_unregister_targets(connmark_tg_reg, ARRAY_SIZE(connmark_tg_reg));
+ }
+
+ module_init(connmark_mt_init);
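The nft_do_chain hunk above swaps a hard BUG_ON() for WARN_ON_ONCE() plus an NF_DROP return, so a jump-stack overflow now logs a warning and drops the packet instead of halting the kernel. A minimal standalone sketch of that hardening pattern, using an invented bounded stack and a warn_on_once() helper (illustrative C, not kernel code):

#include <stdio.h>

#define JUMP_STACK_SIZE 16 /* stand-in for NFT_JUMP_STACK_SIZE */

enum verdict { VERDICT_OK, VERDICT_DROP };

static int stackptr;
static int jumpstack[JUMP_STACK_SIZE];

/* mimic WARN_ON_ONCE(): report the condition once, then keep running */
static int warn_on_once(int cond, const char *what)
{
    static int warned;
    if (cond && !warned) {
        warned = 1;
        fprintf(stderr, "WARNING (once): %s\n", what);
    }
    return cond;
}

static enum verdict do_jump(int chain)
{
    /* old behavior: BUG_ON(stackptr >= JUMP_STACK_SIZE) crashed outright;
     * new behavior: warn once and fail the operation gracefully */
    if (warn_on_once(stackptr >= JUMP_STACK_SIZE, "jump stack overflow"))
        return VERDICT_DROP;
    jumpstack[stackptr++] = chain;
    return VERDICT_OK;
}

int main(void)
{
    for (int i = 0; i < JUMP_STACK_SIZE + 2; i++)
        if (do_jump(i) == VERDICT_DROP)
            printf("jump %d dropped instead of crashing\n", i);
    return 0;
}

The point of the pattern is that an attacker-reachable invariant failure degrades to a dropped packet rather than a full kernel panic.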
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-07-03 13:36 Mike Pagano
From: Mike Pagano @ 2018-07-03 13:36 UTC
To: gentoo-commits
commit: ebd54cc8a3c2fb541c454a7e6a288bf9b27fb03a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 3 13:35:39 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jul 3 13:35:39 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ebd54cc8
Removal of redundant patch: 1800_iommu-amd-dma-direct-revert.patch
0000_README | 4 -
1800_iommu-amd-dma-direct-revert.patch | 164 ---------------------------------
2 files changed, 168 deletions(-)
diff --git a/0000_README b/0000_README
index f45eebe..76ef096 100644
--- a/0000_README
+++ b/0000_README
@@ -59,10 +59,6 @@ Patch: 1003_linux-4.17.4.patch
From: http://www.kernel.org
Desc: Linux 4.17.4
-Patch: 1800_iommu-amd-dma-direct-revert.patch
-From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=e16c4790de39dc861b749674c2a9319507f6f64f
-Desc: Revert iommu/amd_iommu: Use CONFIG_DMA_DIRECT_OPS=y and dma_direct_{alloc,free}(). See bug #658538.
-
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1800_iommu-amd-dma-direct-revert.patch b/1800_iommu-amd-dma-direct-revert.patch
deleted file mode 100644
index a78fa02..0000000
--- a/1800_iommu-amd-dma-direct-revert.patch
+++ /dev/null
@@ -1,164 +0,0 @@
-From e16c4790de39dc861b749674c2a9319507f6f64f Mon Sep 17 00:00:00 2001
-From: Linus Torvalds <torvalds@linux-foundation.org>
-Date: Mon, 11 Jun 2018 12:22:12 -0700
-Subject: Revert "iommu/amd_iommu: Use CONFIG_DMA_DIRECT_OPS=y and
- dma_direct_{alloc,free}()"
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-This reverts commit b468620f2a1dfdcfddfd6fa54367b8bcc1b51248.
-
-It turns out that this broke drm on AMD platforms. Quoting Gabriel C:
- "I can confirm reverting b468620f2a1dfdcfddfd6fa54367b8bcc1b51248 fixes
- that issue for me.
-
- The GPU is working fine with SME enabled.
-
- Now with working GPU :) I can also confirm performance is back to
- normal without doing any other workarounds"
-
-Christian König analyzed it partially:
- "As far as I analyzed it we now get an -ENOMEM from dma_alloc_attrs()
- in drivers/gpu/drm/ttm/ttm_page_alloc_dma.c when IOMMU is enabled"
-
-and Christoph Hellwig responded:
- "I think the prime issue is that dma_direct_alloc respects the dma
- mask. Which we don't need if actually using the iommu. This would be
- mostly harmless except for the SEV bit high in the address that
- makes the checks fail.
-
- For now I'd say revert this commit for 4.17/4.18-rc and I'll look into
- addressing these issues properly"
-
-Reported-and-bisected-by: Gabriel C <nix.or.die@gmail.com>
-Acked-by: Christoph Hellwig <hch@lst.de>
-Cc: Christian König <christian.koenig@amd.com>
-Cc: Michel Dänzer <michel.daenzer@amd.com>
-Cc: Joerg Roedel <jroedel@suse.de>
-Cc: Tom Lendacky <thomas.lendacky@amd.com>
-Cc: Andrew Morton <akpm@linux-foundation.org>
-Cc: stable@kernel.org # v4.17
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
----
- drivers/iommu/Kconfig | 1 -
- drivers/iommu/amd_iommu.c | 68 ++++++++++++++++++++++++++++++++---------------
- 2 files changed, 47 insertions(+), 22 deletions(-)
-
-diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
-index 8ea77ef..e055d22 100644
---- a/drivers/iommu/Kconfig
-+++ b/drivers/iommu/Kconfig
-@@ -107,7 +107,6 @@ config IOMMU_PGTABLES_L2
- # AMD IOMMU support
- config AMD_IOMMU
- bool "AMD IOMMU support"
-- select DMA_DIRECT_OPS
- select SWIOTLB
- select PCI_MSI
- select PCI_ATS
-diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
-index 0cea80be..596b95c 100644
---- a/drivers/iommu/amd_iommu.c
-+++ b/drivers/iommu/amd_iommu.c
-@@ -2596,32 +2596,51 @@ static void *alloc_coherent(struct device *dev, size_t size,
- unsigned long attrs)
- {
- u64 dma_mask = dev->coherent_dma_mask;
-- struct protection_domain *domain = get_domain(dev);
-- bool is_direct = false;
-- void *virt_addr;
-+ struct protection_domain *domain;
-+ struct dma_ops_domain *dma_dom;
-+ struct page *page;
-+
-+ domain = get_domain(dev);
-+ if (PTR_ERR(domain) == -EINVAL) {
-+ page = alloc_pages(flag, get_order(size));
-+ *dma_addr = page_to_phys(page);
-+ return page_address(page);
-+ } else if (IS_ERR(domain))
-+ return NULL;
-
-- if (IS_ERR(domain)) {
-- if (PTR_ERR(domain) != -EINVAL)
-+ dma_dom = to_dma_ops_domain(domain);
-+ size = PAGE_ALIGN(size);
-+ dma_mask = dev->coherent_dma_mask;
-+ flag &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
-+ flag |= __GFP_ZERO;
-+
-+ page = alloc_pages(flag | __GFP_NOWARN, get_order(size));
-+ if (!page) {
-+ if (!gfpflags_allow_blocking(flag))
- return NULL;
-- is_direct = true;
-- }
-
-- virt_addr = dma_direct_alloc(dev, size, dma_addr, flag, attrs);
-- if (!virt_addr || is_direct)
-- return virt_addr;
-+ page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
-+ get_order(size), flag);
-+ if (!page)
-+ return NULL;
-+ }
-
- if (!dma_mask)
- dma_mask = *dev->dma_mask;
-
-- *dma_addr = __map_single(dev, to_dma_ops_domain(domain),
-- virt_to_phys(virt_addr), PAGE_ALIGN(size),
-- DMA_BIDIRECTIONAL, dma_mask);
-+ *dma_addr = __map_single(dev, dma_dom, page_to_phys(page),
-+ size, DMA_BIDIRECTIONAL, dma_mask);
-+
- if (*dma_addr == AMD_IOMMU_MAPPING_ERROR)
- goto out_free;
-- return virt_addr;
-+
-+ return page_address(page);
-
- out_free:
-- dma_direct_free(dev, size, virt_addr, *dma_addr, attrs);
-+
-+ if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
-+ __free_pages(page, get_order(size));
-+
- return NULL;
- }
-
-@@ -2632,17 +2651,24 @@ static void free_coherent(struct device *dev, size_t size,
- void *virt_addr, dma_addr_t dma_addr,
- unsigned long attrs)
- {
-- struct protection_domain *domain = get_domain(dev);
-+ struct protection_domain *domain;
-+ struct dma_ops_domain *dma_dom;
-+ struct page *page;
-
-+ page = virt_to_page(virt_addr);
- size = PAGE_ALIGN(size);
-
-- if (!IS_ERR(domain)) {
-- struct dma_ops_domain *dma_dom = to_dma_ops_domain(domain);
-+ domain = get_domain(dev);
-+ if (IS_ERR(domain))
-+ goto free_mem;
-
-- __unmap_single(dma_dom, dma_addr, size, DMA_BIDIRECTIONAL);
-- }
-+ dma_dom = to_dma_ops_domain(domain);
-+
-+ __unmap_single(dma_dom, dma_addr, size, DMA_BIDIRECTIONAL);
-
-- dma_direct_free(dev, size, virt_addr, dma_addr, attrs);
-+free_mem:
-+ if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
-+ __free_pages(page, get_order(size));
- }
-
- /*
---
-cgit v1.1
-
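Christoph Hellwig's analysis quoted above boils down to an address-range check: dma_direct_alloc() honors the device's DMA mask, which an IOMMU-backed mapping does not need, and with SME/SEV the encryption bit sits high in the physical address, so the masked comparison fails and the allocation errors out. A standalone sketch of that failing check, with invented bit positions and mask values (illustrative C, not kernel code):

#include <stdint.h>
#include <stdio.h>

#define SEV_C_BIT   (1ULL << 47)         /* hypothetical encryption bit, high in the address */
#define DMA_MASK_32 ((1ULL << 32) - 1)   /* a 32-bit-capable device's DMA mask */

/* a mask-respecting allocator accepts only addresses that fit under the mask */
static int dma_addr_ok(uint64_t addr, uint64_t mask)
{
    return (addr & ~mask) == 0;
}

int main(void)
{
    uint64_t buf = 0x1000000; /* comfortably below 4 GiB: fine on its own */

    printf("plain address: %s\n",
           dma_addr_ok(buf, DMA_MASK_32) ? "ok" : "fail");
    /* with SME/SEV the encryption bit is part of the physical address ... */
    printf("with SEV bit:  %s\n",
           dma_addr_ok(buf | SEV_C_BIT, DMA_MASK_32) ? "ok" : "fail");
    /* ... so the mask check rejects memory the IOMMU could happily remap */
    return 0;
}

That mismatch is why the revert restores the page-based allocation path for IOMMU-mapped devices instead of funneling everything through dma_direct_alloc().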
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-07-03 13:19 Mike Pagano
From: Mike Pagano @ 2018-07-03 13:19 UTC
To: gentoo-commits
commit: 59757de106df0c4e1e658baa34090609c99946b5
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 3 13:18:56 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jul 3 13:18:56 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=59757de1
Linux patch 4.17.4
0000_README | 4 +
1003_linux-4.17.4.patch | 6875 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6879 insertions(+)
diff --git a/0000_README b/0000_README
index af14401..f45eebe 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-4.17.3.patch
From: http://www.kernel.org
Desc: Linux 4.17.3
+Patch: 1003_linux-4.17.4.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.4
+
Patch: 1800_iommu-amd-dma-direct-revert.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=e16c4790de39dc861b749674c2a9319507f6f64f
Desc: Revert iommu/amd_iommu: Use CONFIG_DMA_DIRECT_OPS=y and dma_direct_{alloc,free}(). See bug #658538.
diff --git a/1003_linux-4.17.4.patch b/1003_linux-4.17.4.patch
new file mode 100644
index 0000000..76692ff
--- /dev/null
+++ b/1003_linux-4.17.4.patch
@@ -0,0 +1,6875 @@
+diff --git a/Documentation/ABI/testing/sysfs-class-cxl b/Documentation/ABI/testing/sysfs-class-cxl
+index 8e69345c37cc..bbbabffc682a 100644
+--- a/Documentation/ABI/testing/sysfs-class-cxl
++++ b/Documentation/ABI/testing/sysfs-class-cxl
+@@ -69,7 +69,9 @@ Date: September 2014
+ Contact: linuxppc-dev@lists.ozlabs.org
+ Description: read/write
+ Set the mode for prefaulting in segments into the segment table
+- when performing the START_WORK ioctl. Possible values:
++ when performing the START_WORK ioctl. Only applicable when
++ running under hashed page table mmu.
++ Possible values:
+ none: No prefaulting (default)
+ work_element_descriptor: Treat the work element
+ descriptor as an effective address and
+diff --git a/Documentation/core-api/printk-formats.rst b/Documentation/core-api/printk-formats.rst
+index eb30efdd2e78..25dc591cb110 100644
+--- a/Documentation/core-api/printk-formats.rst
++++ b/Documentation/core-api/printk-formats.rst
+@@ -419,11 +419,10 @@ struct clk
+
+ %pC pll1
+ %pCn pll1
+- %pCr 1560000000
+
+ For printing struct clk structures. %pC and %pCn print the name
+ (Common Clock Framework) or address (legacy clock framework) of the
+-structure; %pCr prints the current clock rate.
++structure.
+
+ Passed by reference.
+
+diff --git a/Makefile b/Makefile
+index 31dc3a08295a..1d740dbe676d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/arm/boot/dts/mt7623.dtsi b/arch/arm/boot/dts/mt7623.dtsi
+index e10c03496524..a115575b38bf 100644
+--- a/arch/arm/boot/dts/mt7623.dtsi
++++ b/arch/arm/boot/dts/mt7623.dtsi
+@@ -22,11 +22,12 @@
+ #include <dt-bindings/phy/phy.h>
+ #include <dt-bindings/reset/mt2701-resets.h>
+ #include <dt-bindings/thermal/thermal.h>
+-#include "skeleton64.dtsi"
+
+ / {
+ compatible = "mediatek,mt7623";
+ interrupt-parent = <&sysirq>;
++ #address-cells = <2>;
++ #size-cells = <2>;
+
+ cpu_opp_table: opp-table {
+ compatible = "operating-points-v2";
+diff --git a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
+index bbf56f855e46..5938e4c79deb 100644
+--- a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
++++ b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
+@@ -109,6 +109,7 @@
+ };
+
+ memory@80000000 {
++ device_type = "memory";
+ reg = <0 0x80000000 0 0x40000000>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/mt7623n-rfb.dtsi b/arch/arm/boot/dts/mt7623n-rfb.dtsi
+index 256c5fd947bf..43c9d7ca23a0 100644
+--- a/arch/arm/boot/dts/mt7623n-rfb.dtsi
++++ b/arch/arm/boot/dts/mt7623n-rfb.dtsi
+@@ -47,6 +47,7 @@
+ };
+
+ memory@80000000 {
++ device_type = "memory";
+ reg = <0 0x80000000 0 0x40000000>;
+ };
+
+diff --git a/arch/arm/boot/dts/socfpga.dtsi b/arch/arm/boot/dts/socfpga.dtsi
+index 486d4e7433ed..b38f8c240558 100644
+--- a/arch/arm/boot/dts/socfpga.dtsi
++++ b/arch/arm/boot/dts/socfpga.dtsi
+@@ -748,13 +748,13 @@
+ nand0: nand@ff900000 {
+ #address-cells = <0x1>;
+ #size-cells = <0x1>;
+- compatible = "denali,denali-nand-dt";
++ compatible = "altr,socfpga-denali-nand";
+ reg = <0xff900000 0x100000>,
+ <0xffb80000 0x10000>;
+ reg-names = "nand_data", "denali_reg";
+ interrupts = <0x0 0x90 0x4>;
+ dma-mask = <0xffffffff>;
+- clocks = <&nand_clk>;
++ clocks = <&nand_x_clk>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
+index bead79e4b2aa..791ca15c799e 100644
+--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
+@@ -593,8 +593,7 @@
+ #size-cells = <0>;
+ reg = <0xffda5000 0x100>;
+ interrupts = <0 102 4>;
+- num-chipselect = <4>;
+- bus-num = <0>;
++ num-cs = <4>;
+ /*32bit_access;*/
+ tx-dma-channel = <&pdma 16>;
+ rx-dma-channel = <&pdma 17>;
+@@ -633,7 +632,7 @@
+ nand: nand@ffb90000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+- compatible = "denali,denali-nand-dt", "altr,socfpga-denali-nand";
++ compatible = "altr,socfpga-denali-nand";
+ reg = <0xffb90000 0x72000>,
+ <0xffb80000 0x10000>;
+ reg-names = "nand_data", "denali_reg";
+diff --git a/arch/arm/boot/dts/sun8i-h3-libretech-all-h3-cc.dts b/arch/arm/boot/dts/sun8i-h3-libretech-all-h3-cc.dts
+index b20a710da7bc..7a4fca36c673 100644
+--- a/arch/arm/boot/dts/sun8i-h3-libretech-all-h3-cc.dts
++++ b/arch/arm/boot/dts/sun8i-h3-libretech-all-h3-cc.dts
+@@ -62,8 +62,8 @@
+ reg_vcc1v2: vcc1v2 {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc1v2";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
++ regulator-min-microvolt = <1200000>;
++ regulator-max-microvolt = <1200000>;
+ regulator-always-on;
+ regulator-boot-on;
+ vin-supply = <®_vcc5v0>;
+@@ -113,8 +113,8 @@
+ reg_vdd_cpux: vdd-cpux {
+ compatible = "regulator-fixed";
+ regulator-name = "vdd-cpux";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
++ regulator-min-microvolt = <1200000>;
++ regulator-max-microvolt = <1200000>;
+ regulator-always-on;
+ regulator-boot-on;
+ vin-supply = <®_vcc5v0>;
+diff --git a/arch/arm/include/asm/kgdb.h b/arch/arm/include/asm/kgdb.h
+index 3b73fdcf3627..8de1100d1067 100644
+--- a/arch/arm/include/asm/kgdb.h
++++ b/arch/arm/include/asm/kgdb.h
+@@ -77,7 +77,7 @@ extern int kgdb_fault_expected;
+
+ #define KGDB_MAX_NO_CPUS 1
+ #define BUFMAX 400
+-#define NUMREGBYTES (DBG_MAX_REG_NUM << 2)
++#define NUMREGBYTES (GDB_MAX_REGS << 2)
+ #define NUMCRITREGBYTES (32 << 2)
+
+ #define _R0 0
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+index c89d0c307f8d..2c63e60754c5 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+@@ -252,8 +252,7 @@
+ interrupts = <0 99 4>;
+ resets = <&rst SPIM0_RESET>;
+ reg-io-width = <4>;
+- num-chipselect = <4>;
+- bus-num = <0>;
++ num-cs = <4>;
+ status = "disabled";
+ };
+
+@@ -265,8 +264,7 @@
+ interrupts = <0 100 4>;
+ resets = <&rst SPIM1_RESET>;
+ reg-io-width = <4>;
+- num-chipselect = <4>;
+- bus-num = <0>;
++ num-cs = <4>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+index 3c31e21cbed7..69693977fe07 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+@@ -35,6 +35,12 @@
+ no-map;
+ };
+
++ /* Alternate 3 MiB reserved for ARM Trusted Firmware (BL31) */
++ secmon_reserved_alt: secmon@5000000 {
++ reg = <0x0 0x05000000 0x0 0x300000>;
++ no-map;
++ };
++
+ linux,cma {
+ compatible = "shared-dma-pool";
+ reusable;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
+index 3e3eb31748a3..f63bceb88caa 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
+@@ -234,9 +234,6 @@
+
+ bus-width = <4>;
+ cap-sd-highspeed;
+- sd-uhs-sdr12;
+- sd-uhs-sdr25;
+- sd-uhs-sdr50;
+ max-frequency = <100000000>;
+ disable-wp;
+
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+index dba365ed4bd5..33c15f2a949e 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+@@ -13,14 +13,6 @@
+ / {
+ compatible = "amlogic,meson-gxl";
+
+- reserved-memory {
+- /* Alternate 3 MiB reserved for ARM Trusted Firmware (BL31) */
+- secmon_reserved_alt: secmon@5000000 {
+- reg = <0x0 0x05000000 0x0 0x300000>;
+- no-map;
+- };
+- };
+-
+ soc {
+ usb0: usb@c9000000 {
+ status = "disabled";
+diff --git a/arch/arm64/boot/dts/marvell/armada-cp110.dtsi b/arch/arm64/boot/dts/marvell/armada-cp110.dtsi
+index ed2f1237ea1e..8259b32f0ced 100644
+--- a/arch/arm64/boot/dts/marvell/armada-cp110.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-cp110.dtsi
+@@ -149,7 +149,7 @@
+
+ CP110_LABEL(icu): interrupt-controller@1e0000 {
+ compatible = "marvell,cp110-icu";
+- reg = <0x1e0000 0x10>;
++ reg = <0x1e0000 0x440>;
+ #interrupt-cells = <3>;
+ interrupt-controller;
+ msi-parent = <&gicp>;
+diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
+index 253188fb8cb0..e3e50950a863 100644
+--- a/arch/arm64/crypto/aes-glue.c
++++ b/arch/arm64/crypto/aes-glue.c
+@@ -223,8 +223,8 @@ static int ctr_encrypt(struct skcipher_request *req)
+ kernel_neon_begin();
+ aes_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
+ (u8 *)ctx->key_enc, rounds, blocks, walk.iv);
+- err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+ kernel_neon_end();
++ err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+ }
+ if (walk.nbytes) {
+ u8 __aligned(8) tail[AES_BLOCK_SIZE];
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 9d1b06d67c53..df0bd090f0e4 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -937,7 +937,7 @@ static int __init parse_kpti(char *str)
+ __kpti_forced = enabled ? 1 : -1;
+ return 0;
+ }
+-__setup("kpti=", parse_kpti);
++early_param("kpti", parse_kpti);
+ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+
+ #ifdef CONFIG_ARM64_HW_AFDBM
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index 154b7d30145d..f21209064041 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -830,11 +830,12 @@ static void do_signal(struct pt_regs *regs)
+ unsigned long continue_addr = 0, restart_addr = 0;
+ int retval = 0;
+ struct ksignal ksig;
++ bool syscall = in_syscall(regs);
+
+ /*
+ * If we were from a system call, check for system call restarting...
+ */
+- if (in_syscall(regs)) {
++ if (syscall) {
+ continue_addr = regs->pc;
+ restart_addr = continue_addr - (compat_thumb_mode(regs) ? 2 : 4);
+ retval = regs->regs[0];
+@@ -886,7 +887,7 @@ static void do_signal(struct pt_regs *regs)
+ * Handle restarting a different system call. As above, if a debugger
+ * has chosen to restart at a different PC, ignore the restart.
+ */
+- if (in_syscall(regs) && regs->pc == restart_addr) {
++ if (syscall && regs->pc == restart_addr) {
+ if (retval == -ERESTART_RESTARTBLOCK)
+ setup_restart_syscall(regs);
+ user_rewind_single_step(current);
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 5f9a73a4452c..03646e6a2ef4 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -217,8 +217,9 @@ ENDPROC(idmap_cpu_replace_ttbr1)
+
+ .macro __idmap_kpti_put_pgtable_ent_ng, type
+ orr \type, \type, #PTE_NG // Same bit for blocks and pages
+- str \type, [cur_\()\type\()p] // Update the entry and ensure it
+- dc civac, cur_\()\type\()p // is visible to all CPUs.
++ str \type, [cur_\()\type\()p] // Update the entry and ensure
++ dmb sy // that it is visible to all
++ dc civac, cur_\()\type\()p // CPUs.
+ .endm
+
+ /*
+diff --git a/arch/m68k/mac/config.c b/arch/m68k/mac/config.c
+index 0c3275aa0197..e522307db47c 100644
+--- a/arch/m68k/mac/config.c
++++ b/arch/m68k/mac/config.c
+@@ -1005,7 +1005,7 @@ int __init mac_platform_init(void)
+ struct resource swim_rsrc = {
+ .flags = IORESOURCE_MEM,
+ .start = (resource_size_t)swim_base,
+- .end = (resource_size_t)swim_base + 0x2000,
++ .end = (resource_size_t)swim_base + 0x1FFF,
+ };
+
+ platform_device_register_simple("swim", -1, &swim_rsrc, 1);
+diff --git a/arch/m68k/mm/kmap.c b/arch/m68k/mm/kmap.c
+index c2a38321c96d..3b420f6d8822 100644
+--- a/arch/m68k/mm/kmap.c
++++ b/arch/m68k/mm/kmap.c
+@@ -89,7 +89,8 @@ static inline void free_io_area(void *addr)
+ for (p = &iolist ; (tmp = *p) ; p = &tmp->next) {
+ if (tmp->addr == addr) {
+ *p = tmp->next;
+- __iounmap(tmp->addr, tmp->size);
++ /* remove gap added in get_io_area() */
++ __iounmap(tmp->addr, tmp->size - IO_SIZE);
+ kfree(tmp);
+ return;
+ }
+diff --git a/arch/mips/ath79/mach-pb44.c b/arch/mips/ath79/mach-pb44.c
+index 6b2c6f3baefa..75fb96ca61db 100644
+--- a/arch/mips/ath79/mach-pb44.c
++++ b/arch/mips/ath79/mach-pb44.c
+@@ -34,7 +34,7 @@
+ #define PB44_KEYS_DEBOUNCE_INTERVAL (3 * PB44_KEYS_POLL_INTERVAL)
+
+ static struct gpiod_lookup_table pb44_i2c_gpiod_table = {
+- .dev_id = "i2c-gpio",
++ .dev_id = "i2c-gpio.0",
+ .table = {
+ GPIO_LOOKUP_IDX("ath79-gpio", PB44_GPIO_I2C_SDA,
+ NULL, 0, GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
+diff --git a/arch/mips/bcm47xx/setup.c b/arch/mips/bcm47xx/setup.c
+index 6054d49e608e..8c9cbf13d32a 100644
+--- a/arch/mips/bcm47xx/setup.c
++++ b/arch/mips/bcm47xx/setup.c
+@@ -212,6 +212,12 @@ static int __init bcm47xx_cpu_fixes(void)
+ */
+ if (bcm47xx_bus.bcma.bus.chipinfo.id == BCMA_CHIP_ID_BCM4706)
+ cpu_wait = NULL;
++
++ /*
++ * BCM47XX Erratum "R10: PCIe Transactions Periodically Fail"
++ * Enable ExternalSync for sync instruction to take effect
++ */
++ set_c0_config7(MIPS_CONF7_ES);
+ break;
+ #endif
+ }
+diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h
+index a7d0b836f2f7..cea8ad864b3f 100644
+--- a/arch/mips/include/asm/io.h
++++ b/arch/mips/include/asm/io.h
+@@ -414,6 +414,8 @@ static inline type pfx##in##bwlq##p(unsigned long port) \
+ __val = *__addr; \
+ slow; \
+ \
++ /* prevent prefetching of coherent DMA data prematurely */ \
++ rmb(); \
+ return pfx##ioswab##bwlq(__addr, __val); \
+ }
+
+diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
+index f65859784a4c..eeb131e2048e 100644
+--- a/arch/mips/include/asm/mipsregs.h
++++ b/arch/mips/include/asm/mipsregs.h
+@@ -681,6 +681,8 @@
+ #define MIPS_CONF7_WII (_ULCAST_(1) << 31)
+
+ #define MIPS_CONF7_RPS (_ULCAST_(1) << 2)
++/* ExternalSync */
++#define MIPS_CONF7_ES (_ULCAST_(1) << 8)
+
+ #define MIPS_CONF7_IAR (_ULCAST_(1) << 10)
+ #define MIPS_CONF7_AR (_ULCAST_(1) << 16)
+@@ -2760,6 +2762,7 @@ __BUILD_SET_C0(status)
+ __BUILD_SET_C0(cause)
+ __BUILD_SET_C0(config)
+ __BUILD_SET_C0(config5)
++__BUILD_SET_C0(config7)
+ __BUILD_SET_C0(intcontrol)
+ __BUILD_SET_C0(intctl)
+ __BUILD_SET_C0(srsmap)
+diff --git a/arch/mips/kernel/mcount.S b/arch/mips/kernel/mcount.S
+index f2ee7e1e3342..cff52b283e03 100644
+--- a/arch/mips/kernel/mcount.S
++++ b/arch/mips/kernel/mcount.S
+@@ -119,10 +119,20 @@ NESTED(_mcount, PT_SIZE, ra)
+ EXPORT_SYMBOL(_mcount)
+ PTR_LA t1, ftrace_stub
+ PTR_L t2, ftrace_trace_function /* Prepare t2 for (1) */
+- bne t1, t2, static_trace
++ beq t1, t2, fgraph_trace
+ nop
+
++ MCOUNT_SAVE_REGS
++
++ move a0, ra /* arg1: self return address */
++ jalr t2 /* (1) call *ftrace_trace_function */
++ move a1, AT /* arg2: parent's return address */
++
++ MCOUNT_RESTORE_REGS
++
++fgraph_trace:
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
++ PTR_LA t1, ftrace_stub
+ PTR_L t3, ftrace_graph_return
+ bne t1, t3, ftrace_graph_caller
+ nop
+@@ -131,24 +141,11 @@ EXPORT_SYMBOL(_mcount)
+ bne t1, t3, ftrace_graph_caller
+ nop
+ #endif
+- b ftrace_stub
+-#ifdef CONFIG_32BIT
+- addiu sp, sp, 8
+-#else
+- nop
+-#endif
+
+-static_trace:
+- MCOUNT_SAVE_REGS
+-
+- move a0, ra /* arg1: self return address */
+- jalr t2 /* (1) call *ftrace_trace_function */
+- move a1, AT /* arg2: parent's return address */
+-
+- MCOUNT_RESTORE_REGS
+ #ifdef CONFIG_32BIT
+ addiu sp, sp, 8
+ #endif
++
+ .globl ftrace_stub
+ ftrace_stub:
+ RETURN_BACK
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index 95813df90801..bb2523b4bd8f 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -251,6 +251,7 @@ cpu-as-$(CONFIG_4xx) += -Wa,-m405
+ cpu-as-$(CONFIG_ALTIVEC) += $(call as-option,-Wa$(comma)-maltivec)
+ cpu-as-$(CONFIG_E200) += -Wa,-me200
+ cpu-as-$(CONFIG_PPC_BOOK3S_64) += -Wa,-mpower4
++cpu-as-$(CONFIG_PPC_E500MC) += $(call as-option,-Wa$(comma)-me500mc)
+
+ KBUILD_AFLAGS += $(cpu-as-y)
+ KBUILD_CFLAGS += $(cpu-as-y)
+diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
+index c904477abaf3..d926100da914 100644
+--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
++++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
+@@ -711,7 +711,8 @@ static __init void cpufeatures_cpu_quirks(void)
+ cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST;
+ cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG;
+ cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
+- } else /* DD2.1 and up have DD2_1 */
++ } else if ((version & 0xffff0000) == 0x004e0000)
++ /* DD2.1 and up have DD2_1 */
+ cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
+
+ if ((version & 0xffff0000) == 0x004e0000) {
+diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
+index 51695608c68b..3d1af55e09dc 100644
+--- a/arch/powerpc/kernel/entry_64.S
++++ b/arch/powerpc/kernel/entry_64.S
+@@ -596,6 +596,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
+ * actually hit this code path.
+ */
+
++ isync
+ slbie r6
+ slbie r6 /* Workaround POWER5 < DD2.1 issue */
+ slbmte r7,r0
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index 3c2c2688918f..fe631022ea89 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -1155,6 +1155,9 @@ void fadump_cleanup(void)
+ init_fadump_mem_struct(&fdm,
+ be64_to_cpu(fdm_active->cpu_state_data.destination_address));
+ fadump_invalidate_dump(&fdm);
++ } else if (fw_dump.dump_registered) {
++ /* Un-register Firmware-assisted dump if it was registered. */
++ fadump_unregister_dump(&fdm);
+ }
+ }
+
+diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
+index 4c1012b80d3b..80547dad37da 100644
+--- a/arch/powerpc/kernel/hw_breakpoint.c
++++ b/arch/powerpc/kernel/hw_breakpoint.c
+@@ -178,8 +178,8 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
+ if (cpu_has_feature(CPU_FTR_DAWR)) {
+ length_max = 512 ; /* 64 doublewords */
+ /* DAWR region can't cross 512 boundary */
+- if ((bp->attr.bp_addr >> 10) !=
+- ((bp->attr.bp_addr + bp->attr.bp_len - 1) >> 10))
++ if ((bp->attr.bp_addr >> 9) !=
++ ((bp->attr.bp_addr + bp->attr.bp_len - 1) >> 9))
+ return -EINVAL;
+ }
+ if (info->len >
+diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
+index d23cf632edf0..0f63dd5972e9 100644
+--- a/arch/powerpc/kernel/ptrace.c
++++ b/arch/powerpc/kernel/ptrace.c
+@@ -2443,6 +2443,7 @@ static int ptrace_set_debugreg(struct task_struct *task, unsigned long addr,
+ /* Create a new breakpoint request if one doesn't exist already */
+ hw_breakpoint_init(&attr);
+ attr.bp_addr = hw_brk.address;
++ attr.bp_len = 8;
+ arch_bp_generic_fields(hw_brk.type,
+ &attr.bp_type);
+
+diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
+index 0eafdf01edc7..e6f500fabf5e 100644
+--- a/arch/powerpc/mm/pkeys.c
++++ b/arch/powerpc/mm/pkeys.c
+@@ -383,9 +383,9 @@ int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot,
+ {
+ /*
+ * If the currently associated pkey is execute-only, but the requested
+- * protection requires read or write, move it back to the default pkey.
++ * protection is not execute-only, move it back to the default pkey.
+ */
+- if (vma_is_pkey_exec_only(vma) && (prot & (PROT_READ | PROT_WRITE)))
++ if (vma_is_pkey_exec_only(vma) && (prot != PROT_EXEC))
+ return 0;
+
+ /*
+diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
+index a5d7309c2d05..465cb604b33a 100644
+--- a/arch/powerpc/mm/tlb-radix.c
++++ b/arch/powerpc/mm/tlb-radix.c
+@@ -733,6 +733,8 @@ extern void radix_kvm_prefetch_workaround(struct mm_struct *mm)
+ for (; sib <= cpu_last_thread_sibling(cpu) && !flush; sib++) {
+ if (sib == cpu)
+ continue;
++ if (!cpu_possible(sib))
++ continue;
+ if (paca_ptrs[sib]->kvm_hstate.kvm_vcpu)
+ flush = true;
+ }
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index d7532e7b9ab5..75fb23c24ee8 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -1146,7 +1146,7 @@ static int init_nest_pmu_ref(void)
+
+ static void cleanup_all_core_imc_memory(void)
+ {
+- int i, nr_cores = DIV_ROUND_UP(num_present_cpus(), threads_per_core);
++ int i, nr_cores = DIV_ROUND_UP(num_possible_cpus(), threads_per_core);
+ struct imc_mem_info *ptr = core_imc_pmu->mem_info;
+ int size = core_imc_pmu->counter_mem_size;
+
+@@ -1264,7 +1264,7 @@ static int imc_mem_init(struct imc_pmu *pmu_ptr, struct device_node *parent,
+ if (!pmu_ptr->pmu.name)
+ return -ENOMEM;
+
+- nr_cores = DIV_ROUND_UP(num_present_cpus(), threads_per_core);
++ nr_cores = DIV_ROUND_UP(num_possible_cpus(), threads_per_core);
+ pmu_ptr->mem_info = kcalloc(nr_cores, sizeof(struct imc_mem_info),
+ GFP_KERNEL);
+
+diff --git a/arch/powerpc/platforms/powernv/copy-paste.h b/arch/powerpc/platforms/powernv/copy-paste.h
+index c9a503623431..e9a6c35f8a29 100644
+--- a/arch/powerpc/platforms/powernv/copy-paste.h
++++ b/arch/powerpc/platforms/powernv/copy-paste.h
+@@ -42,5 +42,6 @@ static inline int vas_paste(void *paste_address, int offset)
+ : "b" (offset), "b" (paste_address)
+ : "memory", "cr0");
+
+- return (cr >> CR0_SHIFT) & CR0_MASK;
++ /* We mask with 0xE to ignore SO */
++ return (cr >> CR0_SHIFT) & 0xE;
+ }
+diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
+index 1f12ab1e6030..1c5d0675b43c 100644
+--- a/arch/powerpc/platforms/powernv/idle.c
++++ b/arch/powerpc/platforms/powernv/idle.c
+@@ -79,7 +79,7 @@ static int pnv_save_sprs_for_deep_states(void)
+ uint64_t msr_val = MSR_IDLE;
+ uint64_t psscr_val = pnv_deepest_stop_psscr_val;
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ uint64_t pir = get_hard_smp_processor_id(cpu);
+ uint64_t hsprg0_val = (uint64_t)paca_ptrs[cpu];
+
+@@ -814,7 +814,7 @@ static int __init pnv_init_idle_states(void)
+ int cpu;
+
+ pr_info("powernv: idle: Saving PACA pointers of all CPUs in their thread sibling PACA\n");
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ int base_cpu = cpu_first_thread_sibling(cpu);
+ int idx = cpu_thread_in_core(cpu);
+ int i;
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 3f9c69d7623a..f7d9b3433a29 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -3642,7 +3642,6 @@ static void pnv_pci_ioda2_release_pe_dma(struct pnv_ioda_pe *pe)
+ WARN_ON(pe->table_group.group);
+ }
+
+- pnv_pci_ioda2_table_free_pages(tbl);
+ iommu_tce_table_put(tbl);
+ }
+
+diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
+index 02168fe25105..cf9bf9b43ec3 100644
+--- a/arch/um/drivers/vector_kern.c
++++ b/arch/um/drivers/vector_kern.c
+@@ -188,7 +188,7 @@ static int get_transport_options(struct arglist *def)
+ if (strncmp(transport, TRANS_TAP, TRANS_TAP_LEN) == 0)
+ return (vec_rx | VECTOR_BPF);
+ if (strncmp(transport, TRANS_RAW, TRANS_RAW_LEN) == 0)
+- return (vec_rx | vec_tx);
++ return (vec_rx | vec_tx | VECTOR_QDISC_BYPASS);
+ return (vec_rx | vec_tx);
+ }
+
+@@ -504,15 +504,19 @@ static struct vector_queue *create_queue(
+
+ result = kmalloc(sizeof(struct vector_queue), GFP_KERNEL);
+ if (result == NULL)
+- goto out_fail;
++ return NULL;
+ result->max_depth = max_size;
+ result->dev = vp->dev;
+ result->mmsg_vector = kmalloc(
+ (sizeof(struct mmsghdr) * max_size), GFP_KERNEL);
++ if (result->mmsg_vector == NULL)
++ goto out_mmsg_fail;
+ result->skbuff_vector = kmalloc(
+ (sizeof(void *) * max_size), GFP_KERNEL);
+- if (result->mmsg_vector == NULL || result->skbuff_vector == NULL)
+- goto out_fail;
++ if (result->skbuff_vector == NULL)
++ goto out_skb_fail;
++
++ /* further failures can be handled safely by destroy_queue*/
+
+ mmsg_vector = result->mmsg_vector;
+ for (i = 0; i < max_size; i++) {
+@@ -563,6 +567,11 @@ static struct vector_queue *create_queue(
+ result->head = 0;
+ result->tail = 0;
+ return result;
++out_skb_fail:
++ kfree(result->mmsg_vector);
++out_mmsg_fail:
++ kfree(result);
++ return NULL;
+ out_fail:
+ destroy_queue(result);
+ return NULL;
+@@ -1232,9 +1241,8 @@ static int vector_net_open(struct net_device *dev)
+
+ if ((vp->options & VECTOR_QDISC_BYPASS) != 0) {
+ if (!uml_raw_enable_qdisc_bypass(vp->fds->rx_fd))
+- vp->options = vp->options | VECTOR_BPF;
++ vp->options |= VECTOR_BPF;
+ }
+-
+ if ((vp->options & VECTOR_BPF) != 0)
+ vp->bpf = uml_vector_default_bpf(vp->fds->rx_fd, dev->dev_addr);
+
+diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
+index 9de7f1e1dede..7d0df78db727 100644
+--- a/arch/x86/entry/entry_64_compat.S
++++ b/arch/x86/entry/entry_64_compat.S
+@@ -84,13 +84,13 @@ ENTRY(entry_SYSENTER_compat)
+ pushq %rdx /* pt_regs->dx */
+ pushq %rcx /* pt_regs->cx */
+ pushq $-ENOSYS /* pt_regs->ax */
+- pushq %r8 /* pt_regs->r8 */
++ pushq $0 /* pt_regs->r8 = 0 */
+ xorl %r8d, %r8d /* nospec r8 */
+- pushq %r9 /* pt_regs->r9 */
++ pushq $0 /* pt_regs->r9 = 0 */
+ xorl %r9d, %r9d /* nospec r9 */
+- pushq %r10 /* pt_regs->r10 */
++ pushq $0 /* pt_regs->r10 = 0 */
+ xorl %r10d, %r10d /* nospec r10 */
+- pushq %r11 /* pt_regs->r11 */
++ pushq $0 /* pt_regs->r11 = 0 */
+ xorl %r11d, %r11d /* nospec r11 */
+ pushq %rbx /* pt_regs->rbx */
+ xorl %ebx, %ebx /* nospec rbx */
+@@ -374,13 +374,13 @@ ENTRY(entry_INT80_compat)
+ pushq %rcx /* pt_regs->cx */
+ xorl %ecx, %ecx /* nospec cx */
+ pushq $-ENOSYS /* pt_regs->ax */
+- pushq $0 /* pt_regs->r8 = 0 */
++ pushq %r8 /* pt_regs->r8 */
+ xorl %r8d, %r8d /* nospec r8 */
+- pushq $0 /* pt_regs->r9 = 0 */
++ pushq %r9 /* pt_regs->r9 */
+ xorl %r9d, %r9d /* nospec r9 */
+- pushq $0 /* pt_regs->r10 = 0 */
++ pushq %r10 /* pt_regs->r10*/
+ xorl %r10d, %r10d /* nospec r10 */
+- pushq $0 /* pt_regs->r11 = 0 */
++ pushq %r11 /* pt_regs->r11 */
+ xorl %r11d, %r11d /* nospec r11 */
+ pushq %rbx /* pt_regs->rbx */
+ xorl %ebx, %ebx /* nospec rbx */
+diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
+index 042b5e892ed1..14de0432d288 100644
+--- a/arch/x86/include/asm/barrier.h
++++ b/arch/x86/include/asm/barrier.h
+@@ -38,7 +38,7 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
+ {
+ unsigned long mask;
+
+- asm ("cmp %1,%2; sbb %0,%0;"
++ asm volatile ("cmp %1,%2; sbb %0,%0;"
+ :"=r" (mask)
+ :"g"(size),"r" (index)
+ :"cc");
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index efaf2d4f9c3c..d492752f79e1 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -26,6 +26,7 @@
+ #include <linux/delay.h>
+ #include <linux/crash_dump.h>
+ #include <linux/reboot.h>
++#include <linux/memory.h>
+
+ #include <asm/uv/uv_mmrs.h>
+ #include <asm/uv/uv_hub.h>
+@@ -392,6 +393,51 @@ extern int uv_hub_info_version(void)
+ }
+ EXPORT_SYMBOL(uv_hub_info_version);
+
++/* Default UV memory block size is 2GB */
++static unsigned long mem_block_size = (2UL << 30);
++
++/* Kernel parameter to specify UV mem block size */
++static int parse_mem_block_size(char *ptr)
++{
++ unsigned long size = memparse(ptr, NULL);
++
++ /* Size will be rounded down by set_block_size() below */
++ mem_block_size = size;
++ return 0;
++}
++early_param("uv_memblksize", parse_mem_block_size);
++
++static __init int adj_blksize(u32 lgre)
++{
++ unsigned long base = (unsigned long)lgre << UV_GAM_RANGE_SHFT;
++ unsigned long size;
++
++ for (size = mem_block_size; size > MIN_MEMORY_BLOCK_SIZE; size >>= 1)
++ if (IS_ALIGNED(base, size))
++ break;
++
++ if (size >= mem_block_size)
++ return 0;
++
++ mem_block_size = size;
++ return 1;
++}
++
++static __init void set_block_size(void)
++{
++ unsigned int order = ffs(mem_block_size);
++
++ if (order) {
++ /* adjust for ffs return of 1..64 */
++ set_memory_block_size_order(order - 1);
++ pr_info("UV: mem_block_size set to 0x%lx\n", mem_block_size);
++ } else {
++ /* bad or zero value, default to 1UL << 31 (2GB) */
++ pr_err("UV: mem_block_size error with 0x%lx\n", mem_block_size);
++ set_memory_block_size_order(31);
++ }
++}
++
+ /* Build GAM range lookup table: */
+ static __init void build_uv_gr_table(void)
+ {
+@@ -1180,23 +1226,30 @@ static void __init decode_gam_rng_tbl(unsigned long ptr)
+ << UV_GAM_RANGE_SHFT);
+ int order = 0;
+ char suffix[] = " KMGTPE";
++ int flag = ' ';
+
+ while (size > 9999 && order < sizeof(suffix)) {
+ size /= 1024;
+ order++;
+ }
+
++ /* adjust max block size to current range start */
++ if (gre->type == 1 || gre->type == 2)
++ if (adj_blksize(lgre))
++ flag = '*';
++
+ if (!index) {
+ pr_info("UV: GAM Range Table...\n");
+- pr_info("UV: # %20s %14s %5s %4s %5s %3s %2s\n", "Range", "", "Size", "Type", "NASID", "SID", "PN");
++ pr_info("UV: # %20s %14s %6s %4s %5s %3s %2s\n", "Range", "", "Size", "Type", "NASID", "SID", "PN");
+ }
+- pr_info("UV: %2d: 0x%014lx-0x%014lx %5lu%c %3d %04x %02x %02x\n",
++ pr_info("UV: %2d: 0x%014lx-0x%014lx%c %5lu%c %3d %04x %02x %02x\n",
+ index++,
+ (unsigned long)lgre << UV_GAM_RANGE_SHFT,
+ (unsigned long)gre->limit << UV_GAM_RANGE_SHFT,
+- size, suffix[order],
++ flag, size, suffix[order],
+ gre->type, gre->nasid, gre->sockid, gre->pnode);
+
++ /* update to next range start */
+ lgre = gre->limit;
+ if (sock_min > gre->sockid)
+ sock_min = gre->sockid;
+@@ -1427,6 +1480,7 @@ static void __init uv_system_init_hub(void)
+
+ build_socket_tables();
+ build_uv_gr_table();
++ set_block_size();
+ uv_init_hub_info(&hub_info);
+ uv_possible_blades = num_possible_nodes();
+ if (!_node_to_pnode)
+diff --git a/arch/x86/kernel/cpu/mcheck/mce-severity.c b/arch/x86/kernel/cpu/mcheck/mce-severity.c
+index 5bbd06f38ff6..f34d89c01edc 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce-severity.c
++++ b/arch/x86/kernel/cpu/mcheck/mce-severity.c
+@@ -160,6 +160,11 @@ static struct severity {
+ SER, MASK(MCI_STATUS_OVER|MCI_UC_SAR|MCI_ADDR|MCACOD, MCI_UC_SAR|MCI_ADDR|MCACOD_INSTR),
+ USER
+ ),
++ MCESEV(
++ PANIC, "Data load in unrecoverable area of kernel",
++ SER, MASK(MCI_STATUS_OVER|MCI_UC_SAR|MCI_ADDR|MCACOD, MCI_UC_SAR|MCI_ADDR|MCACOD_DATA),
++ KERNEL
++ ),
+ #endif
+ MCESEV(
+ PANIC, "Action required: unknown MCACOD",
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 42cf2880d0ed..6f7eda9d5297 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -772,23 +772,25 @@ EXPORT_SYMBOL_GPL(machine_check_poll);
+ static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp,
+ struct pt_regs *regs)
+ {
+- int i, ret = 0;
+ char *tmp;
++ int i;
+
+ for (i = 0; i < mca_cfg.banks; i++) {
+ m->status = mce_rdmsrl(msr_ops.status(i));
+- if (m->status & MCI_STATUS_VAL) {
+- __set_bit(i, validp);
+- if (quirk_no_way_out)
+- quirk_no_way_out(i, m, regs);
+- }
++ if (!(m->status & MCI_STATUS_VAL))
++ continue;
++
++ __set_bit(i, validp);
++ if (quirk_no_way_out)
++ quirk_no_way_out(i, m, regs);
+
+ if (mce_severity(m, mca_cfg.tolerant, &tmp, true) >= MCE_PANIC_SEVERITY) {
++ mce_read_aux(m, i);
+ *msg = tmp;
+- ret = 1;
++ return 1;
+ }
+ }
+- return ret;
++ return 0;
+ }
+
+ /*
+@@ -1205,13 +1207,18 @@ void do_machine_check(struct pt_regs *regs, long error_code)
+ lmce = m.mcgstatus & MCG_STATUS_LMCES;
+
+ /*
++ * Local machine check may already know that we have to panic.
++ * Broadcast machine check begins rendezvous in mce_start()
+ * Go through all banks in exclusion of the other CPUs. This way we
+ * don't report duplicated events on shared banks because the first one
+- * to see it will clear it. If this is a Local MCE, then no need to
+- * perform rendezvous.
++ * to see it will clear it.
+ */
+- if (!lmce)
++ if (lmce) {
++ if (no_way_out)
++ mce_panic("Fatal local machine check", &m, msg);
++ } else {
+ order = mce_start(&no_way_out);
++ }
+
+ for (i = 0; i < cfg->banks; i++) {
+ __clear_bit(i, toclear);
+@@ -1287,12 +1294,17 @@ void do_machine_check(struct pt_regs *regs, long error_code)
+ no_way_out = worst >= MCE_PANIC_SEVERITY;
+ } else {
+ /*
+- * Local MCE skipped calling mce_reign()
+- * If we found a fatal error, we need to panic here.
++ * If there was a fatal machine check we should have
++ * already called mce_panic earlier in this function.
++ * Since we re-read the banks, we might have found
++ * something new. Check again to see if we found a
++ * fatal error. We call "mce_severity()" again to
++ * make sure we have the right "msg".
+ */
+- if (worst >= MCE_PANIC_SEVERITY && mca_cfg.tolerant < 3)
+- mce_panic("Machine check from unknown source",
+- NULL, NULL);
++ if (worst >= MCE_PANIC_SEVERITY && mca_cfg.tolerant < 3) {
++ mce_severity(&m, cfg->tolerant, &msg, true);
++ mce_panic("Local fatal machine check!", &m, msg);
++ }
+ }
+
+ /*
+diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
+index 6a2cb1442e05..aec38a170dbc 100644
+--- a/arch/x86/kernel/e820.c
++++ b/arch/x86/kernel/e820.c
+@@ -1246,6 +1246,7 @@ void __init e820__memblock_setup(void)
+ {
+ int i;
+ u64 end;
++ u64 addr = 0;
+
+ /*
+ * The bootstrap memblock region count maximum is 128 entries
+@@ -1262,13 +1263,21 @@ void __init e820__memblock_setup(void)
+ struct e820_entry *entry = &e820_table->entries[i];
+
+ end = entry->addr + entry->size;
++ if (addr < entry->addr)
++ memblock_reserve(addr, entry->addr - addr);
++ addr = end;
+ if (end != (resource_size_t)end)
+ continue;
+
++ /*
++ * all !E820_TYPE_RAM ranges (including gap ranges) are put
++ * into memblock.reserved to make sure that struct pages in
++ * such regions are not left uninitialized after bootup.
++ */
+ if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN)
+- continue;
+-
+- memblock_add(entry->addr, entry->size);
++ memblock_reserve(entry->addr, entry->size);
++ else
++ memblock_add(entry->addr, entry->size);
+ }
+
+ /* Throw away partial pages: */
+diff --git a/arch/x86/kernel/quirks.c b/arch/x86/kernel/quirks.c
+index 697a4ce04308..736348ead421 100644
+--- a/arch/x86/kernel/quirks.c
++++ b/arch/x86/kernel/quirks.c
+@@ -645,12 +645,19 @@ static void quirk_intel_brickland_xeon_ras_cap(struct pci_dev *pdev)
+ /* Skylake */
+ static void quirk_intel_purley_xeon_ras_cap(struct pci_dev *pdev)
+ {
+- u32 capid0;
++ u32 capid0, capid5;
+
+ pci_read_config_dword(pdev, 0x84, &capid0);
++ pci_read_config_dword(pdev, 0x98, &capid5);
+
+- if ((capid0 & 0xc0) == 0xc0)
++ /*
++ * CAPID0{7:6} indicate whether this is an advanced RAS SKU
++ * CAPID5{8:5} indicate that various NVDIMM usage modes are
++ * enabled, so memory machine check recovery is also enabled.
++ */
++ if ((capid0 & 0xc0) == 0xc0 || (capid5 & 0x1e0))
+ static_branch_inc(&mcsafe_key);
++
+ }
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0ec3, quirk_intel_brickland_xeon_ras_cap);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2fc0, quirk_intel_brickland_xeon_ras_cap);
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 03f3d7695dac..162a31d80ad5 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -834,16 +834,18 @@ static void math_error(struct pt_regs *regs, int error_code, int trapnr)
+ char *str = (trapnr == X86_TRAP_MF) ? "fpu exception" :
+ "simd exception";
+
+- if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, SIGFPE) == NOTIFY_STOP)
+- return;
+ cond_local_irq_enable(regs);
+
+ if (!user_mode(regs)) {
+- if (!fixup_exception(regs, trapnr)) {
+- task->thread.error_code = error_code;
+- task->thread.trap_nr = trapnr;
++ if (fixup_exception(regs, trapnr))
++ return;
++
++ task->thread.error_code = error_code;
++ task->thread.trap_nr = trapnr;
++
++ if (notify_die(DIE_TRAP, str, regs, error_code,
++ trapnr, SIGFPE) != NOTIFY_STOP)
+ die(str, regs, error_code);
+- }
+ return;
+ }
+
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index fec82b577c18..cee58a972cb2 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -706,7 +706,9 @@ void __init init_mem_mapping(void)
+ */
+ int devmem_is_allowed(unsigned long pagenr)
+ {
+- if (page_is_ram(pagenr)) {
++ if (region_intersects(PFN_PHYS(pagenr), PAGE_SIZE,
++ IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE)
++ != REGION_DISJOINT) {
+ /*
+ * For disallowed memory regions in the low 1MB range,
+ * request that the page be shown as all zeros.
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 0a400606dea0..20d8bf5fbceb 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -1350,16 +1350,28 @@ int kern_addr_valid(unsigned long addr)
+ /* Amount of ram needed to start using large blocks */
+ #define MEM_SIZE_FOR_LARGE_BLOCK (64UL << 30)
+
++/* Adjustable memory block size */
++static unsigned long set_memory_block_size;
++int __init set_memory_block_size_order(unsigned int order)
++{
++ unsigned long size = 1UL << order;
++
++ if (size > MEM_SIZE_FOR_LARGE_BLOCK || size < MIN_MEMORY_BLOCK_SIZE)
++ return -EINVAL;
++
++ set_memory_block_size = size;
++ return 0;
++}
++
+ static unsigned long probe_memory_block_size(void)
+ {
+ unsigned long boot_mem_end = max_pfn << PAGE_SHIFT;
+ unsigned long bz;
+
+- /* If this is UV system, always set 2G block size */
+- if (is_uv_system()) {
+- bz = MAX_BLOCK_SIZE;
++ /* If memory block size has been set, then use it */
++ bz = set_memory_block_size;
++ if (bz)
+ goto done;
+- }
+
+ /* Use regular block if RAM is smaller than MEM_SIZE_FOR_LARGE_BLOCK */
+ if (boot_mem_end < MEM_SIZE_FOR_LARGE_BLOCK) {
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index bed7e7f4e44c..84fbfaba8404 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -166,14 +166,14 @@ void __init efi_call_phys_epilog(pgd_t *save_pgd)
+ pgd = pgd_offset_k(pgd_idx * PGDIR_SIZE);
+ set_pgd(pgd_offset_k(pgd_idx * PGDIR_SIZE), save_pgd[pgd_idx]);
+
+- if (!(pgd_val(*pgd) & _PAGE_PRESENT))
++ if (!pgd_present(*pgd))
+ continue;
+
+ for (i = 0; i < PTRS_PER_P4D; i++) {
+ p4d = p4d_offset(pgd,
+ pgd_idx * PGDIR_SIZE + i * P4D_SIZE);
+
+- if (!(p4d_val(*p4d) & _PAGE_PRESENT))
++ if (!p4d_present(*p4d))
+ continue;
+
+ pud = (pud_t *)p4d_page_vaddr(*p4d);
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 2e20ae2fa2d6..e3b18ad49889 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -32,6 +32,7 @@
+ #include <xen/interface/vcpu.h>
+ #include <xen/interface/xenpmu.h>
+
++#include <asm/spec-ctrl.h>
+ #include <asm/xen/interface.h>
+ #include <asm/xen/hypercall.h>
+
+@@ -70,6 +71,8 @@ static void cpu_bringup(void)
+ cpu_data(cpu).x86_max_cores = 1;
+ set_cpu_sibling_map(cpu);
+
++ speculative_store_bypass_ht_init();
++
+ xen_setup_cpu_clockevents();
+
+ notify_cpu_starting(cpu);
+@@ -250,6 +253,8 @@ static void __init xen_pv_smp_prepare_cpus(unsigned int max_cpus)
+ }
+ set_cpu_sibling_map(0);
+
++ speculative_store_bypass_ht_init();
++
+ xen_pmu_init(0);
+
+ if (xen_smp_intr_init(0) || xen_smp_intr_init_pv(0))
+diff --git a/arch/xtensa/kernel/traps.c b/arch/xtensa/kernel/traps.c
+index 32c5207f1226..84a70b8cbe33 100644
+--- a/arch/xtensa/kernel/traps.c
++++ b/arch/xtensa/kernel/traps.c
+@@ -338,7 +338,7 @@ do_unaligned_user (struct pt_regs *regs)
+ info.si_errno = 0;
+ info.si_code = BUS_ADRALN;
+ info.si_addr = (void *) regs->excvaddr;
+- force_sig_info(SIGSEGV, &info, current);
++ force_sig_info(SIGBUS, &info, current);
+
+ }
+ #endif
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 85909b431eb0..b559b9d4f1a2 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -3487,6 +3487,10 @@ static void __blk_rq_prep_clone(struct request *dst, struct request *src)
+ dst->cpu = src->cpu;
+ dst->__sector = blk_rq_pos(src);
+ dst->__data_len = blk_rq_bytes(src);
++ if (src->rq_flags & RQF_SPECIAL_PAYLOAD) {
++ dst->rq_flags |= RQF_SPECIAL_PAYLOAD;
++ dst->special_vec = src->special_vec;
++ }
+ dst->nr_phys_segments = src->nr_phys_segments;
+ dst->ioprio = src->ioprio;
+ dst->extra_len = src->extra_len;
+diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
+index 7d81e6bb461a..b6cabac4b62b 100644
+--- a/crypto/asymmetric_keys/x509_cert_parser.c
++++ b/crypto/asymmetric_keys/x509_cert_parser.c
+@@ -249,6 +249,15 @@ int x509_note_signature(void *context, size_t hdrlen,
+ return -EINVAL;
+ }
+
++ if (strcmp(ctx->cert->sig->pkey_algo, "rsa") == 0) {
++ /* Discard the BIT STRING metadata */
++ if (vlen < 1 || *(const u8 *)value != 0)
++ return -EBADMSG;
++
++ value++;
++ vlen--;
++ }
++
+ ctx->cert->raw_sig = value;
+ ctx->cert->raw_sig_size = vlen;
+ return 0;
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index 2bcffec8dbf0..eb091375c873 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -22,6 +22,7 @@
+ #include <linux/pm_domain.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/pwm.h>
++#include <linux/suspend.h>
+ #include <linux/delay.h>
+
+ #include "internal.h"
+@@ -229,11 +230,13 @@ static const struct lpss_device_desc lpt_sdio_dev_desc = {
+
+ static const struct lpss_device_desc byt_pwm_dev_desc = {
+ .flags = LPSS_SAVE_CTX,
++ .prv_offset = 0x800,
+ .setup = byt_pwm_setup,
+ };
+
+ static const struct lpss_device_desc bsw_pwm_dev_desc = {
+ .flags = LPSS_SAVE_CTX | LPSS_NO_D3_DELAY,
++ .prv_offset = 0x800,
+ .setup = bsw_pwm_setup,
+ };
+
+@@ -940,9 +943,10 @@ static void lpss_iosf_exit_d3_state(void)
+ mutex_unlock(&lpss_iosf_mutex);
+ }
+
+-static int acpi_lpss_suspend(struct device *dev, bool wakeup)
++static int acpi_lpss_suspend(struct device *dev, bool runtime)
+ {
+ struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
++ bool wakeup = runtime || device_may_wakeup(dev);
+ int ret;
+
+ if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
+@@ -955,13 +959,14 @@ static int acpi_lpss_suspend(struct device *dev, bool wakeup)
+ * wrong status for devices being about to be powered off. See
+ * lpss_iosf_enter_d3_state() for further information.
+ */
+- if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
++ if ((runtime || !pm_suspend_via_firmware()) &&
++ lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
+ lpss_iosf_enter_d3_state();
+
+ return ret;
+ }
+
+-static int acpi_lpss_resume(struct device *dev)
++static int acpi_lpss_resume(struct device *dev, bool runtime)
+ {
+ struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+ int ret;
+@@ -970,7 +975,8 @@ static int acpi_lpss_resume(struct device *dev)
+ * This call is kept first to be in symmetry with
+ * acpi_lpss_runtime_suspend() one.
+ */
+- if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
++ if ((runtime || !pm_resume_via_firmware()) &&
++ lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
+ lpss_iosf_exit_d3_state();
+
+ ret = acpi_dev_resume(dev);
+@@ -994,12 +1000,12 @@ static int acpi_lpss_suspend_late(struct device *dev)
+ return 0;
+
+ ret = pm_generic_suspend_late(dev);
+- return ret ? ret : acpi_lpss_suspend(dev, device_may_wakeup(dev));
++ return ret ? ret : acpi_lpss_suspend(dev, false);
+ }
+
+ static int acpi_lpss_resume_early(struct device *dev)
+ {
+- int ret = acpi_lpss_resume(dev);
++ int ret = acpi_lpss_resume(dev, false);
+
+ return ret ? ret : pm_generic_resume_early(dev);
+ }
+@@ -1014,7 +1020,7 @@ static int acpi_lpss_runtime_suspend(struct device *dev)
+
+ static int acpi_lpss_runtime_resume(struct device *dev)
+ {
+- int ret = acpi_lpss_resume(dev);
++ int ret = acpi_lpss_resume(dev, true);
+
+ return ret ? ret : pm_generic_runtime_resume(dev);
+ }
+diff --git a/drivers/auxdisplay/Kconfig b/drivers/auxdisplay/Kconfig
+index 2c2ed9cf8796..f9413755177b 100644
+--- a/drivers/auxdisplay/Kconfig
++++ b/drivers/auxdisplay/Kconfig
+@@ -14,9 +14,6 @@ menuconfig AUXDISPLAY
+
+ If you say N, all options in this submenu will be skipped and disabled.
+
+-config CHARLCD
+- tristate "Character LCD core support" if COMPILE_TEST
+-
+ if AUXDISPLAY
+
+ config HD44780
+@@ -157,8 +154,6 @@ config HT16K33
+ Say yes here to add support for Holtek HT16K33, RAM mapping 16*8
+ LED controller driver with keyscan.
+
+-endif # AUXDISPLAY
+-
+ config ARM_CHARLCD
+ bool "ARM Ltd. Character LCD Driver"
+ depends on PLAT_VERSATILE
+@@ -169,6 +164,8 @@ config ARM_CHARLCD
+ line and the Linux version on the second line, but that's
+ still useful.
+
++endif # AUXDISPLAY
++
+ config PANEL
+ tristate "Parallel port LCD/Keypad Panel support"
+ depends on PARPORT
+@@ -448,3 +445,6 @@ config PANEL_BOOT_MESSAGE
+ printf()-formatted message is valid with newline and escape codes.
+
+ endif # PANEL
++
++config CHARLCD
++ tristate "Character LCD core support" if COMPILE_TEST
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index d680fd030316..f4ba878dd2dc 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -216,6 +216,13 @@ struct device_link *device_link_add(struct device *consumer,
+ link->rpm_active = true;
+ }
+ pm_runtime_new_link(consumer);
++ /*
++ * If the link is being added by the consumer driver at probe
++ * time, balance the decrementation of the supplier's runtime PM
++ * usage counter after consumer probe in driver_probe_device().
++ */
++ if (consumer->links.status == DL_DEV_PROBING)
++ pm_runtime_get_noresume(supplier);
+ }
+ get_device(supplier);
+ link->supplier = supplier;
+@@ -235,12 +242,12 @@ struct device_link *device_link_add(struct device *consumer,
+ switch (consumer->links.status) {
+ case DL_DEV_PROBING:
+ /*
+- * Balance the decrementation of the supplier's
+- * runtime PM usage counter after consumer probe
+- * in driver_probe_device().
++ * Some callers expect the link creation during
++ * consumer driver probe to resume the supplier
++ * even without DL_FLAG_RPM_ACTIVE.
+ */
+ if (flags & DL_FLAG_PM_RUNTIME)
+- pm_runtime_get_sync(supplier);
++ pm_runtime_resume(supplier);
+
+ link->status = DL_STATE_CONSUMER_PROBE;
+ break;
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 1ea0e2502e8e..ef6cf3d5d2b5 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -2246,6 +2246,9 @@ int genpd_dev_pm_attach(struct device *dev)
+ genpd_lock(pd);
+ ret = genpd_power_on(pd, 0);
+ genpd_unlock(pd);
++
++ if (ret)
++ genpd_remove_device(pd, dev);
+ out:
+ return ret ? -EPROBE_DEFER : 0;
+ }
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 33b36fea1d73..472afeed1d2f 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -3397,7 +3397,6 @@ static void cancel_tasks_sync(struct rbd_device *rbd_dev)
+ {
+ dout("%s rbd_dev %p\n", __func__, rbd_dev);
+
+- cancel_delayed_work_sync(&rbd_dev->watch_dwork);
+ cancel_work_sync(&rbd_dev->acquired_lock_work);
+ cancel_work_sync(&rbd_dev->released_lock_work);
+ cancel_delayed_work_sync(&rbd_dev->lock_dwork);
+@@ -3415,6 +3414,7 @@ static void rbd_unregister_watch(struct rbd_device *rbd_dev)
+ rbd_dev->watch_state = RBD_WATCH_STATE_UNREGISTERED;
+ mutex_unlock(&rbd_dev->watch_mutex);
+
++ cancel_delayed_work_sync(&rbd_dev->watch_dwork);
+ ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
+ }
+
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 05ec530b8a3a..330e9b29e145 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -935,6 +935,12 @@ static int qca_setup(struct hci_uart *hu)
+ } else if (ret == -ENOENT) {
+ /* No patch/nvm-config found, run with original fw/config */
+ ret = 0;
++ } else if (ret == -EAGAIN) {
++ /*
++ * Userspace firmware loader will return -EAGAIN in case no
++ * patch/nvm-config is found, so run with original fw/config.
++ */
++ ret = 0;
+ }
+
+ /* Setup bdaddr */
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index 91bb98c42a1c..aaf9e5afaad4 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -516,11 +516,18 @@ EXPORT_SYMBOL_GPL(hwrng_register);
+
+ void hwrng_unregister(struct hwrng *rng)
+ {
++ int err;
++
+ mutex_lock(&rng_mutex);
+
+ list_del(&rng->list);
+- if (current_rng == rng)
+- enable_best_rng();
++ if (current_rng == rng) {
++ err = enable_best_rng();
++ if (err) {
++ drop_current_rng();
++ cur_rng_set_by_user = 0;
++ }
++ }
+
+ if (list_empty(&rng_list)) {
+ mutex_unlock(&rng_mutex);
+diff --git a/drivers/char/ipmi/ipmi_bt_sm.c b/drivers/char/ipmi/ipmi_bt_sm.c
+index fd4ea8d87d4b..a3397664f800 100644
+--- a/drivers/char/ipmi/ipmi_bt_sm.c
++++ b/drivers/char/ipmi/ipmi_bt_sm.c
+@@ -504,11 +504,12 @@ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ if (status & BT_H_BUSY) /* clear a leftover H_BUSY */
+ BT_CONTROL(BT_H_BUSY);
+
++ bt->timeout = bt->BT_CAP_req2rsp;
++
+ /* Read BT capabilities if it hasn't been done yet */
+ if (!bt->BT_CAP_outreqs)
+ BT_STATE_CHANGE(BT_STATE_CAPABILITIES_BEGIN,
+ SI_SM_CALL_WITHOUT_DELAY);
+- bt->timeout = bt->BT_CAP_req2rsp;
+ BT_SI_SM_RETURN(SI_SM_IDLE);
+
+ case BT_STATE_XACTION_START:
+diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
+index 230b99288024..e4a04b2d3c32 100644
+--- a/drivers/char/tpm/tpm-dev-common.c
++++ b/drivers/char/tpm/tpm-dev-common.c
+@@ -37,7 +37,7 @@ static void timeout_work(struct work_struct *work)
+ struct file_priv *priv = container_of(work, struct file_priv, work);
+
+ mutex_lock(&priv->buffer_mutex);
+- atomic_set(&priv->data_pending, 0);
++ priv->data_pending = 0;
+ memset(priv->data_buffer, 0, sizeof(priv->data_buffer));
+ mutex_unlock(&priv->buffer_mutex);
+ }
+@@ -46,7 +46,6 @@ void tpm_common_open(struct file *file, struct tpm_chip *chip,
+ struct file_priv *priv)
+ {
+ priv->chip = chip;
+- atomic_set(&priv->data_pending, 0);
+ mutex_init(&priv->buffer_mutex);
+ timer_setup(&priv->user_read_timer, user_reader_timeout, 0);
+ INIT_WORK(&priv->work, timeout_work);
+@@ -58,29 +57,24 @@ ssize_t tpm_common_read(struct file *file, char __user *buf,
+ size_t size, loff_t *off)
+ {
+ struct file_priv *priv = file->private_data;
+- ssize_t ret_size;
+- ssize_t orig_ret_size;
++ ssize_t ret_size = 0;
+ int rc;
+
+ del_singleshot_timer_sync(&priv->user_read_timer);
+ flush_work(&priv->work);
+- ret_size = atomic_read(&priv->data_pending);
+- if (ret_size > 0) { /* relay data */
+- orig_ret_size = ret_size;
+- if (size < ret_size)
+- ret_size = size;
++ mutex_lock(&priv->buffer_mutex);
+
+- mutex_lock(&priv->buffer_mutex);
++ if (priv->data_pending) {
++ ret_size = min_t(ssize_t, size, priv->data_pending);
+ rc = copy_to_user(buf, priv->data_buffer, ret_size);
+- memset(priv->data_buffer, 0, orig_ret_size);
++ memset(priv->data_buffer, 0, priv->data_pending);
+ if (rc)
+ ret_size = -EFAULT;
+
+- mutex_unlock(&priv->buffer_mutex);
++ priv->data_pending = 0;
+ }
+
+- atomic_set(&priv->data_pending, 0);
+-
++ mutex_unlock(&priv->buffer_mutex);
+ return ret_size;
+ }
+
+@@ -91,17 +85,19 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
+ size_t in_size = size;
+ ssize_t out_size;
+
++ if (in_size > TPM_BUFSIZE)
++ return -E2BIG;
++
++ mutex_lock(&priv->buffer_mutex);
++
+ /* Cannot perform a write until the read has cleared either via
+ * tpm_read or a user_read_timer timeout. This also prevents split
+ * buffered writes from blocking here.
+ */
+- if (atomic_read(&priv->data_pending) != 0)
++ if (priv->data_pending != 0) {
++ mutex_unlock(&priv->buffer_mutex);
+ return -EBUSY;
+-
+- if (in_size > TPM_BUFSIZE)
+- return -E2BIG;
+-
+- mutex_lock(&priv->buffer_mutex);
++ }
+
+ if (copy_from_user
+ (priv->data_buffer, (void __user *) buf, in_size)) {
+@@ -132,7 +128,7 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
+ return out_size;
+ }
+
+- atomic_set(&priv->data_pending, out_size);
++ priv->data_pending = out_size;
+ mutex_unlock(&priv->buffer_mutex);
+
+ /* Set a timeout by which the reader must come claim the result */
+@@ -149,5 +145,5 @@ void tpm_common_release(struct file *file, struct file_priv *priv)
+ del_singleshot_timer_sync(&priv->user_read_timer);
+ flush_work(&priv->work);
+ file->private_data = NULL;
+- atomic_set(&priv->data_pending, 0);
++ priv->data_pending = 0;
+ }
+diff --git a/drivers/char/tpm/tpm-dev.h b/drivers/char/tpm/tpm-dev.h
+index ba3b6f9dacf7..b24cfb4d3ee1 100644
+--- a/drivers/char/tpm/tpm-dev.h
++++ b/drivers/char/tpm/tpm-dev.h
+@@ -8,7 +8,7 @@ struct file_priv {
+ struct tpm_chip *chip;
+
+ /* Data passed to and from the tpm via the read/write calls */
+- atomic_t data_pending;
++ size_t data_pending;
+ struct mutex buffer_mutex;
+
+ struct timer_list user_read_timer; /* user needs to claim result */
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 4e4014eabdb9..6122d3276f72 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -102,8 +102,9 @@ static int tpm2_load_context(struct tpm_chip *chip, u8 *buf,
+ * TPM_RC_REFERENCE_H0 means the session has been
+ * flushed outside the space
+ */
+- rc = -ENOENT;
++ *handle = 0;
+ tpm_buf_destroy(&tbuf);
++ return -ENOENT;
+ } else if (rc > 0) {
+ dev_warn(&chip->dev, "%s: failed with a TPM error 0x%04X\n",
+ __func__, rc);
+diff --git a/drivers/clk/at91/clk-pll.c b/drivers/clk/at91/clk-pll.c
+index 7d3223fc7161..72b6091eb7b9 100644
+--- a/drivers/clk/at91/clk-pll.c
++++ b/drivers/clk/at91/clk-pll.c
+@@ -132,19 +132,8 @@ static unsigned long clk_pll_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+ {
+ struct clk_pll *pll = to_clk_pll(hw);
+- unsigned int pllr;
+- u16 mul;
+- u8 div;
+-
+- regmap_read(pll->regmap, PLL_REG(pll->id), &pllr);
+-
+- div = PLL_DIV(pllr);
+- mul = PLL_MUL(pllr, pll->layout);
+-
+- if (!div || !mul)
+- return 0;
+
+- return (parent_rate / div) * (mul + 1);
++ return (parent_rate / pll->div) * (pll->mul + 1);
+ }
+
+ static long clk_pll_get_best_div_mul(struct clk_pll *pll, unsigned long rate,
+diff --git a/drivers/clk/clk-aspeed.c b/drivers/clk/clk-aspeed.c
+index 5eb50c31e455..2c23e7d7ba28 100644
+--- a/drivers/clk/clk-aspeed.c
++++ b/drivers/clk/clk-aspeed.c
+@@ -88,7 +88,7 @@ static const struct aspeed_gate_data aspeed_gates[] = {
+ [ASPEED_CLK_GATE_GCLK] = { 1, 7, "gclk-gate", NULL, 0 }, /* 2D engine */
+ [ASPEED_CLK_GATE_MCLK] = { 2, -1, "mclk-gate", "mpll", CLK_IS_CRITICAL }, /* SDRAM */
+ [ASPEED_CLK_GATE_VCLK] = { 3, 6, "vclk-gate", NULL, 0 }, /* Video Capture */
+- [ASPEED_CLK_GATE_BCLK] = { 4, 10, "bclk-gate", "bclk", 0 }, /* PCIe/PCI */
++ [ASPEED_CLK_GATE_BCLK] = { 4, 8, "bclk-gate", "bclk", 0 }, /* PCIe/PCI */
+ [ASPEED_CLK_GATE_DCLK] = { 5, -1, "dclk-gate", NULL, 0 }, /* DAC */
+ [ASPEED_CLK_GATE_REFCLK] = { 6, -1, "refclk-gate", "clkin", CLK_IS_CRITICAL },
+ [ASPEED_CLK_GATE_USBPORT2CLK] = { 7, 3, "usb-port2-gate", NULL, 0 }, /* USB2.0 Host port 2 */
+@@ -297,7 +297,7 @@ static const u8 aspeed_resets[] = {
+ [ASPEED_RESET_JTAG_MASTER] = 22,
+ [ASPEED_RESET_MIC] = 18,
+ [ASPEED_RESET_PWM] = 9,
+- [ASPEED_RESET_PCIVGA] = 8,
++ [ASPEED_RESET_PECI] = 10,
+ [ASPEED_RESET_I2C] = 2,
+ [ASPEED_RESET_AHB] = 1,
+ };
+diff --git a/drivers/clk/meson/meson8b.c b/drivers/clk/meson/meson8b.c
+index d0524ec71aad..d0d320180c51 100644
+--- a/drivers/clk/meson/meson8b.c
++++ b/drivers/clk/meson/meson8b.c
+@@ -246,6 +246,13 @@ static struct clk_regmap meson8b_fclk_div2 = {
+ .ops = &clk_regmap_gate_ops,
+ .parent_names = (const char *[]){ "fclk_div2_div" },
+ .num_parents = 1,
++ /*
++ * FIXME: Ethernet with a RGMII PHYs is not working if
++ * fclk_div2 is disabled. it is currently unclear why this
++ * is. keep it enabled until the Ethernet driver knows how
++ * to manage this clock.
++ */
++ .flags = CLK_IS_CRITICAL,
+ },
+ };
+
+diff --git a/drivers/clk/renesas/renesas-cpg-mssr.c b/drivers/clk/renesas/renesas-cpg-mssr.c
+index 4e88e980fb76..69a7c756658b 100644
+--- a/drivers/clk/renesas/renesas-cpg-mssr.c
++++ b/drivers/clk/renesas/renesas-cpg-mssr.c
+@@ -258,8 +258,9 @@ struct clk *cpg_mssr_clk_src_twocell_get(struct of_phandle_args *clkspec,
+ dev_err(dev, "Cannot get %s clock %u: %ld", type, clkidx,
+ PTR_ERR(clk));
+ else
+- dev_dbg(dev, "clock (%u, %u) is %pC at %pCr Hz\n",
+- clkspec->args[0], clkspec->args[1], clk, clk);
++ dev_dbg(dev, "clock (%u, %u) is %pC at %lu Hz\n",
++ clkspec->args[0], clkspec->args[1], clk,
++ clk_get_rate(clk));
+ return clk;
+ }
+
+@@ -326,7 +327,7 @@ static void __init cpg_mssr_register_core_clk(const struct cpg_core_clk *core,
+ if (IS_ERR_OR_NULL(clk))
+ goto fail;
+
+- dev_dbg(dev, "Core clock %pC at %pCr Hz\n", clk, clk);
++ dev_dbg(dev, "Core clock %pC at %lu Hz\n", clk, clk_get_rate(clk));
+ priv->clks[id] = clk;
+ return;
+
+@@ -392,7 +393,7 @@ static void __init cpg_mssr_register_mod_clk(const struct mssr_mod_clk *mod,
+ if (IS_ERR(clk))
+ goto fail;
+
+- dev_dbg(dev, "Module clock %pC at %pCr Hz\n", clk, clk);
++ dev_dbg(dev, "Module clock %pC at %lu Hz\n", clk, clk_get_rate(clk));
+ priv->clks[id] = clk;
+ priv->smstpcr_saved[clock->index / 32].mask |= BIT(clock->index % 32);
+ return;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 17e566afbb41..bd3f0a9d5e60 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -284,6 +284,7 @@ struct pstate_funcs {
+ static struct pstate_funcs pstate_funcs __read_mostly;
+
+ static int hwp_active __read_mostly;
++static int hwp_mode_bdw __read_mostly;
+ static bool per_cpu_limits __read_mostly;
+
+ static struct cpufreq_driver *intel_pstate_driver __read_mostly;
+@@ -1370,7 +1371,15 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
+ cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
+ cpu->pstate.scaling = pstate_funcs.get_scaling();
+ cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling;
+- cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
++
++ if (hwp_active && !hwp_mode_bdw) {
++ unsigned int phy_max, current_max;
++
++ intel_pstate_get_hwp_max(cpu->cpu, &phy_max, &current_max);
++ cpu->pstate.turbo_freq = phy_max * cpu->pstate.scaling;
++ } else {
++ cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
++ }
+
+ if (pstate_funcs.get_aperf_mperf_shift)
+ cpu->aperf_mperf_shift = pstate_funcs.get_aperf_mperf_shift();
+@@ -2252,28 +2261,36 @@ static inline bool intel_pstate_has_acpi_ppc(void) { return false; }
+ static inline void intel_pstate_request_control_from_smm(void) {}
+ #endif /* CONFIG_ACPI */
+
++#define INTEL_PSTATE_HWP_BROADWELL 0x01
++
++#define ICPU_HWP(model, hwp_mode) \
++ { X86_VENDOR_INTEL, 6, model, X86_FEATURE_HWP, hwp_mode }
++
+ static const struct x86_cpu_id hwp_support_ids[] __initconst = {
+- { X86_VENDOR_INTEL, 6, X86_MODEL_ANY, X86_FEATURE_HWP },
++ ICPU_HWP(INTEL_FAM6_BROADWELL_X, INTEL_PSTATE_HWP_BROADWELL),
++ ICPU_HWP(INTEL_FAM6_BROADWELL_XEON_D, INTEL_PSTATE_HWP_BROADWELL),
++ ICPU_HWP(X86_MODEL_ANY, 0),
+ {}
+ };
+
+ static int __init intel_pstate_init(void)
+ {
++ const struct x86_cpu_id *id;
+ int rc;
+
+ if (no_load)
+ return -ENODEV;
+
+- if (x86_match_cpu(hwp_support_ids)) {
++ id = x86_match_cpu(hwp_support_ids);
++ if (id) {
+ copy_cpu_funcs(&core_funcs);
+ if (!no_hwp) {
+ hwp_active++;
++ hwp_mode_bdw = id->driver_data;
+ intel_pstate.attr = hwp_cpufreq_attrs;
+ goto hwp_cpu_matched;
+ }
+ } else {
+- const struct x86_cpu_id *id;
+-
+ id = x86_match_cpu(intel_pstate_cpu_ids);
+ if (!id)
+ return -ENODEV;
+diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
+index 1a8234e706bc..d29e4f041efe 100644
+--- a/drivers/cpuidle/cpuidle-powernv.c
++++ b/drivers/cpuidle/cpuidle-powernv.c
+@@ -43,9 +43,31 @@ struct stop_psscr_table {
+
+ static struct stop_psscr_table stop_psscr_table[CPUIDLE_STATE_MAX] __read_mostly;
+
+-static u64 snooze_timeout __read_mostly;
++static u64 default_snooze_timeout __read_mostly;
+ static bool snooze_timeout_en __read_mostly;
+
++static u64 get_snooze_timeout(struct cpuidle_device *dev,
++ struct cpuidle_driver *drv,
++ int index)
++{
++ int i;
++
++ if (unlikely(!snooze_timeout_en))
++ return default_snooze_timeout;
++
++ for (i = index + 1; i < drv->state_count; i++) {
++ struct cpuidle_state *s = &drv->states[i];
++ struct cpuidle_state_usage *su = &dev->states_usage[i];
++
++ if (s->disabled || su->disable)
++ continue;
++
++ return s->target_residency * tb_ticks_per_usec;
++ }
++
++ return default_snooze_timeout;
++}
++
+ static int snooze_loop(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv,
+ int index)
+@@ -56,7 +78,7 @@ static int snooze_loop(struct cpuidle_device *dev,
+
+ local_irq_enable();
+
+- snooze_exit_time = get_tb() + snooze_timeout;
++ snooze_exit_time = get_tb() + get_snooze_timeout(dev, drv, index);
+ ppc64_runlatch_off();
+ HMT_very_low();
+ while (!need_resched()) {
+@@ -465,11 +487,9 @@ static int powernv_idle_probe(void)
+ cpuidle_state_table = powernv_states;
+ /* Device tree can indicate more idle states */
+ max_idle_state = powernv_add_idle_states();
+- if (max_idle_state > 1) {
++ default_snooze_timeout = TICK_USEC * tb_ticks_per_usec;
++ if (max_idle_state > 1)
+ snooze_timeout_en = true;
+- snooze_timeout = powernv_states[1].target_residency *
+- tb_ticks_per_usec;
+- }
+ } else
+ return -ENODEV;
+
+diff --git a/drivers/firmware/efi/libstub/tpm.c b/drivers/firmware/efi/libstub/tpm.c
+index 9d08cea3f1b0..9f5f35362f27 100644
+--- a/drivers/firmware/efi/libstub/tpm.c
++++ b/drivers/firmware/efi/libstub/tpm.c
+@@ -64,7 +64,7 @@ void efi_retrieve_tpm2_eventlog_1_2(efi_system_table_t *sys_table_arg)
+ efi_guid_t tcg2_guid = EFI_TCG2_PROTOCOL_GUID;
+ efi_guid_t linux_eventlog_guid = LINUX_EFI_TPM_EVENT_LOG_GUID;
+ efi_status_t status;
+- efi_physical_addr_t log_location, log_last_entry;
++ efi_physical_addr_t log_location = 0, log_last_entry = 0;
+ struct linux_efi_tpm_eventlog *log_tbl = NULL;
+ unsigned long first_entry_addr, last_entry_addr;
+ size_t log_size, last_entry_size;
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index 3b73dee6fdc6..e97105ae4158 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -37,6 +37,10 @@ MODULE_PARM_DESC(force, "force loading on processors with erratum 319");
+ /* Provide lock for writing to NB_SMU_IND_ADDR */
+ static DEFINE_MUTEX(nb_smu_ind_mutex);
+
++#ifndef PCI_DEVICE_ID_AMD_15H_M70H_NB_F3
++#define PCI_DEVICE_ID_AMD_15H_M70H_NB_F3 0x15b3
++#endif
++
+ #ifndef PCI_DEVICE_ID_AMD_17H_DF_F3
+ #define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463
+ #endif
+@@ -320,6 +324,7 @@ static const struct pci_device_id k10temp_id_table[] = {
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_M10H_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_M30H_NB_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_M60H_NB_F3) },
++ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_M70H_NB_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_NB_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) },
+diff --git a/drivers/i2c/algos/i2c-algo-bit.c b/drivers/i2c/algos/i2c-algo-bit.c
+index 3df0efd69ae3..1147bddb8b2c 100644
+--- a/drivers/i2c/algos/i2c-algo-bit.c
++++ b/drivers/i2c/algos/i2c-algo-bit.c
+@@ -649,11 +649,6 @@ static int __i2c_bit_add_bus(struct i2c_adapter *adap,
+ if (bit_adap->getscl == NULL)
+ adap->quirks = &i2c_bit_quirk_no_clk_stretch;
+
+- /* Bring bus to a known state. Looks like STOP if bus is not free yet */
+- setscl(bit_adap, 1);
+- udelay(bit_adap->udelay);
+- setsda(bit_adap, 1);
+-
+ ret = add_adapter(adap);
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/i2c/busses/i2c-gpio.c b/drivers/i2c/busses/i2c-gpio.c
+index 58abb3eced58..20b81bec0b0b 100644
+--- a/drivers/i2c/busses/i2c-gpio.c
++++ b/drivers/i2c/busses/i2c-gpio.c
+@@ -279,9 +279,9 @@ static int i2c_gpio_probe(struct platform_device *pdev)
+ * required for an I2C bus.
+ */
+ if (pdata->scl_is_open_drain)
+- gflags = GPIOD_OUT_LOW;
++ gflags = GPIOD_OUT_HIGH;
+ else
+- gflags = GPIOD_OUT_LOW_OPEN_DRAIN;
++ gflags = GPIOD_OUT_HIGH_OPEN_DRAIN;
+ priv->scl = i2c_gpio_get_desc(dev, "scl", 1, gflags);
+ if (IS_ERR(priv->scl))
+ return PTR_ERR(priv->scl);
+diff --git a/drivers/iio/accel/sca3000.c b/drivers/iio/accel/sca3000.c
+index f33dadf7b262..562f125235db 100644
+--- a/drivers/iio/accel/sca3000.c
++++ b/drivers/iio/accel/sca3000.c
+@@ -1277,7 +1277,7 @@ static int sca3000_configure_ring(struct iio_dev *indio_dev)
+ {
+ struct iio_buffer *buffer;
+
+- buffer = iio_kfifo_allocate();
++ buffer = devm_iio_kfifo_allocate(&indio_dev->dev);
+ if (!buffer)
+ return -ENOMEM;
+
+@@ -1287,11 +1287,6 @@ static int sca3000_configure_ring(struct iio_dev *indio_dev)
+ return 0;
+ }
+
+-static void sca3000_unconfigure_ring(struct iio_dev *indio_dev)
+-{
+- iio_kfifo_free(indio_dev->buffer);
+-}
+-
+ static inline
+ int __sca3000_hw_ring_state_set(struct iio_dev *indio_dev, bool state)
+ {
+@@ -1546,8 +1541,6 @@ static int sca3000_remove(struct spi_device *spi)
+ if (spi->irq)
+ free_irq(spi->irq, indio_dev);
+
+- sca3000_unconfigure_ring(indio_dev);
+-
+ return 0;
+ }
+
+diff --git a/drivers/iio/adc/ad7791.c b/drivers/iio/adc/ad7791.c
+index 70fbf92f9827..03a5f7d6cb0c 100644
+--- a/drivers/iio/adc/ad7791.c
++++ b/drivers/iio/adc/ad7791.c
+@@ -244,58 +244,9 @@ static int ad7791_read_raw(struct iio_dev *indio_dev,
+ return -EINVAL;
+ }
+
+-static const char * const ad7791_sample_freq_avail[] = {
+- [AD7791_FILTER_RATE_120] = "120",
+- [AD7791_FILTER_RATE_100] = "100",
+- [AD7791_FILTER_RATE_33_3] = "33.3",
+- [AD7791_FILTER_RATE_20] = "20",
+- [AD7791_FILTER_RATE_16_6] = "16.6",
+- [AD7791_FILTER_RATE_16_7] = "16.7",
+- [AD7791_FILTER_RATE_13_3] = "13.3",
+- [AD7791_FILTER_RATE_9_5] = "9.5",
+-};
+-
+-static ssize_t ad7791_read_frequency(struct device *dev,
+- struct device_attribute *attr, char *buf)
+-{
+- struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+- struct ad7791_state *st = iio_priv(indio_dev);
+- unsigned int rate = st->filter & AD7791_FILTER_RATE_MASK;
+-
+- return sprintf(buf, "%s\n", ad7791_sample_freq_avail[rate]);
+-}
+-
+-static ssize_t ad7791_write_frequency(struct device *dev,
+- struct device_attribute *attr, const char *buf, size_t len)
+-{
+- struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+- struct ad7791_state *st = iio_priv(indio_dev);
+- int i, ret;
+-
+- i = sysfs_match_string(ad7791_sample_freq_avail, buf);
+- if (i < 0)
+- return i;
+-
+- ret = iio_device_claim_direct_mode(indio_dev);
+- if (ret)
+- return ret;
+- st->filter &= ~AD7791_FILTER_RATE_MASK;
+- st->filter |= i;
+- ad_sd_write_reg(&st->sd, AD7791_REG_FILTER, sizeof(st->filter),
+- st->filter);
+- iio_device_release_direct_mode(indio_dev);
+-
+- return len;
+-}
+-
+-static IIO_DEV_ATTR_SAMP_FREQ(S_IWUSR | S_IRUGO,
+- ad7791_read_frequency,
+- ad7791_write_frequency);
+-
+ static IIO_CONST_ATTR_SAMP_FREQ_AVAIL("120 100 33.3 20 16.7 16.6 13.3 9.5");
+
+ static struct attribute *ad7791_attributes[] = {
+- &iio_dev_attr_sampling_frequency.dev_attr.attr,
+ &iio_const_attr_sampling_frequency_available.dev_attr.attr,
+ NULL
+ };
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index 2b6c9b516070..d76455edd292 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -119,16 +119,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
+ umem->length = size;
+ umem->address = addr;
+ umem->page_shift = PAGE_SHIFT;
+- /*
+- * We ask for writable memory if any of the following
+- * access flags are set. "Local write" and "remote write"
+- * obviously require write access. "Remote atomic" can do
+- * things like fetch and add, which will modify memory, and
+- * "MW bind" can change permissions by binding a window.
+- */
+- umem->writable = !!(access &
+- (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
+- IB_ACCESS_REMOTE_ATOMIC | IB_ACCESS_MW_BIND));
++ umem->writable = ib_access_writable(access);
+
+ if (access & IB_ACCESS_ON_DEMAND) {
+ ret = ib_umem_odp_get(context, umem, access);
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 4445d8ee9314..2d34a9c827b7 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -734,10 +734,6 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
+ if (ret)
+ return ret;
+
+- if (!file->ucontext &&
+- (command != IB_USER_VERBS_CMD_GET_CONTEXT || extended))
+- return -EINVAL;
+-
+ if (extended) {
+ if (count < (sizeof(hdr) + sizeof(ex_hdr)))
+ return -EINVAL;
+@@ -757,6 +753,16 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
+ goto out;
+ }
+
++ /*
++ * Must be after the ib_dev check, as once the RCU clears ib_dev ==
++ * NULL means ucontext == NULL
++ */
++ if (!file->ucontext &&
++ (command != IB_USER_VERBS_CMD_GET_CONTEXT || extended)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
+ if (!verify_command_mask(ib_dev, command, extended)) {
+ ret = -EOPNOTSUPP;
+ goto out;
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 6ddfb1fade79..def3bc1e6447 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -1562,11 +1562,12 @@ EXPORT_SYMBOL(ib_destroy_qp);
+
+ /* Completion queues */
+
+-struct ib_cq *ib_create_cq(struct ib_device *device,
+- ib_comp_handler comp_handler,
+- void (*event_handler)(struct ib_event *, void *),
+- void *cq_context,
+- const struct ib_cq_init_attr *cq_attr)
++struct ib_cq *__ib_create_cq(struct ib_device *device,
++ ib_comp_handler comp_handler,
++ void (*event_handler)(struct ib_event *, void *),
++ void *cq_context,
++ const struct ib_cq_init_attr *cq_attr,
++ const char *caller)
+ {
+ struct ib_cq *cq;
+
+@@ -1580,12 +1581,13 @@ struct ib_cq *ib_create_cq(struct ib_device *device,
+ cq->cq_context = cq_context;
+ atomic_set(&cq->usecnt, 0);
+ cq->res.type = RDMA_RESTRACK_CQ;
++ cq->res.kern_name = caller;
+ rdma_restrack_add(&cq->res);
+ }
+
+ return cq;
+ }
+-EXPORT_SYMBOL(ib_create_cq);
++EXPORT_SYMBOL(__ib_create_cq);
+
+ int rdma_set_cq_moderation(struct ib_cq *cq, u16 cq_count, u16 cq_period)
+ {
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index e6bdd0c1e80a..ebccc4c84827 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -6829,7 +6829,7 @@ static void rxe_kernel_unfreeze(struct hfi1_devdata *dd)
+ }
+ rcvmask = HFI1_RCVCTRL_CTXT_ENB;
+ /* HFI1_RCVCTRL_TAILUPD_[ENB|DIS] needs to be set explicitly */
+- rcvmask |= HFI1_CAP_KGET_MASK(rcd->flags, DMA_RTAIL) ?
++ rcvmask |= rcd->rcvhdrtail_kvaddr ?
+ HFI1_RCVCTRL_TAILUPD_ENB : HFI1_RCVCTRL_TAILUPD_DIS;
+ hfi1_rcvctrl(dd, rcvmask, rcd);
+ hfi1_rcd_put(rcd);
+@@ -8355,7 +8355,7 @@ static inline int check_packet_present(struct hfi1_ctxtdata *rcd)
+ u32 tail;
+ int present;
+
+- if (!HFI1_CAP_IS_KSET(DMA_RTAIL))
++ if (!rcd->rcvhdrtail_kvaddr)
+ present = (rcd->seq_cnt ==
+ rhf_rcv_seq(rhf_to_cpu(get_rhf_addr(rcd))));
+ else /* is RDMA rtail */
+@@ -11823,7 +11823,7 @@ void hfi1_rcvctrl(struct hfi1_devdata *dd, unsigned int op,
+ /* reset the tail and hdr addresses, and sequence count */
+ write_kctxt_csr(dd, ctxt, RCV_HDR_ADDR,
+ rcd->rcvhdrq_dma);
+- if (HFI1_CAP_KGET_MASK(rcd->flags, DMA_RTAIL))
++ if (rcd->rcvhdrtail_kvaddr)
+ write_kctxt_csr(dd, ctxt, RCV_HDR_TAIL_ADDR,
+ rcd->rcvhdrqtailaddr_dma);
+ rcd->seq_cnt = 1;
+@@ -11903,7 +11903,7 @@ void hfi1_rcvctrl(struct hfi1_devdata *dd, unsigned int op,
+ rcvctrl |= RCV_CTXT_CTRL_INTR_AVAIL_SMASK;
+ if (op & HFI1_RCVCTRL_INTRAVAIL_DIS)
+ rcvctrl &= ~RCV_CTXT_CTRL_INTR_AVAIL_SMASK;
+- if (op & HFI1_RCVCTRL_TAILUPD_ENB && rcd->rcvhdrqtailaddr_dma)
++ if ((op & HFI1_RCVCTRL_TAILUPD_ENB) && rcd->rcvhdrtail_kvaddr)
+ rcvctrl |= RCV_CTXT_CTRL_TAIL_UPD_SMASK;
+ if (op & HFI1_RCVCTRL_TAILUPD_DIS) {
+ /* See comment on RcvCtxtCtrl.TailUpd above */
+diff --git a/drivers/infiniband/hw/hfi1/debugfs.c b/drivers/infiniband/hw/hfi1/debugfs.c
+index 852173bf05d0..5343960610fe 100644
+--- a/drivers/infiniband/hw/hfi1/debugfs.c
++++ b/drivers/infiniband/hw/hfi1/debugfs.c
+@@ -1227,7 +1227,8 @@ DEBUGFS_FILE_OPS(fault_stats);
+
+ static void fault_exit_opcode_debugfs(struct hfi1_ibdev *ibd)
+ {
+- debugfs_remove_recursive(ibd->fault_opcode->dir);
++ if (ibd->fault_opcode)
++ debugfs_remove_recursive(ibd->fault_opcode->dir);
+ kfree(ibd->fault_opcode);
+ ibd->fault_opcode = NULL;
+ }
+@@ -1255,6 +1256,7 @@ static int fault_init_opcode_debugfs(struct hfi1_ibdev *ibd)
+ &ibd->fault_opcode->attr);
+ if (IS_ERR(ibd->fault_opcode->dir)) {
+ kfree(ibd->fault_opcode);
++ ibd->fault_opcode = NULL;
+ return -ENOENT;
+ }
+
+@@ -1278,7 +1280,8 @@ static int fault_init_opcode_debugfs(struct hfi1_ibdev *ibd)
+
+ static void fault_exit_packet_debugfs(struct hfi1_ibdev *ibd)
+ {
+- debugfs_remove_recursive(ibd->fault_packet->dir);
++ if (ibd->fault_packet)
++ debugfs_remove_recursive(ibd->fault_packet->dir);
+ kfree(ibd->fault_packet);
+ ibd->fault_packet = NULL;
+ }
+@@ -1304,6 +1307,7 @@ static int fault_init_packet_debugfs(struct hfi1_ibdev *ibd)
+ &ibd->fault_opcode->attr);
+ if (IS_ERR(ibd->fault_packet->dir)) {
+ kfree(ibd->fault_packet);
++ ibd->fault_packet = NULL;
+ return -ENOENT;
+ }
+
+diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
+index da4aa1a95b11..cf25913bd81c 100644
+--- a/drivers/infiniband/hw/hfi1/file_ops.c
++++ b/drivers/infiniband/hw/hfi1/file_ops.c
+@@ -505,7 +505,7 @@ static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma)
+ ret = -EINVAL;
+ goto done;
+ }
+- if (flags & VM_WRITE) {
++ if ((flags & VM_WRITE) || !uctxt->rcvhdrtail_kvaddr) {
+ ret = -EPERM;
+ goto done;
+ }
+@@ -689,8 +689,8 @@ static int hfi1_file_close(struct inode *inode, struct file *fp)
+ * checks to default and disable the send context.
+ */
+ if (uctxt->sc) {
+- set_pio_integrity(uctxt->sc);
+ sc_disable(uctxt->sc);
++ set_pio_integrity(uctxt->sc);
+ }
+
+ hfi1_free_ctxt_rcv_groups(uctxt);
+diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
+index cac2c62bc42d..9c97c180c35e 100644
+--- a/drivers/infiniband/hw/hfi1/hfi.h
++++ b/drivers/infiniband/hw/hfi1/hfi.h
+@@ -1856,6 +1856,7 @@ struct cc_state *get_cc_state_protected(struct hfi1_pportdata *ppd)
+ #define HFI1_HAS_SDMA_TIMEOUT 0x8
+ #define HFI1_HAS_SEND_DMA 0x10 /* Supports Send DMA */
+ #define HFI1_FORCED_FREEZE 0x80 /* driver forced freeze mode */
++#define HFI1_SHUTDOWN 0x100 /* device is shutting down */
+
+ /* IB dword length mask in PBC (lower 11 bits); same for all chips */
+ #define HFI1_PBC_LENGTH_MASK ((1 << 11) - 1)
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index 6309edf811df..92e802a64fc4 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -1058,6 +1058,10 @@ static void shutdown_device(struct hfi1_devdata *dd)
+ unsigned pidx;
+ int i;
+
++ if (dd->flags & HFI1_SHUTDOWN)
++ return;
++ dd->flags |= HFI1_SHUTDOWN;
++
+ for (pidx = 0; pidx < dd->num_pports; ++pidx) {
+ ppd = dd->pport + pidx;
+
+@@ -1391,6 +1395,7 @@ void hfi1_disable_after_error(struct hfi1_devdata *dd)
+
+ static void remove_one(struct pci_dev *);
+ static int init_one(struct pci_dev *, const struct pci_device_id *);
++static void shutdown_one(struct pci_dev *);
+
+ #define DRIVER_LOAD_MSG "Intel " DRIVER_NAME " loaded: "
+ #define PFX DRIVER_NAME ": "
+@@ -1407,6 +1412,7 @@ static struct pci_driver hfi1_pci_driver = {
+ .name = DRIVER_NAME,
+ .probe = init_one,
+ .remove = remove_one,
++ .shutdown = shutdown_one,
+ .id_table = hfi1_pci_tbl,
+ .err_handler = &hfi1_pci_err_handler,
+ };
+@@ -1816,6 +1822,13 @@ static void remove_one(struct pci_dev *pdev)
+ postinit_cleanup(dd);
+ }
+
++static void shutdown_one(struct pci_dev *pdev)
++{
++ struct hfi1_devdata *dd = pci_get_drvdata(pdev);
++
++ shutdown_device(dd);
++}
++
+ /**
+ * hfi1_create_rcvhdrq - create a receive header queue
+ * @dd: the hfi1_ib device
+@@ -1831,7 +1844,6 @@ int hfi1_create_rcvhdrq(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd)
+ u64 reg;
+
+ if (!rcd->rcvhdrq) {
+- dma_addr_t dma_hdrqtail;
+ gfp_t gfp_flags;
+
+ /*
+@@ -1856,13 +1868,13 @@ int hfi1_create_rcvhdrq(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd)
+ goto bail;
+ }
+
+- if (HFI1_CAP_KGET_MASK(rcd->flags, DMA_RTAIL)) {
++ if (HFI1_CAP_KGET_MASK(rcd->flags, DMA_RTAIL) ||
++ HFI1_CAP_UGET_MASK(rcd->flags, DMA_RTAIL)) {
+ rcd->rcvhdrtail_kvaddr = dma_zalloc_coherent(
+- &dd->pcidev->dev, PAGE_SIZE, &dma_hdrqtail,
+- gfp_flags);
++ &dd->pcidev->dev, PAGE_SIZE,
++ &rcd->rcvhdrqtailaddr_dma, gfp_flags);
+ if (!rcd->rcvhdrtail_kvaddr)
+ goto bail_free;
+- rcd->rcvhdrqtailaddr_dma = dma_hdrqtail;
+ }
+
+ rcd->rcvhdrq_size = amt;
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index 40dac4d16eb8..9cac15d10c4f 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -50,8 +50,6 @@
+ #include "qp.h"
+ #include "trace.h"
+
+-#define SC_CTXT_PACKET_EGRESS_TIMEOUT 350 /* in chip cycles */
+-
+ #define SC(name) SEND_CTXT_##name
+ /*
+ * Send Context functions
+@@ -961,15 +959,40 @@ void sc_disable(struct send_context *sc)
+ }
+
+ /* return SendEgressCtxtStatus.PacketOccupancy */
+-#define packet_occupancy(r) \
+- (((r) & SEND_EGRESS_CTXT_STATUS_CTXT_EGRESS_PACKET_OCCUPANCY_SMASK)\
+- >> SEND_EGRESS_CTXT_STATUS_CTXT_EGRESS_PACKET_OCCUPANCY_SHIFT)
++static u64 packet_occupancy(u64 reg)
++{
++ return (reg &
++ SEND_EGRESS_CTXT_STATUS_CTXT_EGRESS_PACKET_OCCUPANCY_SMASK)
++ >> SEND_EGRESS_CTXT_STATUS_CTXT_EGRESS_PACKET_OCCUPANCY_SHIFT;
++}
+
+ /* is egress halted on the context? */
+-#define egress_halted(r) \
+- ((r) & SEND_EGRESS_CTXT_STATUS_CTXT_EGRESS_HALT_STATUS_SMASK)
++static bool egress_halted(u64 reg)
++{
++ return !!(reg & SEND_EGRESS_CTXT_STATUS_CTXT_EGRESS_HALT_STATUS_SMASK);
++}
+
+-/* wait for packet egress, optionally pause for credit return */
++/* is the send context halted? */
++static bool is_sc_halted(struct hfi1_devdata *dd, u32 hw_context)
++{
++ return !!(read_kctxt_csr(dd, hw_context, SC(STATUS)) &
++ SC(STATUS_CTXT_HALTED_SMASK));
++}
++
++/**
++ * sc_wait_for_packet_egress
++ * @sc: valid send context
++ * @pause: wait for credit return
++ *
++ * Wait for packet egress, optionally pause for credit return
++ *
++ * Egress halt and Context halt are not necessarily the same thing, so
++ * check for both.
++ *
++ * NOTE: The context halt bit may not be set immediately. Because of this,
++ * it is necessary to check the SW SFC_HALTED bit (set in the IRQ) and the HW
++ * context bit to determine if the context is halted.
++ */
+ static void sc_wait_for_packet_egress(struct send_context *sc, int pause)
+ {
+ struct hfi1_devdata *dd = sc->dd;
+@@ -981,8 +1004,9 @@ static void sc_wait_for_packet_egress(struct send_context *sc, int pause)
+ reg_prev = reg;
+ reg = read_csr(dd, sc->hw_context * 8 +
+ SEND_EGRESS_CTXT_STATUS);
+- /* done if egress is stopped */
+- if (egress_halted(reg))
++ /* done if any halt bits, SW or HW are set */
++ if (sc->flags & SCF_HALTED ||
++ is_sc_halted(dd, sc->hw_context) || egress_halted(reg))
+ break;
+ reg = packet_occupancy(reg);
+ if (reg == 0)
+diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
+index 0793a21d76f4..d604b3d5aa3e 100644
+--- a/drivers/infiniband/hw/mlx4/mad.c
++++ b/drivers/infiniband/hw/mlx4/mad.c
+@@ -1934,7 +1934,6 @@ static void mlx4_ib_sqp_comp_worker(struct work_struct *work)
+ "buf:%lld\n", wc.wr_id);
+ break;
+ default:
+- BUG_ON(1);
+ break;
+ }
+ } else {
+diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
+index 61d8b06375bb..ed1f253faf97 100644
+--- a/drivers/infiniband/hw/mlx4/mr.c
++++ b/drivers/infiniband/hw/mlx4/mr.c
+@@ -367,6 +367,40 @@ int mlx4_ib_umem_calc_optimal_mtt_size(struct ib_umem *umem, u64 start_va,
+ return block_shift;
+ }
+
++static struct ib_umem *mlx4_get_umem_mr(struct ib_ucontext *context, u64 start,
++ u64 length, u64 virt_addr,
++ int access_flags)
++{
++ /*
++ * Force registering the memory as writable if the underlying pages
++ * are writable. This is so rereg can change the access permissions
++ * from readable to writable without having to run through ib_umem_get
++ * again
++ */
++ if (!ib_access_writable(access_flags)) {
++ struct vm_area_struct *vma;
++
++ down_read(&current->mm->mmap_sem);
++ /*
++ * FIXME: Ideally this would iterate over all the vmas that
++ * cover the memory, but for now it requires a single vma to
++ * entirely cover the MR to support RO mappings.
++ */
++ vma = find_vma(current->mm, start);
++ if (vma && vma->vm_end >= start + length &&
++ vma->vm_start <= start) {
++ if (vma->vm_flags & VM_WRITE)
++ access_flags |= IB_ACCESS_LOCAL_WRITE;
++ } else {
++ access_flags |= IB_ACCESS_LOCAL_WRITE;
++ }
++
++ up_read(&current->mm->mmap_sem);
++ }
++
++ return ib_umem_get(context, start, length, access_flags, 0);
++}
++
+ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ u64 virt_addr, int access_flags,
+ struct ib_udata *udata)
+@@ -381,10 +415,8 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ if (!mr)
+ return ERR_PTR(-ENOMEM);
+
+- /* Force registering the memory as writable. */
+- /* Used for memory re-registeration. HCA protects the access */
+- mr->umem = ib_umem_get(pd->uobject->context, start, length,
+- access_flags | IB_ACCESS_LOCAL_WRITE, 0);
++ mr->umem = mlx4_get_umem_mr(pd->uobject->context, start, length,
++ virt_addr, access_flags);
+ if (IS_ERR(mr->umem)) {
+ err = PTR_ERR(mr->umem);
+ goto err_free;
+@@ -454,6 +486,9 @@ int mlx4_ib_rereg_user_mr(struct ib_mr *mr, int flags,
+ }
+
+ if (flags & IB_MR_REREG_ACCESS) {
++ if (ib_access_writable(mr_access_flags) && !mmr->umem->writable)
++ return -EPERM;
++
+ err = mlx4_mr_hw_change_access(dev->dev, *pmpt_entry,
+ convert_access(mr_access_flags));
+
+@@ -467,10 +502,9 @@ int mlx4_ib_rereg_user_mr(struct ib_mr *mr, int flags,
+
+ mlx4_mr_rereg_mem_cleanup(dev->dev, &mmr->mmr);
+ ib_umem_release(mmr->umem);
+- mmr->umem = ib_umem_get(mr->uobject->context, start, length,
+- mr_access_flags |
+- IB_ACCESS_LOCAL_WRITE,
+- 0);
++ mmr->umem =
++ mlx4_get_umem_mr(mr->uobject->context, start, length,
++ virt_addr, mr_access_flags);
+ if (IS_ERR(mmr->umem)) {
+ err = PTR_ERR(mmr->umem);
+ /* Prevent mlx4_ib_dereg_mr from free'ing invalid pointer */
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index 77d257ec899b..9f6bc34cd4db 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -637,7 +637,7 @@ static int mlx5_poll_one(struct mlx5_ib_cq *cq,
+ }
+
+ static int poll_soft_wc(struct mlx5_ib_cq *cq, int num_entries,
+- struct ib_wc *wc)
++ struct ib_wc *wc, bool is_fatal_err)
+ {
+ struct mlx5_ib_dev *dev = to_mdev(cq->ibcq.device);
+ struct mlx5_ib_wc *soft_wc, *next;
+@@ -650,6 +650,10 @@ static int poll_soft_wc(struct mlx5_ib_cq *cq, int num_entries,
+ mlx5_ib_dbg(dev, "polled software generated completion on CQ 0x%x\n",
+ cq->mcq.cqn);
+
++ if (unlikely(is_fatal_err)) {
++ soft_wc->wc.status = IB_WC_WR_FLUSH_ERR;
++ soft_wc->wc.vendor_err = MLX5_CQE_SYNDROME_WR_FLUSH_ERR;
++ }
+ wc[npolled++] = soft_wc->wc;
+ list_del(&soft_wc->list);
+ kfree(soft_wc);
+@@ -670,12 +674,17 @@ int mlx5_ib_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
+
+ spin_lock_irqsave(&cq->lock, flags);
+ if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
+- mlx5_ib_poll_sw_comp(cq, num_entries, wc, &npolled);
++ /* make sure no soft wqe's are waiting */
++ if (unlikely(!list_empty(&cq->wc_list)))
++ soft_polled = poll_soft_wc(cq, num_entries, wc, true);
++
++ mlx5_ib_poll_sw_comp(cq, num_entries - soft_polled,
++ wc + soft_polled, &npolled);
+ goto out;
+ }
+
+ if (unlikely(!list_empty(&cq->wc_list)))
+- soft_polled = poll_soft_wc(cq, num_entries, wc);
++ soft_polled = poll_soft_wc(cq, num_entries, wc, false);
+
+ for (npolled = 0; npolled < num_entries - soft_polled; npolled++) {
+ if (mlx5_poll_one(cq, &cur_qp, wc + soft_polled + npolled))
+diff --git a/drivers/infiniband/hw/qib/qib.h b/drivers/infiniband/hw/qib/qib.h
+index 46072455130c..3461df002f81 100644
+--- a/drivers/infiniband/hw/qib/qib.h
++++ b/drivers/infiniband/hw/qib/qib.h
+@@ -1228,6 +1228,7 @@ static inline struct qib_ibport *to_iport(struct ib_device *ibdev, u8 port)
+ #define QIB_BADINTR 0x8000 /* severe interrupt problems */
+ #define QIB_DCA_ENABLED 0x10000 /* Direct Cache Access enabled */
+ #define QIB_HAS_QSFP 0x20000 /* device (card instance) has QSFP */
++#define QIB_SHUTDOWN 0x40000 /* device is shutting down */
+
+ /*
+ * values for ppd->lflags (_ib_port_ related flags)
+@@ -1423,8 +1424,7 @@ u64 qib_sps_ints(void);
+ /*
+ * dma_addr wrappers - all 0's invalid for hw
+ */
+-dma_addr_t qib_map_page(struct pci_dev *, struct page *, unsigned long,
+- size_t, int);
++int qib_map_page(struct pci_dev *d, struct page *p, dma_addr_t *daddr);
+ struct pci_dev *qib_get_pci_dev(struct rvt_dev_info *rdi);
+
+ /*
+diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
+index 6a8800b65047..49c8e926cc64 100644
+--- a/drivers/infiniband/hw/qib/qib_file_ops.c
++++ b/drivers/infiniband/hw/qib/qib_file_ops.c
+@@ -364,6 +364,8 @@ static int qib_tid_update(struct qib_ctxtdata *rcd, struct file *fp,
+ goto done;
+ }
+ for (i = 0; i < cnt; i++, vaddr += PAGE_SIZE) {
++ dma_addr_t daddr;
++
+ for (; ntids--; tid++) {
+ if (tid == tidcnt)
+ tid = 0;
+@@ -380,12 +382,14 @@ static int qib_tid_update(struct qib_ctxtdata *rcd, struct file *fp,
+ ret = -ENOMEM;
+ break;
+ }
++ ret = qib_map_page(dd->pcidev, pagep[i], &daddr);
++ if (ret)
++ break;
++
+ tidlist[i] = tid + tidoff;
+ /* we "know" system pages and TID pages are same size */
+ dd->pageshadow[ctxttid + tid] = pagep[i];
+- dd->physshadow[ctxttid + tid] =
+- qib_map_page(dd->pcidev, pagep[i], 0, PAGE_SIZE,
+- PCI_DMA_FROMDEVICE);
++ dd->physshadow[ctxttid + tid] = daddr;
+ /*
+ * don't need atomic or it's overhead
+ */
+diff --git a/drivers/infiniband/hw/qib/qib_init.c b/drivers/infiniband/hw/qib/qib_init.c
+index 6c68f8a97018..015520289735 100644
+--- a/drivers/infiniband/hw/qib/qib_init.c
++++ b/drivers/infiniband/hw/qib/qib_init.c
+@@ -841,6 +841,10 @@ static void qib_shutdown_device(struct qib_devdata *dd)
+ struct qib_pportdata *ppd;
+ unsigned pidx;
+
++ if (dd->flags & QIB_SHUTDOWN)
++ return;
++ dd->flags |= QIB_SHUTDOWN;
++
+ for (pidx = 0; pidx < dd->num_pports; ++pidx) {
+ ppd = dd->pport + pidx;
+
+@@ -1182,6 +1186,7 @@ void qib_disable_after_error(struct qib_devdata *dd)
+
+ static void qib_remove_one(struct pci_dev *);
+ static int qib_init_one(struct pci_dev *, const struct pci_device_id *);
++static void qib_shutdown_one(struct pci_dev *);
+
+ #define DRIVER_LOAD_MSG "Intel " QIB_DRV_NAME " loaded: "
+ #define PFX QIB_DRV_NAME ": "
+@@ -1199,6 +1204,7 @@ static struct pci_driver qib_driver = {
+ .name = QIB_DRV_NAME,
+ .probe = qib_init_one,
+ .remove = qib_remove_one,
++ .shutdown = qib_shutdown_one,
+ .id_table = qib_pci_tbl,
+ .err_handler = &qib_pci_err_handler,
+ };
+@@ -1549,6 +1555,13 @@ static void qib_remove_one(struct pci_dev *pdev)
+ qib_postinit_cleanup(dd);
+ }
+
++static void qib_shutdown_one(struct pci_dev *pdev)
++{
++ struct qib_devdata *dd = pci_get_drvdata(pdev);
++
++ qib_shutdown_device(dd);
++}
++
+ /**
+ * qib_create_rcvhdrq - create a receive header queue
+ * @dd: the qlogic_ib device
+diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
+index ce83ba9a12ef..16543d5e80c3 100644
+--- a/drivers/infiniband/hw/qib/qib_user_pages.c
++++ b/drivers/infiniband/hw/qib/qib_user_pages.c
+@@ -99,23 +99,27 @@ static int __qib_get_user_pages(unsigned long start_page, size_t num_pages,
+ *
+ * I'm sure we won't be so lucky with other iommu's, so FIXME.
+ */
+-dma_addr_t qib_map_page(struct pci_dev *hwdev, struct page *page,
+- unsigned long offset, size_t size, int direction)
++int qib_map_page(struct pci_dev *hwdev, struct page *page, dma_addr_t *daddr)
+ {
+ dma_addr_t phys;
+
+- phys = pci_map_page(hwdev, page, offset, size, direction);
++ phys = pci_map_page(hwdev, page, 0, PAGE_SIZE, PCI_DMA_FROMDEVICE);
++ if (pci_dma_mapping_error(hwdev, phys))
++ return -ENOMEM;
+
+- if (phys == 0) {
+- pci_unmap_page(hwdev, phys, size, direction);
+- phys = pci_map_page(hwdev, page, offset, size, direction);
++ if (!phys) {
++ pci_unmap_page(hwdev, phys, PAGE_SIZE, PCI_DMA_FROMDEVICE);
++ phys = pci_map_page(hwdev, page, 0, PAGE_SIZE,
++ PCI_DMA_FROMDEVICE);
++ if (pci_dma_mapping_error(hwdev, phys))
++ return -ENOMEM;
+ /*
+ * FIXME: If we get 0 again, we should keep this page,
+ * map another, then free the 0 page.
+ */
+ }
+-
+- return phys;
++ *daddr = phys;
++ return 0;
+ }
+
+ /**
+diff --git a/drivers/infiniband/sw/rdmavt/cq.c b/drivers/infiniband/sw/rdmavt/cq.c
+index fb52b669bfce..340c17aba3b0 100644
+--- a/drivers/infiniband/sw/rdmavt/cq.c
++++ b/drivers/infiniband/sw/rdmavt/cq.c
+@@ -120,17 +120,20 @@ void rvt_cq_enter(struct rvt_cq *cq, struct ib_wc *entry, bool solicited)
+ if (cq->notify == IB_CQ_NEXT_COMP ||
+ (cq->notify == IB_CQ_SOLICITED &&
+ (solicited || entry->status != IB_WC_SUCCESS))) {
++ struct kthread_worker *worker;
++
+ /*
+ * This will cause send_complete() to be called in
+ * another thread.
+ */
+- spin_lock(&cq->rdi->n_cqs_lock);
+- if (likely(cq->rdi->worker)) {
++ rcu_read_lock();
++ worker = rcu_dereference(cq->rdi->worker);
++ if (likely(worker)) {
+ cq->notify = RVT_CQ_NONE;
+ cq->triggered++;
+- kthread_queue_work(cq->rdi->worker, &cq->comptask);
++ kthread_queue_work(worker, &cq->comptask);
+ }
+- spin_unlock(&cq->rdi->n_cqs_lock);
++ rcu_read_unlock();
+ }
+
+ spin_unlock_irqrestore(&cq->lock, flags);
+@@ -512,7 +515,7 @@ int rvt_driver_cq_init(struct rvt_dev_info *rdi)
+ int cpu;
+ struct kthread_worker *worker;
+
+- if (rdi->worker)
++ if (rcu_access_pointer(rdi->worker))
+ return 0;
+
+ spin_lock_init(&rdi->n_cqs_lock);
+@@ -524,7 +527,7 @@ int rvt_driver_cq_init(struct rvt_dev_info *rdi)
+ return PTR_ERR(worker);
+
+ set_user_nice(worker->task, MIN_NICE);
+- rdi->worker = worker;
++ RCU_INIT_POINTER(rdi->worker, worker);
+ return 0;
+ }
+
+@@ -536,15 +539,19 @@ void rvt_cq_exit(struct rvt_dev_info *rdi)
+ {
+ struct kthread_worker *worker;
+
+- /* block future queuing from send_complete() */
+- spin_lock_irq(&rdi->n_cqs_lock);
+- worker = rdi->worker;
++ if (!rcu_access_pointer(rdi->worker))
++ return;
++
++ spin_lock(&rdi->n_cqs_lock);
++ worker = rcu_dereference_protected(rdi->worker,
++ lockdep_is_held(&rdi->n_cqs_lock));
+ if (!worker) {
+- spin_unlock_irq(&rdi->n_cqs_lock);
++ spin_unlock(&rdi->n_cqs_lock);
+ return;
+ }
+- rdi->worker = NULL;
+- spin_unlock_irq(&rdi->n_cqs_lock);
++ RCU_INIT_POINTER(rdi->worker, NULL);
++ spin_unlock(&rdi->n_cqs_lock);
++ synchronize_rcu();
+
+ kthread_destroy_worker(worker);
+ }
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index fff40b097947..3130698fee70 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -886,15 +886,9 @@ isert_login_post_send(struct isert_conn *isert_conn, struct iser_tx_desc *tx_des
+ }
+
+ static void
+-isert_create_send_desc(struct isert_conn *isert_conn,
+- struct isert_cmd *isert_cmd,
+- struct iser_tx_desc *tx_desc)
++__isert_create_send_desc(struct isert_device *device,
++ struct iser_tx_desc *tx_desc)
+ {
+- struct isert_device *device = isert_conn->device;
+- struct ib_device *ib_dev = device->ib_device;
+-
+- ib_dma_sync_single_for_cpu(ib_dev, tx_desc->dma_addr,
+- ISER_HEADERS_LEN, DMA_TO_DEVICE);
+
+ memset(&tx_desc->iser_header, 0, sizeof(struct iser_ctrl));
+ tx_desc->iser_header.flags = ISCSI_CTRL;
+@@ -907,6 +901,20 @@ isert_create_send_desc(struct isert_conn *isert_conn,
+ }
+ }
+
++static void
++isert_create_send_desc(struct isert_conn *isert_conn,
++ struct isert_cmd *isert_cmd,
++ struct iser_tx_desc *tx_desc)
++{
++ struct isert_device *device = isert_conn->device;
++ struct ib_device *ib_dev = device->ib_device;
++
++ ib_dma_sync_single_for_cpu(ib_dev, tx_desc->dma_addr,
++ ISER_HEADERS_LEN, DMA_TO_DEVICE);
++
++ __isert_create_send_desc(device, tx_desc);
++}
++
+ static int
+ isert_init_tx_hdrs(struct isert_conn *isert_conn,
+ struct iser_tx_desc *tx_desc)
+@@ -994,7 +1002,7 @@ isert_put_login_tx(struct iscsi_conn *conn, struct iscsi_login *login,
+ struct iser_tx_desc *tx_desc = &isert_conn->login_tx_desc;
+ int ret;
+
+- isert_create_send_desc(isert_conn, NULL, tx_desc);
++ __isert_create_send_desc(device, tx_desc);
+
+ memcpy(&tx_desc->iscsi_header, &login->rsp[0],
+ sizeof(struct iscsi_hdr));
+@@ -2108,7 +2116,7 @@ isert_set_sig_attrs(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs)
+
+ sig_attrs->check_mask =
+ (se_cmd->prot_checks & TARGET_DIF_CHECK_GUARD ? 0xc0 : 0) |
+- (se_cmd->prot_checks & TARGET_DIF_CHECK_REFTAG ? 0x30 : 0) |
++ (se_cmd->prot_checks & TARGET_DIF_CHECK_APPTAG ? 0x30 : 0) |
+ (se_cmd->prot_checks & TARGET_DIF_CHECK_REFTAG ? 0x0f : 0);
+ return 0;
+ }
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index a89b81b35932..62f9c23d8a7f 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -123,7 +123,7 @@ static const struct xpad_device {
+ u8 mapping;
+ u8 xtype;
+ } xpad_device[] = {
+- { 0x0079, 0x18d4, "GPD Win 2 Controller", 0, XTYPE_XBOX360 },
++ { 0x0079, 0x18d4, "GPD Win 2 X-Box Controller", 0, XTYPE_XBOX360 },
+ { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
+diff --git a/drivers/input/mouse/elan_i2c.h b/drivers/input/mouse/elan_i2c.h
+index 599544c1a91c..243e0fa6e3e3 100644
+--- a/drivers/input/mouse/elan_i2c.h
++++ b/drivers/input/mouse/elan_i2c.h
+@@ -27,6 +27,8 @@
+ #define ETP_DISABLE_POWER 0x0001
+ #define ETP_PRESSURE_OFFSET 25
+
++#define ETP_CALIBRATE_MAX_LEN 3
++
+ /* IAP Firmware handling */
+ #define ETP_PRODUCT_ID_FORMAT_STRING "%d.0"
+ #define ETP_FW_NAME "elan_i2c_" ETP_PRODUCT_ID_FORMAT_STRING ".bin"
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 93967c8139e7..37f954b704a6 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -610,7 +610,7 @@ static ssize_t calibrate_store(struct device *dev,
+ int tries = 20;
+ int retval;
+ int error;
+- u8 val[3];
++ u8 val[ETP_CALIBRATE_MAX_LEN];
+
+ retval = mutex_lock_interruptible(&data->sysfs_mutex);
+ if (retval)
+@@ -1263,6 +1263,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ { "ELAN060C", 0 },
+ { "ELAN0611", 0 },
+ { "ELAN0612", 0 },
++ { "ELAN0618", 0 },
+ { "ELAN1000", 0 },
+ { }
+ };
+diff --git a/drivers/input/mouse/elan_i2c_smbus.c b/drivers/input/mouse/elan_i2c_smbus.c
+index cfcb32559925..c060d270bc4d 100644
+--- a/drivers/input/mouse/elan_i2c_smbus.c
++++ b/drivers/input/mouse/elan_i2c_smbus.c
+@@ -56,7 +56,7 @@
+ static int elan_smbus_initialize(struct i2c_client *client)
+ {
+ u8 check[ETP_SMBUS_HELLOPACKET_LEN] = { 0x55, 0x55, 0x55, 0x55, 0x55 };
+- u8 values[ETP_SMBUS_HELLOPACKET_LEN] = { 0, 0, 0, 0, 0 };
++ u8 values[I2C_SMBUS_BLOCK_MAX] = {0};
+ int len, error;
+
+ /* Get hello packet */
+@@ -117,12 +117,16 @@ static int elan_smbus_calibrate(struct i2c_client *client)
+ static int elan_smbus_calibrate_result(struct i2c_client *client, u8 *val)
+ {
+ int error;
++ u8 buf[I2C_SMBUS_BLOCK_MAX] = {0};
++
++ BUILD_BUG_ON(ETP_CALIBRATE_MAX_LEN > sizeof(buf));
+
+ error = i2c_smbus_read_block_data(client,
+- ETP_SMBUS_CALIBRATE_QUERY, val);
++ ETP_SMBUS_CALIBRATE_QUERY, buf);
+ if (error < 0)
+ return error;
+
++ memcpy(val, buf, ETP_CALIBRATE_MAX_LEN);
+ return 0;
+ }
+
+@@ -472,6 +476,8 @@ static int elan_smbus_get_report(struct i2c_client *client, u8 *report)
+ {
+ int len;
+
++ BUILD_BUG_ON(I2C_SMBUS_BLOCK_MAX > ETP_SMBUS_REPORT_LEN);
++
+ len = i2c_smbus_read_block_data(client,
+ ETP_SMBUS_PACKET_QUERY,
+ &report[ETP_SMBUS_REPORT_OFFSET]);
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index db47a5e1d114..b68019109e99 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -796,7 +796,7 @@ static int elantech_packet_check_v4(struct psmouse *psmouse)
+ else if (ic_version == 7 && etd->samples[1] == 0x2A)
+ sanity_check = ((packet[3] & 0x1c) == 0x10);
+ else
+- sanity_check = ((packet[0] & 0x0c) == 0x04 &&
++ sanity_check = ((packet[0] & 0x08) == 0x00 &&
+ (packet[3] & 0x1c) == 0x10);
+
+ if (!sanity_check)
+@@ -1169,6 +1169,12 @@ static const struct dmi_system_id elantech_dmi_has_middle_button[] = {
+ { }
+ };
+
++static const char * const middle_button_pnp_ids[] = {
++ "LEN2131", /* ThinkPad P52 w/ NFC */
++ "LEN2132", /* ThinkPad P52 */
++ NULL
++};
++
+ /*
+ * Set the appropriate event bits for the input subsystem
+ */
+@@ -1188,7 +1194,8 @@ static int elantech_set_input_params(struct psmouse *psmouse)
+ __clear_bit(EV_REL, dev->evbit);
+
+ __set_bit(BTN_LEFT, dev->keybit);
+- if (dmi_check_system(elantech_dmi_has_middle_button))
++ if (dmi_check_system(elantech_dmi_has_middle_button) ||
++ psmouse_matches_pnp_id(psmouse, middle_button_pnp_ids))
+ __set_bit(BTN_MIDDLE, dev->keybit);
+ __set_bit(BTN_RIGHT, dev->keybit);
+
+diff --git a/drivers/input/mouse/psmouse-base.c b/drivers/input/mouse/psmouse-base.c
+index 8900c3166ebf..47ed5616d026 100644
+--- a/drivers/input/mouse/psmouse-base.c
++++ b/drivers/input/mouse/psmouse-base.c
+@@ -192,8 +192,8 @@ psmouse_ret_t psmouse_process_byte(struct psmouse *psmouse)
+ else
+ input_report_rel(dev, REL_WHEEL, -wheel);
+
+- input_report_key(dev, BTN_SIDE, BIT(4));
+- input_report_key(dev, BTN_EXTRA, BIT(5));
++ input_report_key(dev, BTN_SIDE, packet[3] & BIT(4));
++ input_report_key(dev, BTN_EXTRA, packet[3] & BIT(5));
+ break;
+ }
+ break;
+@@ -203,13 +203,13 @@ psmouse_ret_t psmouse_process_byte(struct psmouse *psmouse)
+ input_report_rel(dev, REL_WHEEL, -(s8) packet[3]);
+
+ /* Extra buttons on Genius NewNet 3D */
+- input_report_key(dev, BTN_SIDE, BIT(6));
+- input_report_key(dev, BTN_EXTRA, BIT(7));
++ input_report_key(dev, BTN_SIDE, packet[0] & BIT(6));
++ input_report_key(dev, BTN_EXTRA, packet[0] & BIT(7));
+ break;
+
+ case PSMOUSE_THINKPS:
+ /* Extra button on ThinkingMouse */
+- input_report_key(dev, BTN_EXTRA, BIT(3));
++ input_report_key(dev, BTN_EXTRA, packet[0] & BIT(3));
+
+ /*
+ * Without this bit of weirdness moving up gives wildly
+@@ -223,7 +223,7 @@ psmouse_ret_t psmouse_process_byte(struct psmouse *psmouse)
+ * Cortron PS2 Trackball reports SIDE button in the
+ * 4th bit of the first byte.
+ */
+- input_report_key(dev, BTN_SIDE, BIT(3));
++ input_report_key(dev, BTN_SIDE, packet[0] & BIT(3));
+ packet[0] |= BIT(3);
+ break;
+
+diff --git a/drivers/input/touchscreen/silead.c b/drivers/input/touchscreen/silead.c
+index ff7043f74a3d..d196ac3d8b8c 100644
+--- a/drivers/input/touchscreen/silead.c
++++ b/drivers/input/touchscreen/silead.c
+@@ -603,6 +603,7 @@ static const struct acpi_device_id silead_ts_acpi_match[] = {
+ { "GSL3692", 0 },
+ { "MSSL1680", 0 },
+ { "MSSL0001", 0 },
++ { "MSSL0002", 0 },
+ { }
+ };
+ MODULE_DEVICE_TABLE(acpi, silead_ts_acpi_match);
+diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
+index df171cb85822..b38798cc5288 100644
+--- a/drivers/iommu/Kconfig
++++ b/drivers/iommu/Kconfig
+@@ -107,7 +107,6 @@ config IOMMU_PGTABLES_L2
+ # AMD IOMMU support
+ config AMD_IOMMU
+ bool "AMD IOMMU support"
+- select DMA_DIRECT_OPS
+ select SWIOTLB
+ select PCI_MSI
+ select PCI_ATS
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index b0b30a568db7..12c1491a1a9a 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -2593,32 +2593,51 @@ static void *alloc_coherent(struct device *dev, size_t size,
+ unsigned long attrs)
+ {
+ u64 dma_mask = dev->coherent_dma_mask;
+- struct protection_domain *domain = get_domain(dev);
+- bool is_direct = false;
+- void *virt_addr;
++ struct protection_domain *domain;
++ struct dma_ops_domain *dma_dom;
++ struct page *page;
++
++ domain = get_domain(dev);
++ if (PTR_ERR(domain) == -EINVAL) {
++ page = alloc_pages(flag, get_order(size));
++ *dma_addr = page_to_phys(page);
++ return page_address(page);
++ } else if (IS_ERR(domain))
++ return NULL;
+
+- if (IS_ERR(domain)) {
+- if (PTR_ERR(domain) != -EINVAL)
++ dma_dom = to_dma_ops_domain(domain);
++ size = PAGE_ALIGN(size);
++ dma_mask = dev->coherent_dma_mask;
++ flag &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
++ flag |= __GFP_ZERO;
++
++ page = alloc_pages(flag | __GFP_NOWARN, get_order(size));
++ if (!page) {
++ if (!gfpflags_allow_blocking(flag))
+ return NULL;
+- is_direct = true;
+- }
+
+- virt_addr = dma_direct_alloc(dev, size, dma_addr, flag, attrs);
+- if (!virt_addr || is_direct)
+- return virt_addr;
++ page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
++ get_order(size), flag);
++ if (!page)
++ return NULL;
++ }
+
+ if (!dma_mask)
+ dma_mask = *dev->dma_mask;
+
+- *dma_addr = __map_single(dev, to_dma_ops_domain(domain),
+- virt_to_phys(virt_addr), PAGE_ALIGN(size),
+- DMA_BIDIRECTIONAL, dma_mask);
++ *dma_addr = __map_single(dev, dma_dom, page_to_phys(page),
++ size, DMA_BIDIRECTIONAL, dma_mask);
++
+ if (*dma_addr == AMD_IOMMU_MAPPING_ERROR)
+ goto out_free;
+- return virt_addr;
++
++ return page_address(page);
+
+ out_free:
+- dma_direct_free(dev, size, virt_addr, *dma_addr, attrs);
++
++ if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
++ __free_pages(page, get_order(size));
++
+ return NULL;
+ }
+
+@@ -2629,17 +2648,24 @@ static void free_coherent(struct device *dev, size_t size,
+ void *virt_addr, dma_addr_t dma_addr,
+ unsigned long attrs)
+ {
+- struct protection_domain *domain = get_domain(dev);
++ struct protection_domain *domain;
++ struct dma_ops_domain *dma_dom;
++ struct page *page;
+
++ page = virt_to_page(virt_addr);
+ size = PAGE_ALIGN(size);
+
+- if (!IS_ERR(domain)) {
+- struct dma_ops_domain *dma_dom = to_dma_ops_domain(domain);
++ domain = get_domain(dev);
++ if (IS_ERR(domain))
++ goto free_mem;
+
+- __unmap_single(dma_dom, dma_addr, size, DMA_BIDIRECTIONAL);
+- }
++ dma_dom = to_dma_ops_domain(domain);
++
++ __unmap_single(dma_dom, dma_addr, size, DMA_BIDIRECTIONAL);
+
+- dma_direct_free(dev, size, virt_addr, dma_addr, attrs);
++free_mem:
++ if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
++ __free_pages(page, get_order(size));
+ }
+
+ /*
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 5416f2b2ac21..ab16968fced8 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -2309,7 +2309,14 @@ static int its_irq_domain_activate(struct irq_domain *domain,
+ cpu_mask = cpumask_of_node(its_dev->its->numa_node);
+
+ /* Bind the LPI to the first possible CPU */
+- cpu = cpumask_first(cpu_mask);
++ cpu = cpumask_first_and(cpu_mask, cpu_online_mask);
++ if (cpu >= nr_cpu_ids) {
++ if (its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144)
++ return -EINVAL;
++
++ cpu = cpumask_first(cpu_online_mask);
++ }
++
+ its_dev->event_map.col_map[event] = cpu;
+ irq_data_update_effective_affinity(d, cpumask_of(cpu));
+
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index b11107497d2e..19478c7b2268 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -1385,6 +1385,8 @@ static void schedule_external_copy(struct thin_c *tc, dm_block_t virt_block,
+
+ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode);
+
++static void requeue_bios(struct pool *pool);
++
+ static void check_for_space(struct pool *pool)
+ {
+ int r;
+@@ -1397,8 +1399,10 @@ static void check_for_space(struct pool *pool)
+ if (r)
+ return;
+
+- if (nr_free)
++ if (nr_free) {
+ set_pool_mode(pool, PM_WRITE);
++ requeue_bios(pool);
++ }
+ }
+
+ /*
+@@ -1475,7 +1479,10 @@ static int alloc_data_block(struct thin_c *tc, dm_block_t *result)
+
+ r = dm_pool_alloc_data_block(pool->pmd, result);
+ if (r) {
+- metadata_operation_failed(pool, "dm_pool_alloc_data_block", r);
++ if (r == -ENOSPC)
++ set_pool_mode(pool, PM_OUT_OF_DATA_SPACE);
++ else
++ metadata_operation_failed(pool, "dm_pool_alloc_data_block", r);
+ return r;
+ }
+
+diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
+index e73b0776683c..e302f1558fa0 100644
+--- a/drivers/md/dm-zoned-target.c
++++ b/drivers/md/dm-zoned-target.c
+@@ -788,7 +788,7 @@ static int dmz_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+
+ /* Chunk BIO work */
+ mutex_init(&dmz->chunk_lock);
+- INIT_RADIX_TREE(&dmz->chunk_rxtree, GFP_KERNEL);
++ INIT_RADIX_TREE(&dmz->chunk_rxtree, GFP_NOIO);
+ dmz->chunk_wq = alloc_workqueue("dmz_cwq_%s", WQ_MEM_RECLAIM | WQ_UNBOUND,
+ 0, dev->name);
+ if (!dmz->chunk_wq) {
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 0a7b0107ca78..cabae3e280c2 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1582,10 +1582,9 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
+ * the usage of io->orig_bio in dm_remap_zone_report()
+ * won't be affected by this reassignment.
+ */
+- struct bio *b = bio_clone_bioset(bio, GFP_NOIO,
+- md->queue->bio_split);
++ struct bio *b = bio_split(bio, bio_sectors(bio) - ci.sector_count,
++ GFP_NOIO, md->queue->bio_split);
+ ci.io->orig_bio = b;
+- bio_advance(bio, (bio_sectors(bio) - ci.sector_count) << 9);
+ bio_chain(b, bio);
+ ret = generic_make_request(bio);
+ break;
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index c208c01f63a5..bac480d75d1d 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -2853,7 +2853,8 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ err = 0;
+ }
+ } else if (cmd_match(buf, "re-add")) {
+- if (test_bit(Faulty, &rdev->flags) && (rdev->raid_disk == -1)) {
++ if (test_bit(Faulty, &rdev->flags) && (rdev->raid_disk == -1) &&
++ rdev->saved_raid_disk >= 0) {
+ /* clear_bit is performed _after_ all the devices
+ * have their local Faulty bit cleared. If any writes
+ * happen in the meantime in the local node, they
+@@ -8641,6 +8642,7 @@ static int remove_and_add_spares(struct mddev *mddev,
+ if (mddev->pers->hot_remove_disk(
+ mddev, rdev) == 0) {
+ sysfs_unlink_rdev(mddev, rdev);
++ rdev->saved_raid_disk = rdev->raid_disk;
+ rdev->raid_disk = -1;
+ removed++;
+ }
+diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
+index e33414975065..a4ada1ccf0df 100644
+--- a/drivers/media/dvb-core/dvb_frontend.c
++++ b/drivers/media/dvb-core/dvb_frontend.c
+@@ -275,8 +275,20 @@ static void dvb_frontend_add_event(struct dvb_frontend *fe,
+ wake_up_interruptible (&events->wait_queue);
+ }
+
++static int dvb_frontend_test_event(struct dvb_frontend_private *fepriv,
++ struct dvb_fe_events *events)
++{
++ int ret;
++
++ up(&fepriv->sem);
++ ret = events->eventw != events->eventr;
++ down(&fepriv->sem);
++
++ return ret;
++}
++
+ static int dvb_frontend_get_event(struct dvb_frontend *fe,
+- struct dvb_frontend_event *event, int flags)
++ struct dvb_frontend_event *event, int flags)
+ {
+ struct dvb_frontend_private *fepriv = fe->frontend_priv;
+ struct dvb_fe_events *events = &fepriv->events;
+@@ -294,13 +306,8 @@ static int dvb_frontend_get_event(struct dvb_frontend *fe,
+ if (flags & O_NONBLOCK)
+ return -EWOULDBLOCK;
+
+- up(&fepriv->sem);
+-
+- ret = wait_event_interruptible (events->wait_queue,
+- events->eventw != events->eventr);
+-
+- if (down_interruptible (&fepriv->sem))
+- return -ERESTARTSYS;
++ ret = wait_event_interruptible(events->wait_queue,
++ dvb_frontend_test_event(fepriv, events));
+
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/media/platform/vsp1/vsp1_video.c b/drivers/media/platform/vsp1/vsp1_video.c
+index c2d3b8f0f487..93f69b3ac911 100644
+--- a/drivers/media/platform/vsp1/vsp1_video.c
++++ b/drivers/media/platform/vsp1/vsp1_video.c
+@@ -849,9 +849,8 @@ static int vsp1_video_setup_pipeline(struct vsp1_pipeline *pipe)
+ return 0;
+ }
+
+-static void vsp1_video_cleanup_pipeline(struct vsp1_pipeline *pipe)
++static void vsp1_video_release_buffers(struct vsp1_video *video)
+ {
+- struct vsp1_video *video = pipe->output->video;
+ struct vsp1_vb2_buffer *buffer;
+ unsigned long flags;
+
+@@ -861,12 +860,18 @@ static void vsp1_video_cleanup_pipeline(struct vsp1_pipeline *pipe)
+ vb2_buffer_done(&buffer->buf.vb2_buf, VB2_BUF_STATE_ERROR);
+ INIT_LIST_HEAD(&video->irqqueue);
+ spin_unlock_irqrestore(&video->irqlock, flags);
++}
++
++static void vsp1_video_cleanup_pipeline(struct vsp1_pipeline *pipe)
++{
++ lockdep_assert_held(&pipe->lock);
+
+ /* Release our partition table allocation */
+- mutex_lock(&pipe->lock);
+ kfree(pipe->part_table);
+ pipe->part_table = NULL;
+- mutex_unlock(&pipe->lock);
++
++ vsp1_dl_list_put(pipe->dl);
++ pipe->dl = NULL;
+ }
+
+ static int vsp1_video_start_streaming(struct vb2_queue *vq, unsigned int count)
+@@ -881,8 +886,9 @@ static int vsp1_video_start_streaming(struct vb2_queue *vq, unsigned int count)
+ if (pipe->stream_count == pipe->num_inputs) {
+ ret = vsp1_video_setup_pipeline(pipe);
+ if (ret < 0) {
+- mutex_unlock(&pipe->lock);
++ vsp1_video_release_buffers(video);
+ vsp1_video_cleanup_pipeline(pipe);
++ mutex_unlock(&pipe->lock);
+ return ret;
+ }
+
+@@ -932,13 +938,12 @@ static void vsp1_video_stop_streaming(struct vb2_queue *vq)
+ if (ret == -ETIMEDOUT)
+ dev_err(video->vsp1->dev, "pipeline stop timeout\n");
+
+- vsp1_dl_list_put(pipe->dl);
+- pipe->dl = NULL;
++ vsp1_video_cleanup_pipeline(pipe);
+ }
+ mutex_unlock(&pipe->lock);
+
+ media_pipeline_stop(&video->video.entity);
+- vsp1_video_cleanup_pipeline(pipe);
++ vsp1_video_release_buffers(video);
+ vsp1_video_pipeline_put(pipe);
+ }
+
+diff --git a/drivers/media/rc/ir-mce_kbd-decoder.c b/drivers/media/rc/ir-mce_kbd-decoder.c
+index c110984ca671..5478fe08f9d3 100644
+--- a/drivers/media/rc/ir-mce_kbd-decoder.c
++++ b/drivers/media/rc/ir-mce_kbd-decoder.c
+@@ -130,6 +130,8 @@ static void mce_kbd_rx_timeout(struct timer_list *t)
+
+ for (i = 0; i < MCIR2_MASK_KEYS_START; i++)
+ input_report_key(raw->mce_kbd.idev, kbd_keycodes[i], 0);
++
++ input_sync(raw->mce_kbd.idev);
+ }
+
+ static enum mce_kbd_mode mce_kbd_mode(struct mce_kbd_dec *data)
+diff --git a/drivers/media/usb/cx231xx/cx231xx-cards.c b/drivers/media/usb/cx231xx/cx231xx-cards.c
+index c76b2101193c..89795d4d0a71 100644
+--- a/drivers/media/usb/cx231xx/cx231xx-cards.c
++++ b/drivers/media/usb/cx231xx/cx231xx-cards.c
+@@ -1024,6 +1024,9 @@ struct usb_device_id cx231xx_id_table[] = {
+ .driver_info = CX231XX_BOARD_CNXT_RDE_250},
+ {USB_DEVICE(0x0572, 0x58A0),
+ .driver_info = CX231XX_BOARD_CNXT_RDU_250},
++ /* AverMedia DVD EZMaker 7 */
++ {USB_DEVICE(0x07ca, 0xc039),
++ .driver_info = CX231XX_BOARD_CNXT_VIDEO_GRABBER},
+ {USB_DEVICE(0x2040, 0xb110),
+ .driver_info = CX231XX_BOARD_HAUPPAUGE_USB2_FM_PAL},
+ {USB_DEVICE(0x2040, 0xb111),
+diff --git a/drivers/media/usb/cx231xx/cx231xx-dvb.c b/drivers/media/usb/cx231xx/cx231xx-dvb.c
+index 67ed66712d05..f31ffaf9d2f2 100644
+--- a/drivers/media/usb/cx231xx/cx231xx-dvb.c
++++ b/drivers/media/usb/cx231xx/cx231xx-dvb.c
+@@ -1151,7 +1151,7 @@ static int dvb_init(struct cx231xx *dev)
+ info.platform_data = &si2157_config;
+ request_module("si2157");
+
+- client = i2c_new_device(adapter, &info);
++ client = i2c_new_device(tuner_i2c, &info);
+ if (client == NULL || client->dev.driver == NULL) {
+ module_put(dvb->i2c_client_demod[0]->dev.driver->owner);
+ i2c_unregister_device(dvb->i2c_client_demod[0]);
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index aa0082fe5833..b28c997a7ab0 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -163,14 +163,27 @@ static void uvc_fixup_video_ctrl(struct uvc_streaming *stream,
+ }
+ }
+
++static size_t uvc_video_ctrl_size(struct uvc_streaming *stream)
++{
++ /*
++ * Return the size of the video probe and commit controls, which depends
++ * on the protocol version.
++ */
++ if (stream->dev->uvc_version < 0x0110)
++ return 26;
++ else if (stream->dev->uvc_version < 0x0150)
++ return 34;
++ else
++ return 48;
++}
++
+ static int uvc_get_video_ctrl(struct uvc_streaming *stream,
+ struct uvc_streaming_control *ctrl, int probe, u8 query)
+ {
++ u16 size = uvc_video_ctrl_size(stream);
+ u8 *data;
+- u16 size;
+ int ret;
+
+- size = stream->dev->uvc_version >= 0x0110 ? 34 : 26;
+ if ((stream->dev->quirks & UVC_QUIRK_PROBE_DEF) &&
+ query == UVC_GET_DEF)
+ return -EIO;
+@@ -225,7 +238,7 @@ static int uvc_get_video_ctrl(struct uvc_streaming *stream,
+ ctrl->dwMaxVideoFrameSize = get_unaligned_le32(&data[18]);
+ ctrl->dwMaxPayloadTransferSize = get_unaligned_le32(&data[22]);
+
+- if (size == 34) {
++ if (size >= 34) {
+ ctrl->dwClockFrequency = get_unaligned_le32(&data[26]);
+ ctrl->bmFramingInfo = data[30];
+ ctrl->bPreferedVersion = data[31];
+@@ -254,11 +267,10 @@ static int uvc_get_video_ctrl(struct uvc_streaming *stream,
+ static int uvc_set_video_ctrl(struct uvc_streaming *stream,
+ struct uvc_streaming_control *ctrl, int probe)
+ {
++ u16 size = uvc_video_ctrl_size(stream);
+ u8 *data;
+- u16 size;
+ int ret;
+
+- size = stream->dev->uvc_version >= 0x0110 ? 34 : 26;
+ data = kzalloc(size, GFP_KERNEL);
+ if (data == NULL)
+ return -ENOMEM;
+@@ -275,7 +287,7 @@ static int uvc_set_video_ctrl(struct uvc_streaming *stream,
+ put_unaligned_le32(ctrl->dwMaxVideoFrameSize, &data[18]);
+ put_unaligned_le32(ctrl->dwMaxPayloadTransferSize, &data[22]);
+
+- if (size == 34) {
++ if (size >= 34) {
+ put_unaligned_le32(ctrl->dwClockFrequency, &data[26]);
+ data[30] = ctrl->bmFramingInfo;
+ data[31] = ctrl->bPreferedVersion;
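The new uvc_video_ctrl_size() helper above captures how the probe/commit control grew across protocol revisions: 26 bytes before UVC 1.1, 34 bytes up to (but excluding) UVC 1.5, and 48 bytes from 1.5 on. Since uvc_version is BCD-encoded (0x0110 means version 1.1), plain integer comparison orders revisions correctly, as this standalone sketch shows:

#include <stdint.h>
#include <stdio.h>

/* Same dispatch as uvc_video_ctrl_size(): BCD-encoded versions
 * (0x0100 = 1.0, 0x0110 = 1.1, 0x0150 = 1.5) compare correctly as
 * plain integers. */
static uint16_t ctrl_size(uint16_t uvc_version)
{
	if (uvc_version < 0x0110)
		return 26;
	else if (uvc_version < 0x0150)
		return 34;
	else
		return 48;
}

int main(void)
{
	printf("%u %u %u\n", (unsigned)ctrl_size(0x0100),
	       (unsigned)ctrl_size(0x0110),
	       (unsigned)ctrl_size(0x0150)); /* prints: 26 34 48 */
	return 0;
}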
+diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+index 4312935f1dfc..d03a44d89649 100644
+--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
++++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+@@ -871,7 +871,7 @@ static int put_v4l2_ext_controls32(struct file *file,
+ get_user(kcontrols, &kp->controls))
+ return -EFAULT;
+
+- if (!count)
++ if (!count || count > (U32_MAX/sizeof(*ucontrols)))
+ return 0;
+ if (get_user(p, &up->controls))
+ return -EFAULT;
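The added count bound above is a pre-multiplication overflow guard: any element count above U32_MAX / sizeof(*ucontrols) would make the later byte-size computation wrap around before it is used for copying. A minimal sketch of the same check, assuming an invented 8-byte element type:

#include <stdint.h>
#include <stdio.h>

/* Invented 8-byte element type, used only to illustrate the bound. */
struct ctrl32 {
	uint32_t id;
	uint32_t value;
};

/* Reject any count for which count * sizeof(struct ctrl32) would not
 * fit in 32 bits. */
static int count_is_sane(uint32_t count)
{
	return count <= UINT32_MAX / sizeof(struct ctrl32);
}

int main(void)
{
	printf("%d\n", count_is_sane(10));         /* 1: fits */
	printf("%d\n", count_is_sane(UINT32_MAX)); /* 0: would overflow */
	return 0;
}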
+diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c
+index d1c46de89eb4..d9ae983095c5 100644
+--- a/drivers/mfd/intel-lpss-pci.c
++++ b/drivers/mfd/intel-lpss-pci.c
+@@ -124,6 +124,11 @@ static const struct intel_lpss_platform_info apl_i2c_info = {
+ .properties = apl_i2c_properties,
+ };
+
++static const struct intel_lpss_platform_info cnl_i2c_info = {
++ .clk_rate = 216000000,
++ .properties = spt_i2c_properties,
++};
++
+ static const struct pci_device_id intel_lpss_pci_ids[] = {
+ /* BXT A-Step */
+ { PCI_VDEVICE(INTEL, 0x0aac), (kernel_ulong_t)&bxt_i2c_info },
+@@ -207,13 +212,13 @@ static const struct pci_device_id intel_lpss_pci_ids[] = {
+ { PCI_VDEVICE(INTEL, 0x9daa), (kernel_ulong_t)&spt_info },
+ { PCI_VDEVICE(INTEL, 0x9dab), (kernel_ulong_t)&spt_info },
+ { PCI_VDEVICE(INTEL, 0x9dfb), (kernel_ulong_t)&spt_info },
+- { PCI_VDEVICE(INTEL, 0x9dc5), (kernel_ulong_t)&spt_i2c_info },
+- { PCI_VDEVICE(INTEL, 0x9dc6), (kernel_ulong_t)&spt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x9dc5), (kernel_ulong_t)&cnl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x9dc6), (kernel_ulong_t)&cnl_i2c_info },
+ { PCI_VDEVICE(INTEL, 0x9dc7), (kernel_ulong_t)&spt_uart_info },
+- { PCI_VDEVICE(INTEL, 0x9de8), (kernel_ulong_t)&spt_i2c_info },
+- { PCI_VDEVICE(INTEL, 0x9de9), (kernel_ulong_t)&spt_i2c_info },
+- { PCI_VDEVICE(INTEL, 0x9dea), (kernel_ulong_t)&spt_i2c_info },
+- { PCI_VDEVICE(INTEL, 0x9deb), (kernel_ulong_t)&spt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x9de8), (kernel_ulong_t)&cnl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x9de9), (kernel_ulong_t)&cnl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x9dea), (kernel_ulong_t)&cnl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x9deb), (kernel_ulong_t)&cnl_i2c_info },
+ /* SPT-H */
+ { PCI_VDEVICE(INTEL, 0xa127), (kernel_ulong_t)&spt_uart_info },
+ { PCI_VDEVICE(INTEL, 0xa128), (kernel_ulong_t)&spt_uart_info },
+@@ -240,10 +245,10 @@ static const struct pci_device_id intel_lpss_pci_ids[] = {
+ { PCI_VDEVICE(INTEL, 0xa32b), (kernel_ulong_t)&spt_info },
+ { PCI_VDEVICE(INTEL, 0xa37b), (kernel_ulong_t)&spt_info },
+ { PCI_VDEVICE(INTEL, 0xa347), (kernel_ulong_t)&spt_uart_info },
+- { PCI_VDEVICE(INTEL, 0xa368), (kernel_ulong_t)&spt_i2c_info },
+- { PCI_VDEVICE(INTEL, 0xa369), (kernel_ulong_t)&spt_i2c_info },
+- { PCI_VDEVICE(INTEL, 0xa36a), (kernel_ulong_t)&spt_i2c_info },
+- { PCI_VDEVICE(INTEL, 0xa36b), (kernel_ulong_t)&spt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xa368), (kernel_ulong_t)&cnl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xa369), (kernel_ulong_t)&cnl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xa36a), (kernel_ulong_t)&cnl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xa36b), (kernel_ulong_t)&cnl_i2c_info },
+ { }
+ };
+ MODULE_DEVICE_TABLE(pci, intel_lpss_pci_ids);
+diff --git a/drivers/mfd/intel-lpss.c b/drivers/mfd/intel-lpss.c
+index 9e545eb6e8b4..4bcf117a7ba8 100644
+--- a/drivers/mfd/intel-lpss.c
++++ b/drivers/mfd/intel-lpss.c
+@@ -275,11 +275,11 @@ static void intel_lpss_init_dev(const struct intel_lpss *lpss)
+
+ intel_lpss_deassert_reset(lpss);
+
++ intel_lpss_set_remap_addr(lpss);
++
+ if (!intel_lpss_has_idma(lpss))
+ return;
+
+- intel_lpss_set_remap_addr(lpss);
+-
+ /* Make sure that SPI multiblock DMA transfers are re-enabled */
+ if (lpss->type == LPSS_DEV_SPI)
+ writel(value, lpss->priv + LPSS_PRIV_SSP_REG);
+diff --git a/drivers/mfd/twl-core.c b/drivers/mfd/twl-core.c
+index d3133a371e27..c649344fd7f2 100644
+--- a/drivers/mfd/twl-core.c
++++ b/drivers/mfd/twl-core.c
+@@ -1177,7 +1177,7 @@ twl_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ twl_priv->ready = true;
+
+ /* setup clock framework */
+- clocks_init(&pdev->dev, pdata ? pdata->clock : NULL);
++ clocks_init(&client->dev, pdata ? pdata->clock : NULL);
+
+ /* read TWL IDCODE Register */
+ if (twl_class_is_4030()) {
+diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
+index 4d6736f9d463..429d6de1dde7 100644
+--- a/drivers/misc/cxl/pci.c
++++ b/drivers/misc/cxl/pci.c
+@@ -514,9 +514,9 @@ static int init_implementation_adapter_regs_psl9(struct cxl *adapter,
+ cxl_p1_write(adapter, CXL_PSL9_FIR_CNTL, psl_fircntl);
+
+ /* Setup the PSL to transmit packets on the PCIe before the
+- * CAPP is enabled
++ * CAPP is enabled. Make sure that CAPP virtual machines are disabled
+ */
+- cxl_p1_write(adapter, CXL_PSL9_DSNDCTL, 0x0001001000002A10ULL);
++ cxl_p1_write(adapter, CXL_PSL9_DSNDCTL, 0x0001001000012A10ULL);
+
+ /*
+ * A response to an ASB_Notify request is returned by the
+diff --git a/drivers/misc/cxl/sysfs.c b/drivers/misc/cxl/sysfs.c
+index 4b5a4c5d3c01..629e2e156412 100644
+--- a/drivers/misc/cxl/sysfs.c
++++ b/drivers/misc/cxl/sysfs.c
+@@ -353,12 +353,20 @@ static ssize_t prefault_mode_store(struct device *device,
+ struct cxl_afu *afu = to_cxl_afu(device);
+ enum prefault_modes mode = -1;
+
+- if (!strncmp(buf, "work_element_descriptor", 23))
+- mode = CXL_PREFAULT_WED;
+- if (!strncmp(buf, "all", 3))
+- mode = CXL_PREFAULT_ALL;
+ if (!strncmp(buf, "none", 4))
+ mode = CXL_PREFAULT_NONE;
++ else {
++ if (!radix_enabled()) {
++
++ /* only allowed when not in radix mode */
++ if (!strncmp(buf, "work_element_descriptor", 23))
++ mode = CXL_PREFAULT_WED;
++ if (!strncmp(buf, "all", 3))
++ mode = CXL_PREFAULT_ALL;
++ } else {
++ dev_err(device, "Cannot prefault with radix enabled\n");
++ }
++ }
+
+ if (mode == -1)
+ return -EINVAL;
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 51e01f03fb99..45c015da2e75 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -28,6 +28,7 @@
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/mmc/host.h>
++#include <linux/mmc/slot-gpio.h>
+ #include <linux/mfd/tmio.h>
+ #include <linux/sh_dma.h>
+ #include <linux/delay.h>
+@@ -534,6 +535,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ host->multi_io_quirk = renesas_sdhi_multi_io_quirk;
+ host->dma_ops = dma_ops;
+
++ /* For some SoCs, we disable internal WP. GPIO may override this */
++ if (mmc_can_gpio_ro(host->mmc))
++ mmc_data->capabilities2 &= ~MMC_CAP2_NO_WRITE_PROTECT;
++
+ /* SDR speeds are only available on Gen2+ */
+ if (mmc_data->flags & TMIO_MMC_MIN_RCAR2) {
+ /* card_busy caused issues on r8a73a4 (pre-Gen2) CD-less SDHI */
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index 6af946d16d24..eb027cdc8f24 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -87,6 +87,7 @@ static const struct renesas_sdhi_of_data of_rcar_gen3_compatible = {
+ TMIO_MMC_HAVE_CBSY | TMIO_MMC_MIN_RCAR2,
+ .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
+ MMC_CAP_CMD23,
++ .capabilities2 = MMC_CAP2_NO_WRITE_PROTECT,
+ .bus_shift = 2,
+ .scc_offset = 0x1000,
+ .taps = rcar_gen3_scc_taps,
+diff --git a/drivers/mmc/host/renesas_sdhi_sys_dmac.c b/drivers/mmc/host/renesas_sdhi_sys_dmac.c
+index 848e50c1638a..4bb46c489d71 100644
+--- a/drivers/mmc/host/renesas_sdhi_sys_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_sys_dmac.c
+@@ -42,6 +42,7 @@ static const struct renesas_sdhi_of_data of_rz_compatible = {
+ static const struct renesas_sdhi_of_data of_rcar_gen1_compatible = {
+ .tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_CLK_ACTUAL,
+ .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ,
++ .capabilities2 = MMC_CAP2_NO_WRITE_PROTECT,
+ };
+
+ /* Definitions for sampling clocks */
+@@ -61,6 +62,7 @@ static const struct renesas_sdhi_of_data of_rcar_gen2_compatible = {
+ TMIO_MMC_HAVE_CBSY | TMIO_MMC_MIN_RCAR2,
+ .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
+ MMC_CAP_CMD23,
++ .capabilities2 = MMC_CAP2_NO_WRITE_PROTECT,
+ .dma_buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES,
+ .dma_rx_offset = 0x2000,
+ .scc_offset = 0x0300,
+@@ -81,6 +83,7 @@ static const struct renesas_sdhi_of_data of_rcar_gen3_compatible = {
+ TMIO_MMC_HAVE_CBSY | TMIO_MMC_MIN_RCAR2,
+ .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
+ MMC_CAP_CMD23,
++ .capabilities2 = MMC_CAP2_NO_WRITE_PROTECT,
+ .bus_shift = 2,
+ .scc_offset = 0x1000,
+ .taps = rcar_gen3_scc_taps,
+diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c
+index 692902df2598..3a8a88fa06aa 100644
+--- a/drivers/mtd/chips/cfi_cmdset_0002.c
++++ b/drivers/mtd/chips/cfi_cmdset_0002.c
+@@ -1880,7 +1880,7 @@ static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip,
+ if (time_after(jiffies, timeo) && !chip_ready(map, adr))
+ break;
+
+- if (chip_ready(map, adr)) {
++ if (chip_good(map, adr, datum)) {
+ xip_enable(map, chip, adr);
+ goto op_done;
+ }
+@@ -2515,7 +2515,7 @@ static int cfi_atmel_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
+
+ struct ppb_lock {
+ struct flchip *chip;
+- loff_t offset;
++ unsigned long adr;
+ int locked;
+ };
+
+@@ -2533,8 +2533,9 @@ static int __maybe_unused do_ppb_xxlock(struct map_info *map,
+ unsigned long timeo;
+ int ret;
+
++ adr += chip->start;
+ mutex_lock(&chip->mutex);
+- ret = get_chip(map, chip, adr + chip->start, FL_LOCKING);
++ ret = get_chip(map, chip, adr, FL_LOCKING);
+ if (ret) {
+ mutex_unlock(&chip->mutex);
+ return ret;
+@@ -2552,8 +2553,8 @@ static int __maybe_unused do_ppb_xxlock(struct map_info *map,
+
+ if (thunk == DO_XXLOCK_ONEBLOCK_LOCK) {
+ chip->state = FL_LOCKING;
+- map_write(map, CMD(0xA0), chip->start + adr);
+- map_write(map, CMD(0x00), chip->start + adr);
++ map_write(map, CMD(0xA0), adr);
++ map_write(map, CMD(0x00), adr);
+ } else if (thunk == DO_XXLOCK_ONEBLOCK_UNLOCK) {
+ /*
+ * Unlocking of one specific sector is not supported, so we
+@@ -2591,7 +2592,7 @@ static int __maybe_unused do_ppb_xxlock(struct map_info *map,
+ map_write(map, CMD(0x00), chip->start);
+
+ chip->state = FL_READY;
+- put_chip(map, chip, adr + chip->start);
++ put_chip(map, chip, adr);
+ mutex_unlock(&chip->mutex);
+
+ return ret;
+@@ -2648,9 +2649,9 @@ static int __maybe_unused cfi_ppb_unlock(struct mtd_info *mtd, loff_t ofs,
+ * sectors shall be unlocked, so lets keep their locking
+ * status at "unlocked" (locked=0) for the final re-locking.
+ */
+- if ((adr < ofs) || (adr >= (ofs + len))) {
++ if ((offset < ofs) || (offset >= (ofs + len))) {
+ sect[sectors].chip = &cfi->chips[chipnum];
+- sect[sectors].offset = offset;
++ sect[sectors].adr = adr;
+ sect[sectors].locked = do_ppb_xxlock(
+ map, &cfi->chips[chipnum], adr, 0,
+ DO_XXLOCK_ONEBLOCK_GETLOCK);
+@@ -2664,6 +2665,8 @@ static int __maybe_unused cfi_ppb_unlock(struct mtd_info *mtd, loff_t ofs,
+ i++;
+
+ if (adr >> cfi->chipshift) {
++ if (offset >= (ofs + len))
++ break;
+ adr = 0;
+ chipnum++;
+
+@@ -2694,7 +2697,7 @@ static int __maybe_unused cfi_ppb_unlock(struct mtd_info *mtd, loff_t ofs,
+ */
+ for (i = 0; i < sectors; i++) {
+ if (sect[i].locked)
+- do_ppb_xxlock(map, sect[i].chip, sect[i].offset, 0,
++ do_ppb_xxlock(map, sect[i].chip, sect[i].adr, 0,
+ DO_XXLOCK_ONEBLOCK_LOCK);
+ }
+
+diff --git a/drivers/mtd/nand/raw/denali_dt.c b/drivers/mtd/nand/raw/denali_dt.c
+index cfd33e6ca77f..5869e90cc14b 100644
+--- a/drivers/mtd/nand/raw/denali_dt.c
++++ b/drivers/mtd/nand/raw/denali_dt.c
+@@ -123,7 +123,11 @@ static int denali_dt_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- denali->clk_x_rate = clk_get_rate(dt->clk);
++ /*
++ * Hardcode the clock rate for backward compatibility.
++ * This works for both SOCFPGA and UniPhier.
++ */
++ denali->clk_x_rate = 200000000;
+
+ ret = denali_init(denali);
+ if (ret)
+diff --git a/drivers/mtd/nand/raw/mxc_nand.c b/drivers/mtd/nand/raw/mxc_nand.c
+index 45786e707b7b..26cef218bb43 100644
+--- a/drivers/mtd/nand/raw/mxc_nand.c
++++ b/drivers/mtd/nand/raw/mxc_nand.c
+@@ -48,7 +48,7 @@
+ #define NFC_V1_V2_CONFIG (host->regs + 0x0a)
+ #define NFC_V1_V2_ECC_STATUS_RESULT (host->regs + 0x0c)
+ #define NFC_V1_V2_RSLTMAIN_AREA (host->regs + 0x0e)
+-#define NFC_V1_V2_RSLTSPARE_AREA (host->regs + 0x10)
++#define NFC_V21_RSLTSPARE_AREA (host->regs + 0x10)
+ #define NFC_V1_V2_WRPROT (host->regs + 0x12)
+ #define NFC_V1_UNLOCKSTART_BLKADDR (host->regs + 0x14)
+ #define NFC_V1_UNLOCKEND_BLKADDR (host->regs + 0x16)
+@@ -1274,6 +1274,9 @@ static void preset_v2(struct mtd_info *mtd)
+ writew(config1, NFC_V1_V2_CONFIG1);
+ /* preset operation */
+
++ /* spare area size in 16-bit half-words */
++ writew(mtd->oobsize / 2, NFC_V21_RSLTSPARE_AREA);
++
+ /* Unlock the internal RAM Buffer */
+ writew(0x2, NFC_V1_V2_CONFIG);
+
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index f28c3a555861..7a881000eeba 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -440,7 +440,7 @@ static int nand_block_bad(struct mtd_info *mtd, loff_t ofs)
+
+ for (; page < page_end; page++) {
+ res = chip->ecc.read_oob(mtd, chip, page);
+- if (res)
++ if (res < 0)
+ return res;
+
+ bad = chip->oob_poi[chip->badblockpos];
+@@ -2174,7 +2174,6 @@ static int nand_set_features_op(struct nand_chip *chip, u8 feature,
+ struct mtd_info *mtd = nand_to_mtd(chip);
+ const u8 *params = data;
+ int i, ret;
+- u8 status;
+
+ if (chip->exec_op) {
+ const struct nand_sdr_timings *sdr =
+@@ -2188,26 +2187,18 @@ static int nand_set_features_op(struct nand_chip *chip, u8 feature,
+ };
+ struct nand_operation op = NAND_OPERATION(instrs);
+
+- ret = nand_exec_op(chip, &op);
+- if (ret)
+- return ret;
+-
+- ret = nand_status_op(chip, &status);
+- if (ret)
+- return ret;
+- } else {
+- chip->cmdfunc(mtd, NAND_CMD_SET_FEATURES, feature, -1);
+- for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i)
+- chip->write_byte(mtd, params[i]);
++ return nand_exec_op(chip, &op);
++ }
+
+- ret = chip->waitfunc(mtd, chip);
+- if (ret < 0)
+- return ret;
++ chip->cmdfunc(mtd, NAND_CMD_SET_FEATURES, feature, -1);
++ for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i)
++ chip->write_byte(mtd, params[i]);
+
+- status = ret;
+- }
++ ret = chip->waitfunc(mtd, chip);
++ if (ret < 0)
++ return ret;
+
+- if (status & NAND_STATUS_FAIL)
++ if (ret & NAND_STATUS_FAIL)
+ return -EIO;
+
+ return 0;
+diff --git a/drivers/mtd/nand/raw/nand_macronix.c b/drivers/mtd/nand/raw/nand_macronix.c
+index 7ed1f87e742a..49c546c97c6f 100644
+--- a/drivers/mtd/nand/raw/nand_macronix.c
++++ b/drivers/mtd/nand/raw/nand_macronix.c
+@@ -17,23 +17,47 @@
+
+ #include <linux/mtd/rawnand.h>
+
++/*
++ * The Macronix AC series does not support using SET/GET_FEATURES to change
++ * the timings, unlike what is declared in the parameter page. Unflag
++ * this feature to avoid unnecessary downturns.
++ */
++static void macronix_nand_fix_broken_get_timings(struct nand_chip *chip)
++{
++ unsigned int i;
++ static const char * const broken_get_timings[] = {
++ "MX30LF1G18AC",
++ "MX30LF1G28AC",
++ "MX30LF2G18AC",
++ "MX30LF2G28AC",
++ "MX30LF4G18AC",
++ "MX30LF4G28AC",
++ "MX60LF8G18AC",
++ };
++
++ if (!chip->parameters.supports_set_get_features)
++ return;
++
++ for (i = 0; i < ARRAY_SIZE(broken_get_timings); i++) {
++ if (!strcmp(broken_get_timings[i], chip->parameters.model))
++ break;
++ }
++
++ if (i == ARRAY_SIZE(broken_get_timings))
++ return;
++
++ bitmap_clear(chip->parameters.get_feature_list,
++ ONFI_FEATURE_ADDR_TIMING_MODE, 1);
++ bitmap_clear(chip->parameters.set_feature_list,
++ ONFI_FEATURE_ADDR_TIMING_MODE, 1);
++}
++
+ static int macronix_nand_init(struct nand_chip *chip)
+ {
+ if (nand_is_slc(chip))
+ chip->bbt_options |= NAND_BBT_SCAN2NDPAGE;
+
+- /*
+- * MX30LF2G18AC chip does not support using SET/GET_FEATURES to change
+- * the timings unlike what is declared in the parameter page. Unflag
+- * this feature to avoid unnecessary downturns.
+- */
+- if (chip->parameters.supports_set_get_features &&
+- !strcmp("MX30LF2G18AC", chip->parameters.model)) {
+- bitmap_clear(chip->parameters.get_feature_list,
+- ONFI_FEATURE_ADDR_TIMING_MODE, 1);
+- bitmap_clear(chip->parameters.set_feature_list,
+- ONFI_FEATURE_ADDR_TIMING_MODE, 1);
+- }
++ macronix_nand_fix_broken_get_timings(chip);
+
+ return 0;
+ }
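macronix_nand_fix_broken_get_timings() above turns a one-off model check into a quirk table: only an exact model-string match clears the timing feature bits. The lookup is a plain linear strcmp() scan, shown standalone below with an abbreviated copy of the table:

#include <stdio.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Exact-match scan over the (abbreviated) quirk table. */
static int has_broken_get_timings(const char *model)
{
	static const char * const broken[] = {
		"MX30LF1G18AC",
		"MX30LF2G18AC",
		"MX60LF8G18AC",
	};
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(broken); i++)
		if (!strcmp(broken[i], model))
			return 1;
	return 0;
}

int main(void)
{
	printf("%d %d\n", has_broken_get_timings("MX30LF2G18AC"),
	       has_broken_get_timings("MX30LF2G18AD")); /* prints: 1 0 */
	return 0;
}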
+diff --git a/drivers/mtd/nand/raw/nand_micron.c b/drivers/mtd/nand/raw/nand_micron.c
+index 0af45b134c0c..5ec4c90a637d 100644
+--- a/drivers/mtd/nand/raw/nand_micron.c
++++ b/drivers/mtd/nand/raw/nand_micron.c
+@@ -66,7 +66,9 @@ static int micron_nand_onfi_init(struct nand_chip *chip)
+
+ if (p->supports_set_get_features) {
+ set_bit(ONFI_FEATURE_ADDR_READ_RETRY, p->set_feature_list);
++ set_bit(ONFI_FEATURE_ON_DIE_ECC, p->set_feature_list);
+ set_bit(ONFI_FEATURE_ADDR_READ_RETRY, p->get_feature_list);
++ set_bit(ONFI_FEATURE_ON_DIE_ECC, p->get_feature_list);
+ }
+
+ return 0;
+diff --git a/drivers/mtd/spi-nor/intel-spi.c b/drivers/mtd/spi-nor/intel-spi.c
+index 699951523179..8e98f4ab87c1 100644
+--- a/drivers/mtd/spi-nor/intel-spi.c
++++ b/drivers/mtd/spi-nor/intel-spi.c
+@@ -136,6 +136,7 @@
+ * @swseq_reg: Use SW sequencer in register reads/writes
+ * @swseq_erase: Use SW sequencer in erase operation
+ * @erase_64k: 64k erase supported
++ * @atomic_preopcode: Holds preopcode when atomic sequence is requested
+ * @opcodes: Opcodes which are supported. This are programmed by BIOS
+ * before it locks down the controller.
+ */
+@@ -153,6 +154,7 @@ struct intel_spi {
+ bool swseq_reg;
+ bool swseq_erase;
+ bool erase_64k;
++ u8 atomic_preopcode;
+ u8 opcodes[8];
+ };
+
+@@ -474,7 +476,7 @@ static int intel_spi_sw_cycle(struct intel_spi *ispi, u8 opcode, int len,
+ int optype)
+ {
+ u32 val = 0, status;
+- u16 preop;
++ u8 atomic_preopcode;
+ int ret;
+
+ ret = intel_spi_opcode_index(ispi, opcode, optype);
+@@ -484,17 +486,42 @@ static int intel_spi_sw_cycle(struct intel_spi *ispi, u8 opcode, int len,
+ if (len > INTEL_SPI_FIFO_SZ)
+ return -EINVAL;
+
++ /*
++ * Always clear it after each SW sequencer operation regardless
++ * of whether it is successful or not.
++ */
++ atomic_preopcode = ispi->atomic_preopcode;
++ ispi->atomic_preopcode = 0;
++
+ /* Only mark 'Data Cycle' bit when there is data to be transferred */
+ if (len > 0)
+ val = ((len - 1) << SSFSTS_CTL_DBC_SHIFT) | SSFSTS_CTL_DS;
+ val |= ret << SSFSTS_CTL_COP_SHIFT;
+ val |= SSFSTS_CTL_FCERR | SSFSTS_CTL_FDONE;
+ val |= SSFSTS_CTL_SCGO;
+- preop = readw(ispi->sregs + PREOP_OPTYPE);
+- if (preop) {
+- val |= SSFSTS_CTL_ACS;
+- if (preop >> 8)
+- val |= SSFSTS_CTL_SPOP;
++ if (atomic_preopcode) {
++ u16 preop;
++
++ switch (optype) {
++ case OPTYPE_WRITE_NO_ADDR:
++ case OPTYPE_WRITE_WITH_ADDR:
++ /* Pick matching preopcode for the atomic sequence */
++ preop = readw(ispi->sregs + PREOP_OPTYPE);
++ if ((preop & 0xff) == atomic_preopcode)
++ ; /* Do nothing */
++ else if ((preop >> 8) == atomic_preopcode)
++ val |= SSFSTS_CTL_SPOP;
++ else
++ return -EINVAL;
++
++ /* Enable atomic sequence */
++ val |= SSFSTS_CTL_ACS;
++ break;
++
++ default:
++ return -EINVAL;
++ }
++
+ }
+ writel(val, ispi->sregs + SSFSTS_CTL);
+
+@@ -538,13 +565,31 @@ static int intel_spi_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
+
+ /*
+ * This is handled with atomic operation and preop code in Intel
+- * controller so skip it here now. If the controller is not locked,
+- * program the opcode to the PREOP register for later use.
++ * controller so we only verify that it is available. If the
++ * controller is not locked, program the opcode to the PREOP
++ * register for later use.
++ *
++ * When the hardware sequencer is used there is no need to program
++ * any opcodes (it handles them automatically as part of a command).
+ */
+ if (opcode == SPINOR_OP_WREN) {
+- if (!ispi->locked)
++ u16 preop;
++
++ if (!ispi->swseq_reg)
++ return 0;
++
++ preop = readw(ispi->sregs + PREOP_OPTYPE);
++ if ((preop & 0xff) != opcode && (preop >> 8) != opcode) {
++ if (ispi->locked)
++ return -EINVAL;
+ writel(opcode, ispi->sregs + PREOP_OPTYPE);
++ }
+
++ /*
++ * This enables the atomic sequence on the next SW cycle. It will
++ * be cleared after the next operation.
++ */
++ ispi->atomic_preopcode = opcode;
+ return 0;
+ }
+
+@@ -569,6 +614,13 @@ static ssize_t intel_spi_read(struct spi_nor *nor, loff_t from, size_t len,
+ u32 val, status;
+ ssize_t ret;
+
++ /*
++ * Atomic sequence is not expected with HW sequencer reads. Make
++ * sure it is cleared regardless.
++ */
++ if (WARN_ON_ONCE(ispi->atomic_preopcode))
++ ispi->atomic_preopcode = 0;
++
+ switch (nor->read_opcode) {
+ case SPINOR_OP_READ:
+ case SPINOR_OP_READ_FAST:
+@@ -627,6 +679,9 @@ static ssize_t intel_spi_write(struct spi_nor *nor, loff_t to, size_t len,
+ u32 val, status;
+ ssize_t ret;
+
++ /* Not needed with HW sequencer write; make sure it is cleared */
++ ispi->atomic_preopcode = 0;
++
+ while (len > 0) {
+ block_size = min_t(size_t, len, INTEL_SPI_FIFO_SZ);
+
+@@ -707,6 +762,9 @@ static int intel_spi_erase(struct spi_nor *nor, loff_t offs)
+ return 0;
+ }
+
++ /* Not needed with HW sequencer erase; make sure it is cleared */
++ ispi->atomic_preopcode = 0;
++
+ while (len > 0) {
+ writel(offs, ispi->base + FADDR);
+
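The PREOP_OPTYPE register holds two preopcodes, one in each byte, and the sequencer code above must pick the slot that matches the requested atomic preopcode (setting SSFSTS_CTL_SPOP to select the high byte) or fail. Here is the byte matching in isolation, as a userspace sketch with made-up names; the opcode values in main() are examples only:

#include <stdint.h>
#include <stdio.h>

/* Return 0 for the low-byte slot, 1 for the high-byte slot (the
 * SSFSTS_CTL_SPOP case), or -1 if the wanted opcode is in neither. */
static int match_preop_slot(uint16_t preop_reg, uint8_t wanted)
{
	if ((preop_reg & 0xff) == wanted)
		return 0;
	if ((preop_reg >> 8) == wanted)
		return 1;
	return -1;
}

int main(void)
{
	uint16_t reg = (0x50 << 8) | 0x06; /* example: 0x06 low, 0x50 high */

	printf("%d\n", match_preop_slot(reg, 0x06)); /* 0 */
	printf("%d\n", match_preop_slot(reg, 0x50)); /* 1 */
	printf("%d\n", match_preop_slot(reg, 0xab)); /* -1 */
	return 0;
}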
+diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
+index 753494e042d5..74425af840d6 100644
+--- a/drivers/mtd/ubi/build.c
++++ b/drivers/mtd/ubi/build.c
+@@ -1091,6 +1091,9 @@ int ubi_detach_mtd_dev(int ubi_num, int anyway)
+ if (ubi->bgt_thread)
+ kthread_stop(ubi->bgt_thread);
+
++#ifdef CONFIG_MTD_UBI_FASTMAP
++ cancel_work_sync(&ubi->fm_work);
++#endif
+ ubi_debugfs_exit_dev(ubi);
+ uif_close(ubi);
+
+diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
+index 250e30fac61b..593a4f9d97e3 100644
+--- a/drivers/mtd/ubi/eba.c
++++ b/drivers/mtd/ubi/eba.c
+@@ -490,6 +490,82 @@ int ubi_eba_unmap_leb(struct ubi_device *ubi, struct ubi_volume *vol,
+ return err;
+ }
+
++#ifdef CONFIG_MTD_UBI_FASTMAP
++/**
++ * check_mapping - check and fixup a mapping
++ * @ubi: UBI device description object
++ * @vol: volume description object
++ * @lnum: logical eraseblock number
++ * @pnum: physical eraseblock number
++ *
++ * Checks whether a given mapping is valid. Fastmap cannot track LEB unmap
++ * operations; if such an operation is interrupted, the mapping still looks
++ * good, but upon first read an ECC error is reported to the upper layer.
++ * Normally this is fixed during the full scan at attach time; for Fastmap
++ * we have to deal with it while reading.
++ * If the PEB behind a LEB shows this symptom, we change the mapping to
++ * %UBI_LEB_UNMAPPED and schedule the PEB for erasure.
++ *
++ * Returns 0 on success, negative error code in case of failure.
++ */
++static int check_mapping(struct ubi_device *ubi, struct ubi_volume *vol, int lnum,
++ int *pnum)
++{
++ int err;
++ struct ubi_vid_io_buf *vidb;
++
++ if (!ubi->fast_attach)
++ return 0;
++
++ vidb = ubi_alloc_vid_buf(ubi, GFP_NOFS);
++ if (!vidb)
++ return -ENOMEM;
++
++ err = ubi_io_read_vid_hdr(ubi, *pnum, vidb, 0);
++ if (err > 0 && err != UBI_IO_BITFLIPS) {
++ int torture = 0;
++
++ switch (err) {
++ case UBI_IO_FF:
++ case UBI_IO_FF_BITFLIPS:
++ case UBI_IO_BAD_HDR:
++ case UBI_IO_BAD_HDR_EBADMSG:
++ break;
++ default:
++ ubi_assert(0);
++ }
++
++ if (err == UBI_IO_BAD_HDR_EBADMSG || err == UBI_IO_FF_BITFLIPS)
++ torture = 1;
++
++ down_read(&ubi->fm_eba_sem);
++ vol->eba_tbl->entries[lnum].pnum = UBI_LEB_UNMAPPED;
++ up_read(&ubi->fm_eba_sem);
++ ubi_wl_put_peb(ubi, vol->vol_id, lnum, *pnum, torture);
++
++ *pnum = UBI_LEB_UNMAPPED;
++ } else if (err < 0) {
++ ubi_err(ubi, "unable to read VID header back from PEB %i: %i",
++ *pnum, err);
++
++ goto out_free;
++ }
++
++ err = 0;
++
++out_free:
++ ubi_free_vid_buf(vidb);
++
++ return err;
++}
++#else
++static int check_mapping(struct ubi_device *ubi, struct ubi_volume *vol, int lnum,
++ int *pnum)
++{
++ return 0;
++}
++#endif
++
+ /**
+ * ubi_eba_read_leb - read data.
+ * @ubi: UBI device description object
+@@ -522,7 +598,13 @@ int ubi_eba_read_leb(struct ubi_device *ubi, struct ubi_volume *vol, int lnum,
+ return err;
+
+ pnum = vol->eba_tbl->entries[lnum].pnum;
+- if (pnum < 0) {
++ if (pnum >= 0) {
++ err = check_mapping(ubi, vol, lnum, &pnum);
++ if (err < 0)
++ goto out_unlock;
++ }
++
++ if (pnum == UBI_LEB_UNMAPPED) {
+ /*
+ * The logical eraseblock is not mapped, fill the whole buffer
+ * with 0xFF bytes. The exception is static volumes for which
+@@ -930,6 +1012,12 @@ int ubi_eba_write_leb(struct ubi_device *ubi, struct ubi_volume *vol, int lnum,
+ return err;
+
+ pnum = vol->eba_tbl->entries[lnum].pnum;
++ if (pnum >= 0) {
++ err = check_mapping(ubi, vol, lnum, &pnum);
++ if (err < 0)
++ goto out;
++ }
++
+ if (pnum >= 0) {
+ dbg_eba("write %d bytes at offset %d of LEB %d:%d, PEB %d",
+ len, offset, vol_id, lnum, pnum);
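With the hunks above, both the read and the write path run the same lazy validation: if a cached LEB-to-PEB mapping turns out to point at a PEB whose VID header reads back as erased, the mapping is dropped and the block is treated as unmapped. A toy sketch of that check-then-invalidate shape; header_looks_erased() is invented, and the real code also schedules the stale PEB for erasure:

#include <stdio.h>

#define UNMAPPED (-1)

/* Invented predicate: pretend PEB 7 carries an erased-looking header. */
static int header_looks_erased(int pnum)
{
	return pnum == 7;
}

/* Drop the cached mapping when the backing PEB fails validation. */
static void check_mapping(int *pnum)
{
	if (*pnum >= 0 && header_looks_erased(*pnum))
		*pnum = UNMAPPED;
}

int main(void)
{
	int good = 3, stale = 7;

	check_mapping(&good);
	check_mapping(&stale);
	printf("%d %d\n", good, stale); /* prints: 3 -1 */
	return 0;
}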
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 2052a647220e..f66b3b22f328 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -1505,6 +1505,7 @@ int ubi_thread(void *u)
+ }
+
+ dbg_wl("background thread \"%s\" is killed", ubi->bgt_name);
++ ubi->thread_enabled = 0;
+ return 0;
+ }
+
+@@ -1514,9 +1515,6 @@ int ubi_thread(void *u)
+ */
+ static void shutdown_work(struct ubi_device *ubi)
+ {
+-#ifdef CONFIG_MTD_UBI_FASTMAP
+- flush_work(&ubi->fm_work);
+-#endif
+ while (!list_empty(&ubi->works)) {
+ struct ubi_work *wrk;
+
+diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
+index 38828ab77eb9..1480c094b57d 100644
+--- a/drivers/net/ethernet/ti/davinci_emac.c
++++ b/drivers/net/ethernet/ti/davinci_emac.c
+@@ -1385,6 +1385,11 @@ static int emac_devioctl(struct net_device *ndev, struct ifreq *ifrq, int cmd)
+ return -EOPNOTSUPP;
+ }
+
++static int match_first_device(struct device *dev, void *data)
++{
++ return !strncmp(dev_name(dev), "davinci_mdio", 12);
++}
++
+ /**
+ * emac_dev_open - EMAC device open
+ * @ndev: The DaVinci EMAC network adapter
+@@ -1484,8 +1489,14 @@ static int emac_dev_open(struct net_device *ndev)
+
+ /* use the first phy on the bus if pdata did not give us a phy id */
+ if (!phydev && !priv->phy_id) {
+- phy = bus_find_device_by_name(&mdio_bus_type, NULL,
+- "davinci_mdio");
++ /* NOTE: we can't use bus_find_device_by_name() here because
++ * the device name is not guaranteed to be 'davinci_mdio'. On
++ * some systems it can be 'davinci_mdio.0', so we need to use
++ * strncmp() against the first part of the string to correctly
++ * match it.
++ */
++ phy = bus_find_device(&mdio_bus_type, NULL, NULL,
++ match_first_device);
+ if (phy) {
+ priv->phy_id = dev_name(phy);
+ if (!priv->phy_id || !*priv->phy_id)
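As the comment above explains, the MDIO device name can carry an instance suffix, so the exact-name lookup is replaced with a prefix match on the first 12 characters ("davinci_mdio"). The match function in isolation:

#include <stdio.h>
#include <string.h>

/* Prefix match: accepts "davinci_mdio" as well as "davinci_mdio.0". */
static int is_davinci_mdio(const char *dev_name)
{
	return !strncmp(dev_name, "davinci_mdio", 12);
}

int main(void)
{
	printf("%d\n", is_davinci_mdio("davinci_mdio"));   /* 1 */
	printf("%d\n", is_davinci_mdio("davinci_mdio.0")); /* 1 */
	printf("%d\n", is_davinci_mdio("other_mdio"));     /* 0 */
	return 0;
}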
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index a64023690cad..b9e0d30e317a 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -566,14 +566,18 @@ int nvdimm_revalidate_disk(struct gendisk *disk)
+ {
+ struct device *dev = disk_to_dev(disk)->parent;
+ struct nd_region *nd_region = to_nd_region(dev->parent);
+- const char *pol = nd_region->ro ? "only" : "write";
++ int disk_ro = get_disk_ro(disk);
+
+- if (nd_region->ro == get_disk_ro(disk))
++ /*
++ * Upgrade to read-only if the region is read-only; preserve as
++ * read-only if the disk is already read-only.
++ */
++ if (disk_ro || nd_region->ro == disk_ro)
+ return 0;
+
+- dev_info(dev, "%s read-%s, marking %s read-%s\n",
+- dev_name(&nd_region->dev), pol, disk->disk_name, pol);
+- set_disk_ro(disk, nd_region->ro);
++ dev_info(dev, "%s read-only, marking %s read-only\n",
++ dev_name(&nd_region->dev), disk->disk_name);
++ set_disk_ro(disk, 1);
+
+ return 0;
+
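The revalidate change above makes the disk's read-only flag one-way: it can be upgraded to read-only when the region is read-only, but it is never downgraded back to read-write. The decision table as a small pure function; next_disk_ro() is a made-up name:

#include <stdio.h>

/* Returns the new read-only state for the disk. */
static int next_disk_ro(int disk_ro, int region_ro)
{
	if (disk_ro || disk_ro == region_ro)
		return disk_ro;	/* already read-only, or nothing to do */
	return 1;		/* region is read-only: upgrade the disk */
}

int main(void)
{
	printf("%d\n", next_disk_ro(0, 1)); /* 1: upgraded */
	printf("%d\n", next_disk_ro(1, 0)); /* 1: preserved */
	printf("%d\n", next_disk_ro(0, 0)); /* 0: unchanged */
	return 0;
}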
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 9d714926ecf5..d7193c4a6ee2 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -299,7 +299,7 @@ static int pmem_attach_disk(struct device *dev,
+ {
+ struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
+ struct nd_region *nd_region = to_nd_region(dev->parent);
+- int nid = dev_to_node(dev), fua, wbc;
++ int nid = dev_to_node(dev), fua;
+ struct resource *res = &nsio->res;
+ struct resource bb_res;
+ struct nd_pfn *nd_pfn = NULL;
+@@ -335,7 +335,6 @@ static int pmem_attach_disk(struct device *dev,
+ dev_warn(dev, "unable to guarantee persistence of writes\n");
+ fua = 0;
+ }
+- wbc = nvdimm_has_cache(nd_region);
+
+ if (!devm_request_mem_region(dev, res->start, resource_size(res),
+ dev_name(&ndns->dev))) {
+@@ -382,13 +381,14 @@ static int pmem_attach_disk(struct device *dev,
+ return PTR_ERR(addr);
+ pmem->virt_addr = addr;
+
+- blk_queue_write_cache(q, wbc, fua);
++ blk_queue_write_cache(q, true, fua);
+ blk_queue_make_request(q, pmem_make_request);
+ blk_queue_physical_block_size(q, PAGE_SIZE);
+ blk_queue_logical_block_size(q, pmem_sector_size(ndns));
+ blk_queue_max_hw_sectors(q, UINT_MAX);
+ blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
+- blk_queue_flag_set(QUEUE_FLAG_DAX, q);
++ if (pmem->pfn_flags & PFN_MAP)
++ blk_queue_flag_set(QUEUE_FLAG_DAX, q);
+ q->queuedata = pmem;
+
+ disk = alloc_disk_node(0, nid);
+@@ -413,7 +413,7 @@ static int pmem_attach_disk(struct device *dev,
+ put_disk(disk);
+ return -ENOMEM;
+ }
+- dax_write_cache(dax_dev, wbc);
++ dax_write_cache(dax_dev, nvdimm_has_cache(nd_region));
+ pmem->dax_dev = dax_dev;
+
+ gendev = disk_to_dev(disk);
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index a612be6f019d..ec3543b83330 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -1132,7 +1132,8 @@ EXPORT_SYMBOL_GPL(nvdimm_has_flush);
+
+ int nvdimm_has_cache(struct nd_region *nd_region)
+ {
+- return is_nd_pmem(&nd_region->dev);
++ return is_nd_pmem(&nd_region->dev) &&
++ !test_bit(ND_REGION_PERSIST_CACHE, &nd_region->flags);
+ }
+ EXPORT_SYMBOL_GPL(nvdimm_has_cache);
+
+diff --git a/drivers/of/platform.c b/drivers/of/platform.c
+index c00d81dfac0b..9c91f97ffbe1 100644
+--- a/drivers/of/platform.c
++++ b/drivers/of/platform.c
+@@ -537,6 +537,9 @@ int of_platform_device_destroy(struct device *dev, void *data)
+ if (of_node_check_flag(dev->of_node, OF_POPULATED_BUS))
+ device_for_each_child(dev, NULL, of_platform_device_destroy);
+
++ of_node_clear_flag(dev->of_node, OF_POPULATED);
++ of_node_clear_flag(dev->of_node, OF_POPULATED_BUS);
++
+ if (dev->bus == &platform_bus_type)
+ platform_device_unregister(to_platform_device(dev));
+ #ifdef CONFIG_ARM_AMBA
+@@ -544,8 +547,6 @@ int of_platform_device_destroy(struct device *dev, void *data)
+ amba_device_unregister(to_amba_device(dev));
+ #endif
+
+- of_node_clear_flag(dev->of_node, OF_POPULATED);
+- of_node_clear_flag(dev->of_node, OF_POPULATED_BUS);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(of_platform_device_destroy);
+diff --git a/drivers/of/resolver.c b/drivers/of/resolver.c
+index 65d0b7adfcd4..7edfac6f1914 100644
+--- a/drivers/of/resolver.c
++++ b/drivers/of/resolver.c
+@@ -122,6 +122,11 @@ static int update_usages_of_a_phandle_reference(struct device_node *overlay,
+ goto err_fail;
+ }
+
++ if (offset < 0 || offset + sizeof(__be32) > prop->length) {
++ err = -EINVAL;
++ goto err_fail;
++ }
++
+ *(__be32 *)(prop->value + offset) = cpu_to_be32(phandle);
+ }
+
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 6bb37c18292a..ecee50d10d14 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -165,20 +165,20 @@ static void __init of_unittest_dynamic(void)
+ /* Add a new property - should pass*/
+ prop->name = "new-property";
+ prop->value = "new-property-data";
+- prop->length = strlen(prop->value);
++ prop->length = strlen(prop->value) + 1;
+ unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
+
+ /* Try to add an existing property - should fail */
+ prop++;
+ prop->name = "new-property";
+ prop->value = "new-property-data-should-fail";
+- prop->length = strlen(prop->value);
++ prop->length = strlen(prop->value) + 1;
+ unittest(of_add_property(np, prop) != 0,
+ "Adding an existing property should have failed\n");
+
+ /* Try to modify an existing property - should pass */
+ prop->value = "modify-property-data-should-pass";
+- prop->length = strlen(prop->value);
++ prop->length = strlen(prop->value) + 1;
+ unittest(of_update_property(np, prop) == 0,
+ "Updating an existing property should have passed\n");
+
+@@ -186,7 +186,7 @@ static void __init of_unittest_dynamic(void)
+ prop++;
+ prop->name = "modify-property";
+ prop->value = "modify-missing-property-data-should-pass";
+- prop->length = strlen(prop->value);
++ prop->length = strlen(prop->value) + 1;
+ unittest(of_update_property(np, prop) == 0,
+ "Updating a missing property should have passed\n");
+
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 92fa94a6dcc1..9c3f5e3df232 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -591,7 +591,7 @@ static int _generic_set_opp_regulator(const struct opp_table *opp_table,
+ }
+
+ /* Scaling up? Scale voltage before frequency */
+- if (freq > old_freq) {
++ if (freq >= old_freq) {
+ ret = _set_opp_voltage(dev, reg, new_supply);
+ if (ret)
+ goto restore_voltage;
+diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
+index c75199538c05..da4b457a14e0 100644
+--- a/drivers/pci/host/pci-hyperv.c
++++ b/drivers/pci/host/pci-hyperv.c
+@@ -1596,17 +1596,6 @@ static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
+ get_pcichild(hpdev, hv_pcidev_ref_childlist);
+ spin_lock_irqsave(&hbus->device_list_lock, flags);
+
+- /*
+- * When a device is being added to the bus, we set the PCI domain
+- * number to be the device serial number, which is non-zero and
+- * unique on the same VM. The serial numbers start with 1, and
+- * increase by 1 for each device. So device names including this
+- * can have shorter names than based on the bus instance UUID.
+- * Only the first device serial number is used for domain, so the
+- * domain number will not change after the first device is added.
+- */
+- if (list_empty(&hbus->children))
+- hbus->sysdata.domain = desc->ser;
+ list_add_tail(&hpdev->list_entry, &hbus->children);
+ spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+ return hpdev;
+diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
+index 88e917c9120f..5f892065585e 100644
+--- a/drivers/pci/hotplug/pciehp.h
++++ b/drivers/pci/hotplug/pciehp.h
+@@ -121,7 +121,7 @@ struct controller *pcie_init(struct pcie_device *dev);
+ int pcie_init_notification(struct controller *ctrl);
+ int pciehp_enable_slot(struct slot *p_slot);
+ int pciehp_disable_slot(struct slot *p_slot);
+-void pcie_enable_notification(struct controller *ctrl);
++void pcie_reenable_notification(struct controller *ctrl);
+ int pciehp_power_on_slot(struct slot *slot);
+ void pciehp_power_off_slot(struct slot *slot);
+ void pciehp_get_power_status(struct slot *slot, u8 *status);
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index 332b723ff9e6..44a6a63802d5 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -283,7 +283,7 @@ static int pciehp_resume(struct pcie_device *dev)
+ ctrl = get_service_data(dev);
+
+ /* reinitialize the chipset's event detection logic */
+- pcie_enable_notification(ctrl);
++ pcie_reenable_notification(ctrl);
+
+ slot = ctrl->slot;
+
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 18a42f8f5dc5..98ea75aa32c7 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -659,7 +659,7 @@ static irqreturn_t pcie_isr(int irq, void *dev_id)
+ return handled;
+ }
+
+-void pcie_enable_notification(struct controller *ctrl)
++static void pcie_enable_notification(struct controller *ctrl)
+ {
+ u16 cmd, mask;
+
+@@ -697,6 +697,17 @@ void pcie_enable_notification(struct controller *ctrl)
+ pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd);
+ }
+
++void pcie_reenable_notification(struct controller *ctrl)
++{
++ /*
++ * Clear both Presence and Data Link Layer Changed to make sure
++ * those events still fire after we have re-enabled them.
++ */
++ pcie_capability_write_word(ctrl->pcie->port, PCI_EXP_SLTSTA,
++ PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC);
++ pcie_enable_notification(ctrl);
++}
++
+ static void pcie_disable_notification(struct controller *ctrl)
+ {
+ u16 mask;
+diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
+index b9a131137e64..c816b0683a82 100644
+--- a/drivers/pci/pci-driver.c
++++ b/drivers/pci/pci-driver.c
+@@ -753,10 +753,11 @@ static int pci_pm_suspend(struct device *dev)
+ * better to resume the device from runtime suspend here.
+ */
+ if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+- !pci_dev_keep_suspended(pci_dev))
++ !pci_dev_keep_suspended(pci_dev)) {
+ pm_runtime_resume(dev);
++ pci_dev->state_saved = false;
++ }
+
+- pci_dev->state_saved = false;
+ if (pm->suspend) {
+ pci_power_t prev = pci_dev->current_state;
+ int error;
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index ac91b6fd0bcd..73ac02796ba9 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -2638,7 +2638,14 @@ static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus,
+ for_each_pci_bridge(dev, bus) {
+ cmax = max;
+ max = pci_scan_bridge_extend(bus, dev, max, 0, 0);
+- used_buses += cmax - max;
++
++ /*
++ * Reserve one bus for each bridge now to avoid extending
++ * hotplug bridges too much during the second scan below.
++ */
++ used_buses++;
++ if (cmax - max > 1)
++ used_buses += cmax - max - 1;
+ }
+
+ /* Scan bridges that need to be reconfigured */
+@@ -2661,12 +2668,14 @@ static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus,
+ * bridges if any.
+ */
+ buses = available_buses / hotplug_bridges;
+- buses = min(buses, available_buses - used_buses);
++ buses = min(buses, available_buses - used_buses + 1);
+ }
+
+ cmax = max;
+ max = pci_scan_bridge_extend(bus, dev, cmax, buses, 1);
+- used_buses += max - cmax;
++ /* One bus is already accounted for, so don't add it again */
++ if (max - cmax > 1)
++ used_buses += max - cmax - 1;
+ }
+
+ /*
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 2990ad1e7c99..785a29ba4f51 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4230,11 +4230,29 @@ static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags)
+ * 0xa290-0xa29f PCI Express Root port #{0-16}
+ * 0xa2e7-0xa2ee PCI Express Root port #{17-24}
+ *
++ * Mobile chipsets are also affected; the 7th & 8th Generation
++ * Specification Update confirms ACS erratum 22, status no fix: (7th Generation
++ * Intel Processor Family I/O for U/Y Platforms and 8th Generation Intel
++ * Processor Family I/O for U Quad Core Platforms Specification Update,
++ * August 2017, Revision 002, Document#: 334660-002)[6]
++ * Device IDs from I/O datasheet: (7th Generation Intel Processor Family I/O
++ * for U/Y Platforms and 8th Generation Intel® Processor Family I/O for U
++ * Quad Core Platforms, Vol 1 of 2, August 2017, Document#: 334658-003)[7]
++ *
++ * 0x9d10-0x9d1b PCI Express Root port #{1-12}
++ *
++ * The 300 series chipset suffers from the same bug so include those root
++ * ports here as well.
++ *
++ * 0xa32c-0xa343 PCI Express Root port #{0-24}
++ *
+ * [1] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-2.html
+ * [2] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-1.html
+ * [3] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-spec-update.html
+ * [4] http://www.intel.com/content/www/us/en/chipsets/200-series-chipset-pch-spec-update.html
+ * [5] http://www.intel.com/content/www/us/en/chipsets/200-series-chipset-pch-datasheet-vol-1.html
++ * [6] https://www.intel.com/content/www/us/en/processors/core/7th-gen-core-family-mobile-u-y-processor-lines-i-o-spec-update.html
++ * [7] https://www.intel.com/content/www/us/en/processors/core/7th-gen-core-family-mobile-u-y-processor-lines-i-o-datasheet-vol-1.html
+ */
+ static bool pci_quirk_intel_spt_pch_acs_match(struct pci_dev *dev)
+ {
+@@ -4244,6 +4262,8 @@ static bool pci_quirk_intel_spt_pch_acs_match(struct pci_dev *dev)
+ switch (dev->device) {
+ case 0xa110 ... 0xa11f: case 0xa167 ... 0xa16a: /* Sunrise Point */
+ case 0xa290 ... 0xa29f: case 0xa2e7 ... 0xa2ee: /* Union Point */
++ case 0x9d10 ... 0x9d1b: /* 7th & 8th Gen Mobile */
++ case 0xa32c ... 0xa343: /* 300 series */
+ return true;
+ }
+
+diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c
+index b601039d6c69..c4aa411f5935 100644
+--- a/drivers/pinctrl/devicetree.c
++++ b/drivers/pinctrl/devicetree.c
+@@ -101,10 +101,11 @@ struct pinctrl_dev *of_pinctrl_get(struct device_node *np)
+ }
+
+ static int dt_to_map_one_config(struct pinctrl *p,
+- struct pinctrl_dev *pctldev,
++ struct pinctrl_dev *hog_pctldev,
+ const char *statename,
+ struct device_node *np_config)
+ {
++ struct pinctrl_dev *pctldev = NULL;
+ struct device_node *np_pctldev;
+ const struct pinctrl_ops *ops;
+ int ret;
+@@ -123,8 +124,10 @@ static int dt_to_map_one_config(struct pinctrl *p,
+ return -EPROBE_DEFER;
+ }
+ /* If we're creating a hog we can use the passed pctldev */
+- if (pctldev && (np_pctldev == p->dev->of_node))
++ if (hog_pctldev && (np_pctldev == p->dev->of_node)) {
++ pctldev = hog_pctldev;
+ break;
++ }
+ pctldev = get_pinctrl_dev_from_of_node(np_pctldev);
+ if (pctldev)
+ break;
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 5b63248c8209..7bef929bd7fe 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -679,12 +679,13 @@ static void armada_37xx_irq_handler(struct irq_desc *desc)
+ writel(1 << hwirq,
+ info->base +
+ IRQ_STATUS + 4 * i);
+- continue;
++ goto update_status;
+ }
+ }
+
+ generic_handle_irq(virq);
+
++update_status:
+ /* Update status in case a new IRQ appears */
+ spin_lock_irqsave(&info->irq_lock, flags);
+ status = readl_relaxed(info->base +
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
+index 90c274490181..4f4ae66a0ee3 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
+@@ -105,12 +105,12 @@ static const struct samsung_pin_bank_data s5pv210_pin_bank[] __initconst = {
+ EXYNOS_PIN_BANK_EINTG(7, 0x1c0, "gpg1", 0x38),
+ EXYNOS_PIN_BANK_EINTG(7, 0x1e0, "gpg2", 0x3c),
+ EXYNOS_PIN_BANK_EINTG(7, 0x200, "gpg3", 0x40),
+- EXYNOS_PIN_BANK_EINTN(7, 0x220, "gpi"),
+ EXYNOS_PIN_BANK_EINTG(8, 0x240, "gpj0", 0x44),
+ EXYNOS_PIN_BANK_EINTG(6, 0x260, "gpj1", 0x48),
+ EXYNOS_PIN_BANK_EINTG(8, 0x280, "gpj2", 0x4c),
+ EXYNOS_PIN_BANK_EINTG(8, 0x2a0, "gpj3", 0x50),
+ EXYNOS_PIN_BANK_EINTG(5, 0x2c0, "gpj4", 0x54),
++ EXYNOS_PIN_BANK_EINTN(7, 0x220, "gpi"),
+ EXYNOS_PIN_BANK_EINTN(8, 0x2e0, "mp01"),
+ EXYNOS_PIN_BANK_EINTN(4, 0x300, "mp02"),
+ EXYNOS_PIN_BANK_EINTN(8, 0x320, "mp03"),
+@@ -630,7 +630,6 @@ static const struct samsung_pin_bank_data exynos5410_pin_banks0[] __initconst =
+ EXYNOS_PIN_BANK_EINTG(4, 0x100, "gpc3", 0x20),
+ EXYNOS_PIN_BANK_EINTG(7, 0x120, "gpc1", 0x24),
+ EXYNOS_PIN_BANK_EINTG(7, 0x140, "gpc2", 0x28),
+- EXYNOS_PIN_BANK_EINTN(2, 0x160, "gpm5"),
+ EXYNOS_PIN_BANK_EINTG(8, 0x180, "gpd1", 0x2c),
+ EXYNOS_PIN_BANK_EINTG(8, 0x1A0, "gpe0", 0x30),
+ EXYNOS_PIN_BANK_EINTG(2, 0x1C0, "gpe1", 0x34),
+@@ -641,6 +640,7 @@ static const struct samsung_pin_bank_data exynos5410_pin_banks0[] __initconst =
+ EXYNOS_PIN_BANK_EINTG(2, 0x260, "gpg2", 0x48),
+ EXYNOS_PIN_BANK_EINTG(4, 0x280, "gph0", 0x4c),
+ EXYNOS_PIN_BANK_EINTG(8, 0x2A0, "gph1", 0x50),
++ EXYNOS_PIN_BANK_EINTN(2, 0x160, "gpm5"),
+ EXYNOS_PIN_BANK_EINTN(8, 0x2C0, "gpm7"),
+ EXYNOS_PIN_BANK_EINTN(6, 0x2E0, "gpy0"),
+ EXYNOS_PIN_BANK_EINTN(4, 0x300, "gpy1"),
+diff --git a/drivers/platform/chrome/cros_ec_lpc.c b/drivers/platform/chrome/cros_ec_lpc.c
+index 3682e1539251..31c8b8c49e45 100644
+--- a/drivers/platform/chrome/cros_ec_lpc.c
++++ b/drivers/platform/chrome/cros_ec_lpc.c
+@@ -435,7 +435,13 @@ static int __init cros_ec_lpc_init(void)
+ int ret;
+ acpi_status status;
+
+- if (!dmi_check_system(cros_ec_lpc_dmi_table)) {
++ status = acpi_get_devices(ACPI_DRV_NAME, cros_ec_lpc_parse_device,
++ &cros_ec_lpc_acpi_device_found, NULL);
++ if (ACPI_FAILURE(status))
++ pr_warn(DRV_NAME ": Looking for %s failed\n", ACPI_DRV_NAME);
++
++ if (!cros_ec_lpc_acpi_device_found &&
++ !dmi_check_system(cros_ec_lpc_dmi_table)) {
+ pr_err(DRV_NAME ": unsupported system.\n");
+ return -ENODEV;
+ }
+@@ -450,11 +456,6 @@ static int __init cros_ec_lpc_init(void)
+ return ret;
+ }
+
+- status = acpi_get_devices(ACPI_DRV_NAME, cros_ec_lpc_parse_device,
+- &cros_ec_lpc_acpi_device_found, NULL);
+- if (ACPI_FAILURE(status))
+- pr_warn(DRV_NAME ": Looking for %s failed\n", ACPI_DRV_NAME);
+-
+ if (!cros_ec_lpc_acpi_device_found) {
+ /* Register the device, and it'll get hooked up automatically */
+ ret = platform_device_register(&cros_ec_lpc_device);
+diff --git a/drivers/pwm/pwm-lpss-platform.c b/drivers/pwm/pwm-lpss-platform.c
+index 5d6ed1507d29..5561b9e190f8 100644
+--- a/drivers/pwm/pwm-lpss-platform.c
++++ b/drivers/pwm/pwm-lpss-platform.c
+@@ -74,6 +74,10 @@ static int pwm_lpss_remove_platform(struct platform_device *pdev)
+ return pwm_lpss_remove(lpwm);
+ }
+
++static SIMPLE_DEV_PM_OPS(pwm_lpss_platform_pm_ops,
++ pwm_lpss_suspend,
++ pwm_lpss_resume);
++
+ static const struct acpi_device_id pwm_lpss_acpi_match[] = {
+ { "80860F09", (unsigned long)&pwm_lpss_byt_info },
+ { "80862288", (unsigned long)&pwm_lpss_bsw_info },
+@@ -86,6 +90,7 @@ static struct platform_driver pwm_lpss_driver_platform = {
+ .driver = {
+ .name = "pwm-lpss",
+ .acpi_match_table = pwm_lpss_acpi_match,
++ .pm = &pwm_lpss_platform_pm_ops,
+ },
+ .probe = pwm_lpss_probe_platform,
+ .remove = pwm_lpss_remove_platform,
+diff --git a/drivers/pwm/pwm-lpss.c b/drivers/pwm/pwm-lpss.c
+index 8db0d40ccacd..4721a264bac2 100644
+--- a/drivers/pwm/pwm-lpss.c
++++ b/drivers/pwm/pwm-lpss.c
+@@ -32,10 +32,13 @@
+ /* Size of each PWM register space if multiple */
+ #define PWM_SIZE 0x400
+
++#define MAX_PWMS 4
++
+ struct pwm_lpss_chip {
+ struct pwm_chip chip;
+ void __iomem *regs;
+ const struct pwm_lpss_boardinfo *info;
++ u32 saved_ctrl[MAX_PWMS];
+ };
+
+ static inline struct pwm_lpss_chip *to_lpwm(struct pwm_chip *chip)
+@@ -177,6 +180,9 @@ struct pwm_lpss_chip *pwm_lpss_probe(struct device *dev, struct resource *r,
+ unsigned long c;
+ int ret;
+
++ if (WARN_ON(info->npwm > MAX_PWMS))
++ return ERR_PTR(-ENODEV);
++
+ lpwm = devm_kzalloc(dev, sizeof(*lpwm), GFP_KERNEL);
+ if (!lpwm)
+ return ERR_PTR(-ENOMEM);
+@@ -212,6 +218,30 @@ int pwm_lpss_remove(struct pwm_lpss_chip *lpwm)
+ }
+ EXPORT_SYMBOL_GPL(pwm_lpss_remove);
+
++int pwm_lpss_suspend(struct device *dev)
++{
++ struct pwm_lpss_chip *lpwm = dev_get_drvdata(dev);
++ int i;
++
++ for (i = 0; i < lpwm->info->npwm; i++)
++ lpwm->saved_ctrl[i] = readl(lpwm->regs + i * PWM_SIZE + PWM);
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(pwm_lpss_suspend);
++
++int pwm_lpss_resume(struct device *dev)
++{
++ struct pwm_lpss_chip *lpwm = dev_get_drvdata(dev);
++ int i;
++
++ for (i = 0; i < lpwm->info->npwm; i++)
++ writel(lpwm->saved_ctrl[i], lpwm->regs + i * PWM_SIZE + PWM);
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(pwm_lpss_resume);
++
+ MODULE_DESCRIPTION("PWM driver for Intel LPSS");
+ MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/pwm/pwm-lpss.h b/drivers/pwm/pwm-lpss.h
+index 98306bb02cfe..7a4238ad1fcb 100644
+--- a/drivers/pwm/pwm-lpss.h
++++ b/drivers/pwm/pwm-lpss.h
+@@ -28,5 +28,7 @@ struct pwm_lpss_boardinfo {
+ struct pwm_lpss_chip *pwm_lpss_probe(struct device *dev, struct resource *r,
+ const struct pwm_lpss_boardinfo *info);
+ int pwm_lpss_remove(struct pwm_lpss_chip *lpwm);
++int pwm_lpss_suspend(struct device *dev);
++int pwm_lpss_resume(struct device *dev);
+
+ #endif /* __PWM_LPSS_H */
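Taken together, the three pwm-lpss hunks implement a plain register save/restore power-management pattern: snapshot each channel's control register at suspend, write the snapshots back at resume, and expose the pair through SIMPLE_DEV_PM_OPS. A condensed sketch of the same shape, with hypothetical struct and macro names:

    #define NCH    4             /* matches the MAX_PWMS bound above   */
    #define STRIDE 0x400         /* per-channel register spacing       */
    #define CTRL   0x0           /* control register offset            */

    struct chip { void __iomem *regs; u32 saved[NCH]; int nch; };

    static int chip_suspend(struct device *dev)
    {
            struct chip *c = dev_get_drvdata(dev);
            int i;

            for (i = 0; i < c->nch; i++)    /* snapshot every channel */
                    c->saved[i] = readl(c->regs + i * STRIDE + CTRL);
            return 0;
    }

    static int chip_resume(struct device *dev)
    {
            struct chip *c = dev_get_drvdata(dev);
            int i;

            for (i = 0; i < c->nch; i++)    /* write the snapshots back */
                    writel(c->saved[i], c->regs + i * STRIDE + CTRL);
            return 0;
    }

    static SIMPLE_DEV_PM_OPS(chip_pm_ops, chip_suspend, chip_resume);

The WARN_ON(info->npwm > MAX_PWMS) guard in pwm_lpss_probe() is what keeps the fixed-size saved_ctrl[] array safe against a board description with more channels than expected.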
+diff --git a/drivers/remoteproc/qcom_q6v5_pil.c b/drivers/remoteproc/qcom_q6v5_pil.c
+index cbbafdcaaecb..56b14c27e275 100644
+--- a/drivers/remoteproc/qcom_q6v5_pil.c
++++ b/drivers/remoteproc/qcom_q6v5_pil.c
+@@ -761,13 +761,11 @@ static int q6v5_start(struct rproc *rproc)
+ }
+
+ /* Assign MBA image access in DDR to q6 */
+- xfermemop_ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, true,
+- qproc->mba_phys,
+- qproc->mba_size);
+- if (xfermemop_ret) {
++ ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, true,
++ qproc->mba_phys, qproc->mba_size);
++ if (ret) {
+ dev_err(qproc->dev,
+- "assigning Q6 access to mba memory failed: %d\n",
+- xfermemop_ret);
++ "assigning Q6 access to mba memory failed: %d\n", ret);
+ goto disable_active_clks;
+ }
+
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 5ce9bf7b897d..f63adcd95eb0 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1100,12 +1100,12 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
+ void *info;
+ int ret;
+
+- channel = devm_kzalloc(&edge->dev, sizeof(*channel), GFP_KERNEL);
++ channel = kzalloc(sizeof(*channel), GFP_KERNEL);
+ if (!channel)
+ return ERR_PTR(-ENOMEM);
+
+ channel->edge = edge;
+- channel->name = devm_kstrdup(&edge->dev, name, GFP_KERNEL);
++ channel->name = kstrdup(name, GFP_KERNEL);
+ if (!channel->name)
+ return ERR_PTR(-ENOMEM);
+
+@@ -1156,8 +1156,8 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
+ return channel;
+
+ free_name_and_channel:
+- devm_kfree(&edge->dev, channel->name);
+- devm_kfree(&edge->dev, channel);
++ kfree(channel->name);
++ kfree(channel);
+
+ return ERR_PTR(ret);
+ }
+@@ -1378,13 +1378,13 @@ static int qcom_smd_parse_edge(struct device *dev,
+ */
+ static void qcom_smd_edge_release(struct device *dev)
+ {
+- struct qcom_smd_channel *channel;
++ struct qcom_smd_channel *channel, *tmp;
+ struct qcom_smd_edge *edge = to_smd_edge(dev);
+
+- list_for_each_entry(channel, &edge->channels, list) {
+- SET_RX_CHANNEL_INFO(channel, state, SMD_CHANNEL_CLOSED);
+- SET_RX_CHANNEL_INFO(channel, head, 0);
+- SET_RX_CHANNEL_INFO(channel, tail, 0);
++ list_for_each_entry_safe(channel, tmp, &edge->channels, list) {
++ list_del(&channel->list);
++ kfree(channel->name);
++ kfree(channel);
+ }
+
+ kfree(edge);
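The qcom_smd hunks swap devm_kzalloc()/devm_kstrdup() for plain kzalloc()/kstrdup() because the channels are freed in the edge's struct device release callback, not at driver unbind; devm-managed memory would be reclaimed by the driver core on its own schedule, behind the release callback's back. A condensed sketch of the release path, with shortened type names:

    struct chan { struct list_head list; char *name; };
    struct edge { struct device dev; struct list_head channels; };
    #define to_edge(d) container_of(d, struct edge, dev)

    static void edge_release(struct device *dev)
    {
            struct edge *e = to_edge(dev);
            struct chan *ch, *tmp;

            /* _safe variant: each iteration frees the current node */
            list_for_each_entry_safe(ch, tmp, &e->channels, list) {
                    list_del(&ch->list);
                    kfree(ch->name);     /* kstrdup()'d, so plain kfree */
                    kfree(ch);
            }
            kfree(e);
    }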
+diff --git a/drivers/rtc/rtc-sun6i.c b/drivers/rtc/rtc-sun6i.c
+index 2e6fb275acc8..2cd5a7b1a2e3 100644
+--- a/drivers/rtc/rtc-sun6i.c
++++ b/drivers/rtc/rtc-sun6i.c
+@@ -74,7 +74,7 @@
+ #define SUN6I_ALARM_CONFIG_WAKEUP BIT(0)
+
+ #define SUN6I_LOSC_OUT_GATING 0x0060
+-#define SUN6I_LOSC_OUT_GATING_EN BIT(0)
++#define SUN6I_LOSC_OUT_GATING_EN_OFFSET 0
+
+ /*
+ * Get date values
+@@ -255,7 +255,7 @@ static void __init sun6i_rtc_clk_init(struct device_node *node)
+ &clkout_name);
+ rtc->ext_losc = clk_register_gate(NULL, clkout_name, rtc->hw.init->name,
+ 0, rtc->base + SUN6I_LOSC_OUT_GATING,
+- SUN6I_LOSC_OUT_GATING_EN, 0,
++ SUN6I_LOSC_OUT_GATING_EN_OFFSET, 0,
+ &rtc->lock);
+ if (IS_ERR(rtc->ext_losc)) {
+ pr_crit("Couldn't register the LOSC external gate\n");
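The rtc-sun6i rename is more than cosmetic: clk_register_gate() takes a bit *index* (it computes BIT(bit_idx) internally), so passing the old BIT(0) mask, which equals 1, made the gate operate on bit 1 of the register instead of bit 0. Defining both forms side by side makes the distinction explicit:

    #define LOSC_OUT_GATING            0x0060    /* register offset */
    /* bit index: what clk_register_gate() expects */
    #define LOSC_OUT_GATING_EN_OFFSET  0
    /* mask: what regmap-style update APIs expect */
    #define LOSC_OUT_GATING_EN_MASK    BIT(LOSC_OUT_GATING_EN_OFFSET)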
+diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
+index 18c4f933e8b9..b415ba42ca73 100644
+--- a/drivers/s390/scsi/zfcp_dbf.c
++++ b/drivers/s390/scsi/zfcp_dbf.c
+@@ -664,6 +664,46 @@ void zfcp_dbf_scsi(char *tag, int level, struct scsi_cmnd *sc,
+ spin_unlock_irqrestore(&dbf->scsi_lock, flags);
+ }
+
++/**
++ * zfcp_dbf_scsi_eh() - Trace event for special cases of scsi_eh callbacks.
++ * @tag: Identifier for event.
++ * @adapter: Pointer to zfcp adapter as context for this event.
++ * @scsi_id: SCSI ID/target to indicate scope of task management function (TMF).
++ * @ret: Return value of calling function.
++ *
++ * This SCSI trace variant does not depend on any of:
++ * scsi_cmnd, zfcp_fsf_req, scsi_device.
++ */
++void zfcp_dbf_scsi_eh(char *tag, struct zfcp_adapter *adapter,
++ unsigned int scsi_id, int ret)
++{
++ struct zfcp_dbf *dbf = adapter->dbf;
++ struct zfcp_dbf_scsi *rec = &dbf->scsi_buf;
++ unsigned long flags;
++ static int const level = 1;
++
++ if (unlikely(!debug_level_enabled(adapter->dbf->scsi, level)))
++ return;
++
++ spin_lock_irqsave(&dbf->scsi_lock, flags);
++ memset(rec, 0, sizeof(*rec));
++
++ memcpy(rec->tag, tag, ZFCP_DBF_TAG_LEN);
++ rec->id = ZFCP_DBF_SCSI_CMND;
++ rec->scsi_result = ret; /* re-use field, int is 4 bytes and fits */
++ rec->scsi_retries = ~0;
++ rec->scsi_allowed = ~0;
++ rec->fcp_rsp_info = ~0;
++ rec->scsi_id = scsi_id;
++ rec->scsi_lun = (u32)ZFCP_DBF_INVALID_LUN;
++ rec->scsi_lun_64_hi = (u32)(ZFCP_DBF_INVALID_LUN >> 32);
++ rec->host_scribble = ~0;
++ memset(rec->scsi_opcode, 0xff, ZFCP_DBF_SCSI_OPCODE);
++
++ debug_event(dbf->scsi, level, rec, sizeof(*rec));
++ spin_unlock_irqrestore(&dbf->scsi_lock, flags);
++}
++
+ static debug_info_t *zfcp_dbf_reg(const char *name, int size, int rec_size)
+ {
+ struct debug_info *d;
+diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
+index 1d91a32db08e..69dfb328dba4 100644
+--- a/drivers/s390/scsi/zfcp_erp.c
++++ b/drivers/s390/scsi/zfcp_erp.c
+@@ -35,11 +35,28 @@ enum zfcp_erp_steps {
+ ZFCP_ERP_STEP_LUN_OPENING = 0x2000,
+ };
+
++/**
++ * enum zfcp_erp_act_type - Type of ERP action object.
++ * @ZFCP_ERP_ACTION_REOPEN_LUN: LUN recovery.
++ * @ZFCP_ERP_ACTION_REOPEN_PORT: Port recovery.
++ * @ZFCP_ERP_ACTION_REOPEN_PORT_FORCED: Forced port recovery.
++ * @ZFCP_ERP_ACTION_REOPEN_ADAPTER: Adapter recovery.
++ * @ZFCP_ERP_ACTION_NONE: Eyecatcher pseudo flag to bitwise or-combine with
++ * either of the first four enum values.
++ * Used to indicate that an ERP action could not be
++ * set up despite a detected need for some recovery.
++ * @ZFCP_ERP_ACTION_FAILED: Eyecatcher pseudo flag to bitwise or-combine with
++ * either of the first four enum values.
++ * Used to indicate that ERP not needed because
++ * the object has ZFCP_STATUS_COMMON_ERP_FAILED.
++ */
+ enum zfcp_erp_act_type {
+ ZFCP_ERP_ACTION_REOPEN_LUN = 1,
+ ZFCP_ERP_ACTION_REOPEN_PORT = 2,
+ ZFCP_ERP_ACTION_REOPEN_PORT_FORCED = 3,
+ ZFCP_ERP_ACTION_REOPEN_ADAPTER = 4,
++ ZFCP_ERP_ACTION_NONE = 0xc0,
++ ZFCP_ERP_ACTION_FAILED = 0xe0,
+ };
+
+ enum zfcp_erp_act_state {
+@@ -126,6 +143,49 @@ static void zfcp_erp_action_dismiss_adapter(struct zfcp_adapter *adapter)
+ }
+ }
+
++static int zfcp_erp_handle_failed(int want, struct zfcp_adapter *adapter,
++ struct zfcp_port *port,
++ struct scsi_device *sdev)
++{
++ int need = want;
++ struct zfcp_scsi_dev *zsdev;
++
++ switch (want) {
++ case ZFCP_ERP_ACTION_REOPEN_LUN:
++ zsdev = sdev_to_zfcp(sdev);
++ if (atomic_read(&zsdev->status) & ZFCP_STATUS_COMMON_ERP_FAILED)
++ need = 0;
++ break;
++ case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
++ if (atomic_read(&port->status) & ZFCP_STATUS_COMMON_ERP_FAILED)
++ need = 0;
++ break;
++ case ZFCP_ERP_ACTION_REOPEN_PORT:
++ if (atomic_read(&port->status) &
++ ZFCP_STATUS_COMMON_ERP_FAILED) {
++ need = 0;
++ /* ensure propagation of failed status to new devices */
++ zfcp_erp_set_port_status(
++ port, ZFCP_STATUS_COMMON_ERP_FAILED);
++ }
++ break;
++ case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
++ if (atomic_read(&adapter->status) &
++ ZFCP_STATUS_COMMON_ERP_FAILED) {
++ need = 0;
++ /* ensure propagation of failed status to new devices */
++ zfcp_erp_set_adapter_status(
++ adapter, ZFCP_STATUS_COMMON_ERP_FAILED);
++ }
++ break;
++ default:
++ need = 0;
++ break;
++ }
++
++ return need;
++}
++
+ static int zfcp_erp_required_act(int want, struct zfcp_adapter *adapter,
+ struct zfcp_port *port,
+ struct scsi_device *sdev)
+@@ -249,16 +309,27 @@ static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter,
+ int retval = 1, need;
+ struct zfcp_erp_action *act;
+
+- if (!adapter->erp_thread)
+- return -EIO;
++ need = zfcp_erp_handle_failed(want, adapter, port, sdev);
++ if (!need) {
++ need = ZFCP_ERP_ACTION_FAILED; /* marker for trace */
++ goto out;
++ }
++
++ if (!adapter->erp_thread) {
++ need = ZFCP_ERP_ACTION_NONE; /* marker for trace */
++ retval = -EIO;
++ goto out;
++ }
+
+ need = zfcp_erp_required_act(want, adapter, port, sdev);
+ if (!need)
+ goto out;
+
+ act = zfcp_erp_setup_act(need, act_status, adapter, port, sdev);
+- if (!act)
++ if (!act) {
++ need |= ZFCP_ERP_ACTION_NONE; /* marker for trace */
+ goto out;
++ }
+ atomic_or(ZFCP_STATUS_ADAPTER_ERP_PENDING, &adapter->status);
+ ++adapter->erp_total_count;
+ list_add_tail(&act->list, &adapter->erp_ready_head);
+@@ -269,18 +340,32 @@ static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter,
+ return retval;
+ }
+
++void zfcp_erp_port_forced_no_port_dbf(char *id, struct zfcp_adapter *adapter,
++ u64 port_name, u32 port_id)
++{
++ unsigned long flags;
++ static /* don't waste stack */ struct zfcp_port tmpport;
++
++ write_lock_irqsave(&adapter->erp_lock, flags);
++ /* Stand-in zfcp port with fields just good enough for
++ * zfcp_dbf_rec_trig() and zfcp_dbf_set_common().
++ * Under lock because tmpport is static.
++ */
++ atomic_set(&tmpport.status, -1); /* unknown */
++ tmpport.wwpn = port_name;
++ tmpport.d_id = port_id;
++ zfcp_dbf_rec_trig(id, adapter, &tmpport, NULL,
++ ZFCP_ERP_ACTION_REOPEN_PORT_FORCED,
++ ZFCP_ERP_ACTION_NONE);
++ write_unlock_irqrestore(&adapter->erp_lock, flags);
++}
++
+ static int _zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter,
+ int clear_mask, char *id)
+ {
+ zfcp_erp_adapter_block(adapter, clear_mask);
+ zfcp_scsi_schedule_rports_block(adapter);
+
+- /* ensure propagation of failed status to new devices */
+- if (atomic_read(&adapter->status) & ZFCP_STATUS_COMMON_ERP_FAILED) {
+- zfcp_erp_set_adapter_status(adapter,
+- ZFCP_STATUS_COMMON_ERP_FAILED);
+- return -EIO;
+- }
+ return zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_ADAPTER,
+ adapter, NULL, NULL, id, 0);
+ }
+@@ -299,12 +384,8 @@ void zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear, char *id)
+ zfcp_scsi_schedule_rports_block(adapter);
+
+ write_lock_irqsave(&adapter->erp_lock, flags);
+- if (atomic_read(&adapter->status) & ZFCP_STATUS_COMMON_ERP_FAILED)
+- zfcp_erp_set_adapter_status(adapter,
+- ZFCP_STATUS_COMMON_ERP_FAILED);
+- else
+- zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_ADAPTER, adapter,
+- NULL, NULL, id, 0);
++ zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_ADAPTER, adapter,
++ NULL, NULL, id, 0);
+ write_unlock_irqrestore(&adapter->erp_lock, flags);
+ }
+
+@@ -345,9 +426,6 @@ static void _zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear,
+ zfcp_erp_port_block(port, clear);
+ zfcp_scsi_schedule_rport_block(port);
+
+- if (atomic_read(&port->status) & ZFCP_STATUS_COMMON_ERP_FAILED)
+- return;
+-
+ zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_PORT_FORCED,
+ port->adapter, port, NULL, id, 0);
+ }
+@@ -373,12 +451,6 @@ static int _zfcp_erp_port_reopen(struct zfcp_port *port, int clear, char *id)
+ zfcp_erp_port_block(port, clear);
+ zfcp_scsi_schedule_rport_block(port);
+
+- if (atomic_read(&port->status) & ZFCP_STATUS_COMMON_ERP_FAILED) {
+- /* ensure propagation of failed status to new devices */
+- zfcp_erp_set_port_status(port, ZFCP_STATUS_COMMON_ERP_FAILED);
+- return -EIO;
+- }
+-
+ return zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_PORT,
+ port->adapter, port, NULL, id, 0);
+ }
+@@ -418,9 +490,6 @@ static void _zfcp_erp_lun_reopen(struct scsi_device *sdev, int clear, char *id,
+
+ zfcp_erp_lun_block(sdev, clear);
+
+- if (atomic_read(&zfcp_sdev->status) & ZFCP_STATUS_COMMON_ERP_FAILED)
+- return;
+-
+ zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_LUN, adapter,
+ zfcp_sdev->port, sdev, id, act_status);
+ }
+diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
+index e5eed8aac0ce..65d16747c301 100644
+--- a/drivers/s390/scsi/zfcp_ext.h
++++ b/drivers/s390/scsi/zfcp_ext.h
+@@ -52,10 +52,15 @@ extern void zfcp_dbf_san_res(char *, struct zfcp_fsf_req *);
+ extern void zfcp_dbf_san_in_els(char *, struct zfcp_fsf_req *);
+ extern void zfcp_dbf_scsi(char *, int, struct scsi_cmnd *,
+ struct zfcp_fsf_req *);
++extern void zfcp_dbf_scsi_eh(char *tag, struct zfcp_adapter *adapter,
++ unsigned int scsi_id, int ret);
+
+ /* zfcp_erp.c */
+ extern void zfcp_erp_set_adapter_status(struct zfcp_adapter *, u32);
+ extern void zfcp_erp_clear_adapter_status(struct zfcp_adapter *, u32);
++extern void zfcp_erp_port_forced_no_port_dbf(char *id,
++ struct zfcp_adapter *adapter,
++ u64 port_name, u32 port_id);
+ extern void zfcp_erp_adapter_reopen(struct zfcp_adapter *, int, char *);
+ extern void zfcp_erp_adapter_shutdown(struct zfcp_adapter *, int, char *);
+ extern void zfcp_erp_set_port_status(struct zfcp_port *, u32);
+diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
+index 22f9562f415c..0b6f51424745 100644
+--- a/drivers/s390/scsi/zfcp_scsi.c
++++ b/drivers/s390/scsi/zfcp_scsi.c
+@@ -181,6 +181,7 @@ static int zfcp_scsi_eh_abort_handler(struct scsi_cmnd *scpnt)
+ if (abrt_req)
+ break;
+
++ zfcp_dbf_scsi_abort("abrt_wt", scpnt, NULL);
+ zfcp_erp_wait(adapter);
+ ret = fc_block_scsi_eh(scpnt);
+ if (ret) {
+@@ -277,6 +278,7 @@ static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags)
+ if (fsf_req)
+ break;
+
++ zfcp_dbf_scsi_devreset("wait", scpnt, tm_flags, NULL);
+ zfcp_erp_wait(adapter);
+ ret = fc_block_scsi_eh(scpnt);
+ if (ret) {
+@@ -323,15 +325,16 @@ static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt)
+ {
+ struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(scpnt->device);
+ struct zfcp_adapter *adapter = zfcp_sdev->port->adapter;
+- int ret;
++ int ret = SUCCESS, fc_ret;
+
+ zfcp_erp_adapter_reopen(adapter, 0, "schrh_1");
+ zfcp_erp_wait(adapter);
+- ret = fc_block_scsi_eh(scpnt);
+- if (ret)
+- return ret;
++ fc_ret = fc_block_scsi_eh(scpnt);
++ if (fc_ret)
++ ret = fc_ret;
+
+- return SUCCESS;
++ zfcp_dbf_scsi_eh("schrh_r", adapter, ~0, ret);
++ return ret;
+ }
+
+ struct scsi_transport_template *zfcp_scsi_transport_template;
+@@ -602,6 +605,11 @@ static void zfcp_scsi_terminate_rport_io(struct fc_rport *rport)
+ if (port) {
+ zfcp_erp_port_forced_reopen(port, 0, "sctrpi1");
+ put_device(&port->dev);
++ } else {
++ zfcp_erp_port_forced_no_port_dbf(
++ "sctrpin", adapter,
++ rport->port_name /* zfcp_scsi_rport_register */,
++ rport->port_id /* zfcp_scsi_rport_register */);
+ }
+ }
+
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index 3a9eca163db8..b92f86acb8bb 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -8869,7 +8869,7 @@ static void hpsa_disable_rld_caching(struct ctlr_info *h)
+ kfree(options);
+ }
+
+-static void hpsa_shutdown(struct pci_dev *pdev)
++static void __hpsa_shutdown(struct pci_dev *pdev)
+ {
+ struct ctlr_info *h;
+
+@@ -8884,6 +8884,12 @@ static void hpsa_shutdown(struct pci_dev *pdev)
+ hpsa_disable_interrupt_mode(h); /* pci_init 2 */
+ }
+
++static void hpsa_shutdown(struct pci_dev *pdev)
++{
++ __hpsa_shutdown(pdev);
++ pci_disable_device(pdev);
++}
++
+ static void hpsa_free_device_info(struct ctlr_info *h)
+ {
+ int i;
+@@ -8927,7 +8933,7 @@ static void hpsa_remove_one(struct pci_dev *pdev)
+ scsi_remove_host(h->scsi_host); /* init_one 8 */
+ /* includes hpsa_free_irqs - init_one 4 */
+ /* includes hpsa_disable_interrupt_mode - pci_init 2 */
+- hpsa_shutdown(pdev);
++ __hpsa_shutdown(pdev);
+
+ hpsa_free_device_info(h); /* scan */
+
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 9e914f9c3ffb..05abe5aaab7f 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -3915,7 +3915,6 @@ void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
+ if (memcmp(rp->port_name, fcport->port_name, WWN_SIZE))
+ continue;
+ fcport->scan_state = QLA_FCPORT_FOUND;
+- fcport->d_id.b24 = rp->id.b24;
+ found = true;
+ /*
+ * If device was not a fabric device before.
+@@ -3923,7 +3922,10 @@ void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
+ if ((fcport->flags & FCF_FABRIC_DEVICE) == 0) {
+ qla2x00_clear_loop_id(fcport);
+ fcport->flags |= FCF_FABRIC_DEVICE;
++ } else if (fcport->d_id.b24 != rp->id.b24) {
++ qlt_schedule_sess_for_deletion(fcport);
+ }
++ fcport->d_id.b24 = rp->id.b24;
+ break;
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 8f55dd44adae..636960ad029a 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -5037,7 +5037,8 @@ qla2x00_iidma_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ return;
+
+ if (fcport->fp_speed == PORT_SPEED_UNKNOWN ||
+- fcport->fp_speed > ha->link_data_rate)
++ fcport->fp_speed > ha->link_data_rate ||
++ !ha->flags.gpsc_supported)
+ return;
+
+ rval = qla2x00_set_idma_speed(vha, fcport->loop_id, fcport->fp_speed,
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index a3dc83f9444d..68560a097ae1 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -2494,8 +2494,12 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
+ ox_id = le16_to_cpu(sts24->ox_id);
+ par_sense_len = sizeof(sts24->data);
+ /* Valid values of the retry delay timer are 0x1-0xffef */
+- if (sts24->retry_delay > 0 && sts24->retry_delay < 0xfff1)
+- retry_delay = sts24->retry_delay;
++ if (sts24->retry_delay > 0 && sts24->retry_delay < 0xfff1) {
++ retry_delay = sts24->retry_delay & 0x3fff;
++ ql_dbg(ql_dbg_io, sp->vha, 0x3033,
++ "%s: scope=%#x retry_delay=%#x\n", __func__,
++ sts24->retry_delay >> 14, retry_delay);
++ }
+ } else {
+ if (scsi_status & SS_SENSE_LEN_VALID)
+ sense_len = le16_to_cpu(sts->req_sense_length);
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 025dc2d3f3de..0266c4d07bc9 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -1230,7 +1230,6 @@ static void qla24xx_chk_fcp_state(struct fc_port *sess)
+ void qlt_schedule_sess_for_deletion(struct fc_port *sess)
+ {
+ struct qla_tgt *tgt = sess->tgt;
+- struct qla_hw_data *ha = sess->vha->hw;
+ unsigned long flags;
+
+ if (sess->disc_state == DSC_DELETE_PEND)
+@@ -1247,16 +1246,16 @@ void qlt_schedule_sess_for_deletion(struct fc_port *sess)
+ return;
+ }
+
+- spin_lock_irqsave(&ha->tgt.sess_lock, flags);
+ if (sess->deleted == QLA_SESS_DELETED)
+ sess->logout_on_delete = 0;
+
++ spin_lock_irqsave(&sess->vha->work_lock, flags);
+ if (sess->deleted == QLA_SESS_DELETION_IN_PROGRESS) {
+- spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
++ spin_unlock_irqrestore(&sess->vha->work_lock, flags);
+ return;
+ }
+ sess->deleted = QLA_SESS_DELETION_IN_PROGRESS;
+- spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
++ spin_unlock_irqrestore(&sess->vha->work_lock, flags);
+
+ sess->disc_state = DSC_DELETE_PEND;
+
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 656c98e116a9..e086bb63da46 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -5506,9 +5506,9 @@ static void __exit scsi_debug_exit(void)
+ int k = sdebug_add_host;
+
+ stop_all_queued();
+- free_all_queued();
+ for (; k; k--)
+ sdebug_remove_adapter();
++ free_all_queued();
+ driver_unregister(&sdebug_driverfs_driver);
+ bus_unregister(&pseudo_lld_bus);
+ root_device_unregister(pseudo_primary);
+diff --git a/drivers/soc/rockchip/pm_domains.c b/drivers/soc/rockchip/pm_domains.c
+index 53efc386b1ad..df7f30a425c6 100644
+--- a/drivers/soc/rockchip/pm_domains.c
++++ b/drivers/soc/rockchip/pm_domains.c
+@@ -255,7 +255,7 @@ static void rockchip_do_pmu_set_power_domain(struct rockchip_pm_domain *pd,
+ return;
+ else if (pd->info->pwr_w_mask)
+ regmap_write(pmu->regmap, pmu->info->pwr_offset,
+- on ? pd->info->pwr_mask :
++ on ? pd->info->pwr_w_mask :
+ (pd->info->pwr_mask | pd->info->pwr_w_mask));
+ else
+ regmap_update_bits(pmu->regmap, pmu->info->pwr_offset,
+diff --git a/drivers/thermal/broadcom/bcm2835_thermal.c b/drivers/thermal/broadcom/bcm2835_thermal.c
+index a4d6a0e2e993..23ad4f9f2143 100644
+--- a/drivers/thermal/broadcom/bcm2835_thermal.c
++++ b/drivers/thermal/broadcom/bcm2835_thermal.c
+@@ -213,8 +213,8 @@ static int bcm2835_thermal_probe(struct platform_device *pdev)
+ rate = clk_get_rate(data->clk);
+ if ((rate < 1920000) || (rate > 5000000))
+ dev_warn(&pdev->dev,
+- "Clock %pCn running at %pCr Hz is outside of the recommended range: 1.92 to 5MHz\n",
+- data->clk, data->clk);
++ "Clock %pCn running at %lu Hz is outside of the recommended range: 1.92 to 5MHz\n",
++ data->clk, rate);
+
+ /* register of thermal sensor and get info from DT */
+ tz = thermal_zone_of_sensor_register(&pdev->dev, 0, data,
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index a4f82ec665fe..2051a5309851 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -2890,16 +2890,15 @@ static void serial_console_write(struct console *co, const char *s,
+ unsigned long flags;
+ int locked = 1;
+
+- local_irq_save(flags);
+ #if defined(SUPPORT_SYSRQ)
+ if (port->sysrq)
+ locked = 0;
+ else
+ #endif
+ if (oops_in_progress)
+- locked = spin_trylock(&port->lock);
++ locked = spin_trylock_irqsave(&port->lock, flags);
+ else
+- spin_lock(&port->lock);
++ spin_lock_irqsave(&port->lock, flags);
+
+ /* first save SCSCR then disable interrupts, keep clock source */
+ ctrl = serial_port_in(port, SCSCR);
+@@ -2919,8 +2918,7 @@ static void serial_console_write(struct console *co, const char *s,
+ serial_port_out(port, SCSCR, ctrl);
+
+ if (locked)
+- spin_unlock(&port->lock);
+- local_irq_restore(flags);
++ spin_unlock_irqrestore(&port->lock, flags);
+ }
+
+ static int serial_console_setup(struct console *co, char *options)
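The sh-sci console hunk replaces the open-coded local_irq_save() plus spin_lock() pair with the combined *_irqsave primitives, which keeps lock and interrupt state managed as one operation (some kernel configurations do not treat the open-coded pair as equivalent) and makes the trylock's failure path self-contained: spin_trylock_irqsave() restores the interrupt state itself when it cannot take the lock. The resulting console-write shape, roughly, with emit_chars() standing in for the actual transmit loop:

    static void emit_chars(struct uart_port *port, const char *s,
                           unsigned int n);    /* stand-in */

    static void console_write(struct uart_port *port, const char *s,
                              unsigned int n)
    {
            unsigned long flags;
            int locked = 1;

            if (oops_in_progress)
                    /* never spin during an oops; a wedged lock holder
                     * must not deadlock the panic output */
                    locked = spin_trylock_irqsave(&port->lock, flags);
            else
                    spin_lock_irqsave(&port->lock, flags);

            emit_chars(port, s, n);

            if (locked)
                    spin_unlock_irqrestore(&port->lock, flags);
    }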
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index aa9968d90a48..e3bf65e213cd 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -4551,7 +4551,9 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ * reset. But only on the first attempt,
+ * lest we get into a time out/reset loop
+ */
+- if (r == 0 || (r == -ETIMEDOUT && retries == 0))
++ if (r == 0 || (r == -ETIMEDOUT &&
++ retries == 0 &&
++ udev->speed > USB_SPEED_FULL))
+ break;
+ }
+ udev->descriptor.bMaxPacketSize0 =
+diff --git a/drivers/video/backlight/as3711_bl.c b/drivers/video/backlight/as3711_bl.c
+index 734a9158946b..e55304d5cf07 100644
+--- a/drivers/video/backlight/as3711_bl.c
++++ b/drivers/video/backlight/as3711_bl.c
+@@ -262,10 +262,10 @@ static int as3711_bl_register(struct platform_device *pdev,
+ static int as3711_backlight_parse_dt(struct device *dev)
+ {
+ struct as3711_bl_pdata *pdata = dev_get_platdata(dev);
+- struct device_node *bl =
+- of_find_node_by_name(dev->parent->of_node, "backlight"), *fb;
++ struct device_node *bl, *fb;
+ int ret;
+
++ bl = of_get_child_by_name(dev->parent->of_node, "backlight");
+ if (!bl) {
+ dev_dbg(dev, "backlight node not found\n");
+ return -ENODEV;
+@@ -279,7 +279,7 @@ static int as3711_backlight_parse_dt(struct device *dev)
+ if (pdata->su1_max_uA <= 0)
+ ret = -EINVAL;
+ if (ret < 0)
+- return ret;
++ goto err_put_bl;
+ }
+
+ fb = of_parse_phandle(bl, "su2-dev", 0);
+@@ -292,7 +292,7 @@ static int as3711_backlight_parse_dt(struct device *dev)
+ if (pdata->su2_max_uA <= 0)
+ ret = -EINVAL;
+ if (ret < 0)
+- return ret;
++ goto err_put_bl;
+
+ if (of_find_property(bl, "su2-feedback-voltage", NULL)) {
+ pdata->su2_feedback = AS3711_SU2_VOLTAGE;
+@@ -314,8 +314,10 @@ static int as3711_backlight_parse_dt(struct device *dev)
+ pdata->su2_feedback = AS3711_SU2_CURR_AUTO;
+ count++;
+ }
+- if (count != 1)
+- return -EINVAL;
++ if (count != 1) {
++ ret = -EINVAL;
++ goto err_put_bl;
++ }
+
+ count = 0;
+ if (of_find_property(bl, "su2-fbprot-lx-sd4", NULL)) {
+@@ -334,8 +336,10 @@ static int as3711_backlight_parse_dt(struct device *dev)
+ pdata->su2_fbprot = AS3711_SU2_GPIO4;
+ count++;
+ }
+- if (count != 1)
+- return -EINVAL;
++ if (count != 1) {
++ ret = -EINVAL;
++ goto err_put_bl;
++ }
+
+ count = 0;
+ if (of_find_property(bl, "su2-auto-curr1", NULL)) {
+@@ -355,11 +359,20 @@ static int as3711_backlight_parse_dt(struct device *dev)
+ * At least one su2-auto-curr* must be specified iff
+ * AS3711_SU2_CURR_AUTO is used
+ */
+- if (!count ^ (pdata->su2_feedback != AS3711_SU2_CURR_AUTO))
+- return -EINVAL;
++ if (!count ^ (pdata->su2_feedback != AS3711_SU2_CURR_AUTO)) {
++ ret = -EINVAL;
++ goto err_put_bl;
++ }
+ }
+
++ of_node_put(bl);
++
+ return 0;
++
++err_put_bl:
++ of_node_put(bl);
++
++ return ret;
+ }
+
+ static int as3711_backlight_probe(struct platform_device *pdev)
+diff --git a/drivers/video/backlight/max8925_bl.c b/drivers/video/backlight/max8925_bl.c
+index 7b738d60ecc2..f3aa6088f1d9 100644
+--- a/drivers/video/backlight/max8925_bl.c
++++ b/drivers/video/backlight/max8925_bl.c
+@@ -116,7 +116,7 @@ static void max8925_backlight_dt_init(struct platform_device *pdev)
+ if (!pdata)
+ return;
+
+- np = of_find_node_by_name(nproot, "backlight");
++ np = of_get_child_by_name(nproot, "backlight");
+ if (!np) {
+ dev_err(&pdev->dev, "failed to find backlight node\n");
+ return;
+@@ -125,6 +125,8 @@ static void max8925_backlight_dt_init(struct platform_device *pdev)
+ if (!of_property_read_u32(np, "maxim,max8925-dual-string", &val))
+ pdata->dual_string = val;
+
++ of_node_put(np);
++
+ pdev->dev.platform_data = pdata;
+ }
+
+diff --git a/drivers/video/backlight/tps65217_bl.c b/drivers/video/backlight/tps65217_bl.c
+index 380917c86276..762e3feed097 100644
+--- a/drivers/video/backlight/tps65217_bl.c
++++ b/drivers/video/backlight/tps65217_bl.c
+@@ -184,11 +184,11 @@ static struct tps65217_bl_pdata *
+ tps65217_bl_parse_dt(struct platform_device *pdev)
+ {
+ struct tps65217 *tps = dev_get_drvdata(pdev->dev.parent);
+- struct device_node *node = of_node_get(tps->dev->of_node);
++ struct device_node *node;
+ struct tps65217_bl_pdata *pdata, *err;
+ u32 val;
+
+- node = of_find_node_by_name(node, "backlight");
++ node = of_get_child_by_name(tps->dev->of_node, "backlight");
+ if (!node)
+ return ERR_PTR(-ENODEV);
+
+diff --git a/drivers/video/fbdev/uvesafb.c b/drivers/video/fbdev/uvesafb.c
+index 73676eb0244a..c592ca513115 100644
+--- a/drivers/video/fbdev/uvesafb.c
++++ b/drivers/video/fbdev/uvesafb.c
+@@ -1044,7 +1044,8 @@ static int uvesafb_setcmap(struct fb_cmap *cmap, struct fb_info *info)
+ info->cmap.len || cmap->start < info->cmap.start)
+ return -EINVAL;
+
+- entries = kmalloc(sizeof(*entries) * cmap->len, GFP_KERNEL);
++ entries = kmalloc_array(cmap->len, sizeof(*entries),
++ GFP_KERNEL);
+ if (!entries)
+ return -ENOMEM;
+
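The uvesafb change (and the similar ubifs one further down) replaces an open-coded size multiplication with kmalloc_array(), which returns NULL when the product would overflow instead of silently wrapping and under-allocating:

    /* Open-coded multiply: wraps modulo SIZE_MAX when cmap->len is
     * huge, then allocates far too little:
     *     entries = kmalloc(sizeof(*entries) * cmap->len, GFP_KERNEL);
     * Checked form: fails cleanly with NULL on overflow:
     *     entries = kmalloc_array(cmap->len, sizeof(*entries),
     *                             GFP_KERNEL);
     */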
+diff --git a/drivers/virt/vboxguest/vboxguest_linux.c b/drivers/virt/vboxguest/vboxguest_linux.c
+index 398d22693234..6e2a9619192d 100644
+--- a/drivers/virt/vboxguest/vboxguest_linux.c
++++ b/drivers/virt/vboxguest/vboxguest_linux.c
+@@ -121,7 +121,9 @@ static long vbg_misc_device_ioctl(struct file *filp, unsigned int req,
+ if (!buf)
+ return -ENOMEM;
+
+- if (copy_from_user(buf, (void *)arg, hdr.size_in)) {
++ *((struct vbg_ioctl_hdr *)buf) = hdr;
++ if (copy_from_user(buf + sizeof(hdr), (void *)arg + sizeof(hdr),
++ hdr.size_in - sizeof(hdr))) {
+ ret = -EFAULT;
+ goto out;
+ }
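The vboxguest fix closes a double-fetch window: the ioctl header was copied and validated once, but the follow-up copy re-read the whole buffer, header included, from userspace, so a racing thread could swap in different size fields after validation. The corrected pattern keeps the already-checked header and copies only the payload behind it. A condensed sketch, with valid_hdr() standing in for the driver's actual checks:

    static bool valid_hdr(const struct vbg_ioctl_hdr *h)
    {
            return h->size_in >= sizeof(*h);   /* minimal sanity check */
    }

    static int fetch_req(void __user *uarg, void *buf,
                         struct vbg_ioctl_hdr *hdr)
    {
            if (copy_from_user(hdr, uarg, sizeof(*hdr)))
                    return -EFAULT;
            if (!valid_hdr(hdr))
                    return -EINVAL;

            /* reuse the validated copy; never re-read the header */
            *(struct vbg_ioctl_hdr *)buf = *hdr;
            if (copy_from_user(buf + sizeof(*hdr), uarg + sizeof(*hdr),
                               hdr->size_in - sizeof(*hdr)))
                    return -EFAULT;
            return 0;
    }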
+diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c
+index 80a778b02f28..caef0e0fd817 100644
+--- a/drivers/w1/w1.c
++++ b/drivers/w1/w1.c
+@@ -751,7 +751,7 @@ int w1_attach_slave_device(struct w1_master *dev, struct w1_reg_num *rn)
+
+ /* slave modules need to be loaded in a context with unlocked mutex */
+ mutex_unlock(&dev->mutex);
+- request_module("w1-family-0x%02x", rn->family);
++ request_module("w1-family-0x%02X", rn->family);
+ mutex_lock(&dev->mutex);
+
+ spin_lock(&w1_flock);
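The one-character w1 change matters because request_module() matches module aliases as literal strings, and the family aliases are evidently registered with uppercase hex digits, so the lowercase "%02x" format produced names that never matched and autoloading silently failed. Using a hypothetical family 0x1c for illustration:

    /* Alias matching is byte-for-byte, so the format must reproduce
     * the registered string exactly:
     *   alias:  "w1-family-0x1C"
     *   "w1-family-0x%02x" -> "w1-family-0x1c"   (no match)
     *   "w1-family-0x%02X" -> "w1-family-0x1C"   (module loads)
     */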
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 762378f1811c..08e4af04d6f2 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -628,8 +628,6 @@ static void __unbind_from_irq(unsigned int irq)
+ xen_irq_info_cleanup(info);
+ }
+
+- BUG_ON(info_for_irq(irq)->type == IRQT_UNBOUND);
+-
+ xen_free_irq(irq);
+ }
+
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 775a0f2d0b45..b54a55497216 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9475,6 +9475,7 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ u64 new_idx = 0;
+ u64 root_objectid;
+ int ret;
++ int ret2;
+ bool root_log_pinned = false;
+ bool dest_log_pinned = false;
+
+@@ -9671,7 +9672,8 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ dest_log_pinned = false;
+ }
+ }
+- ret = btrfs_end_transaction(trans);
++ ret2 = btrfs_end_transaction(trans);
++ ret = ret ? ret : ret2;
+ out_notrans:
+ if (new_ino == BTRFS_FIRST_FREE_OBJECTID)
+ up_read(&fs_info->subvol_sem);
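The btrfs hunk is the standard first-error-wins idiom: btrfs_end_transaction() must still run on the error path, but its return value should only be reported when nothing failed earlier, since the first error is the more specific one:

    ret2 = btrfs_end_transaction(trans);  /* cleanup can itself fail    */
    ret = ret ? ret : ret2;               /* but never masks the first
                                             error already recorded     */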
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index bf779461df13..2e23b953d304 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -100,8 +100,10 @@ static struct page *__get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index,
+ * readonly and make sure do not write checkpoint with non-uptodate
+ * meta page.
+ */
+- if (unlikely(!PageUptodate(page)))
++ if (unlikely(!PageUptodate(page))) {
++ memset(page_address(page), 0, PAGE_SIZE);
+ f2fs_stop_checkpoint(sbi, false);
++ }
+ out:
+ return page;
+ }
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index e0d9e8f27ed2..f8ef04c9f69d 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -320,10 +320,10 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
+ make_now:
+ if (ino == F2FS_NODE_INO(sbi)) {
+ inode->i_mapping->a_ops = &f2fs_node_aops;
+- mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_ZERO);
++ mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
+ } else if (ino == F2FS_META_INO(sbi)) {
+ inode->i_mapping->a_ops = &f2fs_meta_aops;
+- mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_ZERO);
++ mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
+ } else if (S_ISREG(inode->i_mode)) {
+ inode->i_op = &f2fs_file_inode_operations;
+ inode->i_fop = &f2fs_file_operations;
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 5854cc4e1d67..be8d1b16b8d1 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -2020,6 +2020,7 @@ static void write_current_sum_page(struct f2fs_sb_info *sbi,
+ struct f2fs_summary_block *dst;
+
+ dst = (struct f2fs_summary_block *)page_address(page);
++ memset(dst, 0, PAGE_SIZE);
+
+ mutex_lock(&curseg->curseg_mutex);
+
+@@ -3116,6 +3117,7 @@ static void write_compacted_summaries(struct f2fs_sb_info *sbi, block_t blkaddr)
+
+ page = grab_meta_page(sbi, blkaddr++);
+ kaddr = (unsigned char *)page_address(page);
++ memset(kaddr, 0, PAGE_SIZE);
+
+ /* Step 1: write nat cache */
+ seg_i = CURSEG_I(sbi, CURSEG_HOT_DATA);
+@@ -3140,6 +3142,7 @@ static void write_compacted_summaries(struct f2fs_sb_info *sbi, block_t blkaddr)
+ if (!page) {
+ page = grab_meta_page(sbi, blkaddr++);
+ kaddr = (unsigned char *)page_address(page);
++ memset(kaddr, 0, PAGE_SIZE);
+ written_size = 0;
+ }
+ summary = (struct f2fs_summary *)(kaddr + written_size);
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 3325d0769723..492ad0c86fa9 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -375,6 +375,7 @@ static inline void seg_info_to_sit_page(struct f2fs_sb_info *sbi,
+ int i;
+
+ raw_sit = (struct f2fs_sit_block *)page_address(page);
++ memset(raw_sit, 0, PAGE_SIZE);
+ for (i = 0; i < end - start; i++) {
+ rs = &raw_sit->entries[i];
+ se = get_seg_entry(sbi, start + i);
+diff --git a/fs/fuse/control.c b/fs/fuse/control.c
+index b9ea99c5b5b3..5be0339dcceb 100644
+--- a/fs/fuse/control.c
++++ b/fs/fuse/control.c
+@@ -211,10 +211,11 @@ static struct dentry *fuse_ctl_add_dentry(struct dentry *parent,
+ if (!dentry)
+ return NULL;
+
+- fc->ctl_dentry[fc->ctl_ndents++] = dentry;
+ inode = new_inode(fuse_control_sb);
+- if (!inode)
++ if (!inode) {
++ dput(dentry);
+ return NULL;
++ }
+
+ inode->i_ino = get_next_ino();
+ inode->i_mode = mode;
+@@ -228,6 +229,9 @@ static struct dentry *fuse_ctl_add_dentry(struct dentry *parent,
+ set_nlink(inode, nlink);
+ inode->i_private = fc;
+ d_add(dentry, inode);
++
++ fc->ctl_dentry[fc->ctl_ndents++] = dentry;
++
+ return dentry;
+ }
+
+@@ -284,7 +288,10 @@ void fuse_ctl_remove_conn(struct fuse_conn *fc)
+ for (i = fc->ctl_ndents - 1; i >= 0; i--) {
+ struct dentry *dentry = fc->ctl_dentry[i];
+ d_inode(dentry)->i_private = NULL;
+- d_drop(dentry);
++ if (!i) {
++ /* Get rid of submounts: */
++ d_invalidate(dentry);
++ }
+ dput(dentry);
+ }
+ drop_nlink(d_inode(fuse_control_sb->s_root));
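Two ordering rules are at work in the fuse control fix: a dentry is stored in fc->ctl_dentry[] only after its inode is attached, so the teardown loop never sees a half-built entry and the failure path can simply dput() what was never published; and the root control dentry is removed with d_invalidate(), which also detaches any submounts, where d_drop() would merely unhash it. The publish-last shape in miniature:

    static struct dentry *ctl_add(struct super_block *sb,
                                  struct dentry *parent, const char *name,
                                  struct dentry **table, int *n)
    {
            struct dentry *dentry = d_alloc_name(parent, name);
            struct inode *inode;

            if (!dentry)
                    return NULL;

            inode = new_inode(sb);
            if (!inode) {
                    dput(dentry);       /* never published; just drop it */
                    return NULL;
            }
            /* ... initialize the inode ... */
            d_add(dentry, inode);

            table[(*n)++] = dentry;     /* visible to teardown only now */
            return dentry;
    }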
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 5d06384c2cae..ee6c9baf8158 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -381,8 +381,7 @@ static void request_end(struct fuse_conn *fc, struct fuse_req *req)
+ if (!fc->blocked && waitqueue_active(&fc->blocked_waitq))
+ wake_up(&fc->blocked_waitq);
+
+- if (fc->num_background == fc->congestion_threshold &&
+- fc->connected && fc->sb) {
++ if (fc->num_background == fc->congestion_threshold && fc->sb) {
+ clear_bdi_congested(fc->sb->s_bdi, BLK_RW_SYNC);
+ clear_bdi_congested(fc->sb->s_bdi, BLK_RW_ASYNC);
+ }
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 24967382a7b1..7a980b4462d9 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1629,8 +1629,19 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
+ return err;
+
+ if (attr->ia_valid & ATTR_OPEN) {
+- if (fc->atomic_o_trunc)
++ /* This is coming from open(..., ... | O_TRUNC); */
++ WARN_ON(!(attr->ia_valid & ATTR_SIZE));
++ WARN_ON(attr->ia_size != 0);
++ if (fc->atomic_o_trunc) {
++ /*
++ * No need to send request to userspace, since actual
++ * truncation has already been done by OPEN. But still
++ * need to truncate page cache.
++ */
++ i_size_write(inode, 0);
++ truncate_pagecache(inode, 0);
+ return 0;
++ }
+ file = NULL;
+ }
+
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index ef309958e060..9b37cf8142b5 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -1179,6 +1179,7 @@ static int fuse_fill_super(struct super_block *sb, void *data, int silent)
+ fuse_dev_free(fud);
+ err_put_conn:
+ fuse_conn_put(fc);
++ sb->s_fs_info = NULL;
+ err_fput:
+ fput(file);
+ err:
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index a50d7813e3ea..180b4b616725 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -420,11 +420,8 @@ validate_seqid(const struct nfs4_slot_table *tbl, const struct nfs4_slot *slot,
+ return htonl(NFS4ERR_SEQ_FALSE_RETRY);
+ }
+
+- /* Wraparound */
+- if (unlikely(slot->seq_nr == 0xFFFFFFFFU)) {
+- if (args->csa_sequenceid == 1)
+- return htonl(NFS4_OK);
+- } else if (likely(args->csa_sequenceid == slot->seq_nr + 1))
++ /* Note: wraparound relies on seq_nr being of type u32 */
++ if (likely(args->csa_sequenceid == slot->seq_nr + 1))
+ return htonl(NFS4_OK);
+
+ /* Misordered request */
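The validate_seqid() simplification leans on C's modular unsigned arithmetic: with seq_nr declared u32, slot->seq_nr + 1 is computed mod 2^32, so a single comparison handles the wrap point uniformly and the explicit 0xFFFFFFFF special case can go. A self-contained illustration of the language property (what wrap semantics NFSv4.1 assigns to sequence IDs is defined by the protocol, not by this sketch):

    static bool is_successor(u32 old, u32 new)
    {
            /* old = 0xFFFFFFFF wraps: old + 1 == 0 */
            return new == old + 1;
    }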
+diff --git a/fs/nfs/nfs4idmap.c b/fs/nfs/nfs4idmap.c
+index 22dc30a679a0..b6f9d84ba19b 100644
+--- a/fs/nfs/nfs4idmap.c
++++ b/fs/nfs/nfs4idmap.c
+@@ -343,7 +343,7 @@ static ssize_t nfs_idmap_lookup_name(__u32 id, const char *type, char *buf,
+ int id_len;
+ ssize_t ret;
+
+- id_len = snprintf(id_str, sizeof(id_str), "%u", id);
++ id_len = nfs_map_numeric_to_string(id, id_str, sizeof(id_str));
+ ret = nfs_idmap_get_key(id_str, id_len, type, buf, buflen, idmap);
+ if (ret < 0)
+ return -EINVAL;
+@@ -627,7 +627,8 @@ static int nfs_idmap_read_and_verify_message(struct idmap_msg *im,
+ if (strcmp(upcall->im_name, im->im_name) != 0)
+ break;
+ /* Note: here we store the NUL terminator too */
+- len = sprintf(id_str, "%d", im->im_id) + 1;
++ len = 1 + nfs_map_numeric_to_string(im->im_id, id_str,
++ sizeof(id_str));
+ ret = nfs_idmap_instantiate(key, authkey, id_str, len);
+ break;
+ case IDMAP_CONV_IDTONAME:
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index b71757e85066..409acdda70dd 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -751,7 +751,7 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ * The slot id we used was probably retired. Try again
+ * using a different slot id.
+ */
+- if (slot->seq_nr < slot->table->target_highest_slotid)
++ if (slot->slot_nr < slot->table->target_highest_slotid)
+ goto session_recover;
+ goto retry_nowait;
+ case -NFS4ERR_SEQ_MISORDERED:
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 1d048dd95464..cfe535c286c3 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3651,7 +3651,8 @@ nfsd4_encode_readdir(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4
+ nfserr = nfserr_resource;
+ goto err_no_verf;
+ }
+- maxcount = min_t(u32, readdir->rd_maxcount, INT_MAX);
++ maxcount = svc_max_payload(resp->rqstp);
++ maxcount = min_t(u32, readdir->rd_maxcount, maxcount);
+ /*
+ * Note the rfc defines rd_maxcount as the size of the
+ * READDIR4resok structure, which includes the verifier above
+@@ -3665,7 +3666,7 @@ nfsd4_encode_readdir(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4
+
+ /* RFC 3530 14.2.24 allows us to ignore dircount when it's 0: */
+ if (!readdir->rd_dircount)
+- readdir->rd_dircount = INT_MAX;
++ readdir->rd_dircount = svc_max_payload(resp->rqstp);
+
+ readdir->xdr = xdr;
+ readdir->rd_maxcount = maxcount;
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index 04c4ec6483e5..8ae1cd8611cc 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -1283,10 +1283,11 @@ static int truncate_data_node(const struct ubifs_info *c, const struct inode *in
+ int *new_len)
+ {
+ void *buf;
+- int err, dlen, compr_type, out_len, old_dlen;
++ int err, compr_type;
++ u32 dlen, out_len, old_dlen;
+
+ out_len = le32_to_cpu(dn->size);
+- buf = kmalloc(out_len * WORST_COMPR_FACTOR, GFP_NOFS);
++ buf = kmalloc_array(out_len, WORST_COMPR_FACTOR, GFP_NOFS);
+ if (!buf)
+ return -ENOMEM;
+
+diff --git a/fs/udf/directory.c b/fs/udf/directory.c
+index 0a98a2369738..3835f983cc99 100644
+--- a/fs/udf/directory.c
++++ b/fs/udf/directory.c
+@@ -152,6 +152,9 @@ struct fileIdentDesc *udf_fileident_read(struct inode *dir, loff_t *nf_pos,
+ sizeof(struct fileIdentDesc));
+ }
+ }
++ /* Got last entry outside of dir size - fs is corrupted! */
++ if (*nf_pos > dir->i_size)
++ return NULL;
+ return fi;
+ }
+
+diff --git a/include/dt-bindings/clock/aspeed-clock.h b/include/dt-bindings/clock/aspeed-clock.h
+index d3558d897a4d..8d69b9134bef 100644
+--- a/include/dt-bindings/clock/aspeed-clock.h
++++ b/include/dt-bindings/clock/aspeed-clock.h
+@@ -45,7 +45,7 @@
+ #define ASPEED_RESET_JTAG_MASTER 3
+ #define ASPEED_RESET_MIC 4
+ #define ASPEED_RESET_PWM 5
+-#define ASPEED_RESET_PCIVGA 6
++#define ASPEED_RESET_PECI 6
+ #define ASPEED_RESET_I2C 7
+ #define ASPEED_RESET_AHB 8
+
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 5c4eee043191..7d047465dfc2 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1124,8 +1124,8 @@ static inline unsigned int blk_max_size_offset(struct request_queue *q,
+ if (!q->limits.chunk_sectors)
+ return q->limits.max_sectors;
+
+- return q->limits.chunk_sectors -
+- (offset & (q->limits.chunk_sectors - 1));
++ return min(q->limits.max_sectors, (unsigned int)(q->limits.chunk_sectors -
++ (offset & (q->limits.chunk_sectors - 1))));
+ }
+
+ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
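The blk_max_size_offset() fix clamps the distance-to-chunk-boundary with the queue's max_sectors instead of returning it directly. A worked example shows why: with chunk_sectors = 256, max_sectors = 128 and offset = 10, the boundary is 256 - (10 & 255) = 246 sectors away, but the queue can only accept 128 sectors per request, so the correct answer is min(128, 246) = 128:

    static unsigned int max_size_at(unsigned int max_sectors,
                                    unsigned int chunk_sectors,
                                    sector_t offset)
    {
            /* chunk_sectors is a power of two, so & works as modulo */
            unsigned int to_boundary =
                    chunk_sectors - (offset & (chunk_sectors - 1));

            return min(max_sectors, to_boundary); /* honor both limits */
    }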
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index ab4711c63601..42506e4d1f53 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -21,7 +21,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+ #define unlikely_notrace(x) __builtin_expect(!!(x), 0)
+
+ #define __branch_check__(x, expect, is_constant) ({ \
+- int ______r; \
++ long ______r; \
+ static struct ftrace_likely_data \
+ __attribute__((__aligned__(4))) \
+ __attribute__((section("_ftrace_annotated_branch"))) \
+diff --git a/include/linux/memory.h b/include/linux/memory.h
+index 31ca3e28b0eb..a6ddefc60517 100644
+--- a/include/linux/memory.h
++++ b/include/linux/memory.h
+@@ -38,6 +38,7 @@ struct memory_block {
+
+ int arch_get_memory_phys_device(unsigned long start_pfn);
+ unsigned long memory_block_size_bytes(void);
++int set_memory_block_size_order(unsigned int order);
+
+ /* These states are exposed to userspace as text strings in sysfs */
+ #define MEM_ONLINE (1<<0) /* exposed to userspace */
+diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
+index 3773e26c08c1..bb93a6c693e3 100644
+--- a/include/linux/slub_def.h
++++ b/include/linux/slub_def.h
+@@ -156,8 +156,12 @@ struct kmem_cache {
+
+ #ifdef CONFIG_SYSFS
+ #define SLAB_SUPPORTS_SYSFS
++void sysfs_slab_unlink(struct kmem_cache *);
+ void sysfs_slab_release(struct kmem_cache *);
+ #else
++static inline void sysfs_slab_unlink(struct kmem_cache *s)
++{
++}
+ static inline void sysfs_slab_release(struct kmem_cache *s)
+ {
+ }
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 9fc8a825aa28..ba015efb5312 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -3310,11 +3310,14 @@ int ib_process_cq_direct(struct ib_cq *cq, int budget);
+ *
+ * Users can examine the cq structure to determine the actual CQ size.
+ */
+-struct ib_cq *ib_create_cq(struct ib_device *device,
+- ib_comp_handler comp_handler,
+- void (*event_handler)(struct ib_event *, void *),
+- void *cq_context,
+- const struct ib_cq_init_attr *cq_attr);
++struct ib_cq *__ib_create_cq(struct ib_device *device,
++ ib_comp_handler comp_handler,
++ void (*event_handler)(struct ib_event *, void *),
++ void *cq_context,
++ const struct ib_cq_init_attr *cq_attr,
++ const char *caller);
++#define ib_create_cq(device, cmp_hndlr, evt_hndlr, cq_ctxt, cq_attr) \
++ __ib_create_cq((device), (cmp_hndlr), (evt_hndlr), (cq_ctxt), (cq_attr), KBUILD_MODNAME)
+
+ /**
+ * ib_resize_cq - Modifies the capacity of the CQ.
+@@ -3734,6 +3737,20 @@ static inline int ib_check_mr_access(int flags)
+ return 0;
+ }
+
++static inline bool ib_access_writable(int access_flags)
++{
++ /*
++ * We have writable memory backing the MR if any of the following
++ * access flags are set. "Local write" and "remote write" obviously
++ * require write access. "Remote atomic" can do things like fetch and
++ * add, which will modify memory, and "MW bind" can change permissions
++ * by binding a window.
++ */
++ return access_flags &
++ (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
++ IB_ACCESS_REMOTE_ATOMIC | IB_ACCESS_MW_BIND);
++}
++
+ /**
+ * ib_check_mr_status: lightweight check of MR status.
+ * This routine may provide status checks on a selected
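The ib_create_cq() change is a caller-tagging macro wrapper: the exported entry point grows a caller string, and a macro with the old name passes KBUILD_MODNAME automatically, so the core learns which module created each CQ without touching a single call site. The pattern in isolation, with hypothetical names:

    struct foo;    /* opaque; stands in for the subsystem's device type */

    struct thing *__create_thing(struct foo *dev, int arg,
                                 const char *caller);

    /* every existing create_thing(dev, arg) call now self-identifies */
    #define create_thing(dev, arg) \
            __create_thing((dev), (arg), KBUILD_MODNAME)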
+diff --git a/include/rdma/rdma_vt.h b/include/rdma/rdma_vt.h
+index 3f4c187e435d..eec495e68823 100644
+--- a/include/rdma/rdma_vt.h
++++ b/include/rdma/rdma_vt.h
+@@ -402,7 +402,7 @@ struct rvt_dev_info {
+ spinlock_t pending_lock; /* protect pending mmap list */
+
+ /* CQ */
+- struct kthread_worker *worker; /* per device cq worker */
++ struct kthread_worker __rcu *worker; /* per device cq worker */
+ u32 n_cqs_allocated; /* number of CQs allocated for device */
+ spinlock_t n_cqs_lock; /* protect count of in use cqs */
+
+diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
+index bc1e507be9ff..776308d2fa9e 100644
+--- a/kernel/locking/rwsem.c
++++ b/kernel/locking/rwsem.c
+@@ -181,6 +181,7 @@ void down_read_non_owner(struct rw_semaphore *sem)
+ might_sleep();
+
+ __down_read(sem);
++ rwsem_set_reader_owned(sem);
+ }
+
+ EXPORT_SYMBOL(down_read_non_owner);
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index 3e3c2004bb23..449d67edfa4b 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -82,6 +82,7 @@ static __printf(2, 0) int printk_safe_log_store(struct printk_safe_seq_buf *s,
+ {
+ int add;
+ size_t len;
++ va_list ap;
+
+ again:
+ len = atomic_read(&s->len);
+@@ -100,7 +101,9 @@ static __printf(2, 0) int printk_safe_log_store(struct printk_safe_seq_buf *s,
+ if (!len)
+ smp_rmb();
+
+- add = vscnprintf(s->buffer + len, sizeof(s->buffer) - len, fmt, args);
++ va_copy(ap, args);
++ add = vscnprintf(s->buffer + len, sizeof(s->buffer) - len, fmt, ap);
++ va_end(ap);
+ if (!add)
+ return 0;
+
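printk_safe_log_store() can loop back to its again: label when the atomic length update races, but a va_list may only be traversed once; the fix takes a va_copy() per formatting attempt so the caller's list is never consumed. The general shape:

    static int format_once(char *dst, size_t avail, const char *fmt,
                           va_list args)
    {
            va_list ap;
            int len;

            va_copy(ap, args);     /* private copy; args stays reusable */
            len = vscnprintf(dst, avail, fmt, ap);
            va_end(ap);
            return len;
    }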
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index 177de3640c78..8a040bcaa033 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -139,9 +139,13 @@ static void __local_bh_enable(unsigned int cnt)
+ {
+ lockdep_assert_irqs_disabled();
+
++ if (preempt_count() == cnt)
++ trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++
+ if (softirq_count() == (cnt & SOFTIRQ_MASK))
+ trace_softirqs_on(_RET_IP_);
+- preempt_count_sub(cnt);
++
++ __preempt_count_sub(cnt);
+ }
+
+ /*
+diff --git a/kernel/time/time.c b/kernel/time/time.c
+index 3044d48ebe56..e8127f4e9e66 100644
+--- a/kernel/time/time.c
++++ b/kernel/time/time.c
+@@ -28,6 +28,7 @@
+ */
+
+ #include <linux/export.h>
++#include <linux/kernel.h>
+ #include <linux/timex.h>
+ #include <linux/capability.h>
+ #include <linux/timekeeper_internal.h>
+@@ -314,9 +315,10 @@ unsigned int jiffies_to_msecs(const unsigned long j)
+ return (j + (HZ / MSEC_PER_SEC) - 1)/(HZ / MSEC_PER_SEC);
+ #else
+ # if BITS_PER_LONG == 32
+- return (HZ_TO_MSEC_MUL32 * j) >> HZ_TO_MSEC_SHR32;
++ return (HZ_TO_MSEC_MUL32 * j + (1ULL << HZ_TO_MSEC_SHR32) - 1) >>
++ HZ_TO_MSEC_SHR32;
+ # else
+- return (j * HZ_TO_MSEC_NUM) / HZ_TO_MSEC_DEN;
++ return DIV_ROUND_UP(j * HZ_TO_MSEC_NUM, HZ_TO_MSEC_DEN);
+ # endif
+ #endif
+ }
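The jiffies_to_msecs() change swaps truncating division for round-up division in the inexact-HZ branches, because rounding down can shorten a duration across a round trip through milliseconds. With HZ = 300 the reduced fraction HZ_TO_MSEC_NUM/HZ_TO_MSEC_DEN is 10/3 (one tick is 3.33 ms), and a quick check shows the difference:

    /* one jiffy at HZ = 300:
     *   truncating:   (1 * 10) / 3            = 3 ms  (0.33 ms lost)
     *   rounding up:  DIV_ROUND_UP(1 * 10, 3) = 4 ms  (never shorter
     *                                                  than the input)
     */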
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 7d306b74230f..c44f74daefbf 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -78,7 +78,8 @@ static const char * ops[] = { OPS };
+ C(TOO_MANY_PREDS, "Too many terms in predicate expression"), \
+ C(INVALID_FILTER, "Meaningless filter expression"), \
+ C(IP_FIELD_ONLY, "Only 'ip' field is supported for function trace"), \
+- C(INVALID_VALUE, "Invalid value (did you forget quotes)?"),
++ C(INVALID_VALUE, "Invalid value (did you forget quotes)?"), \
++ C(NO_FILTER, "No filter found"),
+
+ #undef C
+ #define C(a, b) FILT_ERR_##a
+@@ -550,6 +551,13 @@ predicate_parse(const char *str, int nr_parens, int nr_preds,
+ goto out_free;
+ }
+
++ if (!N) {
++ /* No program? */
++ ret = -EINVAL;
++ parse_error(pe, FILT_ERR_NO_FILTER, ptr - str);
++ goto out_free;
++ }
++
+ prog[N].pred = NULL; /* #13 */
+ prog[N].target = 1; /* TRUE */
+ prog[N+1].pred = NULL;
+diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
+index 3d35d062970d..c253c1b46c6b 100644
+--- a/lib/Kconfig.kasan
++++ b/lib/Kconfig.kasan
+@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
+ config KASAN
+ bool "KASan: runtime memory debugger"
+ depends on SLUB || (SLAB && !DEBUG_SLAB)
++ select SLUB_DEBUG if SLUB
+ select CONSTRUCTORS
+ select STACKDEPOT
+ help
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index 23920c5ff728..91320e5bfd5b 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -1456,9 +1456,6 @@ char *clock(char *buf, char *end, struct clk *clk, struct printf_spec spec,
+ return string(buf, end, NULL, spec);
+
+ switch (fmt[1]) {
+- case 'r':
+- return number(buf, end, clk_get_rate(clk), spec);
+-
+ case 'n':
+ default:
+ #ifdef CONFIG_COMMON_CLK
+diff --git a/mm/gup.c b/mm/gup.c
+index 541904a7c60f..3d8472d48a0b 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1459,32 +1459,48 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
+ return 1;
+ }
+
+-static int __gup_device_huge_pmd(pmd_t pmd, unsigned long addr,
++static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
+ unsigned long end, struct page **pages, int *nr)
+ {
+ unsigned long fault_pfn;
++ int nr_start = *nr;
++
++ fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
++ if (!__gup_device_huge(fault_pfn, addr, end, pages, nr))
++ return 0;
+
+- fault_pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+- return __gup_device_huge(fault_pfn, addr, end, pages, nr);
++ if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
++ undo_dev_pagemap(nr, nr_start, pages);
++ return 0;
++ }
++ return 1;
+ }
+
+-static int __gup_device_huge_pud(pud_t pud, unsigned long addr,
++static int __gup_device_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
+ unsigned long end, struct page **pages, int *nr)
+ {
+ unsigned long fault_pfn;
++ int nr_start = *nr;
++
++ fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
++ if (!__gup_device_huge(fault_pfn, addr, end, pages, nr))
++ return 0;
+
+- fault_pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+- return __gup_device_huge(fault_pfn, addr, end, pages, nr);
++ if (unlikely(pud_val(orig) != pud_val(*pudp))) {
++ undo_dev_pagemap(nr, nr_start, pages);
++ return 0;
++ }
++ return 1;
+ }
+ #else
+-static int __gup_device_huge_pmd(pmd_t pmd, unsigned long addr,
++static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
+ unsigned long end, struct page **pages, int *nr)
+ {
+ BUILD_BUG();
+ return 0;
+ }
+
+-static int __gup_device_huge_pud(pud_t pud, unsigned long addr,
++static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
+ unsigned long end, struct page **pages, int *nr)
+ {
+ BUILD_BUG();
+@@ -1502,7 +1518,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
+ return 0;
+
+ if (pmd_devmap(orig))
+- return __gup_device_huge_pmd(orig, addr, end, pages, nr);
++ return __gup_device_huge_pmd(orig, pmdp, addr, end, pages, nr);
+
+ refs = 0;
+ page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+@@ -1540,7 +1556,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
+ return 0;
+
+ if (pud_devmap(orig))
+- return __gup_device_huge_pud(orig, addr, end, pages, nr);
++ return __gup_device_huge_pud(orig, pudp, addr, end, pages, nr);
+
+ refs = 0;
+ page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
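The gup change applies the lockless-walk revalidation rule: fast GUP takes page references without holding page-table locks, so after __gup_device_huge() pins the pages the original pmd/pud entry must be re-read, and if it changed underneath us the just-taken references are rolled back with undo_dev_pagemap(). The added tail of each helper is exactly that check:

    if (!__gup_device_huge(fault_pfn, addr, end, pages, nr))
            return 0;

    /* the entry may have been split or zapped while we pinned pages */
    if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
            undo_dev_pagemap(nr, nr_start, pages);  /* drop the pins */
            return 0;
    }
    return 1;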
+diff --git a/mm/ksm.c b/mm/ksm.c
+index e3cbf9a92f3c..e6a9640580fc 100644
+--- a/mm/ksm.c
++++ b/mm/ksm.c
+@@ -199,6 +199,8 @@ struct rmap_item {
+ #define SEQNR_MASK 0x0ff /* low bits of unstable tree seqnr */
+ #define UNSTABLE_FLAG 0x100 /* is a node of the unstable tree */
+ #define STABLE_FLAG 0x200 /* is listed from the stable tree */
++#define KSM_FLAG_MASK (SEQNR_MASK|UNSTABLE_FLAG|STABLE_FLAG)
++ /* to mask all the flags */
+
+ /* The stable and unstable tree heads */
+ static struct rb_root one_stable_tree[1] = { RB_ROOT };
+@@ -2570,10 +2572,15 @@ void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
+ anon_vma_lock_read(anon_vma);
+ anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
+ 0, ULONG_MAX) {
++ unsigned long addr;
++
+ cond_resched();
+ vma = vmac->vma;
+- if (rmap_item->address < vma->vm_start ||
+- rmap_item->address >= vma->vm_end)
++
++ /* Ignore the stable/unstable/sqnr flags */
++ addr = rmap_item->address & ~KSM_FLAG_MASK;
++
++ if (addr < vma->vm_start || addr >= vma->vm_end)
+ continue;
+ /*
+ * Initially we examine only the vma which covers this
+@@ -2587,8 +2594,7 @@ void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
+ if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
+ continue;
+
+- if (!rwc->rmap_one(page, vma,
+- rmap_item->address, rwc->arg)) {
++ if (!rwc->rmap_one(page, vma, addr, rwc->arg)) {
+ anon_vma_unlock_read(anon_vma);
+ return;
+ }
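The ksm fix is a packed-field reminder: rmap_item->address keeps tree-state flags in its low bits, so any use of it as a real user address must strip those bits first, which the new KSM_FLAG_MASK makes explicit:

    /* strip the stable/unstable/seqnr flag bits before comparing */
    unsigned long addr = rmap_item->address & ~KSM_FLAG_MASK;

    if (addr < vma->vm_start || addr >= vma->vm_end)
            continue;           /* now a valid range test */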
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 98dcdc352062..65408ced18f1 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -566,10 +566,14 @@ static int shutdown_cache(struct kmem_cache *s)
+ list_del(&s->list);
+
+ if (s->flags & SLAB_TYPESAFE_BY_RCU) {
++#ifdef SLAB_SUPPORTS_SYSFS
++ sysfs_slab_unlink(s);
++#endif
+ list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
+ schedule_work(&slab_caches_to_rcu_destroy_work);
+ } else {
+ #ifdef SLAB_SUPPORTS_SYSFS
++ sysfs_slab_unlink(s);
+ sysfs_slab_release(s);
+ #else
+ slab_kmem_cache_release(s);
+diff --git a/mm/slub.c b/mm/slub.c
+index 44aa7847324a..613c8dc2f409 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -5714,7 +5714,6 @@ static void sysfs_slab_remove_workfn(struct work_struct *work)
+ kset_unregister(s->memcg_kset);
+ #endif
+ kobject_uevent(&s->kobj, KOBJ_REMOVE);
+- kobject_del(&s->kobj);
+ out:
+ kobject_put(&s->kobj);
+ }
+@@ -5799,6 +5798,12 @@ static void sysfs_slab_remove(struct kmem_cache *s)
+ schedule_work(&s->kobj_remove_work);
+ }
+
++void sysfs_slab_unlink(struct kmem_cache *s)
++{
++ if (slab_state >= FULL)
++ kobject_del(&s->kobj);
++}
++
+ void sysfs_slab_release(struct kmem_cache *s)
+ {
+ if (slab_state >= FULL)
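The slab_common/slub pair splits sysfs teardown into two phases: sysfs_slab_unlink() (kobject_del()) removes the name from sysfs immediately, while sysfs_slab_release() (kobject_put()) frees the object later. For SLAB_TYPESAFE_BY_RCU caches, destruction is deferred past an RCU grace period, and unlinking up front keeps the stale sysfs entry from colliding with a re-created cache of the same name:

    sysfs_slab_unlink(s);   /* phase 1: name disappears now, reusable   */
    /* ... RCU grace period; deferred destroy work runs ... */
    sysfs_slab_release(s);  /* phase 2: last reference drops the memory */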
+diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
+index e8adad33d0bb..8e531ac9bc87 100644
+--- a/net/sunrpc/xprtrdma/rpc_rdma.c
++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -230,7 +230,7 @@ rpcrdma_convert_iovs(struct rpcrdma_xprt *r_xprt, struct xdr_buf *xdrbuf,
+ */
+ *ppages = alloc_page(GFP_ATOMIC);
+ if (!*ppages)
+- return -EAGAIN;
++ return -ENOBUFS;
+ }
+ seg->mr_page = *ppages;
+ seg->mr_offset = (char *)page_base;
+diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
+index 245160373dab..cbf227d12c2b 100644
+--- a/security/selinux/selinuxfs.c
++++ b/security/selinux/selinuxfs.c
+@@ -435,22 +435,16 @@ static int sel_release_policy(struct inode *inode, struct file *filp)
+ static ssize_t sel_read_policy(struct file *filp, char __user *buf,
+ size_t count, loff_t *ppos)
+ {
+- struct selinux_fs_info *fsi = file_inode(filp)->i_sb->s_fs_info;
+ struct policy_load_memory *plm = filp->private_data;
+ int ret;
+
+- mutex_lock(&fsi->mutex);
+-
+ ret = avc_has_perm(&selinux_state,
+ current_sid(), SECINITSID_SECURITY,
+ SECCLASS_SECURITY, SECURITY__READ_POLICY, NULL);
+ if (ret)
+- goto out;
++ return ret;
+
+- ret = simple_read_from_buffer(buf, count, ppos, plm->data, plm->len);
+-out:
+- mutex_unlock(&fsi->mutex);
+- return ret;
++ return simple_read_from_buffer(buf, count, ppos, plm->data, plm->len);
+ }
+
+ static int sel_mmap_policy_fault(struct vm_fault *vmf)
+@@ -1182,25 +1176,29 @@ static ssize_t sel_read_bool(struct file *filep, char __user *buf,
+ ret = -EINVAL;
+ if (index >= fsi->bool_num || strcmp(name,
+ fsi->bool_pending_names[index]))
+- goto out;
++ goto out_unlock;
+
+ ret = -ENOMEM;
+ page = (char *)get_zeroed_page(GFP_KERNEL);
+ if (!page)
+- goto out;
++ goto out_unlock;
+
+ cur_enforcing = security_get_bool_value(fsi->state, index);
+ if (cur_enforcing < 0) {
+ ret = cur_enforcing;
+- goto out;
++ goto out_unlock;
+ }
+ length = scnprintf(page, PAGE_SIZE, "%d %d", cur_enforcing,
+ fsi->bool_pending_values[index]);
+- ret = simple_read_from_buffer(buf, count, ppos, page, length);
+-out:
+ mutex_unlock(&fsi->mutex);
++ ret = simple_read_from_buffer(buf, count, ppos, page, length);
++out_free:
+ free_page((unsigned long)page);
+ return ret;
++
++out_unlock:
++ mutex_unlock(&fsi->mutex);
++ goto out_free;
+ }
+
+ static ssize_t sel_write_bool(struct file *filep, const char __user *buf,
+@@ -1213,6 +1211,17 @@ static ssize_t sel_write_bool(struct file *filep, const char __user *buf,
+ unsigned index = file_inode(filep)->i_ino & SEL_INO_MASK;
+ const char *name = filep->f_path.dentry->d_name.name;
+
++ if (count >= PAGE_SIZE)
++ return -ENOMEM;
++
++ /* No partial writes. */
++ if (*ppos != 0)
++ return -EINVAL;
++
++ page = memdup_user_nul(buf, count);
++ if (IS_ERR(page))
++ return PTR_ERR(page);
++
+ mutex_lock(&fsi->mutex);
+
+ length = avc_has_perm(&selinux_state,
+@@ -1227,22 +1236,6 @@ static ssize_t sel_write_bool(struct file *filep, const char __user *buf,
+ fsi->bool_pending_names[index]))
+ goto out;
+
+- length = -ENOMEM;
+- if (count >= PAGE_SIZE)
+- goto out;
+-
+- /* No partial writes. */
+- length = -EINVAL;
+- if (*ppos != 0)
+- goto out;
+-
+- page = memdup_user_nul(buf, count);
+- if (IS_ERR(page)) {
+- length = PTR_ERR(page);
+- page = NULL;
+- goto out;
+- }
+-
+ length = -EINVAL;
+ if (sscanf(page, "%d", &new_value) != 1)
+ goto out;
+@@ -1274,6 +1267,17 @@ static ssize_t sel_commit_bools_write(struct file *filep,
+ ssize_t length;
+ int new_value;
+
++ if (count >= PAGE_SIZE)
++ return -ENOMEM;
++
++ /* No partial writes. */
++ if (*ppos != 0)
++ return -EINVAL;
++
++ page = memdup_user_nul(buf, count);
++ if (IS_ERR(page))
++ return PTR_ERR(page);
++
+ mutex_lock(&fsi->mutex);
+
+ length = avc_has_perm(&selinux_state,
+@@ -1283,22 +1287,6 @@ static ssize_t sel_commit_bools_write(struct file *filep,
+ if (length)
+ goto out;
+
+- length = -ENOMEM;
+- if (count >= PAGE_SIZE)
+- goto out;
+-
+- /* No partial writes. */
+- length = -EINVAL;
+- if (*ppos != 0)
+- goto out;
+-
+- page = memdup_user_nul(buf, count);
+- if (IS_ERR(page)) {
+- length = PTR_ERR(page);
+- page = NULL;
+- goto out;
+- }
+-
+ length = -EINVAL;
+ if (sscanf(page, "%d", &new_value) != 1)
+ goto out;
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 0ddcae495838..e9e73edb4bd8 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -1517,7 +1517,7 @@ static int snd_timer_user_next_device(struct snd_timer_id __user *_tid)
+ } else {
+ if (id.subdevice < 0)
+ id.subdevice = 0;
+- else
++ else if (id.subdevice < INT_MAX)
+ id.subdevice++;
+ }
+ }
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 5bc3a7468e17..4d26bb010ddf 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2887,8 +2887,9 @@ static int hda_codec_runtime_suspend(struct device *dev)
+ list_for_each_entry(pcm, &codec->pcm_list_head, list)
+ snd_pcm_suspend_all(pcm->pcm);
+ state = hda_call_codec_suspend(codec);
+- if (codec_has_clkstop(codec) && codec_has_epss(codec) &&
+- (state & AC_PWRST_CLK_STOP_OK))
++ if (codec->link_down_at_suspend ||
++ (codec_has_clkstop(codec) && codec_has_epss(codec) &&
++ (state & AC_PWRST_CLK_STOP_OK)))
+ snd_hdac_codec_link_down(&codec->core);
+ snd_hdac_link_power(&codec->core, false);
+ return 0;
+diff --git a/sound/pci/hda/hda_codec.h b/sound/pci/hda/hda_codec.h
+index 681c360f29f9..a8b1b31f161c 100644
+--- a/sound/pci/hda/hda_codec.h
++++ b/sound/pci/hda/hda_codec.h
+@@ -258,6 +258,7 @@ struct hda_codec {
+ unsigned int power_save_node:1; /* advanced PM for each widget */
+ unsigned int auto_runtime_pm:1; /* enable automatic codec runtime pm */
+ unsigned int force_pin_prefix:1; /* Add location prefix */
++ unsigned int link_down_at_suspend:1; /* link down at runtime suspend */
+ #ifdef CONFIG_PM
+ unsigned long power_on_acct;
+ unsigned long power_off_acct;
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 7d7eb1354eee..ed39a77f9253 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -3741,6 +3741,11 @@ static int patch_atihdmi(struct hda_codec *codec)
+
+ spec->chmap.channels_max = max(spec->chmap.channels_max, 8u);
+
++ /* AMD GPUs have neither EPSS nor CLKSTOP bits, hence preventing
++ * the link-down as is. Tell the core to allow it.
++ */
++ codec->link_down_at_suspend = 1;
++
+ return 0;
+ }
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 06c2c80a045b..cb9a977bf188 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2542,6 +2542,7 @@ static const struct snd_pci_quirk alc262_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x10cf, 0x1397, "Fujitsu Lifebook S7110", ALC262_FIXUP_FSC_S7110),
+ SND_PCI_QUIRK(0x10cf, 0x142d, "Fujitsu Lifebook E8410", ALC262_FIXUP_BENQ),
+ SND_PCI_QUIRK(0x10f1, 0x2915, "Tyan Thunder n6650W", ALC262_FIXUP_TYAN),
++ SND_PCI_QUIRK(0x1734, 0x1141, "FSC ESPRIMO U9210", ALC262_FIXUP_FSC_H270),
+ SND_PCI_QUIRK(0x1734, 0x1147, "FSC Celsius H270", ALC262_FIXUP_FSC_H270),
+ SND_PCI_QUIRK(0x17aa, 0x384e, "Lenovo 3000", ALC262_FIXUP_LENOVO_3000),
+ SND_PCI_QUIRK(0x17ff, 0x0560, "Benq ED8", ALC262_FIXUP_BENQ),
+@@ -4985,7 +4986,6 @@ static void alc_fixup_tpt440_dock(struct hda_codec *codec,
+ struct alc_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+- spec->shutup = alc_no_shutup; /* reduce click noise */
+ spec->reboot_notify = alc_d3_at_reboot; /* reduce noise */
+ spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
+ codec->power_save_node = 0; /* avoid click noises */
+@@ -5384,6 +5384,13 @@ static void alc274_fixup_bind_dacs(struct hda_codec *codec,
+ /* for hda_fixup_thinkpad_acpi() */
+ #include "thinkpad_helper.c"
+
++static void alc_fixup_thinkpad_acpi(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ alc_fixup_no_shutup(codec, fix, action); /* reduce click noise */
++ hda_fixup_thinkpad_acpi(codec, fix, action);
++}
++
+ /* for dell wmi mic mute led */
+ #include "dell_wmi_helper.c"
+
+@@ -5927,7 +5934,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ },
+ [ALC269_FIXUP_THINKPAD_ACPI] = {
+ .type = HDA_FIXUP_FUNC,
+- .v.func = hda_fixup_thinkpad_acpi,
++ .v.func = alc_fixup_thinkpad_acpi,
+ .chained = true,
+ .chain_id = ALC269_FIXUP_SKU_IGNORE,
+ },
+@@ -6577,8 +6584,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
++ SND_PCI_QUIRK(0x17aa, 0x312a, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+- SND_PCI_QUIRK(0x17aa, 0x3138, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
++ SND_PCI_QUIRK(0x17aa, 0x3136, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+@@ -6756,6 +6764,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ {0x14, 0x90170110},
+ {0x19, 0x02a11030},
+ {0x21, 0x02211020}),
++ SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
++ {0x14, 0x90170110},
++ {0x19, 0x02a11030},
++ {0x1a, 0x02a11040},
++ {0x1b, 0x01014020},
++ {0x21, 0x0221101f}),
+ SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ {0x12, 0x90a60140},
+ {0x14, 0x90170110},
+diff --git a/sound/soc/cirrus/edb93xx.c b/sound/soc/cirrus/edb93xx.c
+index c53bd6f2c2d7..3d011abaa266 100644
+--- a/sound/soc/cirrus/edb93xx.c
++++ b/sound/soc/cirrus/edb93xx.c
+@@ -67,7 +67,7 @@ static struct snd_soc_dai_link edb93xx_dai = {
+ .cpu_dai_name = "ep93xx-i2s",
+ .codec_name = "spi0.0",
+ .codec_dai_name = "cs4271-hifi",
+- .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_IF |
++ .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF |
+ SND_SOC_DAIFMT_CBS_CFS,
+ .ops = &edb93xx_ops,
+ };
+diff --git a/sound/soc/cirrus/ep93xx-i2s.c b/sound/soc/cirrus/ep93xx-i2s.c
+index 934f8aefdd90..0dc3852c4621 100644
+--- a/sound/soc/cirrus/ep93xx-i2s.c
++++ b/sound/soc/cirrus/ep93xx-i2s.c
+@@ -51,7 +51,9 @@
+ #define EP93XX_I2S_WRDLEN_24 (1 << 0)
+ #define EP93XX_I2S_WRDLEN_32 (2 << 0)
+
+-#define EP93XX_I2S_LINCTRLDATA_R_JUST (1 << 2) /* Right justify */
++#define EP93XX_I2S_RXLINCTRLDATA_R_JUST BIT(1) /* Right justify */
++
++#define EP93XX_I2S_TXLINCTRLDATA_R_JUST BIT(2) /* Right justify */
+
+ #define EP93XX_I2S_CLKCFG_LRS (1 << 0) /* lrclk polarity */
+ #define EP93XX_I2S_CLKCFG_CKP (1 << 1) /* Bit clock polarity */
+@@ -170,25 +172,25 @@ static int ep93xx_i2s_set_dai_fmt(struct snd_soc_dai *cpu_dai,
+ unsigned int fmt)
+ {
+ struct ep93xx_i2s_info *info = snd_soc_dai_get_drvdata(cpu_dai);
+- unsigned int clk_cfg, lin_ctrl;
++ unsigned int clk_cfg;
++ unsigned int txlin_ctrl = 0;
++ unsigned int rxlin_ctrl = 0;
+
+ clk_cfg = ep93xx_i2s_read_reg(info, EP93XX_I2S_RXCLKCFG);
+- lin_ctrl = ep93xx_i2s_read_reg(info, EP93XX_I2S_RXLINCTRLDATA);
+
+ switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ case SND_SOC_DAIFMT_I2S:
+ clk_cfg |= EP93XX_I2S_CLKCFG_REL;
+- lin_ctrl &= ~EP93XX_I2S_LINCTRLDATA_R_JUST;
+ break;
+
+ case SND_SOC_DAIFMT_LEFT_J:
+ clk_cfg &= ~EP93XX_I2S_CLKCFG_REL;
+- lin_ctrl &= ~EP93XX_I2S_LINCTRLDATA_R_JUST;
+ break;
+
+ case SND_SOC_DAIFMT_RIGHT_J:
+ clk_cfg &= ~EP93XX_I2S_CLKCFG_REL;
+- lin_ctrl |= EP93XX_I2S_LINCTRLDATA_R_JUST;
++ rxlin_ctrl |= EP93XX_I2S_RXLINCTRLDATA_R_JUST;
++ txlin_ctrl |= EP93XX_I2S_TXLINCTRLDATA_R_JUST;
+ break;
+
+ default:
+@@ -213,32 +215,32 @@ static int ep93xx_i2s_set_dai_fmt(struct snd_soc_dai *cpu_dai,
+ switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
+ case SND_SOC_DAIFMT_NB_NF:
+ /* Negative bit clock, lrclk low on left word */
+- clk_cfg &= ~(EP93XX_I2S_CLKCFG_CKP | EP93XX_I2S_CLKCFG_REL);
++ clk_cfg &= ~(EP93XX_I2S_CLKCFG_CKP | EP93XX_I2S_CLKCFG_LRS);
+ break;
+
+ case SND_SOC_DAIFMT_NB_IF:
+ /* Negative bit clock, lrclk low on right word */
+ clk_cfg &= ~EP93XX_I2S_CLKCFG_CKP;
+- clk_cfg |= EP93XX_I2S_CLKCFG_REL;
++ clk_cfg |= EP93XX_I2S_CLKCFG_LRS;
+ break;
+
+ case SND_SOC_DAIFMT_IB_NF:
+ /* Positive bit clock, lrclk low on left word */
+ clk_cfg |= EP93XX_I2S_CLKCFG_CKP;
+- clk_cfg &= ~EP93XX_I2S_CLKCFG_REL;
++ clk_cfg &= ~EP93XX_I2S_CLKCFG_LRS;
+ break;
+
+ case SND_SOC_DAIFMT_IB_IF:
+ /* Positive bit clock, lrclk low on right word */
+- clk_cfg |= EP93XX_I2S_CLKCFG_CKP | EP93XX_I2S_CLKCFG_REL;
++ clk_cfg |= EP93XX_I2S_CLKCFG_CKP | EP93XX_I2S_CLKCFG_LRS;
+ break;
+ }
+
+ /* Write new register values */
+ ep93xx_i2s_write_reg(info, EP93XX_I2S_RXCLKCFG, clk_cfg);
+ ep93xx_i2s_write_reg(info, EP93XX_I2S_TXCLKCFG, clk_cfg);
+- ep93xx_i2s_write_reg(info, EP93XX_I2S_RXLINCTRLDATA, lin_ctrl);
+- ep93xx_i2s_write_reg(info, EP93XX_I2S_TXLINCTRLDATA, lin_ctrl);
++ ep93xx_i2s_write_reg(info, EP93XX_I2S_RXLINCTRLDATA, rxlin_ctrl);
++ ep93xx_i2s_write_reg(info, EP93XX_I2S_TXLINCTRLDATA, txlin_ctrl);
+ return 0;
+ }
+
+diff --git a/sound/soc/cirrus/snappercl15.c b/sound/soc/cirrus/snappercl15.c
+index 2334ec19e7eb..11ff7b2672b2 100644
+--- a/sound/soc/cirrus/snappercl15.c
++++ b/sound/soc/cirrus/snappercl15.c
+@@ -72,7 +72,7 @@ static struct snd_soc_dai_link snappercl15_dai = {
+ .codec_dai_name = "tlv320aic23-hifi",
+ .codec_name = "tlv320aic23-codec.0-001a",
+ .platform_name = "ep93xx-i2s",
+- .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_IF |
++ .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF |
+ SND_SOC_DAIFMT_CBS_CFS,
+ .ops = &snappercl15_ops,
+ };
+diff --git a/sound/soc/codecs/cs35l35.c b/sound/soc/codecs/cs35l35.c
+index a4a2cb171bdf..bd6226bde45f 100644
+--- a/sound/soc/codecs/cs35l35.c
++++ b/sound/soc/codecs/cs35l35.c
+@@ -1105,6 +1105,7 @@ static struct regmap_config cs35l35_regmap = {
+ .readable_reg = cs35l35_readable_register,
+ .precious_reg = cs35l35_precious_register,
+ .cache_type = REGCACHE_RBTREE,
++ .use_single_rw = true,
+ };
+
+ static irqreturn_t cs35l35_irq(int irq, void *data)
+diff --git a/sound/soc/mediatek/common/mtk-afe-platform-driver.c b/sound/soc/mediatek/common/mtk-afe-platform-driver.c
+index 53215b52e4f2..f8a06709f76d 100644
+--- a/sound/soc/mediatek/common/mtk-afe-platform-driver.c
++++ b/sound/soc/mediatek/common/mtk-afe-platform-driver.c
+@@ -64,14 +64,14 @@ static const struct snd_pcm_ops mtk_afe_pcm_ops = {
+ static int mtk_afe_pcm_new(struct snd_soc_pcm_runtime *rtd)
+ {
+ size_t size;
+- struct snd_card *card = rtd->card->snd_card;
+ struct snd_pcm *pcm = rtd->pcm;
+ struct snd_soc_component *component = snd_soc_rtdcom_lookup(rtd, AFE_PCM_NAME);
+ struct mtk_base_afe *afe = snd_soc_component_get_drvdata(component);
+
+ size = afe->mtk_afe_hardware->buffer_bytes_max;
+ return snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV,
+- card->dev, size, size);
++ rtd->platform->dev,
++ size, size);
+ }
+
+ static void mtk_afe_pcm_free(struct snd_pcm *pcm)
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 2d9709104ec5..b2b501ef57d7 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -433,6 +433,8 @@ static int dapm_kcontrol_data_alloc(struct snd_soc_dapm_widget *widget,
+ static void dapm_kcontrol_free(struct snd_kcontrol *kctl)
+ {
+ struct dapm_kcontrol_data *data = snd_kcontrol_chip(kctl);
++
++ list_del(&data->paths);
+ kfree(data->wlist);
+ kfree(data);
+ }
+diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
+index 36ef45b2e89d..09c4a4a7b5dd 100644
+--- a/tools/perf/util/dso.c
++++ b/tools/perf/util/dso.c
+@@ -354,6 +354,8 @@ int __kmod_path__parse(struct kmod_path *m, const char *path,
+ if ((strncmp(name, "[kernel.kallsyms]", 17) == 0) ||
+ (strncmp(name, "[guest.kernel.kallsyms", 22) == 0) ||
+ (strncmp(name, "[vdso]", 6) == 0) ||
++ (strncmp(name, "[vdso32]", 8) == 0) ||
++ (strncmp(name, "[vdsox32]", 9) == 0) ||
+ (strncmp(name, "[vsyscall]", 10) == 0)) {
+ m->kmod = false;
+
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index f9157aed1289..d404bed7003a 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -113,6 +113,7 @@ struct intel_pt_decoder {
+ bool have_cyc;
+ bool fixup_last_mtc;
+ bool have_last_ip;
++ enum intel_pt_param_flags flags;
+ uint64_t pos;
+ uint64_t last_ip;
+ uint64_t ip;
+@@ -226,6 +227,8 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params)
+ decoder->return_compression = params->return_compression;
+ decoder->branch_enable = params->branch_enable;
+
++ decoder->flags = params->flags;
++
+ decoder->period = params->period;
+ decoder->period_type = params->period_type;
+
+@@ -1097,6 +1100,15 @@ static bool intel_pt_fup_event(struct intel_pt_decoder *decoder)
+ return ret;
+ }
+
++static inline bool intel_pt_fup_with_nlip(struct intel_pt_decoder *decoder,
++ struct intel_pt_insn *intel_pt_insn,
++ uint64_t ip, int err)
++{
++ return decoder->flags & INTEL_PT_FUP_WITH_NLIP && !err &&
++ intel_pt_insn->branch == INTEL_PT_BR_INDIRECT &&
++ ip == decoder->ip + intel_pt_insn->length;
++}
++
+ static int intel_pt_walk_fup(struct intel_pt_decoder *decoder)
+ {
+ struct intel_pt_insn intel_pt_insn;
+@@ -1109,10 +1121,11 @@ static int intel_pt_walk_fup(struct intel_pt_decoder *decoder)
+ err = intel_pt_walk_insn(decoder, &intel_pt_insn, ip);
+ if (err == INTEL_PT_RETURN)
+ return 0;
+- if (err == -EAGAIN) {
++ if (err == -EAGAIN ||
++ intel_pt_fup_with_nlip(decoder, &intel_pt_insn, ip, err)) {
+ if (intel_pt_fup_event(decoder))
+ return 0;
+- return err;
++ return -EAGAIN;
+ }
+ decoder->set_fup_tx_flags = false;
+ if (err)
+@@ -1376,7 +1389,6 @@ static int intel_pt_overflow(struct intel_pt_decoder *decoder)
+ {
+ intel_pt_log("ERROR: Buffer overflow\n");
+ intel_pt_clear_tx_flags(decoder);
+- decoder->have_tma = false;
+ decoder->cbr = 0;
+ decoder->timestamp_insn_cnt = 0;
+ decoder->pkt_state = INTEL_PT_STATE_ERR_RESYNC;
+@@ -1604,7 +1616,6 @@ static int intel_pt_walk_fup_tip(struct intel_pt_decoder *decoder)
+ case INTEL_PT_PSB:
+ case INTEL_PT_TSC:
+ case INTEL_PT_TMA:
+- case INTEL_PT_CBR:
+ case INTEL_PT_MODE_TSX:
+ case INTEL_PT_BAD:
+ case INTEL_PT_PSBEND:
+@@ -1620,6 +1631,10 @@ static int intel_pt_walk_fup_tip(struct intel_pt_decoder *decoder)
+ decoder->pkt_step = 0;
+ return -ENOENT;
+
++ case INTEL_PT_CBR:
++ intel_pt_calc_cbr(decoder);
++ break;
++
+ case INTEL_PT_OVF:
+ return intel_pt_overflow(decoder);
+
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
+index fc1752d50019..51c18d67f4ca 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
+@@ -60,6 +60,14 @@ enum {
+ INTEL_PT_ERR_MAX,
+ };
+
++enum intel_pt_param_flags {
++ /*
++ * FUP packet can contain next linear instruction pointer instead of
++ * current linear instruction pointer.
++ */
++ INTEL_PT_FUP_WITH_NLIP = 1 << 0,
++};
++
+ struct intel_pt_state {
+ enum intel_pt_sample_type type;
+ int err;
+@@ -106,6 +114,7 @@ struct intel_pt_params {
+ unsigned int mtc_period;
+ uint32_t tsc_ctc_ratio_n;
+ uint32_t tsc_ctc_ratio_d;
++ enum intel_pt_param_flags flags;
+ };
+
+ struct intel_pt_decoder;
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c
+index ba4c9dd18643..d426761a549d 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c
+@@ -366,7 +366,7 @@ static int intel_pt_get_cyc(unsigned int byte, const unsigned char *buf,
+ if (len < offs)
+ return INTEL_PT_NEED_MORE_BYTES;
+ byte = buf[offs++];
+- payload |= (byte >> 1) << shift;
++ payload |= ((uint64_t)byte >> 1) << shift;
+ }
+
+ packet->type = INTEL_PT_CYC;
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 0effaff57020..38b25e826a45 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -751,6 +751,7 @@ static struct intel_pt_queue *intel_pt_alloc_queue(struct intel_pt *pt,
+ unsigned int queue_nr)
+ {
+ struct intel_pt_params params = { .get_trace = 0, };
++ struct perf_env *env = pt->machine->env;
+ struct intel_pt_queue *ptq;
+
+ ptq = zalloc(sizeof(struct intel_pt_queue));
+@@ -832,6 +833,9 @@ static struct intel_pt_queue *intel_pt_alloc_queue(struct intel_pt *pt,
+ }
+ }
+
++ if (env->cpuid && !strncmp(env->cpuid, "GenuineIntel,6,92,", 18))
++ params.flags |= INTEL_PT_FUP_WITH_NLIP;
++
+ ptq->decoder = intel_pt_decoder_new(&params);
+ if (!ptq->decoder)
+ goto out_free;
+@@ -1523,6 +1527,7 @@ static int intel_pt_sample(struct intel_pt_queue *ptq)
+
+ if (intel_pt_is_switch_ip(ptq, state->to_ip)) {
+ switch (ptq->switch_state) {
++ case INTEL_PT_SS_NOT_TRACING:
+ case INTEL_PT_SS_UNKNOWN:
+ case INTEL_PT_SS_EXPECTING_SWITCH_IP:
+ err = intel_pt_next_tid(pt, ptq);
+diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
+index 2a4f16fc9819..8393b1c06027 100644
+--- a/tools/testing/selftests/ftrace/test.d/functions
++++ b/tools/testing/selftests/ftrace/test.d/functions
+@@ -15,14 +15,29 @@ reset_tracer() { # reset the current tracer
+ echo nop > current_tracer
+ }
+
+-reset_trigger() { # reset all current setting triggers
+- grep -v ^# events/*/*/trigger |
++reset_trigger_file() {
++ # remove action triggers first
++ grep -H ':on[^:]*(' $@ |
++ while read line; do
++ cmd=`echo $line | cut -f2- -d: | cut -f1 -d" "`
++ file=`echo $line | cut -f1 -d:`
++ echo "!$cmd" >> $file
++ done
++ grep -Hv ^# $@ |
+ while read line; do
+ cmd=`echo $line | cut -f2- -d: | cut -f1 -d" "`
+- echo "!$cmd" > `echo $line | cut -f1 -d:`
++ file=`echo $line | cut -f1 -d:`
++ echo "!$cmd" > $file
+ done
+ }
+
++reset_trigger() { # reset all current setting triggers
++ if [ -d events/synthetic ]; then
++ reset_trigger_file events/synthetic/*/trigger
++ fi
++ reset_trigger_file events/*/*/trigger
++}
++
+ reset_events_filter() { # reset all current setting filters
+ grep -v ^none events/*/*/filter |
+ while read line; do
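A note on the intel-pt-pkt-decoder hunk above: the CYC payload is a
64-bit value assembled seven bits at a time, but "byte" is a 32-bit
unsigned int, so the old "(byte >> 1) << shift" was computed in 32-bit
arithmetic and silently dropped any bits shifted past bit 31 before
the result was widened. Casting to uint64_t before the shift keeps the
full payload. A minimal stand-alone C demonstration of the difference
(hypothetical values, not kernel code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int byte = 0x7f;	/* example CYC continuation byte */
	int shift = 28;			/* offset of a later 7-bit group */
	uint64_t buggy = 0, fixed = 0;

	/* old form: the shift wraps modulo 2^32 before widening */
	buggy |= (uint64_t)((byte >> 1) << shift);
	/* patched form: widen to 64 bits first, then shift */
	fixed |= ((uint64_t)byte >> 1) << shift;

	printf("buggy=0x%llx fixed=0x%llx\n",
	       (unsigned long long)buggy, (unsigned long long)fixed);
	return 0;
}

Compiled and run, the old form prints 0xf0000000 while the patched
form prints 0x3f0000000; the missing high bits are exactly the
truncation the hunk prevents.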
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-06-29 23:18 Mike Pagano
From: Mike Pagano @ 2018-06-29 23:18 UTC
To: gentoo-commits
commit: 28232bbdc29fe4ef9f7d2a5360d7cdd000b26304
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jun 29 23:18:10 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jun 29 23:18:10 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=28232bbd
kvmclock: Define pvclock_pvti_cpu0_va setter for X86_32. See bug #658544.
0000_README | 4 ++
...ne-pvclock-pvti-cpu0-va-setter-for-X86-32.patch | 55 ++++++++++++++++++++++
2 files changed, 59 insertions(+)
diff --git a/0000_README b/0000_README
index daa330b..af14401 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
+Patch: 1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
+From: https://marc.info/?l=kvm&m=152960320011592&w=2
+Desc: kvmclock: Define pvclock_pvti_cpu0_va setter for X86_32. See bug #658544.
+
Patch: 2300_enable-poweroff-on-Mac-Pro-11.patch
From: http://kernel.ubuntu.com/git/ubuntu/ubuntu-xenial.git/patch/drivers/pci/quirks.c?id=5080ff61a438f3dd80b88b423e1a20791d8a774c
Desc: Workaround to enable poweroff on Mac Pro 11. See bug #601964.
diff --git a/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch b/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
new file mode 100644
index 0000000..0732c51
--- /dev/null
+++ b/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
@@ -0,0 +1,55 @@
+pvti_cpu0_va is the address of the shared kvmclock data structure.
+
+pvti_cpu0_va is currently kept unset (1) on 32 bit systems, (2) when
+kvmclock vsyscall is disabled, and (3) if kvmclock is not stable.
+This poses a problem, because kvm_ptp needs pvti_cpu0_va, but (1) can
+work on 32 bit, (2) has little relation to the vsyscall, and (3) does
+not need stable kvmclock (although kvmclock won't be used for system
+clock if it's not stable, so kvm_ptp is pointless in that case).
+
+Expose pvti_cpu0_va whenever kvmclock is enabled to allow all users to
+work with it.
+
+This fixes a regression found on Gentoo: https://bugs.gentoo.org/658544.
+
+Fixes: 9f08890ab906 ("x86/pvclock: add setter for pvclock_pvti_cpu0_va")
+Reported-by: Andreas Steinmetz <ast@domdv.de>
+Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
+---
+ arch/x86/kernel/kvmclock.c | 11 +++++------
+ 1 file changed, 5 insertions(+), 6 deletions(-)
+
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index bf8d1eb7fca3..46ffa8327563 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -319,6 +319,8 @@ void __init kvmclock_init(void)
+ printk(KERN_INFO "kvm-clock: Using msrs %x and %x",
+ msr_kvm_system_time, msr_kvm_wall_clock);
+
++ pvclock_set_pvti_cpu0_va(hv_clock);
++
+ if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE_STABLE_BIT))
+ pvclock_set_flags(PVCLOCK_TSC_STABLE_BIT);
+
+@@ -366,14 +368,11 @@ int __init kvm_setup_vsyscall_timeinfo(void)
+ vcpu_time = &hv_clock[cpu].pvti;
+ flags = pvclock_read_flags(vcpu_time);
+
+- if (!(flags & PVCLOCK_TSC_STABLE_BIT)) {
+- put_cpu();
+- return 1;
+- }
+-
+- pvclock_set_pvti_cpu0_va(hv_clock);
+ put_cpu();
+
++ if (!(flags & PVCLOCK_TSC_STABLE_BIT))
++ return 1;
++
+ kvm_clock.archdata.vclock_mode = VCLOCK_PVCLOCK;
+ #endif
+ return 0;
+--
+2.18.0.rc2
+
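The commit message above amounts to an ordering change: the
pvti_cpu0_va setter now runs unconditionally from kvmclock_init(),
while the TSC-stable check in kvm_setup_vsyscall_timeinfo() only
gates the vsyscall clock mode, so consumers such as kvm_ptp always
see the pointer. A compilable stand-alone model of the new control
flow (the stub bodies are simplified sketches, not the kernel API):

#include <stdbool.h>
#include <stdio.h>

static void *pvti_cpu0_va;	/* shared kvmclock data pointer */
static char hv_clock[64];	/* stands in for the real structure */
static bool tsc_stable;		/* model an unstable host TSC */

static void kvmclock_init(void)
{
	pvti_cpu0_va = hv_clock;	/* now set whenever kvmclock is on */
}

static int kvm_setup_vsyscall_timeinfo(void)
{
	if (!tsc_stable)
		return 1;	/* no vsyscall mode, but pvti stays set */
	/* before the patch, the setter ran only on this path */
	return 0;
}

int main(void)
{
	kvmclock_init();
	kvm_setup_vsyscall_timeinfo();
	printf("pvti_cpu0_va set: %s\n", pvti_cpu0_va ? "yes" : "no");
	return 0;
}

Under the old ordering this model would print "no" whenever the TSC
is not stable; with the patch applied it prints "yes", which is what
the kvm_ptp regression in bug #658544 needed.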
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-06-26 16:29 Alice Ferrazzi
From: Alice Ferrazzi @ 2018-06-26 16:29 UTC
To: gentoo-commits
commit: c429ed15c8c907dbe61005f1542b6f1c16e99543
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 26 16:24:12 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Tue Jun 26 16:24:12 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c429ed15
linux kernel 4.17.3
0000_README | 4 +
1002_linux-4.17.3.patch | 2352 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2356 insertions(+)
diff --git a/0000_README b/0000_README
index a4cf389..daa330b 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-4.17.2.patch
From: http://www.kernel.org
Desc: Linux 4.17.2
+Patch: 1002_linux-4.17.3.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.3
+
Patch: 1800_iommu-amd-dma-direct-revert.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=e16c4790de39dc861b749674c2a9319507f6f64f
Desc: Revert iommu/amd_iommu: Use CONFIG_DMA_DIRECT_OPS=y and dma_direct_{alloc,free}(). See bug #658538.
diff --git a/1002_linux-4.17.3.patch b/1002_linux-4.17.3.patch
new file mode 100644
index 0000000..9fdc2fc
--- /dev/null
+++ b/1002_linux-4.17.3.patch
@@ -0,0 +1,2352 @@
+diff --git a/Makefile b/Makefile
+index f43cd522b175..31dc3a08295a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/um/drivers/vector_transports.c b/arch/um/drivers/vector_transports.c
+index 9065047f844b..77e4ebc206ae 100644
+--- a/arch/um/drivers/vector_transports.c
++++ b/arch/um/drivers/vector_transports.c
+@@ -120,7 +120,8 @@ static int raw_form_header(uint8_t *header,
+ skb,
+ vheader,
+ virtio_legacy_is_little_endian(),
+- false
++ false,
++ 0
+ );
+
+ return 0;
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 08acd954f00e..74a9e06b6cfd 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -436,6 +436,8 @@ static inline void apic_set_eoi_write(void (*eoi_write)(u32 reg, u32 v)) {}
+
+ #endif /* CONFIG_X86_LOCAL_APIC */
+
++extern void apic_ack_irq(struct irq_data *data);
++
+ static inline void ack_APIC_irq(void)
+ {
+ /*
+diff --git a/arch/x86/include/asm/trace/irq_vectors.h b/arch/x86/include/asm/trace/irq_vectors.h
+index 22647a642e98..0af81b590a0c 100644
+--- a/arch/x86/include/asm/trace/irq_vectors.h
++++ b/arch/x86/include/asm/trace/irq_vectors.h
+@@ -236,7 +236,7 @@ TRACE_EVENT(vector_alloc,
+ TP_PROTO(unsigned int irq, unsigned int vector, bool reserved,
+ int ret),
+
+- TP_ARGS(irq, vector, ret, reserved),
++ TP_ARGS(irq, vector, reserved, ret),
+
+ TP_STRUCT__entry(
+ __field( unsigned int, irq )
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 7553819c74c3..3982f79d2377 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -1851,7 +1851,7 @@ static void ioapic_ir_ack_level(struct irq_data *irq_data)
+ * intr-remapping table entry. Hence for the io-apic
+ * EOI we use the pin number.
+ */
+- ack_APIC_irq();
++ apic_ack_irq(irq_data);
+ eoi_ioapic_pin(data->entry.vector, data);
+ }
+
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index bb6f7a2148d7..b708f597eee3 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -235,6 +235,15 @@ static int allocate_vector(struct irq_data *irqd, const struct cpumask *dest)
+ if (vector && cpu_online(cpu) && cpumask_test_cpu(cpu, dest))
+ return 0;
+
++ /*
++ * Careful here. @apicd might either have move_in_progress set or
++ * be enqueued for cleanup. Assigning a new vector would either
++ * leave a stale vector on some CPU around or in case of a pending
++ * cleanup corrupt the hlist.
++ */
++ if (apicd->move_in_progress || !hlist_unhashed(&apicd->clist))
++ return -EBUSY;
++
+ vector = irq_matrix_alloc(vector_matrix, dest, resvd, &cpu);
+ if (vector > 0)
+ apic_update_vector(irqd, vector, cpu);
+@@ -800,13 +809,18 @@ static int apic_retrigger_irq(struct irq_data *irqd)
+ return 1;
+ }
+
+-void apic_ack_edge(struct irq_data *irqd)
++void apic_ack_irq(struct irq_data *irqd)
+ {
+- irq_complete_move(irqd_cfg(irqd));
+ irq_move_irq(irqd);
+ ack_APIC_irq();
+ }
+
++void apic_ack_edge(struct irq_data *irqd)
++{
++ irq_complete_move(irqd_cfg(irqd));
++ apic_ack_irq(irqd);
++}
++
+ static struct irq_chip lapic_controller = {
+ .name = "APIC",
+ .irq_ack = apic_ack_edge,
+diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
+index 589b948e6e01..316a8875bd90 100644
+--- a/arch/x86/kernel/cpu/intel_rdt.c
++++ b/arch/x86/kernel/cpu/intel_rdt.c
+@@ -821,6 +821,8 @@ static __init void rdt_quirks(void)
+ case INTEL_FAM6_SKYLAKE_X:
+ if (boot_cpu_data.x86_stepping <= 4)
+ set_rdt_options("!cmt,!mbmtotal,!mbmlocal,!l3cat");
++ else
++ set_rdt_options("!l3cat");
+ }
+ }
+
+diff --git a/arch/x86/kernel/cpu/mcheck/mce-inject.c b/arch/x86/kernel/cpu/mcheck/mce-inject.c
+index 475cb4f5f14f..c805a06e14c3 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce-inject.c
++++ b/arch/x86/kernel/cpu/mcheck/mce-inject.c
+@@ -48,7 +48,7 @@ static struct dentry *dfs_inj;
+
+ static u8 n_banks;
+
+-#define MAX_FLAG_OPT_SIZE 3
++#define MAX_FLAG_OPT_SIZE 4
+ #define NBCFG 0x44
+
+ enum injection_type {
+diff --git a/arch/x86/platform/uv/uv_irq.c b/arch/x86/platform/uv/uv_irq.c
+index e4cb9f4cde8a..fc13cbbb2dce 100644
+--- a/arch/x86/platform/uv/uv_irq.c
++++ b/arch/x86/platform/uv/uv_irq.c
+@@ -47,11 +47,6 @@ static void uv_program_mmr(struct irq_cfg *cfg, struct uv_irq_2_mmr_pnode *info)
+
+ static void uv_noop(struct irq_data *data) { }
+
+-static void uv_ack_apic(struct irq_data *data)
+-{
+- ack_APIC_irq();
+-}
+-
+ static int
+ uv_set_irq_affinity(struct irq_data *data, const struct cpumask *mask,
+ bool force)
+@@ -73,7 +68,7 @@ static struct irq_chip uv_irq_chip = {
+ .name = "UV-CORE",
+ .irq_mask = uv_noop,
+ .irq_unmask = uv_noop,
+- .irq_eoi = uv_ack_apic,
++ .irq_eoi = apic_ack_irq,
+ .irq_set_affinity = uv_set_irq_affinity,
+ };
+
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 9ce9cac16c3f..90ffd8151c57 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2473,7 +2473,6 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
+
+ mutex_lock(&set->tag_list_lock);
+ list_del_rcu(&q->tag_set_list);
+- INIT_LIST_HEAD(&q->tag_set_list);
+ if (list_is_singular(&set->tag_list)) {
+ /* just transitioned to unshared */
+ set->flags &= ~BLK_MQ_F_TAG_SHARED;
+@@ -2481,8 +2480,8 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
+ blk_mq_update_tag_set_depth(set, false);
+ }
+ mutex_unlock(&set->tag_list_lock);
+-
+ synchronize_rcu();
++ INIT_LIST_HEAD(&q->tag_set_list);
+ }
+
+ static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
+diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
+index 68422afc365f..bc5f05906bd1 100644
+--- a/drivers/acpi/acpica/psloop.c
++++ b/drivers/acpi/acpica/psloop.c
+@@ -515,6 +515,22 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+ }
++ if (walk_state->opcode == AML_SCOPE_OP) {
++ /*
++ * If the scope op fails to parse, skip the body of the
++ * scope op because the parse failure indicates that the
++ * device may not exist.
++ */
++ walk_state->parser_state.aml =
++ walk_state->aml + 1;
++ walk_state->parser_state.aml =
++ acpi_ps_get_next_package_end
++ (&walk_state->parser_state);
++ walk_state->aml =
++ walk_state->parser_state.aml;
++ ACPI_ERROR((AE_INFO,
++ "Skipping Scope block"));
++ }
+
+ continue;
+ }
+@@ -557,7 +573,40 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+ }
+-
++ if ((walk_state->control_state) &&
++ ((walk_state->control_state->control.
++ opcode == AML_IF_OP)
++ || (walk_state->control_state->control.
++ opcode == AML_WHILE_OP))) {
++ /*
++ * If the if/while op fails to parse, we will skip parsing
++ * the body of the op.
++ */
++ parser_state->aml =
++ walk_state->control_state->control.
++ aml_predicate_start + 1;
++ parser_state->aml =
++ acpi_ps_get_next_package_end
++ (parser_state);
++ walk_state->aml = parser_state->aml;
++
++ ACPI_ERROR((AE_INFO,
++ "Skipping While/If block"));
++ if (*walk_state->aml == AML_ELSE_OP) {
++ ACPI_ERROR((AE_INFO,
++ "Skipping Else block"));
++ walk_state->parser_state.aml =
++ walk_state->aml + 1;
++ walk_state->parser_state.aml =
++ acpi_ps_get_next_package_end
++ (parser_state);
++ walk_state->aml =
++ parser_state->aml;
++ }
++ ACPI_FREE(acpi_ut_pop_generic_state
++ (&walk_state->control_state));
++ }
++ op = NULL;
+ continue;
+ }
+ }
+diff --git a/drivers/acpi/acpica/psobject.c b/drivers/acpi/acpica/psobject.c
+index 7d9d0151ee54..3138e7a00da8 100644
+--- a/drivers/acpi/acpica/psobject.c
++++ b/drivers/acpi/acpica/psobject.c
+@@ -12,6 +12,7 @@
+ #include "acparser.h"
+ #include "amlcode.h"
+ #include "acconvert.h"
++#include "acnamesp.h"
+
+ #define _COMPONENT ACPI_PARSER
+ ACPI_MODULE_NAME("psobject")
+@@ -549,6 +550,21 @@ acpi_ps_complete_op(struct acpi_walk_state *walk_state,
+
+ do {
+ if (*op) {
++ /*
++ * These Opcodes need to be removed from the namespace because they
++ * get created even if these opcodes cannot be created due to
++ * errors.
++ */
++ if (((*op)->common.aml_opcode == AML_REGION_OP)
++ || ((*op)->common.aml_opcode ==
++ AML_DATA_REGION_OP)) {
++ acpi_ns_delete_children((*op)->common.
++ node);
++ acpi_ns_remove_node((*op)->common.node);
++ (*op)->common.node = NULL;
++ acpi_ps_delete_parse_tree(*op);
++ }
++
+ status2 =
+ acpi_ps_complete_this_op(walk_state, *op);
+ if (ACPI_FAILURE(status2)) {
+@@ -574,6 +590,20 @@ acpi_ps_complete_op(struct acpi_walk_state *walk_state,
+ #endif
+ walk_state->prev_op = NULL;
+ walk_state->prev_arg_types = walk_state->arg_types;
++
++ if (walk_state->parse_flags & ACPI_PARSE_MODULE_LEVEL) {
++ /*
++ * There was something that went wrong while executing code at the
++ * module-level. We need to skip parsing whatever caused the
++ * error and keep going. One runtime error during the table load
++ * should not cause the entire table to not be loaded. This is
++ * because there could be correct AML beyond the parts that caused
++ * the runtime error.
++ */
++ ACPI_ERROR((AE_INFO,
++ "Ignore error and continue table load"));
++ return_ACPI_STATUS(AE_OK);
++ }
+ return_ACPI_STATUS(status);
+ }
+
+diff --git a/drivers/acpi/acpica/uterror.c b/drivers/acpi/acpica/uterror.c
+index 12d4a0f6b8d2..5a64ddaed8a3 100644
+--- a/drivers/acpi/acpica/uterror.c
++++ b/drivers/acpi/acpica/uterror.c
+@@ -182,20 +182,20 @@ acpi_ut_prefixed_namespace_error(const char *module_name,
+ switch (lookup_status) {
+ case AE_ALREADY_EXISTS:
+
+- acpi_os_printf(ACPI_MSG_BIOS_ERROR);
++ acpi_os_printf("\n" ACPI_MSG_BIOS_ERROR);
+ message = "Failure creating";
+ break;
+
+ case AE_NOT_FOUND:
+
+- acpi_os_printf(ACPI_MSG_BIOS_ERROR);
+- message = "Failure looking up";
++ acpi_os_printf("\n" ACPI_MSG_BIOS_ERROR);
++ message = "Could not resolve";
+ break;
+
+ default:
+
+- acpi_os_printf(ACPI_MSG_ERROR);
+- message = "Failure looking up";
++ acpi_os_printf("\n" ACPI_MSG_ERROR);
++ message = "Failure resolving";
+ break;
+ }
+
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 346b163f6e89..9bfd2f7e4542 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4557,9 +4557,6 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ { "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },
+ { "SAMSUNG SSD PM830 mSATA *", "CXM13D1Q", ATA_HORKAGE_NOLPM, },
+
+- /* Sandisk devices which are known to not handle LPM well */
+- { "SanDisk SD7UB3Q*G1001", NULL, ATA_HORKAGE_NOLPM, },
+-
+ /* devices that don't properly handle queued TRIM commands */
+ { "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
+ ATA_HORKAGE_ZERO_AFTER_TRIM, },
+diff --git a/drivers/ata/libata-zpodd.c b/drivers/ata/libata-zpodd.c
+index de4ddd0e8550..b3ed8f9953a8 100644
+--- a/drivers/ata/libata-zpodd.c
++++ b/drivers/ata/libata-zpodd.c
+@@ -35,7 +35,7 @@ struct zpodd {
+ static int eject_tray(struct ata_device *dev)
+ {
+ struct ata_taskfile tf;
+- static const char cdb[] = { GPCMD_START_STOP_UNIT,
++ static const char cdb[ATAPI_CDB_LEN] = { GPCMD_START_STOP_UNIT,
+ 0, 0, 0,
+ 0x02, /* LoEj */
+ 0, 0, 0, 0, 0, 0, 0,
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index b610816eb887..d680fd030316 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -1467,7 +1467,7 @@ class_dir_create_and_add(struct class *class, struct kobject *parent_kobj)
+
+ dir = kzalloc(sizeof(*dir), GFP_KERNEL);
+ if (!dir)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+
+ dir->class = class;
+ kobject_init(&dir->kobj, &class_dir_ktype);
+@@ -1477,7 +1477,7 @@ class_dir_create_and_add(struct class *class, struct kobject *parent_kobj)
+ retval = kobject_add(&dir->kobj, parent_kobj, "%s", class->name);
+ if (retval < 0) {
+ kobject_put(&dir->kobj);
+- return NULL;
++ return ERR_PTR(retval);
+ }
+ return &dir->kobj;
+ }
+@@ -1784,6 +1784,10 @@ int device_add(struct device *dev)
+
+ parent = get_device(dev->parent);
+ kobj = get_device_parent(dev, parent);
++ if (IS_ERR(kobj)) {
++ error = PTR_ERR(kobj);
++ goto parent_error;
++ }
+ if (kobj)
+ dev->kobj.parent = kobj;
+
+@@ -1882,6 +1886,7 @@ int device_add(struct device *dev)
+ kobject_del(&dev->kobj);
+ Error:
+ cleanup_glue_dir(dev, glue_dir);
++parent_error:
+ put_device(parent);
+ name_error:
+ kfree(dev->p);
+@@ -2701,6 +2706,11 @@ int device_move(struct device *dev, struct device *new_parent,
+ device_pm_lock();
+ new_parent = get_device(new_parent);
+ new_parent_kobj = get_device_parent(dev, new_parent);
++ if (IS_ERR(new_parent_kobj)) {
++ error = PTR_ERR(new_parent_kobj);
++ put_device(new_parent);
++ goto out;
++ }
+
+ pr_debug("device: '%s': %s: moving to '%s'\n", dev_name(dev),
+ __func__, new_parent ? dev_name(new_parent) : "<NULL>");
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index afbc202ca6fd..64278f472efe 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -173,9 +173,12 @@ static const struct device_attribute pid_attr = {
+ static void nbd_dev_remove(struct nbd_device *nbd)
+ {
+ struct gendisk *disk = nbd->disk;
++ struct request_queue *q;
++
+ if (disk) {
++ q = disk->queue;
+ del_gendisk(disk);
+- blk_cleanup_queue(disk->queue);
++ blk_cleanup_queue(q);
+ blk_mq_free_tag_set(&nbd->tag_set);
+ disk->private_data = NULL;
+ put_disk(disk);
+@@ -231,9 +234,18 @@ static void nbd_size_clear(struct nbd_device *nbd)
+ static void nbd_size_update(struct nbd_device *nbd)
+ {
+ struct nbd_config *config = nbd->config;
++ struct block_device *bdev = bdget_disk(nbd->disk, 0);
++
+ blk_queue_logical_block_size(nbd->disk->queue, config->blksize);
+ blk_queue_physical_block_size(nbd->disk->queue, config->blksize);
+ set_capacity(nbd->disk, config->bytesize >> 9);
++ if (bdev) {
++ if (bdev->bd_disk)
++ bd_set_size(bdev, config->bytesize);
++ else
++ bdev->bd_invalidated = 1;
++ bdput(bdev);
++ }
+ kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
+ }
+
+@@ -243,6 +255,8 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
+ struct nbd_config *config = nbd->config;
+ config->blksize = blocksize;
+ config->bytesize = blocksize * nr_blocks;
++ if (nbd->task_recv != NULL)
++ nbd_size_update(nbd);
+ }
+
+ static void nbd_complete_rq(struct request *req)
+@@ -1109,7 +1123,6 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *b
+ if (ret)
+ return ret;
+
+- bd_set_size(bdev, config->bytesize);
+ if (max_part)
+ bdev->bd_invalidated = 1;
+ mutex_unlock(&nbd->config_lock);
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 075d18f6ba7a..54d4c0f999ec 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -696,6 +696,8 @@ static ssize_t store_##file_name \
+ struct cpufreq_policy new_policy; \
+ \
+ memcpy(&new_policy, policy, sizeof(*policy)); \
++ new_policy.min = policy->user_policy.min; \
++ new_policy.max = policy->user_policy.max; \
+ \
+ ret = sscanf(buf, "%u", &new_policy.object); \
+ if (ret != 1) \
+diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
+index ca38229b045a..43e14bb512c8 100644
+--- a/drivers/cpufreq/cpufreq_governor.c
++++ b/drivers/cpufreq/cpufreq_governor.c
+@@ -165,7 +165,7 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
+ * calls, so the previous load value can be used then.
+ */
+ load = j_cdbs->prev_load;
+- } else if (unlikely(time_elapsed > 2 * sampling_rate &&
++ } else if (unlikely((int)idle_time > 2 * sampling_rate &&
+ j_cdbs->prev_load)) {
+ /*
+ * If the CPU had gone completely idle and a task has
+@@ -185,10 +185,8 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
+ * clear prev_load to guarantee that the load will be
+ * computed again next time.
+ *
+- * Detecting this situation is easy: the governor's
+- * utilization update handler would not have run during
+- * CPU-idle periods. Hence, an unusually large
+- * 'time_elapsed' (as compared to the sampling rate)
++ * Detecting this situation is easy: an unusually large
++ * 'idle_time' (as compared to the sampling rate)
+ * indicates this scenario.
+ */
+ load = j_cdbs->prev_load;
+@@ -217,8 +215,8 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
+ j_cdbs->prev_load = load;
+ }
+
+- if (time_elapsed > 2 * sampling_rate) {
+- unsigned int periods = time_elapsed / sampling_rate;
++ if (unlikely((int)idle_time > 2 * sampling_rate)) {
++ unsigned int periods = idle_time / sampling_rate;
+
+ if (periods < idle_periods)
+ idle_periods = periods;
+diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c
+index 6ba709b6f095..896caba5dfe5 100644
+--- a/drivers/cpufreq/ti-cpufreq.c
++++ b/drivers/cpufreq/ti-cpufreq.c
+@@ -226,7 +226,7 @@ static int ti_cpufreq_probe(struct platform_device *pdev)
+ opp_data->cpu_dev = get_cpu_device(0);
+ if (!opp_data->cpu_dev) {
+ pr_err("%s: Failed to get device for CPU0\n", __func__);
+- ret = ENODEV;
++ ret = -ENODEV;
+ goto free_opp_data;
+ }
+
+diff --git a/drivers/hid/intel-ish-hid/ipc/pci-ish.c b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+index 582e449be9fe..a2c53ea3b5ed 100644
+--- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c
++++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+@@ -205,8 +205,7 @@ static void ish_remove(struct pci_dev *pdev)
+ kfree(ishtp_dev);
+ }
+
+-#ifdef CONFIG_PM
+-static struct device *ish_resume_device;
++static struct device __maybe_unused *ish_resume_device;
+
+ /* 50ms to get resume response */
+ #define WAIT_FOR_RESUME_ACK_MS 50
+@@ -220,7 +219,7 @@ static struct device *ish_resume_device;
+ * in that case a simple resume message is enough, others we need
+ * a reset sequence.
+ */
+-static void ish_resume_handler(struct work_struct *work)
++static void __maybe_unused ish_resume_handler(struct work_struct *work)
+ {
+ struct pci_dev *pdev = to_pci_dev(ish_resume_device);
+ struct ishtp_device *dev = pci_get_drvdata(pdev);
+@@ -262,7 +261,7 @@ static void ish_resume_handler(struct work_struct *work)
+ *
+ * Return: 0 to the pm core
+ */
+-static int ish_suspend(struct device *device)
++static int __maybe_unused ish_suspend(struct device *device)
+ {
+ struct pci_dev *pdev = to_pci_dev(device);
+ struct ishtp_device *dev = pci_get_drvdata(pdev);
+@@ -288,7 +287,7 @@ static int ish_suspend(struct device *device)
+ return 0;
+ }
+
+-static DECLARE_WORK(resume_work, ish_resume_handler);
++static __maybe_unused DECLARE_WORK(resume_work, ish_resume_handler);
+ /**
+ * ish_resume() - ISH resume callback
+ * @device: device pointer
+@@ -297,7 +296,7 @@ static DECLARE_WORK(resume_work, ish_resume_handler);
+ *
+ * Return: 0 to the pm core
+ */
+-static int ish_resume(struct device *device)
++static int __maybe_unused ish_resume(struct device *device)
+ {
+ struct pci_dev *pdev = to_pci_dev(device);
+ struct ishtp_device *dev = pci_get_drvdata(pdev);
+@@ -311,21 +310,14 @@ static int ish_resume(struct device *device)
+ return 0;
+ }
+
+-static const struct dev_pm_ops ish_pm_ops = {
+- .suspend = ish_suspend,
+- .resume = ish_resume,
+-};
+-#define ISHTP_ISH_PM_OPS (&ish_pm_ops)
+-#else
+-#define ISHTP_ISH_PM_OPS NULL
+-#endif /* CONFIG_PM */
++static SIMPLE_DEV_PM_OPS(ish_pm_ops, ish_suspend, ish_resume);
+
+ static struct pci_driver ish_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = ish_pci_tbl,
+ .probe = ish_probe,
+ .remove = ish_remove,
+- .driver.pm = ISHTP_ISH_PM_OPS,
++ .driver.pm = &ish_pm_ops,
+ };
+
+ module_pci_driver(ish_driver);
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index ee7a37eb159a..545986cfb978 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -395,6 +395,14 @@ static void wacom_usage_mapping(struct hid_device *hdev,
+ }
+ }
+
++ /* 2nd-generation Intuos Pro Large has incorrect Y maximum */
++ if (hdev->vendor == USB_VENDOR_ID_WACOM &&
++ hdev->product == 0x0358 &&
++ WACOM_PEN_FIELD(field) &&
++ wacom_equivalent_usage(usage->hid) == HID_GD_Y) {
++ field->logical_maximum = 43200;
++ }
++
+ switch (usage->hid) {
+ case HID_GD_X:
+ features->x_max = field->logical_maximum;
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 8fb8c737fffe..b0b30a568db7 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -4379,7 +4379,7 @@ static void ir_compose_msi_msg(struct irq_data *irq_data, struct msi_msg *msg)
+
+ static struct irq_chip amd_ir_chip = {
+ .name = "AMD-IR",
+- .irq_ack = ir_ack_apic_edge,
++ .irq_ack = apic_ack_irq,
+ .irq_set_affinity = amd_ir_set_affinity,
+ .irq_set_vcpu_affinity = amd_ir_set_vcpu_affinity,
+ .irq_compose_msi_msg = ir_compose_msi_msg,
+diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
+index 3062a154a9fb..967450bd421a 100644
+--- a/drivers/iommu/intel_irq_remapping.c
++++ b/drivers/iommu/intel_irq_remapping.c
+@@ -1223,7 +1223,7 @@ static int intel_ir_set_vcpu_affinity(struct irq_data *data, void *info)
+
+ static struct irq_chip intel_ir_chip = {
+ .name = "INTEL-IR",
+- .irq_ack = ir_ack_apic_edge,
++ .irq_ack = apic_ack_irq,
+ .irq_set_affinity = intel_ir_set_affinity,
+ .irq_compose_msi_msg = intel_ir_compose_msi_msg,
+ .irq_set_vcpu_affinity = intel_ir_set_vcpu_affinity,
+diff --git a/drivers/iommu/irq_remapping.c b/drivers/iommu/irq_remapping.c
+index 496deee3ae3a..7d0f3074d41d 100644
+--- a/drivers/iommu/irq_remapping.c
++++ b/drivers/iommu/irq_remapping.c
+@@ -156,11 +156,6 @@ void panic_if_irq_remap(const char *msg)
+ panic(msg);
+ }
+
+-void ir_ack_apic_edge(struct irq_data *data)
+-{
+- ack_APIC_irq();
+-}
+-
+ /**
+ * irq_remapping_get_ir_irq_domain - Get the irqdomain associated with the IOMMU
+ * device serving request @info
+diff --git a/drivers/iommu/irq_remapping.h b/drivers/iommu/irq_remapping.h
+index 039c7af7b190..0afef6e43be4 100644
+--- a/drivers/iommu/irq_remapping.h
++++ b/drivers/iommu/irq_remapping.h
+@@ -65,8 +65,6 @@ struct irq_remap_ops {
+ extern struct irq_remap_ops intel_irq_remap_ops;
+ extern struct irq_remap_ops amd_iommu_irq_ops;
+
+-extern void ir_ack_apic_edge(struct irq_data *data);
+-
+ #else /* CONFIG_IRQ_REMAP */
+
+ #define irq_remapping_enabled 0
+diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
+index b67be33bd62f..cea7b2d2e60a 100644
+--- a/drivers/media/rc/rc-main.c
++++ b/drivers/media/rc/rc-main.c
+@@ -1860,6 +1860,8 @@ int rc_register_device(struct rc_dev *dev)
+ dev->device_name ?: "Unspecified device", path ?: "N/A");
+ kfree(path);
+
++ dev->registered = true;
++
+ if (dev->driver_type != RC_DRIVER_IR_RAW_TX) {
+ rc = rc_setup_rx_device(dev);
+ if (rc)
+@@ -1879,8 +1881,6 @@ int rc_register_device(struct rc_dev *dev)
+ goto out_lirc;
+ }
+
+- dev->registered = true;
+-
+ dev_dbg(&dev->dev, "Registered rc%u (driver: %s)\n", dev->minor,
+ dev->driver_name ? dev->driver_name : "unknown");
+
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 102594ec3e97..a36b4fb949fa 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1607,14 +1607,12 @@ static int uvc_ctrl_get_flags(struct uvc_device *dev,
+ ret = uvc_query_ctrl(dev, UVC_GET_INFO, ctrl->entity->id, dev->intfnum,
+ info->selector, data, 1);
+ if (!ret)
+- info->flags = UVC_CTRL_FLAG_GET_MIN | UVC_CTRL_FLAG_GET_MAX
+- | UVC_CTRL_FLAG_GET_RES | UVC_CTRL_FLAG_GET_DEF
+- | (data[0] & UVC_CONTROL_CAP_GET ?
+- UVC_CTRL_FLAG_GET_CUR : 0)
+- | (data[0] & UVC_CONTROL_CAP_SET ?
+- UVC_CTRL_FLAG_SET_CUR : 0)
+- | (data[0] & UVC_CONTROL_CAP_AUTOUPDATE ?
+- UVC_CTRL_FLAG_AUTO_UPDATE : 0);
++ info->flags |= (data[0] & UVC_CONTROL_CAP_GET ?
++ UVC_CTRL_FLAG_GET_CUR : 0)
++ | (data[0] & UVC_CONTROL_CAP_SET ?
++ UVC_CTRL_FLAG_SET_CUR : 0)
++ | (data[0] & UVC_CONTROL_CAP_AUTOUPDATE ?
++ UVC_CTRL_FLAG_AUTO_UPDATE : 0);
+
+ kfree(data);
+ return ret;
+@@ -1689,6 +1687,9 @@ static int uvc_ctrl_fill_xu_info(struct uvc_device *dev,
+
+ info->size = le16_to_cpup((__le16 *)data);
+
++ info->flags = UVC_CTRL_FLAG_GET_MIN | UVC_CTRL_FLAG_GET_MAX
++ | UVC_CTRL_FLAG_GET_RES | UVC_CTRL_FLAG_GET_DEF;
++
+ ret = uvc_ctrl_get_flags(dev, ctrl, info);
+ if (ret < 0) {
+ uvc_trace(UVC_TRACE_CONTROL,
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index 58c705f24f96..b594bae1adbd 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -1142,6 +1142,7 @@ static int bond_option_primary_set(struct bonding *bond,
+ slave->dev->name);
+ rcu_assign_pointer(bond->primary_slave, slave);
+ strcpy(bond->params.primary, slave->dev->name);
++ bond->force_primary = true;
+ bond_select_active_slave(bond);
+ goto out;
+ }
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index a50e08bb4748..750007513f9d 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -267,14 +267,13 @@ static int aq_pci_probe(struct pci_dev *pdev,
+ numvecs = min(numvecs, num_online_cpus());
+ /*enable interrupts */
+ #if !AQ_CFG_FORCE_LEGACY_INT
+- numvecs = pci_alloc_irq_vectors(self->pdev, 1, numvecs,
+- PCI_IRQ_MSIX | PCI_IRQ_MSI |
+- PCI_IRQ_LEGACY);
++ err = pci_alloc_irq_vectors(self->pdev, 1, numvecs,
++ PCI_IRQ_MSIX | PCI_IRQ_MSI |
++ PCI_IRQ_LEGACY);
+
+- if (numvecs < 0) {
+- err = numvecs;
++ if (err < 0)
+ goto err_hwinit;
+- }
++ numvecs = err;
+ #endif
+ self->irqvecs = numvecs;
+
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index da07ccdf84bf..eb8dccd24abf 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -126,8 +126,10 @@ static int netvsc_open(struct net_device *net)
+ }
+
+ rdev = nvdev->extension;
+- if (!rdev->link_state)
++ if (!rdev->link_state) {
+ netif_carrier_on(net);
++ netif_tx_wake_all_queues(net);
++ }
+
+ if (vf_netdev) {
+ /* Setting synthetic device up transparently sets
+diff --git a/drivers/net/phy/dp83848.c b/drivers/net/phy/dp83848.c
+index cd09c3af2117..6e8e42361fd5 100644
+--- a/drivers/net/phy/dp83848.c
++++ b/drivers/net/phy/dp83848.c
+@@ -74,6 +74,25 @@ static int dp83848_config_intr(struct phy_device *phydev)
+ return phy_write(phydev, DP83848_MICR, control);
+ }
+
++static int dp83848_config_init(struct phy_device *phydev)
++{
++ int err;
++ int val;
++
++ err = genphy_config_init(phydev);
++ if (err < 0)
++ return err;
++
++ /* DP83620 always reports Auto Negotiation Ability on BMSR. Instead,
++ * we check initial value of BMCR Auto negotiation enable bit
++ */
++ val = phy_read(phydev, MII_BMCR);
++ if (!(val & BMCR_ANENABLE))
++ phydev->autoneg = AUTONEG_DISABLE;
++
++ return 0;
++}
++
+ static struct mdio_device_id __maybe_unused dp83848_tbl[] = {
+ { TI_DP83848C_PHY_ID, 0xfffffff0 },
+ { NS_DP83848C_PHY_ID, 0xfffffff0 },
+@@ -83,7 +102,7 @@ static struct mdio_device_id __maybe_unused dp83848_tbl[] = {
+ };
+ MODULE_DEVICE_TABLE(mdio, dp83848_tbl);
+
+-#define DP83848_PHY_DRIVER(_id, _name) \
++#define DP83848_PHY_DRIVER(_id, _name, _config_init) \
+ { \
+ .phy_id = _id, \
+ .phy_id_mask = 0xfffffff0, \
+@@ -92,7 +111,7 @@ MODULE_DEVICE_TABLE(mdio, dp83848_tbl);
+ .flags = PHY_HAS_INTERRUPT, \
+ \
+ .soft_reset = genphy_soft_reset, \
+- .config_init = genphy_config_init, \
++ .config_init = _config_init, \
+ .suspend = genphy_suspend, \
+ .resume = genphy_resume, \
+ \
+@@ -102,10 +121,14 @@ MODULE_DEVICE_TABLE(mdio, dp83848_tbl);
+ }
+
+ static struct phy_driver dp83848_driver[] = {
+- DP83848_PHY_DRIVER(TI_DP83848C_PHY_ID, "TI DP83848C 10/100 Mbps PHY"),
+- DP83848_PHY_DRIVER(NS_DP83848C_PHY_ID, "NS DP83848C 10/100 Mbps PHY"),
+- DP83848_PHY_DRIVER(TI_DP83620_PHY_ID, "TI DP83620 10/100 Mbps PHY"),
+- DP83848_PHY_DRIVER(TLK10X_PHY_ID, "TI TLK10X 10/100 Mbps PHY"),
++ DP83848_PHY_DRIVER(TI_DP83848C_PHY_ID, "TI DP83848C 10/100 Mbps PHY",
++ genphy_config_init),
++ DP83848_PHY_DRIVER(NS_DP83848C_PHY_ID, "NS DP83848C 10/100 Mbps PHY",
++ genphy_config_init),
++ DP83848_PHY_DRIVER(TI_DP83620_PHY_ID, "TI DP83620 10/100 Mbps PHY",
++ dp83848_config_init),
++ DP83848_PHY_DRIVER(TLK10X_PHY_ID, "TI TLK10X 10/100 Mbps PHY",
++ genphy_config_init),
+ };
+ module_phy_driver(dp83848_driver);
+
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index 9b6cb780affe..f0f7cd977667 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -774,13 +774,16 @@ static ssize_t tap_put_user(struct tap_queue *q,
+ int total;
+
+ if (q->flags & IFF_VNET_HDR) {
++ int vlan_hlen = skb_vlan_tag_present(skb) ? VLAN_HLEN : 0;
+ struct virtio_net_hdr vnet_hdr;
++
+ vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz);
+ if (iov_iter_count(iter) < vnet_hdr_len)
+ return -EINVAL;
+
+ if (virtio_net_hdr_from_skb(skb, &vnet_hdr,
+- tap_is_little_endian(q), true))
++ tap_is_little_endian(q), true,
++ vlan_hlen))
+ BUG();
+
+ if (copy_to_iter(&vnet_hdr, sizeof(vnet_hdr), iter) !=
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 23e9eb66197f..409eb8b74740 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2078,7 +2078,8 @@ static ssize_t tun_put_user(struct tun_struct *tun,
+ return -EINVAL;
+
+ if (virtio_net_hdr_from_skb(skb, &gso,
+- tun_is_little_endian(tun), true)) {
++ tun_is_little_endian(tun), true,
++ vlan_hlen)) {
+ struct skb_shared_info *sinfo = skb_shinfo(skb);
+ pr_err("unexpected GSO type: "
+ "0x%x, gso_size %d, hdr_len %d\n",
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index 90d07ed224d5..b0e8b9613054 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -1124,7 +1124,7 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ * accordingly. Otherwise, we should check here.
+ */
+ if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)
+- delayed_ndp_size = ctx->max_ndp_size;
++ delayed_ndp_size = ALIGN(ctx->max_ndp_size, ctx->tx_ndp_modulus);
+ else
+ delayed_ndp_size = 0;
+
+@@ -1285,7 +1285,7 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ /* If requested, put NDP at end of frame. */
+ if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) {
+ nth16 = (struct usb_cdc_ncm_nth16 *)skb_out->data;
+- cdc_ncm_align_tail(skb_out, ctx->tx_ndp_modulus, 0, ctx->tx_curr_size);
++ cdc_ncm_align_tail(skb_out, ctx->tx_ndp_modulus, 0, ctx->tx_curr_size - ctx->max_ndp_size);
+ nth16->wNdpIndex = cpu_to_le16(skb_out->len);
+ skb_put_data(skb_out, ctx->delayed_ndp16, ctx->max_ndp_size);
+
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 032e1ac10a30..8c7207535179 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1358,7 +1358,8 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
+ hdr = skb_vnet_hdr(skb);
+
+ if (virtio_net_hdr_from_skb(skb, &hdr->hdr,
+- virtio_is_little_endian(vi->vdev), false))
++ virtio_is_little_endian(vi->vdev), false,
++ 0))
+ BUG();
+
+ if (vi->mergeable_rx_bufs)
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/paging.c b/drivers/net/wireless/intel/iwlwifi/fw/paging.c
+index 1fec8e3a6b35..6afcfd1f0eec 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/paging.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/paging.c
+@@ -8,6 +8,7 @@
+ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+ * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
++ * Copyright(c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+@@ -30,6 +31,7 @@
+ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+ * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
++ * Copyright(c) 2018 Intel Corporation
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+@@ -163,7 +165,7 @@ static int iwl_alloc_fw_paging_mem(struct iwl_fw_runtime *fwrt,
+ static int iwl_fill_paging_mem(struct iwl_fw_runtime *fwrt,
+ const struct fw_img *image)
+ {
+- int sec_idx, idx;
++ int sec_idx, idx, ret;
+ u32 offset = 0;
+
+ /*
+@@ -190,17 +192,23 @@ static int iwl_fill_paging_mem(struct iwl_fw_runtime *fwrt,
+ */
+ if (sec_idx >= image->num_sec - 1) {
+ IWL_ERR(fwrt, "Paging: Missing CSS and/or paging sections\n");
+- iwl_free_fw_paging(fwrt);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err;
+ }
+
+ /* copy the CSS block to the dram */
+ IWL_DEBUG_FW(fwrt, "Paging: load paging CSS to FW, sec = %d\n",
+ sec_idx);
+
++ if (image->sec[sec_idx].len > fwrt->fw_paging_db[0].fw_paging_size) {
++ IWL_ERR(fwrt, "CSS block is larger than paging size\n");
++ ret = -EINVAL;
++ goto err;
++ }
++
+ memcpy(page_address(fwrt->fw_paging_db[0].fw_paging_block),
+ image->sec[sec_idx].data,
+- fwrt->fw_paging_db[0].fw_paging_size);
++ image->sec[sec_idx].len);
+ dma_sync_single_for_device(fwrt->trans->dev,
+ fwrt->fw_paging_db[0].fw_paging_phys,
+ fwrt->fw_paging_db[0].fw_paging_size,
+@@ -221,6 +229,14 @@ static int iwl_fill_paging_mem(struct iwl_fw_runtime *fwrt,
+ for (idx = 1; idx < fwrt->num_of_paging_blk; idx++) {
+ struct iwl_fw_paging *block = &fwrt->fw_paging_db[idx];
+
++ if (block->fw_paging_size > image->sec[sec_idx].len - offset) {
++ IWL_ERR(fwrt,
++ "Paging: paging size is larger than remaining data in block %d\n",
++ idx);
++ ret = -EINVAL;
++ goto err;
++ }
++
+ memcpy(page_address(block->fw_paging_block),
+ image->sec[sec_idx].data + offset,
+ block->fw_paging_size);
+@@ -231,19 +247,32 @@ static int iwl_fill_paging_mem(struct iwl_fw_runtime *fwrt,
+
+ IWL_DEBUG_FW(fwrt,
+ "Paging: copied %d paging bytes to block %d\n",
+- fwrt->fw_paging_db[idx].fw_paging_size,
+- idx);
++ block->fw_paging_size, idx);
+
+- offset += fwrt->fw_paging_db[idx].fw_paging_size;
++ offset += block->fw_paging_size;
++
++ if (offset > image->sec[sec_idx].len) {
++ IWL_ERR(fwrt,
++ "Paging: offset goes over section size\n");
++ ret = -EINVAL;
++ goto err;
++ }
+ }
+
+ /* copy the last paging block */
+ if (fwrt->num_of_pages_in_last_blk > 0) {
+ struct iwl_fw_paging *block = &fwrt->fw_paging_db[idx];
+
++ if (image->sec[sec_idx].len - offset > block->fw_paging_size) {
++ IWL_ERR(fwrt,
++ "Paging: last block is larger than paging size\n");
++ ret = -EINVAL;
++ goto err;
++ }
++
+ memcpy(page_address(block->fw_paging_block),
+ image->sec[sec_idx].data + offset,
+- FW_PAGING_SIZE * fwrt->num_of_pages_in_last_blk);
++ image->sec[sec_idx].len - offset);
+ dma_sync_single_for_device(fwrt->trans->dev,
+ block->fw_paging_phys,
+ block->fw_paging_size,
+@@ -255,6 +284,10 @@ static int iwl_fill_paging_mem(struct iwl_fw_runtime *fwrt,
+ }
+
+ return 0;
++
++err:
++ iwl_free_fw_paging(fwrt);
++ return ret;
+ }
+
+ static int iwl_save_fw_paging(struct iwl_fw_runtime *fwrt,
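
The iwlwifi fix applies one uniform pattern: validate the remaining
section length against the destination block before every memcpy(),
and unwind all failures through a single error label that frees the
paging memory. A minimal userspace sketch of the check itself (the
names and the -1 error value are stand-ins for the driver's types):

#include <string.h>

/* Copy src[offset..] into a block of block_size bytes, refusing any
 * request that would read past src or write past the block. */
static int fill_block(char *dst, size_t block_size,
                      const char *src, size_t src_len, size_t offset)
{
        if (offset > src_len)              /* stale offset */
                return -1;
        if (src_len - offset > block_size) /* would overflow dst */
                return -1;
        memcpy(dst, src + offset, src_len - offset);
        return 0;
}
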
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 17a0190bd88f..5dbb0f0c02ef 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2679,8 +2679,15 @@ static pci_ers_result_t nvme_slot_reset(struct pci_dev *pdev)
+
+ dev_info(dev->ctrl.device, "restart after slot reset\n");
+ pci_restore_state(pdev);
+- nvme_reset_ctrl(&dev->ctrl);
+- return PCI_ERS_RESULT_RECOVERED;
++ nvme_reset_ctrl_sync(&dev->ctrl);
++
++ switch (dev->ctrl.state) {
++ case NVME_CTRL_LIVE:
++ case NVME_CTRL_ADMIN_ONLY:
++ return PCI_ERS_RESULT_RECOVERED;
++ default:
++ return PCI_ERS_RESULT_DISCONNECT;
++ }
+ }
+
+ static void nvme_error_resume(struct pci_dev *pdev)
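
The slot-reset hunk trades an optimistic answer for a measured one:
the reset is now synchronous, so the recovery verdict can depend on
the state the controller actually reached. The decision in isolation,
with stand-in enums rather than the driver's real state machine:

enum ctrl_state { CTRL_LIVE, CTRL_ADMIN_ONLY, CTRL_DEAD };
enum ers_result { ERS_RECOVERED, ERS_DISCONNECT };

static enum ers_result slot_reset(enum ctrl_state (*reset_sync)(void))
{
        /* Block until the reset completes, then report recovery only
         * for states in which the controller can service requests. */
        switch (reset_sync()) {
        case CTRL_LIVE:
        case CTRL_ADMIN_ONLY:
                return ERS_RECOVERED;
        default:
                return ERS_DISCONNECT;
        }
}
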
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index f0be5f35ab28..9beefa6ed1ce 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -2345,6 +2345,9 @@ struct vhost_msg_node *vhost_new_msg(struct vhost_virtqueue *vq, int type)
+ struct vhost_msg_node *node = kmalloc(sizeof *node, GFP_KERNEL);
+ if (!node)
+ return NULL;
++
++ /* Make sure all padding within the structure is initialized. */
++ memset(&node->msg, 0, sizeof node->msg);
+ node->vq = vq;
+ node->msg.type = type;
+ return node;
+diff --git a/drivers/w1/masters/mxc_w1.c b/drivers/w1/masters/mxc_w1.c
+index 74f2e6e6202a..8851d441e5fd 100644
+--- a/drivers/w1/masters/mxc_w1.c
++++ b/drivers/w1/masters/mxc_w1.c
+@@ -112,6 +112,10 @@ static int mxc_w1_probe(struct platform_device *pdev)
+ if (IS_ERR(mdev->clk))
+ return PTR_ERR(mdev->clk);
+
++ err = clk_prepare_enable(mdev->clk);
++ if (err)
++ return err;
++
+ clkrate = clk_get_rate(mdev->clk);
+ if (clkrate < 10000000)
+ dev_warn(&pdev->dev,
+@@ -125,12 +129,10 @@ static int mxc_w1_probe(struct platform_device *pdev)
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ mdev->regs = devm_ioremap_resource(&pdev->dev, res);
+- if (IS_ERR(mdev->regs))
+- return PTR_ERR(mdev->regs);
+-
+- err = clk_prepare_enable(mdev->clk);
+- if (err)
+- return err;
++ if (IS_ERR(mdev->regs)) {
++ err = PTR_ERR(mdev->regs);
++ goto out_disable_clk;
++ }
+
+ /* Software reset 1-Wire module */
+ writeb(MXC_W1_RESET_RST, mdev->regs + MXC_W1_RESET);
+@@ -146,8 +148,12 @@ static int mxc_w1_probe(struct platform_device *pdev)
+
+ err = w1_add_master_device(&mdev->bus_master);
+ if (err)
+- clk_disable_unprepare(mdev->clk);
++ goto out_disable_clk;
+
++ return 0;
++
++out_disable_clk:
++ clk_disable_unprepare(mdev->clk);
+ return err;
+ }
+
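
Since the clock is now enabled before the register mapping, every
later failure in probe has to disable it again, which is why the
early returns are converted into jumps to a single out_disable_clk
label. The shape of that idiom, with placeholder helpers standing in
for the clk and ioremap calls:

#include <errno.h>

static int acquire_clock(void)   { return 0; }
static void release_clock(void)  { }
static int map_registers(void)   { return -ENOMEM; } /* simulated failure */
static int register_master(void) { return 0; }

static int probe(void)
{
        int err = acquire_clock();

        if (err)
                return err;        /* nothing held yet */

        err = map_registers();
        if (err)
                goto out_disable_clk;

        err = register_master();
        if (err)
                goto out_disable_clk;

        return 0;

out_disable_clk:
        release_clock();           /* single exit for held resources */
        return err;
}
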
+diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
+index a41b48f82a70..4de191563261 100644
+--- a/fs/binfmt_misc.c
++++ b/fs/binfmt_misc.c
+@@ -387,8 +387,13 @@ static Node *create_entry(const char __user *buffer, size_t count)
+ s = strchr(p, del);
+ if (!s)
+ goto einval;
+- *s++ = '\0';
+- e->offset = simple_strtoul(p, &p, 10);
++ *s = '\0';
++ if (p != s) {
++ int r = kstrtoint(p, 10, &e->offset);
++ if (r != 0 || e->offset < 0)
++ goto einval;
++ }
++ p = s;
+ if (*p++)
+ goto einval;
+ pr_debug("register: offset: %#x\n", e->offset);
+@@ -428,7 +433,8 @@ static Node *create_entry(const char __user *buffer, size_t count)
+ if (e->mask &&
+ string_unescape_inplace(e->mask, UNESCAPE_HEX) != e->size)
+ goto einval;
+- if (e->size + e->offset > BINPRM_BUF_SIZE)
++ if (e->size > BINPRM_BUF_SIZE ||
++ BINPRM_BUF_SIZE - e->size < e->offset)
+ goto einval;
+ pr_debug("register: magic/mask length: %i\n", e->size);
+ if (USE_DEBUG) {
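
Two hardenings land in create_entry(): the offset is parsed with
kstrtoint() and rejected when negative, and the size/offset bound is
rewritten so the addition can no longer wrap past BINPRM_BUF_SIZE. A
userspace equivalent of both checks (BUF_SIZE is a stand-in):

#include <errno.h>
#include <limits.h>
#include <stdlib.h>

#define BUF_SIZE 128

static int parse_offset(const char *s, int *offset)
{
        char *end;
        long v;

        errno = 0;
        v = strtol(s, &end, 10);
        if (errno || end == s || *end != '\0' || v < 0 || v > INT_MAX)
                return -EINVAL;
        *offset = (int)v;
        return 0;
}

static int check_bounds(int size, int offset)
{
        /* "size + offset > BUF_SIZE" can overflow and pass; comparing
         * offset against the space left after size cannot. */
        if (size > BUF_SIZE || BUF_SIZE - size < offset)
                return -EINVAL;
        return 0;
}
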
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 0b86cf10cf2a..775a0f2d0b45 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1018,8 +1018,10 @@ static noinline int cow_file_range(struct inode *inode,
+ ram_size, /* ram_bytes */
+ BTRFS_COMPRESS_NONE, /* compress_type */
+ BTRFS_ORDERED_REGULAR /* type */);
+- if (IS_ERR(em))
++ if (IS_ERR(em)) {
++ ret = PTR_ERR(em);
+ goto out_reserve;
++ }
+ free_extent_map(em);
+
+ ret = btrfs_add_ordered_extent(inode, start, ins.objectid,
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 632e26d6f7ce..28fed3e8960b 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2654,8 +2654,10 @@ static long btrfs_ioctl_rm_dev_v2(struct file *file, void __user *arg)
+ }
+
+ /* Check for compatibility reject unknown flags */
+- if (vol_args->flags & ~BTRFS_VOL_ARG_V2_FLAGS_SUPPORTED)
+- return -EOPNOTSUPP;
++ if (vol_args->flags & ~BTRFS_VOL_ARG_V2_FLAGS_SUPPORTED) {
++ ret = -EOPNOTSUPP;
++ goto out;
++ }
+
+ if (test_and_set_bit(BTRFS_FS_EXCL_OP, &fs_info->flags)) {
+ ret = BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;
+@@ -3826,11 +3828,6 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src,
+ src->i_sb != inode->i_sb)
+ return -EXDEV;
+
+- /* don't make the dst file partly checksummed */
+- if ((BTRFS_I(src)->flags & BTRFS_INODE_NODATASUM) !=
+- (BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM))
+- return -EINVAL;
+-
+ if (S_ISDIR(src->i_mode) || S_ISDIR(inode->i_mode))
+ return -EISDIR;
+
+@@ -3840,6 +3837,13 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src,
+ inode_lock(src);
+ }
+
++ /* don't make the dst file partly checksummed */
++ if ((BTRFS_I(src)->flags & BTRFS_INODE_NODATASUM) !=
++ (BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) {
++ ret = -EINVAL;
++ goto out_unlock;
++ }
++
+ /* determine range to clone */
+ ret = -EINVAL;
+ if (off + len > src->i_size || off + len < off)
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 52b39a0924e9..ad8a69ba7f13 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -2799,7 +2799,7 @@ static int scrub_extent(struct scrub_ctx *sctx, struct map_lookup *map,
+ have_csum = scrub_find_csum(sctx, logical, csum);
+ if (have_csum == 0)
+ ++sctx->stat.no_csum;
+- if (sctx->is_dev_replace && !have_csum) {
++ if (0 && sctx->is_dev_replace && !have_csum) {
+ ret = copy_nocow_pages(sctx, logical, l,
+ mirror_num,
+ physical_for_dev_replace);
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 0628092b0b1b..f82152a0cb38 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -323,6 +323,7 @@ enum {
+ Opt_ssd, Opt_nossd,
+ Opt_ssd_spread, Opt_nossd_spread,
+ Opt_subvol,
++ Opt_subvol_empty,
+ Opt_subvolid,
+ Opt_thread_pool,
+ Opt_treelog, Opt_notreelog,
+@@ -388,6 +389,7 @@ static const match_table_t tokens = {
+ {Opt_ssd_spread, "ssd_spread"},
+ {Opt_nossd_spread, "nossd_spread"},
+ {Opt_subvol, "subvol=%s"},
++ {Opt_subvol_empty, "subvol="},
+ {Opt_subvolid, "subvolid=%s"},
+ {Opt_thread_pool, "thread_pool=%u"},
+ {Opt_treelog, "treelog"},
+@@ -461,6 +463,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ btrfs_set_opt(info->mount_opt, DEGRADED);
+ break;
+ case Opt_subvol:
++ case Opt_subvol_empty:
+ case Opt_subvolid:
+ case Opt_subvolrootid:
+ case Opt_device:
+diff --git a/fs/cifs/cifsacl.h b/fs/cifs/cifsacl.h
+index 4f3884835267..dd95a6fa24bf 100644
+--- a/fs/cifs/cifsacl.h
++++ b/fs/cifs/cifsacl.h
+@@ -98,4 +98,18 @@ struct cifs_ace {
+ struct cifs_sid sid; /* ie UUID of user or group who gets these perms */
+ } __attribute__((packed));
+
++/*
++ * Minimum security identifier can be one for system defined Users
++ * and Groups such as NULL SID and World or Built-in accounts such
++ * as Administrator and Guest and consists of
++ * Revision + Num (Sub)Auths + Authority + Domain (one Subauthority)
++ */
++#define MIN_SID_LEN (1 + 1 + 6 + 4) /* in bytes */
++
++/*
++ * Minimum security descriptor can be one without any SACL and DACL and can
++ * consist of revision, type, and two sids of minimum size for owner and group
++ */
++#define MIN_SEC_DESC_LEN (sizeof(struct cifs_ntsd) + (2 * MIN_SID_LEN))
++
+ #endif /* _CIFSACL_H */
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 9c6d95ffca97..4ee32488ff74 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1277,10 +1277,11 @@ smb2_is_session_expired(char *buf)
+ {
+ struct smb2_sync_hdr *shdr = get_sync_hdr(buf);
+
+- if (shdr->Status != STATUS_NETWORK_SESSION_EXPIRED)
++ if (shdr->Status != STATUS_NETWORK_SESSION_EXPIRED &&
++ shdr->Status != STATUS_USER_SESSION_DELETED)
+ return false;
+
+- cifs_dbg(FYI, "Session expired\n");
++ cifs_dbg(FYI, "Session expired or deleted\n");
+ return true;
+ }
+
+@@ -1593,8 +1594,11 @@ get_smb2_acl_by_path(struct cifs_sb_info *cifs_sb,
+ oparms.create_options = 0;
+
+ utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
+- if (!utf16_path)
+- return ERR_PTR(-ENOMEM);
++ if (!utf16_path) {
++ rc = -ENOMEM;
++ free_xid(xid);
++ return ERR_PTR(rc);
++ }
+
+ oparms.tcon = tcon;
+ oparms.desired_access = READ_CONTROL;
+@@ -1652,8 +1656,11 @@ set_smb2_acl(struct cifs_ntsd *pnntsd, __u32 acllen,
+ access_flags = WRITE_DAC;
+
+ utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
+- if (!utf16_path)
+- return -ENOMEM;
++ if (!utf16_path) {
++ rc = -ENOMEM;
++ free_xid(xid);
++ return rc;
++ }
+
+ oparms.tcon = tcon;
+ oparms.desired_access = access_flags;
+@@ -1713,15 +1720,21 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+
+ /* if file not oplocked can't be sure whether asking to extend size */
+ if (!CIFS_CACHE_READ(cifsi))
+- if (keep_size == false)
+- return -EOPNOTSUPP;
++ if (keep_size == false) {
++ rc = -EOPNOTSUPP;
++ free_xid(xid);
++ return rc;
++ }
+
+ /*
+ * Must check if file sparse since fallocate -z (zero range) assumes
+ * non-sparse allocation
+ */
+- if (!(cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE))
+- return -EOPNOTSUPP;
++ if (!(cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE)) {
++ rc = -EOPNOTSUPP;
++ free_xid(xid);
++ return rc;
++ }
+
+ /*
+ * need to make sure we are not asked to extend the file since the SMB3
+@@ -1730,8 +1743,11 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+ * which for a non sparse file would zero the newly extended range
+ */
+ if (keep_size == false)
+- if (i_size_read(inode) < offset + len)
+- return -EOPNOTSUPP;
++ if (i_size_read(inode) < offset + len) {
++ rc = -EOPNOTSUPP;
++ free_xid(xid);
++ return rc;
++ }
+
+ cifs_dbg(FYI, "offset %lld len %lld", offset, len);
+
+@@ -1764,8 +1780,11 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+
+ /* Need to make file sparse, if not already, before freeing range. */
+ /* Consider adding equivalent for compressed since it could also work */
+- if (!smb2_set_sparse(xid, tcon, cfile, inode, set_sparse))
+- return -EOPNOTSUPP;
++ if (!smb2_set_sparse(xid, tcon, cfile, inode, set_sparse)) {
++ rc = -EOPNOTSUPP;
++ free_xid(xid);
++ return rc;
++ }
+
+ cifs_dbg(FYI, "offset %lld len %lld", offset, len);
+
+@@ -1796,8 +1815,10 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+
+ /* if file not oplocked can't be sure whether asking to extend size */
+ if (!CIFS_CACHE_READ(cifsi))
+- if (keep_size == false)
+- return -EOPNOTSUPP;
++ if (keep_size == false) {
++ free_xid(xid);
++ return rc;
++ }
+
+ /*
+ * Files are non-sparse by default so falloc may be a no-op
+@@ -1806,14 +1827,16 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ */
+ if ((cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) == 0) {
+ if (keep_size == true)
+- return 0;
++ rc = 0;
+ /* check if extending file */
+ else if (i_size_read(inode) >= off + len)
+ /* not extending file and already not sparse */
+- return 0;
++ rc = 0;
+ /* BB: in future add else clause to extend file */
+ else
+- return -EOPNOTSUPP;
++ rc = -EOPNOTSUPP;
++ free_xid(xid);
++ return rc;
+ }
+
+ if ((keep_size == true) || (i_size_read(inode) >= off + len)) {
+@@ -1825,8 +1848,11 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ * ie potentially making a few extra pages at the beginning
+ * or end of the file non-sparse via set_sparse is harmless.
+ */
+- if ((off > 8192) || (off + len + 8192 < i_size_read(inode)))
+- return -EOPNOTSUPP;
++ if ((off > 8192) || (off + len + 8192 < i_size_read(inode))) {
++ rc = -EOPNOTSUPP;
++ free_xid(xid);
++ return rc;
++ }
+
+ rc = smb2_set_sparse(xid, tcon, cfile, inode, false);
+ }
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 0f48741a0130..32d7fd830aae 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1276,6 +1276,7 @@ SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses,
+ sess_data->ses = ses;
+ sess_data->buf0_type = CIFS_NO_BUFFER;
+ sess_data->nls_cp = (struct nls_table *) nls_cp;
++ sess_data->previous_session = ses->Suid;
+
+ #ifdef CONFIG_CIFS_SMB311
+ /*
+@@ -2377,8 +2378,7 @@ SMB2_query_acl(const unsigned int xid, struct cifs_tcon *tcon,
+
+ return query_info(xid, tcon, persistent_fid, volatile_fid,
+ 0, SMB2_O_INFO_SECURITY, additional_info,
+- SMB2_MAX_BUFFER_SIZE,
+- sizeof(struct smb2_file_all_info), data, plen);
++ SMB2_MAX_BUFFER_SIZE, MIN_SEC_DESC_LEN, data, plen);
+ }
+
+ int
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index c32802c956d5..bf7fa1507e81 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -561,10 +561,16 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
+ unsigned epb = inode->i_sb->s_blocksize / sizeof(u32);
+ int i;
+
+- /* Count number blocks in a subtree under 'partial' */
+- count = 1;
+- for (i = 0; partial + i != chain + depth - 1; i++)
+- count *= epb;
++ /*
++ * Count number blocks in a subtree under 'partial'. At each
++ * level we count number of complete empty subtrees beyond
++ * current offset and then descend into the subtree only
++ * partially beyond current offset.
++ */
++ count = 0;
++ for (i = partial - chain + 1; i < depth; i++)
++ count = count * epb + (epb - offsets[i] - 1);
++ count++;
+ /* Fill in size of a hole we found */
+ map->m_pblk = 0;
+ map->m_len = min_t(unsigned int, map->m_len, count);
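
The corrected loop sizes the hole by adding, at each level below the
point where the lookup stopped, the complete empty subtrees to the
right of the current offset, and finally counts the block itself. A
numeric walk-through with purely illustrative parameters:

#include <stdio.h>

int main(void)
{
        unsigned int epb = 1024;        /* block pointers per block */
        int offsets[] = { 11, 5, 7 };   /* hypothetical lookup path */
        int depth = 3, partial = 1;     /* lookup stopped at level 1 */
        unsigned long count = 0;
        int i;

        for (i = partial + 1; i < depth; i++)
                count = count * epb + (epb - offsets[i] - 1);
        count++;
        /* (1024 - 7 - 1) + 1 = 1017 blocks of hole here, where the
         * old code reported a full epb = 1024 regardless of offset. */
        printf("hole length: %lu blocks\n", count);
        return 0;
}
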
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 70cf4c7b268a..44b4fcdc3755 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -144,6 +144,12 @@ int ext4_find_inline_data_nolock(struct inode *inode)
+ goto out;
+
+ if (!is.s.not_found) {
++ if (is.s.here->e_value_inum) {
++ EXT4_ERROR_INODE(inode, "inline data xattr refers "
++ "to an external xattr inode");
++ error = -EFSCORRUPTED;
++ goto out;
++ }
+ EXT4_I(inode)->i_inline_off = (u16)((void *)is.s.here -
+ (void *)ext4_raw_inode(&is.iloc));
+ EXT4_I(inode)->i_inline_size = EXT4_MIN_INLINE_DATA_SIZE +
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 1e50c5efae67..c73cb9346aee 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4298,28 +4298,28 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ EXT4_BLOCK_SIZE_BITS(sb);
+ stop_block = (offset + length) >> EXT4_BLOCK_SIZE_BITS(sb);
+
+- /* If there are no blocks to remove, return now */
+- if (first_block >= stop_block)
+- goto out_stop;
++ /* If there are blocks to remove, do it */
++ if (stop_block > first_block) {
+
+- down_write(&EXT4_I(inode)->i_data_sem);
+- ext4_discard_preallocations(inode);
++ down_write(&EXT4_I(inode)->i_data_sem);
++ ext4_discard_preallocations(inode);
+
+- ret = ext4_es_remove_extent(inode, first_block,
+- stop_block - first_block);
+- if (ret) {
+- up_write(&EXT4_I(inode)->i_data_sem);
+- goto out_stop;
+- }
++ ret = ext4_es_remove_extent(inode, first_block,
++ stop_block - first_block);
++ if (ret) {
++ up_write(&EXT4_I(inode)->i_data_sem);
++ goto out_stop;
++ }
+
+- if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+- ret = ext4_ext_remove_space(inode, first_block,
+- stop_block - 1);
+- else
+- ret = ext4_ind_remove_space(handle, inode, first_block,
+- stop_block);
++ if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
++ ret = ext4_ext_remove_space(inode, first_block,
++ stop_block - 1);
++ else
++ ret = ext4_ind_remove_space(handle, inode, first_block,
++ stop_block);
+
+- up_write(&EXT4_I(inode)->i_data_sem);
++ up_write(&EXT4_I(inode)->i_data_sem);
++ }
+ if (IS_SYNC(inode))
+ ext4_handle_sync(handle);
+
+@@ -4701,19 +4701,21 @@ static blkcnt_t ext4_inode_blocks(struct ext4_inode *raw_inode,
+ }
+ }
+
+-static inline void ext4_iget_extra_inode(struct inode *inode,
++static inline int ext4_iget_extra_inode(struct inode *inode,
+ struct ext4_inode *raw_inode,
+ struct ext4_inode_info *ei)
+ {
+ __le32 *magic = (void *)raw_inode +
+ EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize;
++
+ if (EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize + sizeof(__le32) <=
+ EXT4_INODE_SIZE(inode->i_sb) &&
+ *magic == cpu_to_le32(EXT4_XATTR_MAGIC)) {
+ ext4_set_inode_state(inode, EXT4_STATE_XATTR);
+- ext4_find_inline_data_nolock(inode);
++ return ext4_find_inline_data_nolock(inode);
+ } else
+ EXT4_I(inode)->i_inline_off = 0;
++ return 0;
+ }
+
+ int ext4_get_projid(struct inode *inode, kprojid_t *projid)
+@@ -4893,7 +4895,9 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ ei->i_extra_isize = sizeof(struct ext4_inode) -
+ EXT4_GOOD_OLD_INODE_SIZE;
+ } else {
+- ext4_iget_extra_inode(inode, raw_inode, ei);
++ ret = ext4_iget_extra_inode(inode, raw_inode, ei);
++ if (ret)
++ goto bad_inode;
+ }
+ }
+
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index b6bec270a8e4..d792b7689d92 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1933,7 +1933,7 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
+ return 0;
+
+ n_group = ext4_get_group_number(sb, n_blocks_count - 1);
+- if (n_group > (0xFFFFFFFFUL / EXT4_INODES_PER_GROUP(sb))) {
++ if (n_group >= (0xFFFFFFFFUL / EXT4_INODES_PER_GROUP(sb))) {
+ ext4_warning(sb, "resize would cause inodes_count overflow");
+ return -EINVAL;
+ }
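
n_group is a zero-based group index, so the resized filesystem holds
n_group + 1 groups; at exactly n_group == 0xFFFFFFFF /
EXT4_INODES_PER_GROUP the inode count already exceeds 32 bits, hence
">=" rather than ">". Checking the boundary with an illustrative
group size:

#include <stdio.h>

int main(void)
{
        unsigned long long max = 0xFFFFFFFFULL;
        unsigned long long inodes_per_group = 8192; /* illustrative */
        unsigned long long limit = max / inodes_per_group;

        /* limit == 524287; one more group overflows a 32-bit count. */
        printf("n_group == %llu gives %llu inodes (max %llu)\n",
               limit, (limit + 1) * inodes_per_group, max);
        return 0;
}
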
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 499cb4b1fbd2..fc4ced59c565 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1688,7 +1688,7 @@ static int ext4_xattr_set_entry(struct ext4_xattr_info *i,
+
+ /* No failures allowed past this point. */
+
+- if (!s->not_found && here->e_value_offs) {
++ if (!s->not_found && here->e_value_size && here->e_value_offs) {
+ /* Remove the old value. */
+ void *first_val = s->base + min_offs;
+ size_t offs = le16_to_cpu(here->e_value_offs);
+diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c
+index 79c61da8b1bc..c65a51d87cac 100644
+--- a/fs/orangefs/inode.c
++++ b/fs/orangefs/inode.c
+@@ -269,6 +269,13 @@ int orangefs_getattr(const struct path *path, struct kstat *stat,
+ else
+ stat->result_mask = STATX_BASIC_STATS &
+ ~STATX_SIZE;
++
++ stat->attributes_mask = STATX_ATTR_IMMUTABLE |
++ STATX_ATTR_APPEND;
++ if (inode->i_flags & S_IMMUTABLE)
++ stat->attributes |= STATX_ATTR_IMMUTABLE;
++ if (inode->i_flags & S_APPEND)
++ stat->attributes |= STATX_ATTR_APPEND;
+ }
+ return ret;
+ }
+diff --git a/fs/orangefs/namei.c b/fs/orangefs/namei.c
+index 1b5707c44c3f..e026bee02a66 100644
+--- a/fs/orangefs/namei.c
++++ b/fs/orangefs/namei.c
+@@ -326,6 +326,13 @@ static int orangefs_symlink(struct inode *dir,
+ ret = PTR_ERR(inode);
+ goto out;
+ }
++ /*
++ * This is necessary because orangefs_inode_getattr will not
++ * re-read symlink size as it is impossible for it to change.
++ * Invalidating the cache does not help. orangefs_new_inode
++ * does not set the correct size (it does not know symname).
++ */
++ inode->i_size = strlen(symname);
+
+ gossip_debug(GOSSIP_NAME_DEBUG,
+ "Assigned symlink inode new number of %pU\n",
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index 65916a305f3d..4e66378f290b 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -551,7 +551,12 @@ extern int irq_affinity_online_cpu(unsigned int cpu);
+ #endif
+
+ #if defined(CONFIG_SMP) && defined(CONFIG_GENERIC_PENDING_IRQ)
+-void irq_move_irq(struct irq_data *data);
++void __irq_move_irq(struct irq_data *data);
++static inline void irq_move_irq(struct irq_data *data)
++{
++ if (unlikely(irqd_is_setaffinity_pending(data)))
++ __irq_move_irq(data);
++}
+ void irq_move_masked_irq(struct irq_data *data);
+ void irq_force_complete_move(struct irq_desc *desc);
+ #else
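
The header change is the usual fast-path split: the cheap "anything
pending?" test is decided inline at every call site, and only the
unlikely case pays for a call into kernel/irq/migration.c. Reduced to
its skeleton with generic names:

#include <stdbool.h>

void __move_slow(int irq);      /* rare case, out of line */

static inline void maybe_move(int irq, bool pending)
{
        /* The common no-op case costs one predictable branch and no
         * function call. */
        if (pending)
                __move_slow(irq);
}

void __move_slow(int irq)
{
        (void)irq;              /* expensive rebalancing would go here */
}
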
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index f144216febc6..9397628a1967 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -58,7 +58,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb,
+ struct virtio_net_hdr *hdr,
+ bool little_endian,
+- bool has_data_valid)
++ bool has_data_valid,
++ int vlan_hlen)
+ {
+ memset(hdr, 0, sizeof(*hdr)); /* no info leak */
+
+@@ -83,12 +84,8 @@ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb,
+
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+- if (skb_vlan_tag_present(skb))
+- hdr->csum_start = __cpu_to_virtio16(little_endian,
+- skb_checksum_start_offset(skb) + VLAN_HLEN);
+- else
+- hdr->csum_start = __cpu_to_virtio16(little_endian,
+- skb_checksum_start_offset(skb));
++ hdr->csum_start = __cpu_to_virtio16(little_endian,
++ skb_checksum_start_offset(skb) + vlan_hlen);
+ hdr->csum_offset = __cpu_to_virtio16(little_endian,
+ skb->csum_offset);
+ } else if (has_data_valid &&
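
The interface change replaces a skb_vlan_tag_present() probe inside
the helper with an explicit contract: the caller states how many VLAN
bytes it will itself insert in front of the packet, and csum_start is
shifted by exactly that amount (tun passes its vlan_hlen; virtio_net
and af_packet pass 0). The adjustment in isolation, with illustrative
constants:

#include <stdint.h>
#include <stdio.h>

#define VLAN_HLEN 4     /* 802.1Q tag length */

static uint16_t csum_start(uint16_t checksum_start_offset, int vlan_hlen)
{
        /* vlan_hlen is whatever the caller will prepend: 0 or VLAN_HLEN. */
        return checksum_start_offset + vlan_hlen;
}

int main(void)
{
        printf("tagged:   %u\n", csum_start(14, VLAN_HLEN));
        printf("untagged: %u\n", csum_start(14, 0));
        return 0;
}
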
+diff --git a/include/net/transp_v6.h b/include/net/transp_v6.h
+index c4f5caaf3778..f6a3543e5247 100644
+--- a/include/net/transp_v6.h
++++ b/include/net/transp_v6.h
+@@ -45,8 +45,15 @@ int ip6_datagram_send_ctl(struct net *net, struct sock *sk, struct msghdr *msg,
+ struct flowi6 *fl6, struct ipcm6_cookie *ipc6,
+ struct sockcm_cookie *sockc);
+
+-void ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp,
+- __u16 srcp, __u16 destp, int bucket);
++void __ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp,
++ __u16 srcp, __u16 destp, int rqueue, int bucket);
++static inline void
++ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp, __u16 srcp,
++ __u16 destp, int bucket)
++{
++ __ip6_dgram_sock_seq_show(seq, sp, srcp, destp, sk_rmem_alloc_get(sp),
++ bucket);
++}
+
+ #define LOOPBACK4_IPV6 cpu_to_be32(0x7f000006)
+
+diff --git a/include/net/udp.h b/include/net/udp.h
+index 0676b272f6ac..1db85dcb06f6 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -244,6 +244,11 @@ static inline __be16 udp_flow_src_port(struct net *net, struct sk_buff *skb,
+ return htons((((u64) hash * (max - min)) >> 32) + min);
+ }
+
++static inline int udp_rqueue_get(struct sock *sk)
++{
++ return sk_rmem_alloc_get(sk) - READ_ONCE(udp_sk(sk)->forward_deficit);
++}
++
+ /* net/ipv4/udp.c */
+ void udp_destruct_sock(struct sock *sk);
+ void skb_consume_udp(struct sock *sk, struct sk_buff *skb, int len);
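
forward_deficit tracks receive memory already consumed but whose
accounting has not yet been returned to the socket, so subtracting it
reports the queue userspace can actually still read; the /proc and
diag call sites below are converted to this helper. Its effect in a
self-contained form (the struct is a stand-in, not the kernel's):

struct udp_view {
        int rmem_alloc;       /* bytes charged to sk_rmem_alloc */
        int forward_deficit;  /* consumed but not yet uncharged */
};

static int udp_rqueue_get_view(const struct udp_view *sk)
{
        /* Report what a reader can still dequeue, not the raw charge. */
        return sk->rmem_alloc - sk->forward_deficit;
}
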
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index e3336d904f64..facfecfc543c 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -204,6 +204,39 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
+ return ret;
+ }
+
++#ifdef CONFIG_GENERIC_PENDING_IRQ
++static inline int irq_set_affinity_pending(struct irq_data *data,
++ const struct cpumask *dest)
++{
++ struct irq_desc *desc = irq_data_to_desc(data);
++
++ irqd_set_move_pending(data);
++ irq_copy_pending(desc, dest);
++ return 0;
++}
++#else
++static inline int irq_set_affinity_pending(struct irq_data *data,
++ const struct cpumask *dest)
++{
++ return -EBUSY;
++}
++#endif
++
++static int irq_try_set_affinity(struct irq_data *data,
++ const struct cpumask *dest, bool force)
++{
++ int ret = irq_do_set_affinity(data, dest, force);
++
++ /*
++ * In case that the underlying vector management is busy and the
++ * architecture supports the generic pending mechanism then utilize
++ * this to avoid returning an error to user space.
++ */
++ if (ret == -EBUSY && !force)
++ ret = irq_set_affinity_pending(data, dest);
++ return ret;
++}
++
+ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
+ bool force)
+ {
+@@ -214,8 +247,8 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
+ if (!chip || !chip->irq_set_affinity)
+ return -EINVAL;
+
+- if (irq_can_move_pcntxt(data)) {
+- ret = irq_do_set_affinity(data, mask, force);
++ if (irq_can_move_pcntxt(data) && !irqd_is_setaffinity_pending(data)) {
++ ret = irq_try_set_affinity(data, mask, force);
+ } else {
+ irqd_set_move_pending(data);
+ irq_copy_pending(desc, mask);
+diff --git a/kernel/irq/migration.c b/kernel/irq/migration.c
+index 86ae0eb80b53..def48589ea48 100644
+--- a/kernel/irq/migration.c
++++ b/kernel/irq/migration.c
+@@ -38,17 +38,18 @@ bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear)
+ void irq_move_masked_irq(struct irq_data *idata)
+ {
+ struct irq_desc *desc = irq_data_to_desc(idata);
+- struct irq_chip *chip = desc->irq_data.chip;
++ struct irq_data *data = &desc->irq_data;
++ struct irq_chip *chip = data->chip;
+
+- if (likely(!irqd_is_setaffinity_pending(&desc->irq_data)))
++ if (likely(!irqd_is_setaffinity_pending(data)))
+ return;
+
+- irqd_clr_move_pending(&desc->irq_data);
++ irqd_clr_move_pending(data);
+
+ /*
+ * Paranoia: cpu-local interrupts shouldn't be calling in here anyway.
+ */
+- if (irqd_is_per_cpu(&desc->irq_data)) {
++ if (irqd_is_per_cpu(data)) {
+ WARN_ON(1);
+ return;
+ }
+@@ -73,13 +74,24 @@ void irq_move_masked_irq(struct irq_data *idata)
+ * For correct operation this depends on the caller
+ * masking the irqs.
+ */
+- if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids)
+- irq_do_set_affinity(&desc->irq_data, desc->pending_mask, false);
+-
++ if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids) {
++ int ret;
++
++ ret = irq_do_set_affinity(data, desc->pending_mask, false);
++ /*
++ * If the there is a cleanup pending in the underlying
++ * vector management, reschedule the move for the next
++ * interrupt. Leave desc->pending_mask intact.
++ */
++ if (ret == -EBUSY) {
++ irqd_set_move_pending(data);
++ return;
++ }
++ }
+ cpumask_clear(desc->pending_mask);
+ }
+
+-void irq_move_irq(struct irq_data *idata)
++void __irq_move_irq(struct irq_data *idata)
+ {
+ bool masked;
+
+@@ -90,9 +102,6 @@ void irq_move_irq(struct irq_data *idata)
+ */
+ idata = irq_desc_get_irq_data(irq_data_to_desc(idata));
+
+- if (likely(!irqd_is_setaffinity_pending(idata)))
+- return;
+-
+ if (unlikely(irqd_irq_disabled(idata)))
+ return;
+
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index 7441bd93b732..8fe3ebd6ac00 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -412,6 +412,7 @@ static void wb_exit(struct bdi_writeback *wb)
+ * protected.
+ */
+ static DEFINE_SPINLOCK(cgwb_lock);
++static struct workqueue_struct *cgwb_release_wq;
+
+ /**
+ * wb_congested_get_create - get or create a wb_congested
+@@ -522,7 +523,7 @@ static void cgwb_release(struct percpu_ref *refcnt)
+ {
+ struct bdi_writeback *wb = container_of(refcnt, struct bdi_writeback,
+ refcnt);
+- schedule_work(&wb->release_work);
++ queue_work(cgwb_release_wq, &wb->release_work);
+ }
+
+ static void cgwb_kill(struct bdi_writeback *wb)
+@@ -784,6 +785,21 @@ static void cgwb_bdi_register(struct backing_dev_info *bdi)
+ spin_unlock_irq(&cgwb_lock);
+ }
+
++static int __init cgwb_init(void)
++{
++ /*
++ * There can be many concurrent release work items overwhelming
++ * system_wq. Put them in a separate wq and limit concurrency.
++ * There's no point in executing many of these in parallel.
++ */
++ cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 1);
++ if (!cgwb_release_wq)
++ return -ENOMEM;
++
++ return 0;
++}
++subsys_initcall(cgwb_init);
++
+ #else /* CONFIG_CGROUP_WRITEBACK */
+
+ static int cgwb_bdi_init(struct backing_dev_info *bdi)
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 22320ea27489..d2d0eb9536a3 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4162,7 +4162,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ * orientated.
+ */
+ if (!(alloc_flags & ALLOC_CPUSET) || reserve_flags) {
+- ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
+ ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
+ ac->high_zoneidx, ac->nodemask);
+ }
+diff --git a/net/dsa/tag_trailer.c b/net/dsa/tag_trailer.c
+index 7d20e1f3de28..56197f0d9608 100644
+--- a/net/dsa/tag_trailer.c
++++ b/net/dsa/tag_trailer.c
+@@ -75,7 +75,8 @@ static struct sk_buff *trailer_rcv(struct sk_buff *skb, struct net_device *dev,
+ if (!skb->dev)
+ return NULL;
+
+- pskb_trim_rcsum(skb, skb->len - 4);
++ if (pskb_trim_rcsum(skb, skb->len - 4))
++ return NULL;
+
+ return skb;
+ }
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index f70586b50838..ef8cd0f7db89 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1689,6 +1689,10 @@ int tcp_v4_rcv(struct sk_buff *skb)
+ reqsk_put(req);
+ goto discard_it;
+ }
++ if (tcp_checksum_complete(skb)) {
++ reqsk_put(req);
++ goto csum_error;
++ }
+ if (unlikely(sk->sk_state != TCP_LISTEN)) {
+ inet_csk_reqsk_queue_drop_and_put(sk, req);
+ goto lookup;
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index b61a770884fa..5f7bc5c6366a 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2718,7 +2718,7 @@ static void udp4_format_sock(struct sock *sp, struct seq_file *f,
+ " %02X %08X:%08X %02X:%08lX %08X %5u %8d %lu %d %pK %d",
+ bucket, src, srcp, dest, destp, sp->sk_state,
+ sk_wmem_alloc_get(sp),
+- sk_rmem_alloc_get(sp),
++ udp_rqueue_get(sp),
+ 0, 0L, 0,
+ from_kuid_munged(seq_user_ns(f), sock_i_uid(sp)),
+ 0, sock_i_ino(sp),
+diff --git a/net/ipv4/udp_diag.c b/net/ipv4/udp_diag.c
+index d0390d844ac8..d9ad986c7b2c 100644
+--- a/net/ipv4/udp_diag.c
++++ b/net/ipv4/udp_diag.c
+@@ -163,7 +163,7 @@ static int udp_diag_dump_one(struct sk_buff *in_skb, const struct nlmsghdr *nlh,
+ static void udp_diag_get_info(struct sock *sk, struct inet_diag_msg *r,
+ void *info)
+ {
+- r->idiag_rqueue = sk_rmem_alloc_get(sk);
++ r->idiag_rqueue = udp_rqueue_get(sk);
+ r->idiag_wqueue = sk_wmem_alloc_get(sk);
+ }
+
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index a02ad100f0d7..2ee08b6a86a4 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -1019,8 +1019,8 @@ int ip6_datagram_send_ctl(struct net *net, struct sock *sk,
+ }
+ EXPORT_SYMBOL_GPL(ip6_datagram_send_ctl);
+
+-void ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp,
+- __u16 srcp, __u16 destp, int bucket)
++void __ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp,
++ __u16 srcp, __u16 destp, int rqueue, int bucket)
+ {
+ const struct in6_addr *dest, *src;
+
+@@ -1036,7 +1036,7 @@ void ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp,
+ dest->s6_addr32[2], dest->s6_addr32[3], destp,
+ sp->sk_state,
+ sk_wmem_alloc_get(sp),
+- sk_rmem_alloc_get(sp),
++ rqueue,
+ 0, 0L, 0,
+ from_kuid_munged(seq_user_ns(seq), sock_i_uid(sp)),
+ 0,
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 4530a82aaa2e..b94345e657f7 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2149,9 +2149,6 @@ static void __ip6_rt_update_pmtu(struct dst_entry *dst, const struct sock *sk,
+ const struct in6_addr *daddr, *saddr;
+ struct rt6_info *rt6 = (struct rt6_info *)dst;
+
+- if (rt6->rt6i_flags & RTF_LOCAL)
+- return;
+-
+ if (dst_metric_locked(dst, RTAX_MTU))
+ return;
+
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 6d664d83cd16..5d4eb9d2c3a7 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1475,6 +1475,10 @@ static int tcp_v6_rcv(struct sk_buff *skb)
+ reqsk_put(req);
+ goto discard_it;
+ }
++ if (tcp_checksum_complete(skb)) {
++ reqsk_put(req);
++ goto csum_error;
++ }
+ if (unlikely(sk->sk_state != TCP_LISTEN)) {
+ inet_csk_reqsk_queue_drop_and_put(sk, req);
+ goto lookup;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index ea0730028e5d..977bd5a07cab 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1475,7 +1475,8 @@ int udp6_seq_show(struct seq_file *seq, void *v)
+ struct inet_sock *inet = inet_sk(v);
+ __u16 srcp = ntohs(inet->inet_sport);
+ __u16 destp = ntohs(inet->inet_dport);
+- ip6_dgram_sock_seq_show(seq, v, srcp, destp, bucket);
++ __ip6_dgram_sock_seq_show(seq, v, srcp, destp,
++ udp_rqueue_get(v), bucket);
+ }
+ return 0;
+ }
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 60c2a252bdf5..38d132d007ba 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2037,7 +2037,7 @@ static int packet_rcv_vnet(struct msghdr *msg, const struct sk_buff *skb,
+ return -EINVAL;
+ *len -= sizeof(vnet_hdr);
+
+- if (virtio_net_hdr_from_skb(skb, &vnet_hdr, vio_le(), true))
++ if (virtio_net_hdr_from_skb(skb, &vnet_hdr, vio_le(), true, 0))
+ return -EINVAL;
+
+ return memcpy_to_msg(msg, (void *)&vnet_hdr, sizeof(vnet_hdr));
+@@ -2304,7 +2304,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ if (do_vnet) {
+ if (virtio_net_hdr_from_skb(skb, h.raw + macoff -
+ sizeof(struct virtio_net_hdr),
+- vio_le(), true)) {
++ vio_le(), true, 0)) {
+ spin_lock(&sk->sk_receive_queue.lock);
+ goto drop_n_account;
+ }
+diff --git a/net/sched/act_simple.c b/net/sched/act_simple.c
+index 9618b4a83cee..98c4afe7c15b 100644
+--- a/net/sched/act_simple.c
++++ b/net/sched/act_simple.c
+@@ -53,22 +53,22 @@ static void tcf_simp_release(struct tc_action *a)
+ kfree(d->tcfd_defdata);
+ }
+
+-static int alloc_defdata(struct tcf_defact *d, char *defdata)
++static int alloc_defdata(struct tcf_defact *d, const struct nlattr *defdata)
+ {
+ d->tcfd_defdata = kzalloc(SIMP_MAX_DATA, GFP_KERNEL);
+ if (unlikely(!d->tcfd_defdata))
+ return -ENOMEM;
+- strlcpy(d->tcfd_defdata, defdata, SIMP_MAX_DATA);
++ nla_strlcpy(d->tcfd_defdata, defdata, SIMP_MAX_DATA);
+ return 0;
+ }
+
+-static void reset_policy(struct tcf_defact *d, char *defdata,
++static void reset_policy(struct tcf_defact *d, const struct nlattr *defdata,
+ struct tc_defact *p)
+ {
+ spin_lock_bh(&d->tcf_lock);
+ d->tcf_action = p->action;
+ memset(d->tcfd_defdata, 0, SIMP_MAX_DATA);
+- strlcpy(d->tcfd_defdata, defdata, SIMP_MAX_DATA);
++ nla_strlcpy(d->tcfd_defdata, defdata, SIMP_MAX_DATA);
+ spin_unlock_bh(&d->tcf_lock);
+ }
+
+@@ -87,7 +87,6 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla,
+ struct tcf_defact *d;
+ bool exists = false;
+ int ret = 0, err;
+- char *defdata;
+
+ if (nla == NULL)
+ return -EINVAL;
+@@ -110,8 +109,6 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla,
+ return -EINVAL;
+ }
+
+- defdata = nla_data(tb[TCA_DEF_DATA]);
+-
+ if (!exists) {
+ ret = tcf_idr_create(tn, parm->index, est, a,
+ &act_simp_ops, bind, false);
+@@ -119,7 +116,7 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla,
+ return ret;
+
+ d = to_defact(*a);
+- ret = alloc_defdata(d, defdata);
++ ret = alloc_defdata(d, tb[TCA_DEF_DATA]);
+ if (ret < 0) {
+ tcf_idr_release(*a, bind);
+ return ret;
+@@ -133,7 +130,7 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla,
+ if (!ovr)
+ return -EEXIST;
+
+- reset_policy(d, defdata, parm);
++ reset_policy(d, tb[TCA_DEF_DATA], parm);
+ }
+
+ if (ret == ACT_P_CREATED)
+diff --git a/net/socket.c b/net/socket.c
+index f10f1d947c78..d1b02f161429 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -537,7 +537,10 @@ static int sockfs_setattr(struct dentry *dentry, struct iattr *iattr)
+ if (!err && (iattr->ia_valid & ATTR_UID)) {
+ struct socket *sock = SOCKET_I(d_inode(dentry));
+
+- sock->sk->sk_uid = iattr->ia_uid;
++ if (sock->sk)
++ sock->sk->sk_uid = iattr->ia_uid;
++ else
++ err = -ENOENT;
+ }
+
+ return err;
+@@ -586,12 +589,16 @@ EXPORT_SYMBOL(sock_alloc);
+ * an inode not a file.
+ */
+
+-void sock_release(struct socket *sock)
++static void __sock_release(struct socket *sock, struct inode *inode)
+ {
+ if (sock->ops) {
+ struct module *owner = sock->ops->owner;
+
++ if (inode)
++ inode_lock(inode);
+ sock->ops->release(sock);
++ if (inode)
++ inode_unlock(inode);
+ sock->ops = NULL;
+ module_put(owner);
+ }
+@@ -605,6 +612,11 @@ void sock_release(struct socket *sock)
+ }
+ sock->file = NULL;
+ }
++
++void sock_release(struct socket *sock)
++{
++ __sock_release(sock, NULL);
++}
+ EXPORT_SYMBOL(sock_release);
+
+ void __sock_tx_timestamp(__u16 tsflags, __u8 *tx_flags)
+@@ -1146,7 +1158,7 @@ static int sock_mmap(struct file *file, struct vm_area_struct *vma)
+
+ static int sock_close(struct inode *inode, struct file *filp)
+ {
+- sock_release(SOCKET_I(inode));
++ __sock_release(SOCKET_I(inode), inode);
+ return 0;
+ }
+
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index e1c93ce74e0f..5fe29121b9a8 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -191,18 +191,12 @@ static void tls_free_both_sg(struct sock *sk)
+ }
+
+ static int tls_do_encryption(struct tls_context *tls_ctx,
+- struct tls_sw_context *ctx, size_t data_len,
+- gfp_t flags)
++ struct tls_sw_context *ctx,
++ struct aead_request *aead_req,
++ size_t data_len)
+ {
+- unsigned int req_size = sizeof(struct aead_request) +
+- crypto_aead_reqsize(ctx->aead_send);
+- struct aead_request *aead_req;
+ int rc;
+
+- aead_req = kzalloc(req_size, flags);
+- if (!aead_req)
+- return -ENOMEM;
+-
+ ctx->sg_encrypted_data[0].offset += tls_ctx->tx.prepend_size;
+ ctx->sg_encrypted_data[0].length -= tls_ctx->tx.prepend_size;
+
+@@ -219,7 +213,6 @@ static int tls_do_encryption(struct tls_context *tls_ctx,
+ ctx->sg_encrypted_data[0].offset -= tls_ctx->tx.prepend_size;
+ ctx->sg_encrypted_data[0].length += tls_ctx->tx.prepend_size;
+
+- kfree(aead_req);
+ return rc;
+ }
+
+@@ -228,8 +221,14 @@ static int tls_push_record(struct sock *sk, int flags,
+ {
+ struct tls_context *tls_ctx = tls_get_ctx(sk);
+ struct tls_sw_context *ctx = tls_sw_ctx(tls_ctx);
++ struct aead_request *req;
+ int rc;
+
++ req = kzalloc(sizeof(struct aead_request) +
++ crypto_aead_reqsize(ctx->aead_send), sk->sk_allocation);
++ if (!req)
++ return -ENOMEM;
++
+ sg_mark_end(ctx->sg_plaintext_data + ctx->sg_plaintext_num_elem - 1);
+ sg_mark_end(ctx->sg_encrypted_data + ctx->sg_encrypted_num_elem - 1);
+
+@@ -245,15 +244,14 @@ static int tls_push_record(struct sock *sk, int flags,
+ tls_ctx->pending_open_record_frags = 0;
+ set_bit(TLS_PENDING_CLOSED_RECORD, &tls_ctx->flags);
+
+- rc = tls_do_encryption(tls_ctx, ctx, ctx->sg_plaintext_size,
+- sk->sk_allocation);
++ rc = tls_do_encryption(tls_ctx, ctx, req, ctx->sg_plaintext_size);
+ if (rc < 0) {
+ /* If we are called from write_space and
+ * we fail, we need to set this SOCK_NOSPACE
+ * to trigger another write_space in the future.
+ */
+ set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+- return rc;
++ goto out_req;
+ }
+
+ free_sg(sk, ctx->sg_plaintext_data, &ctx->sg_plaintext_num_elem,
+@@ -268,6 +266,8 @@ static int tls_push_record(struct sock *sk, int flags,
+ tls_err_abort(sk, EBADMSG);
+
+ tls_advance_record_sn(sk, &tls_ctx->tx);
++out_req:
++ kfree(req);
+ return rc;
+ }
+
+@@ -755,7 +755,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ struct sk_buff *skb;
+ ssize_t copied = 0;
+ bool cmsg = false;
+- int err = 0;
++ int target, err = 0;
+ long timeo;
+
+ flags |= nonblock;
+@@ -765,6 +765,7 @@ int tls_sw_recvmsg(struct sock *sk,
+
+ lock_sock(sk);
+
++ target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
+ timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+ do {
+ bool zc = false;
+@@ -857,6 +858,9 @@ int tls_sw_recvmsg(struct sock *sk,
+ goto recv_end;
+ }
+ }
++ /* If we have a new message from strparser, continue now. */
++ if (copied >= target && !ctx->recv_pkt)
++ break;
+ } while (len);
+
+ recv_end:
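
Previously the loop kept waiting whenever the strparser had just
handed over a record, even with the low-water mark already met;
computing target from sock_rcvlowat() once and breaking when copied
>= target with no parsed record pending restores the expected
semantics. The loop shape, loosely simulated in userspace:

#include <stddef.h>

struct rx { int records_left; };

/* Stand-in for dequeuing one decrypted record. */
static size_t copy_one(struct rx *r, size_t want)
{
        if (!r->records_left)
                return 0;
        r->records_left--;
        return want < 64 ? want : 64;   /* pretend records are 64 bytes */
}

static size_t recv_like(struct rx *r, size_t len, size_t target)
{
        size_t copied = 0;

        while (copied < len) {
                size_t n = copy_one(r, len - copied);

                if (!n)
                        break;  /* the kernel would sleep for data here */
                copied += n;
                /* Low-water mark met and no complete record queued:
                 * return to the caller instead of waiting for more. */
                if (copied >= target && !r->records_left)
                        break;
        }
        return copied;
}
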
+diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
+index d1eb14842340..a12e594d4e3b 100644
+--- a/sound/pci/hda/hda_controller.c
++++ b/sound/pci/hda/hda_controller.c
+@@ -748,8 +748,10 @@ int snd_hda_attach_pcm_stream(struct hda_bus *_bus, struct hda_codec *codec,
+ return err;
+ strlcpy(pcm->name, cpcm->name, sizeof(pcm->name));
+ apcm = kzalloc(sizeof(*apcm), GFP_KERNEL);
+- if (apcm == NULL)
++ if (apcm == NULL) {
++ snd_device_free(chip->card, pcm);
+ return -ENOMEM;
++ }
+ apcm->chip = chip;
+ apcm->pcm = pcm;
+ apcm->codec = codec;
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 5b4dbcec6de8..ba9a7e552183 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -959,12 +959,15 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x8079, "HP EliteBook 840 G3", CXT_FIXUP_HP_DOCK),
+ SND_PCI_QUIRK(0x103c, 0x807C, "HP EliteBook 820 G3", CXT_FIXUP_HP_DOCK),
+ SND_PCI_QUIRK(0x103c, 0x80FD, "HP ProBook 640 G2", CXT_FIXUP_HP_DOCK),
++ SND_PCI_QUIRK(0x103c, 0x83b3, "HP EliteBook 830 G5", CXT_FIXUP_HP_DOCK),
++ SND_PCI_QUIRK(0x103c, 0x83d3, "HP ProBook 640 G4", CXT_FIXUP_HP_DOCK),
+ SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
+ SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC),
+ SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
+ SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 01a6643fc7d4..06c2c80a045b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6580,7 +6580,6 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x3138, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+- SND_PCI_QUIRK(0x17aa, 0x3112, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
+@@ -6752,6 +6751,11 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ {0x1b, 0x01111010},
+ {0x1e, 0x01451130},
+ {0x21, 0x02211020}),
++ SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY,
++ {0x12, 0x90a60140},
++ {0x14, 0x90170110},
++ {0x19, 0x02a11030},
++ {0x21, 0x02211020}),
+ SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ {0x12, 0x90a60140},
+ {0x14, 0x90170110},
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 754e632a27bd..02b7ad1946db 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3277,6 +3277,10 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ }
+ },
+
++/* disabled due to regression for other devices;
++ * see https://bugzilla.kernel.org/show_bug.cgi?id=199905
++ */
++#if 0
+ {
+ /*
+ * Nura's first gen headphones use Cambridge Silicon Radio's vendor
+@@ -3324,6 +3328,7 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ }
+ }
+ },
++#endif /* disabled */
+
+ {
+ /*
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-06-25 12:40 Mike Pagano
From: Mike Pagano @ 2018-06-25 12:40 UTC
To: gentoo-commits
commit: fe91b90c7d22a035be3e5732d0dd627fd1ce782b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 25 12:39:08 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 25 12:39:08 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fe91b90c
Removing kvm patch as the upstream-suggested patch was
significantly different. Waiting on the user to test the
patch. See bug #658544.
0000_README | 4 ---
...ne-pvclock-pvti-cpu0-va-setter-for-X86-32.patch | 37 ----------------------
2 files changed, 41 deletions(-)
diff --git a/0000_README b/0000_README
index 3487ae6..a4cf389 100644
--- a/0000_README
+++ b/0000_README
@@ -59,10 +59,6 @@ Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
-Patch: 1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
-From: https://bugs.gentoo.org/show_bug.cgi?id=658544
-Desc: kvmclock: Define pvclock_pvti_cpu0_va setter for X86_32
-
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch b/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
deleted file mode 100644
index e52f3a2..0000000
--- a/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-From 42d9186f9ef41d6b50458db13ca34d01595e1ecd Mon Sep 17 00:00:00 2001
-From: Mike Pagano <mpagano@gentoo.org>
-Date: Wed, 20 Jun 2018 12:31:18 -0400
-Subject: [PATCH] kvmclock: Define pvclock_pvti_cpu0_va setter for X86_32
-Cc: mpagano@gentoo.org
-
-setup_vsyscall_timeinfo() is only defined for x86_64, thus
-vclock_set_pvti_cpu0_va() does not get called resulting in
-the failure of ptp_kvm initialization for Linux X86_32 guests.
-The result of this being that the 32 bit guest userspace has
-no /dev/ptp0 device.
-
-See Gentoo bug 658544 located at the following link:
-https://bugs.gentoo.org/658544
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
-Signed-off-by: Andreas Steinmetz <ast@domdv.de>
----
- arch/x86/kernel/kvmclock.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
-index bf8d1eb7fca3..6aee5c6265b3 100644
---- a/arch/x86/kernel/kvmclock.c
-+++ b/arch/x86/kernel/kvmclock.c
-@@ -350,7 +350,7 @@ void __init kvmclock_init(void)
-
- int __init kvm_setup_vsyscall_timeinfo(void)
- {
--#ifdef CONFIG_X86_64
-+#ifdef CONFIG_X86_64 || defined(CONFIG_X86_32)
- int cpu;
- u8 flags;
- struct pvclock_vcpu_time_info *vcpu_time;
---
-2.16.4
-
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-06-20 17:47 Mike Pagano
From: Mike Pagano @ 2018-06-20 17:47 UTC
To: gentoo-commits
commit: 28dc1147d04eff810b527e8714a865cdc6cf6023
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 20 17:46:53 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 20 17:46:53 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=28dc1147
kvmclock: Define pvclock_pvti_cpu0_va setter for X86_32. See bug #658544.
0000_README | 4 +++
...ne-pvclock-pvti-cpu0-va-setter-for-X86-32.patch | 37 ++++++++++++++++++++++
2 files changed, 41 insertions(+)
diff --git a/0000_README b/0000_README
index a4cf389..3487ae6 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
+Patch: 1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=658544
+Desc: kvmclock: Define pvclock_pvti_cpu0_va setter for X86_32
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch b/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
new file mode 100644
index 0000000..e52f3a2
--- /dev/null
+++ b/1700_define-pvclock-pvti-cpu0-va-setter-for-X86-32.patch
@@ -0,0 +1,37 @@
+From 42d9186f9ef41d6b50458db13ca34d01595e1ecd Mon Sep 17 00:00:00 2001
+From: Mike Pagano <mpagano@gentoo.org>
+Date: Wed, 20 Jun 2018 12:31:18 -0400
+Subject: [PATCH] kvmclock: Define pvclock_pvti_cpu0_va setter for X86_32
+Cc: mpagano@gentoo.org
+
+setup_vsyscall_timeinfo() is only defined for x86_64, thus
+vclock_set_pvti_cpu0_va() does not get called resulting in
+the failure of ptp_kvm initialization for Linux X86_32 guests.
+The result of this being that the 32 bit guest userspace has
+no /dev/ptp0 device.
+
+See Gentoo bug 658544 located at the following link:
+https://bugs.gentoo.org/658544
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+Signed-off-by: Andreas Steinmetz <ast@domdv.de>
+---
+ arch/x86/kernel/kvmclock.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index bf8d1eb7fca3..6aee5c6265b3 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -350,7 +350,7 @@ void __init kvmclock_init(void)
+
+ int __init kvm_setup_vsyscall_timeinfo(void)
+ {
+-#ifdef CONFIG_X86_64
++#ifdef CONFIG_X86_64 || defined(CONFIG_X86_32)
+ int cpu;
+ u8 flags;
+ struct pvclock_vcpu_time_info *vcpu_time;
+--
+2.16.4
+
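
Worth flagging for anyone carrying this patch: #ifdef takes a single
macro name, so the guard above only produces an "extra tokens at end
of #ifdef directive" warning and still tests CONFIG_X86_64 alone,
leaving the function body skipped on 32-bit builds. The conventional
spelling of the intended condition would be:

#if defined(CONFIG_X86_64) || defined(CONFIG_X86_32)
        /* body compiled for both word sizes */
#endif
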
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-06-19 23:30 Mike Pagano
From: Mike Pagano @ 2018-06-19 23:30 UTC
To: gentoo-commits
commit: 41b21d285ad76e01ad0c800e9c8589b2741c4213
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 19 23:30:01 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 19 23:30:01 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=41b21d28
Revert iommu patch. See bug #658538.
0000_README | 4 +
1800_iommu-amd-dma-direct-revert.patch | 164 +++++++++++++++++++++++++++++++++
2 files changed, 168 insertions(+)
diff --git a/0000_README b/0000_README
index df97765..a4cf389 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-4.17.2.patch
From: http://www.kernel.org
Desc: Linux 4.17.2
+Patch: 1800_iommu-amd-dma-direct-revert.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=e16c4790de39dc861b749674c2a9319507f6f64f
+Desc: Revert iommu/amd_iommu: Use CONFIG_DMA_DIRECT_OPS=y and dma_direct_{alloc,free}(). See bug #658538.
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1800_iommu-amd-dma-direct-revert.patch b/1800_iommu-amd-dma-direct-revert.patch
new file mode 100644
index 0000000..a78fa02
--- /dev/null
+++ b/1800_iommu-amd-dma-direct-revert.patch
@@ -0,0 +1,164 @@
+From e16c4790de39dc861b749674c2a9319507f6f64f Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds@linux-foundation.org>
+Date: Mon, 11 Jun 2018 12:22:12 -0700
+Subject: Revert "iommu/amd_iommu: Use CONFIG_DMA_DIRECT_OPS=y and
+ dma_direct_{alloc,free}()"
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+This reverts commit b468620f2a1dfdcfddfd6fa54367b8bcc1b51248.
+
+It turns out that this broke drm on AMD platforms. Quoting Gabriel C:
+ "I can confirm reverting b468620f2a1dfdcfddfd6fa54367b8bcc1b51248 fixes
+ that issue for me.
+
+ The GPU is working fine with SME enabled.
+
+ Now with working GPU :) I can also confirm performance is back to
+ normal without doing any other workarounds"
+
+Christian König analyzed it partially:
+ "As far as I analyzed it we now get an -ENOMEM from dma_alloc_attrs()
+ in drivers/gpu/drm/ttm/ttm_page_alloc_dma.c when IOMMU is enabled"
+
+and Christoph Hellwig responded:
+ "I think the prime issue is that dma_direct_alloc respects the dma
+ mask. Which we don't need if actually using the iommu. This would be
+ mostly harmless except for the SEV bit high in the address that
+ makes the checks fail.
+
+ For now I'd say revert this commit for 4.17/4.18-rc and I'll look into
+ addressing these issues properly"
+
+Reported-and-bisected-by: Gabriel C <nix.or.die@gmail.com>
+Acked-by: Christoph Hellwig <hch@lst.de>
+Cc: Christian König <christian.koenig@amd.com>
+Cc: Michel Dänzer <michel.daenzer@amd.com>
+Cc: Joerg Roedel <jroedel@suse.de>
+Cc: Tom Lendacky <thomas.lendacky@amd.com>
+Cc: Andrew Morton <akpm@linux-foundation.org>
+Cc: stable@kernel.org # v4.17
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+---
+ drivers/iommu/Kconfig | 1 -
+ drivers/iommu/amd_iommu.c | 68 ++++++++++++++++++++++++++++++++---------------
+ 2 files changed, 47 insertions(+), 22 deletions(-)
+
+diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
+index 8ea77ef..e055d22 100644
+--- a/drivers/iommu/Kconfig
++++ b/drivers/iommu/Kconfig
+@@ -107,7 +107,6 @@ config IOMMU_PGTABLES_L2
+ # AMD IOMMU support
+ config AMD_IOMMU
+ bool "AMD IOMMU support"
+- select DMA_DIRECT_OPS
+ select SWIOTLB
+ select PCI_MSI
+ select PCI_ATS
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 0cea80be..596b95c 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -2596,32 +2596,51 @@ static void *alloc_coherent(struct device *dev, size_t size,
+ unsigned long attrs)
+ {
+ u64 dma_mask = dev->coherent_dma_mask;
+- struct protection_domain *domain = get_domain(dev);
+- bool is_direct = false;
+- void *virt_addr;
++ struct protection_domain *domain;
++ struct dma_ops_domain *dma_dom;
++ struct page *page;
++
++ domain = get_domain(dev);
++ if (PTR_ERR(domain) == -EINVAL) {
++ page = alloc_pages(flag, get_order(size));
++ *dma_addr = page_to_phys(page);
++ return page_address(page);
++ } else if (IS_ERR(domain))
++ return NULL;
+
+- if (IS_ERR(domain)) {
+- if (PTR_ERR(domain) != -EINVAL)
++ dma_dom = to_dma_ops_domain(domain);
++ size = PAGE_ALIGN(size);
++ dma_mask = dev->coherent_dma_mask;
++ flag &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
++ flag |= __GFP_ZERO;
++
++ page = alloc_pages(flag | __GFP_NOWARN, get_order(size));
++ if (!page) {
++ if (!gfpflags_allow_blocking(flag))
+ return NULL;
+- is_direct = true;
+- }
+
+- virt_addr = dma_direct_alloc(dev, size, dma_addr, flag, attrs);
+- if (!virt_addr || is_direct)
+- return virt_addr;
++ page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
++ get_order(size), flag);
++ if (!page)
++ return NULL;
++ }
+
+ if (!dma_mask)
+ dma_mask = *dev->dma_mask;
+
+- *dma_addr = __map_single(dev, to_dma_ops_domain(domain),
+- virt_to_phys(virt_addr), PAGE_ALIGN(size),
+- DMA_BIDIRECTIONAL, dma_mask);
++ *dma_addr = __map_single(dev, dma_dom, page_to_phys(page),
++ size, DMA_BIDIRECTIONAL, dma_mask);
++
+ if (*dma_addr == AMD_IOMMU_MAPPING_ERROR)
+ goto out_free;
+- return virt_addr;
++
++ return page_address(page);
+
+ out_free:
+- dma_direct_free(dev, size, virt_addr, *dma_addr, attrs);
++
++ if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
++ __free_pages(page, get_order(size));
++
+ return NULL;
+ }
+
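
The restored alloc_coherent() above is a try-then-fall-back allocator: take
the buddy allocator quietly first, drop to CMA only when the gfp flags allow
sleeping, then hand the result to __map_single(). A condensed sketch of just
the allocation half (the function name is ours; the gfp-flag massaging and
error unwinding from the hunk are omitted, the calls and their order are the
hunk's):

#include <linux/gfp.h>
#include <linux/dma-contiguous.h>

static struct page *alloc_coherent_pages_sketch(struct device *dev,
						size_t size, gfp_t flag)
{
	struct page *page;

	/* Fast path: buddy allocator, failure warnings suppressed. */
	page = alloc_pages(flag | __GFP_NOWARN, get_order(size));
	if (page)
		return page;

	/* CMA may block, so fall back only when the caller can sleep. */
	if (!gfpflags_allow_blocking(flag))
		return NULL;

	return dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
					 get_order(size), flag);
}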
+@@ -2632,17 +2651,24 @@ static void free_coherent(struct device *dev, size_t size,
+ void *virt_addr, dma_addr_t dma_addr,
+ unsigned long attrs)
+ {
+- struct protection_domain *domain = get_domain(dev);
++ struct protection_domain *domain;
++ struct dma_ops_domain *dma_dom;
++ struct page *page;
+
++ page = virt_to_page(virt_addr);
+ size = PAGE_ALIGN(size);
+
+- if (!IS_ERR(domain)) {
+- struct dma_ops_domain *dma_dom = to_dma_ops_domain(domain);
++ domain = get_domain(dev);
++ if (IS_ERR(domain))
++ goto free_mem;
+
+- __unmap_single(dma_dom, dma_addr, size, DMA_BIDIRECTIONAL);
+- }
++ dma_dom = to_dma_ops_domain(domain);
++
++ __unmap_single(dma_dom, dma_addr, size, DMA_BIDIRECTIONAL);
+
+- dma_direct_free(dev, size, virt_addr, dma_addr, attrs);
++free_mem:
++ if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
++ __free_pages(page, get_order(size));
+ }
+
+ /*
+--
+cgit v1.1
+
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-06-16 15:46 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-06-16 15:46 UTC (permalink / raw
To: gentoo-commits
commit: 70a60381716c31741c8126bf7c4589a781c3d23b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 16 15:46:12 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jun 16 15:46:12 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=70a60381
Linux patch 4.17.2
0000_README | 4 +
1001_linux-4.17.2.patch | 2863 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2867 insertions(+)
diff --git a/0000_README b/0000_README
index de4fd96..df97765 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-4.17.1.patch
From: http://www.kernel.org
Desc: Linux 4.17.1
+Patch: 1001_linux-4.17.2.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.2
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1001_linux-4.17.2.patch b/1001_linux-4.17.2.patch
new file mode 100644
index 0000000..a2da995
--- /dev/null
+++ b/1001_linux-4.17.2.patch
@@ -0,0 +1,2863 @@
+diff --git a/Documentation/ABI/stable/sysfs-bus-vmbus b/Documentation/ABI/stable/sysfs-bus-vmbus
+index 0c9d9dcd2151..3eaffbb2d468 100644
+--- a/Documentation/ABI/stable/sysfs-bus-vmbus
++++ b/Documentation/ABI/stable/sysfs-bus-vmbus
+@@ -1,25 +1,25 @@
+-What: /sys/bus/vmbus/devices/vmbus_*/id
++What: /sys/bus/vmbus/devices/<UUID>/id
+ Date: Jul 2009
+ KernelVersion: 2.6.31
+ Contact: K. Y. Srinivasan <kys@microsoft.com>
+ Description: The VMBus child_relid of the device's primary channel
+ Users: tools/hv/lsvmbus
+
+-What: /sys/bus/vmbus/devices/vmbus_*/class_id
++What: /sys/bus/vmbus/devices/<UUID>/class_id
+ Date: Jul 2009
+ KernelVersion: 2.6.31
+ Contact: K. Y. Srinivasan <kys@microsoft.com>
+ Description: The VMBus interface type GUID of the device
+ Users: tools/hv/lsvmbus
+
+-What: /sys/bus/vmbus/devices/vmbus_*/device_id
++What: /sys/bus/vmbus/devices/<UUID>/device_id
+ Date: Jul 2009
+ KernelVersion: 2.6.31
+ Contact: K. Y. Srinivasan <kys@microsoft.com>
+ Description: The VMBus interface instance GUID of the device
+ Users: tools/hv/lsvmbus
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channel_vp_mapping
++What: /sys/bus/vmbus/devices/<UUID>/channel_vp_mapping
+ Date: Jul 2015
+ KernelVersion: 4.2.0
+ Contact: K. Y. Srinivasan <kys@microsoft.com>
+@@ -28,112 +28,112 @@ Description: The mapping of which primary/sub channels are bound to which
+ Format: <channel's child_relid:the bound cpu's number>
+ Users: tools/hv/lsvmbus
+
+-What: /sys/bus/vmbus/devices/vmbus_*/device
++What: /sys/bus/vmbus/devices/<UUID>/device
+ Date: Dec. 2015
+ KernelVersion: 4.5
+ Contact: K. Y. Srinivasan <kys@microsoft.com>
+ Description: The 16 bit device ID of the device
+ Users: tools/hv/lsvmbus and user level RDMA libraries
+
+-What: /sys/bus/vmbus/devices/vmbus_*/vendor
++What: /sys/bus/vmbus/devices/<UUID>/vendor
+ Date: Dec. 2015
+ KernelVersion: 4.5
+ Contact: K. Y. Srinivasan <kys@microsoft.com>
+ Description: The 16 bit vendor ID of the device
+ Users: tools/hv/lsvmbus and user level RDMA libraries
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Directory for per-channel information
+ NN is the VMBUS relid associtated with the channel.
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/cpu
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/cpu
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: VCPU (sub)channel is affinitized to
+ Users: tools/hv/lsvmbus and other debugging tools
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/cpu
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/cpu
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: VCPU (sub)channel is affinitized to
+ Users: tools/hv/lsvmbus and other debugging tools
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/in_mask
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/in_mask
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Host to guest channel interrupt mask
+ Users: Debugging tools
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/latency
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/latency
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Channel signaling latency
+ Users: Debugging tools
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/out_mask
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/out_mask
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Guest to host channel interrupt mask
+ Users: Debugging tools
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/pending
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/pending
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Channel interrupt pending state
+ Users: Debugging tools
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/read_avail
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/read_avail
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Bytes available to read
+ Users: Debugging tools
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/write_avail
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/write_avail
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Bytes available to write
+ Users: Debugging tools
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/events
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/events
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Number of times we have signaled the host
+ Users: Debugging tools
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/interrupts
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/interrupts
+ Date: September. 2017
+ KernelVersion: 4.14
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Number of times we have taken an interrupt (incoming)
+ Users: Debugging tools
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/subchannel_id
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/subchannel_id
+ Date: January. 2018
+ KernelVersion: 4.16
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Subchannel ID associated with VMBUS channel
+ Users: Debugging tools and userspace drivers
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/monitor_id
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/monitor_id
+ Date: January. 2018
+ KernelVersion: 4.16
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+ Description: Monitor bit associated with channel
+ Users: Debugging tools and userspace drivers
+
+-What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/ring
++What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/ring
+ Date: January. 2018
+ KernelVersion: 4.16
+ Contact: Stephen Hemminger <sthemmin@microsoft.com>
+diff --git a/Makefile b/Makefile
+index e551c9af6a06..f43cd522b175 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index ecf613761e78..fe005df02ed3 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -320,6 +320,7 @@ CONFIG_PINCTRL_MAX77620=y
+ CONFIG_PINCTRL_MSM8916=y
+ CONFIG_PINCTRL_MSM8994=y
+ CONFIG_PINCTRL_MSM8996=y
++CONFIG_PINCTRL_MT7622=y
+ CONFIG_PINCTRL_QDF2XXX=y
+ CONFIG_PINCTRL_QCOM_SPMI_PMIC=y
+ CONFIG_GPIO_DWAPB=y
+diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
+index b24b1c8b3979..0f82cd91cd3c 100644
+--- a/arch/x86/include/asm/kvm_emulate.h
++++ b/arch/x86/include/asm/kvm_emulate.h
+@@ -107,11 +107,12 @@ struct x86_emulate_ops {
+ * @addr: [IN ] Linear address from which to read.
+ * @val: [OUT] Value read from memory, zero-extended to 'u_long'.
+ * @bytes: [IN ] Number of bytes to read from memory.
++ * @system:[IN ] Whether the access is forced to be at CPL0.
+ */
+ int (*read_std)(struct x86_emulate_ctxt *ctxt,
+ unsigned long addr, void *val,
+ unsigned int bytes,
+- struct x86_exception *fault);
++ struct x86_exception *fault, bool system);
+
+ /*
+ * read_phys: Read bytes of standard (non-emulated/special) memory.
+@@ -129,10 +130,11 @@ struct x86_emulate_ops {
+ * @addr: [IN ] Linear address to which to write.
+ * @val: [OUT] Value write to memory, zero-extended to 'u_long'.
+ * @bytes: [IN ] Number of bytes to write to memory.
++ * @system:[IN ] Whether the access is forced to be at CPL0.
+ */
+ int (*write_std)(struct x86_emulate_ctxt *ctxt,
+ unsigned long addr, void *val, unsigned int bytes,
+- struct x86_exception *fault);
++ struct x86_exception *fault, bool system);
+ /*
+ * fetch: Read bytes of standard (non-emulated/special) memory.
+ * Used for instruction fetch.
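
The new @system flag is what the rest of this patch plumbs through: it
separates implicit supervisor accesses (descriptor-table, IDT and TSS reads
the CPU performs on the guest's behalf) from ordinary data accesses, which
must now honour the guest's current CPL. A sketch of how the flag selects
the page-fault access bits; the helper name is ours, but the body mirrors
the emulator_read_std() added to x86.c further down in this patch:

/* 'system' accesses stay privileged; everything else gets the user bit
 * when the guest runs at CPL 3, so the page-table walk enforces the
 * protection checks the old code skipped.
 */
static int read_std_sketch(struct x86_emulate_ctxt *ctxt, gva_t addr,
			   void *val, unsigned int bytes,
			   struct x86_exception *exception, bool system)
{
	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
	u32 access = 0;

	if (!system && kvm_x86_ops->get_cpl(vcpu) == 3)
		access |= PFERR_USER_MASK;

	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
					  exception);
}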
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index b3705ae52824..4c4f4263420c 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -812,6 +812,19 @@ static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
+ return assign_eip_near(ctxt, ctxt->_eip + rel);
+ }
+
++static int linear_read_system(struct x86_emulate_ctxt *ctxt, ulong linear,
++ void *data, unsigned size)
++{
++ return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception, true);
++}
++
++static int linear_write_system(struct x86_emulate_ctxt *ctxt,
++ ulong linear, void *data,
++ unsigned int size)
++{
++ return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception, true);
++}
++
+ static int segmented_read_std(struct x86_emulate_ctxt *ctxt,
+ struct segmented_address addr,
+ void *data,
+@@ -823,7 +836,7 @@ static int segmented_read_std(struct x86_emulate_ctxt *ctxt,
+ rc = linearize(ctxt, addr, size, false, &linear);
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
+- return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception);
++ return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception, false);
+ }
+
+ static int segmented_write_std(struct x86_emulate_ctxt *ctxt,
+@@ -837,7 +850,7 @@ static int segmented_write_std(struct x86_emulate_ctxt *ctxt,
+ rc = linearize(ctxt, addr, size, true, &linear);
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
+- return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception);
++ return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception, false);
+ }
+
+ /*
+@@ -1496,8 +1509,7 @@ static int read_interrupt_descriptor(struct x86_emulate_ctxt *ctxt,
+ return emulate_gp(ctxt, index << 3 | 0x2);
+
+ addr = dt.address + index * 8;
+- return ctxt->ops->read_std(ctxt, addr, desc, sizeof *desc,
+- &ctxt->exception);
++ return linear_read_system(ctxt, addr, desc, sizeof *desc);
+ }
+
+ static void get_descriptor_table_ptr(struct x86_emulate_ctxt *ctxt,
+@@ -1560,8 +1572,7 @@ static int read_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
+
+- return ctxt->ops->read_std(ctxt, *desc_addr_p, desc, sizeof(*desc),
+- &ctxt->exception);
++ return linear_read_system(ctxt, *desc_addr_p, desc, sizeof(*desc));
+ }
+
+ /* allowed just for 8 bytes segments */
+@@ -1575,8 +1586,7 @@ static int write_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
+
+- return ctxt->ops->write_std(ctxt, addr, desc, sizeof *desc,
+- &ctxt->exception);
++ return linear_write_system(ctxt, addr, desc, sizeof *desc);
+ }
+
+ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+@@ -1737,8 +1747,7 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ return ret;
+ }
+ } else if (ctxt->mode == X86EMUL_MODE_PROT64) {
+- ret = ctxt->ops->read_std(ctxt, desc_addr+8, &base3,
+- sizeof(base3), &ctxt->exception);
++ ret = linear_read_system(ctxt, desc_addr+8, &base3, sizeof(base3));
+ if (ret != X86EMUL_CONTINUE)
+ return ret;
+ if (emul_is_noncanonical_address(get_desc_base(&seg_desc) |
+@@ -2051,11 +2060,11 @@ static int __emulate_int_real(struct x86_emulate_ctxt *ctxt, int irq)
+ eip_addr = dt.address + (irq << 2);
+ cs_addr = dt.address + (irq << 2) + 2;
+
+- rc = ops->read_std(ctxt, cs_addr, &cs, 2, &ctxt->exception);
++ rc = linear_read_system(ctxt, cs_addr, &cs, 2);
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
+
+- rc = ops->read_std(ctxt, eip_addr, &eip, 2, &ctxt->exception);
++ rc = linear_read_system(ctxt, eip_addr, &eip, 2);
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
+
+@@ -2919,12 +2928,12 @@ static bool emulator_io_port_access_allowed(struct x86_emulate_ctxt *ctxt,
+ #ifdef CONFIG_X86_64
+ base |= ((u64)base3) << 32;
+ #endif
+- r = ops->read_std(ctxt, base + 102, &io_bitmap_ptr, 2, NULL);
++ r = ops->read_std(ctxt, base + 102, &io_bitmap_ptr, 2, NULL, true);
+ if (r != X86EMUL_CONTINUE)
+ return false;
+ if (io_bitmap_ptr + port/8 > desc_limit_scaled(&tr_seg))
+ return false;
+- r = ops->read_std(ctxt, base + io_bitmap_ptr + port/8, &perm, 2, NULL);
++ r = ops->read_std(ctxt, base + io_bitmap_ptr + port/8, &perm, 2, NULL, true);
+ if (r != X86EMUL_CONTINUE)
+ return false;
+ if ((perm >> bit_idx) & mask)
+@@ -3053,35 +3062,30 @@ static int task_switch_16(struct x86_emulate_ctxt *ctxt,
+ u16 tss_selector, u16 old_tss_sel,
+ ulong old_tss_base, struct desc_struct *new_desc)
+ {
+- const struct x86_emulate_ops *ops = ctxt->ops;
+ struct tss_segment_16 tss_seg;
+ int ret;
+ u32 new_tss_base = get_desc_base(new_desc);
+
+- ret = ops->read_std(ctxt, old_tss_base, &tss_seg, sizeof tss_seg,
+- &ctxt->exception);
++ ret = linear_read_system(ctxt, old_tss_base, &tss_seg, sizeof tss_seg);
+ if (ret != X86EMUL_CONTINUE)
+ return ret;
+
+ save_state_to_tss16(ctxt, &tss_seg);
+
+- ret = ops->write_std(ctxt, old_tss_base, &tss_seg, sizeof tss_seg,
+- &ctxt->exception);
++ ret = linear_write_system(ctxt, old_tss_base, &tss_seg, sizeof tss_seg);
+ if (ret != X86EMUL_CONTINUE)
+ return ret;
+
+- ret = ops->read_std(ctxt, new_tss_base, &tss_seg, sizeof tss_seg,
+- &ctxt->exception);
++ ret = linear_read_system(ctxt, new_tss_base, &tss_seg, sizeof tss_seg);
+ if (ret != X86EMUL_CONTINUE)
+ return ret;
+
+ if (old_tss_sel != 0xffff) {
+ tss_seg.prev_task_link = old_tss_sel;
+
+- ret = ops->write_std(ctxt, new_tss_base,
+- &tss_seg.prev_task_link,
+- sizeof tss_seg.prev_task_link,
+- &ctxt->exception);
++ ret = linear_write_system(ctxt, new_tss_base,
++ &tss_seg.prev_task_link,
++ sizeof tss_seg.prev_task_link);
+ if (ret != X86EMUL_CONTINUE)
+ return ret;
+ }
+@@ -3197,38 +3201,34 @@ static int task_switch_32(struct x86_emulate_ctxt *ctxt,
+ u16 tss_selector, u16 old_tss_sel,
+ ulong old_tss_base, struct desc_struct *new_desc)
+ {
+- const struct x86_emulate_ops *ops = ctxt->ops;
+ struct tss_segment_32 tss_seg;
+ int ret;
+ u32 new_tss_base = get_desc_base(new_desc);
+ u32 eip_offset = offsetof(struct tss_segment_32, eip);
+ u32 ldt_sel_offset = offsetof(struct tss_segment_32, ldt_selector);
+
+- ret = ops->read_std(ctxt, old_tss_base, &tss_seg, sizeof tss_seg,
+- &ctxt->exception);
++ ret = linear_read_system(ctxt, old_tss_base, &tss_seg, sizeof tss_seg);
+ if (ret != X86EMUL_CONTINUE)
+ return ret;
+
+ save_state_to_tss32(ctxt, &tss_seg);
+
+ /* Only GP registers and segment selectors are saved */
+- ret = ops->write_std(ctxt, old_tss_base + eip_offset, &tss_seg.eip,
+- ldt_sel_offset - eip_offset, &ctxt->exception);
++ ret = linear_write_system(ctxt, old_tss_base + eip_offset, &tss_seg.eip,
++ ldt_sel_offset - eip_offset);
+ if (ret != X86EMUL_CONTINUE)
+ return ret;
+
+- ret = ops->read_std(ctxt, new_tss_base, &tss_seg, sizeof tss_seg,
+- &ctxt->exception);
++ ret = linear_read_system(ctxt, new_tss_base, &tss_seg, sizeof tss_seg);
+ if (ret != X86EMUL_CONTINUE)
+ return ret;
+
+ if (old_tss_sel != 0xffff) {
+ tss_seg.prev_task_link = old_tss_sel;
+
+- ret = ops->write_std(ctxt, new_tss_base,
+- &tss_seg.prev_task_link,
+- sizeof tss_seg.prev_task_link,
+- &ctxt->exception);
++ ret = linear_write_system(ctxt, new_tss_base,
++ &tss_seg.prev_task_link,
++ sizeof tss_seg.prev_task_link);
+ if (ret != X86EMUL_CONTINUE)
+ return ret;
+ }
+@@ -4189,7 +4189,9 @@ static int check_cr_write(struct x86_emulate_ctxt *ctxt)
+ maxphyaddr = eax & 0xff;
+ else
+ maxphyaddr = 36;
+- rsvd = rsvd_bits(maxphyaddr, 62);
++ rsvd = rsvd_bits(maxphyaddr, 63);
++ if (ctxt->ops->get_cr(ctxt, 4) & X86_CR4_PCIDE)
++ rsvd &= ~CR3_PCID_INVD;
+ }
+
+ if (new_val & rsvd)
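
Taken together, the two changed lines compute the CR3 reserved mask as:
bits [MAXPHYADDR..63] are reserved, except that bit 63 doubles as the PCID
no-flush hint once CR4.PCIDE is set. A sketch of the same logic, where
rsvd_bits(l, h) builds a mask of bits l through h, CR3_PCID_INVD is bit 63,
and the function wrapper is ours:

static u64 cr3_rsvd_mask_sketch(unsigned int maxphyaddr, bool pcide)
{
	u64 rsvd = rsvd_bits(maxphyaddr, 63);	/* above MAXPHYADDR */

	if (pcide)
		rsvd &= ~CR3_PCID_INVD;		/* bit 63 is legal here */

	return rsvd;
}

Any write with a bit of the returned mask set takes the #GP path via the
"if (new_val & rsvd)" check kept above.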
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 40aa29204baf..82f5e915e568 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -7588,8 +7588,7 @@ static int nested_vmx_get_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmpointer)
+ vmcs_read32(VMX_INSTRUCTION_INFO), false, &gva))
+ return 1;
+
+- if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, vmpointer,
+- sizeof(*vmpointer), &e)) {
++ if (kvm_read_guest_virt(vcpu, gva, vmpointer, sizeof(*vmpointer), &e)) {
+ kvm_inject_page_fault(vcpu, &e);
+ return 1;
+ }
+@@ -7670,6 +7669,12 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ return 1;
+ }
+
++ /* CPL=0 must be checked manually. */
++ if (vmx_get_cpl(vcpu)) {
++ kvm_queue_exception(vcpu, UD_VECTOR);
++ return 1;
++ }
++
+ if (vmx->nested.vmxon) {
+ nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
+ return kvm_skip_emulated_instruction(vcpu);
+@@ -7729,6 +7734,11 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ */
+ static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
+ {
++ if (vmx_get_cpl(vcpu)) {
++ kvm_queue_exception(vcpu, UD_VECTOR);
++ return 0;
++ }
++
+ if (!to_vmx(vcpu)->nested.vmxon) {
+ kvm_queue_exception(vcpu, UD_VECTOR);
+ return 0;
+@@ -8029,9 +8039,9 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
+ if (get_vmx_mem_address(vcpu, exit_qualification,
+ vmx_instruction_info, true, &gva))
+ return 1;
+- /* _system ok, as hardware has verified cpl=0 */
+- kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, gva,
+- &field_value, (is_long_mode(vcpu) ? 8 : 4), NULL);
++ /* _system ok, nested_vmx_check_permission has verified cpl=0 */
++ kvm_write_guest_virt_system(vcpu, gva, &field_value,
++ (is_long_mode(vcpu) ? 8 : 4), NULL);
+ }
+
+ nested_vmx_succeed(vcpu);
+@@ -8069,8 +8079,8 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
+ if (get_vmx_mem_address(vcpu, exit_qualification,
+ vmx_instruction_info, false, &gva))
+ return 1;
+- if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva,
+- &field_value, (is_64_bit_mode(vcpu) ? 8 : 4), &e)) {
++ if (kvm_read_guest_virt(vcpu, gva, &field_value,
++ (is_64_bit_mode(vcpu) ? 8 : 4), &e)) {
+ kvm_inject_page_fault(vcpu, &e);
+ return 1;
+ }
+@@ -8189,10 +8199,10 @@ static int handle_vmptrst(struct kvm_vcpu *vcpu)
+ if (get_vmx_mem_address(vcpu, exit_qualification,
+ vmx_instruction_info, true, &vmcs_gva))
+ return 1;
+- /* ok to use *_system, as hardware has verified cpl=0 */
+- if (kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, vmcs_gva,
+- (void *)&to_vmx(vcpu)->nested.current_vmptr,
+- sizeof(u64), &e)) {
++ /* *_system ok, nested_vmx_check_permission has verified cpl=0 */
++ if (kvm_write_guest_virt_system(vcpu, vmcs_gva,
++ (void *)&to_vmx(vcpu)->nested.current_vmptr,
++ sizeof(u64), &e)) {
+ kvm_inject_page_fault(vcpu, &e);
+ return 1;
+ }
+@@ -8239,8 +8249,7 @@ static int handle_invept(struct kvm_vcpu *vcpu)
+ if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
+ vmx_instruction_info, false, &gva))
+ return 1;
+- if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &operand,
+- sizeof(operand), &e)) {
++ if (kvm_read_guest_virt(vcpu, gva, &operand, sizeof(operand), &e)) {
+ kvm_inject_page_fault(vcpu, &e);
+ return 1;
+ }
+@@ -8304,8 +8313,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
+ if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
+ vmx_instruction_info, false, &gva))
+ return 1;
+- if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &operand,
+- sizeof(operand), &e)) {
++ if (kvm_read_guest_virt(vcpu, gva, &operand, sizeof(operand), &e)) {
+ kvm_inject_page_fault(vcpu, &e);
+ return 1;
+ }
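
With the signature change, the nested-VMX handlers read guest memory with
the vcpu directly instead of detouring through the emulation context, and
the same four-line pattern now repeats across handle_vmwrite, handle_invept
and handle_invvpid. A sketch of that consolidated call shape (the wrapper
is ours; the body is the pattern from the hunks):

static int read_operand_sketch(struct kvm_vcpu *vcpu, gva_t gva,
			       void *operand, unsigned int len)
{
	struct x86_exception e;

	if (kvm_read_guest_virt(vcpu, gva, operand, len, &e)) {
		/* reflect the failed walk back into the guest */
		kvm_inject_page_fault(vcpu, &e);
		return 1;
	}
	return 0;
}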
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 71e7cda6d014..fbc4d17e3ecc 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -856,7 +856,7 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
+ }
+
+ if (is_long_mode(vcpu) &&
+- (cr3 & rsvd_bits(cpuid_maxphyaddr(vcpu), 62)))
++ (cr3 & rsvd_bits(cpuid_maxphyaddr(vcpu), 63)))
+ return 1;
+ else if (is_pae(vcpu) && is_paging(vcpu) &&
+ !load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))
+@@ -2894,7 +2894,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ r = KVM_CLOCK_TSC_STABLE;
+ break;
+ case KVM_CAP_X86_DISABLE_EXITS:
+- r |= KVM_X86_DISABLE_EXITS_HTL | KVM_X86_DISABLE_EXITS_PAUSE;
++ r |= KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_PAUSE;
+ if(kvm_can_mwait_in_guest())
+ r |= KVM_X86_DISABLE_EXITS_MWAIT;
+ break;
+@@ -4248,7 +4248,7 @@ static int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
+ if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
+ kvm_can_mwait_in_guest())
+ kvm->arch.mwait_in_guest = true;
+- if (cap->args[0] & KVM_X86_DISABLE_EXITS_HTL)
++ if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
+ kvm->arch.hlt_in_guest = true;
+ if (cap->args[0] & KVM_X86_DISABLE_EXITS_PAUSE)
+ kvm->arch.pause_in_guest = true;
+@@ -4787,11 +4787,10 @@ static int kvm_fetch_guest_virt(struct x86_emulate_ctxt *ctxt,
+ return X86EMUL_CONTINUE;
+ }
+
+-int kvm_read_guest_virt(struct x86_emulate_ctxt *ctxt,
++int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
+ gva_t addr, void *val, unsigned int bytes,
+ struct x86_exception *exception)
+ {
+- struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+ u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
+
+ return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
+@@ -4799,12 +4798,17 @@ int kvm_read_guest_virt(struct x86_emulate_ctxt *ctxt,
+ }
+ EXPORT_SYMBOL_GPL(kvm_read_guest_virt);
+
+-static int kvm_read_guest_virt_system(struct x86_emulate_ctxt *ctxt,
+- gva_t addr, void *val, unsigned int bytes,
+- struct x86_exception *exception)
++static int emulator_read_std(struct x86_emulate_ctxt *ctxt,
++ gva_t addr, void *val, unsigned int bytes,
++ struct x86_exception *exception, bool system)
+ {
+ struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+- return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, 0, exception);
++ u32 access = 0;
++
++ if (!system && kvm_x86_ops->get_cpl(vcpu) == 3)
++ access |= PFERR_USER_MASK;
++
++ return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access, exception);
+ }
+
+ static int kvm_read_guest_phys_system(struct x86_emulate_ctxt *ctxt,
+@@ -4816,18 +4820,16 @@ static int kvm_read_guest_phys_system(struct x86_emulate_ctxt *ctxt,
+ return r < 0 ? X86EMUL_IO_NEEDED : X86EMUL_CONTINUE;
+ }
+
+-int kvm_write_guest_virt_system(struct x86_emulate_ctxt *ctxt,
+- gva_t addr, void *val,
+- unsigned int bytes,
+- struct x86_exception *exception)
++static int kvm_write_guest_virt_helper(gva_t addr, void *val, unsigned int bytes,
++ struct kvm_vcpu *vcpu, u32 access,
++ struct x86_exception *exception)
+ {
+- struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+ void *data = val;
+ int r = X86EMUL_CONTINUE;
+
+ while (bytes) {
+ gpa_t gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, addr,
+- PFERR_WRITE_MASK,
++ access,
+ exception);
+ unsigned offset = addr & (PAGE_SIZE-1);
+ unsigned towrite = min(bytes, (unsigned)PAGE_SIZE - offset);
+@@ -4848,6 +4850,27 @@ int kvm_write_guest_virt_system(struct x86_emulate_ctxt *ctxt,
+ out:
+ return r;
+ }
++
++static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *val,
++ unsigned int bytes, struct x86_exception *exception,
++ bool system)
++{
++ struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
++ u32 access = PFERR_WRITE_MASK;
++
++ if (!system && kvm_x86_ops->get_cpl(vcpu) == 3)
++ access |= PFERR_USER_MASK;
++
++ return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
++ access, exception);
++}
++
++int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
++ unsigned int bytes, struct x86_exception *exception)
++{
++ return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
++ PFERR_WRITE_MASK, exception);
++}
+ EXPORT_SYMBOL_GPL(kvm_write_guest_virt_system);
+
+ int handle_ud(struct kvm_vcpu *vcpu)
+@@ -4858,8 +4881,8 @@ int handle_ud(struct kvm_vcpu *vcpu)
+ struct x86_exception e;
+
+ if (force_emulation_prefix &&
+- kvm_read_guest_virt(&vcpu->arch.emulate_ctxt,
+- kvm_get_linear_rip(vcpu), sig, sizeof(sig), &e) == 0 &&
++ kvm_read_guest_virt(vcpu, kvm_get_linear_rip(vcpu),
++ sig, sizeof(sig), &e) == 0 &&
+ memcmp(sig, "\xf\xbkvm", sizeof(sig)) == 0) {
+ kvm_rip_write(vcpu, kvm_rip_read(vcpu) + sizeof(sig));
+ emul_type = 0;
+@@ -5600,8 +5623,8 @@ static int emulator_pre_leave_smm(struct x86_emulate_ctxt *ctxt, u64 smbase)
+ static const struct x86_emulate_ops emulate_ops = {
+ .read_gpr = emulator_read_gpr,
+ .write_gpr = emulator_write_gpr,
+- .read_std = kvm_read_guest_virt_system,
+- .write_std = kvm_write_guest_virt_system,
++ .read_std = emulator_read_std,
++ .write_std = emulator_write_std,
+ .read_phys = kvm_read_guest_phys_system,
+ .fetch = kvm_fetch_guest_virt,
+ .read_emulated = emulator_read_emulated,
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index c9492f764902..331993c49dae 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -247,11 +247,11 @@ int kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
+ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr);
+ u64 get_kvmclock_ns(struct kvm *kvm);
+
+-int kvm_read_guest_virt(struct x86_emulate_ctxt *ctxt,
++int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
+ gva_t addr, void *val, unsigned int bytes,
+ struct x86_exception *exception);
+
+-int kvm_write_guest_virt_system(struct x86_emulate_ctxt *ctxt,
++int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu,
+ gva_t addr, void *val, unsigned int bytes,
+ struct x86_exception *exception);
+
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 08e84ef2bc05..3d08dc84db16 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -328,7 +328,11 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
+ if (!rep.nr_zones)
+ return -EINVAL;
+
+- zones = kcalloc(rep.nr_zones, sizeof(struct blk_zone), GFP_KERNEL);
++ if (rep.nr_zones > INT_MAX / sizeof(struct blk_zone))
++ return -ERANGE;
++
++ zones = kvmalloc(rep.nr_zones * sizeof(struct blk_zone),
++ GFP_KERNEL | __GFP_ZERO);
+ if (!zones)
+ return -ENOMEM;
+
+@@ -350,7 +354,7 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
+ }
+
+ out:
+- kfree(zones);
++ kvfree(zones);
+
+ return ret;
+ }
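
Two things changed above: the user-controlled multiplication is bounded
before it can overflow, and the allocation moved to kvmalloc() so that
large but legitimate zone reports can fall back to vmalloc. A generic
sketch of the same guard, with illustrative names (the INT_MAX bound is
the one the hunk uses):

#include <linux/kernel.h>	/* INT_MAX */
#include <linux/mm.h>		/* kvmalloc, kvfree */
#include <linux/err.h>

static void *alloc_counted_array_sketch(u32 nr, size_t elem_size)
{
	void *buf;

	/* Refuse counts where nr * elem_size would overflow. */
	if (nr > INT_MAX / elem_size)
		return ERR_PTR(-ERANGE);

	buf = kvmalloc(nr * elem_size, GFP_KERNEL | __GFP_ZERO);
	if (!buf)
		return ERR_PTR(-ENOMEM);

	return buf;	/* pair with kvfree(), as the hunk now does */
}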
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index 7207a535942d..d67667970f7e 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -769,15 +769,18 @@ struct aead_edesc {
+ * @src_nents: number of segments in input s/w scatterlist
+ * @dst_nents: number of segments in output s/w scatterlist
+ * @iv_dma: dma address of iv for checking continuity and link table
++ * @iv_dir: DMA mapping direction for IV
+ * @sec4_sg_bytes: length of dma mapped sec4_sg space
+ * @sec4_sg_dma: bus physical mapped address of h/w link table
+ * @sec4_sg: pointer to h/w link table
+ * @hw_desc: the h/w job descriptor followed by any referenced link tables
++ * and IV
+ */
+ struct ablkcipher_edesc {
+ int src_nents;
+ int dst_nents;
+ dma_addr_t iv_dma;
++ enum dma_data_direction iv_dir;
+ int sec4_sg_bytes;
+ dma_addr_t sec4_sg_dma;
+ struct sec4_sg_entry *sec4_sg;
+@@ -787,7 +790,8 @@ struct ablkcipher_edesc {
+ static void caam_unmap(struct device *dev, struct scatterlist *src,
+ struct scatterlist *dst, int src_nents,
+ int dst_nents,
+- dma_addr_t iv_dma, int ivsize, dma_addr_t sec4_sg_dma,
++ dma_addr_t iv_dma, int ivsize,
++ enum dma_data_direction iv_dir, dma_addr_t sec4_sg_dma,
+ int sec4_sg_bytes)
+ {
+ if (dst != src) {
+@@ -799,7 +803,7 @@ static void caam_unmap(struct device *dev, struct scatterlist *src,
+ }
+
+ if (iv_dma)
+- dma_unmap_single(dev, iv_dma, ivsize, DMA_TO_DEVICE);
++ dma_unmap_single(dev, iv_dma, ivsize, iv_dir);
+ if (sec4_sg_bytes)
+ dma_unmap_single(dev, sec4_sg_dma, sec4_sg_bytes,
+ DMA_TO_DEVICE);
+@@ -810,7 +814,7 @@ static void aead_unmap(struct device *dev,
+ struct aead_request *req)
+ {
+ caam_unmap(dev, req->src, req->dst,
+- edesc->src_nents, edesc->dst_nents, 0, 0,
++ edesc->src_nents, edesc->dst_nents, 0, 0, DMA_NONE,
+ edesc->sec4_sg_dma, edesc->sec4_sg_bytes);
+ }
+
+@@ -823,7 +827,7 @@ static void ablkcipher_unmap(struct device *dev,
+
+ caam_unmap(dev, req->src, req->dst,
+ edesc->src_nents, edesc->dst_nents,
+- edesc->iv_dma, ivsize,
++ edesc->iv_dma, ivsize, edesc->iv_dir,
+ edesc->sec4_sg_dma, edesc->sec4_sg_bytes);
+ }
+
+@@ -912,6 +916,18 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
+ scatterwalk_map_and_copy(req->info, req->dst, req->nbytes - ivsize,
+ ivsize, 0);
+
++ /* In case initial IV was generated, copy it in GIVCIPHER request */
++ if (edesc->iv_dir == DMA_FROM_DEVICE) {
++ u8 *iv;
++ struct skcipher_givcrypt_request *greq;
++
++ greq = container_of(req, struct skcipher_givcrypt_request,
++ creq);
++ iv = (u8 *)edesc->hw_desc + desc_bytes(edesc->hw_desc) +
++ edesc->sec4_sg_bytes;
++ memcpy(greq->giv, iv, ivsize);
++ }
++
+ kfree(edesc);
+
+ ablkcipher_request_complete(req, err);
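
The copy-back above leans on container_of(): for a GIVCIPHER operation the
ablkcipher_request is embedded in a skcipher_givcrypt_request, so the
completion callback can climb back out to the enclosing request and find
greq->giv. A minimal sketch of that recovery step (the helper is ours; the
fields are those used in the hunk):

#include <linux/kernel.h>	/* container_of */
#include <crypto/skcipher.h>	/* skcipher_givcrypt_request */

static u8 *generated_iv_sketch(struct ablkcipher_request *req)
{
	struct skcipher_givcrypt_request *greq =
		container_of(req, struct skcipher_givcrypt_request, creq);

	return greq->giv;	/* destination for the h/w-generated IV */
}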
+@@ -922,10 +938,10 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
+ {
+ struct ablkcipher_request *req = context;
+ struct ablkcipher_edesc *edesc;
++#ifdef DEBUG
+ struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+ int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+
+-#ifdef DEBUG
+ dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
+ #endif
+
+@@ -943,14 +959,6 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
+ edesc->dst_nents > 1 ? 100 : req->nbytes, 1);
+
+ ablkcipher_unmap(jrdev, edesc, req);
+-
+- /*
+- * The crypto API expects us to set the IV (req->info) to the last
+- * ciphertext block.
+- */
+- scatterwalk_map_and_copy(req->info, req->src, req->nbytes - ivsize,
+- ivsize, 0);
+-
+ kfree(edesc);
+
+ ablkcipher_request_complete(req, err);
+@@ -1099,15 +1107,14 @@ static void init_authenc_job(struct aead_request *req,
+ */
+ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
+ struct ablkcipher_edesc *edesc,
+- struct ablkcipher_request *req,
+- bool iv_contig)
++ struct ablkcipher_request *req)
+ {
+ struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+ int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+ u32 *desc = edesc->hw_desc;
+- u32 out_options = 0, in_options;
+- dma_addr_t dst_dma, src_dma;
+- int len, sec4_sg_index = 0;
++ u32 out_options = 0;
++ dma_addr_t dst_dma;
++ int len;
+
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR, "presciv@"__stringify(__LINE__)": ",
+@@ -1123,30 +1130,18 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
+ len = desc_len(sh_desc);
+ init_job_desc_shared(desc, ptr, len, HDR_SHARE_DEFER | HDR_REVERSE);
+
+- if (iv_contig) {
+- src_dma = edesc->iv_dma;
+- in_options = 0;
+- } else {
+- src_dma = edesc->sec4_sg_dma;
+- sec4_sg_index += edesc->src_nents + 1;
+- in_options = LDST_SGF;
+- }
+- append_seq_in_ptr(desc, src_dma, req->nbytes + ivsize, in_options);
++ append_seq_in_ptr(desc, edesc->sec4_sg_dma, req->nbytes + ivsize,
++ LDST_SGF);
+
+ if (likely(req->src == req->dst)) {
+- if (edesc->src_nents == 1 && iv_contig) {
+- dst_dma = sg_dma_address(req->src);
+- } else {
+- dst_dma = edesc->sec4_sg_dma +
+- sizeof(struct sec4_sg_entry);
+- out_options = LDST_SGF;
+- }
++ dst_dma = edesc->sec4_sg_dma + sizeof(struct sec4_sg_entry);
++ out_options = LDST_SGF;
+ } else {
+ if (edesc->dst_nents == 1) {
+ dst_dma = sg_dma_address(req->dst);
+ } else {
+- dst_dma = edesc->sec4_sg_dma +
+- sec4_sg_index * sizeof(struct sec4_sg_entry);
++ dst_dma = edesc->sec4_sg_dma + (edesc->src_nents + 1) *
++ sizeof(struct sec4_sg_entry);
+ out_options = LDST_SGF;
+ }
+ }
+@@ -1158,13 +1153,12 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
+ */
+ static void init_ablkcipher_giv_job(u32 *sh_desc, dma_addr_t ptr,
+ struct ablkcipher_edesc *edesc,
+- struct ablkcipher_request *req,
+- bool iv_contig)
++ struct ablkcipher_request *req)
+ {
+ struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+ int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+ u32 *desc = edesc->hw_desc;
+- u32 out_options, in_options;
++ u32 in_options;
+ dma_addr_t dst_dma, src_dma;
+ int len, sec4_sg_index = 0;
+
+@@ -1190,15 +1184,9 @@ static void init_ablkcipher_giv_job(u32 *sh_desc, dma_addr_t ptr,
+ }
+ append_seq_in_ptr(desc, src_dma, req->nbytes, in_options);
+
+- if (iv_contig) {
+- dst_dma = edesc->iv_dma;
+- out_options = 0;
+- } else {
+- dst_dma = edesc->sec4_sg_dma +
+- sec4_sg_index * sizeof(struct sec4_sg_entry);
+- out_options = LDST_SGF;
+- }
+- append_seq_out_ptr(desc, dst_dma, req->nbytes + ivsize, out_options);
++ dst_dma = edesc->sec4_sg_dma + sec4_sg_index *
++ sizeof(struct sec4_sg_entry);
++ append_seq_out_ptr(desc, dst_dma, req->nbytes + ivsize, LDST_SGF);
+ }
+
+ /*
+@@ -1287,7 +1275,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
+ GFP_DMA | flags);
+ if (!edesc) {
+ caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents, 0,
+- 0, 0, 0);
++ 0, DMA_NONE, 0, 0);
+ return ERR_PTR(-ENOMEM);
+ }
+
+@@ -1491,8 +1479,7 @@ static int aead_decrypt(struct aead_request *req)
+ * allocate and map the ablkcipher extended descriptor for ablkcipher
+ */
+ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+- *req, int desc_bytes,
+- bool *iv_contig_out)
++ *req, int desc_bytes)
+ {
+ struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+ struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher);
+@@ -1501,8 +1488,8 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+ GFP_KERNEL : GFP_ATOMIC;
+ int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0;
+ struct ablkcipher_edesc *edesc;
+- dma_addr_t iv_dma = 0;
+- bool in_contig;
++ dma_addr_t iv_dma;
++ u8 *iv;
+ int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+ int dst_sg_idx, sec4_sg_ents, sec4_sg_bytes;
+
+@@ -1546,33 +1533,20 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+ }
+ }
+
+- iv_dma = dma_map_single(jrdev, req->info, ivsize, DMA_TO_DEVICE);
+- if (dma_mapping_error(jrdev, iv_dma)) {
+- dev_err(jrdev, "unable to map IV\n");
+- caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents, 0,
+- 0, 0, 0);
+- return ERR_PTR(-ENOMEM);
+- }
+-
+- if (mapped_src_nents == 1 &&
+- iv_dma + ivsize == sg_dma_address(req->src)) {
+- in_contig = true;
+- sec4_sg_ents = 0;
+- } else {
+- in_contig = false;
+- sec4_sg_ents = 1 + mapped_src_nents;
+- }
++ sec4_sg_ents = 1 + mapped_src_nents;
+ dst_sg_idx = sec4_sg_ents;
+ sec4_sg_ents += mapped_dst_nents > 1 ? mapped_dst_nents : 0;
+ sec4_sg_bytes = sec4_sg_ents * sizeof(struct sec4_sg_entry);
+
+- /* allocate space for base edesc and hw desc commands, link tables */
+- edesc = kzalloc(sizeof(*edesc) + desc_bytes + sec4_sg_bytes,
++ /*
++ * allocate space for base edesc and hw desc commands, link tables, IV
++ */
++ edesc = kzalloc(sizeof(*edesc) + desc_bytes + sec4_sg_bytes + ivsize,
+ GFP_DMA | flags);
+ if (!edesc) {
+ dev_err(jrdev, "could not allocate extended descriptor\n");
+- caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents,
+- iv_dma, ivsize, 0, 0);
++ caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, DMA_NONE, 0, 0);
+ return ERR_PTR(-ENOMEM);
+ }
+
+@@ -1581,13 +1555,24 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+ edesc->sec4_sg_bytes = sec4_sg_bytes;
+ edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
+ desc_bytes;
++ edesc->iv_dir = DMA_TO_DEVICE;
+
+- if (!in_contig) {
+- dma_to_sec4_sg_one(edesc->sec4_sg, iv_dma, ivsize, 0);
+- sg_to_sec4_sg_last(req->src, mapped_src_nents,
+- edesc->sec4_sg + 1, 0);
++ /* Make sure IV is located in a DMAable area */
++ iv = (u8 *)edesc->hw_desc + desc_bytes + sec4_sg_bytes;
++ memcpy(iv, req->info, ivsize);
++
++ iv_dma = dma_map_single(jrdev, iv, ivsize, DMA_TO_DEVICE);
++ if (dma_mapping_error(jrdev, iv_dma)) {
++ dev_err(jrdev, "unable to map IV\n");
++ caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, DMA_NONE, 0, 0);
++ kfree(edesc);
++ return ERR_PTR(-ENOMEM);
+ }
+
++ dma_to_sec4_sg_one(edesc->sec4_sg, iv_dma, ivsize, 0);
++ sg_to_sec4_sg_last(req->src, mapped_src_nents, edesc->sec4_sg + 1, 0);
++
+ if (mapped_dst_nents > 1) {
+ sg_to_sec4_sg_last(req->dst, mapped_dst_nents,
+ edesc->sec4_sg + dst_sg_idx, 0);
+@@ -1598,7 +1583,7 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+ if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
+ dev_err(jrdev, "unable to map S/G table\n");
+ caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents,
+- iv_dma, ivsize, 0, 0);
++ iv_dma, ivsize, DMA_TO_DEVICE, 0, 0);
+ kfree(edesc);
+ return ERR_PTR(-ENOMEM);
+ }
+@@ -1611,7 +1596,6 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+ sec4_sg_bytes, 1);
+ #endif
+
+- *iv_contig_out = in_contig;
+ return edesc;
+ }
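
The crux of this hunk is memory placement: req->info may live on the
caller's stack, which must never be handed to dma_map_single(), so the
edesc allocation (already GFP_DMA) is grown by ivsize and the IV is copied
into that tail before mapping. A sketch of the carve-out (the helper name
is ours; the offset arithmetic is the hunk's):

#include <linux/string.h>	/* memcpy */

static u8 *edesc_iv_sketch(void *hw_desc, int desc_bytes,
			   int sec4_sg_bytes, const u8 *req_iv, int ivsize)
{
	/* The IV sits after the hw descriptor and the S/G link table. */
	u8 *iv = (u8 *)hw_desc + desc_bytes + sec4_sg_bytes;

	memcpy(iv, req_iv, ivsize);	/* now in DMAable memory */
	return iv;			/* safe to dma_map_single() */
}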
+
+@@ -1621,19 +1605,16 @@ static int ablkcipher_encrypt(struct ablkcipher_request *req)
+ struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+ struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher);
+ struct device *jrdev = ctx->jrdev;
+- bool iv_contig;
+ u32 *desc;
+ int ret = 0;
+
+ /* allocate extended descriptor */
+- edesc = ablkcipher_edesc_alloc(req, DESC_JOB_IO_LEN *
+- CAAM_CMD_SZ, &iv_contig);
++ edesc = ablkcipher_edesc_alloc(req, DESC_JOB_IO_LEN * CAAM_CMD_SZ);
+ if (IS_ERR(edesc))
+ return PTR_ERR(edesc);
+
+ /* Create and submit job descriptor*/
+- init_ablkcipher_job(ctx->sh_desc_enc,
+- ctx->sh_desc_enc_dma, edesc, req, iv_contig);
++ init_ablkcipher_job(ctx->sh_desc_enc, ctx->sh_desc_enc_dma, edesc, req);
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR, "ablkcipher jobdesc@"__stringify(__LINE__)": ",
+ DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
+@@ -1657,20 +1638,25 @@ static int ablkcipher_decrypt(struct ablkcipher_request *req)
+ struct ablkcipher_edesc *edesc;
+ struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+ struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher);
++ int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+ struct device *jrdev = ctx->jrdev;
+- bool iv_contig;
+ u32 *desc;
+ int ret = 0;
+
+ /* allocate extended descriptor */
+- edesc = ablkcipher_edesc_alloc(req, DESC_JOB_IO_LEN *
+- CAAM_CMD_SZ, &iv_contig);
++ edesc = ablkcipher_edesc_alloc(req, DESC_JOB_IO_LEN * CAAM_CMD_SZ);
+ if (IS_ERR(edesc))
+ return PTR_ERR(edesc);
+
++ /*
++ * The crypto API expects us to set the IV (req->info) to the last
++ * ciphertext block.
++ */
++ scatterwalk_map_and_copy(req->info, req->src, req->nbytes - ivsize,
++ ivsize, 0);
++
+ /* Create and submit job descriptor*/
+- init_ablkcipher_job(ctx->sh_desc_dec,
+- ctx->sh_desc_dec_dma, edesc, req, iv_contig);
++ init_ablkcipher_job(ctx->sh_desc_dec, ctx->sh_desc_dec_dma, edesc, req);
+ desc = edesc->hw_desc;
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR, "ablkcipher jobdesc@"__stringify(__LINE__)": ",
+@@ -1695,8 +1681,7 @@ static int ablkcipher_decrypt(struct ablkcipher_request *req)
+ */
+ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ struct skcipher_givcrypt_request *greq,
+- int desc_bytes,
+- bool *iv_contig_out)
++ int desc_bytes)
+ {
+ struct ablkcipher_request *req = &greq->creq;
+ struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+@@ -1706,8 +1691,8 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ GFP_KERNEL : GFP_ATOMIC;
+ int src_nents, mapped_src_nents, dst_nents, mapped_dst_nents;
+ struct ablkcipher_edesc *edesc;
+- dma_addr_t iv_dma = 0;
+- bool out_contig;
++ dma_addr_t iv_dma;
++ u8 *iv;
+ int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+ int dst_sg_idx, sec4_sg_ents, sec4_sg_bytes;
+
+@@ -1752,36 +1737,20 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ }
+ }
+
+- /*
+- * Check if iv can be contiguous with source and destination.
+- * If so, include it. If not, create scatterlist.
+- */
+- iv_dma = dma_map_single(jrdev, greq->giv, ivsize, DMA_TO_DEVICE);
+- if (dma_mapping_error(jrdev, iv_dma)) {
+- dev_err(jrdev, "unable to map IV\n");
+- caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents, 0,
+- 0, 0, 0);
+- return ERR_PTR(-ENOMEM);
+- }
+-
+ sec4_sg_ents = mapped_src_nents > 1 ? mapped_src_nents : 0;
+ dst_sg_idx = sec4_sg_ents;
+- if (mapped_dst_nents == 1 &&
+- iv_dma + ivsize == sg_dma_address(req->dst)) {
+- out_contig = true;
+- } else {
+- out_contig = false;
+- sec4_sg_ents += 1 + mapped_dst_nents;
+- }
++ sec4_sg_ents += 1 + mapped_dst_nents;
+
+- /* allocate space for base edesc and hw desc commands, link tables */
++ /*
++ * allocate space for base edesc and hw desc commands, link tables, IV
++ */
+ sec4_sg_bytes = sec4_sg_ents * sizeof(struct sec4_sg_entry);
+- edesc = kzalloc(sizeof(*edesc) + desc_bytes + sec4_sg_bytes,
++ edesc = kzalloc(sizeof(*edesc) + desc_bytes + sec4_sg_bytes + ivsize,
+ GFP_DMA | flags);
+ if (!edesc) {
+ dev_err(jrdev, "could not allocate extended descriptor\n");
+- caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents,
+- iv_dma, ivsize, 0, 0);
++ caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, DMA_NONE, 0, 0);
+ return ERR_PTR(-ENOMEM);
+ }
+
+@@ -1790,24 +1759,33 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ edesc->sec4_sg_bytes = sec4_sg_bytes;
+ edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
+ desc_bytes;
++ edesc->iv_dir = DMA_FROM_DEVICE;
++
++ /* Make sure IV is located in a DMAable area */
++ iv = (u8 *)edesc->hw_desc + desc_bytes + sec4_sg_bytes;
++ iv_dma = dma_map_single(jrdev, iv, ivsize, DMA_FROM_DEVICE);
++ if (dma_mapping_error(jrdev, iv_dma)) {
++ dev_err(jrdev, "unable to map IV\n");
++ caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, DMA_NONE, 0, 0);
++ kfree(edesc);
++ return ERR_PTR(-ENOMEM);
++ }
+
+ if (mapped_src_nents > 1)
+ sg_to_sec4_sg_last(req->src, mapped_src_nents, edesc->sec4_sg,
+ 0);
+
+- if (!out_contig) {
+- dma_to_sec4_sg_one(edesc->sec4_sg + dst_sg_idx,
+- iv_dma, ivsize, 0);
+- sg_to_sec4_sg_last(req->dst, mapped_dst_nents,
+- edesc->sec4_sg + dst_sg_idx + 1, 0);
+- }
++ dma_to_sec4_sg_one(edesc->sec4_sg + dst_sg_idx, iv_dma, ivsize, 0);
++ sg_to_sec4_sg_last(req->dst, mapped_dst_nents, edesc->sec4_sg +
++ dst_sg_idx + 1, 0);
+
+ edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
+ sec4_sg_bytes, DMA_TO_DEVICE);
+ if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
+ dev_err(jrdev, "unable to map S/G table\n");
+ caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents,
+- iv_dma, ivsize, 0, 0);
++ iv_dma, ivsize, DMA_FROM_DEVICE, 0, 0);
+ kfree(edesc);
+ return ERR_PTR(-ENOMEM);
+ }
+@@ -1820,7 +1798,6 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ sec4_sg_bytes, 1);
+ #endif
+
+- *iv_contig_out = out_contig;
+ return edesc;
+ }
+
+@@ -1831,19 +1808,17 @@ static int ablkcipher_givencrypt(struct skcipher_givcrypt_request *creq)
+ struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+ struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher);
+ struct device *jrdev = ctx->jrdev;
+- bool iv_contig = false;
+ u32 *desc;
+ int ret = 0;
+
+ /* allocate extended descriptor */
+- edesc = ablkcipher_giv_edesc_alloc(creq, DESC_JOB_IO_LEN *
+- CAAM_CMD_SZ, &iv_contig);
++ edesc = ablkcipher_giv_edesc_alloc(creq, DESC_JOB_IO_LEN * CAAM_CMD_SZ);
+ if (IS_ERR(edesc))
+ return PTR_ERR(edesc);
+
+ /* Create and submit job descriptor*/
+ init_ablkcipher_giv_job(ctx->sh_desc_givenc, ctx->sh_desc_givenc_dma,
+- edesc, req, iv_contig);
++ edesc, req);
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR,
+ "ablkcipher jobdesc@" __stringify(__LINE__) ": ",
+diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
+index cacda0831390..6e61cc93c2b0 100644
+--- a/drivers/crypto/caam/caamalg_qi.c
++++ b/drivers/crypto/caam/caamalg_qi.c
+@@ -728,7 +728,7 @@ static int xts_ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
+ * @assoclen: associated data length, in CAAM endianness
+ * @assoclen_dma: bus physical mapped address of req->assoclen
+ * @drv_req: driver-specific request structure
+- * @sgt: the h/w link table
++ * @sgt: the h/w link table, followed by IV
+ */
+ struct aead_edesc {
+ int src_nents;
+@@ -739,9 +739,6 @@ struct aead_edesc {
+ unsigned int assoclen;
+ dma_addr_t assoclen_dma;
+ struct caam_drv_req drv_req;
+-#define CAAM_QI_MAX_AEAD_SG \
+- ((CAAM_QI_MEMCACHE_SIZE - offsetof(struct aead_edesc, sgt)) / \
+- sizeof(struct qm_sg_entry))
+ struct qm_sg_entry sgt[0];
+ };
+
+@@ -753,7 +750,7 @@ struct aead_edesc {
+ * @qm_sg_bytes: length of dma mapped h/w link table
+ * @qm_sg_dma: bus physical mapped address of h/w link table
+ * @drv_req: driver-specific request structure
+- * @sgt: the h/w link table
++ * @sgt: the h/w link table, followed by IV
+ */
+ struct ablkcipher_edesc {
+ int src_nents;
+@@ -762,9 +759,6 @@ struct ablkcipher_edesc {
+ int qm_sg_bytes;
+ dma_addr_t qm_sg_dma;
+ struct caam_drv_req drv_req;
+-#define CAAM_QI_MAX_ABLKCIPHER_SG \
+- ((CAAM_QI_MEMCACHE_SIZE - offsetof(struct ablkcipher_edesc, sgt)) / \
+- sizeof(struct qm_sg_entry))
+ struct qm_sg_entry sgt[0];
+ };
+
+@@ -986,17 +980,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
+ }
+ }
+
+- if ((alg->caam.rfc3686 && encrypt) || !alg->caam.geniv) {
++ if ((alg->caam.rfc3686 && encrypt) || !alg->caam.geniv)
+ ivsize = crypto_aead_ivsize(aead);
+- iv_dma = dma_map_single(qidev, req->iv, ivsize, DMA_TO_DEVICE);
+- if (dma_mapping_error(qidev, iv_dma)) {
+- dev_err(qidev, "unable to map IV\n");
+- caam_unmap(qidev, req->src, req->dst, src_nents,
+- dst_nents, 0, 0, op_type, 0, 0);
+- qi_cache_free(edesc);
+- return ERR_PTR(-ENOMEM);
+- }
+- }
+
+ /*
+ * Create S/G table: req->assoclen, [IV,] req->src [, req->dst].
+@@ -1004,16 +989,33 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
+ */
+ qm_sg_ents = 1 + !!ivsize + mapped_src_nents +
+ (mapped_dst_nents > 1 ? mapped_dst_nents : 0);
+- if (unlikely(qm_sg_ents > CAAM_QI_MAX_AEAD_SG)) {
+- dev_err(qidev, "Insufficient S/G entries: %d > %zu\n",
+- qm_sg_ents, CAAM_QI_MAX_AEAD_SG);
+- caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+- iv_dma, ivsize, op_type, 0, 0);
++ sg_table = &edesc->sgt[0];
++ qm_sg_bytes = qm_sg_ents * sizeof(*sg_table);
++ if (unlikely(offsetof(struct aead_edesc, sgt) + qm_sg_bytes + ivsize >
++ CAAM_QI_MEMCACHE_SIZE)) {
++ dev_err(qidev, "No space for %d S/G entries and/or %dB IV\n",
++ qm_sg_ents, ivsize);
++ caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, 0, 0, 0);
+ qi_cache_free(edesc);
+ return ERR_PTR(-ENOMEM);
+ }
+- sg_table = &edesc->sgt[0];
+- qm_sg_bytes = qm_sg_ents * sizeof(*sg_table);
++
++ if (ivsize) {
++ u8 *iv = (u8 *)(sg_table + qm_sg_ents);
++
++ /* Make sure IV is located in a DMAable area */
++ memcpy(iv, req->iv, ivsize);
++
++ iv_dma = dma_map_single(qidev, iv, ivsize, DMA_TO_DEVICE);
++ if (dma_mapping_error(qidev, iv_dma)) {
++ dev_err(qidev, "unable to map IV\n");
++ caam_unmap(qidev, req->src, req->dst, src_nents,
++ dst_nents, 0, 0, 0, 0, 0);
++ qi_cache_free(edesc);
++ return ERR_PTR(-ENOMEM);
++ }
++ }
+
+ edesc->src_nents = src_nents;
+ edesc->dst_nents = dst_nents;
+@@ -1166,15 +1168,27 @@ static void ablkcipher_done(struct caam_drv_req *drv_req, u32 status)
+ #endif
+
+ ablkcipher_unmap(qidev, edesc, req);
+- qi_cache_free(edesc);
++
++ /* In case initial IV was generated, copy it in GIVCIPHER request */
++ if (edesc->drv_req.drv_ctx->op_type == GIVENCRYPT) {
++ u8 *iv;
++ struct skcipher_givcrypt_request *greq;
++
++ greq = container_of(req, struct skcipher_givcrypt_request,
++ creq);
++ iv = (u8 *)edesc->sgt + edesc->qm_sg_bytes;
++ memcpy(greq->giv, iv, ivsize);
++ }
+
+ /*
+ * The crypto API expects us to set the IV (req->info) to the last
+ * ciphertext block. This is used e.g. by the CTS mode.
+ */
+- scatterwalk_map_and_copy(req->info, req->dst, req->nbytes - ivsize,
+- ivsize, 0);
++ if (edesc->drv_req.drv_ctx->op_type != DECRYPT)
++ scatterwalk_map_and_copy(req->info, req->dst, req->nbytes -
++ ivsize, ivsize, 0);
+
++ qi_cache_free(edesc);
+ ablkcipher_request_complete(req, status);
+ }
+
+@@ -1189,9 +1203,9 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+ int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0;
+ struct ablkcipher_edesc *edesc;
+ dma_addr_t iv_dma;
+- bool in_contig;
++ u8 *iv;
+ int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+- int dst_sg_idx, qm_sg_ents;
++ int dst_sg_idx, qm_sg_ents, qm_sg_bytes;
+ struct qm_sg_entry *sg_table, *fd_sgt;
+ struct caam_drv_ctx *drv_ctx;
+ enum optype op_type = encrypt ? ENCRYPT : DECRYPT;
+@@ -1238,55 +1252,53 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+ }
+ }
+
+- iv_dma = dma_map_single(qidev, req->info, ivsize, DMA_TO_DEVICE);
+- if (dma_mapping_error(qidev, iv_dma)) {
+- dev_err(qidev, "unable to map IV\n");
+- caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
+- 0, 0, 0, 0);
+- return ERR_PTR(-ENOMEM);
+- }
+-
+- if (mapped_src_nents == 1 &&
+- iv_dma + ivsize == sg_dma_address(req->src)) {
+- in_contig = true;
+- qm_sg_ents = 0;
+- } else {
+- in_contig = false;
+- qm_sg_ents = 1 + mapped_src_nents;
+- }
++ qm_sg_ents = 1 + mapped_src_nents;
+ dst_sg_idx = qm_sg_ents;
+
+ qm_sg_ents += mapped_dst_nents > 1 ? mapped_dst_nents : 0;
+- if (unlikely(qm_sg_ents > CAAM_QI_MAX_ABLKCIPHER_SG)) {
+- dev_err(qidev, "Insufficient S/G entries: %d > %zu\n",
+- qm_sg_ents, CAAM_QI_MAX_ABLKCIPHER_SG);
+- caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+- iv_dma, ivsize, op_type, 0, 0);
++ qm_sg_bytes = qm_sg_ents * sizeof(struct qm_sg_entry);
++ if (unlikely(offsetof(struct ablkcipher_edesc, sgt) + qm_sg_bytes +
++ ivsize > CAAM_QI_MEMCACHE_SIZE)) {
++ dev_err(qidev, "No space for %d S/G entries and/or %dB IV\n",
++ qm_sg_ents, ivsize);
++ caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, 0, 0, 0);
+ return ERR_PTR(-ENOMEM);
+ }
+
+- /* allocate space for base edesc and link tables */
++ /* allocate space for base edesc, link tables and IV */
+ edesc = qi_cache_alloc(GFP_DMA | flags);
+ if (unlikely(!edesc)) {
+ dev_err(qidev, "could not allocate extended descriptor\n");
+- caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+- iv_dma, ivsize, op_type, 0, 0);
++ caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, 0, 0, 0);
++ return ERR_PTR(-ENOMEM);
++ }
++
++ /* Make sure IV is located in a DMAable area */
++ sg_table = &edesc->sgt[0];
++ iv = (u8 *)(sg_table + qm_sg_ents);
++ memcpy(iv, req->info, ivsize);
++
++ iv_dma = dma_map_single(qidev, iv, ivsize, DMA_TO_DEVICE);
++ if (dma_mapping_error(qidev, iv_dma)) {
++ dev_err(qidev, "unable to map IV\n");
++ caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, 0, 0, 0);
++ qi_cache_free(edesc);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ edesc->src_nents = src_nents;
+ edesc->dst_nents = dst_nents;
+ edesc->iv_dma = iv_dma;
+- sg_table = &edesc->sgt[0];
+- edesc->qm_sg_bytes = qm_sg_ents * sizeof(*sg_table);
++ edesc->qm_sg_bytes = qm_sg_bytes;
+ edesc->drv_req.app_ctx = req;
+ edesc->drv_req.cbk = ablkcipher_done;
+ edesc->drv_req.drv_ctx = drv_ctx;
+
+- if (!in_contig) {
+- dma_to_qm_sg_one(sg_table, iv_dma, ivsize, 0);
+- sg_to_qm_sg_last(req->src, mapped_src_nents, sg_table + 1, 0);
+- }
++ dma_to_qm_sg_one(sg_table, iv_dma, ivsize, 0);
++ sg_to_qm_sg_last(req->src, mapped_src_nents, sg_table + 1, 0);
+
+ if (mapped_dst_nents > 1)
+ sg_to_qm_sg_last(req->dst, mapped_dst_nents, sg_table +
+@@ -1304,20 +1316,12 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+
+ fd_sgt = &edesc->drv_req.fd_sgt[0];
+
+- if (!in_contig)
+- dma_to_qm_sg_one_last_ext(&fd_sgt[1], edesc->qm_sg_dma,
+- ivsize + req->nbytes, 0);
+- else
+- dma_to_qm_sg_one_last(&fd_sgt[1], iv_dma, ivsize + req->nbytes,
+- 0);
++ dma_to_qm_sg_one_last_ext(&fd_sgt[1], edesc->qm_sg_dma,
++ ivsize + req->nbytes, 0);
+
+ if (req->src == req->dst) {
+- if (!in_contig)
+- dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma +
+- sizeof(*sg_table), req->nbytes, 0);
+- else
+- dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->src),
+- req->nbytes, 0);
++ dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma +
++ sizeof(*sg_table), req->nbytes, 0);
+ } else if (mapped_dst_nents > 1) {
+ dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma + dst_sg_idx *
+ sizeof(*sg_table), req->nbytes, 0);
+@@ -1341,10 +1345,10 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ int src_nents, mapped_src_nents, dst_nents, mapped_dst_nents;
+ struct ablkcipher_edesc *edesc;
+ dma_addr_t iv_dma;
+- bool out_contig;
++ u8 *iv;
+ int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+ struct qm_sg_entry *sg_table, *fd_sgt;
+- int dst_sg_idx, qm_sg_ents;
++ int dst_sg_idx, qm_sg_ents, qm_sg_bytes;
+ struct caam_drv_ctx *drv_ctx;
+
+ drv_ctx = get_drv_ctx(ctx, GIVENCRYPT);
+@@ -1392,46 +1396,45 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ mapped_dst_nents = src_nents;
+ }
+
+- iv_dma = dma_map_single(qidev, creq->giv, ivsize, DMA_FROM_DEVICE);
+- if (dma_mapping_error(qidev, iv_dma)) {
+- dev_err(qidev, "unable to map IV\n");
+- caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
+- 0, 0, 0, 0);
+- return ERR_PTR(-ENOMEM);
+- }
+-
+ qm_sg_ents = mapped_src_nents > 1 ? mapped_src_nents : 0;
+ dst_sg_idx = qm_sg_ents;
+- if (mapped_dst_nents == 1 &&
+- iv_dma + ivsize == sg_dma_address(req->dst)) {
+- out_contig = true;
+- } else {
+- out_contig = false;
+- qm_sg_ents += 1 + mapped_dst_nents;
+- }
+
+- if (unlikely(qm_sg_ents > CAAM_QI_MAX_ABLKCIPHER_SG)) {
+- dev_err(qidev, "Insufficient S/G entries: %d > %zu\n",
+- qm_sg_ents, CAAM_QI_MAX_ABLKCIPHER_SG);
+- caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+- iv_dma, ivsize, GIVENCRYPT, 0, 0);
++ qm_sg_ents += 1 + mapped_dst_nents;
++ qm_sg_bytes = qm_sg_ents * sizeof(struct qm_sg_entry);
++ if (unlikely(offsetof(struct ablkcipher_edesc, sgt) + qm_sg_bytes +
++ ivsize > CAAM_QI_MEMCACHE_SIZE)) {
++ dev_err(qidev, "No space for %d S/G entries and/or %dB IV\n",
++ qm_sg_ents, ivsize);
++ caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, 0, 0, 0);
+ return ERR_PTR(-ENOMEM);
+ }
+
+- /* allocate space for base edesc and link tables */
++ /* allocate space for base edesc, link tables and IV */
+ edesc = qi_cache_alloc(GFP_DMA | flags);
+ if (!edesc) {
+ dev_err(qidev, "could not allocate extended descriptor\n");
+- caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+- iv_dma, ivsize, GIVENCRYPT, 0, 0);
++ caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, 0, 0, 0);
++ return ERR_PTR(-ENOMEM);
++ }
++
++ /* Make sure IV is located in a DMAable area */
++ sg_table = &edesc->sgt[0];
++ iv = (u8 *)(sg_table + qm_sg_ents);
++ iv_dma = dma_map_single(qidev, iv, ivsize, DMA_FROM_DEVICE);
++ if (dma_mapping_error(qidev, iv_dma)) {
++ dev_err(qidev, "unable to map IV\n");
++ caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
++ 0, 0, 0, 0);
++ qi_cache_free(edesc);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ edesc->src_nents = src_nents;
+ edesc->dst_nents = dst_nents;
+ edesc->iv_dma = iv_dma;
+- sg_table = &edesc->sgt[0];
+- edesc->qm_sg_bytes = qm_sg_ents * sizeof(*sg_table);
++ edesc->qm_sg_bytes = qm_sg_bytes;
+ edesc->drv_req.app_ctx = req;
+ edesc->drv_req.cbk = ablkcipher_done;
+ edesc->drv_req.drv_ctx = drv_ctx;
+@@ -1439,11 +1442,9 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ if (mapped_src_nents > 1)
+ sg_to_qm_sg_last(req->src, mapped_src_nents, sg_table, 0);
+
+- if (!out_contig) {
+- dma_to_qm_sg_one(sg_table + dst_sg_idx, iv_dma, ivsize, 0);
+- sg_to_qm_sg_last(req->dst, mapped_dst_nents, sg_table +
+- dst_sg_idx + 1, 0);
+- }
++ dma_to_qm_sg_one(sg_table + dst_sg_idx, iv_dma, ivsize, 0);
++ sg_to_qm_sg_last(req->dst, mapped_dst_nents, sg_table + dst_sg_idx + 1,
++ 0);
+
+ edesc->qm_sg_dma = dma_map_single(qidev, sg_table, edesc->qm_sg_bytes,
+ DMA_TO_DEVICE);
+@@ -1464,13 +1465,8 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ dma_to_qm_sg_one(&fd_sgt[1], sg_dma_address(req->src),
+ req->nbytes, 0);
+
+- if (!out_contig)
+- dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma + dst_sg_idx *
+- sizeof(*sg_table), ivsize + req->nbytes,
+- 0);
+- else
+- dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->dst),
+- ivsize + req->nbytes, 0);
++ dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma + dst_sg_idx *
++ sizeof(*sg_table), ivsize + req->nbytes, 0);
+
+ return edesc;
+ }
+@@ -1480,6 +1476,7 @@ static inline int ablkcipher_crypt(struct ablkcipher_request *req, bool encrypt)
+ struct ablkcipher_edesc *edesc;
+ struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+ struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher);
++ int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+ int ret;
+
+ if (unlikely(caam_congested))
+@@ -1490,6 +1487,14 @@ static inline int ablkcipher_crypt(struct ablkcipher_request *req, bool encrypt)
+ if (IS_ERR(edesc))
+ return PTR_ERR(edesc);
+
++ /*
++ * The crypto API expects us to set the IV (req->info) to the last
++ * ciphertext block.
++ */
++ if (!encrypt)
++ scatterwalk_map_and_copy(req->info, req->src, req->nbytes -
++ ivsize, ivsize, 0);
++
+ ret = caam_qi_enqueue(ctx->qidev, &edesc->drv_req);
+ if (!ret) {
+ ret = -EINPROGRESS;
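The caam/qi hunks above all follow one pattern: rather than DMA-mapping the caller's IV buffer, which may sit on the stack and is not guaranteed to be DMAable, the driver reserves room for the IV inside its own qi_cache allocation, immediately after the S/G table, and maps that copy instead. A minimal sketch of the layout logic, reusing the names from the patch (the memcpy applies to the encrypt path; the givencrypt path maps the slot DMA_FROM_DEVICE so the hardware can write the generated IV there):

        /* Sketch only: the IV lives in the same DMAable allocation as
         * the S/G table, so dma_map_single() never sees stack memory. */
        sg_table = &edesc->sgt[0];
        iv = (u8 *)(sg_table + qm_sg_ents);   /* first byte past the table */
        memcpy(iv, req->info, ivsize);
        iv_dma = dma_map_single(qidev, iv, ivsize, DMA_TO_DEVICE);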
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index 7a897209f181..7ff4a25440ac 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -66,7 +66,7 @@ static void rsa_priv_f2_unmap(struct device *dev, struct rsa_edesc *edesc,
+ struct caam_rsa_key *key = &ctx->key;
+ struct rsa_priv_f2_pdb *pdb = &edesc->pdb.priv_f2;
+ size_t p_sz = key->p_sz;
+- size_t q_sz = key->p_sz;
++ size_t q_sz = key->q_sz;
+
+ dma_unmap_single(dev, pdb->d_dma, key->d_sz, DMA_TO_DEVICE);
+ dma_unmap_single(dev, pdb->p_dma, p_sz, DMA_TO_DEVICE);
+@@ -83,7 +83,7 @@ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
+ struct caam_rsa_key *key = &ctx->key;
+ struct rsa_priv_f3_pdb *pdb = &edesc->pdb.priv_f3;
+ size_t p_sz = key->p_sz;
+- size_t q_sz = key->p_sz;
++ size_t q_sz = key->q_sz;
+
+ dma_unmap_single(dev, pdb->p_dma, p_sz, DMA_TO_DEVICE);
+ dma_unmap_single(dev, pdb->q_dma, q_sz, DMA_TO_DEVICE);
+@@ -166,18 +166,71 @@ static void rsa_priv_f3_done(struct device *dev, u32 *desc, u32 err,
+ akcipher_request_complete(req, err);
+ }
+
++static int caam_rsa_count_leading_zeros(struct scatterlist *sgl,
++ unsigned int nbytes,
++ unsigned int flags)
++{
++ struct sg_mapping_iter miter;
++ int lzeros, ents;
++ unsigned int len;
++ unsigned int tbytes = nbytes;
++ const u8 *buff;
++
++ ents = sg_nents_for_len(sgl, nbytes);
++ if (ents < 0)
++ return ents;
++
++ sg_miter_start(&miter, sgl, ents, SG_MITER_FROM_SG | flags);
++
++ lzeros = 0;
++ len = 0;
++ while (nbytes > 0) {
++ while (len && !*buff) {
++ lzeros++;
++ len--;
++ buff++;
++ }
++
++ if (len && *buff)
++ break;
++
++ sg_miter_next(&miter);
++ buff = miter.addr;
++ len = miter.length;
++
++ nbytes -= lzeros;
++ lzeros = 0;
++ }
++
++ miter.consumed = lzeros;
++ sg_miter_stop(&miter);
++ nbytes -= lzeros;
++
++ return tbytes - nbytes;
++}
++
+ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
+ size_t desclen)
+ {
+ struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+ struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+ struct device *dev = ctx->dev;
++ struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
+ struct rsa_edesc *edesc;
+ gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+ GFP_KERNEL : GFP_ATOMIC;
++ int sg_flags = (flags == GFP_ATOMIC) ? SG_MITER_ATOMIC : 0;
+ int sgc;
+ int sec4_sg_index, sec4_sg_len = 0, sec4_sg_bytes;
+ int src_nents, dst_nents;
++ int lzeros;
++
++ lzeros = caam_rsa_count_leading_zeros(req->src, req->src_len, sg_flags);
++ if (lzeros < 0)
++ return ERR_PTR(lzeros);
++
++ req->src_len -= lzeros;
++ req->src = scatterwalk_ffwd(req_ctx->src, req->src, lzeros);
+
+ src_nents = sg_nents_for_len(req->src, req->src_len);
+ dst_nents = sg_nents_for_len(req->dst, req->dst_len);
+@@ -344,7 +397,7 @@ static int set_rsa_priv_f2_pdb(struct akcipher_request *req,
+ struct rsa_priv_f2_pdb *pdb = &edesc->pdb.priv_f2;
+ int sec4_sg_index = 0;
+ size_t p_sz = key->p_sz;
+- size_t q_sz = key->p_sz;
++ size_t q_sz = key->q_sz;
+
+ pdb->d_dma = dma_map_single(dev, key->d, key->d_sz, DMA_TO_DEVICE);
+ if (dma_mapping_error(dev, pdb->d_dma)) {
+@@ -419,7 +472,7 @@ static int set_rsa_priv_f3_pdb(struct akcipher_request *req,
+ struct rsa_priv_f3_pdb *pdb = &edesc->pdb.priv_f3;
+ int sec4_sg_index = 0;
+ size_t p_sz = key->p_sz;
+- size_t q_sz = key->p_sz;
++ size_t q_sz = key->q_sz;
+
+ pdb->p_dma = dma_map_single(dev, key->p, p_sz, DMA_TO_DEVICE);
+ if (dma_mapping_error(dev, pdb->p_dma)) {
+@@ -953,6 +1006,7 @@ static struct akcipher_alg caam_rsa = {
+ .max_size = caam_rsa_max_size,
+ .init = caam_rsa_init_tfm,
+ .exit = caam_rsa_exit_tfm,
++ .reqsize = sizeof(struct caam_rsa_req_ctx),
+ .base = {
+ .cra_name = "rsa",
+ .cra_driver_name = "rsa-caam",
+diff --git a/drivers/crypto/caam/caampkc.h b/drivers/crypto/caam/caampkc.h
+index fd145c46eae1..82645bcf8b27 100644
+--- a/drivers/crypto/caam/caampkc.h
++++ b/drivers/crypto/caam/caampkc.h
+@@ -95,6 +95,14 @@ struct caam_rsa_ctx {
+ struct device *dev;
+ };
+
++/**
++ * caam_rsa_req_ctx - per request context.
++ * @src: input scatterlist (stripped of leading zeros)
++ */
++struct caam_rsa_req_ctx {
++ struct scatterlist src[2];
++};
++
+ /**
+ * rsa_edesc - s/w-extended rsa descriptor
+ * @src_nents : number of segments in input scatterlist
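caam_rsa_count_leading_zeros() exists because the CAAM engine rejects RSA input buffers larger than the modulus, which happens when callers pass values padded with leading 0x00 bytes. The new per-request context gives scatterwalk_ffwd() somewhere to build the trimmed scatterlist. With hypothetical numbers, for a 256-byte buffer that starts with two zero bytes:

        /* Illustration only - the values are made up */
        lzeros = caam_rsa_count_leading_zeros(req->src, req->src_len, sg_flags);
        /* lzeros == 2 */
        req->src_len -= lzeros;                             /* 256 -> 254 */
        req->src = scatterwalk_ffwd(req_ctx->src, req->src, lzeros);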
+diff --git a/drivers/crypto/cavium/zip/common.h b/drivers/crypto/cavium/zip/common.h
+index dc451e0a43c5..58fb3ed6e644 100644
+--- a/drivers/crypto/cavium/zip/common.h
++++ b/drivers/crypto/cavium/zip/common.h
+@@ -46,8 +46,10 @@
+ #ifndef __COMMON_H__
+ #define __COMMON_H__
+
++#include <linux/delay.h>
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
++#include <linux/io.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
+@@ -149,6 +151,25 @@ struct zip_operation {
+ u32 sizeofzops;
+ };
+
++static inline int zip_poll_result(union zip_zres_s *result)
++{
++ int retries = 1000;
++
++ while (!result->s.compcode) {
++ if (!--retries) {
++ pr_err("ZIP ERR: request timed out");
++ return -ETIMEDOUT;
++ }
++ udelay(10);
++ /*
++ * Force re-reading of compcode which is updated
++ * by the ZIP coprocessor.
++ */
++ rmb();
++ }
++ return 0;
++}
++
+ /* error messages */
+ #define zip_err(fmt, args...) pr_err("ZIP ERR:%s():%d: " \
+ fmt "\n", __func__, __LINE__, ## args)
+diff --git a/drivers/crypto/cavium/zip/zip_crypto.c b/drivers/crypto/cavium/zip/zip_crypto.c
+index 8df4d26cf9d4..b92b6e7e100f 100644
+--- a/drivers/crypto/cavium/zip/zip_crypto.c
++++ b/drivers/crypto/cavium/zip/zip_crypto.c
+@@ -124,7 +124,7 @@ int zip_compress(const u8 *src, unsigned int slen,
+ struct zip_kernel_ctx *zip_ctx)
+ {
+ struct zip_operation *zip_ops = NULL;
+- struct zip_state zip_state;
++ struct zip_state *zip_state;
+ struct zip_device *zip = NULL;
+ int ret;
+
+@@ -135,20 +135,23 @@ int zip_compress(const u8 *src, unsigned int slen,
+ if (!zip)
+ return -ENODEV;
+
+- memset(&zip_state, 0, sizeof(struct zip_state));
++ zip_state = kzalloc(sizeof(*zip_state), GFP_ATOMIC);
++ if (!zip_state)
++ return -ENOMEM;
++
+ zip_ops = &zip_ctx->zip_comp;
+
+ zip_ops->input_len = slen;
+ zip_ops->output_len = *dlen;
+ memcpy(zip_ops->input, src, slen);
+
+- ret = zip_deflate(zip_ops, &zip_state, zip);
++ ret = zip_deflate(zip_ops, zip_state, zip);
+
+ if (!ret) {
+ *dlen = zip_ops->output_len;
+ memcpy(dst, zip_ops->output, *dlen);
+ }
+-
++ kfree(zip_state);
+ return ret;
+ }
+
+@@ -157,7 +160,7 @@ int zip_decompress(const u8 *src, unsigned int slen,
+ struct zip_kernel_ctx *zip_ctx)
+ {
+ struct zip_operation *zip_ops = NULL;
+- struct zip_state zip_state;
++ struct zip_state *zip_state;
+ struct zip_device *zip = NULL;
+ int ret;
+
+@@ -168,7 +171,10 @@ int zip_decompress(const u8 *src, unsigned int slen,
+ if (!zip)
+ return -ENODEV;
+
+- memset(&zip_state, 0, sizeof(struct zip_state));
++ zip_state = kzalloc(sizeof(*zip_state), GFP_ATOMIC);
++ if (!zip_state)
++ return -ENOMEM;
++
+ zip_ops = &zip_ctx->zip_decomp;
+ memcpy(zip_ops->input, src, slen);
+
+@@ -179,13 +185,13 @@ int zip_decompress(const u8 *src, unsigned int slen,
+ zip_ops->input_len = slen;
+ zip_ops->output_len = *dlen;
+
+- ret = zip_inflate(zip_ops, &zip_state, zip);
++ ret = zip_inflate(zip_ops, zip_state, zip);
+
+ if (!ret) {
+ *dlen = zip_ops->output_len;
+ memcpy(dst, zip_ops->output, *dlen);
+ }
+-
++ kfree(zip_state);
+ return ret;
+ }
+
+diff --git a/drivers/crypto/cavium/zip/zip_deflate.c b/drivers/crypto/cavium/zip/zip_deflate.c
+index 9a944b8c1e29..d7133f857d67 100644
+--- a/drivers/crypto/cavium/zip/zip_deflate.c
++++ b/drivers/crypto/cavium/zip/zip_deflate.c
+@@ -129,8 +129,8 @@ int zip_deflate(struct zip_operation *zip_ops, struct zip_state *s,
+ /* Stats update for compression requests submitted */
+ atomic64_inc(&zip_dev->stats.comp_req_submit);
+
+- while (!result_ptr->s.compcode)
+- continue;
++ /* Wait for completion or error */
++ zip_poll_result(result_ptr);
+
+ /* Stats update for compression requests completed */
+ atomic64_inc(&zip_dev->stats.comp_req_complete);
+diff --git a/drivers/crypto/cavium/zip/zip_inflate.c b/drivers/crypto/cavium/zip/zip_inflate.c
+index 50cbdd83dbf2..7e0d73e2f89e 100644
+--- a/drivers/crypto/cavium/zip/zip_inflate.c
++++ b/drivers/crypto/cavium/zip/zip_inflate.c
+@@ -143,8 +143,8 @@ int zip_inflate(struct zip_operation *zip_ops, struct zip_state *s,
+ /* Decompression requests submitted stats update */
+ atomic64_inc(&zip_dev->stats.decomp_req_submit);
+
+- while (!result_ptr->s.compcode)
+- continue;
++ /* Wait for completion or error */
++ zip_poll_result(result_ptr);
+
+ /* Decompression requests completed stats update */
+ atomic64_inc(&zip_dev->stats.decomp_req_complete);
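Both cavium/zip changes harden the same request path. zip_state moves from the stack to kzalloc() - the ZIP engine writes its completion code into that structure, and device-written buffers should not live on the stack (with CONFIG_VMAP_STACK they may not even be DMA-addressable; this rationale is inferred, the patch does not state it). And the unbounded while (!compcode) spins become the bounded zip_poll_result() added to common.h. The polling shape generalizes to any device-updated status word; a minimal sketch:

        /* Sketch: bounded poll on a status word a device updates in memory */
        static int poll_status(u32 *status)
        {
                int retries = 1000;

                while (!READ_ONCE(*status)) {  /* fresh read on every pass */
                        if (!--retries)
                                return -ETIMEDOUT;
                        udelay(10);
                }
                return 0;
        }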
+diff --git a/drivers/crypto/ccree/cc_debugfs.c b/drivers/crypto/ccree/cc_debugfs.c
+index 08f8db489cf0..5ca184e42483 100644
+--- a/drivers/crypto/ccree/cc_debugfs.c
++++ b/drivers/crypto/ccree/cc_debugfs.c
+@@ -26,7 +26,8 @@ struct cc_debugfs_ctx {
+ static struct dentry *cc_debugfs_dir;
+
+ static struct debugfs_reg32 debug_regs[] = {
+- CC_DEBUG_REG(HOST_SIGNATURE),
++ { .name = "SIGNATURE" }, /* Must be 0th */
++ { .name = "VERSION" }, /* Must be 1st */
+ CC_DEBUG_REG(HOST_IRR),
+ CC_DEBUG_REG(HOST_POWER_DOWN_EN),
+ CC_DEBUG_REG(AXIM_MON_ERR),
+@@ -34,7 +35,6 @@ static struct debugfs_reg32 debug_regs[] = {
+ CC_DEBUG_REG(HOST_IMR),
+ CC_DEBUG_REG(AXIM_CFG),
+ CC_DEBUG_REG(AXIM_CACHE_PARAMS),
+- CC_DEBUG_REG(HOST_VERSION),
+ CC_DEBUG_REG(GPR_HOST),
+ CC_DEBUG_REG(AXIM_MON_COMP),
+ };
+@@ -58,6 +58,9 @@ int cc_debugfs_init(struct cc_drvdata *drvdata)
+ struct debugfs_regset32 *regset;
+ struct dentry *file;
+
++ debug_regs[0].offset = drvdata->sig_offset;
++ debug_regs[1].offset = drvdata->ver_offset;
++
+ ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
+diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c
+index 89ce013ae093..6f93ce7701ec 100644
+--- a/drivers/crypto/ccree/cc_driver.c
++++ b/drivers/crypto/ccree/cc_driver.c
+@@ -207,9 +207,13 @@ static int init_cc_resources(struct platform_device *plat_dev)
+ if (hw_rev->rev >= CC_HW_REV_712) {
+ new_drvdata->hash_len_sz = HASH_LEN_SIZE_712;
+ new_drvdata->axim_mon_offset = CC_REG(AXIM_MON_COMP);
++ new_drvdata->sig_offset = CC_REG(HOST_SIGNATURE_712);
++ new_drvdata->ver_offset = CC_REG(HOST_VERSION_712);
+ } else {
+ new_drvdata->hash_len_sz = HASH_LEN_SIZE_630;
+ new_drvdata->axim_mon_offset = CC_REG(AXIM_MON_COMP8);
++ new_drvdata->sig_offset = CC_REG(HOST_SIGNATURE_630);
++ new_drvdata->ver_offset = CC_REG(HOST_VERSION_630);
+ }
+
+ platform_set_drvdata(plat_dev, new_drvdata);
+@@ -276,7 +280,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
+ }
+
+ /* Verify correct mapping */
+- signature_val = cc_ioread(new_drvdata, CC_REG(HOST_SIGNATURE));
++ signature_val = cc_ioread(new_drvdata, new_drvdata->sig_offset);
+ if (signature_val != hw_rev->sig) {
+ dev_err(dev, "Invalid CC signature: SIGNATURE=0x%08X != expected=0x%08X\n",
+ signature_val, hw_rev->sig);
+@@ -287,7 +291,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
+
+ /* Display HW versions */
+ dev_info(dev, "ARM CryptoCell %s Driver: HW version 0x%08X, Driver version %s\n",
+- hw_rev->name, cc_ioread(new_drvdata, CC_REG(HOST_VERSION)),
++ hw_rev->name, cc_ioread(new_drvdata, new_drvdata->ver_offset),
+ DRV_MODULE_VERSION);
+
+ rc = init_cc_regs(new_drvdata, true);
+diff --git a/drivers/crypto/ccree/cc_driver.h b/drivers/crypto/ccree/cc_driver.h
+index 2048fdeb9579..95f82b2d1e70 100644
+--- a/drivers/crypto/ccree/cc_driver.h
++++ b/drivers/crypto/ccree/cc_driver.h
+@@ -129,6 +129,8 @@ struct cc_drvdata {
+ enum cc_hw_rev hw_rev;
+ u32 hash_len_sz;
+ u32 axim_mon_offset;
++ u32 sig_offset;
++ u32 ver_offset;
+ };
+
+ struct cc_crypto_alg {
+diff --git a/drivers/crypto/ccree/cc_host_regs.h b/drivers/crypto/ccree/cc_host_regs.h
+index f51001898ca1..616b2e1c41ba 100644
+--- a/drivers/crypto/ccree/cc_host_regs.h
++++ b/drivers/crypto/ccree/cc_host_regs.h
+@@ -45,7 +45,8 @@
+ #define CC_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SIZE 0x1UL
+ #define CC_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SHIFT 0x17UL
+ #define CC_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SIZE 0x1UL
+-#define CC_HOST_SIGNATURE_REG_OFFSET 0xA24UL
++#define CC_HOST_SIGNATURE_712_REG_OFFSET 0xA24UL
++#define CC_HOST_SIGNATURE_630_REG_OFFSET 0xAC8UL
+ #define CC_HOST_SIGNATURE_VALUE_BIT_SHIFT 0x0UL
+ #define CC_HOST_SIGNATURE_VALUE_BIT_SIZE 0x20UL
+ #define CC_HOST_BOOT_REG_OFFSET 0xA28UL
+@@ -105,7 +106,8 @@
+ #define CC_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SIZE 0x1UL
+ #define CC_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SHIFT 0x1EUL
+ #define CC_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SIZE 0x1UL
+-#define CC_HOST_VERSION_REG_OFFSET 0xA40UL
++#define CC_HOST_VERSION_712_REG_OFFSET 0xA40UL
++#define CC_HOST_VERSION_630_REG_OFFSET 0xAD8UL
+ #define CC_HOST_VERSION_VALUE_BIT_SHIFT 0x0UL
+ #define CC_HOST_VERSION_VALUE_BIT_SIZE 0x20UL
+ #define CC_HOST_KFDE0_VALID_REG_OFFSET 0xA60UL
+diff --git a/drivers/crypto/chelsio/chcr_ipsec.c b/drivers/crypto/chelsio/chcr_ipsec.c
+index 8e0aa3f175c9..461b97e2f1fd 100644
+--- a/drivers/crypto/chelsio/chcr_ipsec.c
++++ b/drivers/crypto/chelsio/chcr_ipsec.c
+@@ -346,18 +346,23 @@ inline void *copy_cpltx_pktxt(struct sk_buff *skb,
+ struct net_device *dev,
+ void *pos)
+ {
++ struct cpl_tx_pkt_core *cpl;
++ struct sge_eth_txq *q;
+ struct adapter *adap;
+ struct port_info *pi;
+- struct sge_eth_txq *q;
+- struct cpl_tx_pkt_core *cpl;
+- u64 cntrl = 0;
+ u32 ctrl0, qidx;
++ u64 cntrl = 0;
++ int left;
+
+ pi = netdev_priv(dev);
+ adap = pi->adapter;
+ qidx = skb->queue_mapping;
+ q = &adap->sge.ethtxq[qidx + pi->first_qset];
+
++ left = (void *)q->q.stat - pos;
++ if (!left)
++ pos = q->q.desc;
++
+ cpl = (struct cpl_tx_pkt_core *)pos;
+
+ cntrl = TXPKT_L4CSUM_DIS_F | TXPKT_IPCSUM_DIS_F;
+@@ -382,18 +387,17 @@ inline void *copy_key_cpltx_pktxt(struct sk_buff *skb,
+ void *pos,
+ struct ipsec_sa_entry *sa_entry)
+ {
+- struct adapter *adap;
+- struct port_info *pi;
+- struct sge_eth_txq *q;
+- unsigned int len, qidx;
+ struct _key_ctx *key_ctx;
+ int left, eoq, key_len;
++ struct sge_eth_txq *q;
++ struct adapter *adap;
++ struct port_info *pi;
++ unsigned int qidx;
+
+ pi = netdev_priv(dev);
+ adap = pi->adapter;
+ qidx = skb->queue_mapping;
+ q = &adap->sge.ethtxq[qidx + pi->first_qset];
+- len = sa_entry->enckey_len + sizeof(struct cpl_tx_pkt_core);
+ key_len = sa_entry->kctx_len;
+
+ /* end of queue, reset pos to start of queue */
+@@ -411,19 +415,14 @@ inline void *copy_key_cpltx_pktxt(struct sk_buff *skb,
+ pos += sizeof(struct _key_ctx);
+ left -= sizeof(struct _key_ctx);
+
+- if (likely(len <= left)) {
++ if (likely(key_len <= left)) {
+ memcpy(key_ctx->key, sa_entry->key, key_len);
+ pos += key_len;
+ } else {
+- if (key_len <= left) {
+- memcpy(pos, sa_entry->key, key_len);
+- pos += key_len;
+- } else {
+- memcpy(pos, sa_entry->key, left);
+- memcpy(q->q.desc, sa_entry->key + left,
+- key_len - left);
+- pos = (u8 *)q->q.desc + (key_len - left);
+- }
++ memcpy(pos, sa_entry->key, left);
++ memcpy(q->q.desc, sa_entry->key + left,
++ key_len - left);
++ pos = (u8 *)q->q.desc + (key_len - left);
+ }
+ /* Copy CPL TX PKT XT */
+ pos = copy_cpltx_pktxt(skb, dev, pos);
+diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
+index ad02aa63b519..d1a1c74fb56a 100644
+--- a/drivers/crypto/omap-sham.c
++++ b/drivers/crypto/omap-sham.c
+@@ -1087,7 +1087,7 @@ static void omap_sham_finish_req(struct ahash_request *req, int err)
+
+ if (test_bit(FLAGS_SGS_COPIED, &dd->flags))
+ free_pages((unsigned long)sg_virt(ctx->sg),
+- get_order(ctx->sg->length));
++ get_order(ctx->sg->length + ctx->bufcnt));
+
+ if (test_bit(FLAGS_SGS_ALLOCED, &dd->flags))
+ kfree(ctx->sg);
+diff --git a/drivers/crypto/vmx/aes.c b/drivers/crypto/vmx/aes.c
+index 96072b9b55c4..d7316f7a3a69 100644
+--- a/drivers/crypto/vmx/aes.c
++++ b/drivers/crypto/vmx/aes.c
+@@ -48,8 +48,6 @@ static int p8_aes_init(struct crypto_tfm *tfm)
+ alg, PTR_ERR(fallback));
+ return PTR_ERR(fallback);
+ }
+- printk(KERN_INFO "Using '%s' as fallback implementation.\n",
+- crypto_tfm_alg_driver_name((struct crypto_tfm *) fallback));
+
+ crypto_cipher_set_flags(fallback,
+ crypto_cipher_get_flags((struct
+diff --git a/drivers/crypto/vmx/aes_cbc.c b/drivers/crypto/vmx/aes_cbc.c
+index 7394d35d5936..5285ece4f33a 100644
+--- a/drivers/crypto/vmx/aes_cbc.c
++++ b/drivers/crypto/vmx/aes_cbc.c
+@@ -52,9 +52,6 @@ static int p8_aes_cbc_init(struct crypto_tfm *tfm)
+ alg, PTR_ERR(fallback));
+ return PTR_ERR(fallback);
+ }
+- printk(KERN_INFO "Using '%s' as fallback implementation.\n",
+- crypto_skcipher_driver_name(fallback));
+-
+
+ crypto_skcipher_set_flags(
+ fallback,
+diff --git a/drivers/crypto/vmx/aes_ctr.c b/drivers/crypto/vmx/aes_ctr.c
+index fc60d00a2e84..cd777c75291d 100644
+--- a/drivers/crypto/vmx/aes_ctr.c
++++ b/drivers/crypto/vmx/aes_ctr.c
+@@ -50,8 +50,6 @@ static int p8_aes_ctr_init(struct crypto_tfm *tfm)
+ alg, PTR_ERR(fallback));
+ return PTR_ERR(fallback);
+ }
+- printk(KERN_INFO "Using '%s' as fallback implementation.\n",
+- crypto_skcipher_driver_name(fallback));
+
+ crypto_skcipher_set_flags(
+ fallback,
+diff --git a/drivers/crypto/vmx/aes_xts.c b/drivers/crypto/vmx/aes_xts.c
+index 8cd6e62e4c90..8bd9aff0f55f 100644
+--- a/drivers/crypto/vmx/aes_xts.c
++++ b/drivers/crypto/vmx/aes_xts.c
+@@ -53,8 +53,6 @@ static int p8_aes_xts_init(struct crypto_tfm *tfm)
+ alg, PTR_ERR(fallback));
+ return PTR_ERR(fallback);
+ }
+- printk(KERN_INFO "Using '%s' as fallback implementation.\n",
+- crypto_skcipher_driver_name(fallback));
+
+ crypto_skcipher_set_flags(
+ fallback,
+diff --git a/drivers/crypto/vmx/ghash.c b/drivers/crypto/vmx/ghash.c
+index 27a94a119009..1c4b5b889fba 100644
+--- a/drivers/crypto/vmx/ghash.c
++++ b/drivers/crypto/vmx/ghash.c
+@@ -64,8 +64,6 @@ static int p8_ghash_init_tfm(struct crypto_tfm *tfm)
+ alg, PTR_ERR(fallback));
+ return PTR_ERR(fallback);
+ }
+- printk(KERN_INFO "Using '%s' as fallback implementation.\n",
+- crypto_tfm_alg_driver_name(crypto_shash_tfm(fallback)));
+
+ crypto_shash_set_flags(fallback,
+ crypto_shash_get_flags((struct crypto_shash
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 06e9650b3b30..a89b81b35932 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -123,6 +123,7 @@ static const struct xpad_device {
+ u8 mapping;
+ u8 xtype;
+ } xpad_device[] = {
++ { 0x0079, 0x18d4, "GPD Win 2 Controller", 0, XTYPE_XBOX360 },
+ { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
+@@ -409,6 +410,7 @@ static const signed short xpad_abs_triggers[] = {
+
+ static const struct usb_device_id xpad_table[] = {
+ { USB_INTERFACE_INFO('X', 'B', 0) }, /* X-Box USB-IF not approved class */
++ XPAD_XBOX360_VENDOR(0x0079), /* GPD Win 2 Controller */
+ XPAD_XBOX360_VENDOR(0x044f), /* Thrustmaster X-Box 360 controllers */
+ XPAD_XBOX360_VENDOR(0x045e), /* Microsoft X-Box 360 controllers */
+ XPAD_XBOXONE_VENDOR(0x045e), /* Microsoft X-Box One controllers */
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 75e757520ef0..93967c8139e7 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1262,6 +1262,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ { "ELAN060B", 0 },
+ { "ELAN060C", 0 },
+ { "ELAN0611", 0 },
++ { "ELAN0612", 0 },
+ { "ELAN1000", 0 },
+ { }
+ };
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index 9736c83dd418..f2d9c2c41885 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -933,6 +933,7 @@ MODULE_DEVICE_TABLE(i2c, goodix_ts_id);
+ #ifdef CONFIG_ACPI
+ static const struct acpi_device_id goodix_acpi_match[] = {
+ { "GDIX1001", 0 },
++ { "GDIX1002", 0 },
+ { }
+ };
+ MODULE_DEVICE_TABLE(acpi, goodix_acpi_match);
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index 9047c0a529b2..efd733472a35 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -576,15 +576,9 @@ static void vmballoon_pop(struct vmballoon *b)
+ }
+ }
+
+- if (b->batch_page) {
+- vunmap(b->batch_page);
+- b->batch_page = NULL;
+- }
+-
+- if (b->page) {
+- __free_page(b->page);
+- b->page = NULL;
+- }
++ /* Clearing the batch_page unconditionally has no adverse effect */
++ free_page((unsigned long)b->batch_page);
++ b->batch_page = NULL;
+ }
+
+ /*
+@@ -991,16 +985,13 @@ static const struct vmballoon_ops vmballoon_batched_ops = {
+
+ static bool vmballoon_init_batching(struct vmballoon *b)
+ {
+- b->page = alloc_page(VMW_PAGE_ALLOC_NOSLEEP);
+- if (!b->page)
+- return false;
++ struct page *page;
+
+- b->batch_page = vmap(&b->page, 1, VM_MAP, PAGE_KERNEL);
+- if (!b->batch_page) {
+- __free_page(b->page);
++ page = alloc_page(GFP_KERNEL | __GFP_ZERO);
++ if (!page)
+ return false;
+- }
+
++ b->batch_page = page_address(page);
+ return true;
+ }
+
+diff --git a/drivers/nfc/pn533/usb.c b/drivers/nfc/pn533/usb.c
+index e153e8b64bb8..d5553c47014f 100644
+--- a/drivers/nfc/pn533/usb.c
++++ b/drivers/nfc/pn533/usb.c
+@@ -62,6 +62,9 @@ struct pn533_usb_phy {
+ struct urb *out_urb;
+ struct urb *in_urb;
+
++ struct urb *ack_urb;
++ u8 *ack_buffer;
++
+ struct pn533 *priv;
+ };
+
+@@ -150,13 +153,16 @@ static int pn533_usb_send_ack(struct pn533 *dev, gfp_t flags)
+ struct pn533_usb_phy *phy = dev->phy;
+ static const u8 ack[6] = {0x00, 0x00, 0xff, 0x00, 0xff, 0x00};
+ /* spec 7.1.1.3: Preamble, SoPC (2), ACK Code (2), Postamble */
+- int rc;
+
+- phy->out_urb->transfer_buffer = (u8 *)ack;
+- phy->out_urb->transfer_buffer_length = sizeof(ack);
+- rc = usb_submit_urb(phy->out_urb, flags);
++ if (!phy->ack_buffer) {
++ phy->ack_buffer = kmemdup(ack, sizeof(ack), flags);
++ if (!phy->ack_buffer)
++ return -ENOMEM;
++ }
+
+- return rc;
++ phy->ack_urb->transfer_buffer = phy->ack_buffer;
++ phy->ack_urb->transfer_buffer_length = sizeof(ack);
++ return usb_submit_urb(phy->ack_urb, flags);
+ }
+
+ static int pn533_usb_send_frame(struct pn533 *dev,
+@@ -375,26 +381,31 @@ static int pn533_acr122_poweron_rdr(struct pn533_usb_phy *phy)
+ /* Power on the reader (CCID cmd) */
+ u8 cmd[10] = {PN533_ACR122_PC_TO_RDR_ICCPOWERON,
+ 0, 0, 0, 0, 0, 0, 3, 0, 0};
++ char *buffer;
++ int transferred;
+ int rc;
+ void *cntx;
+ struct pn533_acr122_poweron_rdr_arg arg;
+
+ dev_dbg(&phy->udev->dev, "%s\n", __func__);
+
++ buffer = kmemdup(cmd, sizeof(cmd), GFP_KERNEL);
++ if (!buffer)
++ return -ENOMEM;
++
+ init_completion(&arg.done);
+ cntx = phy->in_urb->context; /* backup context */
+
+ phy->in_urb->complete = pn533_acr122_poweron_rdr_resp;
+ phy->in_urb->context = &arg;
+
+- phy->out_urb->transfer_buffer = cmd;
+- phy->out_urb->transfer_buffer_length = sizeof(cmd);
+-
+ print_hex_dump_debug("ACR122 TX: ", DUMP_PREFIX_NONE, 16, 1,
+ cmd, sizeof(cmd), false);
+
+- rc = usb_submit_urb(phy->out_urb, GFP_KERNEL);
+- if (rc) {
++ rc = usb_bulk_msg(phy->udev, phy->out_urb->pipe, buffer, sizeof(cmd),
++ &transferred, 0);
++ kfree(buffer);
++ if (rc || (transferred != sizeof(cmd))) {
+ nfc_err(&phy->udev->dev,
+ "Reader power on cmd error %d\n", rc);
+ return rc;
+@@ -490,8 +501,9 @@ static int pn533_usb_probe(struct usb_interface *interface,
+
+ phy->in_urb = usb_alloc_urb(0, GFP_KERNEL);
+ phy->out_urb = usb_alloc_urb(0, GFP_KERNEL);
++ phy->ack_urb = usb_alloc_urb(0, GFP_KERNEL);
+
+- if (!phy->in_urb || !phy->out_urb)
++ if (!phy->in_urb || !phy->out_urb || !phy->ack_urb)
+ goto error;
+
+ usb_fill_bulk_urb(phy->in_urb, phy->udev,
+@@ -501,7 +513,9 @@ static int pn533_usb_probe(struct usb_interface *interface,
+ usb_fill_bulk_urb(phy->out_urb, phy->udev,
+ usb_sndbulkpipe(phy->udev, out_endpoint),
+ NULL, 0, pn533_send_complete, phy);
+-
++ usb_fill_bulk_urb(phy->ack_urb, phy->udev,
++ usb_sndbulkpipe(phy->udev, out_endpoint),
++ NULL, 0, pn533_send_complete, phy);
+
+ switch (id->driver_info) {
+ case PN533_DEVICE_STD:
+@@ -554,6 +568,7 @@ static int pn533_usb_probe(struct usb_interface *interface,
+ error:
+ usb_free_urb(phy->in_urb);
+ usb_free_urb(phy->out_urb);
++ usb_free_urb(phy->ack_urb);
+ usb_put_dev(phy->udev);
+ kfree(in_buf);
+
+@@ -573,10 +588,13 @@ static void pn533_usb_disconnect(struct usb_interface *interface)
+
+ usb_kill_urb(phy->in_urb);
+ usb_kill_urb(phy->out_urb);
++ usb_kill_urb(phy->ack_urb);
+
+ kfree(phy->in_urb->transfer_buffer);
+ usb_free_urb(phy->in_urb);
+ usb_free_urb(phy->out_urb);
++ usb_free_urb(phy->ack_urb);
++ kfree(phy->ack_buffer);
+
+ nfc_info(&interface->dev, "NXP PN533 NFC device disconnected\n");
+ }
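The pn533 hunks fix two instances of the same USB rule: urb->transfer_buffer must point at heap memory the host controller can DMA from - never at a static const table or an on-stack array, as the old ACK and CCID power-on paths did. kmemdup() of the template frame is the idiomatic fix; a generic sketch:

        static const u8 frame[6] = { 0x00, 0x00, 0xff, 0x00, 0xff, 0x00 };
        u8 *buf = kmemdup(frame, sizeof(frame), GFP_KERNEL); /* DMAable copy */

        if (!buf)
                return -ENOMEM;
        urb->transfer_buffer = buf;
        urb->transfer_buffer_length = sizeof(frame);
        ret = usb_submit_urb(urb, GFP_KERNEL); /* free buf after completion */

A dedicated ack_urb is also added so an ACK can be submitted while out_urb is still in flight.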
+diff --git a/drivers/phy/qualcomm/phy-qcom-qusb2.c b/drivers/phy/qualcomm/phy-qcom-qusb2.c
+index 94afeac1a19e..40fdef8b5b75 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qusb2.c
++++ b/drivers/phy/qualcomm/phy-qcom-qusb2.c
+@@ -315,6 +315,10 @@ static void qusb2_phy_set_tune2_param(struct qusb2_phy *qphy)
+ const struct qusb2_phy_cfg *cfg = qphy->cfg;
+ u8 *val;
+
++ /* efuse register is optional */
++ if (!qphy->cell)
++ return;
++
+ /*
+ * Read efuse register having TUNE2/1 parameter's high nibble.
+ * If efuse register shows value as 0x0, or if we fail to find
+diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
+index e74db7902549..a68329411b29 100644
+--- a/drivers/staging/android/ion/ion.c
++++ b/drivers/staging/android/ion/ion.c
+@@ -114,8 +114,11 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
+
+ void ion_buffer_destroy(struct ion_buffer *buffer)
+ {
+- if (WARN_ON(buffer->kmap_cnt > 0))
++ if (buffer->kmap_cnt > 0) {
++ pr_warn_once("%s: buffer still mapped in the kernel\n",
++ __func__);
+ buffer->heap->ops->unmap_kernel(buffer->heap, buffer);
++ }
+ buffer->heap->ops->free(buffer);
+ kfree(buffer);
+ }
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 624b501fd253..93de20e87abe 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1110,13 +1110,14 @@ static int omap8250_no_handle_irq(struct uart_port *port)
+ return 0;
+ }
+
++static const u8 omap4_habit = UART_ERRATA_CLOCK_DISABLE;
+ static const u8 am3352_habit = OMAP_DMA_TX_KICK | UART_ERRATA_CLOCK_DISABLE;
+ static const u8 dra742_habit = UART_ERRATA_CLOCK_DISABLE;
+
+ static const struct of_device_id omap8250_dt_ids[] = {
+ { .compatible = "ti,omap2-uart" },
+ { .compatible = "ti,omap3-uart" },
+- { .compatible = "ti,omap4-uart" },
++ { .compatible = "ti,omap4-uart", .data = &omap4_habit, },
+ { .compatible = "ti,am3352-uart", .data = &am3352_habit, },
+ { .compatible = "ti,am4372-uart", .data = &am3352_habit, },
+ { .compatible = "ti,dra742-uart", .data = &dra742_habit, },
+@@ -1353,6 +1354,19 @@ static int omap8250_soft_reset(struct device *dev)
+ int sysc;
+ int syss;
+
++ /*
++ * At least on omap4, unused uarts may not idle after reset without
++ * a basic scr dma configuration even with no dma in use. The
++ * module clkctrl status bits will be 1 instead of 3 blocking idle
++ * for the whole clockdomain. The softreset below will clear scr,
++ * and we restore it on resume so this is safe to do on all SoCs
++ * needing omap8250_soft_reset() quirk. Do it in two writes as
++ * recommended in the comment for omap8250_update_scr().
++ */
++ serial_out(up, UART_OMAP_SCR, OMAP_UART_SCR_DMAMODE_1);
++ serial_out(up, UART_OMAP_SCR,
++ OMAP_UART_SCR_DMAMODE_1 | OMAP_UART_SCR_DMAMODE_CTL);
++
+ sysc = serial_in(up, UART_OMAP_SYSC);
+
+ /* softreset the UART */
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 4b40a5b449ee..ebd33c0232e6 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1727,10 +1727,26 @@ static int pl011_allocate_irq(struct uart_amba_port *uap)
+ */
+ static void pl011_enable_interrupts(struct uart_amba_port *uap)
+ {
++ unsigned int i;
++
+ spin_lock_irq(&uap->port.lock);
+
+ /* Clear out any spuriously appearing RX interrupts */
+ pl011_write(UART011_RTIS | UART011_RXIS, uap, REG_ICR);
++
++ /*
++ * RXIS is asserted only when the RX FIFO transitions from below
++ * to above the trigger threshold. If the RX FIFO is already
++ * full to the threshold this can't happen and RXIS will now be
++ * stuck off. Drain the RX FIFO explicitly to fix this:
++ */
++ for (i = 0; i < uap->fifosize * 2; ++i) {
++ if (pl011_read(uap, REG_FR) & UART01x_FR_RXFE)
++ break;
++
++ pl011_read(uap, REG_DR);
++ }
++
+ uap->im = UART011_RTIM;
+ if (!pl011_dma_rx_running(uap))
+ uap->im |= UART011_RXIM;
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index e287fe8f10fc..55b3eff148b1 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -1757,7 +1757,6 @@ static int atmel_startup(struct uart_port *port)
+ {
+ struct platform_device *pdev = to_platform_device(port->dev);
+ struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+- struct tty_struct *tty = port->state->port.tty;
+ int retval;
+
+ /*
+@@ -1772,8 +1771,8 @@ static int atmel_startup(struct uart_port *port)
+ * Allocate the IRQ
+ */
+ retval = request_irq(port->irq, atmel_interrupt,
+- IRQF_SHARED | IRQF_COND_SUSPEND,
+- tty ? tty->name : "atmel_serial", port);
++ IRQF_SHARED | IRQF_COND_SUSPEND,
++ dev_name(&pdev->dev), port);
+ if (retval) {
+ dev_err(port->dev, "atmel_startup - Can't get irq\n");
+ return retval;
+diff --git a/drivers/tty/serial/samsung.c b/drivers/tty/serial/samsung.c
+index 3f2f8c118ce0..64e96926f1ad 100644
+--- a/drivers/tty/serial/samsung.c
++++ b/drivers/tty/serial/samsung.c
+@@ -862,15 +862,12 @@ static int s3c24xx_serial_request_dma(struct s3c24xx_uart_port *p)
+ dma->rx_conf.direction = DMA_DEV_TO_MEM;
+ dma->rx_conf.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ dma->rx_conf.src_addr = p->port.mapbase + S3C2410_URXH;
+- dma->rx_conf.src_maxburst = 16;
++ dma->rx_conf.src_maxburst = 1;
+
+ dma->tx_conf.direction = DMA_MEM_TO_DEV;
+ dma->tx_conf.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ dma->tx_conf.dst_addr = p->port.mapbase + S3C2410_UTXH;
+- if (dma_get_cache_alignment() >= 16)
+- dma->tx_conf.dst_maxburst = 16;
+- else
+- dma->tx_conf.dst_maxburst = 1;
++ dma->tx_conf.dst_maxburst = 1;
+
+ dma->rx_chan = dma_request_chan(p->port.dev, "rx");
+
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index fdbbff547106..a4f82ec665fe 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -2704,8 +2704,8 @@ static int sci_init_clocks(struct sci_port *sci_port, struct device *dev)
+ dev_dbg(dev, "failed to get %s (%ld)\n", clk_names[i],
+ PTR_ERR(clk));
+ else
+- dev_dbg(dev, "clk %s is %pC rate %pCr\n", clk_names[i],
+- clk, clk);
++ dev_dbg(dev, "clk %s is %pC rate %lu\n", clk_names[i],
++ clk, clk_get_rate(clk));
+ sci_port->clks[i] = IS_ERR(clk) ? NULL : clk;
+ }
+ return 0;
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 0c11d40a12bc..7b137003c2be 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -940,7 +940,7 @@ int usb_set_isoch_delay(struct usb_device *dev)
+ return usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
+ USB_REQ_SET_ISOCH_DELAY,
+ USB_DIR_OUT | USB_TYPE_STANDARD | USB_RECIP_DEVICE,
+- cpu_to_le16(dev->hub_delay), 0, NULL, 0,
++ dev->hub_delay, 0, NULL, 0,
+ USB_CTRL_SET_TIMEOUT);
+ }
+
+diff --git a/drivers/usb/gadget/function/f_printer.c b/drivers/usb/gadget/function/f_printer.c
+index d359efe06c76..9c7ed2539ff7 100644
+--- a/drivers/usb/gadget/function/f_printer.c
++++ b/drivers/usb/gadget/function/f_printer.c
+@@ -631,19 +631,19 @@ printer_write(struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ return -EAGAIN;
+ }
+
++ list_add(&req->list, &dev->tx_reqs_active);
++
+ /* here, we unlock, and only unlock, to avoid deadlock. */
+ spin_unlock(&dev->lock);
+ value = usb_ep_queue(dev->in_ep, req, GFP_ATOMIC);
+ spin_lock(&dev->lock);
+ if (value) {
++ list_del(&req->list);
+ list_add(&req->list, &dev->tx_reqs);
+ spin_unlock_irqrestore(&dev->lock, flags);
+ mutex_unlock(&dev->lock_printer_io);
+ return -EAGAIN;
+ }
+-
+- list_add(&req->list, &dev->tx_reqs_active);
+-
+ }
+
+ spin_unlock_irqrestore(&dev->lock, flags);
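The f_printer reordering closes a race: usb_ep_queue() may complete - and the completion handler run - before the request was ever put on tx_reqs_active, so the list_add() has to happen first, under the lock, and be rolled back if queuing fails. The shape of the fix (the unlock/lock asymmetry mirrors the driver's existing locking):

        list_add(&req->list, &dev->tx_reqs_active); /* publish first */
        spin_unlock(&dev->lock);
        value = usb_ep_queue(dev->in_ep, req, GFP_ATOMIC);
        spin_lock(&dev->lock);
        if (value)
                /* equivalent to the patch's list_del() + list_add() */
                list_move(&req->list, &dev->tx_reqs);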
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 409cde4e6a51..5caf78bbbf7c 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -333,6 +333,7 @@ struct renesas_usb3 {
+ struct extcon_dev *extcon;
+ struct work_struct extcon_work;
+ struct phy *phy;
++ struct dentry *dentry;
+
+ struct renesas_usb3_ep *usb3_ep;
+ int num_usb3_eps;
+@@ -622,6 +623,13 @@ static void usb3_disconnect(struct renesas_usb3 *usb3)
+ usb3_usb2_pullup(usb3, 0);
+ usb3_clear_bit(usb3, USB30_CON_B3_CONNECT, USB3_USB30_CON);
+ usb3_reset_epc(usb3);
++ usb3_disable_irq_1(usb3, USB_INT_1_B2_RSUM | USB_INT_1_B3_PLLWKUP |
++ USB_INT_1_B3_LUPSUCS | USB_INT_1_B3_DISABLE |
++ USB_INT_1_SPEED | USB_INT_1_B3_WRMRST |
++ USB_INT_1_B3_HOTRST | USB_INT_1_B2_SPND |
++ USB_INT_1_B2_L1SPND | USB_INT_1_B2_USBRST);
++ usb3_clear_bit(usb3, USB_COM_CON_SPD_MODE, USB3_USB_COM_CON);
++ usb3_init_epc_registers(usb3);
+
+ if (usb3->driver)
+ usb3->driver->disconnect(&usb3->gadget);
+@@ -2393,8 +2401,12 @@ static void renesas_usb3_debugfs_init(struct renesas_usb3 *usb3,
+
+ file = debugfs_create_file("b_device", 0644, root, usb3,
+ &renesas_usb3_b_device_fops);
+- if (!file)
++ if (!file) {
+ dev_info(dev, "%s: Can't create debugfs mode\n", __func__);
++ debugfs_remove_recursive(root);
++ } else {
++ usb3->dentry = root;
++ }
+ }
+
+ /*------- platform_driver ------------------------------------------------*/
+@@ -2402,14 +2414,13 @@ static int renesas_usb3_remove(struct platform_device *pdev)
+ {
+ struct renesas_usb3 *usb3 = platform_get_drvdata(pdev);
+
++ debugfs_remove_recursive(usb3->dentry);
+ device_remove_file(&pdev->dev, &dev_attr_role);
+
+ usb_del_gadget_udc(&usb3->gadget);
+ renesas_usb3_dma_free_prd(usb3, &pdev->dev);
+
+ __renesas_usb3_ep_free_request(usb3->ep0_req);
+- if (usb3->phy)
+- phy_put(usb3->phy);
+ pm_runtime_disable(&pdev->dev);
+
+ return 0;
+@@ -2628,6 +2639,17 @@ static int renesas_usb3_probe(struct platform_device *pdev)
+ if (ret < 0)
+ goto err_alloc_prd;
+
++ /*
++ * This is optional. So, if this driver cannot get a phy,
++ * this driver will not handle a phy anymore.
++ */
++ usb3->phy = devm_phy_optional_get(&pdev->dev, "usb");
++ if (IS_ERR(usb3->phy)) {
++ ret = PTR_ERR(usb3->phy);
++ goto err_add_udc;
++ }
++
++ pm_runtime_enable(&pdev->dev);
+ ret = usb_add_gadget_udc(&pdev->dev, &usb3->gadget);
+ if (ret < 0)
+ goto err_add_udc;
+@@ -2636,20 +2658,11 @@ static int renesas_usb3_probe(struct platform_device *pdev)
+ if (ret < 0)
+ goto err_dev_create;
+
+- /*
+- * This is an optional. So, if this driver cannot get a phy,
+- * this driver will not handle a phy anymore.
+- */
+- usb3->phy = devm_phy_get(&pdev->dev, "usb");
+- if (IS_ERR(usb3->phy))
+- usb3->phy = NULL;
+-
+ usb3->workaround_for_vbus = priv->workaround_for_vbus;
+
+ renesas_usb3_debugfs_init(usb3, &pdev->dev);
+
+ dev_info(&pdev->dev, "probed%s\n", usb3->phy ? " with phy" : "");
+- pm_runtime_enable(usb3_to_dev(usb3));
+
+ return 0;
+
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 6034c39b67d1..9e9de5452860 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -836,6 +836,12 @@ static int uas_slave_configure(struct scsi_device *sdev)
+ if (devinfo->flags & US_FL_BROKEN_FUA)
+ sdev->broken_fua = 1;
+
++ /* UAS also needs to support FL_ALWAYS_SYNC */
++ if (devinfo->flags & US_FL_ALWAYS_SYNC) {
++ sdev->skip_ms_page_3f = 1;
++ sdev->skip_ms_page_8 = 1;
++ sdev->wce_default_on = 1;
++ }
+ scsi_change_queue_depth(sdev, devinfo->qdepth - 2);
+ return 0;
+ }
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 747d3a9596d9..22fcfccf453a 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2321,6 +2321,15 @@ UNUSUAL_DEV( 0x4146, 0xba01, 0x0100, 0x0100,
+ "Micro Mini 1GB",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_NOT_LOCKABLE ),
+
++/* "G-DRIVE" external HDD hangs on write without these.
++ * Patch submitted by Alexander Kappner <agk@godking.net>
++ */
++UNUSUAL_DEV(0x4971, 0x8024, 0x0000, 0x9999,
++ "SimpleTech",
++ "External HDD",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_ALWAYS_SYNC),
++
+ /*
+ * Nick Bowler <nbowler@elliptictech.com>
+ * SCSI stack spams (otherwise harmless) error messages.
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 38434d88954a..d0bdebd87ce3 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -107,3 +107,12 @@ UNUSUAL_DEV(0x4971, 0x8017, 0x0000, 0x9999,
+ "External HDD",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_REPORT_OPCODES),
++
++/* "G-DRIVE" external HDD hangs on write without these.
++ * Patch submitted by Alexander Kappner <agk@godking.net>
++ */
++UNUSUAL_DEV(0x4971, 0x8024, 0x0000, 0x9999,
++ "SimpleTech",
++ "External HDD",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_ALWAYS_SYNC),
+diff --git a/drivers/usb/typec/typec_wcove.c b/drivers/usb/typec/typec_wcove.c
+index 19cca7f1b2c5..58dc7ea7cf0d 100644
+--- a/drivers/usb/typec/typec_wcove.c
++++ b/drivers/usb/typec/typec_wcove.c
+@@ -202,6 +202,10 @@ static int wcove_init(struct tcpc_dev *tcpc)
+ struct wcove_typec *wcove = tcpc_to_wcove(tcpc);
+ int ret;
+
++ ret = regmap_write(wcove->regmap, USBC_CONTROL1, 0);
++ if (ret)
++ return ret;
++
+ /* Unmask everything */
+ ret = regmap_write(wcove->regmap, USBC_IRQMASK1, 0);
+ if (ret)
+@@ -285,8 +289,30 @@ static int wcove_get_cc(struct tcpc_dev *tcpc, enum typec_cc_status *cc1,
+
+ static int wcove_set_cc(struct tcpc_dev *tcpc, enum typec_cc_status cc)
+ {
+- /* XXX: Relying on the HW FSM to configure things correctly for now */
+- return 0;
++ struct wcove_typec *wcove = tcpc_to_wcove(tcpc);
++ unsigned int ctrl;
++
++ switch (cc) {
++ case TYPEC_CC_RD:
++ ctrl = USBC_CONTROL1_MODE_SNK;
++ break;
++ case TYPEC_CC_RP_DEF:
++ ctrl = USBC_CONTROL1_CURSRC_UA_80 | USBC_CONTROL1_MODE_SRC;
++ break;
++ case TYPEC_CC_RP_1_5:
++ ctrl = USBC_CONTROL1_CURSRC_UA_180 | USBC_CONTROL1_MODE_SRC;
++ break;
++ case TYPEC_CC_RP_3_0:
++ ctrl = USBC_CONTROL1_CURSRC_UA_330 | USBC_CONTROL1_MODE_SRC;
++ break;
++ case TYPEC_CC_OPEN:
++ ctrl = 0;
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ return regmap_write(wcove->regmap, USBC_CONTROL1, ctrl);
+ }
+
+ static int wcove_set_polarity(struct tcpc_dev *tcpc, enum typec_cc_polarity pol)
+diff --git a/drivers/usb/usbip/vhci_sysfs.c b/drivers/usb/usbip/vhci_sysfs.c
+index 48808388ec33..be37aec250c2 100644
+--- a/drivers/usb/usbip/vhci_sysfs.c
++++ b/drivers/usb/usbip/vhci_sysfs.c
+@@ -10,6 +10,9 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+
++/* Hardening for Spectre-v1 */
++#include <linux/nospec.h>
++
+ #include "usbip_common.h"
+ #include "vhci.h"
+
+@@ -205,16 +208,20 @@ static int vhci_port_disconnect(struct vhci_hcd *vhci_hcd, __u32 rhport)
+ return 0;
+ }
+
+-static int valid_port(__u32 pdev_nr, __u32 rhport)
++static int valid_port(__u32 *pdev_nr, __u32 *rhport)
+ {
+- if (pdev_nr >= vhci_num_controllers) {
+- pr_err("pdev %u\n", pdev_nr);
++ if (*pdev_nr >= vhci_num_controllers) {
++ pr_err("pdev %u\n", *pdev_nr);
+ return 0;
+ }
+- if (rhport >= VHCI_HC_PORTS) {
+- pr_err("rhport %u\n", rhport);
++ *pdev_nr = array_index_nospec(*pdev_nr, vhci_num_controllers);
++
++ if (*rhport >= VHCI_HC_PORTS) {
++ pr_err("rhport %u\n", *rhport);
+ return 0;
+ }
++ *rhport = array_index_nospec(*rhport, VHCI_HC_PORTS);
++
+ return 1;
+ }
+
+@@ -232,7 +239,7 @@ static ssize_t detach_store(struct device *dev, struct device_attribute *attr,
+ pdev_nr = port_to_pdev_nr(port);
+ rhport = port_to_rhport(port);
+
+- if (!valid_port(pdev_nr, rhport))
++ if (!valid_port(&pdev_nr, &rhport))
+ return -EINVAL;
+
+ hcd = platform_get_drvdata(vhcis[pdev_nr].pdev);
+@@ -258,7 +265,8 @@ static ssize_t detach_store(struct device *dev, struct device_attribute *attr,
+ }
+ static DEVICE_ATTR_WO(detach);
+
+-static int valid_args(__u32 pdev_nr, __u32 rhport, enum usb_device_speed speed)
++static int valid_args(__u32 *pdev_nr, __u32 *rhport,
++ enum usb_device_speed speed)
+ {
+ if (!valid_port(pdev_nr, rhport)) {
+ return 0;
+@@ -322,7 +330,7 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
+ sockfd, devid, speed);
+
+ /* check received parameters */
+- if (!valid_args(pdev_nr, rhport, speed))
++ if (!valid_args(&pdev_nr, &rhport, speed))
+ return -EINVAL;
+
+ hcd = platform_get_drvdata(vhcis[pdev_nr].pdev);
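valid_port() and valid_args() now take their indices by pointer so that, after each bounds check, the value can be clamped with array_index_nospec() before it indexes vhcis[] or the per-controller ports - a plain range check alone does not stop a mispredicted branch from reading out of bounds under speculation (Spectre-v1). The canonical pattern, with table standing in for any attacker-controlled array index:

        #include <linux/nospec.h>

        if (idx >= ARRAY_SIZE(table))
                return -EINVAL;
        idx = array_index_nospec(idx, ARRAY_SIZE(table));
        val = table[idx]; /* under misspeculation idx is masked to 0, not OOB */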
+diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
+index b02c41e53d56..39e364c70caf 100644
+--- a/include/uapi/linux/kvm.h
++++ b/include/uapi/linux/kvm.h
+@@ -677,10 +677,10 @@ struct kvm_ioeventfd {
+ };
+
+ #define KVM_X86_DISABLE_EXITS_MWAIT (1 << 0)
+-#define KVM_X86_DISABLE_EXITS_HTL (1 << 1)
++#define KVM_X86_DISABLE_EXITS_HLT (1 << 1)
+ #define KVM_X86_DISABLE_EXITS_PAUSE (1 << 2)
+ #define KVM_X86_DISABLE_VALID_EXITS (KVM_X86_DISABLE_EXITS_MWAIT | \
+- KVM_X86_DISABLE_EXITS_HTL | \
++ KVM_X86_DISABLE_EXITS_HLT | \
+ KVM_X86_DISABLE_EXITS_PAUSE)
+
+ /* for KVM_ENABLE_CAP */
+diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
+index b02c41e53d56..39e364c70caf 100644
+--- a/tools/include/uapi/linux/kvm.h
++++ b/tools/include/uapi/linux/kvm.h
+@@ -677,10 +677,10 @@ struct kvm_ioeventfd {
+ };
+
+ #define KVM_X86_DISABLE_EXITS_MWAIT (1 << 0)
+-#define KVM_X86_DISABLE_EXITS_HTL (1 << 1)
++#define KVM_X86_DISABLE_EXITS_HLT (1 << 1)
+ #define KVM_X86_DISABLE_EXITS_PAUSE (1 << 2)
+ #define KVM_X86_DISABLE_VALID_EXITS (KVM_X86_DISABLE_EXITS_MWAIT | \
+- KVM_X86_DISABLE_EXITS_HTL | \
++ KVM_X86_DISABLE_EXITS_HLT | \
+ KVM_X86_DISABLE_EXITS_PAUSE)
+
+ /* for KVM_ENABLE_CAP */
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-06-11 21:50 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-06-11 21:50 UTC (permalink / raw
To: gentoo-commits
commit: fa93352971e5dd4e0cda149358e6fb0af0a8218b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 11 21:50:11 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 11 21:50:11 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fa933529
Linux patch 4.17.1
0000_README | 4 +
1000_linux-4.17.1.patch | 602 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 606 insertions(+)
diff --git a/0000_README b/0000_README
index 86e4a15..de4fd96 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-4.17.1.patch
+From: http://www.kernel.org
+Desc: Linux 4.17.1
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1000_linux-4.17.1.patch b/1000_linux-4.17.1.patch
new file mode 100644
index 0000000..8c58c98
--- /dev/null
+++ b/1000_linux-4.17.1.patch
@@ -0,0 +1,602 @@
+diff --git a/Documentation/networking/netdev-FAQ.txt b/Documentation/networking/netdev-FAQ.txt
+index 2a3278d5cf35..fa951b820b25 100644
+--- a/Documentation/networking/netdev-FAQ.txt
++++ b/Documentation/networking/netdev-FAQ.txt
+@@ -179,6 +179,15 @@ A: No. See above answer. In short, if you think it really belongs in
+ dash marker line as described in Documentation/process/submitting-patches.rst to
+ temporarily embed that information into the patch that you send.
+
++Q: Are all networking bug fixes backported to all stable releases?
++
++A: Due to capacity, Dave could only take care of the backports for the last
++ 2 stable releases. For earlier stable releases, each stable branch maintainer
++ is supposed to take care of them. If you find any patch is missing from an
++ earlier stable branch, please notify stable@vger.kernel.org with either a
++ commit ID or a formal patch backported, and CC Dave and other relevant
++ networking developers.
++
+ Q: Someone said that the comment style and coding convention is different
+ for the networking content. Is this true?
+
+diff --git a/Makefile b/Makefile
+index 554dcaddbce4..e551c9af6a06 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 3da5fca77cbd..bbc6cc609ec3 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -684,7 +684,8 @@ static int b53_switch_reset(struct b53_device *dev)
+ * still use this driver as a library and need to perform the reset
+ * earlier.
+ */
+- if (dev->chip_id == BCM58XX_DEVICE_ID) {
++ if (dev->chip_id == BCM58XX_DEVICE_ID ||
++ dev->chip_id == BCM583XX_DEVICE_ID) {
+ b53_read8(dev, B53_CTRL_PAGE, B53_SOFTRESET, &reg);
+ reg |= SW_RST | EN_SW_RST | EN_CH_RST;
+ b53_write8(dev, B53_CTRL_PAGE, B53_SOFTRESET, reg);
+@@ -1879,6 +1880,18 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ .jumbo_pm_reg = B53_JUMBO_PORT_MASK,
+ .jumbo_size_reg = B53_JUMBO_MAX_SIZE,
+ },
++ {
++ .chip_id = BCM583XX_DEVICE_ID,
++ .dev_name = "BCM583xx/11360",
++ .vlans = 4096,
++ .enabled_ports = 0x103,
++ .arl_entries = 4,
++ .cpu_port = B53_CPU_PORT,
++ .vta_regs = B53_VTA_REGS,
++ .duplex_reg = B53_DUPLEX_STAT_GE,
++ .jumbo_pm_reg = B53_JUMBO_PORT_MASK,
++ .jumbo_size_reg = B53_JUMBO_MAX_SIZE,
++ },
+ {
+ .chip_id = BCM7445_DEVICE_ID,
+ .dev_name = "BCM7445",
+diff --git a/drivers/net/dsa/b53/b53_priv.h b/drivers/net/dsa/b53/b53_priv.h
+index 3b57f47d0e79..b232aaae20aa 100644
+--- a/drivers/net/dsa/b53/b53_priv.h
++++ b/drivers/net/dsa/b53/b53_priv.h
+@@ -62,6 +62,7 @@ enum {
+ BCM53018_DEVICE_ID = 0x53018,
+ BCM53019_DEVICE_ID = 0x53019,
+ BCM58XX_DEVICE_ID = 0x5800,
++ BCM583XX_DEVICE_ID = 0x58300,
+ BCM7445_DEVICE_ID = 0x7445,
+ BCM7278_DEVICE_ID = 0x7278,
+ };
+@@ -181,6 +182,7 @@ static inline int is5301x(struct b53_device *dev)
+ static inline int is58xx(struct b53_device *dev)
+ {
+ return dev->chip_id == BCM58XX_DEVICE_ID ||
++ dev->chip_id == BCM583XX_DEVICE_ID ||
+ dev->chip_id == BCM7445_DEVICE_ID ||
+ dev->chip_id == BCM7278_DEVICE_ID;
+ }
+diff --git a/drivers/net/dsa/b53/b53_srab.c b/drivers/net/dsa/b53/b53_srab.c
+index c37ffd1b6833..8247481eaa06 100644
+--- a/drivers/net/dsa/b53/b53_srab.c
++++ b/drivers/net/dsa/b53/b53_srab.c
+@@ -364,7 +364,7 @@ static const struct of_device_id b53_srab_of_match[] = {
+ { .compatible = "brcm,bcm53018-srab" },
+ { .compatible = "brcm,bcm53019-srab" },
+ { .compatible = "brcm,bcm5301x-srab" },
+- { .compatible = "brcm,bcm11360-srab", .data = (void *)BCM58XX_DEVICE_ID },
++ { .compatible = "brcm,bcm11360-srab", .data = (void *)BCM583XX_DEVICE_ID },
+ { .compatible = "brcm,bcm58522-srab", .data = (void *)BCM58XX_DEVICE_ID },
+ { .compatible = "brcm,bcm58525-srab", .data = (void *)BCM58XX_DEVICE_ID },
+ { .compatible = "brcm,bcm58535-srab", .data = (void *)BCM58XX_DEVICE_ID },
+@@ -372,7 +372,7 @@ static const struct of_device_id b53_srab_of_match[] = {
+ { .compatible = "brcm,bcm58623-srab", .data = (void *)BCM58XX_DEVICE_ID },
+ { .compatible = "brcm,bcm58625-srab", .data = (void *)BCM58XX_DEVICE_ID },
+ { .compatible = "brcm,bcm88312-srab", .data = (void *)BCM58XX_DEVICE_ID },
+- { .compatible = "brcm,cygnus-srab", .data = (void *)BCM58XX_DEVICE_ID },
++ { .compatible = "brcm,cygnus-srab", .data = (void *)BCM583XX_DEVICE_ID },
+ { .compatible = "brcm,nsp-srab", .data = (void *)BCM58XX_DEVICE_ID },
+ { /* sentinel */ },
+ };
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
+index 7dd83d0ef0a0..22243c480a05 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
+@@ -588,7 +588,7 @@ static void bnx2x_ets_e3b0_nig_disabled(const struct link_params *params,
+ * slots for the highest priority.
+ */
+ REG_WR(bp, (port) ? NIG_REG_P1_TX_ARB_NUM_STRICT_ARB_SLOTS :
+- NIG_REG_P1_TX_ARB_NUM_STRICT_ARB_SLOTS, 0x100);
++ NIG_REG_P0_TX_ARB_NUM_STRICT_ARB_SLOTS, 0x100);
+ /* Mapping between the CREDIT_WEIGHT registers and actual client
+ * numbers
+ */
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index ddb6bf85a59c..e141563a4682 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1004,7 +1004,8 @@ static void team_port_disable(struct team *team,
+ static void __team_compute_features(struct team *team)
+ {
+ struct team_port *port;
+- u32 vlan_features = TEAM_VLAN_FEATURES & NETIF_F_ALL_FOR_ALL;
++ netdev_features_t vlan_features = TEAM_VLAN_FEATURES &
++ NETIF_F_ALL_FOR_ALL;
+ netdev_features_t enc_features = TEAM_ENC_FEATURES;
+ unsigned short max_hard_header_len = ETH_HLEN;
+ unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE |
+diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
+index 50cdefe3f6d3..c75199538c05 100644
+--- a/drivers/pci/host/pci-hyperv.c
++++ b/drivers/pci/host/pci-hyperv.c
+@@ -556,6 +556,26 @@ static void put_pcichild(struct hv_pci_dev *hv_pcidev,
+ static void get_hvpcibus(struct hv_pcibus_device *hv_pcibus);
+ static void put_hvpcibus(struct hv_pcibus_device *hv_pcibus);
+
++/*
++ * There is no good way to get notified from vmbus_onoffer_rescind(),
++ * so let's use polling here, since this is not a hot path.
++ */
++static int wait_for_response(struct hv_device *hdev,
++ struct completion *comp)
++{
++ while (true) {
++ if (hdev->channel->rescind) {
++ dev_warn_once(&hdev->device, "The device is gone.\n");
++ return -ENODEV;
++ }
++
++ if (wait_for_completion_timeout(comp, HZ / 10))
++ break;
++ }
++
++ return 0;
++}
++
+ /**
+ * devfn_to_wslot() - Convert from Linux PCI slot to Windows
+ * @devfn: The Linux representation of PCI slot
+@@ -1568,7 +1588,8 @@ static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
+ if (ret)
+ goto error;
+
+- wait_for_completion(&comp_pkt.host_event);
++ if (wait_for_response(hbus->hdev, &comp_pkt.host_event))
++ goto error;
+
+ hpdev->desc = *desc;
+ refcount_set(&hpdev->refs, 1);
+@@ -2069,15 +2090,16 @@ static int hv_pci_protocol_negotiation(struct hv_device *hdev)
+ sizeof(struct pci_version_request),
+ (unsigned long)pkt, VM_PKT_DATA_INBAND,
+ VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
++ if (!ret)
++ ret = wait_for_response(hdev, &comp_pkt.host_event);
++
+ if (ret) {
+ dev_err(&hdev->device,
+- "PCI Pass-through VSP failed sending version reqquest: %#x",
++ "PCI Pass-through VSP failed to request version: %d",
+ ret);
+ goto exit;
+ }
+
+- wait_for_completion(&comp_pkt.host_event);
+-
+ if (comp_pkt.completion_status >= 0) {
+ pci_protocol_version = pci_protocol_versions[i];
+ dev_info(&hdev->device,
+@@ -2286,11 +2308,12 @@ static int hv_pci_enter_d0(struct hv_device *hdev)
+ ret = vmbus_sendpacket(hdev->channel, d0_entry, sizeof(*d0_entry),
+ (unsigned long)pkt, VM_PKT_DATA_INBAND,
+ VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
++ if (!ret)
++ ret = wait_for_response(hdev, &comp_pkt.host_event);
++
+ if (ret)
+ goto exit;
+
+- wait_for_completion(&comp_pkt.host_event);
+-
+ if (comp_pkt.completion_status < 0) {
+ dev_err(&hdev->device,
+ "PCI Pass-through VSP failed D0 Entry with status %x\n",
+@@ -2330,11 +2353,10 @@ static int hv_pci_query_relations(struct hv_device *hdev)
+
+ ret = vmbus_sendpacket(hdev->channel, &message, sizeof(message),
+ 0, VM_PKT_DATA_INBAND, 0);
+- if (ret)
+- return ret;
++ if (!ret)
++ ret = wait_for_response(hdev, &comp);
+
+- wait_for_completion(&comp);
+- return 0;
++ return ret;
+ }
+
+ /**
+@@ -2404,11 +2426,11 @@ static int hv_send_resources_allocated(struct hv_device *hdev)
+ size_res, (unsigned long)pkt,
+ VM_PKT_DATA_INBAND,
+ VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
++ if (!ret)
++ ret = wait_for_response(hdev, &comp_pkt.host_event);
+ if (ret)
+ break;
+
+- wait_for_completion(&comp_pkt.host_event);
+-
+ if (comp_pkt.completion_status < 0) {
+ ret = -EPROTO;
+ dev_err(&hdev->device,
+diff --git a/include/linux/mroute_base.h b/include/linux/mroute_base.h
+index d617fe45543e..d633f737b3c6 100644
+--- a/include/linux/mroute_base.h
++++ b/include/linux/mroute_base.h
+@@ -307,16 +307,6 @@ static inline void vif_device_init(struct vif_device *v,
+ {
+ }
+
+-static inline void *
+-mr_table_alloc(struct net *net, u32 id,
+- struct mr_table_ops *ops,
+- void (*expire_func)(struct timer_list *t),
+- void (*table_set)(struct mr_table *mrt,
+- struct net *net))
+-{
+- return NULL;
+-}
+-
+ static inline void *mr_mfc_find_parent(struct mr_table *mrt,
+ void *hasharg, int parent)
+ {
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index 836f31af1369..a406f2e8680a 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -906,6 +906,11 @@ static inline __be32 ip6_make_flowinfo(unsigned int tclass, __be32 flowlabel)
+ return htonl(tclass << IPV6_TCLASS_SHIFT) | flowlabel;
+ }
+
++static inline __be32 flowi6_get_flowlabel(const struct flowi6 *fl6)
++{
++ return fl6->flowlabel & IPV6_FLOWLABEL_MASK;
++}
++
+ /*
+ * Prototypes exported by ipv6
+ */
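fl6->flowlabel actually holds the whole flowinfo word - traffic class in the upper bits, the 20-bit flow label below it, as ip6_make_flowinfo() just above shows - so hashing it raw let tclass bits perturb the flow hash. flowi6_get_flowlabel() masks them off:

        /* Worked example (hypothetical values) */
        __be32 info = ip6_make_flowinfo(0x2e /* tclass */, htonl(0x12345));
        /* info == htonl(0x02e12345); masking with IPV6_FLOWLABEL_MASK
         * (cpu_to_be32(0x000FFFFF)) recovers htonl(0x00012345). */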
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index d29f09bc5ff9..0234f8d1f0ac 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1334,7 +1334,7 @@ __u32 __get_hash_from_flowi6(const struct flowi6 *fl6, struct flow_keys *keys)
+ keys->ports.src = fl6->fl6_sport;
+ keys->ports.dst = fl6->fl6_dport;
+ keys->keyid.keyid = fl6->fl6_gre_key;
+- keys->tags.flow_label = (__force u32)fl6->flowlabel;
++ keys->tags.flow_label = (__force u32)flowi6_get_flowlabel(fl6);
+ keys->basic.ip_proto = fl6->flowi6_proto;
+
+ return flow_hash_from_keys(keys);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 45936922d7e2..19f6ab5de6e1 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2256,6 +2256,10 @@ static int do_setlink(const struct sk_buff *skb,
+ const struct net_device_ops *ops = dev->netdev_ops;
+ int err;
+
++ err = validate_linkmsg(dev, tb);
++ if (err < 0)
++ return err;
++
+ if (tb[IFLA_NET_NS_PID] || tb[IFLA_NET_NS_FD] || tb[IFLA_IF_NETNSID]) {
+ struct net *net = rtnl_link_get_net_capable(skb, dev_net(dev),
+ tb, CAP_NET_ADMIN);
+@@ -2619,10 +2623,6 @@ static int rtnl_setlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ goto errout;
+ }
+
+- err = validate_linkmsg(dev, tb);
+- if (err < 0)
+- goto errout;
+-
+ err = do_setlink(skb, dev, ifm, extack, tb, ifname, 0);
+ errout:
+ return err;
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index c27122f01b87..cfae17335705 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -717,6 +717,8 @@ bool fib_metrics_match(struct fib_config *cfg, struct fib_info *fi)
+ nla_strlcpy(tmp, nla, sizeof(tmp));
+ val = tcp_ca_get_key_by_name(fi->fib_net, tmp, &ecn_ca);
+ } else {
++ if (nla_len(nla) != sizeof(u32))
++ return false;
+ val = nla_get_u32(nla);
+ }
+
+@@ -1043,6 +1045,8 @@ fib_convert_metrics(struct fib_info *fi, const struct fib_config *cfg)
+ if (val == TCP_CA_UNSPEC)
+ return -EINVAL;
+ } else {
++ if (nla_len(nla) != sizeof(u32))
++ return -EINVAL;
+ val = nla_get_u32(nla);
+ }
+ if (type == RTAX_ADVMSS && val > 65535 - 40)
+diff --git a/net/ipv4/ipmr_base.c b/net/ipv4/ipmr_base.c
+index 30221701614c..cafb0506c8c9 100644
+--- a/net/ipv4/ipmr_base.c
++++ b/net/ipv4/ipmr_base.c
+@@ -35,17 +35,19 @@ mr_table_alloc(struct net *net, u32 id,
+ struct net *net))
+ {
+ struct mr_table *mrt;
++ int err;
+
+ mrt = kzalloc(sizeof(*mrt), GFP_KERNEL);
+ if (!mrt)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ mrt->id = id;
+ write_pnet(&mrt->net, net);
+
+ mrt->ops = *ops;
+- if (rhltable_init(&mrt->mfc_hash, mrt->ops.rht_params)) {
++ err = rhltable_init(&mrt->mfc_hash, mrt->ops.rht_params);
++ if (err) {
+ kfree(mrt);
+- return NULL;
++ return ERR_PTR(err);
+ }
+ INIT_LIST_HEAD(&mrt->mfc_cache_list);
+ INIT_LIST_HEAD(&mrt->mfc_unres_queue);
+diff --git a/net/ipv4/netfilter/nf_flow_table_ipv4.c b/net/ipv4/netfilter/nf_flow_table_ipv4.c
+index 0cd46bffa469..fc3923932eda 100644
+--- a/net/ipv4/netfilter/nf_flow_table_ipv4.c
++++ b/net/ipv4/netfilter/nf_flow_table_ipv4.c
+@@ -213,7 +213,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
+ enum flow_offload_tuple_dir dir;
+ struct flow_offload *flow;
+ struct net_device *outdev;
+- const struct rtable *rt;
++ struct rtable *rt;
+ struct iphdr *iph;
+ __be32 nexthop;
+
+@@ -234,7 +234,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
+ dir = tuplehash->tuple.dir;
+ flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
+
+- rt = (const struct rtable *)flow->tuplehash[dir].tuple.dst_cache;
++ rt = (struct rtable *)flow->tuplehash[dir].tuple.dst_cache;
+ if (unlikely(nf_flow_exceeds_mtu(skb, rt)))
+ return NF_ACCEPT;
+
+@@ -251,6 +251,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
+
+ skb->dev = outdev;
+ nexthop = rt_nexthop(rt, flow->tuplehash[!dir].tuple.src_v4.s_addr);
++ skb_dst_set_noref(skb, &rt->dst);
+ neigh_xmit(NEIGH_ARP_TABLE, outdev, &nexthop, skb);
+
+ return NF_STOLEN;
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 7b6d1689087b..af49f6cb5d3e 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -507,7 +507,8 @@ int ip6_forward(struct sk_buff *skb)
+ send redirects to source routed frames.
+ We don't send redirects to frames decapsulated from IPsec.
+ */
+- if (skb->dev == dst->dev && opt->srcrt == 0 && !skb_sec_path(skb)) {
++ if (IP6CB(skb)->iif == dst->dev->ifindex &&
++ opt->srcrt == 0 && !skb_sec_path(skb)) {
+ struct in6_addr *target = NULL;
+ struct inet_peer *peer;
+ struct rt6_info *rt;
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 298fd8b6ed17..37936671dcb3 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -227,8 +227,8 @@ static int __net_init ip6mr_rules_init(struct net *net)
+ INIT_LIST_HEAD(&net->ipv6.mr6_tables);
+
+ mrt = ip6mr_new_table(net, RT6_TABLE_DFLT);
+- if (!mrt) {
+- err = -ENOMEM;
++ if (IS_ERR(mrt)) {
++ err = PTR_ERR(mrt);
+ goto err1;
+ }
+
+@@ -301,8 +301,13 @@ static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
+
+ static int __net_init ip6mr_rules_init(struct net *net)
+ {
+- net->ipv6.mrt6 = ip6mr_new_table(net, RT6_TABLE_DFLT);
+- return net->ipv6.mrt6 ? 0 : -ENOMEM;
++ struct mr_table *mrt;
++
++ mrt = ip6mr_new_table(net, RT6_TABLE_DFLT);
++ if (IS_ERR(mrt))
++ return PTR_ERR(mrt);
++ net->ipv6.mrt6 = mrt;
++ return 0;
+ }
+
+ static void __net_exit ip6mr_rules_exit(struct net *net)
+@@ -1757,9 +1762,11 @@ int ip6_mroute_setsockopt(struct sock *sk, int optname, char __user *optval, uns
+
+ rtnl_lock();
+ ret = 0;
+- if (!ip6mr_new_table(net, v))
+- ret = -ENOMEM;
+- raw6_sk(sk)->ip6mr_table = v;
++ mrt = ip6mr_new_table(net, v);
++ if (IS_ERR(mrt))
++ ret = PTR_ERR(mrt);
++ else
++ raw6_sk(sk)->ip6mr_table = v;
+ rtnl_unlock();
+ return ret;
+ }
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index 9de4dfb126ba..525051a886bc 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -1576,6 +1576,12 @@ void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target)
+ ops_data_buf[NDISC_OPS_REDIRECT_DATA_SPACE], *ops_data = NULL;
+ bool ret;
+
++ if (netif_is_l3_master(skb->dev)) {
++ dev = __dev_get_by_index(dev_net(skb->dev), IPCB(skb)->iif);
++ if (!dev)
++ return;
++ }
++
+ if (ipv6_get_lladdr(dev, &saddr_buf, IFA_F_TENTATIVE)) {
+ ND_PRINTK(2, warn, "Redirect: no link-local address on %s\n",
+ dev->name);
+diff --git a/net/ipv6/netfilter/nf_flow_table_ipv6.c b/net/ipv6/netfilter/nf_flow_table_ipv6.c
+index 207cb35569b1..2d6652146bba 100644
+--- a/net/ipv6/netfilter/nf_flow_table_ipv6.c
++++ b/net/ipv6/netfilter/nf_flow_table_ipv6.c
+@@ -243,6 +243,7 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
+
+ skb->dev = outdev;
+ nexthop = rt6_nexthop(rt, &flow->tuplehash[!dir].tuple.src_v6);
++ skb_dst_set_noref(skb, &rt->dst);
+ neigh_xmit(NEIGH_ND_TABLE, outdev, nexthop, skb);
+
+ return NF_STOLEN;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index f4d61736c41a..4530a82aaa2e 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1868,7 +1868,7 @@ static void ip6_multipath_l3_keys(const struct sk_buff *skb,
+ } else {
+ keys->addrs.v6addrs.src = key_iph->saddr;
+ keys->addrs.v6addrs.dst = key_iph->daddr;
+- keys->tags.flow_label = ip6_flowinfo(key_iph);
++ keys->tags.flow_label = ip6_flowlabel(key_iph);
+ keys->basic.ip_proto = key_iph->nexthdr;
+ }
+ }
+@@ -1889,7 +1889,7 @@ u32 rt6_multipath_hash(const struct net *net, const struct flowi6 *fl6,
+ } else {
+ hash_keys.addrs.v6addrs.src = fl6->saddr;
+ hash_keys.addrs.v6addrs.dst = fl6->daddr;
+- hash_keys.tags.flow_label = (__force u32)fl6->flowlabel;
++ hash_keys.tags.flow_label = (__force u32)flowi6_get_flowlabel(fl6);
+ hash_keys.basic.ip_proto = fl6->flowi6_proto;
+ }
+ break;
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index 1fd9e145076a..466f17646625 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -428,16 +428,6 @@ static void pppol2tp_put_sk(struct rcu_head *head)
+ */
+ static void pppol2tp_session_close(struct l2tp_session *session)
+ {
+- struct pppol2tp_session *ps;
+-
+- ps = l2tp_session_priv(session);
+- mutex_lock(&ps->sk_lock);
+- ps->__sk = rcu_dereference_protected(ps->sk,
+- lockdep_is_held(&ps->sk_lock));
+- RCU_INIT_POINTER(ps->sk, NULL);
+- if (ps->__sk)
+- call_rcu(&ps->rcu, pppol2tp_put_sk);
+- mutex_unlock(&ps->sk_lock);
+ }
+
+ /* Really kill the session socket. (Called from sock_put() if
+@@ -480,15 +470,24 @@ static int pppol2tp_release(struct socket *sock)
+ sock_orphan(sk);
+ sock->sk = NULL;
+
+- /* If the socket is associated with a session,
+- * l2tp_session_delete will call pppol2tp_session_close which
+- * will drop the session's ref on the socket.
+- */
+ session = pppol2tp_sock_to_session(sk);
+ if (session) {
++ struct pppol2tp_session *ps;
++
+ l2tp_session_delete(session);
+- /* drop the ref obtained by pppol2tp_sock_to_session */
+- sock_put(sk);
++
++ ps = l2tp_session_priv(session);
++ mutex_lock(&ps->sk_lock);
++ ps->__sk = rcu_dereference_protected(ps->sk,
++ lockdep_is_held(&ps->sk_lock));
++ RCU_INIT_POINTER(ps->sk, NULL);
++ mutex_unlock(&ps->sk_lock);
++ call_rcu(&ps->rcu, pppol2tp_put_sk);
++
++ /* Rely on the sock_put() call at the end of the function for
++ * dropping the reference held by pppol2tp_sock_to_session().
++ * The last reference will be dropped by pppol2tp_put_sk().
++ */
+ }
+
+ release_sock(sk);
+@@ -742,7 +741,8 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
+ */
+ mutex_lock(&ps->sk_lock);
+ if (rcu_dereference_protected(ps->sk,
+- lockdep_is_held(&ps->sk_lock))) {
++ lockdep_is_held(&ps->sk_lock)) ||
++ ps->__sk) {
+ mutex_unlock(&ps->sk_lock);
+ error = -EEXIST;
+ goto end;
+@@ -803,7 +803,6 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
+
+ out_no_ppp:
+ /* This is how we get the session context from the socket. */
+- sock_hold(sk);
+ sk->sk_user_data = session;
+ rcu_assign_pointer(ps->sk, sk);
+ mutex_unlock(&ps->sk_lock);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index acb7b86574cd..60c2a252bdf5 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4282,7 +4282,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ goto out;
+ if (po->tp_version >= TPACKET_V3 &&
+ req->tp_block_size <=
+- BLK_PLUS_PRIV((u64)req_u->req3.tp_sizeof_priv))
++ BLK_PLUS_PRIV((u64)req_u->req3.tp_sizeof_priv) + sizeof(struct tpacket3_hdr))
+ goto out;
+ if (unlikely(req->tp_frame_size < po->tp_hdrlen +
+ po->tp_reserve))
+diff --git a/net/sctp/transport.c b/net/sctp/transport.c
+index 47f82bd794d9..03fc2c427aca 100644
+--- a/net/sctp/transport.c
++++ b/net/sctp/transport.c
+@@ -634,7 +634,7 @@ unsigned long sctp_transport_timeout(struct sctp_transport *trans)
+ trans->state != SCTP_PF)
+ timeout += trans->hbinterval;
+
+- return timeout;
++ return max_t(unsigned long, timeout, HZ / 5);
+ }
+
+ /* Reset transport variables to their initial values */
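The ipmr/ip6mr hunks above all make the same conversion: mr_table_alloc() and ip6mr_new_table() stop returning a bare NULL on failure and instead encode the errno in the returned pointer, so callers can distinguish -ENOMEM from an rhltable_init() failure. A minimal sketch of that ERR_PTR/IS_ERR idiom, with a hypothetical struct foo standing in for struct mr_table:

    #include <linux/err.h>
    #include <linux/slab.h>

    struct foo { int id; };    /* hypothetical stand-in for struct mr_table */

    static struct foo *foo_alloc(void)
    {
            struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

            if (!f)
                    return ERR_PTR(-ENOMEM);    /* encode the errno in the pointer */
            return f;
    }

    static int foo_init(void)
    {
            struct foo *f = foo_alloc();

            if (IS_ERR(f))
                    return PTR_ERR(f);    /* decode the errno back out */
            /* ... use f ... */
            kfree(f);
            return 0;
    }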
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-06-08 23:11 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-06-08 23:11 UTC (permalink / raw
To: gentoo-commits
commit: 9f9ebec7eee0d85f359a44fe3dd2f484b01172ad
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jun 8 23:11:01 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jun 8 23:11:01 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9f9ebec7
Update cpu optimization patch
0000_README | 2 +-
...able-additional-cpu-optimizations-for-gcc.patch | 67 +++++++++++++---------
2 files changed, 42 insertions(+), 27 deletions(-)
diff --git a/0000_README b/0000_README
index 94eb66a..86e4a15 100644
--- a/0000_README
+++ b/0000_README
@@ -77,4 +77,4 @@ Desc: Add Gentoo Linux support config settings and defaults.
Patch: 5010_enable-additional-cpu-optimizations-for-gcc.patch
From: https://github.com/graysky2/kernel_gcc_patch/
-Desc: Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
+Desc: Kernel patch enables gcc >= v4.13 optimizations for additional CPUs.
diff --git a/5010_enable-additional-cpu-optimizations-for-gcc.patch b/5010_enable-additional-cpu-optimizations-for-gcc.patch
index 1aba143..a8aa759 100644
--- a/5010_enable-additional-cpu-optimizations-for-gcc.patch
+++ b/5010_enable-additional-cpu-optimizations-for-gcc.patch
@@ -1,5 +1,5 @@
WARNING
-This patch works with gcc versions 4.9+ and with kernel version 3.15+ and should
+This patch works with gcc versions 4.9+ and with kernel version 4.13+ and should
NOT be applied when compiling on older versions of gcc due to key name changes
of the march flags introduced with the version 4.9 release of gcc.[1]
@@ -29,7 +29,8 @@ The expanded microarchitectures include:
* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
* Intel 4th Gen Core i3/i5/i7 (Haswell)
* Intel 5th Gen Core i3/i5/i7 (Broadwell)
-* Intel 6th Gen Core i3/i5.i7 (Skylake)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
It also offers to compile passing the 'native' option which, "selects the CPU
to generate code for at compilation time by determining the processor type of
@@ -53,7 +54,7 @@ See the following experimental evidence supporting this statement:
https://github.com/graysky2/kernel_gcc_patch
REQUIREMENTS
-linux version >=3.15
+linux version >=4.13
gcc version >=4.9
ACKNOWLEDGMENTS
@@ -66,9 +67,9 @@ REFERENCES
4. https://github.com/graysky2/kernel_gcc_patch/issues/15
5. http://www.linuxforge.net/docs/linux/linux-gcc.php
---- a/arch/x86/include/asm/module.h 2018-02-25 21:50:41.000000000 -0500
-+++ b/arch/x86/include/asm/module.h 2018-02-26 15:37:52.684596240 -0500
-@@ -25,6 +25,24 @@ struct mod_arch_specific {
+--- a/arch/x86/include/asm/module.h 2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/include/asm/module.h 2018-03-10 06:42:38.688317317 -0500
+@@ -25,6 +25,26 @@ struct mod_arch_specific {
#define MODULE_PROC_FAMILY "586MMX "
#elif defined CONFIG_MCORE2
#define MODULE_PROC_FAMILY "CORE2 "
@@ -90,10 +91,12 @@ REFERENCES
+#define MODULE_PROC_FAMILY "BROADWELL "
+#elif defined CONFIG_MSKYLAKE
+#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
#elif defined CONFIG_MATOM
#define MODULE_PROC_FAMILY "ATOM "
#elif defined CONFIG_M686
-@@ -43,6 +61,26 @@ struct mod_arch_specific {
+@@ -43,6 +63,26 @@ struct mod_arch_specific {
#define MODULE_PROC_FAMILY "K7 "
#elif defined CONFIG_MK8
#define MODULE_PROC_FAMILY "K8 "
@@ -120,8 +123,8 @@ REFERENCES
#elif defined CONFIG_MELAN
#define MODULE_PROC_FAMILY "ELAN "
#elif defined CONFIG_MCRUSOE
---- a/arch/x86/Kconfig.cpu 2018-02-25 21:50:41.000000000 -0500
-+++ b/arch/x86/Kconfig.cpu 2018-02-26 15:46:09.886742109 -0500
+--- a/arch/x86/Kconfig.cpu 2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/Kconfig.cpu 2018-03-10 06:45:50.244371799 -0500
@@ -116,6 +116,7 @@ config MPENTIUMM
config MPENTIUM4
bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
@@ -264,7 +267,7 @@ REFERENCES
---help---
Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -271,14 +354,79 @@ config MCORE2
+@@ -271,14 +354,88 @@ config MCORE2
family in /proc/cpuinfo. Newer ones have 6 and older ones 15
(not a typo)
@@ -347,10 +350,19 @@ REFERENCES
+ Select this for 6th Gen Core processors in the Skylake family.
+
+ Enables -march=skylake
++
++config MSKYLAKEX
++ bool "Intel Skylake X"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake X family.
++
++ Enables -march=skylake-avx512
config GENERIC_CPU
bool "Generic-x86-64"
-@@ -287,6 +435,19 @@ config GENERIC_CPU
+@@ -287,6 +444,19 @@ config GENERIC_CPU
Generic x86-64 CPU.
Run equally well on all x86-64 CPUs.
@@ -370,26 +382,26 @@ REFERENCES
endchoice
config X86_GENERIC
-@@ -311,7 +472,7 @@ config X86_INTERNODE_CACHE_SHIFT
+@@ -311,7 +481,7 @@ config X86_INTERNODE_CACHE_SHIFT
config X86_L1_CACHE_SHIFT
int
default "7" if MPENTIUM4 || MPSC
- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
default "4" if MELAN || M486 || MGEODEGX1
default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-@@ -342,35 +503,36 @@ config X86_ALIGNMENT_16
+@@ -342,35 +512,36 @@ config X86_ALIGNMENT_16
config X86_INTEL_USERCOPY
def_bool y
- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE
config X86_USE_PPRO_CHECKSUM
def_bool y
- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MATOM || MNATIVE
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MATOM || MNATIVE
config X86_USE_3DNOW
def_bool y
@@ -412,7 +424,7 @@ REFERENCES
- depends on (MCORE2 || MPENTIUM4 || MPSC)
+ default n
+ bool "Support for P6_NOPs on Intel chips"
-+ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE)
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE)
+ ---help---
+ P6_NOPs are a relatively minor optimization that require a family >=
+ 6 processor, except that it is broken on certain VIA chips.
@@ -429,22 +441,22 @@ REFERENCES
config X86_TSC
def_bool y
- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE || MATOM) || X86_64
config X86_CMPXCHG64
def_bool y
-@@ -380,7 +542,7 @@ config X86_CMPXCHG64
+@@ -380,7 +551,7 @@ config X86_CMPXCHG64
# generates cmov.
config X86_CMOV
def_bool y
- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
config X86_MINIMUM_CPU_FAMILY
int
---- a/arch/x86/Makefile 2018-02-25 21:50:41.000000000 -0500
-+++ b/arch/x86/Makefile 2018-02-26 15:37:52.685596255 -0500
-@@ -124,13 +124,40 @@ else
+--- a/arch/x86/Makefile 2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/Makefile 2018-03-10 06:47:00.284240139 -0500
+@@ -124,13 +124,42 @@ else
KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
# FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
@@ -483,13 +495,15 @@ REFERENCES
+ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
+ cflags-$(CONFIG_MSKYLAKE) += \
+ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++ cflags-$(CONFIG_MSKYLAKEX) += \
++ $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
+ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
+ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
KBUILD_CFLAGS += $(cflags-y)
---- a/arch/x86/Makefile_32.cpu 2018-02-25 21:50:41.000000000 -0500
-+++ b/arch/x86/Makefile_32.cpu 2018-02-26 15:37:52.686596269 -0500
+--- a/arch/x86/Makefile_32.cpu 2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/Makefile_32.cpu 2018-03-10 06:47:46.025992644 -0500
@@ -23,7 +23,18 @@ cflags-$(CONFIG_MK6) += -march=k6
# Please note, that patches that add -march=athlon-xp and friends are pointless.
# They make zero difference whatsosever to performance at this time.
@@ -509,7 +523,7 @@ REFERENCES
cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
-@@ -32,8 +43,16 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+@@ -32,8 +43,17 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
cflags-$(CONFIG_MVIAC7) += -march=i686
cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
@@ -523,6 +537,7 @@ REFERENCES
+cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
+cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
+cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX) += -march=i686 $(call tune,skylake-avx512)
+cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
+ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
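The patch body above also documents the 'native' choice (CONFIG_MNATIVE), which lets gcc probe the build host's CPU. A quick way to see which micro-architecture -march=native would resolve to on a given machine, using only stock gcc:

    gcc -march=native -Q --help=target | grep -- '-march='

The reported value (e.g. skylake) should correspond to one of the M* options the patch adds.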
* [gentoo-commits] proj/linux-patches:4.17 commit in: /
@ 2018-05-23 18:47 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2018-05-23 18:47 UTC (permalink / raw
To: gentoo-commits
commit: c43aa0c629b617cb71ba2cfd9b0a6055c0dcd35e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 23 18:46:57 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 23 18:46:57 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c43aa0c6
Patch to support namespace user.pax.* on tmpfs.
Patch to enable link security restrictions by default.
Patch to enable poweroff on Mac Pro 11. See bug #601964.
Patch to add UAS disable quirk. See bug #640082.
Patch that enables swapping of the FN and left Control keys and some
additional keys on some Apple keyboards. See bug #622902.
Patch to ensure that /dev/root doesn't appear in /proc/mounts when
booting without an initramfs. Bootsplash patch ported by Conrad
Kostecki. (Bug #637434).
Patch to enable control of the unaligned access control policy from
sysctl.
Patch that adds Gentoo Linux support config settings and defaults.
Patch that enables gcc >= v4.9 optimizations for additional CPUs.
0000_README | 36 +
1500_XATTR_USER_PREFIX.patch | 69 +
...ble-link-security-restrictions-by-default.patch | 22 +
2300_enable-poweroff-on-Mac-Pro-11.patch | 76 +
...age-Disable-UAS-on-JMicron-SATA-enclosure.patch | 40 +
2600_enable-key-swapping-for-apple-mac.patch | 114 ++
2900_dev-root-proc-mount-fix.patch | 38 +
4200_fbcondecor.patch | 2095 ++++++++++++++++++++
4400_alpha-sysctl-uac.patch | 142 ++
...able-additional-cpu-optimizations-for-gcc.patch | 530 +++++
10 files changed, 3162 insertions(+)
diff --git a/0000_README b/0000_README
index 9018993..6546583 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,42 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1500_XATTR_USER_PREFIX.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc: Support for namespace user.pax.* on tmpfs.
+
+Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
+From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc: Enable link security restrictions by default.
+
+Patch: 2300_enable-poweroff-on-Mac-Pro-11.patch
+From: http://kernel.ubuntu.com/git/ubuntu/ubuntu-xenial.git/patch/drivers/pci/quirks.c?id=5080ff61a438f3dd80b88b423e1a20791d8a774c
+Desc: Workaround to enable poweroff on Mac Pro 11. See bug #601964.
+
+Patch: 2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
+From: https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
+Desc: Add UAS disable quirk. See bug #640082.
+
+Patch: 2600_enable-key-swapping-for-apple-mac.patch
+From: https://github.com/free5lot/hid-apple-patched
+Desc: This hid-apple patch enables swapping of the FN and left Control keys and some additional keys on some Apple keyboards. See bug #622902
+
+Patch: 2900_dev-root-proc-mount-fix.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=438380
+Desc: Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
+
+Patch: 4200_fbcondecor.patch
+From: http://www.mepiscommunity.org/fbcondecor
+Desc: Bootsplash ported by Conrad Kostecki. (Bug #637434)
+
+Patch: 4400_alpha-sysctl-uac.patch
+From: Tobias Klausmann (klausman@gentoo.org) and http://bugs.gentoo.org/show_bug.cgi?id=217323
+Desc: Enable control of the unaligned access control policy from sysctl
+
Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
+
+Patch: 5010_enable-additional-cpu-optimizations-for-gcc.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..bacd032
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,69 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags. The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed even on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs. Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+
+ #endif /* _UAPI_LINUX_XATTR_H */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 440e2a7..c377172 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2667,6 +2667,14 @@ static int shmem_xattr_handler_set(const struct xattr_handler *handler,
+ struct shmem_inode_info *info = SHMEM_I(d_inode(dentry));
+
+ name = xattr_full_name(handler, name);
++
++ if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++ if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++ return -EOPNOTSUPP;
++ if (size > 8)
++ return -EINVAL;
++ }
++
+ return simple_xattr_set(&info->xattrs, name, value, size, flags);
+ }
+
+@@ -2682,6 +2690,12 @@ static const struct xattr_handler shmem_trusted_xattr_handler = {
+ .set = shmem_xattr_handler_set,
+ };
+
++static const struct xattr_handler shmem_user_xattr_handler = {
++ .prefix = XATTR_USER_PREFIX,
++ .get = shmem_xattr_handler_get,
++ .set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ &posix_acl_access_xattr_handler,
+@@ -2689,6 +2703,7 @@ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #endif
+ &shmem_security_xattr_handler,
+ &shmem_trusted_xattr_handler,
++ &shmem_user_xattr_handler,
+ NULL
+ };
+
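As a usage sketch (not part of the patch): on a tmpfs mount with extended attribute support, the shmem hunk above makes user.pax.flags the only accepted user.* attribute and caps its value at 8 bytes. With the attr tools installed, that behaves roughly like:

    setfattr -n user.pax.flags -v em /path/on/tmpfs/file    # "em" is just an example value
    getfattr -n user.pax.flags /path/on/tmpfs/file
    setfattr -n user.other -v x /path/on/tmpfs/file         # rejected with EOPNOTSUPP

The path and flag value here are illustrative only.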
diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..639fb3c
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,22 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -651,8 +651,8 @@ static inline void put_link(struct namei
+ path_put(link);
+ }
+
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+
+ /**
+ * may_follow_link - Check symlink following for unsafe situations
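A usage note (not part of the patch): the defaults flipped above remain runtime-tunable through the usual sysctl interface, so the restriction can still be inspected or reverted on a running system:

    sysctl fs.protected_symlinks
    sysctl -w fs.protected_hardlinks=0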
diff --git a/2300_enable-poweroff-on-Mac-Pro-11.patch b/2300_enable-poweroff-on-Mac-Pro-11.patch
new file mode 100644
index 0000000..063f2a1
--- /dev/null
+++ b/2300_enable-poweroff-on-Mac-Pro-11.patch
@@ -0,0 +1,76 @@
+From 5080ff61a438f3dd80b88b423e1a20791d8a774c Mon Sep 17 00:00:00 2001
+From: Chen Yu <yu.c.chen@intel.com>
+Date: Fri, 19 Aug 2016 10:25:57 -0700
+Subject: UBUNTU: SAUCE: PCI: Workaround to enable poweroff on Mac Pro 11
+
+BugLink: http://bugs.launchpad.net/bugs/1587714
+
+People reported that they can not do a poweroff nor a
+suspend to ram on their Mac Pro 11. After some investigations
+it was found that, once the PCI bridge 0000:00:1c.0 reassigns its
+mm windows to ([mem 0x7fa00000-0x7fbfffff] and
+[mem 0x7fc00000-0x7fdfffff 64bit pref]), the region of ACPI
+io resource 0x1804 becomes unaccessible immediately, where the
+ACPI Sleep register is located, as a result neither poweroff(S5)
+nor suspend to ram(S3) works.
+
+As suggested by Bjorn, further testing shows that, there is an
+unreported device may be (using) conflict with above aperture,
+which brings unpredictable result such as the failure of accessing
+the io port, which blocks the poweroff(S5). Besides if we reassign
+the memory aperture to the other place, the poweroff works again.
+
+As we do not find any resource declared in _CRS which contain above
+memory aperture, and Mac OS does not use this pci bridge neither, we
+choose a simple workaround to clear the hotplug flag(suggested by
+Yinghai Lu), thus do not allocate any resource for this pci bridge,
+and thereby no conflict anymore.
+
+Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=103211
+Cc: Bjorn Helgaas <bhelgaas@google.com>
+Cc: Rafael J. Wysocki <rafael@kernel.org>
+Cc: Lukas Wunner <lukas@wunner.de>
+Signed-off-by: Chen Yu <yu.c.chen@intel.com>
+Reference: https://patchwork.kernel.org/patch/9289777/
+Signed-off-by: Kamal Mostafa <kamal@canonical.com>
+Acked-by: Brad Figg <brad.figg@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
+---
+ drivers/pci/quirks.c | 20 ++++++++++++++++++++
+ 1 file changed, 20 insertions(+)
+
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 48cfaa0..23968b6 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -2750,6 +2750,26 @@ static void quirk_hotplug_bridge(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_HINT, 0x0020, quirk_hotplug_bridge);
+
+ /*
++ * Apple: Avoid programming the memory/io aperture of 00:1c.0
++ *
++ * BIOS does not declare any resource for 00:1c.0, but with
++ * hotplug flag set, thus the OS allocates:
++ * [mem 0x7fa00000 - 0x7fbfffff]
++ * [mem 0x7fc00000-0x7fdfffff 64bit pref]
++ * which is conflict with an unreported device, which
++ * causes unpredictable result such as accessing io port.
++ * So clear the hotplug flag to work around it.
++ */
++static void quirk_apple_mbp_poweroff(struct pci_dev *dev)
++{
++ if (dmi_match(DMI_PRODUCT_NAME, "MacBookPro11,4") ||
++ dmi_match(DMI_PRODUCT_NAME, "MacBookPro11,5"))
++ dev->is_hotplug_bridge = 0;
++}
++
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8c10, quirk_apple_mbp_poweroff);
++
++/*
+ * This is a quirk for the Ricoh MMC controller found as a part of
+ * some mulifunction chips.
+
+--
+cgit v0.11.2
+
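A quick check (not part of the patch) for whether a given machine matches the DMI strings the quirk keys on:

    cat /sys/class/dmi/id/product_name

Only MacBookPro11,4 and MacBookPro11,5 trigger quirk_apple_mbp_poweroff() above.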
diff --git a/2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch b/2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
new file mode 100644
index 0000000..0dd93ef
--- /dev/null
+++ b/2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
@@ -0,0 +1,40 @@
+From d02a55182307c01136b599fd048b4679f259a84e Mon Sep 17 00:00:00 2001
+From: Laura Abbott <labbott@fedoraproject.org>
+Date: Tue, 8 Sep 2015 09:53:38 -0700
+Subject: [PATCH] usb-storage: Disable UAS on JMicron SATA enclosure
+
+Steve Ellis reported incorrect block sizes and alignement
+offsets with a SATA enclosure. Adding a quirk to disable
+UAS fixes the problems.
+
+Reported-by: Steven Ellis <sellis@redhat.com>
+Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
+---
+ drivers/usb/storage/unusual_uas.h | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index c85ea53..216d93d 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -141,12 +141,15 @@ UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_ATA_1X),
+
+-/* Reported-by: Takeo Nakayama <javhera@gmx.com> */
++/*
++ * Initially Reported-by: Takeo Nakayama <javhera@gmx.com>
++ * UAS Ignore Reported by Steven Ellis <sellis@redhat.com>
++ */
+ UNUSUAL_DEV(0x357d, 0x7788, 0x0000, 0x9999,
+ "JMicron",
+ "JMS566",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+- US_FL_NO_REPORT_OPCODES),
++ US_FL_NO_REPORT_OPCODES | US_FL_IGNORE_UAS),
+
+ /* Reported-by: Hans de Goede <hdegoede@redhat.com> */
+ UNUSUAL_DEV(0x4971, 0x1012, 0x0000, 0x9999,
+--
+2.4.3
+
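For reference (not part of the patch): the affected enclosure can be identified with lsusb -d 357d:7788, and on unpatched kernels the same US_FL_IGNORE_UAS behavior can usually be requested at boot through the usb-storage quirks parameter, where the 'u' flag means IGNORE_UAS:

    usb-storage.quirks=357d:7788:u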
diff --git a/2600_enable-key-swapping-for-apple-mac.patch b/2600_enable-key-swapping-for-apple-mac.patch
new file mode 100644
index 0000000..ab228d3
--- /dev/null
+++ b/2600_enable-key-swapping-for-apple-mac.patch
@@ -0,0 +1,114 @@
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -52,6 +52,22 @@
+ "(For people who want to keep Windows PC keyboard muscle memory. "
+ "[0] = as-is, Mac layout. 1 = swapped, Windows layout.)");
+
++static unsigned int swap_fn_leftctrl;
++module_param(swap_fn_leftctrl, uint, 0644);
++MODULE_PARM_DESC(swap_fn_leftctrl, "Swap the Fn and left Control keys. "
++ "(For people who want to keep PC keyboard muscle memory. "
++ "[0] = as-is, Mac layout, 1 = swapped, PC layout)");
++
++static unsigned int rightalt_as_rightctrl;
++module_param(rightalt_as_rightctrl, uint, 0644);
++MODULE_PARM_DESC(rightalt_as_rightctrl, "Use the right Alt key as a right Ctrl key. "
++ "[0] = as-is, Mac layout. 1 = Right Alt is right Ctrl");
++
++static unsigned int ejectcd_as_delete;
++module_param(ejectcd_as_delete, uint, 0644);
++MODULE_PARM_DESC(ejectcd_as_delete, "Use Eject-CD key as Delete key. "
++ "([0] = disabled, 1 = enabled)");
++
+ struct apple_sc {
+ unsigned long quirks;
+ unsigned int fn_on;
+@@ -164,6 +180,21 @@
+ { }
+ };
+
++static const struct apple_key_translation swapped_fn_leftctrl_keys[] = {
++ { KEY_FN, KEY_LEFTCTRL },
++ { }
++};
++
++static const struct apple_key_translation rightalt_as_rightctrl_keys[] = {
++ { KEY_RIGHTALT, KEY_RIGHTCTRL },
++ { }
++};
++
++static const struct apple_key_translation ejectcd_as_delete_keys[] = {
++ { KEY_EJECTCD, KEY_DELETE },
++ { }
++};
++
+ static const struct apple_key_translation *apple_find_translation(
+ const struct apple_key_translation *table, u16 from)
+ {
+@@ -183,9 +214,11 @@
+ struct apple_sc *asc = hid_get_drvdata(hid);
+ const struct apple_key_translation *trans, *table;
+
+- if (usage->code == KEY_FN) {
++ u16 fn_keycode = (swap_fn_leftctrl) ? (KEY_LEFTCTRL) : (KEY_FN);
++
++ if (usage->code == fn_keycode) {
+ asc->fn_on = !!value;
+- input_event(input, usage->type, usage->code, value);
++ input_event(input, usage->type, KEY_FN, value);
+ return 1;
+ }
+
+@@ -264,6 +297,30 @@
+ }
+ }
+
++ if (swap_fn_leftctrl) {
++ trans = apple_find_translation(swapped_fn_leftctrl_keys, usage->code);
++ if (trans) {
++ input_event(input, usage->type, trans->to, value);
++ return 1;
++ }
++ }
++
++ if (ejectcd_as_delete) {
++ trans = apple_find_translation(ejectcd_as_delete_keys, usage->code);
++ if (trans) {
++ input_event(input, usage->type, trans->to, value);
++ return 1;
++ }
++ }
++
++ if (rightalt_as_rightctrl) {
++ trans = apple_find_translation(rightalt_as_rightctrl_keys, usage->code);
++ if (trans) {
++ input_event(input, usage->type, trans->to, value);
++ return 1;
++ }
++ }
++
+ return 0;
+ }
+
+@@ -327,6 +384,21 @@
+
+ for (trans = apple_iso_keyboard; trans->from; trans++)
+ set_bit(trans->to, input->keybit);
++
++ if (swap_fn_leftctrl) {
++ for (trans = swapped_fn_leftctrl_keys; trans->from; trans++)
++ set_bit(trans->to, input->keybit);
++ }
++
++ if (ejectcd_as_delete) {
++ for (trans = ejectcd_as_delete_keys; trans->from; trans++)
++ set_bit(trans->to, input->keybit);
++ }
++
++ if (rightalt_as_rightctrl) {
++ for (trans = rightalt_as_rightctrl_keys; trans->from; trans++)
++ set_bit(trans->to, input->keybit);
++ }
+ }
+
+ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,
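A usage note (not part of the patch): the new knobs are plain module parameters declared with mode 0644, so when hid-apple is built as a module they can be set at load time or flipped at runtime:

    modprobe hid_apple swap_fn_leftctrl=1
    echo 1 > /sys/module/hid_apple/parameters/swap_fn_leftctrl

With the driver built in, the equivalent boot parameter would be hid_apple.swap_fn_leftctrl=1.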
diff --git a/2900_dev-root-proc-mount-fix.patch b/2900_dev-root-proc-mount-fix.patch
new file mode 100644
index 0000000..83f96d2
--- /dev/null
+++ b/2900_dev-root-proc-mount-fix.patch
@@ -0,0 +1,38 @@
+--- a/init/do_mounts.c 2018-05-23 14:30:36.870899527 -0400
++++ b/init/do_mounts.c 2018-05-23 14:35:54.398659105 -0400
+@@ -489,7 +489,11 @@ void __init change_floppy(char *fmt, ...
+ va_start(args, fmt);
+ vsprintf(buf, fmt, args);
+ va_end(args);
+- fd = ksys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++ if (saved_root_name[0])
++ fd = sys_open(saved_root_name, O_RDWR | O_NDELAY, 0);
++ else
++ fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++
+ if (fd >= 0) {
+ ksys_ioctl(fd, FDEJECT, 0);
+ ksys_close(fd);
+@@ -533,11 +537,17 @@ void __init mount_root(void)
+ #endif
+ #ifdef CONFIG_BLOCK
+ {
+- int err = create_dev("/dev/root", ROOT_DEV);
+-
+- if (err < 0)
+- pr_emerg("Failed to create /dev/root: %d\n", err);
+- mount_block_root("/dev/root", root_mountflags);
++ if (saved_root_name[0] == '/') {
++ int err = create_dev(saved_root_name, ROOT_DEV);
++ if (err < 0)
++ pr_emerg("Failed to create %s: %d\n", saved_root_name, err);
++ mount_block_root(saved_root_name, root_mountflags);
++ } else {
++ int err = create_dev("/dev/root", ROOT_DEV);
++ if (err < 0)
++ pr_emerg("Failed to create /dev/root: %d\n", err);
++ mount_block_root("/dev/root", root_mountflags);
++ }
+ }
+ #endif
+ }
diff --git a/4200_fbcondecor.patch b/4200_fbcondecor.patch
new file mode 100644
index 0000000..7151d0f
--- /dev/null
+++ b/4200_fbcondecor.patch
@@ -0,0 +1,2095 @@
+diff --git a/Documentation/fb/00-INDEX b/Documentation/fb/00-INDEX
+index fe85e7c5907a..22309308ba56 100644
+--- a/Documentation/fb/00-INDEX
++++ b/Documentation/fb/00-INDEX
+@@ -23,6 +23,8 @@ ep93xx-fb.txt
+ - info on the driver for EP93xx LCD controller.
+ fbcon.txt
+ - intro to and usage guide for the framebuffer console (fbcon).
++fbcondecor.txt
++ - info on the Framebuffer Console Decoration
+ framebuffer.txt
+ - introduction to frame buffer devices.
+ gxfb.txt
+diff --git a/Documentation/fb/fbcondecor.txt b/Documentation/fb/fbcondecor.txt
+new file mode 100644
+index 000000000000..637209e11ccd
+--- /dev/null
++++ b/Documentation/fb/fbcondecor.txt
+@@ -0,0 +1,207 @@
++What is it?
++-----------
++
++The framebuffer decorations are a kernel feature which allows displaying a
++background picture on selected consoles.
++
++What do I need to get it to work?
++---------------------------------
++
++To get fbcondecor up-and-running you will have to:
++ 1) get a copy of splashutils [1] or a similar program
++ 2) get some fbcondecor themes
++ 3) build the kernel helper program
++ 4) build your kernel with the FB_CON_DECOR option enabled.
++
++To get fbcondecor operational right after fbcon initialization is finished, you
++will have to include a theme and the kernel helper into your initramfs image.
++Please refer to splashutils documentation for instructions on how to do that.
++
++[1] The splashutils package can be downloaded from:
++ http://github.com/alanhaggai/fbsplash
++
++The userspace helper
++--------------------
++
++The userspace fbcondecor helper (by default: /sbin/fbcondecor_helper) is called by the
++kernel whenever an important event occurs and the kernel needs some kind of
++job to be carried out. Important events include console switches and video
++mode switches (the kernel requests background images and configuration
++parameters for the current console). The fbcondecor helper must be accessible at
++all times. If it's not, fbcondecor will be switched off automatically.
++
++It's possible to set the path to the fbcondecor helper by writing it to
++/proc/sys/kernel/fbcondecor.
++
++*****************************************************************************
++
++The information below is mostly technical stuff. There's probably no need to
++read it unless you plan to develop a userspace helper.
++
++The fbcondecor protocol
++-----------------------
++
++The fbcondecor protocol defines a communication interface between the kernel and
++the userspace fbcondecor helper.
++
++The kernel side is responsible for:
++
++ * rendering console text, using an image as a background (instead of a
++ standard solid color fbcon uses),
++ * accepting commands from the user via ioctls on the fbcondecor device,
++ * calling the userspace helper to set things up as soon as the fb subsystem
++ is initialized.
++
++The userspace helper is responsible for everything else, including parsing
++configuration files, decompressing the image files whenever the kernel needs
++it, and communicating with the kernel if necessary.
++
++The fbcondecor protocol specifies how communication is done in both ways:
++kernel->userspace and userspace->kernel.
++
++Kernel -> Userspace
++-------------------
++
++The kernel communicates with the userspace helper by calling it and specifying
++the task to be done in a series of arguments.
++
++The arguments follow the pattern:
++<fbcondecor protocol version> <command> <parameters>
++
++All commands defined in fbcondecor protocol v2 have the following parameters:
++ virtual console
++ framebuffer number
++ theme
++
++Fbcondecor protocol v1 specified an additional 'fbcondecor mode' after the
++framebuffer number. Fbcondecor protocol v1 is deprecated and should not be used.
++
++Fbcondecor protocol v2 specifies the following commands:
++
++getpic
++------
++ The kernel issues this command to request image data. It's up to the
++ userspace helper to find a background image appropriate for the specified
++ theme and the current resolution. The userspace helper should respond by
++ issuing the FBIOCONDECOR_SETPIC ioctl.
++
++init
++----
++ The kernel issues this command after the fbcondecor device is created and
++ the fbcondecor interface is initialized. Upon receiving 'init', the userspace
++ helper should parse the kernel command line (/proc/cmdline) or otherwise
++ decide whether fbcondecor is to be activated.
++
++ To activate fbcondecor on the first console the helper should issue the
++ FBIOCONDECOR_SETCFG, FBIOCONDECOR_SETPIC and FBIOCONDECOR_SETSTATE commands,
++ in the above-mentioned order.
++
++ When the userspace helper is called in an early phase of the boot process
++ (right after the initialization of fbcon), no filesystems will be mounted.
++ The helper program should mount sysfs and then create the appropriate
++ framebuffer, fbcondecor and tty0 devices (if they don't already exist) to get
++ current display settings and to be able to communicate with the kernel side.
++ It should probably also mount the procfs to be able to parse the kernel
++ command line parameters.
++
++ Note that the console sem is not held when the kernel calls fbcondecor_helper
++ with the 'init' command. The fbcondecor helper should perform all ioctls with
++ origin set to FBCON_DECOR_IO_ORIG_USER.
++
++modechange
++----------
++ The kernel issues this command on a mode change. The helper's response should
++ be similar to the response to the 'init' command. Note that this time the
++ console sem is held and all ioctls must be performed with origin set to
++ FBCON_DECOR_IO_ORIG_KERNEL.
++
++
++Userspace -> Kernel
++-------------------
++
++Userspace programs can communicate with fbcondecor via ioctls on the
++fbcondecor device. These ioctls are to be used by both the userspace helper
++(called only by the kernel) and userspace configuration tools (run by the users).
++
++The fbcondecor helper should set the origin field to FBCON_DECOR_IO_ORIG_KERNEL
++when doing the appropriate ioctls. All userspace configuration tools should
++use FBCON_DECOR_IO_ORIG_USER. Failure to set the appropriate value in the origin
++field when performing ioctls from the kernel helper will most likely result
++in a console deadlock.
++
++FBCON_DECOR_IO_ORIG_KERNEL instructs fbcondecor not to try to acquire the console
++semaphore. Not surprisingly, FBCON_DECOR_IO_ORIG_USER instructs it to acquire
++the console sem.
++
++The framebuffer console decoration provides the following ioctls (all defined in
++linux/fb.h):
++
++FBIOCONDECOR_SETPIC
++description: loads a background picture for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct fb_image*
++notes:
++If called for consoles other than the current foreground one, the picture data
++will be ignored.
++
++If the current virtual console is running in a 8-bpp mode, the cmap substruct
++of fb_image has to be filled appropriately: start should be set to 16 (first
++16 colors are reserved for fbcon), len to a value <= 240 and red, green and
++blue should point to valid cmap data. The transp field is ignored. The fields
++dx, dy, bg_color, fg_color in fb_image are ignored as well.
++
++FBIOCONDECOR_SETCFG
++description: sets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++notes: The structure has to be filled with valid data.
++
++FBIOCONDECOR_GETCFG
++description: gets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++
++FBIOCONDECOR_SETSTATE
++description: sets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++ values: 0 = disabled, 1 = enabled.
++
++FBIOCONDECOR_GETSTATE
++description: gets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++ values: as in FBIOCONDECOR_SETSTATE
++
++Info on used structures:
++
++Definition of struct vc_decor can be found in linux/console_decor.h. It's
++heavily commented. Note that the 'theme' field should point to a string
++no longer than FBCON_DECOR_THEME_LEN. When FBIOCONDECOR_GETCFG call is
++performed, the theme field should point to a char buffer of length
++FBCON_DECOR_THEME_LEN.
++
++Definition of struct fbcon_decor_iowrapper can be found in linux/fb.h.
++The fields in this struct have the following meaning:
++
++vc:
++Virtual console number.
++
++origin:
++Specifies if the ioctl is performed as a response to a kernel request. The
++fbcondecor helper should set this field to FBCON_DECOR_IO_ORIG_KERNEL, userspace
++programs should set it to FBCON_DECOR_IO_ORIG_USER. This field is necessary to
++avoid console semaphore deadlocks.
++
++data:
++Pointer to a data structure appropriate for the performed ioctl. Type of
++the data struct is specified in the ioctls description.
++
++*****************************************************************************
++
++Credit
++------
++
++Original 'bootsplash' project & implementation by:
++ Volker Poplawski <volker@poplawski.de>, Stefan Reinauer <stepan@suse.de>,
++ Steffen Winterfeldt <snwint@suse.de>, Michael Schroeder <mls@suse.de>,
++ Ken Wimer <wimer@suse.de>.
++
++Fbcondecor, fbcondecor protocol design, current implementation & docs by:
++ Michal Januszewski <michalj+fbcondecor@gmail.com>
++
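To make the ioctl side documented above concrete, here is a minimal userspace sketch that enables decorations on the first console via FBIOCONDECOR_SETSTATE. The /dev/fbcondecor node path and the exact field types of struct fbcon_decor_iowrapper are assumptions here; the real definitions come from the patched linux/fb.h:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/fb.h>    /* FBIOCONDECOR_*, struct fbcon_decor_iowrapper */

    int main(void)
    {
            unsigned int state = 1;    /* 1 = enabled, 0 = disabled */
            struct fbcon_decor_iowrapper wrap = {
                    .vc     = 0,       /* first virtual console */
                    .origin = FBCON_DECOR_IO_ORIG_USER,    /* userspace tool */
                    .data   = &state,
            };
            int fd = open("/dev/fbcondecor", O_RDWR);    /* assumed node path */

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (ioctl(fd, FBIOCONDECOR_SETSTATE, &wrap) < 0)
                    perror("FBIOCONDECOR_SETSTATE");
            close(fd);
            return 0;
    }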
+diff --git a/drivers/Makefile b/drivers/Makefile
+index 1d034b680431..9f41f2ea0c8b 100644
+--- a/drivers/Makefile
++++ b/drivers/Makefile
+@@ -23,6 +23,10 @@ obj-y += pci/dwc/
+
+ obj-$(CONFIG_PARISC) += parisc/
+ obj-$(CONFIG_RAPIDIO) += rapidio/
++# tty/ comes before char/ so that the VT console is the boot-time
++# default.
++obj-y += tty/
++obj-y += char/
+ obj-y += video/
+ obj-y += idle/
+
+@@ -53,11 +57,6 @@ obj-$(CONFIG_REGULATOR) += regulator/
+ # reset controllers early, since gpu drivers might rely on them to initialize
+ obj-$(CONFIG_RESET_CONTROLLER) += reset/
+
+-# tty/ comes before char/ so that the VT console is the boot-time
+-# default.
+-obj-y += tty/
+-obj-y += char/
+-
+ # iommu/ comes before gpu as gpu are using iommu controllers
+ obj-$(CONFIG_IOMMU_SUPPORT) += iommu/
+
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index 7f1f1fbcef9e..8439b618dfc0 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -151,6 +151,19 @@ config FRAMEBUFFER_CONSOLE_ROTATION
+ such that other users of the framebuffer will remain normally
+ oriented.
+
++config FB_CON_DECOR
++ bool "Support for the Framebuffer Console Decorations"
++ depends on FRAMEBUFFER_CONSOLE=y && !FB_TILEBLITTING
++ default n
++ ---help---
++ This option enables support for framebuffer console decorations which
++ makes it possible to display images in the background of the system
++ consoles. Note that userspace utilities are necessary in order to take
++ advantage of these features. Refer to Documentation/fb/fbcondecor.txt
++ for more information.
++
++ If unsure, say N.
++
+ config STI_CONSOLE
+ bool "STI text console"
+ depends on PARISC
+diff --git a/drivers/video/console/Makefile b/drivers/video/console/Makefile
+index db07b784bd2c..3e369bd120b8 100644
+--- a/drivers/video/console/Makefile
++++ b/drivers/video/console/Makefile
+@@ -9,4 +9,5 @@ obj-$(CONFIG_STI_CONSOLE) += sticon.o sticore.o
+ obj-$(CONFIG_VGA_CONSOLE) += vgacon.o
+ obj-$(CONFIG_MDA_CONSOLE) += mdacon.o
+
++obj-$(CONFIG_FB_CON_DECOR) += fbcondecor.o cfbcondecor.o
+ obj-$(CONFIG_FB_STI) += sticore.o
+diff --git a/drivers/video/console/cfbcondecor.c b/drivers/video/console/cfbcondecor.c
+new file mode 100644
+index 000000000000..b00960803edc
+--- /dev/null
++++ b/drivers/video/console/cfbcondecor.c
+@@ -0,0 +1,473 @@
++/*
++ * linux/drivers/video/cfbcon_decor.c -- Framebuffer decor render functions
++ *
++ * Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ * Code based upon "Bootdecor" (C) 2001-2003
++ * Volker Poplawski <volker@poplawski.de>,
++ * Stefan Reinauer <stepan@suse.de>,
++ * Steffen Winterfeldt <snwint@suse.de>,
++ * Michael Schroeder <mls@suse.de>,
++ * Ken Wimer <wimer@suse.de>.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file COPYING in the main directory of this archive for
++ * more details.
++ */
++#include <linux/module.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/selection.h>
++#include <linux/slab.h>
++#include <linux/vt_kern.h>
++#include <asm/irq.h>
++
++#include "../fbdev/core/fbcon.h"
++#include "fbcondecor.h"
++
++#define parse_pixel(shift, bpp, type) \
++ do { \
++ if (d & (0x80 >> (shift))) \
++ dd2[(shift)] = fgx; \
++ else \
++ dd2[(shift)] = transparent ? *(type *)decor_src : bgx; \
++ decor_src += (bpp); \
++ } while (0) \
++
++extern int get_color(struct vc_data *vc, struct fb_info *info,
++ u16 c, int is_fg);
++
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc)
++{
++ int i, j, k;
++ int minlen = min(min(info->var.red.length, info->var.green.length),
++ info->var.blue.length);
++ u32 col;
++
++ for (j = i = 0; i < 16; i++) {
++ k = color_table[i];
++
++ col = ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.red.offset);
++ col |= ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.green.offset);
++ col |= ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.blue.offset);
++ ((u32 *)info->pseudo_palette)[k] = col;
++ }
++}
++
++void fbcon_decor_renderc(struct fb_info *info, int ypos, int xpos, int height,
++ int width, u8 *src, u32 fgx, u32 bgx, u8 transparent)
++{
++ unsigned int x, y;
++ u32 dd;
++ int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++ unsigned int d = ypos * info->fix.line_length + xpos * bytespp;
++ unsigned int ds = (ypos * info->var.xres + xpos) * bytespp;
++ u16 dd2[4];
++
++ u8 *decor_src = (u8 *)(info->bgdecor.data + ds);
++ u8 *dst = (u8 *)(info->screen_base + d);
++
++ if ((ypos + height) > info->var.yres || (xpos + width) > info->var.xres)
++ return;
++
++ for (y = 0; y < height; y++) {
++ switch (info->var.bits_per_pixel) {
++
++ case 32:
++ for (x = 0; x < width; x++) {
++
++ if ((x & 7) == 0)
++ d = *src++;
++ if (d & 0x80)
++ dd = fgx;
++ else
++ dd = transparent ?
++ *(u32 *)decor_src : bgx;
++
++ d <<= 1;
++ decor_src += 4;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ break;
++ case 24:
++ for (x = 0; x < width; x++) {
++
++ if ((x & 7) == 0)
++ d = *src++;
++ if (d & 0x80)
++ dd = fgx;
++ else
++ dd = transparent ?
++ (*(u32 *)decor_src & 0xffffff) : bgx;
++
++ d <<= 1;
++ decor_src += 3;
++#ifdef __LITTLE_ENDIAN
++ fb_writew(dd & 0xffff, dst);
++ dst += 2;
++ fb_writeb((dd >> 16), dst);
++#else
++ fb_writew(dd >> 8, dst);
++ dst += 2;
++ fb_writeb(dd & 0xff, dst);
++#endif
++ dst++;
++ }
++ break;
++ case 16:
++ for (x = 0; x < width; x += 2) {
++ if ((x & 7) == 0)
++ d = *src++;
++
++ parse_pixel(0, 2, u16);
++ parse_pixel(1, 2, u16);
++#ifdef __LITTLE_ENDIAN
++ dd = dd2[0] | (dd2[1] << 16);
++#else
++ dd = dd2[1] | (dd2[0] << 16);
++#endif
++ d <<= 2;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ break;
++
++ case 8:
++ for (x = 0; x < width; x += 4) {
++ if ((x & 7) == 0)
++ d = *src++;
++
++ parse_pixel(0, 1, u8);
++ parse_pixel(1, 1, u8);
++ parse_pixel(2, 1, u8);
++ parse_pixel(3, 1, u8);
++
++#ifdef __LITTLE_ENDIAN
++ dd = dd2[0] | (dd2[1] << 8) | (dd2[2] << 16) | (dd2[3] << 24);
++#else
++ dd = dd2[3] | (dd2[2] << 8) | (dd2[1] << 16) | (dd2[0] << 24);
++#endif
++ d <<= 4;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ }
++
++ dst += info->fix.line_length - width * bytespp;
++ decor_src += (info->var.xres - width) * bytespp;
++ }
++}
++
++#define cc2cx(a) \
++ ((info->fix.visual == FB_VISUAL_TRUECOLOR || \
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR) ? \
++ ((u32 *)info->pseudo_palette)[a] : a)
++
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info,
++ const unsigned short *s, int count, int yy, int xx)
++{
++ unsigned short charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff;
++ struct fbcon_ops *ops = info->fbcon_par;
++ int fg_color, bg_color, transparent;
++ u8 *src;
++ u32 bgx, fgx;
++ u16 c = scr_readw(s);
++
++ fg_color = get_color(vc, info, c, 1);
++ bg_color = get_color(vc, info, c, 0);
++
++ /* Don't paint the background image if console is blanked */
++ transparent = ops->blank_state ? 0 :
++ (vc->vc_decor.bg_color == bg_color);
++
++ xx = xx * vc->vc_font.width + vc->vc_decor.tx;
++ yy = yy * vc->vc_font.height + vc->vc_decor.ty;
++
++ fgx = cc2cx(fg_color);
++ bgx = cc2cx(bg_color);
++
++ while (count--) {
++ c = scr_readw(s++);
++ src = vc->vc_font.data + (c & charmask) * vc->vc_font.height *
++ ((vc->vc_font.width + 7) >> 3);
++
++ fbcon_decor_renderc(info, yy, xx, vc->vc_font.height,
++ vc->vc_font.width, src, fgx, bgx, transparent);
++ xx += vc->vc_font.width;
++ }
++}
++
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor)
++{
++ int i;
++ unsigned int dsize, s_pitch;
++ struct fbcon_ops *ops = info->fbcon_par;
++ struct vc_data *vc;
++ u8 *src;
++
++ /* we really don't need any cursors while the console is blanked */
++ if (info->state != FBINFO_STATE_RUNNING || ops->blank_state)
++ return;
++
++ vc = vc_cons[ops->currcon].d;
++
++ src = kmalloc(64 + sizeof(struct fb_image), GFP_ATOMIC);
++ if (!src)
++ return;
++
++ s_pitch = (cursor->image.width + 7) >> 3;
++ dsize = s_pitch * cursor->image.height;
++ if (cursor->enable) {
++ switch (cursor->rop) {
++ case ROP_XOR:
++ for (i = 0; i < dsize; i++)
++ src[i] = cursor->image.data[i] ^ cursor->mask[i];
++ break;
++ case ROP_COPY:
++ default:
++ for (i = 0; i < dsize; i++)
++ src[i] = cursor->image.data[i] & cursor->mask[i];
++ break;
++ }
++ } else
++ memcpy(src, cursor->image.data, dsize);
++
++ fbcon_decor_renderc(info,
++ cursor->image.dy + vc->vc_decor.ty,
++ cursor->image.dx + vc->vc_decor.tx,
++ cursor->image.height,
++ cursor->image.width,
++ (u8 *)src,
++ cc2cx(cursor->image.fg_color),
++ cc2cx(cursor->image.bg_color),
++ cursor->image.bg_color == vc->vc_decor.bg_color);
++
++ kfree(src);
++}
++
++static void decorset(u8 *dst, int height, int width, int dstbytes,
++ u32 bgx, int bpp)
++{
++ int i;
++
++ if (bpp == 8)
++ bgx |= bgx << 8;
++ if (bpp == 16 || bpp == 8)
++ bgx |= bgx << 16;
++
++ while (height-- > 0) {
++ u8 *p = dst;
++
++ switch (bpp) {
++
++ case 32:
++ for (i = 0; i < width; i++) {
++ fb_writel(bgx, p); p += 4;
++ }
++ break;
++ case 24:
++ for (i = 0; i < width; i++) {
++#ifdef __LITTLE_ENDIAN
++ fb_writew((bgx & 0xffff), (u16 *)p); p += 2;
++ fb_writeb((bgx >> 16), p++);
++#else
++ fb_writew((bgx >> 8), (u16 *)p); p += 2;
++ fb_writeb((bgx & 0xff), p++);
++#endif
++ }
++ break;
++ case 16:
++ for (i = 0; i < width/4; i++) {
++ fb_writel(bgx, p); p += 4;
++ fb_writel(bgx, p); p += 4;
++ }
++ if (width & 2) {
++ fb_writel(bgx, p); p += 4;
++ }
++ if (width & 1)
++ fb_writew(bgx, (u16 *)p);
++ break;
++ case 8:
++ for (i = 0; i < width/4; i++) {
++ fb_writel(bgx, p); p += 4;
++ }
++
++ if (width & 2) {
++ fb_writew(bgx, p); p += 2;
++ }
++ if (width & 1)
++ fb_writeb(bgx, (u8 *)p);
++ break;
++
++ }
++ dst += dstbytes;
++ }
++}
++
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes,
++ int srclinebytes, int bpp)
++{
++ int i;
++
++ while (height-- > 0) {
++ u32 *p = (u32 *)dst;
++ u32 *q = (u32 *)src;
++
++ switch (bpp) {
++
++ case 32:
++ for (i = 0; i < width; i++)
++ fb_writel(*q++, p++);
++ break;
++ case 24:
++ for (i = 0; i < (width * 3 / 4); i++)
++ fb_writel(*q++, p++);
++ if ((width * 3) % 4) {
++ if (width & 2) {
++ fb_writeb(*(u8 *)q, (u8 *)p);
++ } else if (width & 1) {
++ fb_writew(*(u16 *)q, (u16 *)p);
++ fb_writeb(*(u8 *)((u16 *)q + 1),
++ (u8 *)((u16 *)p + 2));
++ }
++ }
++ break;
++ case 16:
++ for (i = 0; i < width/4; i++) {
++ fb_writel(*q++, p++);
++ fb_writel(*q++, p++);
++ }
++ if (width & 2)
++ fb_writel(*q++, p++);
++ if (width & 1)
++ fb_writew(*(u16 *)q, (u16 *)p);
++ break;
++ case 8:
++ for (i = 0; i < width/4; i++)
++ fb_writel(*q++, p++);
++
++ if (width & 2) {
++ fb_writew(*(u16 *)q, (u16 *)p);
++ q = (u32 *) ((u16 *)q + 1);
++ p = (u32 *) ((u16 *)p + 1);
++ }
++ if (width & 1)
++ fb_writeb(*(u8 *)q, (u8 *)p);
++ break;
++ }
++
++ dst += linebytes;
++ src += srclinebytes;
++ }
++}
++
++static void decorfill(struct fb_info *info, int sy, int sx, int height,
++ int width)
++{
++ int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++ int d = sy * info->fix.line_length + sx * bytespp;
++ int ds = (sy * info->var.xres + sx) * bytespp;
++
++ fbcon_decor_copy((u8 *)(info->screen_base + d), (u8 *)(info->bgdecor.data + ds),
++ height, width, info->fix.line_length, info->var.xres * bytespp,
++ info->var.bits_per_pixel);
++}
++
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx,
++ int height, int width)
++{
++ int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
++ struct fbcon_ops *ops = info->fbcon_par;
++ u8 *dst;
++ int transparent, bg_color = attr_bgcol_ec(bgshift, vc, info);
++
++ transparent = (vc->vc_decor.bg_color == bg_color);
++ sy = sy * vc->vc_font.height + vc->vc_decor.ty;
++ sx = sx * vc->vc_font.width + vc->vc_decor.tx;
++ height *= vc->vc_font.height;
++ width *= vc->vc_font.width;
++
++ /* Don't paint the background image if console is blanked */
++ if (transparent && !ops->blank_state) {
++ decorfill(info, sy, sx, height, width);
++ } else {
++ dst = (u8 *)(info->screen_base + sy * info->fix.line_length +
++ sx * ((info->var.bits_per_pixel + 7) >> 3));
++ decorset(dst, height, width, info->fix.line_length, cc2cx(bg_color),
++ info->var.bits_per_pixel);
++ }
++}
++
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info,
++ int bottom_only)
++{
++ unsigned int tw = vc->vc_cols*vc->vc_font.width;
++ unsigned int th = vc->vc_rows*vc->vc_font.height;
++
++ if (!bottom_only) {
++ /* top margin */
++ decorfill(info, 0, 0, vc->vc_decor.ty, info->var.xres);
++ /* left margin */
++ decorfill(info, vc->vc_decor.ty, 0, th, vc->vc_decor.tx);
++ /* right margin */
++ decorfill(info, vc->vc_decor.ty, vc->vc_decor.tx + tw, th,
++ info->var.xres - vc->vc_decor.tx - tw);
++ }
++ decorfill(info, vc->vc_decor.ty + th, 0,
++ info->var.yres - vc->vc_decor.ty - th, info->var.xres);
++}
++
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y,
++ int sx, int dx, int width)
++{
++ u16 *d = (u16 *) (vc->vc_origin + vc->vc_size_row * y + dx * 2);
++ u16 *s = d + (dx - sx);
++ u16 *start = d;
++ u16 *ls = d;
++ u16 *le = d + width;
++ u16 c;
++ int x = dx;
++ u16 attr = 1;
++
++ do {
++ c = scr_readw(d);
++ if (attr != (c & 0xff00)) {
++ attr = c & 0xff00;
++ if (d > start) {
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++ x += d - start;
++ start = d;
++ }
++ }
++ if (s >= ls && s < le && c == scr_readw(s)) {
++ if (d > start) {
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++ x += d - start + 1;
++ start = d + 1;
++ } else {
++ x++;
++ start++;
++ }
++ }
++ s++;
++ d++;
++ } while (d < le);
++ if (d > start)
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++}
++
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank)
++{
++ if (blank) {
++ decorset((u8 *)info->screen_base, info->var.yres, info->var.xres,
++ info->fix.line_length, 0, info->var.bits_per_pixel);
++ } else {
++ update_screen(vc);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++}
++
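
The function above is essentially a 1bpp-to-framebuffer expander: each bit of
a glyph row selects either the foreground color or, when the character cell
is "transparent", the corresponding pixel of the background picture. As a
minimal userspace sketch of the 32bpp inner loop (not part of the patch;
names and types are ours):

    #include <stdint.h>

    /*
     * Sketch of the 32 bpp case in fbcon_decor_renderc(): fetch 8 glyph
     * bits at a time, emit fgx for set bits, and for clear bits emit
     * either the decoration pixel (transparent cell) or bgx.
     */
    static void render_row32(uint32_t *dst, const uint32_t *decor,
                             const uint8_t *glyph, unsigned int width,
                             uint32_t fgx, uint32_t bgx, int transparent)
    {
        unsigned int x;
        uint8_t d = 0;

        for (x = 0; x < width; x++) {
            if ((x & 7) == 0)   /* next 8 pixels of the glyph row */
                d = *glyph++;
            dst[x] = (d & 0x80) ? fgx
                                : (transparent ? decor[x] : bgx);
            d <<= 1;
        }
    }
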
+diff --git a/drivers/video/console/fbcondecor.c b/drivers/video/console/fbcondecor.c
+new file mode 100644
+index 000000000000..78288a497a60
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.c
+@@ -0,0 +1,549 @@
++/*
++ * linux/drivers/video/console/fbcondecor.c -- Framebuffer console decorations
++ *
++ * Copyright (C) 2004-2009 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ * Code based upon "Bootsplash" (C) 2001-2003
++ * Volker Poplawski <volker@poplawski.de>,
++ * Stefan Reinauer <stepan@suse.de>,
++ * Steffen Winterfeldt <snwint@suse.de>,
++ * Michael Schroeder <mls@suse.de>,
++ * Ken Wimer <wimer@suse.de>.
++ *
++ * Compat ioctl support by Thorsten Klein <TK@Thorsten-Klein.de>.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file COPYING in the main directory of this archive for
++ * more details.
++ *
++ */
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/string.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/vt_kern.h>
++#include <linux/vmalloc.h>
++#include <linux/unistd.h>
++#include <linux/syscalls.h>
++#include <linux/init.h>
++#include <linux/proc_fs.h>
++#include <linux/workqueue.h>
++#include <linux/kmod.h>
++#include <linux/miscdevice.h>
++#include <linux/device.h>
++#include <linux/fs.h>
++#include <linux/compat.h>
++#include <linux/console.h>
++#include <linux/binfmts.h>
++#include <linux/uaccess.h>
++#include <asm/irq.h>
++
++#include "../fbdev/core/fbcon.h"
++#include "fbcondecor.h"
++
++extern signed char con2fb_map[];
++static int fbcon_decor_enable(struct vc_data *vc);
++
++static int initialized;
++
++char fbcon_decor_path[KMOD_PATH_LEN] = "/sbin/fbcondecor_helper";
++EXPORT_SYMBOL(fbcon_decor_path);
++
++int fbcon_decor_call_helper(char *cmd, unsigned short vc)
++{
++ char *envp[] = {
++ "HOME=/",
++ "PATH=/sbin:/bin",
++ NULL
++ };
++
++ char tfb[5];
++ char tcons[5];
++ unsigned char fb = (int) con2fb_map[vc];
++
++ char *argv[] = {
++ fbcon_decor_path,
++ "2",
++ cmd,
++ tcons,
++ tfb,
++ vc_cons[vc].d->vc_decor.theme,
++ NULL
++ };
++
++ snprintf(tfb, 5, "%d", fb);
++ snprintf(tcons, 5, "%d", vc);
++
++ return call_usermodehelper(fbcon_decor_path, argv, envp, UMH_WAIT_EXEC);
++}
++
++/* Disables fbcondecor on a virtual console; called with console sem held. */
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw)
++{
++ struct fb_info *info;
++
++ if (!vc->vc_decor.state)
++ return -EINVAL;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL)
++ return -EINVAL;
++
++ vc->vc_decor.state = 0;
++ vc_resize(vc, info->var.xres / vc->vc_font.width,
++ info->var.yres / vc->vc_font.height);
++
++ if (fg_console == vc->vc_num && redraw) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ }
++
++ printk(KERN_INFO "fbcondecor: switched decor state to 'off' on console %d\n",
++ vc->vc_num);
++
++ return 0;
++}
++
++/* Enables fbcondecor on a virtual console; called with console sem held. */
++static int fbcon_decor_enable(struct vc_data *vc)
++{
++ struct fb_info *info;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (vc->vc_decor.twidth == 0 || vc->vc_decor.theight == 0 ||
++ info == NULL || vc->vc_decor.state || (!info->bgdecor.data &&
++ vc->vc_num == fg_console))
++ return -EINVAL;
++
++ vc->vc_decor.state = 1;
++ vc_resize(vc, vc->vc_decor.twidth / vc->vc_font.width,
++ vc->vc_decor.theight / vc->vc_font.height);
++
++ if (fg_console == vc->vc_num) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++
++ printk(KERN_INFO "fbcondecor: switched decor state to 'on' on console %d\n",
++ vc->vc_num);
++
++ return 0;
++}
++
++static inline int fbcon_decor_ioctl_dosetstate(struct vc_data *vc, unsigned int state, unsigned char origin)
++{
++ int ret;
++
++ console_lock();
++ if (!state)
++ ret = fbcon_decor_disable(vc, 1);
++ else
++ ret = fbcon_decor_enable(vc);
++ console_unlock();
++
++ return ret;
++}
++
++static inline void fbcon_decor_ioctl_dogetstate(struct vc_data *vc, unsigned int *state)
++{
++ *state = vc->vc_decor.state;
++}
++
++static int fbcon_decor_ioctl_dosetcfg(struct vc_data *vc, struct vc_decor *cfg, unsigned char origin)
++{
++ struct fb_info *info;
++ int len;
++ char *tmp;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL || !cfg->twidth || !cfg->theight ||
++ cfg->tx + cfg->twidth > info->var.xres ||
++ cfg->ty + cfg->theight > info->var.yres)
++ return -EINVAL;
++
++ len = strnlen_user(cfg->theme, MAX_ARG_STRLEN);
++ if (!len || len > FBCON_DECOR_THEME_LEN)
++ return -EINVAL;
++ tmp = kmalloc(len, GFP_KERNEL);
++ if (!tmp)
++ return -ENOMEM;
++ if (copy_from_user(tmp, (void __user *)cfg->theme, len))
++ return -EFAULT;
++ cfg->theme = tmp;
++ cfg->state = 0;
++
++ console_lock();
++ if (vc->vc_decor.state)
++ fbcon_decor_disable(vc, 1);
++ kfree(vc->vc_decor.theme);
++ vc->vc_decor = *cfg;
++ console_unlock();
++
++ printk(KERN_INFO "fbcondecor: console %d using theme '%s'\n",
++ vc->vc_num, vc->vc_decor.theme);
++ return 0;
++}
++
++static int fbcon_decor_ioctl_dogetcfg(struct vc_data *vc,
++ struct vc_decor *decor)
++{
++ char __user *tmp;
++
++ tmp = decor->theme;
++ *decor = vc->vc_decor;
++ decor->theme = tmp;
++
++ if (vc->vc_decor.theme) {
++ if (copy_to_user(tmp, vc->vc_decor.theme,
++ strlen(vc->vc_decor.theme) + 1))
++ return -EFAULT;
++ } else
++ if (put_user(0, tmp))
++ return -EFAULT;
++
++ return 0;
++}
++
++static int fbcon_decor_ioctl_dosetpic(struct vc_data *vc, struct fb_image *img,
++ unsigned char origin)
++{
++ struct fb_info *info;
++ int len;
++ u8 *tmp;
++
++ if (vc->vc_num != fg_console)
++ return -EINVAL;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL)
++ return -EINVAL;
++
++ if (img->width != info->var.xres || img->height != info->var.yres) {
++ printk(KERN_ERR "fbcondecor: picture dimensions mismatch\n");
++ printk(KERN_ERR "%dx%d vs %dx%d\n", img->width, img->height,
++ info->var.xres, info->var.yres);
++ return -EINVAL;
++ }
++
++ if (img->depth != info->var.bits_per_pixel) {
++ printk(KERN_ERR "fbcondecor: picture depth mismatch\n");
++ return -EINVAL;
++ }
++
++ if (img->depth == 8) {
++ if (!img->cmap.len || !img->cmap.red || !img->cmap.green ||
++ !img->cmap.blue)
++ return -EINVAL;
++
++ tmp = vmalloc(img->cmap.len * 3 * 2);
++ if (!tmp)
++ return -ENOMEM;
++
++ if (copy_from_user(tmp,
++ (void __user *)img->cmap.red,
++ (img->cmap.len << 1)) ||
++ copy_from_user(tmp + (img->cmap.len << 1),
++ (void __user *)img->cmap.green,
++ (img->cmap.len << 1)) ||
++ copy_from_user(tmp + (img->cmap.len << 2),
++ (void __user *)img->cmap.blue,
++ (img->cmap.len << 1))) {
++ vfree(tmp);
++ return -EFAULT;
++ }
++
++ img->cmap.transp = NULL;
++ img->cmap.red = (u16 *)tmp;
++ img->cmap.green = img->cmap.red + img->cmap.len;
++ img->cmap.blue = img->cmap.green + img->cmap.len;
++ } else {
++ img->cmap.red = NULL;
++ }
++
++ len = ((img->depth + 7) >> 3) * img->width * img->height;
++
++ /*
++ * Allocate an additional byte so that we never go outside of the
++ * buffer boundaries in the rendering functions in a 24 bpp mode.
++ */
++ tmp = vmalloc(len + 1);
++
++ if (!tmp)
++ goto out;
++
++ if (copy_from_user(tmp, (void __user *)img->data, len))
++ goto out;
++
++ img->data = tmp;
++
++ console_lock();
++
++ if (info->bgdecor.data)
++ vfree((u8 *)info->bgdecor.data);
++ if (info->bgdecor.cmap.red)
++ vfree(info->bgdecor.cmap.red);
++
++ info->bgdecor = *img;
++
++ if (fbcon_decor_active_vc(vc) && fg_console == vc->vc_num) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++
++ console_unlock();
++
++ return 0;
++
++out:
++ if (img->cmap.red)
++ vfree(img->cmap.red);
++
++ if (tmp)
++ vfree(tmp);
++ return -ENOMEM;
++}
++
++static long fbcon_decor_ioctl(struct file *filp, u_int cmd, u_long arg)
++{
++ struct fbcon_decor_iowrapper __user *wrapper = (void __user *) arg;
++ struct vc_data *vc = NULL;
++ unsigned short vc_num = 0;
++ unsigned char origin = 0;
++ void __user *data = NULL;
++
++ if (!access_ok(VERIFY_READ, wrapper,
++ sizeof(struct fbcon_decor_iowrapper)))
++ return -EFAULT;
++
++ __get_user(vc_num, &wrapper->vc);
++ __get_user(origin, &wrapper->origin);
++ __get_user(data, &wrapper->data);
++
++ if (!vc_cons_allocated(vc_num))
++ return -EINVAL;
++
++ vc = vc_cons[vc_num].d;
++
++ switch (cmd) {
++ case FBIOCONDECOR_SETPIC:
++ {
++ struct fb_image img;
++
++ if (copy_from_user(&img, (struct fb_image __user *)data, sizeof(struct fb_image)))
++ return -EFAULT;
++
++ return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++ }
++ case FBIOCONDECOR_SETCFG:
++ {
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++ return -EFAULT;
++
++ return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++ }
++ case FBIOCONDECOR_GETCFG:
++ {
++ int rval;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++ return -EFAULT;
++
++ rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++ if (copy_to_user(data, &cfg, sizeof(struct vc_decor)))
++ return -EFAULT;
++ return rval;
++ }
++ case FBIOCONDECOR_SETSTATE:
++ {
++ unsigned int state = 0;
++
++ if (get_user(state, (unsigned int __user *)data))
++ return -EFAULT;
++ return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++ }
++ case FBIOCONDECOR_GETSTATE:
++ {
++ unsigned int state = 0;
++
++ fbcon_decor_ioctl_dogetstate(vc, &state);
++ return put_user(state, (unsigned int __user *)data);
++ }
++
++ default:
++ return -ENOIOCTLCMD;
++ }
++}
++
++#ifdef CONFIG_COMPAT
++
++static long fbcon_decor_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
++{
++ struct fbcon_decor_iowrapper32 __user *wrapper = (void __user *)arg;
++ struct vc_data *vc = NULL;
++ unsigned short vc_num = 0;
++ unsigned char origin = 0;
++ compat_uptr_t data_compat = 0;
++ void __user *data = NULL;
++
++ if (!access_ok(VERIFY_READ, wrapper,
++ sizeof(struct fbcon_decor_iowrapper32)))
++ return -EFAULT;
++
++ __get_user(vc_num, &wrapper->vc);
++ __get_user(origin, &wrapper->origin);
++ __get_user(data_compat, &wrapper->data);
++ data = compat_ptr(data_compat);
++
++ if (!vc_cons_allocated(vc_num))
++ return -EINVAL;
++
++ vc = vc_cons[vc_num].d;
++
++ switch (cmd) {
++ case FBIOCONDECOR_SETPIC32:
++ {
++ struct fb_image32 img_compat;
++ struct fb_image img;
++
++ if (copy_from_user(&img_compat, (struct fb_image32 __user *)data, sizeof(struct fb_image32)))
++ return -EFAULT;
++
++ fb_image_from_compat(img, img_compat);
++
++ return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++ }
++
++ case FBIOCONDECOR_SETCFG32:
++ {
++ struct vc_decor32 cfg_compat;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++ return -EFAULT;
++
++ vc_decor_from_compat(cfg, cfg_compat);
++
++ return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++ }
++
++ case FBIOCONDECOR_GETCFG32:
++ {
++ int rval;
++ struct vc_decor32 cfg_compat;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++ return -EFAULT;
++ cfg.theme = compat_ptr(cfg_compat.theme);
++
++ rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++ vc_decor_to_compat(cfg_compat, cfg);
++
++ if (copy_to_user((struct vc_decor32 __user *)data, &cfg_compat, sizeof(struct vc_decor32)))
++ return -EFAULT;
++ return rval;
++ }
++
++ case FBIOCONDECOR_SETSTATE32:
++ {
++ compat_uint_t state_compat = 0;
++ unsigned int state = 0;
++
++ if (get_user(state_compat, (compat_uint_t __user *)data))
++ return -EFAULT;
++
++ state = (unsigned int)state_compat;
++
++ return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++ }
++
++ case FBIOCONDECOR_GETSTATE32:
++ {
++ compat_uint_t state_compat = 0;
++ unsigned int state = 0;
++
++ fbcon_decor_ioctl_dogetstate(vc, &state);
++ state_compat = (compat_uint_t)state;
++
++ return put_user(state_compat, (compat_uint_t __user *)data);
++ }
++
++ default:
++ return -ENOIOCTLCMD;
++ }
++}
++#else
++ #define fbcon_decor_compat_ioctl NULL
++#endif
++
++static struct file_operations fbcon_decor_ops = {
++ .owner = THIS_MODULE,
++ .unlocked_ioctl = fbcon_decor_ioctl,
++ .compat_ioctl = fbcon_decor_compat_ioctl
++};
++
++static struct miscdevice fbcon_decor_dev = {
++ .minor = MISC_DYNAMIC_MINOR,
++ .name = "fbcondecor",
++ .fops = &fbcon_decor_ops
++};
++
++void fbcon_decor_reset(void)
++{
++ int i;
++
++ for (i = 0; i < num_registered_fb; i++) {
++ registered_fb[i]->bgdecor.data = NULL;
++ registered_fb[i]->bgdecor.cmap.red = NULL;
++ }
++
++ for (i = 0; i < MAX_NR_CONSOLES && vc_cons[i].d; i++) {
++ vc_cons[i].d->vc_decor.state = vc_cons[i].d->vc_decor.twidth =
++ vc_cons[i].d->vc_decor.theight = 0;
++ vc_cons[i].d->vc_decor.theme = NULL;
++ }
++}
++
++int fbcon_decor_init(void)
++{
++ int i;
++
++ fbcon_decor_reset();
++
++ if (initialized)
++ return 0;
++
++ i = misc_register(&fbcon_decor_dev);
++ if (i) {
++ printk(KERN_ERR "fbcondecor: failed to register device\n");
++ return i;
++ }
++
++ fbcon_decor_call_helper("init", 0);
++ initialized = 1;
++ return 0;
++}
++
++int fbcon_decor_exit(void)
++{
++ fbcon_decor_reset();
++ return 0;
++}
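
Everything above is driven from userspace through the misc device and the
iowrapper ioctls added to include/uapi/linux/fb.h later in this patch. A
hedged sketch of a caller that switches the decor on for console 1 (the
/dev/fbcondecor node name assumes devtmpfs/udev naming the device after the
miscdevice .name field):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ioctl.h>
    #include <linux/fb.h>   /* patched header: FBIOCONDECOR_*, iowrapper */

    int main(void)
    {
        unsigned int state = 1; /* 0 = off, 1 = on */
        struct fbcon_decor_iowrapper w = {
            .vc = 1,            /* virtual console to act on */
            .origin = FBCON_DECOR_IO_ORIG_USER,
            .data = &state,
        };
        int fd = open("/dev/fbcondecor", O_RDWR);

        if (fd < 0)
            return 1;
        if (ioctl(fd, FBIOCONDECOR_SETSTATE, &w) < 0)
            perror("FBIOCONDECOR_SETSTATE");
        close(fd);
        return 0;
    }
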
+diff --git a/drivers/video/console/fbcondecor.h b/drivers/video/console/fbcondecor.h
+new file mode 100644
+index 000000000000..c49386c16695
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.h
+@@ -0,0 +1,77 @@
++/*
++ * linux/drivers/video/console/fbcondecor.h -- Framebuffer Console Decoration headers
++ *
++ * Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ */
++
++#ifndef __FBCON_DECOR_H
++#define __FBCON_DECOR_H
++
++#ifndef _LINUX_FB_H
++#include <linux/fb.h>
++#endif
++
++/* This is needed for vc_cons in fbcmap.c */
++#include <linux/vt_kern.h>
++
++struct fb_cursor;
++struct fb_info;
++struct vc_data;
++
++#ifdef CONFIG_FB_CON_DECOR
++/* fbcondecor.c */
++int fbcon_decor_init(void);
++int fbcon_decor_exit(void);
++int fbcon_decor_call_helper(char *cmd, unsigned short cons);
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw);
++
++/* cfbcondecor.c */
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx);
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor);
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width);
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only);
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank);
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width);
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes, int srclinebytes, int bpp);
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc);
++
++/* vt.c */
++void acquire_console_sem(void);
++void release_console_sem(void);
++void do_unblank_screen(int entering_gfx);
++
++/* struct vc_data *y */
++#define fbcon_decor_active_vc(y) (y->vc_decor.state && y->vc_decor.theme)
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active_nores(x, y) (x->bgdecor.data && fbcon_decor_active_vc(y))
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active(x, y) (fbcon_decor_active_nores(x, y) && \
++ x->bgdecor.width == x->var.xres && \
++ x->bgdecor.height == x->var.yres && \
++ x->bgdecor.depth == x->var.bits_per_pixel)
++
++#else /* CONFIG_FB_CON_DECOR */
++
++static inline void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx) {}
++static inline void fbcon_decor_putc(struct vc_data *vc, struct fb_info *info, int c, int ypos, int xpos) {}
++static inline void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor) {}
++static inline void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width) {}
++static inline void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only) {}
++static inline void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank) {}
++static inline void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width) {}
++static inline void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc) {}
++static inline int fbcon_decor_call_helper(char *cmd, unsigned short cons) { return 0; }
++static inline int fbcon_decor_init(void) { return 0; }
++static inline int fbcon_decor_exit(void) { return 0; }
++static inline int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw) { return 0; }
++
++#define fbcon_decor_active_vc(y) (0)
++#define fbcon_decor_active_nores(x, y) (0)
++#define fbcon_decor_active(x, y) (0)
++
++#endif /* CONFIG_FB_CON_DECOR */
++
++#endif /* __FBCON_DECOR_H */
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 5e58f5ec0a28..1daa8c2cb2d8 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -1226,7 +1226,6 @@ config FB_MATROX
+ select FB_CFB_FILLRECT
+ select FB_CFB_COPYAREA
+ select FB_CFB_IMAGEBLIT
+- select FB_TILEBLITTING
+ select FB_MACMODES if PPC_PMAC
+ ---help---
+ Say Y here if you have a Matrox Millennium, Matrox Millennium II,
+diff --git a/drivers/video/fbdev/core/bitblit.c b/drivers/video/fbdev/core/bitblit.c
+index 790900d646c0..3f940c93752c 100644
+--- a/drivers/video/fbdev/core/bitblit.c
++++ b/drivers/video/fbdev/core/bitblit.c
+@@ -18,6 +18,7 @@
+ #include <linux/console.h>
+ #include <asm/types.h>
+ #include "fbcon.h"
++#include "../../console/fbcondecor.h"
+
+ /*
+ * Accelerated handlers.
+@@ -55,6 +56,13 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ area.height = height * vc->vc_font.height;
+ area.width = width * vc->vc_font.width;
+
++ if (fbcon_decor_active(info, vc)) {
++ area.sx += vc->vc_decor.tx;
++ area.sy += vc->vc_decor.ty;
++ area.dx += vc->vc_decor.tx;
++ area.dy += vc->vc_decor.ty;
++ }
++
+ info->fbops->fb_copyarea(info, &area);
+ }
+
+@@ -379,11 +387,15 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ cursor.image.depth = 1;
+ cursor.rop = ROP_XOR;
+
+- if (info->fbops->fb_cursor)
+- err = info->fbops->fb_cursor(info, &cursor);
++ if (fbcon_decor_active(info, vc)) {
++ fbcon_decor_cursor(info, &cursor);
++ } else {
++ if (info->fbops->fb_cursor)
++ err = info->fbops->fb_cursor(info, &cursor);
+
+- if (err)
+- soft_cursor(info, &cursor);
++ if (err)
++ soft_cursor(info, &cursor);
++ }
+
+ ops->cursor_reset = 0;
+ }
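
The bitblit hook above is purely a coordinate translation: with a decoration
active, the text field does not start at the framebuffer origin but at
(vc_decor.tx, vc_decor.ty) inside the picture, so every copyarea is shifted
by that origin. In isolation (sketch, ours):

    struct blit_rect { unsigned int sx, sy, dx, dy; };

    /* Shift a console-relative blit into the decoration's text field,
     * which starts at (tx, ty) inside the background picture. */
    static void shift_to_text_field(struct blit_rect *area,
                                    unsigned int tx, unsigned int ty)
    {
        area->sx += tx;
        area->sy += ty;
        area->dx += tx;
        area->dy += ty;
    }
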
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index 68a113594808..21f977cb59d2 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -17,6 +17,8 @@
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
+
++#include "../../console/fbcondecor.h"
++
+ static u16 red2[] __read_mostly = {
+ 0x0000, 0xaaaa
+ };
+@@ -256,9 +258,12 @@ int fb_set_cmap(struct fb_cmap *cmap, struct fb_info *info)
+ break;
+ }
+ }
+- if (rc == 0)
++ if (rc == 0) {
+ fb_copy_cmap(cmap, &info->cmap);
+-
++ if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR)
++ fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++ }
+ return rc;
+ }
+
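
fb_set_cmap() now re-derives the foreground console's pseudo palette on
DIRECTCOLOR visuals through fbcon_decor_fix_pseudo_pal(), added earlier in
this patch. Per palette entry, the packing that function performs is, in
isolation (sketch, ours):

    #include <stdint.h>

    /* Truncate each 8-bit palette component to the narrowest channel
     * width (minlen) and shift it to that channel's bit offset, as
     * fbcon_decor_fix_pseudo_pal() does for the 16 console colors. */
    static uint32_t pack_pixel(uint8_t r, uint8_t g, uint8_t b,
                               unsigned int minlen, unsigned int roff,
                               unsigned int goff, unsigned int boff)
    {
        return ((uint32_t)(r >> (8 - minlen)) << roff) |
               ((uint32_t)(g >> (8 - minlen)) << goff) |
               ((uint32_t)(b >> (8 - minlen)) << boff);
    }
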
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 04612f938bab..95c349200078 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -80,6 +80,7 @@
+ #include <asm/irq.h>
+
+ #include "fbcon.h"
++#include "../../console/fbcondecor.h"
+
+ #ifdef FBCONDEBUG
+ # define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __func__ , ## args)
+@@ -95,7 +96,7 @@ enum {
+
+ static struct display fb_display[MAX_NR_CONSOLES];
+
+-static signed char con2fb_map[MAX_NR_CONSOLES];
++signed char con2fb_map[MAX_NR_CONSOLES];
+ static signed char con2fb_map_boot[MAX_NR_CONSOLES];
+
+ static int logo_lines;
+@@ -282,7 +283,7 @@ static inline int fbcon_is_inactive(struct vc_data *vc, struct fb_info *info)
+ !vt_force_oops_output(vc);
+ }
+
+-static int get_color(struct vc_data *vc, struct fb_info *info,
++int get_color(struct vc_data *vc, struct fb_info *info,
+ u16 c, int is_fg)
+ {
+ int depth = fb_get_color_depth(&info->var, &info->fix);
+@@ -551,6 +552,9 @@ static int do_fbcon_takeover(int show_logo)
+ info_idx = -1;
+ } else {
+ fbcon_has_console_bind = 1;
++#ifdef CONFIG_FB_CON_DECOR
++ fbcon_decor_init();
++#endif
+ }
+
+ return err;
+@@ -1013,6 +1017,12 @@ static const char *fbcon_startup(void)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
++
++ if (fbcon_decor_active(info, vc)) {
++ cols = vc->vc_decor.twidth / vc->vc_font.width;
++ rows = vc->vc_decor.theight / vc->vc_font.height;
++ }
++
+ vc_resize(vc, cols, rows);
+
+ DPRINTK("mode: %s\n", info->fix.id);
+@@ -1042,7 +1052,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ cap = info->flags;
+
+ if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW ||
+- (info->fix.type == FB_TYPE_TEXT))
++ (info->fix.type == FB_TYPE_TEXT) || fbcon_decor_active(info, vc))
+ logo = 0;
+
+ if (var_to_display(p, &info->var, info))
+@@ -1275,6 +1285,11 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height,
+ fbcon_clear_margins(vc, 0);
+ }
+
++ if (fbcon_decor_active(info, vc)) {
++ fbcon_decor_clear(vc, info, sy, sx, height, width);
++ return;
++ }
++
+ /* Split blits that cross physical y_wrap boundary */
+
+ y_break = p->vrows - p->yscroll;
+@@ -1294,10 +1309,15 @@ static void fbcon_putcs(struct vc_data *vc, const unsigned short *s,
+ struct display *p = &fb_display[vc->vc_num];
+ struct fbcon_ops *ops = info->fbcon_par;
+
+- if (!fbcon_is_inactive(vc, info))
+- ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
+- get_color(vc, info, scr_readw(s), 1),
+- get_color(vc, info, scr_readw(s), 0));
++ if (!fbcon_is_inactive(vc, info)) {
++
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_putcs(vc, info, s, count, ypos, xpos);
++ else
++ ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
++ get_color(vc, info, scr_readw(s), 1),
++ get_color(vc, info, scr_readw(s), 0));
++ }
+ }
+
+ static void fbcon_putc(struct vc_data *vc, int c, int ypos, int xpos)
+@@ -1313,8 +1333,12 @@ static void fbcon_clear_margins(struct vc_data *vc, int bottom_only)
+ struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+ struct fbcon_ops *ops = info->fbcon_par;
+
+- if (!fbcon_is_inactive(vc, info))
+- ops->clear_margins(vc, info, margin_color, bottom_only);
++ if (!fbcon_is_inactive(vc, info)) {
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_clear_margins(vc, info, bottom_only);
++ else
++ ops->clear_margins(vc, info, margin_color, bottom_only);
++ }
+ }
+
+ static void fbcon_cursor(struct vc_data *vc, int mode)
+@@ -1835,7 +1859,7 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ count = vc->vc_rows;
+ if (softback_top)
+ fbcon_softback_note(vc, t, count);
+- if (logo_shown >= 0)
++ if (logo_shown >= 0 || fbcon_decor_active(info, vc))
+ goto redraw_up;
+ switch (p->scrollmode) {
+ case SCROLL_MOVE:
+@@ -1928,6 +1952,8 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ count = vc->vc_rows;
+ if (logo_shown >= 0)
+ goto redraw_down;
++ if (fbcon_decor_active(info, vc))
++ goto redraw_down;
+ switch (p->scrollmode) {
+ case SCROLL_MOVE:
+ fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
+@@ -2076,6 +2102,13 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int s
+ }
+ return;
+ }
++
++ if (fbcon_decor_active(info, vc) && sy == dy && height == 1) {
++ /* must use slower redraw bmove to keep background pic intact */
++ fbcon_decor_bmove_redraw(vc, info, sy, sx, dx, width);
++ return;
++ }
++
+ ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
+ height, width);
+ }
+@@ -2146,8 +2179,8 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ var.yres = virt_h * virt_fh;
+ x_diff = info->var.xres - var.xres;
+ y_diff = info->var.yres - var.yres;
+- if (x_diff < 0 || x_diff > virt_fw ||
+- y_diff < 0 || y_diff > virt_fh) {
++ if ((x_diff < 0 || x_diff > virt_fw ||
++ y_diff < 0 || y_diff > virt_fh) && !vc->vc_decor.state) {
+ const struct fb_videomode *mode;
+
+ DPRINTK("attempting resize %ix%i\n", var.xres, var.yres);
+@@ -2183,6 +2216,22 @@ static int fbcon_switch(struct vc_data *vc)
+
+ info = registered_fb[con2fb_map[vc->vc_num]];
+ ops = info->fbcon_par;
++ prev_console = ops->currcon;
++ if (prev_console != -1)
++ old_info = registered_fb[con2fb_map[prev_console]];
++
++#ifdef CONFIG_FB_CON_DECOR
++ if (!fbcon_decor_active_vc(vc) && info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++ struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++ if (vc_curr && fbcon_decor_active_vc(vc_curr)) {
++ // Clear the screen to avoid displaying funky colors
++ // during palette updates.
++ memset((u8 *)info->screen_base + info->fix.line_length * info->var.yoffset,
++ 0, info->var.yres * info->fix.line_length);
++ }
++ }
++#endif
+
+ if (softback_top) {
+ if (softback_lines)
+@@ -2201,9 +2250,6 @@ static int fbcon_switch(struct vc_data *vc)
+ logo_shown = FBCON_LOGO_CANSHOW;
+ }
+
+- prev_console = ops->currcon;
+- if (prev_console != -1)
+- old_info = registered_fb[con2fb_map[prev_console]];
+ /*
+ * FIXME: If we have multiple fbdev's loaded, we need to
+ * update all info->currcon. Perhaps, we can place this
+@@ -2247,6 +2293,18 @@ static int fbcon_switch(struct vc_data *vc)
+ fbcon_del_cursor_timer(old_info);
+ }
+
++ if (fbcon_decor_active_vc(vc)) {
++ struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++ if (!vc_curr->vc_decor.theme ||
++ strcmp(vc->vc_decor.theme, vc_curr->vc_decor.theme) ||
++ (fbcon_decor_active_nores(info, vc_curr) &&
++ !fbcon_decor_active(info, vc_curr))) {
++ fbcon_decor_disable(vc, 0);
++ fbcon_decor_call_helper("modechange", vc->vc_num);
++ }
++ }
++
+ if (fbcon_is_inactive(vc, info) ||
+ ops->blank_state != FB_BLANK_UNBLANK)
+ fbcon_del_cursor_timer(info);
+@@ -2355,15 +2413,20 @@ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch)
+ }
+ }
+
+- if (!fbcon_is_inactive(vc, info)) {
++ if (!fbcon_is_inactive(vc, info)) {
+ if (ops->blank_state != blank) {
+ ops->blank_state = blank;
+ fbcon_cursor(vc, blank ? CM_ERASE : CM_DRAW);
+ ops->cursor_flash = (!blank);
+
+- if (!(info->flags & FBINFO_MISC_USEREVENT))
+- if (fb_blank(info, blank))
+- fbcon_generic_blank(vc, info, blank);
++ if (!(info->flags & FBINFO_MISC_USEREVENT)) {
++ if (fb_blank(info, blank)) {
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_blank(vc, info, blank);
++ else
++ fbcon_generic_blank(vc, info, blank);
++ }
++ }
+ }
+
+ if (!blank)
+@@ -2546,13 +2609,22 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ set_vc_hi_font(vc, true);
+
+ if (resize) {
++ /* reset wrap/pan */
+ int cols, rows;
+
+ cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
++
++ if (fbcon_decor_active(info, vc)) {
++ info->var.xoffset = info->var.yoffset = p->yscroll = 0;
++ cols = vc->vc_decor.twidth;
++ rows = vc->vc_decor.theight;
++ }
+ cols /= w;
+ rows /= h;
++
+ vc_resize(vc, cols, rows);
++
+ if (con_is_visible(vc) && softback_buf)
+ fbcon_update_softback(vc);
+ } else if (con_is_visible(vc)
+@@ -2681,7 +2753,11 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+ int i, j, k, depth;
+ u8 val;
+
+- if (fbcon_is_inactive(vc, info))
++ if (fbcon_is_inactive(vc, info)
++#ifdef CONFIG_FB_CON_DECOR
++ || vc->vc_num != fg_console
++#endif
++ )
+ return;
+
+ if (!con_is_visible(vc))
+@@ -2707,7 +2783,47 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+ } else
+ fb_copy_cmap(fb_default_cmap(1 << depth), &palette_cmap);
+
+- fb_set_cmap(&palette_cmap, info);
++ if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++
++ u16 *red, *green, *blue;
++ int minlen = min(min(info->var.red.length, info->var.green.length),
++ info->var.blue.length);
++
++ struct fb_cmap cmap = {
++ .start = 0,
++ .len = (1 << minlen),
++ .red = NULL,
++ .green = NULL,
++ .blue = NULL,
++ .transp = NULL
++ };
++
++ red = kmalloc(256 * sizeof(u16) * 3, GFP_KERNEL);
++
++ if (!red)
++ goto out;
++
++ green = red + 256;
++ blue = green + 256;
++ cmap.red = red;
++ cmap.green = green;
++ cmap.blue = blue;
++
++ for (i = 0; i < cmap.len; i++)
++ red[i] = green[i] = blue[i] = (0xffff * i)/(cmap.len-1);
++
++ fb_set_cmap(&cmap, info);
++ fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++ kfree(red);
++
++ return;
++
++ } else if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->var.bits_per_pixel == 8 && info->bgdecor.cmap.red != NULL)
++ fb_set_cmap(&info->bgdecor.cmap, info);
++
++out: fb_set_cmap(&palette_cmap, info);
+ }
+
+ static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
+@@ -2932,7 +3048,14 @@ static void fbcon_modechanged(struct fb_info *info)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
+- vc_resize(vc, cols, rows);
++
++ if (!fbcon_decor_active_nores(info, vc)) {
++ vc_resize(vc, cols, rows);
++ } else {
++ fbcon_decor_disable(vc, 0);
++ fbcon_decor_call_helper("modechange", vc->vc_num);
++ }
++
+ updatescrollmode(p, info, vc);
+ scrollback_max = 0;
+ scrollback_current = 0;
+@@ -2977,7 +3100,8 @@ static void fbcon_set_all_vcs(struct fb_info *info)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
+- vc_resize(vc, cols, rows);
++ if (!fbcon_decor_active_nores(info, vc))
++ vc_resize(vc, cols, rows);
+ }
+
+ if (fg != -1)
+@@ -3618,6 +3742,7 @@ static void fbcon_exit(void)
+ }
+ }
+
++ fbcon_decor_exit();
+ fbcon_has_exited = 1;
+ }
+
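
The DIRECTCOLOR branch added to fbcon_set_palette() installs a linear
intensity ramp so that the hardware palette becomes an identity mapping and
the background picture keeps its true colors; only the 16 console colors are
then fixed up in the pseudo palette. The ramp itself, in isolation (sketch,
ours; assumes len >= 2):

    #include <stdint.h>

    static void fill_linear_ramp(uint16_t *red, uint16_t *green,
                                 uint16_t *blue, unsigned int len)
    {
        unsigned int i;

        /* Entry i gets intensity 0xffff * i / (len - 1) in all three
         * channels, i.e. a straight identity ramp. */
        for (i = 0; i < len; i++)
            red[i] = green[i] = blue[i] =
                (uint16_t)(0xffffu * i / (len - 1));
    }
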
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index f741ba8df01b..b0141433d249 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1253,15 +1253,6 @@ struct fb_fix_screeninfo32 {
+ u16 reserved[3];
+ };
+
+-struct fb_cmap32 {
+- u32 start;
+- u32 len;
+- compat_caddr_t red;
+- compat_caddr_t green;
+- compat_caddr_t blue;
+- compat_caddr_t transp;
+-};
+-
+ static int fb_getput_cmap(struct fb_info *info, unsigned int cmd,
+ unsigned long arg)
+ {
+diff --git a/include/linux/console_decor.h b/include/linux/console_decor.h
+new file mode 100644
+index 000000000000..15143556c2aa
+--- /dev/null
++++ b/include/linux/console_decor.h
+@@ -0,0 +1,46 @@
++#ifndef _LINUX_CONSOLE_DECOR_H_
++#define _LINUX_CONSOLE_DECOR_H_ 1
++
++/* A structure used by the framebuffer console decorations (drivers/video/console/fbcondecor.c) */
++struct vc_decor {
++ __u8 bg_color; /* The color that is to be treated as transparent */
++ __u8 state; /* Current decor state: 0 = off, 1 = on */
++ __u16 tx, ty; /* Top left corner coordinates of the text field */
++ __u16 twidth, theight; /* Width and height of the text field */
++ char *theme;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++
++struct vc_decor32 {
++ __u8 bg_color; /* The color that is to be treated as transparent */
++ __u8 state; /* Current decor state: 0 = off, 1 = on */
++ __u16 tx, ty; /* Top left corner coordinates of the text field */
++ __u16 twidth, theight; /* Width and height of the text field */
++ compat_uptr_t theme;
++};
++
++#define vc_decor_from_compat(to, from) \
++ (to).bg_color = (from).bg_color; \
++ (to).state = (from).state; \
++ (to).tx = (from).tx; \
++ (to).ty = (from).ty; \
++ (to).twidth = (from).twidth; \
++ (to).theight = (from).theight; \
++ (to).theme = compat_ptr((from).theme)
++
++#define vc_decor_to_compat(to, from) \
++ (to).bg_color = (from).bg_color; \
++ (to).state = (from).state; \
++ (to).tx = (from).tx; \
++ (to).ty = (from).ty; \
++ (to).twidth = (from).twidth; \
++ (to).theight = (from).theight; \
++ (to).theme = ptr_to_compat((from).theme)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#endif
+diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
+index c0ec478ea5bf..8bfed6b21fc9 100644
+--- a/include/linux/console_struct.h
++++ b/include/linux/console_struct.h
+@@ -21,6 +21,7 @@ struct vt_struct;
+ struct uni_pagedir;
+
+ #define NPAR 16
++#include <linux/console_decor.h>
+
+ /*
+ * Example: vc_data of a console that was scrolled 3 lines down.
+@@ -141,6 +142,8 @@ struct vc_data {
+ struct uni_pagedir *vc_uni_pagedir;
+ struct uni_pagedir **vc_uni_pagedir_loc; /* [!] Location of uni_pagedir variable for this console */
+ bool vc_panic_force_write; /* when oops/panic this VC can accept forced output/blanking */
++
++ struct vc_decor vc_decor;
+ /* additional information is in vt_kern.h */
+ };
+
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index bc24e48e396d..ad7d182c7545 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -239,6 +239,34 @@ struct fb_deferred_io {
+ };
+ #endif
+
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_image32 {
++ __u32 dx; /* Where to place image */
++ __u32 dy;
++ __u32 width; /* Size of image */
++ __u32 height;
++ __u32 fg_color; /* Only used when a mono bitmap */
++ __u32 bg_color;
++ __u8 depth; /* Depth of the image */
++ const compat_uptr_t data; /* Pointer to image data */
++ struct fb_cmap32 cmap; /* color map info */
++};
++
++#define fb_image_from_compat(to, from) \
++ (to).dx = (from).dx; \
++ (to).dy = (from).dy; \
++ (to).width = (from).width; \
++ (to).height = (from).height; \
++ (to).fg_color = (from).fg_color; \
++ (to).bg_color = (from).bg_color; \
++ (to).depth = (from).depth; \
++ (to).data = compat_ptr((from).data); \
++ fb_cmap_from_compat((to).cmap, (from).cmap)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /*
+ * Frame buffer operations
+ *
+@@ -509,6 +537,9 @@ struct fb_info {
+ #define FBINFO_STATE_SUSPENDED 1
+ u32 state; /* Hardware state i.e suspend */
+ void *fbcon_par; /* fbcon use-only private area */
++
++ struct fb_image bgdecor;
++
+ /* From here on everything is device dependent */
+ void *par;
+ /* we need the PCI or similar aperture base/size not
+diff --git a/include/uapi/linux/fb.h b/include/uapi/linux/fb.h
+index 6cd9b198b7c6..a228440649fa 100644
+--- a/include/uapi/linux/fb.h
++++ b/include/uapi/linux/fb.h
+@@ -9,6 +9,23 @@
+
+ #define FB_MAX 32 /* sufficient for now */
+
++struct fbcon_decor_iowrapper {
++ unsigned short vc; /* Virtual console */
++ unsigned char origin; /* Point of origin of the request */
++ void *data;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++struct fbcon_decor_iowrapper32 {
++ unsigned short vc; /* Virtual console */
++ unsigned char origin; /* Point of origin of the request */
++ compat_uptr_t data;
++};
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /* ioctls
+ 0x46 is 'F' */
+ #define FBIOGET_VSCREENINFO 0x4600
+@@ -36,6 +53,25 @@
+ #define FBIOGET_DISPINFO 0x4618
+ #define FBIO_WAITFORVSYNC _IOW('F', 0x20, __u32)
+
++#define FBIOCONDECOR_SETCFG _IOWR('F', 0x19, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETCFG _IOR('F', 0x1A, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETSTATE _IOWR('F', 0x1B, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETSTATE _IOR('F', 0x1C, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETPIC _IOWR('F', 0x1D, struct fbcon_decor_iowrapper)
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#define FBIOCONDECOR_SETCFG32 _IOWR('F', 0x19, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETCFG32 _IOR('F', 0x1A, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETSTATE32 _IOWR('F', 0x1B, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETSTATE32 _IOR('F', 0x1C, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETPIC32 _IOWR('F', 0x1D, struct fbcon_decor_iowrapper32)
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#define FBCON_DECOR_THEME_LEN 128 /* Maximum length of a theme name */
++#define FBCON_DECOR_IO_ORIG_KERNEL 0 /* Kernel ioctl origin */
++#define FBCON_DECOR_IO_ORIG_USER 1 /* User ioctl origin */
++
+ #define FB_TYPE_PACKED_PIXELS 0 /* Packed Pixels */
+ #define FB_TYPE_PLANES 1 /* Non interleaved planes */
+ #define FB_TYPE_INTERLEAVED_PLANES 2 /* Interleaved planes */
+@@ -278,6 +314,29 @@ struct fb_var_screeninfo {
+ __u32 reserved[4]; /* Reserved for future compatibility */
+ };
+
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_cmap32 {
++ __u32 start;
++ __u32 len; /* Number of entries */
++ compat_uptr_t red; /* Red values */
++ compat_uptr_t green;
++ compat_uptr_t blue;
++ compat_uptr_t transp; /* transparency, can be NULL */
++};
++
++#define fb_cmap_from_compat(to, from) \
++ (to).start = (from).start; \
++ (to).len = (from).len; \
++ (to).red = compat_ptr((from).red); \
++ (to).green = compat_ptr((from).green); \
++ (to).blue = compat_ptr((from).blue); \
++ (to).transp = compat_ptr((from).transp)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++
+ struct fb_cmap {
+ __u32 start; /* First entry */
+ __u32 len; /* Number of entries */
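
The new FBIOCONDECOR_* requests use the standard _IOR()/_IOWR() encoding:
direction, payload size, the 'F' magic shared with the other fb ioctls, and
a command number. A small decoder (userspace sketch, ours; assumes the
patched header is installed):

    #include <stdio.h>
    #include <linux/ioctl.h>
    #include <linux/fb.h>   /* patched header: FBIOCONDECOR_SETCFG */

    int main(void)
    {
        unsigned long req = FBIOCONDECOR_SETCFG;

        printf("dir=%u type='%c' nr=0x%x size=%u\n",
               (unsigned int)_IOC_DIR(req), (char)_IOC_TYPE(req),
               (unsigned int)_IOC_NR(req), (unsigned int)_IOC_SIZE(req));
        return 0;
    }
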
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index d9c31bc2eaea..e33ac56cc32a 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -150,6 +150,10 @@ static const int cap_last_cap = CAP_LAST_CAP;
+ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #endif
+
++#ifdef CONFIG_FB_CON_DECOR
++extern char fbcon_decor_path[];
++#endif
++
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
+@@ -283,6 +287,15 @@ static struct ctl_table sysctl_base_table[] = {
+ .mode = 0555,
+ .child = dev_table,
+ },
++#ifdef CONFIG_FB_CON_DECOR
++ {
++ .procname = "fbcondecor",
++ .data = &fbcon_decor_path,
++ .maxlen = KMOD_PATH_LEN,
++ .mode = 0644,
++ .proc_handler = &proc_dostring,
++ },
++#endif
+ { }
+ };
+
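
The sysctl entry makes the helper path runtime-tunable: kernel.fbcondecor
can be read or set with sysctl(8), or directly through procfs (userspace
sketch, ours):

    #include <stdio.h>

    int main(void)
    {
        char path[256];
        FILE *f = fopen("/proc/sys/kernel/fbcondecor", "r");

        if (!f)
            return 1;
        if (fgets(path, sizeof(path), f))
            printf("decor helper: %s", path);
        fclose(f);
        return 0;
    }
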
diff --git a/4400_alpha-sysctl-uac.patch b/4400_alpha-sysctl-uac.patch
new file mode 100644
index 0000000..d42b4ed
--- /dev/null
+++ b/4400_alpha-sysctl-uac.patch
@@ -0,0 +1,142 @@
+diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
+index 7f312d8..1eb686b 100644
+--- a/arch/alpha/Kconfig
++++ b/arch/alpha/Kconfig
+@@ -697,6 +697,33 @@ config HZ
+ default 1200 if HZ_1200
+ default 1024
+
++config ALPHA_UAC_SYSCTL
++ bool "Configure UAC policy via sysctl"
++ depends on SYSCTL
++ default y
++ ---help---
++ Configuring the UAC (unaligned access control) policy on a Linux
++ system usually involves setting a compile time define. If you say
++ Y here, you will be able to modify the UAC policy at runtime using
++ the /proc interface.
++
++ The UAC policy defines the action Linux should take when an
++ unaligned memory access occurs. The policy can suppress the
++ warning message (NOPRINT), send a signal to the offending
++ program to help developers debug their applications (SIGBUS),
++ or disable the transparent fixing of the access (NOFIX).
++
++ The sysctls will be initialized to the compile-time defined UAC
++ policy. You can change these manually, or with the sysctl(8)
++ userspace utility.
++
++ To disable the warning messages at runtime, you would use
++
++ echo 1 > /proc/sys/kernel/uac/noprint
++
++ This is pretty harmless. Say Y if you're not sure.
++
++
+ source "drivers/pci/Kconfig"
+ source "drivers/eisa/Kconfig"
+
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index 74aceea..cb35d80 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -103,6 +103,49 @@ static char * ireg_name[] = {"v0", "t0", "t1", "t2", "t3", "t4", "t5", "t6",
+ "t10", "t11", "ra", "pv", "at", "gp", "sp", "zero"};
+ #endif
+
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++
++#include <linux/sysctl.h>
++
++static int enabled_noprint = 0;
++static int enabled_sigbus = 0;
++static int enabled_nofix = 0;
++
++struct ctl_table uac_table[] = {
++ {
++ .procname = "noprint",
++ .data = &enabled_noprint,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec,
++ },
++ {
++ .procname = "sigbus",
++ .data = &enabled_sigbus,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec,
++ },
++ {
++ .procname = "nofix",
++ .data = &enabled_nofix,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec,
++ },
++ { }
++};
++
++static int __init init_uac_sysctl(void)
++{
++ /* Initialize sysctls with the #defined UAC policy */
++ enabled_noprint = (test_thread_flag (TS_UAC_NOPRINT)) ? 1 : 0;
++ enabled_sigbus = (test_thread_flag (TS_UAC_SIGBUS)) ? 1 : 0;
++ enabled_nofix = (test_thread_flag (TS_UAC_NOFIX)) ? 1 : 0;
++ return 0;
++}
++#endif
++
+ static void
+ dik_show_code(unsigned int *pc)
+ {
+@@ -785,7 +828,12 @@ do_entUnaUser(void __user * va, unsigned long opcode,
+ /* Check the UAC bits to decide what the user wants us to do
+ with the unaliged access. */
+
++#ifndef CONFIG_ALPHA_UAC_SYSCTL
+ if (!(current_thread_info()->status & TS_UAC_NOPRINT)) {
++#else /* CONFIG_ALPHA_UAC_SYSCTL */
++ if (!(current_thread_info()->status & TS_UAC_NOPRINT) &&
++ !(enabled_noprint)) {
++#endif /* CONFIG_ALPHA_UAC_SYSCTL */
+ if (__ratelimit(&ratelimit)) {
+ printk("%s(%d): unaligned trap at %016lx: %p %lx %ld\n",
+ current->comm, task_pid_nr(current),
+@@ -1090,3 +1138,6 @@ trap_init(void)
+ wrent(entSys, 5);
+ wrent(entDbg, 6);
+ }
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++ __initcall(init_uac_sysctl);
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 87b2fc3..55021a8 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -152,6 +152,11 @@ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
++
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++extern struct ctl_table uac_table[];
++#endif
++
+ #ifdef CONFIG_SPARC
+ #endif
+
+@@ -1844,6 +1849,13 @@ static struct ctl_table debug_table[] = {
+ .extra2 = &one,
+ },
+ #endif
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++ {
++ .procname = "uac",
++ .mode = 0555,
++ .child = uac_table,
++ },
++#endif /* CONFIG_ALPHA_UAC_SYSCTL */
+ { }
+ };
+
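
With the patch applied, the three UAC knobs live under /proc/sys/kernel/uac/.
The C equivalent of the 'echo 1 > /proc/sys/kernel/uac/noprint' example from
the help text (sketch, ours):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/uac/noprint", "w");

        if (!f)
            return 1;       /* not root, or patch not applied */
        fputs("1\n", f);    /* stop logging unaligned-access traps */
        fclose(f);
        return 0;
    }
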
diff --git a/5010_enable-additional-cpu-optimizations-for-gcc.patch b/5010_enable-additional-cpu-optimizations-for-gcc.patch
new file mode 100644
index 0000000..1aba143
--- /dev/null
+++ b/5010_enable-additional-cpu-optimizations-for-gcc.patch
@@ -0,0 +1,530 @@
+WARNING
+This patch works with gcc versions 4.9+ and with kernel version 3.15+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features --->
+ Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* Intel Silvermont low-power processors
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[3]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'march=atom' flags when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[2]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make-based benchmark
+comparing a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=3.15
+gcc version >=4.9
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+4. https://github.com/graysky2/kernel_gcc_patch/issues/15
+5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
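When picking one of the new options, gcc's own CPU-detection builtins use
the same microarchitecture names; a quick probe (sketch, ours; the accepted
names vary with the gcc version):

    #include <stdio.h>

    int main(void)
    {
        __builtin_cpu_init();
        if (__builtin_cpu_is("nehalem"))
            puts("1st Gen Core -> CONFIG_MNEHALEM");
        else if (__builtin_cpu_is("haswell"))
            puts("4th Gen Core -> CONFIG_MHASWELL");
        else if (__builtin_cpu_is("btver2"))
            puts("AMD Jaguar   -> CONFIG_MJAGUAR");
        else
            puts("see the full list in the Kconfig changes below");
        return 0;
    }
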
+--- a/arch/x86/include/asm/module.h 2018-02-25 21:50:41.000000000 -0500
++++ b/arch/x86/include/asm/module.h 2018-02-26 15:37:52.684596240 -0500
+@@ -25,6 +25,24 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -43,6 +61,26 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu 2018-02-25 21:50:41.000000000 -0500
++++ b/arch/x86/Kconfig.cpu 2018-02-26 15:46:09.886742109 -0500
+@@ -116,6 +116,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ depends on X86_32
++ select X86_P6_NOP
+ ---help---
+ Select this for Intel Pentium 4 chips. This includes the
+ Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -148,9 +149,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ ---help---
+ Select this for an AMD K6-family processor. Enables use of
+@@ -158,7 +158,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ ---help---
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -166,12 +166,83 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ ---help---
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ ---help---
++ Select this for later AMD Opteron or Athlon64 Hammer-family processors with SSE3 support.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ ---help---
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ ---help---
++ Select this for AMD Family 10h Barcelona processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ ---help---
++ Select this for AMD Family 14h Bobcat processors.
++
++ Enables -march=btver1
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ ---help---
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ ---help---
++ Select this for AMD Family 15h Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ ---help---
++ Select this for AMD Family 15h Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ ---help---
++ Select this for AMD Family 15h Steamroller processors.
++
++ Enables -march=bdver3
++
++config MEXCAVATOR
++ bool "AMD Excavator"
++ ---help---
++ Select this for AMD Family 15h Excavator processors.
++
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ ---help---
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -253,6 +324,7 @@ config MVIAC7
+
+ config MPSC
+ bool "Intel P4 / older Netburst based Xeon"
++ select X86_P6_NOP
+ depends on X86_64
+ ---help---
+ Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -262,8 +334,19 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -271,14 +354,79 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
++ select X86_P6_NOP
+ ---help---
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Westmere (formerly Nehalem-C) family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -287,6 +435,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march that are
++ specific to your CPU (e.g. -msse4). Unless you have a specific reason
++ not to (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than any of the options listed above.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
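What -march=native resolves to on a given machine can be inspected with `gcc -march=native -Q --help=target`. The same cpuid-based identification GCC performs is also exposed to user code; a small sketch using GCC's documented builtins (available since roughly GCC 4.8; the vendor and feature names below are from GCC's docs, not from this patch):

    /* Sketch: runtime analogue of the detection -march=native does at
     * compile time, via GCC's cpuid builtins. */
    #include <stdio.h>

    int main(void)
    {
        __builtin_cpu_init();  /* populate GCC's CPU model data */
        if (__builtin_cpu_is("intel"))
            puts("Intel CPU detected");
        else if (__builtin_cpu_is("amd"))
            puts("AMD CPU detected");
        if (__builtin_cpu_supports("sse4.2"))
            puts("SSE4.2 available (the kind of flag -march=native enables)");
        return 0;
    }
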
+@@ -311,7 +472,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ default "4" if MELAN || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -342,35 +503,36 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs). In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+- def_bool y
+- depends on X86_64
+- depends on (MCORE2 || MPENTIUM4 || MPSC)
++ default n
++ bool "Support for P6_NOPs on Intel chips"
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE)
++ ---help---
++ P6_NOPs are a relatively minor optimization that requires a family
++ >= 6 processor, except that it is broken on certain VIA chips.
++ Furthermore, AMD chips prefer a totally different sequence of NOPs
++ (which work on all CPUs). In addition, it looks like Virtual PC
++ does not understand them.
++
++ These NOPs do work on all x86-64 capable chips; the processors
++ listed in the dependency clause above are the cores that benefit
++ from this optimization.
++
++ Say Y if you have an Intel CPU newer than the Pentium Pro; otherwise
++ say N.
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
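For context on what the X86_P6_NOP change above actually toggles: the "P6 NOPs" are the multi-byte 0F 1F /0 NOP encodings introduced with family 6 processors, as opposed to the prefixed classic NOPs that AMD cores historically preferred. A toy sketch, x86 only, with encodings per the vendor manuals (not code from this patch):

    /* Toy sketch: execute both NOP styles the Kconfig help refers to.
     * Both are architectural no-ops on x86-64; the option only affects
     * which padding sequence the kernel emits. */
    #include <stdio.h>

    int main(void)
    {
        __asm__ volatile (".byte 0x0f, 0x1f, 0x00");  /* 3-byte P6 NOP: nopl (%eax) */
        __asm__ volatile (".byte 0x66, 0x66, 0x90");  /* 66-prefixed NOP (K8 style) */
        puts("both NOP encodings executed");
        return 0;
    }
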
+@@ -380,7 +542,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+--- a/arch/x86/Makefile 2018-02-25 21:50:41.000000000 -0500
++++ b/arch/x86/Makefile 2018-02-26 15:37:52.685596255 -0500
+@@ -124,13 +124,40 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MNEHALEM) += \
++ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++ cflags-$(CONFIG_MWESTMERE) += \
++ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++ cflags-$(CONFIG_MSILVERMONT) += \
++ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++ cflags-$(CONFIG_MSANDYBRIDGE) += \
++ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++ cflags-$(CONFIG_MIVYBRIDGE) += \
++ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++ cflags-$(CONFIG_MHASWELL) += \
++ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++ cflags-$(CONFIG_MBROADWELL) += \
++ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++ cflags-$(CONFIG_MSKYLAKE) += \
++ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+ KBUILD_CFLAGS += $(cflags-y)
+
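A quick way to confirm what the cflags-$(CONFIG_...) lines above end up selecting: GCC defines per-arch and per-tune preprocessor macros, so a tiny probe compiled with different flags shows the resolution. A sketch (the __<arch>__ and __tune_<arch>__ macro names are assumed from GCC's x86 target documentation):

    /* Sketch: compile with e.g. `gcc -march=core2 probe.c` versus
     * `gcc -march=znver1 probe.c` and compare the output. */
    #include <stdio.h>

    int main(void)
    {
    #ifdef __core2__
        puts("-march resolved to core2");
    #endif
    #ifdef __znver1__
        puts("-march resolved to znver1");
    #endif
    #ifdef __tune_core2__
        puts("-mtune resolved to core2");
    #endif
        puts("probe done");
        return 0;
    }
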
+--- a/arch/x86/Makefile_32.cpu 2018-02-25 21:50:41.000000000 -0500
++++ b/arch/x86/Makefile_32.cpu 2018-02-26 15:37:52.686596269 -0500
+@@ -23,7 +23,18 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsoever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,8 +43,16 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN) += -march=i486