public inbox for gentoo-commits@lists.gentoo.org
From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.2 commit in: /
Date: Fri, 23 Oct 2015 17:14:15 +0000 (UTC)
Message-ID: <1445620456.a66c9411919f0d467ddacb949af14b1336517b90.mpagano@gentoo>

commit:     a66c9411919f0d467ddacb949af14b1336517b90
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Oct 23 17:14:16 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Oct 23 17:14:16 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a66c9411

Linux patch 4.2.4

 0000_README            |     4 +
 1003_linux-4.2.4.patch | 10010 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 10014 insertions(+)

diff --git a/0000_README b/0000_README
index 5a14372..2a467c2 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-4.2.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.2.3
 
+Patch:  1003_linux-4.2.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.2.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-4.2.4.patch b/1003_linux-4.2.4.patch
new file mode 100644
index 0000000..4118bfa
--- /dev/null
+++ b/1003_linux-4.2.4.patch
@@ -0,0 +1,10010 @@
+diff --git a/Documentation/HOWTO b/Documentation/HOWTO
+index 93aa8604630e..21152d397b88 100644
+--- a/Documentation/HOWTO
++++ b/Documentation/HOWTO
+@@ -218,16 +218,16 @@ The development process
+ Linux kernel development process currently consists of a few different
+ main kernel "branches" and lots of different subsystem-specific kernel
+ branches.  These different branches are:
+-  - main 3.x kernel tree
+-  - 3.x.y -stable kernel tree
+-  - 3.x -git kernel patches
++  - main 4.x kernel tree
++  - 4.x.y -stable kernel tree
++  - 4.x -git kernel patches
+   - subsystem specific kernel trees and patches
+-  - the 3.x -next kernel tree for integration tests
++  - the 4.x -next kernel tree for integration tests
+ 
+-3.x kernel tree
++4.x kernel tree
+ -----------------
+-3.x kernels are maintained by Linus Torvalds, and can be found on
+-kernel.org in the pub/linux/kernel/v3.x/ directory.  Its development
++4.x kernels are maintained by Linus Torvalds, and can be found on
++kernel.org in the pub/linux/kernel/v4.x/ directory.  Its development
+ process is as follows:
+   - As soon as a new kernel is released a two weeks window is open,
+     during this period of time maintainers can submit big diffs to
+@@ -262,20 +262,20 @@ mailing list about kernel releases:
+ 	released according to perceived bug status, not according to a
+ 	preconceived timeline."
+ 
+-3.x.y -stable kernel tree
++4.x.y -stable kernel tree
+ ---------------------------
+ Kernels with 3-part versions are -stable kernels. They contain
+ relatively small and critical fixes for security problems or significant
+-regressions discovered in a given 3.x kernel.
++regressions discovered in a given 4.x kernel.
+ 
+ This is the recommended branch for users who want the most recent stable
+ kernel and are not interested in helping test development/experimental
+ versions.
+ 
+-If no 3.x.y kernel is available, then the highest numbered 3.x
++If no 4.x.y kernel is available, then the highest numbered 4.x
+ kernel is the current stable kernel.
+ 
+-3.x.y are maintained by the "stable" team <stable@vger.kernel.org>, and
++4.x.y are maintained by the "stable" team <stable@vger.kernel.org>, and
+ are released as needs dictate.  The normal release period is approximately
+ two weeks, but it can be longer if there are no pressing problems.  A
+ security-related problem, instead, can cause a release to happen almost
+@@ -285,7 +285,7 @@ The file Documentation/stable_kernel_rules.txt in the kernel tree
+ documents what kinds of changes are acceptable for the -stable tree, and
+ how the release process works.
+ 
+-3.x -git patches
++4.x -git patches
+ ------------------
+ These are daily snapshots of Linus' kernel tree which are managed in a
+ git repository (hence the name.) These patches are usually released
+@@ -317,9 +317,9 @@ revisions to it, and maintainers can mark patches as under review,
+ accepted, or rejected.  Most of these patchwork sites are listed at
+ http://patchwork.kernel.org/.
+ 
+-3.x -next kernel tree for integration tests
++4.x -next kernel tree for integration tests
+ ---------------------------------------------
+-Before updates from subsystem trees are merged into the mainline 3.x
++Before updates from subsystem trees are merged into the mainline 4.x
+ tree, they need to be integration-tested.  For this purpose, a special
+ testing repository exists into which virtually all subsystem trees are
+ pulled on an almost daily basis:
+diff --git a/Makefile b/Makefile
+index a6edbb11a69a..a952801a6cd5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 2
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arc/plat-axs10x/axs10x.c b/arch/arc/plat-axs10x/axs10x.c
+index e7769c3ab5f2..ac79491ee2c0 100644
+--- a/arch/arc/plat-axs10x/axs10x.c
++++ b/arch/arc/plat-axs10x/axs10x.c
+@@ -402,6 +402,8 @@ static void __init axs103_early_init(void)
+ 	unsigned int num_cores = (read_aux_reg(ARC_REG_MCIP_BCR) >> 16) & 0x3F;
+ 	if (num_cores > 2)
+ 		arc_set_core_freq(50 * 1000000);
++	else if (num_cores == 2)
++		arc_set_core_freq(75 * 1000000);
+ #endif
+ 
+ 	switch (arc_get_core_freq()/1000000) {
+diff --git a/arch/arm/Makefile b/arch/arm/Makefile
+index 7451b447cc2d..2c2b28ee4811 100644
+--- a/arch/arm/Makefile
++++ b/arch/arm/Makefile
+@@ -54,6 +54,14 @@ AS		+= -EL
+ LD		+= -EL
+ endif
+ 
++#
++# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
++# later may result in code being generated that handles signed short and signed
++# char struct members incorrectly. So disable it.
++# (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65932)
++#
++KBUILD_CFLAGS	+= $(call cc-option,-fno-ipa-sra)
++
+ # This selects which instruction set is used.
+ # Note that GCC does not numerically define an architecture version
+ # macro, but instead defines a whole series of macros which makes
+diff --git a/arch/arm/boot/dts/exynos5420.dtsi b/arch/arm/boot/dts/exynos5420.dtsi
+index 534f27ceb10b..fa8107dec109 100644
+--- a/arch/arm/boot/dts/exynos5420.dtsi
++++ b/arch/arm/boot/dts/exynos5420.dtsi
+@@ -1118,7 +1118,7 @@
+ 		interrupt-parent = <&combiner>;
+ 		interrupts = <3 0>;
+ 		clock-names = "sysmmu", "master";
+-		clocks = <&clock CLK_SMMU_FIMD1M0>, <&clock CLK_FIMD1>;
++		clocks = <&clock CLK_SMMU_FIMD1M1>, <&clock CLK_FIMD1>;
+ 		power-domains = <&disp_pd>;
+ 		#iommu-cells = <0>;
+ 	};
+diff --git a/arch/arm/boot/dts/imx6qdl-rex.dtsi b/arch/arm/boot/dts/imx6qdl-rex.dtsi
+index 3373fd958e95..a50356243888 100644
+--- a/arch/arm/boot/dts/imx6qdl-rex.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-rex.dtsi
+@@ -35,7 +35,6 @@
+ 			compatible = "regulator-fixed";
+ 			reg = <1>;
+ 			pinctrl-names = "default";
+-			pinctrl-0 = <&pinctrl_usbh1>;
+ 			regulator-name = "usbh1_vbus";
+ 			regulator-min-microvolt = <5000000>;
+ 			regulator-max-microvolt = <5000000>;
+@@ -47,7 +46,6 @@
+ 			compatible = "regulator-fixed";
+ 			reg = <2>;
+ 			pinctrl-names = "default";
+-			pinctrl-0 = <&pinctrl_usbotg>;
+ 			regulator-name = "usb_otg_vbus";
+ 			regulator-min-microvolt = <5000000>;
+ 			regulator-max-microvolt = <5000000>;
+diff --git a/arch/arm/boot/dts/omap3-beagle.dts b/arch/arm/boot/dts/omap3-beagle.dts
+index a5474113cd50..67659a0ed13e 100644
+--- a/arch/arm/boot/dts/omap3-beagle.dts
++++ b/arch/arm/boot/dts/omap3-beagle.dts
+@@ -202,7 +202,7 @@
+ 
+ 	tfp410_pins: pinmux_tfp410_pins {
+ 		pinctrl-single,pins = <
+-			0x194 (PIN_OUTPUT | MUX_MODE4)	/* hdq_sio.gpio_170 */
++			0x196 (PIN_OUTPUT | MUX_MODE4)	/* hdq_sio.gpio_170 */
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/omap5-uevm.dts b/arch/arm/boot/dts/omap5-uevm.dts
+index 275618f19a43..5771a149ce4a 100644
+--- a/arch/arm/boot/dts/omap5-uevm.dts
++++ b/arch/arm/boot/dts/omap5-uevm.dts
+@@ -174,8 +174,8 @@
+ 
+ 	i2c5_pins: pinmux_i2c5_pins {
+ 		pinctrl-single,pins = <
+-			0x184 (PIN_INPUT | MUX_MODE0)		/* i2c5_scl */
+-			0x186 (PIN_INPUT | MUX_MODE0)		/* i2c5_sda */
++			0x186 (PIN_INPUT | MUX_MODE0)		/* i2c5_scl */
++			0x188 (PIN_INPUT | MUX_MODE0)		/* i2c5_sda */
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/sun7i-a20.dtsi b/arch/arm/boot/dts/sun7i-a20.dtsi
+index 6a63f30c9a69..f5f384c04335 100644
+--- a/arch/arm/boot/dts/sun7i-a20.dtsi
++++ b/arch/arm/boot/dts/sun7i-a20.dtsi
+@@ -107,7 +107,7 @@
+ 				720000	1200000
+ 				528000	1100000
+ 				312000	1000000
+-				144000	900000
++				144000	1000000
+ 				>;
+ 			#cooling-cells = <2>;
+ 			cooling-min-level = <0>;
+diff --git a/arch/arm/kernel/kgdb.c b/arch/arm/kernel/kgdb.c
+index a6ad93c9bce3..fd9eefce0a7b 100644
+--- a/arch/arm/kernel/kgdb.c
++++ b/arch/arm/kernel/kgdb.c
+@@ -259,15 +259,17 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
+ 	if (err)
+ 		return err;
+ 
+-	patch_text((void *)bpt->bpt_addr,
+-		   *(unsigned int *)arch_kgdb_ops.gdb_bpt_instr);
++	/* Machine is already stopped, so we can use __patch_text() directly */
++	__patch_text((void *)bpt->bpt_addr,
++		     *(unsigned int *)arch_kgdb_ops.gdb_bpt_instr);
+ 
+ 	return err;
+ }
+ 
+ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
+ {
+-	patch_text((void *)bpt->bpt_addr, *(unsigned int *)bpt->saved_instr);
++	/* Machine is already stopped, so we can use __patch_text() directly */
++	__patch_text((void *)bpt->bpt_addr, *(unsigned int *)bpt->saved_instr);
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
+index 54272e0be713..7d5379c1c443 100644
+--- a/arch/arm/kernel/perf_event.c
++++ b/arch/arm/kernel/perf_event.c
+@@ -795,8 +795,10 @@ static int of_pmu_irq_cfg(struct arm_pmu *pmu)
+ 
+ 	/* Don't bother with PPIs; they're already affine */
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq >= 0 && irq_is_percpu(irq))
++	if (irq >= 0 && irq_is_percpu(irq)) {
++		cpumask_setall(&pmu->supported_cpus);
+ 		return 0;
++	}
+ 
+ 	irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);
+ 	if (!irqs)
+diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
+index 423663e23791..586eef26203d 100644
+--- a/arch/arm/kernel/signal.c
++++ b/arch/arm/kernel/signal.c
+@@ -343,12 +343,17 @@ setup_return(struct pt_regs *regs, struct ksignal *ksig,
+ 		 */
+ 		thumb = handler & 1;
+ 
+-#if __LINUX_ARM_ARCH__ >= 7
++#if __LINUX_ARM_ARCH__ >= 6
+ 		/*
+-		 * Clear the If-Then Thumb-2 execution state
+-		 * ARM spec requires this to be all 000s in ARM mode
+-		 * Snapdragon S4/Krait misbehaves on a Thumb=>ARM
+-		 * signal transition without this.
++		 * Clear the If-Then Thumb-2 execution state.  ARM spec
++		 * requires this to be all 000s in ARM mode.  Snapdragon
++		 * S4/Krait misbehaves on a Thumb=>ARM signal transition
++		 * without this.
++		 *
++		 * We must do this whenever we are running on a Thumb-2
++		 * capable CPU, which includes ARMv6T2.  However, we elect
++		 * to do this whenever we're on an ARMv6 or later CPU for
++		 * simplicity.
+ 		 */
+ 		cpsr &= ~PSR_IT_MASK;
+ #endif
+diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
+index 702740d37465..51a59504bef4 100644
+--- a/arch/arm/kvm/interrupts_head.S
++++ b/arch/arm/kvm/interrupts_head.S
+@@ -515,8 +515,7 @@ ARM_BE8(rev	r6, r6  )
+ 
+ 	mrc	p15, 0, r2, c14, c3, 1	@ CNTV_CTL
+ 	str	r2, [vcpu, #VCPU_TIMER_CNTV_CTL]
+-	bic	r2, #1			@ Clear ENABLE
+-	mcr	p15, 0, r2, c14, c3, 1	@ CNTV_CTL
++
+ 	isb
+ 
+ 	mrrc	p15, 3, rr_lo_hi(r2, r3), c14	@ CNTV_CVAL
+@@ -529,6 +528,9 @@ ARM_BE8(rev	r6, r6  )
+ 	mcrr	p15, 4, r2, r2, c14	@ CNTVOFF
+ 
+ 1:
++	mov	r2, #0			@ Clear ENABLE
++	mcr	p15, 0, r2, c14, c3, 1	@ CNTV_CTL
++
+ 	@ Allow physical timer/counter access for the host
+ 	mrc	p15, 4, r2, c14, c1, 0	@ CNTHCTL
+ 	orr	r2, r2, #(CNTHCTL_PL1PCEN | CNTHCTL_PL1PCTEN)
+diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
+index 7b4201294187..6984342da13d 100644
+--- a/arch/arm/kvm/mmu.c
++++ b/arch/arm/kvm/mmu.c
+@@ -1792,8 +1792,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ 		if (vma->vm_flags & VM_PFNMAP) {
+ 			gpa_t gpa = mem->guest_phys_addr +
+ 				    (vm_start - mem->userspace_addr);
+-			phys_addr_t pa = (vma->vm_pgoff << PAGE_SHIFT) +
+-					 vm_start - vma->vm_start;
++			phys_addr_t pa;
++
++			pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
++			pa += vm_start - vma->vm_start;
+ 
+ 			/* IO region dirty page logging not allowed */
+ 			if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES)
+diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
+index 9bdf54795f05..56978199c479 100644
+--- a/arch/arm/mach-exynos/mcpm-exynos.c
++++ b/arch/arm/mach-exynos/mcpm-exynos.c
+@@ -20,6 +20,7 @@
+ #include <asm/cputype.h>
+ #include <asm/cp15.h>
+ #include <asm/mcpm.h>
++#include <asm/smp_plat.h>
+ 
+ #include "regs-pmu.h"
+ #include "common.h"
+@@ -70,7 +71,31 @@ static int exynos_cpu_powerup(unsigned int cpu, unsigned int cluster)
+ 		cluster >= EXYNOS5420_NR_CLUSTERS)
+ 		return -EINVAL;
+ 
+-	exynos_cpu_power_up(cpunr);
++	if (!exynos_cpu_power_state(cpunr)) {
++		exynos_cpu_power_up(cpunr);
++
++		/*
++		 * This assumes the cluster number of the big cores(Cortex A15)
++		 * is 0 and the Little cores(Cortex A7) is 1.
++		 * When the system was booted from the Little core,
++		 * they should be reset during power up cpu.
++		 */
++		if (cluster &&
++		    cluster == MPIDR_AFFINITY_LEVEL(cpu_logical_map(0), 1)) {
++			/*
++			 * Before we reset the Little cores, we should wait
++			 * the SPARE2 register is set to 1 because the init
++			 * codes of the iROM will set the register after
++			 * initialization.
++			 */
++			while (!pmu_raw_readl(S5P_PMU_SPARE2))
++				udelay(10);
++
++			pmu_raw_writel(EXYNOS5420_KFC_CORE_RESET(cpu),
++					EXYNOS_SWRESET);
++		}
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm/mach-exynos/regs-pmu.h b/arch/arm/mach-exynos/regs-pmu.h
+index b7614333d296..fba9068ed260 100644
+--- a/arch/arm/mach-exynos/regs-pmu.h
++++ b/arch/arm/mach-exynos/regs-pmu.h
+@@ -513,6 +513,12 @@ static inline unsigned int exynos_pmu_cpunr(unsigned int mpidr)
+ #define SPREAD_ENABLE						0xF
+ #define SPREAD_USE_STANDWFI					0xF
+ 
++#define EXYNOS5420_KFC_CORE_RESET0				BIT(8)
++#define EXYNOS5420_KFC_ETM_RESET0				BIT(20)
++
++#define EXYNOS5420_KFC_CORE_RESET(_nr)				\
++	((EXYNOS5420_KFC_CORE_RESET0 | EXYNOS5420_KFC_ETM_RESET0) << (_nr))
++
+ #define EXYNOS5420_BB_CON1					0x0784
+ #define EXYNOS5420_BB_SEL_EN					BIT(31)
+ #define EXYNOS5420_BB_PMOS_EN					BIT(7)
+diff --git a/arch/arm/plat-pxa/ssp.c b/arch/arm/plat-pxa/ssp.c
+index ad9529cc4203..daa1a65f2eb7 100644
+--- a/arch/arm/plat-pxa/ssp.c
++++ b/arch/arm/plat-pxa/ssp.c
+@@ -107,7 +107,6 @@ static const struct of_device_id pxa_ssp_of_ids[] = {
+ 	{ .compatible = "mvrl,pxa168-ssp",	.data = (void *) PXA168_SSP },
+ 	{ .compatible = "mrvl,pxa910-ssp",	.data = (void *) PXA910_SSP },
+ 	{ .compatible = "mrvl,ce4100-ssp",	.data = (void *) CE4100_SSP },
+-	{ .compatible = "mrvl,lpss-ssp",	.data = (void *) LPSS_SSP },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(of, pxa_ssp_of_ids);
+diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
+index e8ca6eaedd02..13671a9cf016 100644
+--- a/arch/arm64/kernel/efi.c
++++ b/arch/arm64/kernel/efi.c
+@@ -258,7 +258,8 @@ static bool __init efi_virtmap_init(void)
+ 		 */
+ 		if (!is_normal_ram(md))
+ 			prot = __pgprot(PROT_DEVICE_nGnRE);
+-		else if (md->type == EFI_RUNTIME_SERVICES_CODE)
++		else if (md->type == EFI_RUNTIME_SERVICES_CODE ||
++			 !PAGE_ALIGNED(md->phys_addr))
+ 			prot = PAGE_KERNEL_EXEC;
+ 		else
+ 			prot = PAGE_KERNEL;
+diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
+index 08cafc518b9a..0f03a8fe2314 100644
+--- a/arch/arm64/kernel/entry-ftrace.S
++++ b/arch/arm64/kernel/entry-ftrace.S
+@@ -178,6 +178,24 @@ ENTRY(ftrace_stub)
+ ENDPROC(ftrace_stub)
+ 
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
++	/* save return value regs*/
++	.macro save_return_regs
++	sub sp, sp, #64
++	stp x0, x1, [sp]
++	stp x2, x3, [sp, #16]
++	stp x4, x5, [sp, #32]
++	stp x6, x7, [sp, #48]
++	.endm
++
++	/* restore return value regs*/
++	.macro restore_return_regs
++	ldp x0, x1, [sp]
++	ldp x2, x3, [sp, #16]
++	ldp x4, x5, [sp, #32]
++	ldp x6, x7, [sp, #48]
++	add sp, sp, #64
++	.endm
++
+ /*
+  * void ftrace_graph_caller(void)
+  *
+@@ -204,11 +222,11 @@ ENDPROC(ftrace_graph_caller)
+  * only when CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST is enabled.
+  */
+ ENTRY(return_to_handler)
+-	str	x0, [sp, #-16]!
++	save_return_regs
+ 	mov	x0, x29			//     parent's fp
+ 	bl	ftrace_return_to_handler// addr = ftrace_return_to_hander(fp);
+ 	mov	x30, x0			// restore the original return address
+-	ldr	x0, [sp], #16
++	restore_return_regs
+ 	ret
+ END(return_to_handler)
+ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 94d98cd1aad8..27c3e6fd24c1 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -278,6 +278,7 @@ retry:
+ 			 * starvation.
+ 			 */
+ 			mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
++			mm_flags |= FAULT_FLAG_TRIED;
+ 			goto retry;
+ 		}
+ 	}
+diff --git a/arch/m68k/include/asm/linkage.h b/arch/m68k/include/asm/linkage.h
+index 5a822bb790f7..066e74f666ae 100644
+--- a/arch/m68k/include/asm/linkage.h
++++ b/arch/m68k/include/asm/linkage.h
+@@ -4,4 +4,34 @@
+ #define __ALIGN .align 4
+ #define __ALIGN_STR ".align 4"
+ 
++/*
++ * Make sure the compiler doesn't do anything stupid with the
++ * arguments on the stack - they are owned by the *caller*, not
++ * the callee. This just fools gcc into not spilling into them,
++ * and keeps it from doing tailcall recursion and/or using the
++ * stack slots for temporaries, since they are live and "used"
++ * all the way to the end of the function.
++ */
++#define asmlinkage_protect(n, ret, args...) \
++	__asmlinkage_protect##n(ret, ##args)
++#define __asmlinkage_protect_n(ret, args...) \
++	__asm__ __volatile__ ("" : "=r" (ret) : "0" (ret), ##args)
++#define __asmlinkage_protect0(ret) \
++	__asmlinkage_protect_n(ret)
++#define __asmlinkage_protect1(ret, arg1) \
++	__asmlinkage_protect_n(ret, "m" (arg1))
++#define __asmlinkage_protect2(ret, arg1, arg2) \
++	__asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2))
++#define __asmlinkage_protect3(ret, arg1, arg2, arg3) \
++	__asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3))
++#define __asmlinkage_protect4(ret, arg1, arg2, arg3, arg4) \
++	__asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
++			      "m" (arg4))
++#define __asmlinkage_protect5(ret, arg1, arg2, arg3, arg4, arg5) \
++	__asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
++			      "m" (arg4), "m" (arg5))
++#define __asmlinkage_protect6(ret, arg1, arg2, arg3, arg4, arg5, arg6) \
++	__asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
++			      "m" (arg4), "m" (arg5), "m" (arg6))
++
+ #endif
+diff --git a/arch/mips/kernel/cps-vec.S b/arch/mips/kernel/cps-vec.S
+index 9f71c06aebf6..209ded16806b 100644
+--- a/arch/mips/kernel/cps-vec.S
++++ b/arch/mips/kernel/cps-vec.S
+@@ -39,6 +39,7 @@
+ 	 mfc0	\dest, CP0_CONFIG, 3
+ 	andi	\dest, \dest, MIPS_CONF3_MT
+ 	beqz	\dest, \nomt
++	 nop
+ 	.endm
+ 
+ .section .text.cps-vec
+@@ -223,10 +224,9 @@ LEAF(excep_ejtag)
+ 	END(excep_ejtag)
+ 
+ LEAF(mips_cps_core_init)
+-#ifdef CONFIG_MIPS_MT
++#ifdef CONFIG_MIPS_MT_SMP
+ 	/* Check that the core implements the MT ASE */
+ 	has_mt	t0, 3f
+-	 nop
+ 
+ 	.set	push
+ 	.set	mips64r2
+@@ -310,8 +310,9 @@ LEAF(mips_cps_boot_vpes)
+ 	PTR_ADDU t0, t0, t1
+ 
+ 	/* Calculate this VPEs ID. If the core doesn't support MT use 0 */
++	li	t9, 0
++#ifdef CONFIG_MIPS_MT_SMP
+ 	has_mt	ta2, 1f
+-	 li	t9, 0
+ 
+ 	/* Find the number of VPEs present in the core */
+ 	mfc0	t1, CP0_MVPCONF0
+@@ -330,6 +331,7 @@ LEAF(mips_cps_boot_vpes)
+ 	/* Retrieve the VPE ID from EBase.CPUNum */
+ 	mfc0	t9, $15, 1
+ 	and	t9, t9, t1
++#endif
+ 
+ 1:	/* Calculate a pointer to this VPEs struct vpe_boot_config */
+ 	li	t1, VPEBOOTCFG_SIZE
+@@ -337,7 +339,7 @@ LEAF(mips_cps_boot_vpes)
+ 	PTR_L	ta3, COREBOOTCFG_VPECONFIG(t0)
+ 	PTR_ADDU v0, v0, ta3
+ 
+-#ifdef CONFIG_MIPS_MT
++#ifdef CONFIG_MIPS_MT_SMP
+ 
+ 	/* If the core doesn't support MT then return */
+ 	bnez	ta2, 1f
+@@ -451,7 +453,7 @@ LEAF(mips_cps_boot_vpes)
+ 
+ 2:	.set	pop
+ 
+-#endif /* CONFIG_MIPS_MT */
++#endif /* CONFIG_MIPS_MT_SMP */
+ 
+ 	/* Return */
+ 	jr	ra
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index 008b3378653a..4ceac5cdd6b8 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -338,7 +338,7 @@ static void __init bootmem_init(void)
+ 		if (end <= reserved_end)
+ 			continue;
+ #ifdef CONFIG_BLK_DEV_INITRD
+-		/* mapstart should be after initrd_end */
++		/* Skip zones before initrd and initrd itself */
+ 		if (initrd_end && end <= (unsigned long)PFN_UP(__pa(initrd_end)))
+ 			continue;
+ #endif
+@@ -371,6 +371,14 @@ static void __init bootmem_init(void)
+ 		max_low_pfn = PFN_DOWN(HIGHMEM_START);
+ 	}
+ 
++#ifdef CONFIG_BLK_DEV_INITRD
++	/*
++	 * mapstart should be after initrd_end
++	 */
++	if (initrd_end)
++		mapstart = max(mapstart, (unsigned long)PFN_UP(__pa(initrd_end)));
++#endif
++
+ 	/*
+ 	 * Initialize the boot-time allocator with low memory only.
+ 	 */
+diff --git a/arch/mips/loongson64/common/env.c b/arch/mips/loongson64/common/env.c
+index f6c44dd332e2..d6d07ad56180 100644
+--- a/arch/mips/loongson64/common/env.c
++++ b/arch/mips/loongson64/common/env.c
+@@ -64,6 +64,9 @@ void __init prom_init_env(void)
+ 	}
+ 	if (memsize == 0)
+ 		memsize = 256;
++
++	loongson_sysconf.nr_uarts = 1;
++
+ 	pr_info("memsize=%u, highmemsize=%u\n", memsize, highmemsize);
+ #else
+ 	struct boot_params *boot_p;
+diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c
+index eeaf0245c3b1..815892ed3fe8 100644
+--- a/arch/mips/mm/dma-default.c
++++ b/arch/mips/mm/dma-default.c
+@@ -100,7 +100,7 @@ static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)
+ 	else
+ #endif
+ #if defined(CONFIG_ZONE_DMA) && !defined(CONFIG_ZONE_DMA32)
+-	     if (dev->coherent_dma_mask < DMA_BIT_MASK(64))
++	     if (dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8))
+ 		dma_flag = __GFP_DMA;
+ 	else
+ #endif
+diff --git a/arch/mips/net/bpf_jit_asm.S b/arch/mips/net/bpf_jit_asm.S
+index e92726099be0..dabf4179cd7e 100644
+--- a/arch/mips/net/bpf_jit_asm.S
++++ b/arch/mips/net/bpf_jit_asm.S
+@@ -64,8 +64,20 @@ sk_load_word_positive:
+ 	PTR_ADDU t1, $r_skb_data, offset
+ 	lw	$r_A, 0(t1)
+ #ifdef CONFIG_CPU_LITTLE_ENDIAN
++# if defined(__mips_isa_rev) && (__mips_isa_rev >= 2)
+ 	wsbh	t0, $r_A
+ 	rotr	$r_A, t0, 16
++# else
++	sll	t0, $r_A, 24
++	srl	t1, $r_A, 24
++	srl	t2, $r_A, 8
++	or	t0, t0, t1
++	andi	t2, t2, 0xff00
++	andi	t1, $r_A, 0xff00
++	or	t0, t0, t2
++	sll	t1, t1, 8
++	or	$r_A, t0, t1
++# endif
+ #endif
+ 	jr	$r_ra
+ 	 move	$r_ret, zero
+@@ -80,8 +92,16 @@ sk_load_half_positive:
+ 	PTR_ADDU t1, $r_skb_data, offset
+ 	lh	$r_A, 0(t1)
+ #ifdef CONFIG_CPU_LITTLE_ENDIAN
++# if defined(__mips_isa_rev) && (__mips_isa_rev >= 2)
+ 	wsbh	t0, $r_A
+ 	seh	$r_A, t0
++# else
++	sll	t0, $r_A, 24
++	andi	t1, $r_A, 0xff00
++	sra	t0, t0, 16
++	srl	t1, t1, 8
++	or	$r_A, t0, t1
++# endif
+ #endif
+ 	jr	$r_ra
+ 	 move	$r_ret, zero
+@@ -148,23 +168,47 @@ sk_load_byte_positive:
+ NESTED(bpf_slow_path_word, (6 * SZREG), $r_sp)
+ 	bpf_slow_path_common(4)
+ #ifdef CONFIG_CPU_LITTLE_ENDIAN
++# if defined(__mips_isa_rev) && (__mips_isa_rev >= 2)
+ 	wsbh	t0, $r_s0
+ 	jr	$r_ra
+ 	 rotr	$r_A, t0, 16
+-#endif
++# else
++	sll	t0, $r_s0, 24
++	srl	t1, $r_s0, 24
++	srl	t2, $r_s0, 8
++	or	t0, t0, t1
++	andi	t2, t2, 0xff00
++	andi	t1, $r_s0, 0xff00
++	or	t0, t0, t2
++	sll	t1, t1, 8
++	jr	$r_ra
++	 or	$r_A, t0, t1
++# endif
++#else
+ 	jr	$r_ra
+-	move	$r_A, $r_s0
++	 move	$r_A, $r_s0
++#endif
+ 
+ 	END(bpf_slow_path_word)
+ 
+ NESTED(bpf_slow_path_half, (6 * SZREG), $r_sp)
+ 	bpf_slow_path_common(2)
+ #ifdef CONFIG_CPU_LITTLE_ENDIAN
++# if defined(__mips_isa_rev) && (__mips_isa_rev >= 2)
+ 	jr	$r_ra
+ 	 wsbh	$r_A, $r_s0
+-#endif
++# else
++	sll	t0, $r_s0, 8
++	andi	t1, $r_s0, 0xff00
++	andi	t0, t0, 0xff00
++	srl	t1, t1, 8
++	jr	$r_ra
++	 or	$r_A, t0, t1
++# endif
++#else
+ 	jr	$r_ra
+ 	 move	$r_A, $r_s0
++#endif
+ 
+ 	END(bpf_slow_path_half)
+ 
+diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
+index 05ea8fc7f829..4816fe2fa857 100644
+--- a/arch/powerpc/kvm/book3s.c
++++ b/arch/powerpc/kvm/book3s.c
+@@ -827,12 +827,15 @@ int kvmppc_h_logical_ci_load(struct kvm_vcpu *vcpu)
+ 	unsigned long size = kvmppc_get_gpr(vcpu, 4);
+ 	unsigned long addr = kvmppc_get_gpr(vcpu, 5);
+ 	u64 buf;
++	int srcu_idx;
+ 	int ret;
+ 
+ 	if (!is_power_of_2(size) || (size > sizeof(buf)))
+ 		return H_TOO_HARD;
+ 
++	srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
+ 	ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, size, &buf);
++	srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
+ 	if (ret != 0)
+ 		return H_TOO_HARD;
+ 
+@@ -867,6 +870,7 @@ int kvmppc_h_logical_ci_store(struct kvm_vcpu *vcpu)
+ 	unsigned long addr = kvmppc_get_gpr(vcpu, 5);
+ 	unsigned long val = kvmppc_get_gpr(vcpu, 6);
+ 	u64 buf;
++	int srcu_idx;
+ 	int ret;
+ 
+ 	switch (size) {
+@@ -890,7 +894,9 @@ int kvmppc_h_logical_ci_store(struct kvm_vcpu *vcpu)
+ 		return H_TOO_HARD;
+ 	}
+ 
++	srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
+ 	ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, addr, size, &buf);
++	srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
+ 	if (ret != 0)
+ 		return H_TOO_HARD;
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 68d067ad4222..a9f753fb73a8 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -2178,7 +2178,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
+ 		vc->runner = vcpu;
+ 		if (n_ceded == vc->n_runnable) {
+ 			kvmppc_vcore_blocked(vc);
+-		} else if (should_resched()) {
++		} else if (need_resched()) {
+ 			vc->vcore_state = VCORE_PREEMPT;
+ 			/* Let something else run */
+ 			cond_resched_lock(&vc->lock);
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 76408cf0ad04..437f64350847 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -1171,6 +1171,7 @@ mc_cont:
+ 	bl	kvmhv_accumulate_time
+ #endif
+ 
++	mr 	r3, r12
+ 	/* Increment exit count, poke other threads to exit */
+ 	bl	kvmhv_commence_exit
+ 	nop
+diff --git a/arch/powerpc/platforms/pasemi/msi.c b/arch/powerpc/platforms/pasemi/msi.c
+index 27f2b187a91b..ff1bb4b690b9 100644
+--- a/arch/powerpc/platforms/pasemi/msi.c
++++ b/arch/powerpc/platforms/pasemi/msi.c
+@@ -63,6 +63,7 @@ static struct irq_chip mpic_pasemi_msi_chip = {
+ static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev)
+ {
+ 	struct msi_desc *entry;
++	irq_hw_number_t hwirq;
+ 
+ 	pr_debug("pasemi_msi_teardown_msi_irqs, pdev %p\n", pdev);
+ 
+@@ -70,10 +71,10 @@ static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev)
+ 		if (entry->irq == NO_IRQ)
+ 			continue;
+ 
++		hwirq = virq_to_hw(entry->irq);
+ 		irq_set_msi_desc(entry->irq, NULL);
+-		msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap,
+-				       virq_to_hw(entry->irq), ALLOC_CHUNK);
+ 		irq_dispose_mapping(entry->irq);
++		msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap, hwirq, ALLOC_CHUNK);
+ 	}
+ 
+ 	return;
+diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
+index 765d8ed558d0..fd16f86e54a9 100644
+--- a/arch/powerpc/platforms/powernv/pci.c
++++ b/arch/powerpc/platforms/powernv/pci.c
+@@ -99,6 +99,7 @@ void pnv_teardown_msi_irqs(struct pci_dev *pdev)
+ 	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
+ 	struct pnv_phb *phb = hose->private_data;
+ 	struct msi_desc *entry;
++	irq_hw_number_t hwirq;
+ 
+ 	if (WARN_ON(!phb))
+ 		return;
+@@ -106,10 +107,10 @@ void pnv_teardown_msi_irqs(struct pci_dev *pdev)
+ 	list_for_each_entry(entry, &pdev->msi_list, list) {
+ 		if (entry->irq == NO_IRQ)
+ 			continue;
++		hwirq = virq_to_hw(entry->irq);
+ 		irq_set_msi_desc(entry->irq, NULL);
+-		msi_bitmap_free_hwirqs(&phb->msi_bmp,
+-			virq_to_hw(entry->irq) - phb->msi_base, 1);
+ 		irq_dispose_mapping(entry->irq);
++		msi_bitmap_free_hwirqs(&phb->msi_bmp, hwirq - phb->msi_base, 1);
+ 	}
+ }
+ #endif /* CONFIG_PCI_MSI */
+diff --git a/arch/powerpc/sysdev/fsl_msi.c b/arch/powerpc/sysdev/fsl_msi.c
+index 5236e5427c38..691e8e517b3e 100644
+--- a/arch/powerpc/sysdev/fsl_msi.c
++++ b/arch/powerpc/sysdev/fsl_msi.c
+@@ -128,15 +128,16 @@ static void fsl_teardown_msi_irqs(struct pci_dev *pdev)
+ {
+ 	struct msi_desc *entry;
+ 	struct fsl_msi *msi_data;
++	irq_hw_number_t hwirq;
+ 
+ 	list_for_each_entry(entry, &pdev->msi_list, list) {
+ 		if (entry->irq == NO_IRQ)
+ 			continue;
++		hwirq = virq_to_hw(entry->irq);
+ 		msi_data = irq_get_chip_data(entry->irq);
+ 		irq_set_msi_desc(entry->irq, NULL);
+-		msi_bitmap_free_hwirqs(&msi_data->bitmap,
+-				       virq_to_hw(entry->irq), 1);
+ 		irq_dispose_mapping(entry->irq);
++		msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1);
+ 	}
+ 
+ 	return;
+diff --git a/arch/powerpc/sysdev/mpic_u3msi.c b/arch/powerpc/sysdev/mpic_u3msi.c
+index fc46ef3b816e..4c3165fa521c 100644
+--- a/arch/powerpc/sysdev/mpic_u3msi.c
++++ b/arch/powerpc/sysdev/mpic_u3msi.c
+@@ -107,15 +107,16 @@ static u64 find_u4_magic_addr(struct pci_dev *pdev, unsigned int hwirq)
+ static void u3msi_teardown_msi_irqs(struct pci_dev *pdev)
+ {
+ 	struct msi_desc *entry;
++	irq_hw_number_t hwirq;
+ 
+         list_for_each_entry(entry, &pdev->msi_list, list) {
+ 		if (entry->irq == NO_IRQ)
+ 			continue;
+ 
++		hwirq = virq_to_hw(entry->irq);
+ 		irq_set_msi_desc(entry->irq, NULL);
+-		msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap,
+-				       virq_to_hw(entry->irq), 1);
+ 		irq_dispose_mapping(entry->irq);
++		msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap, hwirq, 1);
+ 	}
+ 
+ 	return;
+diff --git a/arch/powerpc/sysdev/ppc4xx_msi.c b/arch/powerpc/sysdev/ppc4xx_msi.c
+index 6eb21f2ea585..060f23775255 100644
+--- a/arch/powerpc/sysdev/ppc4xx_msi.c
++++ b/arch/powerpc/sysdev/ppc4xx_msi.c
+@@ -124,16 +124,17 @@ void ppc4xx_teardown_msi_irqs(struct pci_dev *dev)
+ {
+ 	struct msi_desc *entry;
+ 	struct ppc4xx_msi *msi_data = &ppc4xx_msi;
++	irq_hw_number_t hwirq;
+ 
+ 	dev_dbg(&dev->dev, "PCIE-MSI: tearing down msi irqs\n");
+ 
+ 	list_for_each_entry(entry, &dev->msi_list, list) {
+ 		if (entry->irq == NO_IRQ)
+ 			continue;
++		hwirq = virq_to_hw(entry->irq);
+ 		irq_set_msi_desc(entry->irq, NULL);
+-		msi_bitmap_free_hwirqs(&msi_data->bitmap,
+-				virq_to_hw(entry->irq), 1);
+ 		irq_dispose_mapping(entry->irq);
++		msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1);
+ 	}
+ }
+ 
+diff --git a/arch/s390/boot/compressed/Makefile b/arch/s390/boot/compressed/Makefile
+index d4788111c161..fac6ac9790fa 100644
+--- a/arch/s390/boot/compressed/Makefile
++++ b/arch/s390/boot/compressed/Makefile
+@@ -10,7 +10,7 @@ targets += misc.o piggy.o sizes.h head.o
+ 
+ KBUILD_CFLAGS := -m64 -D__KERNEL__ $(LINUX_INCLUDE) -O2
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+-KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks
++KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks -msoft-float
+ KBUILD_CFLAGS += $(call cc-option,-mpacked-stack)
+ KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
+ 
+diff --git a/arch/s390/kernel/compat_signal.c b/arch/s390/kernel/compat_signal.c
+index fe8d6924efaa..c78ba51ae285 100644
+--- a/arch/s390/kernel/compat_signal.c
++++ b/arch/s390/kernel/compat_signal.c
+@@ -48,6 +48,19 @@ typedef struct
+ 	struct ucontext32 uc;
+ } rt_sigframe32;
+ 
++static inline void sigset_to_sigset32(unsigned long *set64,
++				      compat_sigset_word *set32)
++{
++	set32[0] = (compat_sigset_word) set64[0];
++	set32[1] = (compat_sigset_word)(set64[0] >> 32);
++}
++
++static inline void sigset32_to_sigset(compat_sigset_word *set32,
++				      unsigned long *set64)
++{
++	set64[0] = (unsigned long) set32[0] | ((unsigned long) set32[1] << 32);
++}
++
+ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
+ {
+ 	int err;
+@@ -303,10 +316,12 @@ COMPAT_SYSCALL_DEFINE0(sigreturn)
+ {
+ 	struct pt_regs *regs = task_pt_regs(current);
+ 	sigframe32 __user *frame = (sigframe32 __user *)regs->gprs[15];
++	compat_sigset_t cset;
+ 	sigset_t set;
+ 
+-	if (__copy_from_user(&set.sig, &frame->sc.oldmask, _SIGMASK_COPY_SIZE32))
++	if (__copy_from_user(&cset.sig, &frame->sc.oldmask, _SIGMASK_COPY_SIZE32))
+ 		goto badframe;
++	sigset32_to_sigset(cset.sig, set.sig);
+ 	set_current_blocked(&set);
+ 	if (restore_sigregs32(regs, &frame->sregs))
+ 		goto badframe;
+@@ -323,10 +338,12 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn)
+ {
+ 	struct pt_regs *regs = task_pt_regs(current);
+ 	rt_sigframe32 __user *frame = (rt_sigframe32 __user *)regs->gprs[15];
++	compat_sigset_t cset;
+ 	sigset_t set;
+ 
+-	if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
++	if (__copy_from_user(&cset, &frame->uc.uc_sigmask, sizeof(cset)))
+ 		goto badframe;
++	sigset32_to_sigset(cset.sig, set.sig);
+ 	set_current_blocked(&set);
+ 	if (compat_restore_altstack(&frame->uc.uc_stack))
+ 		goto badframe;
+@@ -397,7 +414,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set,
+ 		return -EFAULT;
+ 
+ 	/* Create struct sigcontext32 on the signal stack */
+-	memcpy(&sc.oldmask, &set->sig, _SIGMASK_COPY_SIZE32);
++	sigset_to_sigset32(set->sig, sc.oldmask);
+ 	sc.sregs = (__u32)(unsigned long __force) &frame->sregs;
+ 	if (__copy_to_user(&frame->sc, &sc, sizeof(frame->sc)))
+ 		return -EFAULT;
+@@ -458,6 +475,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set,
+ static int setup_rt_frame32(struct ksignal *ksig, sigset_t *set,
+ 			    struct pt_regs *regs)
+ {
++	compat_sigset_t cset;
+ 	rt_sigframe32 __user *frame;
+ 	unsigned long restorer;
+ 	size_t frame_size;
+@@ -505,11 +523,12 @@ static int setup_rt_frame32(struct ksignal *ksig, sigset_t *set,
+ 	store_sigregs();
+ 
+ 	/* Create ucontext on the signal stack. */
++	sigset_to_sigset32(set->sig, cset.sig);
+ 	if (__put_user(uc_flags, &frame->uc.uc_flags) ||
+ 	    __put_user(0, &frame->uc.uc_link) ||
+ 	    __compat_save_altstack(&frame->uc.uc_stack, regs->gprs[15]) ||
+ 	    save_sigregs32(regs, &frame->uc.uc_mcontext) ||
+-	    __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)) ||
++	    __copy_to_user(&frame->uc.uc_sigmask, &cset, sizeof(cset)) ||
+ 	    save_sigregs_ext32(regs, &frame->uc.uc_mcontext_ext))
+ 		return -EFAULT;
+ 
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 8cb3e438f21e..d330840a2b18 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -1219,7 +1219,18 @@ END(error_exit)
+ 
+ /* Runs on exception stack */
+ ENTRY(nmi)
++	/*
++	 * Fix up the exception frame if we're on Xen.
++	 * PARAVIRT_ADJUST_EXCEPTION_FRAME is guaranteed to push at most
++	 * one value to the stack on native, so it may clobber the rdx
++	 * scratch slot, but it won't clobber any of the important
++	 * slots past it.
++	 *
++	 * Xen is a different story, because the Xen frame itself overlaps
++	 * the "NMI executing" variable.
++	 */
+ 	PARAVIRT_ADJUST_EXCEPTION_FRAME
++
+ 	/*
+ 	 * We allow breakpoints in NMIs. If a breakpoint occurs, then
+ 	 * the iretq it performs will take us out of NMI context.
+@@ -1270,9 +1281,12 @@ ENTRY(nmi)
+ 	 * we don't want to enable interrupts, because then we'll end
+ 	 * up in an awkward situation in which IRQs are on but NMIs
+ 	 * are off.
++	 *
++	 * We also must not push anything to the stack before switching
++	 * stacks lest we corrupt the "NMI executing" variable.
+ 	 */
+ 
+-	SWAPGS
++	SWAPGS_UNSAFE_STACK
+ 	cld
+ 	movq	%rsp, %rdx
+ 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 9ebc3d009373..2350ab78183a 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -311,6 +311,7 @@
+ /* C1E active bits in int pending message */
+ #define K8_INTP_C1E_ACTIVE_MASK		0x18000000
+ #define MSR_K8_TSEG_ADDR		0xc0010112
++#define MSR_K8_TSEG_MASK		0xc0010113
+ #define K8_MTRRFIXRANGE_DRAM_ENABLE	0x00040000 /* MtrrFixDramEn bit    */
+ #define K8_MTRRFIXRANGE_DRAM_MODIFY	0x00080000 /* MtrrFixDramModEn bit */
+ #define K8_MTRR_RDMEM_WRMEM_MASK	0x18181818 /* Mask: RdMem|WrMem    */
+diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
+index dca71714f860..b12f81022a6b 100644
+--- a/arch/x86/include/asm/preempt.h
++++ b/arch/x86/include/asm/preempt.h
+@@ -90,9 +90,9 @@ static __always_inline bool __preempt_count_dec_and_test(void)
+ /*
+  * Returns true when we need to resched and can (barring IRQ state).
+  */
+-static __always_inline bool should_resched(void)
++static __always_inline bool should_resched(int preempt_offset)
+ {
+-	return unlikely(!raw_cpu_read_4(__preempt_count));
++	return unlikely(raw_cpu_read_4(__preempt_count) == preempt_offset);
+ }
+ 
+ #ifdef CONFIG_PREEMPT
+diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
+index 9d51fae1cba3..eaba08076030 100644
+--- a/arch/x86/include/asm/qspinlock.h
++++ b/arch/x86/include/asm/qspinlock.h
+@@ -39,18 +39,27 @@ static inline void queued_spin_unlock(struct qspinlock *lock)
+ }
+ #endif
+ 
+-#define virt_queued_spin_lock virt_queued_spin_lock
+-
+-static inline bool virt_queued_spin_lock(struct qspinlock *lock)
++#ifdef CONFIG_PARAVIRT
++#define virt_spin_lock virt_spin_lock
++static inline bool virt_spin_lock(struct qspinlock *lock)
+ {
+ 	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
+ 		return false;
+ 
+-	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
+-		cpu_relax();
++	/*
++	 * On hypervisors without PARAVIRT_SPINLOCKS support we fall
++	 * back to a Test-and-Set spinlock, because fair locks have
++	 * horrible lock 'holder' preemption issues.
++	 */
++
++	do {
++		while (atomic_read(&lock->val) != 0)
++			cpu_relax();
++	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);
+ 
+ 	return true;
+ }
++#endif /* CONFIG_PARAVIRT */
+ 
+ #include <asm-generic/qspinlock.h>
+ 
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index c42827eb86cf..25f909362b7a 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -338,10 +338,15 @@ done:
+ 
+ static void __init_or_module optimize_nops(struct alt_instr *a, u8 *instr)
+ {
++	unsigned long flags;
++
+ 	if (instr[0] != 0x90)
+ 		return;
+ 
++	local_irq_save(flags);
+ 	add_nops(instr + (a->instrlen - a->padlen), a->padlen);
++	sync_core();
++	local_irq_restore(flags);
+ 
+ 	DUMP_BYTES(instr, a->instrlen, "%p: [%d:%d) optimized NOPs: ",
+ 		   instr, a->instrlen - a->padlen, a->padlen);
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index cde732c1b495..307a49828826 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -336,6 +336,13 @@ static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
+ 	apic_write(APIC_LVTT, lvtt_value);
+ 
+ 	if (lvtt_value & APIC_LVT_TIMER_TSCDEADLINE) {
++		/*
++		 * See Intel SDM: TSC-Deadline Mode chapter. In xAPIC mode,
++		 * writing to the APIC LVTT and TSC_DEADLINE MSR isn't serialized.
++		 * According to Intel, MFENCE can do the serialization here.
++		 */
++		asm volatile("mfence" : : : "memory");
++
+ 		printk_once(KERN_DEBUG "TSC deadline timer enabled\n");
+ 		return;
+ 	}
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 206052e55517..5880b482d83c 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -2522,6 +2522,7 @@ void __init setup_ioapic_dest(void)
+ 	int pin, ioapic, irq, irq_entry;
+ 	const struct cpumask *mask;
+ 	struct irq_data *idata;
++	struct irq_chip *chip;
+ 
+ 	if (skip_ioapic_setup == 1)
+ 		return;
+@@ -2545,9 +2546,9 @@ void __init setup_ioapic_dest(void)
+ 		else
+ 			mask = apic->target_cpus();
+ 
+-		irq_set_affinity(irq, mask);
++		chip = irq_data_get_irq_chip(idata);
++		chip->irq_set_affinity(idata, mask, false);
+ 	}
+-
+ }
+ #endif
+ 
+diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
+index 6326ae24e4d5..1b09c420c7ff 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel.c
++++ b/arch/x86/kernel/cpu/perf_event_intel.c
+@@ -2102,9 +2102,12 @@ static struct event_constraint *
+ intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ 			    struct perf_event *event)
+ {
+-	struct event_constraint *c1 = cpuc->event_constraint[idx];
++	struct event_constraint *c1 = NULL;
+ 	struct event_constraint *c2;
+ 
++	if (idx >= 0) /* fake does < 0 */
++		c1 = cpuc->event_constraint[idx];
++
+ 	/*
+ 	 * first time only
+ 	 * - static constraint: no change across incremental scheduling calls
+diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
+index e068d6683dba..74ca2fe7a0b3 100644
+--- a/arch/x86/kernel/crash.c
++++ b/arch/x86/kernel/crash.c
+@@ -185,10 +185,9 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
+ }
+ 
+ #ifdef CONFIG_KEXEC_FILE
+-static int get_nr_ram_ranges_callback(unsigned long start_pfn,
+-				unsigned long nr_pfn, void *arg)
++static int get_nr_ram_ranges_callback(u64 start, u64 end, void *arg)
+ {
+-	int *nr_ranges = arg;
++	unsigned int *nr_ranges = arg;
+ 
+ 	(*nr_ranges)++;
+ 	return 0;
+@@ -214,7 +213,7 @@ static void fill_up_crash_elf_data(struct crash_elf_data *ced,
+ 
+ 	ced->image = image;
+ 
+-	walk_system_ram_range(0, -1, &nr_ranges,
++	walk_system_ram_res(0, -1, &nr_ranges,
+ 				get_nr_ram_ranges_callback);
+ 
+ 	ced->max_nr_ranges = nr_ranges;
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 58bcfb67c01f..ebb5657ee280 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -41,10 +41,18 @@
+ #include <asm/timer.h>
+ #include <asm/special_insns.h>
+ 
+-/* nop stub */
+-void _paravirt_nop(void)
+-{
+-}
++/*
++ * nop stub, which must not clobber anything *including the stack* to
++ * avoid confusing the entry prologues.
++ */
++extern void _paravirt_nop(void);
++asm (".pushsection .entry.text, \"ax\"\n"
++     ".global _paravirt_nop\n"
++     "_paravirt_nop:\n\t"
++     "ret\n\t"
++     ".size _paravirt_nop, . - _paravirt_nop\n\t"
++     ".type _paravirt_nop, @function\n\t"
++     ".popsection");
+ 
+ /* identity function, which can be inlined */
+ u32 _paravirt_ident_32(u32 x)
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index f6b916387590..a90ac95562af 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -497,27 +497,59 @@ void set_personality_ia32(bool x32)
+ }
+ EXPORT_SYMBOL_GPL(set_personality_ia32);
+ 
++/*
++ * Called from fs/proc with a reference on @p to find the function
++ * which called into schedule(). This needs to be done carefully
++ * because the task might wake up and we might look at a stack
++ * changing under us.
++ */
+ unsigned long get_wchan(struct task_struct *p)
+ {
+-	unsigned long stack;
+-	u64 fp, ip;
++	unsigned long start, bottom, top, sp, fp, ip;
+ 	int count = 0;
+ 
+ 	if (!p || p == current || p->state == TASK_RUNNING)
+ 		return 0;
+-	stack = (unsigned long)task_stack_page(p);
+-	if (p->thread.sp < stack || p->thread.sp >= stack+THREAD_SIZE)
++
++	start = (unsigned long)task_stack_page(p);
++	if (!start)
++		return 0;
++
++	/*
++	 * Layout of the stack page:
++	 *
++	 * ----------- topmax = start + THREAD_SIZE - sizeof(unsigned long)
++	 * PADDING
++	 * ----------- top = topmax - TOP_OF_KERNEL_STACK_PADDING
++	 * stack
++	 * ----------- bottom = start + sizeof(thread_info)
++	 * thread_info
++	 * ----------- start
++	 *
++	 * The tasks stack pointer points at the location where the
++	 * framepointer is stored. The data on the stack is:
++	 * ... IP FP ... IP FP
++	 *
++	 * We need to read FP and IP, so we need to adjust the upper
++	 * bound by another unsigned long.
++	 */
++	top = start + THREAD_SIZE - TOP_OF_KERNEL_STACK_PADDING;
++	top -= 2 * sizeof(unsigned long);
++	bottom = start + sizeof(struct thread_info);
++
++	sp = READ_ONCE(p->thread.sp);
++	if (sp < bottom || sp > top)
+ 		return 0;
+-	fp = *(u64 *)(p->thread.sp);
++
++	fp = READ_ONCE(*(unsigned long *)sp);
+ 	do {
+-		if (fp < (unsigned long)stack ||
+-		    fp >= (unsigned long)stack+THREAD_SIZE)
++		if (fp < bottom || fp > top)
+ 			return 0;
+-		ip = *(u64 *)(fp+8);
++		ip = READ_ONCE(*(unsigned long *)(fp + sizeof(unsigned long)));
+ 		if (!in_sched_functions(ip))
+ 			return ip;
+-		fp = *(u64 *)fp;
+-	} while (count++ < 16);
++		fp = READ_ONCE(*(unsigned long *)fp);
++	} while (count++ < 16 && p->state != TASK_RUNNING);
+ 	return 0;
+ }
+ 
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index 7437b41f6a47..dc9af7a0839a 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -21,6 +21,7 @@
+ #include <asm/hypervisor.h>
+ #include <asm/nmi.h>
+ #include <asm/x86_init.h>
++#include <asm/geode.h>
+ 
+ unsigned int __read_mostly cpu_khz;	/* TSC clocks / usec, not used here */
+ EXPORT_SYMBOL(cpu_khz);
+@@ -1013,15 +1014,17 @@ EXPORT_SYMBOL_GPL(mark_tsc_unstable);
+ 
+ static void __init check_system_tsc_reliable(void)
+ {
+-#ifdef CONFIG_MGEODE_LX
+-	/* RTSC counts during suspend */
++#if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
++	if (is_geode_lx()) {
++		/* RTSC counts during suspend */
+ #define RTSC_SUSP 0x100
+-	unsigned long res_low, res_high;
++		unsigned long res_low, res_high;
+ 
+-	rdmsr_safe(MSR_GEODE_BUSCONT_CONF0, &res_low, &res_high);
+-	/* Geode_LX - the OLPC CPU has a very reliable TSC */
+-	if (res_low & RTSC_SUSP)
+-		tsc_clocksource_reliable = 1;
++		rdmsr_safe(MSR_GEODE_BUSCONT_CONF0, &res_low, &res_high);
++		/* Geode_LX - the OLPC CPU has a very reliable TSC */
++		if (res_low & RTSC_SUSP)
++			tsc_clocksource_reliable = 1;
++	}
+ #endif
+ 	if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
+ 		tsc_clocksource_reliable = 1;
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 8e0c0844c6b9..2d32b67a1043 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -513,7 +513,7 @@ static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 
+ 	if (svm->vmcb->control.next_rip != 0) {
+-		WARN_ON(!static_cpu_has(X86_FEATURE_NRIPS));
++		WARN_ON_ONCE(!static_cpu_has(X86_FEATURE_NRIPS));
+ 		svm->next_rip = svm->vmcb->control.next_rip;
+ 	}
+ 
+@@ -865,64 +865,6 @@ static void svm_disable_lbrv(struct vcpu_svm *svm)
+ 	set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
+ }
+ 
+-#define MTRR_TYPE_UC_MINUS	7
+-#define MTRR2PROTVAL_INVALID 0xff
+-
+-static u8 mtrr2protval[8];
+-
+-static u8 fallback_mtrr_type(int mtrr)
+-{
+-	/*
+-	 * WT and WP aren't always available in the host PAT.  Treat
+-	 * them as UC and UC- respectively.  Everything else should be
+-	 * there.
+-	 */
+-	switch (mtrr)
+-	{
+-	case MTRR_TYPE_WRTHROUGH:
+-		return MTRR_TYPE_UNCACHABLE;
+-	case MTRR_TYPE_WRPROT:
+-		return MTRR_TYPE_UC_MINUS;
+-	default:
+-		BUG();
+-	}
+-}
+-
+-static void build_mtrr2protval(void)
+-{
+-	int i;
+-	u64 pat;
+-
+-	for (i = 0; i < 8; i++)
+-		mtrr2protval[i] = MTRR2PROTVAL_INVALID;
+-
+-	/* Ignore the invalid MTRR types.  */
+-	mtrr2protval[2] = 0;
+-	mtrr2protval[3] = 0;
+-
+-	/*
+-	 * Use host PAT value to figure out the mapping from guest MTRR
+-	 * values to nested page table PAT/PCD/PWT values.  We do not
+-	 * want to change the host PAT value every time we enter the
+-	 * guest.
+-	 */
+-	rdmsrl(MSR_IA32_CR_PAT, pat);
+-	for (i = 0; i < 8; i++) {
+-		u8 mtrr = pat >> (8 * i);
+-
+-		if (mtrr2protval[mtrr] == MTRR2PROTVAL_INVALID)
+-			mtrr2protval[mtrr] = __cm_idx2pte(i);
+-	}
+-
+-	for (i = 0; i < 8; i++) {
+-		if (mtrr2protval[i] == MTRR2PROTVAL_INVALID) {
+-			u8 fallback = fallback_mtrr_type(i);
+-			mtrr2protval[i] = mtrr2protval[fallback];
+-			BUG_ON(mtrr2protval[i] == MTRR2PROTVAL_INVALID);
+-		}
+-	}
+-}
+-
+ static __init int svm_hardware_setup(void)
+ {
+ 	int cpu;
+@@ -989,7 +931,6 @@ static __init int svm_hardware_setup(void)
+ 	} else
+ 		kvm_disable_tdp();
+ 
+-	build_mtrr2protval();
+ 	return 0;
+ 
+ err:
+@@ -1144,39 +1085,6 @@ static u64 svm_compute_tsc_offset(struct kvm_vcpu *vcpu, u64 target_tsc)
+ 	return target_tsc - tsc;
+ }
+ 
+-static void svm_set_guest_pat(struct vcpu_svm *svm, u64 *g_pat)
+-{
+-	struct kvm_vcpu *vcpu = &svm->vcpu;
+-
+-	/* Unlike Intel, AMD takes the guest's CR0.CD into account.
+-	 *
+-	 * AMD does not have IPAT.  To emulate it for the case of guests
+-	 * with no assigned devices, just set everything to WB.  If guests
+-	 * have assigned devices, however, we cannot force WB for RAM
+-	 * pages only, so use the guest PAT directly.
+-	 */
+-	if (!kvm_arch_has_assigned_device(vcpu->kvm))
+-		*g_pat = 0x0606060606060606;
+-	else
+-		*g_pat = vcpu->arch.pat;
+-}
+-
+-static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+-{
+-	u8 mtrr;
+-
+-	/*
+-	 * 1. MMIO: trust guest MTRR, so same as item 3.
+-	 * 2. No passthrough: always map as WB, and force guest PAT to WB as well
+-	 * 3. Passthrough: can't guarantee the result, try to trust guest.
+-	 */
+-	if (!is_mmio && !kvm_arch_has_assigned_device(vcpu->kvm))
+-		return 0;
+-
+-	mtrr = kvm_mtrr_get_guest_memory_type(vcpu, gfn);
+-	return mtrr2protval[mtrr];
+-}
+-
+ static void init_vmcb(struct vcpu_svm *svm, bool init_event)
+ {
+ 	struct vmcb_control_area *control = &svm->vmcb->control;
+@@ -1260,6 +1168,7 @@ static void init_vmcb(struct vcpu_svm *svm, bool init_event)
+ 	 * It also updates the guest-visible cr0 value.
+ 	 */
+ 	(void)kvm_set_cr0(&svm->vcpu, X86_CR0_NW | X86_CR0_CD | X86_CR0_ET);
++	kvm_mmu_reset_context(&svm->vcpu);
+ 
+ 	save->cr4 = X86_CR4_PAE;
+ 	/* rdx = ?? */
+@@ -1272,7 +1181,6 @@ static void init_vmcb(struct vcpu_svm *svm, bool init_event)
+ 		clr_cr_intercept(svm, INTERCEPT_CR3_READ);
+ 		clr_cr_intercept(svm, INTERCEPT_CR3_WRITE);
+ 		save->g_pat = svm->vcpu.arch.pat;
+-		svm_set_guest_pat(svm, &save->g_pat);
+ 		save->cr3 = 0;
+ 		save->cr4 = 0;
+ 	}
+@@ -3347,16 +3255,6 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 	case MSR_VM_IGNNE:
+ 		vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
+ 		break;
+-	case MSR_IA32_CR_PAT:
+-		if (npt_enabled) {
+-			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
+-				return 1;
+-			vcpu->arch.pat = data;
+-			svm_set_guest_pat(svm, &svm->vmcb->save.g_pat);
+-			mark_dirty(svm->vmcb, VMCB_NPT);
+-			break;
+-		}
+-		/* fall through */
+ 	default:
+ 		return kvm_set_msr_common(vcpu, msr);
+ 	}
+@@ -4191,6 +4089,11 @@ static bool svm_has_high_real_mode_segbase(void)
+ 	return true;
+ }
+ 
++static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
++{
++	return 0;
++}
++
+ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
+ {
+ }
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 83b7b5cd75d5..aa9e8229571d 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -6134,6 +6134,8 @@ static __init int hardware_setup(void)
+ 	memcpy(vmx_msr_bitmap_longmode_x2apic,
+ 			vmx_msr_bitmap_longmode, PAGE_SIZE);
+ 
++	set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
++
+ 	if (enable_apicv) {
+ 		for (msr = 0x800; msr <= 0x8ff; msr++)
+ 			vmx_disable_intercept_msr_read_x2apic(msr);
+@@ -8632,17 +8634,22 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+ 	u64 ipat = 0;
+ 
+ 	/* For VT-d and EPT combination
+-	 * 1. MMIO: guest may want to apply WC, trust it.
++	 * 1. MMIO: always map as UC
+ 	 * 2. EPT with VT-d:
+ 	 *   a. VT-d without snooping control feature: can't guarantee the
+-	 *	result, try to trust guest.  So the same as item 1.
++	 *	result, try to trust guest.
+ 	 *   b. VT-d with snooping control feature: snooping control feature of
+ 	 *	VT-d engine can guarantee the cache correctness. Just set it
+ 	 *	to WB to keep consistent with host. So the same as item 3.
+ 	 * 3. EPT without VT-d: always map as WB and set IPAT=1 to keep
+ 	 *    consistent with host MTRR
+ 	 */
+-	if (!is_mmio && !kvm_arch_has_noncoherent_dma(vcpu->kvm)) {
++	if (is_mmio) {
++		cache = MTRR_TYPE_UNCACHABLE;
++		goto exit;
++	}
++
++	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) {
+ 		ipat = VMX_EPT_IPAT_BIT;
+ 		cache = MTRR_TYPE_WRBACK;
+ 		goto exit;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 8f0f6eca69da..32c6e6ac5964 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2388,6 +2388,8 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_IA32_LASTINTFROMIP:
+ 	case MSR_IA32_LASTINTTOIP:
+ 	case MSR_K8_SYSCFG:
++	case MSR_K8_TSEG_ADDR:
++	case MSR_K8_TSEG_MASK:
+ 	case MSR_K7_HWCR:
+ 	case MSR_VM_HSAVE_PA:
+ 	case MSR_K8_INT_PENDING_MSG:
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 3fba623e3ba5..f9977a7a9444 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -1132,7 +1132,7 @@ void mark_rodata_ro(void)
+ 	 * has been zapped already via cleanup_highmem().
+ 	 */
+ 	all_end = roundup((unsigned long)_brk_end, PMD_SIZE);
+-	set_memory_nx(rodata_start, (all_end - rodata_start) >> PAGE_SHIFT);
++	set_memory_nx(text_end, (all_end - text_end) >> PAGE_SHIFT);
+ 
+ 	rodata_test();
+ 
+diff --git a/arch/x86/pci/intel_mid_pci.c b/arch/x86/pci/intel_mid_pci.c
+index 27062303c881..7553921c146c 100644
+--- a/arch/x86/pci/intel_mid_pci.c
++++ b/arch/x86/pci/intel_mid_pci.c
+@@ -35,6 +35,9 @@
+ 
+ #define PCIE_CAP_OFFSET	0x100
+ 
++/* Quirks for the listed devices */
++#define PCI_DEVICE_ID_INTEL_MRFL_MMC	0x1190
++
+ /* Fixed BAR fields */
+ #define PCIE_VNDR_CAP_ID_FIXED_BAR 0x00	/* Fixed BAR (TBD) */
+ #define PCI_FIXED_BAR_0_SIZE	0x04
+@@ -214,10 +217,27 @@ static int intel_mid_pci_irq_enable(struct pci_dev *dev)
+ 	if (dev->irq_managed && dev->irq > 0)
+ 		return 0;
+ 
+-	if (intel_mid_identify_cpu() == INTEL_MID_CPU_CHIP_TANGIER)
++	switch (intel_mid_identify_cpu()) {
++	case INTEL_MID_CPU_CHIP_TANGIER:
+ 		polarity = 0; /* active high */
+-	else
++
++		/* Special treatment for IRQ0 */
++		if (dev->irq == 0) {
++			/*
++			 * TNG has IRQ0 assigned to eMMC controller. But there
++			 * are also other devices with bogus PCI configuration
++			 * that have IRQ0 assigned. This check ensures that
++			 * eMMC gets it.
++			 */
++			if (dev->device != PCI_DEVICE_ID_INTEL_MRFL_MMC)
++				return -EBUSY;
++		}
++		break;
++	default:
+ 		polarity = 1; /* active low */
++		break;
++	}
++
+ 	ioapic_set_alloc_attr(&info, dev_to_node(&dev->dev), 1, polarity);
+ 
+ 	/*
+diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
+index e4308fe6afe8..c6835bfad3a1 100644
+--- a/arch/x86/platform/efi/efi.c
++++ b/arch/x86/platform/efi/efi.c
+@@ -705,6 +705,70 @@ out:
+ }
+ 
+ /*
++ * Iterate the EFI memory map in reverse order because the regions
++ * will be mapped top-down. The end result is the same as if we had
++ * mapped things forward, but doesn't require us to change the
++ * existing implementation of efi_map_region().
++ */
++static inline void *efi_map_next_entry_reverse(void *entry)
++{
++	/* Initial call */
++	if (!entry)
++		return memmap.map_end - memmap.desc_size;
++
++	entry -= memmap.desc_size;
++	if (entry < memmap.map)
++		return NULL;
++
++	return entry;
++}
++
++/*
++ * efi_map_next_entry - Return the next EFI memory map descriptor
++ * @entry: Previous EFI memory map descriptor
++ *
++ * This is a helper function to iterate over the EFI memory map, which
++ * we do in different orders depending on the current configuration.
++ *
++ * To begin traversing the memory map @entry must be %NULL.
++ *
++ * Returns %NULL when we reach the end of the memory map.
++ */
++static void *efi_map_next_entry(void *entry)
++{
++	if (!efi_enabled(EFI_OLD_MEMMAP) && efi_enabled(EFI_64BIT)) {
++		/*
++		 * Starting in UEFI v2.5 the EFI_PROPERTIES_TABLE
++		 * config table feature requires us to map all entries
++		 * in the same order as they appear in the EFI memory
++		 * map. That is to say, entry N must have a lower
++		 * virtual address than entry N+1. This is because the
++		 * firmware toolchain leaves relative references in
++		 * the code/data sections, which are split and become
++		 * separate EFI memory regions. Mapping things
++		 * out-of-order leads to the firmware accessing
++		 * unmapped addresses.
++		 *
++		 * Since we need to map things this way whether or not
++		 * the kernel actually makes use of
++		 * EFI_PROPERTIES_TABLE, let's just switch to this
++		 * scheme by default for 64-bit.
++		 */
++		return efi_map_next_entry_reverse(entry);
++	}
++
++	/* Initial call */
++	if (!entry)
++		return memmap.map;
++
++	entry += memmap.desc_size;
++	if (entry >= memmap.map_end)
++		return NULL;
++
++	return entry;
++}
++
++/*
+  * Map the efi memory ranges of the runtime services and update new_mmap with
+  * virtual addresses.
+  */
+@@ -714,7 +778,8 @@ static void * __init efi_map_regions(int *count, int *pg_shift)
+ 	unsigned long left = 0;
+ 	efi_memory_desc_t *md;
+ 
+-	for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
++	p = NULL;
++	while ((p = efi_map_next_entry(p))) {
+ 		md = p;
+ 		if (!(md->attribute & EFI_MEMORY_RUNTIME)) {
+ #ifdef CONFIG_X86_64
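
The efi_map_next_entry()/efi_map_next_entry_reverse() pair added above walks a packed buffer of fixed-stride descriptors either forward or backward by stepping a pointer in desc_size increments, with NULL meaning both "start" and "done". The user-space sketch below reproduces only that cursor contract; the record layout, globals and values are made up for illustration and have nothing to do with real EFI descriptors.

  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical "memory map": a packed buffer of fixed-size records. */
  struct rec { unsigned long addr; };

  static char  *map;        /* start of the buffer        */
  static char  *map_end;    /* one past the last record   */
  static size_t desc_size;  /* stride between records     */

  /* Reverse cursor: NULL starts at the last record, NULL means "done". */
  static void *next_entry_reverse(void *entry)
  {
          if (!entry)
                  return map_end - desc_size;
          entry = (char *)entry - desc_size;
          return (char *)entry < map ? NULL : entry;
  }

  /* Forward cursor with the same contract. */
  static void *next_entry(void *entry)
  {
          if (!entry)
                  return map;
          entry = (char *)entry + desc_size;
          return (char *)entry >= map_end ? NULL : entry;
  }

  int main(void)
  {
          struct rec recs[4] = { {0x1000}, {0x2000}, {0x3000}, {0x4000} };
          void *p;

          desc_size = sizeof(recs[0]);
          map = (char *)recs;
          map_end = map + sizeof(recs);

          p = NULL;
          while ((p = next_entry_reverse(p)))
                  printf("reverse: %#lx\n", ((struct rec *)p)->addr);

          p = NULL;
          while ((p = next_entry(p)))
                  printf("forward: %#lx\n", ((struct rec *)p)->addr);
          return 0;
  }
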
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 11d6fb4e8483..777ad2f03160 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -33,6 +33,10 @@
+ #include <linux/memblock.h>
+ #include <linux/edd.h>
+ 
++#ifdef CONFIG_KEXEC_CORE
++#include <linux/kexec.h>
++#endif
++
+ #include <xen/xen.h>
+ #include <xen/events.h>
+ #include <xen/interface/xen.h>
+@@ -1800,6 +1804,21 @@ static struct notifier_block xen_hvm_cpu_notifier = {
+ 	.notifier_call	= xen_hvm_cpu_notify,
+ };
+ 
++#ifdef CONFIG_KEXEC_CORE
++static void xen_hvm_shutdown(void)
++{
++	native_machine_shutdown();
++	if (kexec_in_progress)
++		xen_reboot(SHUTDOWN_soft_reset);
++}
++
++static void xen_hvm_crash_shutdown(struct pt_regs *regs)
++{
++	native_machine_crash_shutdown(regs);
++	xen_reboot(SHUTDOWN_soft_reset);
++}
++#endif
++
+ static void __init xen_hvm_guest_init(void)
+ {
+ 	if (xen_pv_domain())
+@@ -1819,6 +1838,10 @@ static void __init xen_hvm_guest_init(void)
+ 	x86_init.irqs.intr_init = xen_init_IRQ;
+ 	xen_hvm_init_time_ops();
+ 	xen_hvm_init_mmu_ops();
++#ifdef CONFIG_KEXEC_CORE
++	machine_ops.shutdown = xen_hvm_shutdown;
++	machine_ops.crash_shutdown = xen_hvm_crash_shutdown;
++#endif
+ }
+ #endif
+ 
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index d6283b3f5db5..9cc48d1d7abb 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -387,6 +387,9 @@ static void blkg_destroy_all(struct request_queue *q)
+ 		blkg_destroy(blkg);
+ 		spin_unlock(&blkcg->lock);
+ 	}
++
++	q->root_blkg = NULL;
++	q->root_rl.blkg = NULL;
+ }
+ 
+ /*
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 176262ec3731..c69902695136 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1807,7 +1807,6 @@ static void blk_mq_map_swqueue(struct request_queue *q)
+ 
+ 		hctx = q->mq_ops->map_queue(q, i);
+ 		cpumask_set_cpu(i, hctx->cpumask);
+-		cpumask_set_cpu(i, hctx->tags->cpumask);
+ 		ctx->index_hw = hctx->nr_ctx;
+ 		hctx->ctxs[hctx->nr_ctx++] = ctx;
+ 	}
+@@ -1847,6 +1846,14 @@ static void blk_mq_map_swqueue(struct request_queue *q)
+ 		hctx->next_cpu = cpumask_first(hctx->cpumask);
+ 		hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
+ 	}
++
++	queue_for_each_ctx(q, ctx, i) {
++		if (!cpu_online(i))
++			continue;
++
++		hctx = q->mq_ops->map_queue(q, i);
++		cpumask_set_cpu(i, hctx->tags->cpumask);
++	}
+ }
+ 
+ static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set)
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index 764280a91776..e9fd32e91668 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -148,7 +148,11 @@ static void cache_shared_cpu_map_remove(unsigned int cpu)
+ 
+ 			if (sibling == cpu) /* skip itself */
+ 				continue;
++
+ 			sib_cpu_ci = get_cpu_cacheinfo(sibling);
++			if (!sib_cpu_ci->info_list)
++				continue;
++
+ 			sib_leaf = sib_cpu_ci->info_list + index;
+ 			cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
+ 			cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);
+@@ -159,6 +163,9 @@ static void cache_shared_cpu_map_remove(unsigned int cpu)
+ 
+ static void free_cache_attributes(unsigned int cpu)
+ {
++	if (!per_cpu_cacheinfo(cpu))
++		return;
++
+ 	cache_shared_cpu_map_remove(cpu);
+ 
+ 	kfree(per_cpu_cacheinfo(cpu));
+@@ -514,8 +521,7 @@ static int cacheinfo_cpu_callback(struct notifier_block *nfb,
+ 		break;
+ 	case CPU_DEAD:
+ 		cache_remove_dev(cpu);
+-		if (per_cpu_cacheinfo(cpu))
+-			free_cache_attributes(cpu);
++		free_cache_attributes(cpu);
+ 		break;
+ 	}
+ 	return notifier_from_errno(rc);
+diff --git a/drivers/base/property.c b/drivers/base/property.c
+index f3f6d167f3f1..37a7bb7b239d 100644
+--- a/drivers/base/property.c
++++ b/drivers/base/property.c
+@@ -27,9 +27,10 @@
+  */
+ void device_add_property_set(struct device *dev, struct property_set *pset)
+ {
+-	if (pset)
+-		pset->fwnode.type = FWNODE_PDATA;
++	if (!pset)
++		return;
+ 
++	pset->fwnode.type = FWNODE_PDATA;
+ 	set_secondary_fwnode(dev, &pset->fwnode);
+ }
+ EXPORT_SYMBOL_GPL(device_add_property_set);
+diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
+index 5799a0b9e6cc..c8941f39c919 100644
+--- a/drivers/base/regmap/regmap-debugfs.c
++++ b/drivers/base/regmap/regmap-debugfs.c
+@@ -32,8 +32,7 @@ static DEFINE_MUTEX(regmap_debugfs_early_lock);
+ /* Calculate the length of a fixed format  */
+ static size_t regmap_calc_reg_len(int max_val, char *buf, size_t buf_size)
+ {
+-	snprintf(buf, buf_size, "%x", max_val);
+-	return strlen(buf);
++	return snprintf(NULL, 0, "%x", max_val);
+ }
+ 
+ static ssize_t regmap_name_read_file(struct file *file,
+@@ -432,7 +431,7 @@ static ssize_t regmap_access_read_file(struct file *file,
+ 		/* If we're in the region the user is trying to read */
+ 		if (p >= *ppos) {
+ 			/* ...but not beyond it */
+-			if (buf_pos >= count - 1 - tot_len)
++			if (buf_pos + tot_len + 1 >= count)
+ 				break;
+ 
+ 			/* Format the register */
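
The regmap_calc_reg_len() change above leans on a standard C99 behaviour: snprintf() called with a NULL buffer and size 0 writes nothing but still returns the number of characters the formatted output would need, so the scratch buffer and the strlen() round trip can be dropped. A minimal illustration (plain user-space C, made-up value):

  #include <stdio.h>

  int main(void)
  {
          int max_val = 0x1f7a;

          /* C99 snprintf: with (NULL, 0) nothing is written, but the
           * return value is the length the output would have had. */
          int len = snprintf(NULL, 0, "%x", max_val);

          printf("\"%x\" needs %d characters\n", max_val, len);  /* 4 */
          return 0;
  }
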
+diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
+index deb3f001791f..767657565de6 100644
+--- a/drivers/block/xen-blkback/xenbus.c
++++ b/drivers/block/xen-blkback/xenbus.c
+@@ -212,6 +212,9 @@ static int xen_blkif_map(struct xen_blkif *blkif, grant_ref_t *gref,
+ 
+ static int xen_blkif_disconnect(struct xen_blkif *blkif)
+ {
++	struct pending_req *req, *n;
++	int i = 0, j;
++
+ 	if (blkif->xenblkd) {
+ 		kthread_stop(blkif->xenblkd);
+ 		wake_up(&blkif->shutdown_wq);
+@@ -238,13 +241,28 @@ static int xen_blkif_disconnect(struct xen_blkif *blkif)
+ 	/* Remove all persistent grants and the cache of ballooned pages. */
+ 	xen_blkbk_free_caches(blkif);
+ 
++	/* Check that there is no request in use */
++	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
++		list_del(&req->free_list);
++
++		for (j = 0; j < MAX_INDIRECT_SEGMENTS; j++)
++			kfree(req->segments[j]);
++
++		for (j = 0; j < MAX_INDIRECT_PAGES; j++)
++			kfree(req->indirect_pages[j]);
++
++		kfree(req);
++		i++;
++	}
++
++	WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));
++	blkif->nr_ring_pages = 0;
++
+ 	return 0;
+ }
+ 
+ static void xen_blkif_free(struct xen_blkif *blkif)
+ {
+-	struct pending_req *req, *n;
+-	int i = 0, j;
+ 
+ 	xen_blkif_disconnect(blkif);
+ 	xen_vbd_free(&blkif->vbd);
+@@ -257,22 +275,6 @@ static void xen_blkif_free(struct xen_blkif *blkif)
+ 	BUG_ON(!list_empty(&blkif->free_pages));
+ 	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
+ 
+-	/* Check that there is no request in use */
+-	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
+-		list_del(&req->free_list);
+-
+-		for (j = 0; j < MAX_INDIRECT_SEGMENTS; j++)
+-			kfree(req->segments[j]);
+-
+-		for (j = 0; j < MAX_INDIRECT_PAGES; j++)
+-			kfree(req->indirect_pages[j]);
+-
+-		kfree(req);
+-		i++;
+-	}
+-
+-	WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));
+-
+ 	kmem_cache_free(xen_blkif_cachep, blkif);
+ }
+ 
+diff --git a/drivers/clk/samsung/clk-cpu.c b/drivers/clk/samsung/clk-cpu.c
+index 3a1fe07cfe9e..dd02356e2e86 100644
+--- a/drivers/clk/samsung/clk-cpu.c
++++ b/drivers/clk/samsung/clk-cpu.c
+@@ -161,7 +161,7 @@ static int exynos_cpuclk_pre_rate_change(struct clk_notifier_data *ndata,
+ 	 * the values for DIV_COPY and DIV_HPM dividers need not be set.
+ 	 */
+ 	div0 = cfg_data->div0;
+-	if (test_bit(CLK_CPU_HAS_DIV1, &cpuclk->flags)) {
++	if (cpuclk->flags & CLK_CPU_HAS_DIV1) {
+ 		div1 = cfg_data->div1;
+ 		if (readl(base + E4210_SRC_CPU) & E4210_MUX_HPM_MASK)
+ 			div1 = readl(base + E4210_DIV_CPU1) &
+@@ -182,7 +182,7 @@ static int exynos_cpuclk_pre_rate_change(struct clk_notifier_data *ndata,
+ 		alt_div = DIV_ROUND_UP(alt_prate, tmp_rate) - 1;
+ 		WARN_ON(alt_div >= MAX_DIV);
+ 
+-		if (test_bit(CLK_CPU_NEEDS_DEBUG_ALT_DIV, &cpuclk->flags)) {
++		if (cpuclk->flags & CLK_CPU_NEEDS_DEBUG_ALT_DIV) {
+ 			/*
+ 			 * In Exynos4210, ATB clock parent is also mout_core. So
+ 			 * ATB clock also needs to be maintained at safe speed. So
+@@ -203,7 +203,7 @@ static int exynos_cpuclk_pre_rate_change(struct clk_notifier_data *ndata,
+ 	writel(div0, base + E4210_DIV_CPU0);
+ 	wait_until_divider_stable(base + E4210_DIV_STAT_CPU0, DIV_MASK_ALL);
+ 
+-	if (test_bit(CLK_CPU_HAS_DIV1, &cpuclk->flags)) {
++	if (cpuclk->flags & CLK_CPU_HAS_DIV1) {
+ 		writel(div1, base + E4210_DIV_CPU1);
+ 		wait_until_divider_stable(base + E4210_DIV_STAT_CPU1,
+ 				DIV_MASK_ALL);
+@@ -222,7 +222,7 @@ static int exynos_cpuclk_post_rate_change(struct clk_notifier_data *ndata,
+ 	unsigned long mux_reg;
+ 
+ 	/* find out the divider values to use for clock data */
+-	if (test_bit(CLK_CPU_NEEDS_DEBUG_ALT_DIV, &cpuclk->flags)) {
++	if (cpuclk->flags & CLK_CPU_NEEDS_DEBUG_ALT_DIV) {
+ 		while ((cfg_data->prate * 1000) != ndata->new_rate) {
+ 			if (cfg_data->prate == 0)
+ 				return -EINVAL;
+@@ -237,7 +237,7 @@ static int exynos_cpuclk_post_rate_change(struct clk_notifier_data *ndata,
+ 	writel(mux_reg & ~(1 << 16), base + E4210_SRC_CPU);
+ 	wait_until_mux_stable(base + E4210_STAT_CPU, 16, 1);
+ 
+-	if (test_bit(CLK_CPU_NEEDS_DEBUG_ALT_DIV, &cpuclk->flags)) {
++	if (cpuclk->flags & CLK_CPU_NEEDS_DEBUG_ALT_DIV) {
+ 		div |= (cfg_data->div0 & E4210_DIV0_ATB_MASK);
+ 		div_mask |= E4210_DIV0_ATB_MASK;
+ 	}
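
The clk-cpu.c hunks above replace test_bit(FLAG, &cpuclk->flags) with a plain bitwise AND: the conversion implies these flags are stored as pre-shifted masks, so test_bit() was interpreting the mask value as a bit number and checking the wrong bit. The user-space sketch below (made-up flag values and a crude test_bit() stand-in, not the kernel helper) shows how the two interpretations diverge.

  #include <stdio.h>

  /* Hypothetical flag definitions, stored as masks (not bit numbers). */
  #define HAS_DIV1            (1UL << 0)
  #define NEEDS_DEBUG_ALT_DIV (1UL << 1)

  /* Crude stand-in for test_bit(): interprets its first argument as a
   * bit *number*, which is the mismatch the patch removes. */
  static int test_bit_like(unsigned long nr, const unsigned long *word)
  {
          return (*word >> nr) & 1UL;
  }

  int main(void)
  {
          unsigned long flags = NEEDS_DEBUG_ALT_DIV;   /* only bit 1 set */

          /* Mask test: reports the flag that is actually set. */
          printf("mask     HAS_DIV1=%d ALT_DIV=%d\n",
                 !!(flags & HAS_DIV1), !!(flags & NEEDS_DEBUG_ALT_DIV));

          /* Bit-number test with mask-valued arguments: checks bit 1 for
           * HAS_DIV1 (wrongly true) and bit 2 for ALT_DIV (wrongly false). */
          printf("test_bit HAS_DIV1=%d ALT_DIV=%d\n",
                 test_bit_like(HAS_DIV1, &flags),
                 test_bit_like(NEEDS_DEBUG_ALT_DIV, &flags));
          return 0;
  }
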
+diff --git a/drivers/clk/ti/clk-3xxx.c b/drivers/clk/ti/clk-3xxx.c
+index 757636d166cf..4ab28cfb8d2a 100644
+--- a/drivers/clk/ti/clk-3xxx.c
++++ b/drivers/clk/ti/clk-3xxx.c
+@@ -163,7 +163,6 @@ static struct ti_dt_clk omap3xxx_clks[] = {
+ 	DT_CLK(NULL, "gpio2_ick", "gpio2_ick"),
+ 	DT_CLK(NULL, "wdt3_ick", "wdt3_ick"),
+ 	DT_CLK(NULL, "uart3_ick", "uart3_ick"),
+-	DT_CLK(NULL, "uart4_ick", "uart4_ick"),
+ 	DT_CLK(NULL, "gpt9_ick", "gpt9_ick"),
+ 	DT_CLK(NULL, "gpt8_ick", "gpt8_ick"),
+ 	DT_CLK(NULL, "gpt7_ick", "gpt7_ick"),
+@@ -308,6 +307,7 @@ static struct ti_dt_clk am35xx_clks[] = {
+ static struct ti_dt_clk omap36xx_clks[] = {
+ 	DT_CLK(NULL, "omap_192m_alwon_fck", "omap_192m_alwon_fck"),
+ 	DT_CLK(NULL, "uart4_fck", "uart4_fck"),
++	DT_CLK(NULL, "uart4_ick", "uart4_ick"),
+ 	{ .node_name = NULL },
+ };
+ 
+diff --git a/drivers/clk/ti/clk-7xx.c b/drivers/clk/ti/clk-7xx.c
+index 63b8323df918..0eb82107c421 100644
+--- a/drivers/clk/ti/clk-7xx.c
++++ b/drivers/clk/ti/clk-7xx.c
+@@ -16,7 +16,6 @@
+ #include <linux/clkdev.h>
+ #include <linux/clk/ti.h>
+ 
+-#define DRA7_DPLL_ABE_DEFFREQ				180633600
+ #define DRA7_DPLL_GMAC_DEFFREQ				1000000000
+ #define DRA7_DPLL_USB_DEFFREQ				960000000
+ 
+@@ -312,27 +311,12 @@ static struct ti_dt_clk dra7xx_clks[] = {
+ int __init dra7xx_dt_clk_init(void)
+ {
+ 	int rc;
+-	struct clk *abe_dpll_mux, *sys_clkin2, *dpll_ck, *hdcp_ck;
++	struct clk *dpll_ck, *hdcp_ck;
+ 
+ 	ti_dt_clocks_register(dra7xx_clks);
+ 
+ 	omap2_clk_disable_autoidle_all();
+ 
+-	abe_dpll_mux = clk_get_sys(NULL, "abe_dpll_sys_clk_mux");
+-	sys_clkin2 = clk_get_sys(NULL, "sys_clkin2");
+-	dpll_ck = clk_get_sys(NULL, "dpll_abe_ck");
+-
+-	rc = clk_set_parent(abe_dpll_mux, sys_clkin2);
+-	if (!rc)
+-		rc = clk_set_rate(dpll_ck, DRA7_DPLL_ABE_DEFFREQ);
+-	if (rc)
+-		pr_err("%s: failed to configure ABE DPLL!\n", __func__);
+-
+-	dpll_ck = clk_get_sys(NULL, "dpll_abe_m2x2_ck");
+-	rc = clk_set_rate(dpll_ck, DRA7_DPLL_ABE_DEFFREQ * 2);
+-	if (rc)
+-		pr_err("%s: failed to configure ABE DPLL m2x2!\n", __func__);
+-
+ 	dpll_ck = clk_get_sys(NULL, "dpll_gmac_ck");
+ 	rc = clk_set_rate(dpll_ck, DRA7_DPLL_GMAC_DEFFREQ);
+ 	if (rc)
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index 0136dfcdabf0..7c2a7385c2ad 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -146,6 +146,9 @@ static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf)
+ {
+ 	struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);
+ 
++	if (unlikely(!data))
++		return -ENODEV;
++
+ 	return cpufreq_show_cpus(data->freqdomain_cpus, buf);
+ }
+ 
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index 528a82bf5038..99a406501e8c 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -255,7 +255,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 			rcu_read_unlock();
+ 
+ 			tol_uV = opp_uV * priv->voltage_tolerance / 100;
+-			if (regulator_is_supported_voltage(cpu_reg, opp_uV,
++			if (regulator_is_supported_voltage(cpu_reg,
++							   opp_uV - tol_uV,
+ 							   opp_uV + tol_uV)) {
+ 				if (opp_uV < min_uV)
+ 					min_uV = opp_uV;
+diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
+index b60698b30d30..bc2a55bc35e4 100644
+--- a/drivers/crypto/marvell/cesa.h
++++ b/drivers/crypto/marvell/cesa.h
+@@ -687,6 +687,33 @@ static inline u32 mv_cesa_get_int_mask(struct mv_cesa_engine *engine)
+ 
+ int mv_cesa_queue_req(struct crypto_async_request *req);
+ 
++/*
++ * Helper function that indicates whether a crypto request needs to be
++ * cleaned up or not after being enqueued using mv_cesa_queue_req().
++ */
++static inline int mv_cesa_req_needs_cleanup(struct crypto_async_request *req,
++					    int ret)
++{
++	/*
++	 * The queue still had some space, the request was queued
++	 * normally, so there's no need to clean it up.
++	 */
++	if (ret == -EINPROGRESS)
++		return false;
++
++	/*
++	 * The queue had no space left, but since the request is
++	 * flagged with CRYPTO_TFM_REQ_MAY_BACKLOG, it was added to
++	 * the backlog and will be processed later. There's no need to
++	 * clean it up.
++	 */
++	if (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
++		return false;
++
++	/* Request wasn't queued, we need to clean it up */
++	return true;
++}
++
+ /* TDMA functions */
+ 
+ static inline void mv_cesa_req_dma_iter_init(struct mv_cesa_dma_iter *iter,
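
mv_cesa_req_needs_cleanup() above centralises one rule: a request stays alive when the queue took it (-EINPROGRESS) or backlogged it (-EBUSY together with the MAY_BACKLOG flag), and must be torn down in every other case. The sketch below reproduces that decision table in plain user-space C with stand-in error codes and a stand-in flag; it is not the crypto API.

  #include <stdio.h>
  #include <stdbool.h>
  #include <errno.h>

  #define MAY_BACKLOG 0x1   /* stand-in for CRYPTO_TFM_REQ_MAY_BACKLOG */

  /* true  -> caller must free/unmap the request
   * false -> the queue (or its backlog) still owns it */
  static bool needs_cleanup(unsigned int flags, int ret)
  {
          if (ret == -EINPROGRESS)                        /* queued normally */
                  return false;
          if (ret == -EBUSY && (flags & MAY_BACKLOG))     /* backlogged      */
                  return false;
          return true;                                    /* rejected        */
  }

  int main(void)
  {
          printf("%d\n", needs_cleanup(MAY_BACKLOG, -EINPROGRESS)); /* 0 */
          printf("%d\n", needs_cleanup(MAY_BACKLOG, -EBUSY));       /* 0 */
          printf("%d\n", needs_cleanup(0,           -EBUSY));       /* 1 */
          printf("%d\n", needs_cleanup(0,           -ENOMEM));      /* 1 */
          return 0;
  }
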
+diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
+index 0745cf3b9c0e..3df2f4e7adb2 100644
+--- a/drivers/crypto/marvell/cipher.c
++++ b/drivers/crypto/marvell/cipher.c
+@@ -189,7 +189,6 @@ static inline void mv_cesa_ablkcipher_prepare(struct crypto_async_request *req,
+ {
+ 	struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
+ 	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
+-
+ 	creq->req.base.engine = engine;
+ 
+ 	if (creq->req.base.type == CESA_DMA_REQ)
+@@ -431,7 +430,7 @@ static int mv_cesa_des_op(struct ablkcipher_request *req,
+ 		return ret;
+ 
+ 	ret = mv_cesa_queue_req(&req->base);
+-	if (ret && ret != -EINPROGRESS)
++	if (mv_cesa_req_needs_cleanup(&req->base, ret))
+ 		mv_cesa_ablkcipher_cleanup(req);
+ 
+ 	return ret;
+@@ -551,7 +550,7 @@ static int mv_cesa_des3_op(struct ablkcipher_request *req,
+ 		return ret;
+ 
+ 	ret = mv_cesa_queue_req(&req->base);
+-	if (ret && ret != -EINPROGRESS)
++	if (mv_cesa_req_needs_cleanup(&req->base, ret))
+ 		mv_cesa_ablkcipher_cleanup(req);
+ 
+ 	return ret;
+@@ -693,7 +692,7 @@ static int mv_cesa_aes_op(struct ablkcipher_request *req,
+ 		return ret;
+ 
+ 	ret = mv_cesa_queue_req(&req->base);
+-	if (ret && ret != -EINPROGRESS)
++	if (mv_cesa_req_needs_cleanup(&req->base, ret))
+ 		mv_cesa_ablkcipher_cleanup(req);
+ 
+ 	return ret;
+diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
+index ae9272eb9c1a..e8d0d7128137 100644
+--- a/drivers/crypto/marvell/hash.c
++++ b/drivers/crypto/marvell/hash.c
+@@ -739,10 +739,8 @@ static int mv_cesa_ahash_update(struct ahash_request *req)
+ 		return 0;
+ 
+ 	ret = mv_cesa_queue_req(&req->base);
+-	if (ret && ret != -EINPROGRESS) {
++	if (mv_cesa_req_needs_cleanup(&req->base, ret))
+ 		mv_cesa_ahash_cleanup(req);
+-		return ret;
+-	}
+ 
+ 	return ret;
+ }
+@@ -766,7 +764,7 @@ static int mv_cesa_ahash_final(struct ahash_request *req)
+ 		return 0;
+ 
+ 	ret = mv_cesa_queue_req(&req->base);
+-	if (ret && ret != -EINPROGRESS)
++	if (mv_cesa_req_needs_cleanup(&req->base, ret))
+ 		mv_cesa_ahash_cleanup(req);
+ 
+ 	return ret;
+@@ -791,7 +789,7 @@ static int mv_cesa_ahash_finup(struct ahash_request *req)
+ 		return 0;
+ 
+ 	ret = mv_cesa_queue_req(&req->base);
+-	if (ret && ret != -EINPROGRESS)
++	if (mv_cesa_req_needs_cleanup(&req->base, ret))
+ 		mv_cesa_ahash_cleanup(req);
+ 
+ 	return ret;
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 40afa2a16cfc..da7917a2eed2 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -455,6 +455,15 @@ static struct at_xdmac_desc *at_xdmac_alloc_desc(struct dma_chan *chan,
+ 	return desc;
+ }
+ 
++void at_xdmac_init_used_desc(struct at_xdmac_desc *desc)
++{
++	memset(&desc->lld, 0, sizeof(desc->lld));
++	INIT_LIST_HEAD(&desc->descs_list);
++	desc->direction = DMA_TRANS_NONE;
++	desc->xfer_size = 0;
++	desc->active_xfer = false;
++}
++
+ /* Call must be protected by lock. */
+ static struct at_xdmac_desc *at_xdmac_get_desc(struct at_xdmac_chan *atchan)
+ {
+@@ -466,7 +475,7 @@ static struct at_xdmac_desc *at_xdmac_get_desc(struct at_xdmac_chan *atchan)
+ 		desc = list_first_entry(&atchan->free_descs_list,
+ 					struct at_xdmac_desc, desc_node);
+ 		list_del(&desc->desc_node);
+-		desc->active_xfer = false;
++		at_xdmac_init_used_desc(desc);
+ 	}
+ 
+ 	return desc;
+@@ -797,10 +806,7 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
+ 		list_add_tail(&desc->desc_node, &first->descs_list);
+ 	}
+ 
+-	prev->lld.mbr_nda = first->tx_dma_desc.phys;
+-	dev_dbg(chan2dev(chan),
+-		"%s: chain lld: prev=0x%p, mbr_nda=%pad\n",
+-		__func__, prev, &prev->lld.mbr_nda);
++	at_xdmac_queue_desc(chan, prev, first);
+ 	first->tx_dma_desc.flags = flags;
+ 	first->xfer_size = buf_len;
+ 	first->direction = direction;
+@@ -878,14 +884,14 @@ at_xdmac_interleaved_queue_desc(struct dma_chan *chan,
+ 
+ 	if (xt->src_inc) {
+ 		if (xt->src_sgl)
+-			chan_cc |=  AT_XDMAC_CC_SAM_UBS_DS_AM;
++			chan_cc |=  AT_XDMAC_CC_SAM_UBS_AM;
+ 		else
+ 			chan_cc |=  AT_XDMAC_CC_SAM_INCREMENTED_AM;
+ 	}
+ 
+ 	if (xt->dst_inc) {
+ 		if (xt->dst_sgl)
+-			chan_cc |=  AT_XDMAC_CC_DAM_UBS_DS_AM;
++			chan_cc |=  AT_XDMAC_CC_DAM_UBS_AM;
+ 		else
+ 			chan_cc |=  AT_XDMAC_CC_DAM_INCREMENTED_AM;
+ 	}
+diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
+index cf1c87fa1edd..bedce038c6e2 100644
+--- a/drivers/dma/dw/core.c
++++ b/drivers/dma/dw/core.c
+@@ -1591,7 +1591,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
+ 	INIT_LIST_HEAD(&dw->dma.channels);
+ 	for (i = 0; i < nr_channels; i++) {
+ 		struct dw_dma_chan	*dwc = &dw->chan[i];
+-		int			r = nr_channels - i - 1;
+ 
+ 		dwc->chan.device = &dw->dma;
+ 		dma_cookie_init(&dwc->chan);
+@@ -1603,7 +1602,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
+ 
+ 		/* 7 is highest priority & 0 is lowest. */
+ 		if (pdata->chan_priority == CHAN_PRIORITY_ASCENDING)
+-			dwc->priority = r;
++			dwc->priority = nr_channels - i - 1;
+ 		else
+ 			dwc->priority = i;
+ 
+@@ -1622,6 +1621,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
+ 		/* Hardware configuration */
+ 		if (autocfg) {
+ 			unsigned int dwc_params;
++			unsigned int r = DW_DMA_MAX_NR_CHANNELS - i - 1;
+ 			void __iomem *addr = chip->regs + r * sizeof(u32);
+ 
+ 			dwc_params = dma_read_byaddr(addr, DWC_PARAMS);
+diff --git a/drivers/dma/pxa_dma.c b/drivers/dma/pxa_dma.c
+index ddcbbf5cd9e9..95bdbbe2a671 100644
+--- a/drivers/dma/pxa_dma.c
++++ b/drivers/dma/pxa_dma.c
+@@ -888,6 +888,7 @@ pxad_tx_prep(struct virt_dma_chan *vc, struct virt_dma_desc *vd,
+ 	struct dma_async_tx_descriptor *tx;
+ 	struct pxad_chan *chan = container_of(vc, struct pxad_chan, vc);
+ 
++	INIT_LIST_HEAD(&vd->node);
+ 	tx = vchan_tx_prep(vc, vd, tx_flags);
+ 	tx->tx_submit = pxad_tx_submit;
+ 	dev_dbg(&chan->vc.chan.dev->device,
+diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
+index 43b57b02d050..ca94f475fd05 100644
+--- a/drivers/extcon/extcon.c
++++ b/drivers/extcon/extcon.c
+@@ -126,7 +126,7 @@ static int find_cable_index_by_id(struct extcon_dev *edev, const unsigned int id
+ 
+ static int find_cable_id_by_name(struct extcon_dev *edev, const char *name)
+ {
+-	unsigned int id = -EINVAL;
++	int id = -EINVAL;
+ 	int i = 0;
+ 
+ 	/* Find the id of extcon cable */
+@@ -143,7 +143,7 @@ static int find_cable_id_by_name(struct extcon_dev *edev, const char *name)
+ 
+ static int find_cable_index_by_name(struct extcon_dev *edev, const char *name)
+ {
+-	unsigned int id;
++	int id;
+ 
+ 	if (edev->max_supported == 0)
+ 		return -EINVAL;
+@@ -159,7 +159,7 @@ static int find_cable_index_by_name(struct extcon_dev *edev, const char *name)
+ static bool is_extcon_changed(u32 prev, u32 new, int idx, bool *attached)
+ {
+ 	if (((prev >> idx) & 0x1) != ((new >> idx) & 0x1)) {
+-		*attached = new ? true : false;
++		*attached = ((new >> idx) & 0x1) ? true : false;
+ 		return true;
+ 	}
+ 
+@@ -378,7 +378,7 @@ EXPORT_SYMBOL_GPL(extcon_get_cable_state_);
+  */
+ int extcon_get_cable_state(struct extcon_dev *edev, const char *cable_name)
+ {
+-	unsigned int id;
++	int id;
+ 
+ 	id = find_cable_id_by_name(edev, cable_name);
+ 	if (id < 0)
+@@ -426,7 +426,7 @@ EXPORT_SYMBOL_GPL(extcon_set_cable_state_);
+ int extcon_set_cable_state(struct extcon_dev *edev,
+ 			const char *cable_name, bool cable_state)
+ {
+-	unsigned int id;
++	int id;
+ 
+ 	id = find_cable_id_by_name(edev, cable_name);
+ 	if (id < 0)
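
The extcon hunks above switch several id variables from unsigned int to int because find_cable_id_by_name() stores -EINVAL in them and the callers test id < 0; with an unsigned type that comparison can never be true, so the error path was unreachable. A two-variable demonstration in plain C:

  #include <stdio.h>
  #include <errno.h>

  int main(void)
  {
          unsigned int uid = -EINVAL;   /* wraps to a huge positive value */
          int           id = -EINVAL;

          /* The unsigned comparison is always false, so the error is missed. */
          printf("unsigned: %d\n", uid < 0 ? 1 : 0);   /* 0 */
          printf("signed:   %d\n", id  < 0 ? 1 : 0);   /* 1 */
          return 0;
  }
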
+diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
+index e29560e6b40b..950c87f5d279 100644
+--- a/drivers/firmware/efi/libstub/arm-stub.c
++++ b/drivers/firmware/efi/libstub/arm-stub.c
+@@ -13,6 +13,7 @@
+  */
+ 
+ #include <linux/efi.h>
++#include <linux/sort.h>
+ #include <asm/efi.h>
+ 
+ #include "efistub.h"
+@@ -305,6 +306,44 @@ fail:
+  */
+ #define EFI_RT_VIRTUAL_BASE	0x40000000
+ 
++static int cmp_mem_desc(const void *l, const void *r)
++{
++	const efi_memory_desc_t *left = l, *right = r;
++
++	return (left->phys_addr > right->phys_addr) ? 1 : -1;
++}
++
++/*
++ * Returns whether region @left ends exactly where region @right starts,
++ * or false if either argument is NULL.
++ */
++static bool regions_are_adjacent(efi_memory_desc_t *left,
++				 efi_memory_desc_t *right)
++{
++	u64 left_end;
++
++	if (left == NULL || right == NULL)
++		return false;
++
++	left_end = left->phys_addr + left->num_pages * EFI_PAGE_SIZE;
++
++	return left_end == right->phys_addr;
++}
++
++/*
++ * Returns whether region @left and region @right have compatible memory type
++ * mapping attributes, and are both EFI_MEMORY_RUNTIME regions.
++ */
++static bool regions_have_compatible_memory_type_attrs(efi_memory_desc_t *left,
++						      efi_memory_desc_t *right)
++{
++	static const u64 mem_type_mask = EFI_MEMORY_WB | EFI_MEMORY_WT |
++					 EFI_MEMORY_WC | EFI_MEMORY_UC |
++					 EFI_MEMORY_RUNTIME;
++
++	return ((left->attribute ^ right->attribute) & mem_type_mask) == 0;
++}
++
+ /*
+  * efi_get_virtmap() - create a virtual mapping for the EFI memory map
+  *
+@@ -317,33 +356,52 @@ void efi_get_virtmap(efi_memory_desc_t *memory_map, unsigned long map_size,
+ 		     int *count)
+ {
+ 	u64 efi_virt_base = EFI_RT_VIRTUAL_BASE;
+-	efi_memory_desc_t *out = runtime_map;
++	efi_memory_desc_t *in, *prev = NULL, *out = runtime_map;
+ 	int l;
+ 
+-	for (l = 0; l < map_size; l += desc_size) {
+-		efi_memory_desc_t *in = (void *)memory_map + l;
++	/*
++	 * To work around potential issues with the Properties Table feature
++	 * introduced in UEFI 2.5, which may split PE/COFF executable images
++	 * in memory into several RuntimeServicesCode and RuntimeServicesData
++	 * regions, we need to preserve the relative offsets between adjacent
++	 * EFI_MEMORY_RUNTIME regions with the same memory type attributes.
++	 * The easiest way to find adjacent regions is to sort the memory map
++	 * before traversing it.
++	 */
++	sort(memory_map, map_size / desc_size, desc_size, cmp_mem_desc, NULL);
++
++	for (l = 0; l < map_size; l += desc_size, prev = in) {
+ 		u64 paddr, size;
+ 
++		in = (void *)memory_map + l;
+ 		if (!(in->attribute & EFI_MEMORY_RUNTIME))
+ 			continue;
+ 
++		paddr = in->phys_addr;
++		size = in->num_pages * EFI_PAGE_SIZE;
++
+ 		/*
+ 		 * Make the mapping compatible with 64k pages: this allows
+ 		 * a 4k page size kernel to kexec a 64k page size kernel and
+ 		 * vice versa.
+ 		 */
+-		paddr = round_down(in->phys_addr, SZ_64K);
+-		size = round_up(in->num_pages * EFI_PAGE_SIZE +
+-				in->phys_addr - paddr, SZ_64K);
+-
+-		/*
+-		 * Avoid wasting memory on PTEs by choosing a virtual base that
+-		 * is compatible with section mappings if this region has the
+-		 * appropriate size and physical alignment. (Sections are 2 MB
+-		 * on 4k granule kernels)
+-		 */
+-		if (IS_ALIGNED(in->phys_addr, SZ_2M) && size >= SZ_2M)
+-			efi_virt_base = round_up(efi_virt_base, SZ_2M);
++		if (!regions_are_adjacent(prev, in) ||
++		    !regions_have_compatible_memory_type_attrs(prev, in)) {
++
++			paddr = round_down(in->phys_addr, SZ_64K);
++			size += in->phys_addr - paddr;
++
++			/*
++			 * Avoid wasting memory on PTEs by choosing a virtual
++			 * base that is compatible with section mappings if this
++			 * region has the appropriate size and physical
++			 * alignment. (Sections are 2 MB on 4k granule kernels)
++			 */
++			if (IS_ALIGNED(in->phys_addr, SZ_2M) && size >= SZ_2M)
++				efi_virt_base = round_up(efi_virt_base, SZ_2M);
++			else
++				efi_virt_base = round_up(efi_virt_base, SZ_64K);
++		}
+ 
+ 		in->virt_addr = efi_virt_base + in->phys_addr - paddr;
+ 		efi_virt_base += size;
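
The arm-stub change above sorts the memory map by physical address before assigning virtual addresses, so that adjacent EFI_MEMORY_RUNTIME regions with compatible attributes can be detected and kept at the same relative offsets. The sketch below reproduces the sort-then-check-adjacency idea with libc qsort() over a fixed-size descriptor array; the struct fields, page size and sample addresses are simplified stand-ins, not real EFI descriptors.

  #include <stdio.h>
  #include <stdlib.h>
  #include <stdbool.h>
  #include <stdint.h>

  #define PAGE_SZ 4096ULL

  struct desc {                 /* simplified memory descriptor */
          uint64_t phys_addr;
          uint64_t num_pages;
  };

  static int cmp_desc(const void *l, const void *r)
  {
          const struct desc *a = l, *b = r;
          return (a->phys_addr > b->phys_addr) ? 1 : -1;
  }

  /* Does @left end exactly where @right starts? */
  static bool adjacent(const struct desc *left, const struct desc *right)
  {
          return left->phys_addr + left->num_pages * PAGE_SZ == right->phys_addr;
  }

  int main(void)
  {
          struct desc map[] = {
                  { 0x80000000, 16 }, { 0x40000000, 4 }, { 0x40004000, 8 },
          };
          size_t i, n = sizeof(map) / sizeof(map[0]);

          qsort(map, n, sizeof(map[0]), cmp_desc);

          for (i = 1; i < n; i++)
                  printf("%#llx -> %#llx adjacent: %d\n",
                         (unsigned long long)map[i - 1].phys_addr,
                         (unsigned long long)map[i].phys_addr,
                         adjacent(&map[i - 1], &map[i]));
          return 0;
  }
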
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+index b4d36f0f2153..c098d762089c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+@@ -140,7 +140,7 @@ void amdgpu_irq_preinstall(struct drm_device *dev)
+  */
+ int amdgpu_irq_postinstall(struct drm_device *dev)
+ {
+-	dev->max_vblank_count = 0x001fffff;
++	dev->max_vblank_count = 0x00ffffff;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+index 2abc661845b6..ddcfbf3b188b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+@@ -543,46 +543,60 @@ static int amdgpu_uvd_cs_msg(struct amdgpu_uvd_cs_ctx *ctx,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (msg_type == 1) {
++	switch (msg_type) {
++	case 0:
++		/* it's a create msg, calc image size (width * height) */
++		amdgpu_bo_kunmap(bo);
++
++		/* try to alloc a new handle */
++		for (i = 0; i < AMDGPU_MAX_UVD_HANDLES; ++i) {
++			if (atomic_read(&adev->uvd.handles[i]) == handle) {
++				DRM_ERROR("Handle 0x%x already in use!\n", handle);
++				return -EINVAL;
++			}
++
++			if (!atomic_cmpxchg(&adev->uvd.handles[i], 0, handle)) {
++				adev->uvd.filp[i] = ctx->parser->filp;
++				return 0;
++			}
++		}
++
++		DRM_ERROR("No more free UVD handles!\n");
++		return -EINVAL;
++
++	case 1:
+ 		/* it's a decode msg, calc buffer sizes */
+ 		r = amdgpu_uvd_cs_msg_decode(msg, ctx->buf_sizes);
+ 		amdgpu_bo_kunmap(bo);
+ 		if (r)
+ 			return r;
+ 
+-	} else if (msg_type == 2) {
++		/* validate the handle */
++		for (i = 0; i < AMDGPU_MAX_UVD_HANDLES; ++i) {
++			if (atomic_read(&adev->uvd.handles[i]) == handle) {
++				if (adev->uvd.filp[i] != ctx->parser->filp) {
++					DRM_ERROR("UVD handle collision detected!\n");
++					return -EINVAL;
++				}
++				return 0;
++			}
++		}
++
++		DRM_ERROR("Invalid UVD handle 0x%x!\n", handle);
++		return -ENOENT;
++
++	case 2:
+ 		/* it's a destroy msg, free the handle */
+ 		for (i = 0; i < AMDGPU_MAX_UVD_HANDLES; ++i)
+ 			atomic_cmpxchg(&adev->uvd.handles[i], handle, 0);
+ 		amdgpu_bo_kunmap(bo);
+ 		return 0;
+-	} else {
+-		/* it's a create msg */
+-		amdgpu_bo_kunmap(bo);
+-
+-		if (msg_type != 0) {
+-			DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type);
+-			return -EINVAL;
+-		}
+-
+-		/* it's a create msg, no special handling needed */
+-	}
+-
+-	/* create or decode, validate the handle */
+-	for (i = 0; i < AMDGPU_MAX_UVD_HANDLES; ++i) {
+-		if (atomic_read(&adev->uvd.handles[i]) == handle)
+-			return 0;
+-	}
+ 
+-	/* handle not found try to alloc a new one */
+-	for (i = 0; i < AMDGPU_MAX_UVD_HANDLES; ++i) {
+-		if (!atomic_cmpxchg(&adev->uvd.handles[i], 0, handle)) {
+-			adev->uvd.filp[i] = ctx->parser->filp;
+-			return 0;
+-		}
++	default:
++		DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type);
++		return -EINVAL;
+ 	}
+-
+-	DRM_ERROR("No more free UVD handles!\n");
++	BUG();
+ 	return -EINVAL;
+ }
+ 
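
In the reorganised message handler above, a create message claims a handle slot with atomic_cmpxchg(&handles[i], 0, handle), so two submitters cannot race for the same slot, and a handle already in use is rejected outright. The sketch below shows only that claim step, using C11 atomics in user space with a toy table; the names and sizes are invented and no amdgpu types are involved.

  #include <stdio.h>
  #include <stdatomic.h>
  #include <stdint.h>

  #define MAX_HANDLES 4

  static _Atomic uint32_t handles[MAX_HANDLES];   /* 0 = free slot */

  /* Claim a free slot for @handle; reject duplicates. Returns slot or -1. */
  static int claim_handle(uint32_t handle)
  {
          for (int i = 0; i < MAX_HANDLES; i++) {
                  uint32_t expected = 0;

                  if (atomic_load(&handles[i]) == handle)
                          return -1;              /* already in use */

                  /* cmpxchg(slot, 0, handle): only succeeds on a free slot */
                  if (atomic_compare_exchange_strong(&handles[i], &expected,
                                                     handle))
                          return i;
          }
          return -1;                              /* table full */
  }

  int main(void)
  {
          printf("claim 0xdead:       slot %d\n", claim_handle(0xdead));
          printf("claim 0xdead again: slot %d\n", claim_handle(0xdead));
          printf("claim 0xbeef:       slot %d\n", claim_handle(0xbeef));
          return 0;
  }
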
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 9a4e3b63f1cb..b07402fc8ded 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -787,7 +787,7 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
+ 	int r;
+ 
+ 	if (mem) {
+-		addr = mem->start << PAGE_SHIFT;
++		addr = (u64)mem->start << PAGE_SHIFT;
+ 		if (mem->mem_type != TTM_PL_TT)
+ 			addr += adev->vm_manager.vram_base_offset;
+ 	} else {
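
The one-line amdgpu_vm.c fix above adds a cast because the shift mem->start << PAGE_SHIFT is evaluated in the (possibly 32-bit) type of mem->start, so high bits can be lost before the result is widened into the 64-bit addr; casting to u64 first makes the shift happen in 64 bits. A small user-space demonstration with made-up values:

  #include <stdio.h>
  #include <stdint.h>

  #define PAGE_SHIFT 12

  int main(void)
  {
          uint32_t start = 0x00300000;   /* page frame number above 4 GiB worth */
          uint64_t bad, good;

          /* Shift performed in 32 bits, then widened: high bits already gone. */
          bad  = start << PAGE_SHIFT;

          /* Widen first, then shift: all bits preserved. */
          good = (uint64_t)start << PAGE_SHIFT;

          printf("bad  = %#llx\n", (unsigned long long)bad);   /* 0          */
          printf("good = %#llx\n", (unsigned long long)good);  /* 0x300000000 */
          return 0;
  }
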
+diff --git a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
+index ae8caca61e04..e60557417049 100644
+--- a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
++++ b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
+@@ -1279,8 +1279,7 @@ amdgpu_atombios_encoder_setup_dig(struct drm_encoder *encoder, int action)
+ 			amdgpu_atombios_encoder_setup_dig_encoder(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0);
+ 		}
+ 		if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
+-			amdgpu_atombios_encoder_setup_dig_transmitter(encoder,
+-							       ATOM_TRANSMITTER_ACTION_LCD_BLON, 0, 0);
++			amdgpu_atombios_encoder_set_backlight_level(amdgpu_encoder, dig->backlight_level);
+ 		if (ext_encoder)
+ 			amdgpu_atombios_encoder_setup_external_encoder(encoder, ext_encoder, ATOM_ENABLE);
+ 	} else {
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
+index 4efd671d7a9b..9488ea6ea93f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
+@@ -224,11 +224,11 @@ static int uvd_v4_2_suspend(void *handle)
+ 	int r;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
+-	r = uvd_v4_2_hw_fini(adev);
++	r = amdgpu_uvd_suspend(adev);
+ 	if (r)
+ 		return r;
+ 
+-	r = amdgpu_uvd_suspend(adev);
++	r = uvd_v4_2_hw_fini(adev);
+ 	if (r)
+ 		return r;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c
+index b756bd99c0fd..d0ed998228ef 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c
+@@ -220,11 +220,11 @@ static int uvd_v5_0_suspend(void *handle)
+ 	int r;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
+-	r = uvd_v5_0_hw_fini(adev);
++	r = amdgpu_uvd_suspend(adev);
+ 	if (r)
+ 		return r;
+ 
+-	r = amdgpu_uvd_suspend(adev);
++	r = uvd_v5_0_hw_fini(adev);
+ 	if (r)
+ 		return r;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+index 49aa931b2cb4..345eb760fd5b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+@@ -214,11 +214,11 @@ static int uvd_v6_0_suspend(void *handle)
+ 	int r;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
+-	r = uvd_v6_0_hw_fini(adev);
++	r = amdgpu_uvd_suspend(adev);
+ 	if (r)
+ 		return r;
+ 
+-	r = amdgpu_uvd_suspend(adev);
++	r = uvd_v6_0_hw_fini(adev);
+ 	if (r)
+ 		return r;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
+index 68552da40287..4f58a1e18de6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vi.c
++++ b/drivers/gpu/drm/amd/amdgpu/vi.c
+@@ -1290,7 +1290,8 @@ static int vi_common_early_init(void *handle)
+ 	case CHIP_CARRIZO:
+ 		adev->has_uvd = true;
+ 		adev->cg_flags = 0;
+-		adev->pg_flags = AMDGPU_PG_SUPPORT_UVD | AMDGPU_PG_SUPPORT_VCE;
++		/* Disable UVD pg */
++		adev->pg_flags = /* AMDGPU_PG_SUPPORT_UVD | */AMDGPU_PG_SUPPORT_VCE;
+ 		adev->external_rev_id = adev->rev_id + 0x1;
+ 		if (amdgpu_smc_load_fw && smc_enabled)
+ 			adev->firmware.smu_load = true;
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index eb603f1defc2..969e7898a7ed 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -804,8 +804,6 @@ static void drm_dp_destroy_mst_branch_device(struct kref *kref)
+ 	struct drm_dp_mst_port *port, *tmp;
+ 	bool wake_tx = false;
+ 
+-	cancel_work_sync(&mstb->mgr->work);
+-
+ 	/*
+ 	 * destroy all ports - don't need lock
+ 	 * as there are no more references to the mst branch
+@@ -863,29 +861,33 @@ static void drm_dp_destroy_port(struct kref *kref)
+ {
+ 	struct drm_dp_mst_port *port = container_of(kref, struct drm_dp_mst_port, kref);
+ 	struct drm_dp_mst_topology_mgr *mgr = port->mgr;
++
+ 	if (!port->input) {
+ 		port->vcpi.num_slots = 0;
+ 
+ 		kfree(port->cached_edid);
+ 
+-		/* we can't destroy the connector here, as
+-		   we might be holding the mode_config.mutex
+-		   from an EDID retrieval */
++		/*
++		 * The only time we don't have a connector
++		 * on an output port is if the connector init
++		 * fails.
++		 */
+ 		if (port->connector) {
++			/* we can't destroy the connector here, as
++			 * we might be holding the mode_config.mutex
++			 * from an EDID retrieval */
++
+ 			mutex_lock(&mgr->destroy_connector_lock);
+ 			list_add(&port->next, &mgr->destroy_connector_list);
+ 			mutex_unlock(&mgr->destroy_connector_lock);
+ 			schedule_work(&mgr->destroy_connector_work);
+ 			return;
+ 		}
++		/* no need to clean up vcpi
++		 * as if we have no connector we never set up a vcpi */
+ 		drm_dp_port_teardown_pdt(port, port->pdt);
+-
+-		if (!port->input && port->vcpi.vcpi > 0)
+-			drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);
+ 	}
+ 	kfree(port);
+-
+-	(*mgr->cbs->hotplug)(mgr);
+ }
+ 
+ static void drm_dp_put_port(struct drm_dp_mst_port *port)
+@@ -1115,12 +1117,21 @@ static void drm_dp_add_port(struct drm_dp_mst_branch *mstb,
+ 		char proppath[255];
+ 		build_mst_prop_path(port, mstb, proppath, sizeof(proppath));
+ 		port->connector = (*mstb->mgr->cbs->add_connector)(mstb->mgr, port, proppath);
+-
++		if (!port->connector) {
++			/* remove it from the port list */
++			mutex_lock(&mstb->mgr->lock);
++			list_del(&port->next);
++			mutex_unlock(&mstb->mgr->lock);
++			/* drop port list reference */
++			drm_dp_put_port(port);
++			goto out;
++		}
+ 		if (port->port_num >= 8) {
+ 			port->cached_edid = drm_get_edid(port->connector, &port->aux.ddc);
+ 		}
+ 	}
+ 
++out:
+ 	/* put reference to this port */
+ 	drm_dp_put_port(port);
+ }
+@@ -1978,6 +1989,8 @@ void drm_dp_mst_topology_mgr_suspend(struct drm_dp_mst_topology_mgr *mgr)
+ 	drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,
+ 			   DP_MST_EN | DP_UPSTREAM_IS_SRC);
+ 	mutex_unlock(&mgr->lock);
++	flush_work(&mgr->work);
++	flush_work(&mgr->destroy_connector_work);
+ }
+ EXPORT_SYMBOL(drm_dp_mst_topology_mgr_suspend);
+ 
+@@ -2661,7 +2674,7 @@ static void drm_dp_destroy_connector_work(struct work_struct *work)
+ {
+ 	struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct drm_dp_mst_topology_mgr, destroy_connector_work);
+ 	struct drm_dp_mst_port *port;
+-
++	bool send_hotplug = false;
+ 	/*
+ 	 * Not a regular list traverse as we have to drop the destroy
+ 	 * connector lock before destroying the connector, to avoid AB->BA
+@@ -2684,7 +2697,10 @@ static void drm_dp_destroy_connector_work(struct work_struct *work)
+ 		if (!port->input && port->vcpi.vcpi > 0)
+ 			drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);
+ 		kfree(port);
++		send_hotplug = true;
+ 	}
++	if (send_hotplug)
++		(*mgr->cbs->hotplug)(mgr);
+ }
+ 
+ /**
+@@ -2737,6 +2753,7 @@ EXPORT_SYMBOL(drm_dp_mst_topology_mgr_init);
+  */
+ void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)
+ {
++	flush_work(&mgr->work);
+ 	flush_work(&mgr->destroy_connector_work);
+ 	mutex_lock(&mgr->payload_lock);
+ 	kfree(mgr->payloads);
+diff --git a/drivers/gpu/drm/drm_lock.c b/drivers/gpu/drm/drm_lock.c
+index f861361a635e..4924d381b664 100644
+--- a/drivers/gpu/drm/drm_lock.c
++++ b/drivers/gpu/drm/drm_lock.c
+@@ -61,6 +61,9 @@ int drm_legacy_lock(struct drm_device *dev, void *data,
+ 	struct drm_master *master = file_priv->master;
+ 	int ret = 0;
+ 
++	if (drm_core_check_feature(dev, DRIVER_MODESET))
++		return -EINVAL;
++
+ 	++file_priv->lock_count;
+ 
+ 	if (lock->context == DRM_KERNEL_CONTEXT) {
+@@ -153,6 +156,9 @@ int drm_legacy_unlock(struct drm_device *dev, void *data, struct drm_file *file_
+ 	struct drm_lock *lock = data;
+ 	struct drm_master *master = file_priv->master;
+ 
++	if (drm_core_check_feature(dev, DRIVER_MODESET))
++		return -EINVAL;
++
+ 	if (lock->context == DRM_KERNEL_CONTEXT) {
+ 		DRM_ERROR("Process %d using kernel context %d\n",
+ 			  task_pid_nr(current), lock->context);
+diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c
+index 198fc3c3291b..17522f733513 100644
+--- a/drivers/gpu/drm/i915/intel_bios.c
++++ b/drivers/gpu/drm/i915/intel_bios.c
+@@ -42,7 +42,7 @@ find_section(const void *_bdb, int section_id)
+ 	const struct bdb_header *bdb = _bdb;
+ 	const u8 *base = _bdb;
+ 	int index = 0;
+-	u16 total, current_size;
++	u32 total, current_size;
+ 	u8 current_id;
+ 
+ 	/* skip to first section */
+@@ -57,6 +57,10 @@ find_section(const void *_bdb, int section_id)
+ 		current_size = *((const u16 *)(base + index));
+ 		index += 2;
+ 
++		/* The MIPI Sequence Block v3+ has a separate size field. */
++		if (current_id == BDB_MIPI_SEQUENCE && *(base + index) >= 3)
++			current_size = *((const u32 *)(base + index + 1));
++
+ 		if (index + current_size > total)
+ 			return NULL;
+ 
+@@ -859,6 +863,12 @@ parse_mipi(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
+ 		return;
+ 	}
+ 
++	/* Fail gracefully for forward incompatible sequence block. */
++	if (sequence->version >= 3) {
++		DRM_ERROR("Unable to parse MIPI Sequence Block v3+\n");
++		return;
++	}
++
+ 	DRM_DEBUG_DRIVER("Found MIPI sequence block\n");
+ 
+ 	block_size = get_blocksize(sequence);
+diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
+index 7c6225c84ba6..4649bd2ed340 100644
+--- a/drivers/gpu/drm/qxl/qxl_display.c
++++ b/drivers/gpu/drm/qxl/qxl_display.c
+@@ -618,7 +618,7 @@ static int qxl_crtc_mode_set(struct drm_crtc *crtc,
+ 		  adjusted_mode->hdisplay,
+ 		  adjusted_mode->vdisplay);
+ 
+-	if (qcrtc->index == 0)
++	if (bo->is_primary == false)
+ 		recreate_primary = true;
+ 
+ 	if (bo->surf.stride * bo->surf.height > qdev->vram_size) {
+@@ -886,13 +886,15 @@ static enum drm_connector_status qxl_conn_detect(
+ 		drm_connector_to_qxl_output(connector);
+ 	struct drm_device *ddev = connector->dev;
+ 	struct qxl_device *qdev = ddev->dev_private;
+-	int connected;
++	bool connected = false;
+ 
+ 	/* The first monitor is always connected */
+-	connected = (output->index == 0) ||
+-		    (qdev->client_monitors_config &&
+-		     qdev->client_monitors_config->count > output->index &&
+-		     qxl_head_enabled(&qdev->client_monitors_config->heads[output->index]));
++	if (!qdev->client_monitors_config) {
++		if (output->index == 0)
++			connected = true;
++	} else
++		connected = qdev->client_monitors_config->count > output->index &&
++		     qxl_head_enabled(&qdev->client_monitors_config->heads[output->index]);
+ 
+ 	DRM_DEBUG("#%d connected: %d\n", output->index, connected);
+ 	if (!connected)
+diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
+index c3872598b85a..65adb9c72377 100644
+--- a/drivers/gpu/drm/radeon/atombios_encoders.c
++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
+@@ -1624,8 +1624,9 @@ radeon_atom_encoder_dpms_avivo(struct drm_encoder *encoder, int mode)
+ 		} else
+ 			atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
+ 		if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
+-			args.ucAction = ATOM_LCD_BLON;
+-			atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
++			struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
++
++			atombios_set_backlight_level(radeon_encoder, dig->backlight_level);
+ 		}
+ 		break;
+ 	case DRM_MODE_DPMS_STANDBY:
+@@ -1706,8 +1707,7 @@ radeon_atom_encoder_dpms_dig(struct drm_encoder *encoder, int mode)
+ 				atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0);
+ 		}
+ 		if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
+-			atombios_dig_transmitter_setup(encoder,
+-						       ATOM_TRANSMITTER_ACTION_LCD_BLON, 0, 0);
++			atombios_set_backlight_level(radeon_encoder, dig->backlight_level);
+ 		if (ext_encoder)
+ 			atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE);
+ 		break;
+diff --git a/drivers/hv/hv_utils_transport.c b/drivers/hv/hv_utils_transport.c
+index ea7ba5ef16a9..6a9d80a5332d 100644
+--- a/drivers/hv/hv_utils_transport.c
++++ b/drivers/hv/hv_utils_transport.c
+@@ -186,7 +186,7 @@ int hvutil_transport_send(struct hvutil_transport *hvt, void *msg, int len)
+ 		return -EINVAL;
+ 	} else if (hvt->mode == HVUTIL_TRANSPORT_NETLINK) {
+ 		cn_msg = kzalloc(sizeof(*cn_msg) + len, GFP_ATOMIC);
+-		if (!msg)
++		if (!cn_msg)
+ 			return -ENOMEM;
+ 		cn_msg->id.idx = hvt->cn_id.idx;
+ 		cn_msg->id.val = hvt->cn_id.val;
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index bd1c99deac71..2aaedbe0b023 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -354,6 +354,10 @@ static const u16 NCT6775_REG_TEMP_CRIT[ARRAY_SIZE(nct6775_temp_label) - 1]
+ 
+ /* NCT6776 specific data */
+ 
++/* STEP_UP_TIME and STEP_DOWN_TIME regs are swapped for all chips but NCT6775 */
++#define NCT6776_REG_FAN_STEP_UP_TIME NCT6775_REG_FAN_STEP_DOWN_TIME
++#define NCT6776_REG_FAN_STEP_DOWN_TIME NCT6775_REG_FAN_STEP_UP_TIME
++
+ static const s8 NCT6776_ALARM_BITS[] = {
+ 	0, 1, 2, 3, 8, 21, 20, 16,	/* in0.. in7 */
+ 	17, -1, -1, -1, -1, -1, -1,	/* in8..in14 */
+@@ -3528,8 +3532,8 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		data->REG_FAN_PULSES = NCT6776_REG_FAN_PULSES;
+ 		data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
+ 		data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
+-		data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
+-		data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
++		data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
++		data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
+ 		data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
+ 		data->REG_PWM[0] = NCT6775_REG_PWM;
+ 		data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
+@@ -3600,8 +3604,8 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		data->REG_FAN_PULSES = NCT6779_REG_FAN_PULSES;
+ 		data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
+ 		data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
+-		data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
+-		data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
++		data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
++		data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
+ 		data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
+ 		data->REG_PWM[0] = NCT6775_REG_PWM;
+ 		data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
+@@ -3677,8 +3681,8 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		data->REG_FAN_PULSES = NCT6779_REG_FAN_PULSES;
+ 		data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
+ 		data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
+-		data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
+-		data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
++		data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
++		data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
+ 		data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
+ 		data->REG_PWM[0] = NCT6775_REG_PWM;
+ 		data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index d851e1828d6f..85761b78bb5f 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -3012,9 +3012,16 @@ isert_get_dataout(struct iscsi_conn *conn, struct iscsi_cmd *cmd, bool recovery)
+ static int
+ isert_immediate_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd, int state)
+ {
+-	int ret;
++	struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd);
++	int ret = 0;
+ 
+ 	switch (state) {
++	case ISTATE_REMOVE:
++		spin_lock_bh(&conn->cmd_lock);
++		list_del_init(&cmd->i_conn_node);
++		spin_unlock_bh(&conn->cmd_lock);
++		isert_put_cmd(isert_cmd, true);
++		break;
+ 	case ISTATE_SEND_NOPIN_WANT_RESPONSE:
+ 		ret = isert_put_nopin(cmd, conn, false);
+ 		break;
+@@ -3379,6 +3386,41 @@ isert_wait4flush(struct isert_conn *isert_conn)
+ 	wait_for_completion(&isert_conn->wait_comp_err);
+ }
+ 
++/**
++ * isert_put_unsol_pending_cmds() - Drop commands waiting for
++ *     unsolicited dataout
++ * @conn:    iscsi connection
++ *
++ * We might still have commands that are waiting for unsolicited
++ * dataout messages. We must put the extra reference on those
++ * before blocking on the target_wait_for_session_cmds
++ */
++static void
++isert_put_unsol_pending_cmds(struct iscsi_conn *conn)
++{
++	struct iscsi_cmd *cmd, *tmp;
++	static LIST_HEAD(drop_cmd_list);
++
++	spin_lock_bh(&conn->cmd_lock);
++	list_for_each_entry_safe(cmd, tmp, &conn->conn_cmd_list, i_conn_node) {
++		if ((cmd->cmd_flags & ICF_NON_IMMEDIATE_UNSOLICITED_DATA) &&
++		    (cmd->write_data_done < conn->sess->sess_ops->FirstBurstLength) &&
++		    (cmd->write_data_done < cmd->se_cmd.data_length))
++			list_move_tail(&cmd->i_conn_node, &drop_cmd_list);
++	}
++	spin_unlock_bh(&conn->cmd_lock);
++
++	list_for_each_entry_safe(cmd, tmp, &drop_cmd_list, i_conn_node) {
++		list_del_init(&cmd->i_conn_node);
++		if (cmd->i_state != ISTATE_REMOVE) {
++			struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd);
++
++			isert_info("conn %p dropping cmd %p\n", conn, cmd);
++			isert_put_cmd(isert_cmd, true);
++		}
++	}
++}
++
+ static void isert_wait_conn(struct iscsi_conn *conn)
+ {
+ 	struct isert_conn *isert_conn = conn->context;
+@@ -3397,8 +3439,9 @@ static void isert_wait_conn(struct iscsi_conn *conn)
+ 	isert_conn_terminate(isert_conn);
+ 	mutex_unlock(&isert_conn->mutex);
+ 
+-	isert_wait4cmds(conn);
+ 	isert_wait4flush(isert_conn);
++	isert_put_unsol_pending_cmds(conn);
++	isert_wait4cmds(conn);
+ 	isert_wait4logout(isert_conn);
+ 
+ 	queue_work(isert_release_wq, &isert_conn->release_work);
+diff --git a/drivers/irqchip/irq-atmel-aic5.c b/drivers/irqchip/irq-atmel-aic5.c
+index 459bf4429d36..7e077bf13fe1 100644
+--- a/drivers/irqchip/irq-atmel-aic5.c
++++ b/drivers/irqchip/irq-atmel-aic5.c
+@@ -88,28 +88,36 @@ static void aic5_mask(struct irq_data *d)
+ {
+ 	struct irq_domain *domain = d->domain;
+ 	struct irq_domain_chip_generic *dgc = domain->gc;
+-	struct irq_chip_generic *gc = dgc->gc[0];
++	struct irq_chip_generic *bgc = dgc->gc[0];
++	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 
+-	/* Disable interrupt on AIC5 */
+-	irq_gc_lock(gc);
++	/*
++	 * Disable interrupt on AIC5. We always take the lock of the
++	 * first irq chip as all chips share the same registers.
++	 */
++	irq_gc_lock(bgc);
+ 	irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR);
+ 	irq_reg_writel(gc, 1, AT91_AIC5_IDCR);
+ 	gc->mask_cache &= ~d->mask;
+-	irq_gc_unlock(gc);
++	irq_gc_unlock(bgc);
+ }
+ 
+ static void aic5_unmask(struct irq_data *d)
+ {
+ 	struct irq_domain *domain = d->domain;
+ 	struct irq_domain_chip_generic *dgc = domain->gc;
+-	struct irq_chip_generic *gc = dgc->gc[0];
++	struct irq_chip_generic *bgc = dgc->gc[0];
++	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 
+-	/* Enable interrupt on AIC5 */
+-	irq_gc_lock(gc);
++	/*
++	 * Enable interrupt on AIC5. We always take the lock of the
++	 * first irq chip as all chips share the same registers.
++	 */
++	irq_gc_lock(bgc);
+ 	irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR);
+ 	irq_reg_writel(gc, 1, AT91_AIC5_IECR);
+ 	gc->mask_cache |= d->mask;
+-	irq_gc_unlock(gc);
++	irq_gc_unlock(bgc);
+ }
+ 
+ static int aic5_retrigger(struct irq_data *d)
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index c00e2db351ba..9a791dd52199 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -921,8 +921,10 @@ retry_baser:
+ 			 * non-cacheable as well.
+ 			 */
+ 			shr = tmp & GITS_BASER_SHAREABILITY_MASK;
+-			if (!shr)
++			if (!shr) {
+ 				cache = GITS_BASER_nC;
++				__flush_dcache_area(base, alloc_size);
++			}
+ 			goto retry_baser;
+ 		}
+ 
+@@ -1163,6 +1165,8 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id,
+ 		return NULL;
+ 	}
+ 
++	__flush_dcache_area(itt, sz);
++
+ 	dev->its = its;
+ 	dev->itt = itt;
+ 	dev->nr_ites = nr_ites;
+diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
+index 9ad35f72ab4c..433fb9df848a 100644
+--- a/drivers/leds/Kconfig
++++ b/drivers/leds/Kconfig
+@@ -229,7 +229,7 @@ config LEDS_LP55XX_COMMON
+ 	tristate "Common Driver for TI/National LP5521/5523/55231/5562/8501"
+ 	depends on LEDS_LP5521 || LEDS_LP5523 || LEDS_LP5562 || LEDS_LP8501
+ 	select FW_LOADER
+-	select FW_LOADER_USER_HELPER_FALLBACK
++	select FW_LOADER_USER_HELPER
+ 	help
+ 	  This option supports common operations for LP5521/5523/55231/5562/8501
+ 	  devices.
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index beabfbc6f7cd..ca51d58bed24 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -228,12 +228,15 @@ static int led_classdev_next_name(const char *init_name, char *name,
+ {
+ 	unsigned int i = 0;
+ 	int ret = 0;
++	struct device *dev;
+ 
+ 	strlcpy(name, init_name, len);
+ 
+-	while (class_find_device(leds_class, NULL, name, match_name) &&
+-	       (ret < len))
++	while ((ret < len) &&
++	       (dev = class_find_device(leds_class, NULL, name, match_name))) {
++		put_device(dev);
+ 		ret = snprintf(name, len, "%s_%u", init_name, ++i);
++	}
+ 
+ 	if (ret >= len)
+ 		return -ENOMEM;
+diff --git a/drivers/macintosh/windfarm_core.c b/drivers/macintosh/windfarm_core.c
+index 3ee198b65843..cc7ece1712b5 100644
+--- a/drivers/macintosh/windfarm_core.c
++++ b/drivers/macintosh/windfarm_core.c
+@@ -435,7 +435,7 @@ int wf_unregister_client(struct notifier_block *nb)
+ {
+ 	mutex_lock(&wf_lock);
+ 	blocking_notifier_chain_unregister(&wf_client_list, nb);
+-	wf_client_count++;
++	wf_client_count--;
+ 	if (wf_client_count == 0)
+ 		wf_stop_thread();
+ 	mutex_unlock(&wf_lock);
+diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c
+index e51de52eeb94..48b5890c28e3 100644
+--- a/drivers/md/bitmap.c
++++ b/drivers/md/bitmap.c
+@@ -1997,7 +1997,8 @@ int bitmap_resize(struct bitmap *bitmap, sector_t blocks,
+ 	if (bitmap->mddev->bitmap_info.offset || bitmap->mddev->bitmap_info.file)
+ 		ret = bitmap_storage_alloc(&store, chunks,
+ 					   !bitmap->mddev->bitmap_info.external,
+-					   bitmap->cluster_slot);
++					   mddev_is_clustered(bitmap->mddev)
++					   ? bitmap->cluster_slot : 0);
+ 	if (ret)
+ 		goto err;
+ 
+diff --git a/drivers/md/dm-cache-policy-cleaner.c b/drivers/md/dm-cache-policy-cleaner.c
+index 240c9f0e85e7..8a096456579b 100644
+--- a/drivers/md/dm-cache-policy-cleaner.c
++++ b/drivers/md/dm-cache-policy-cleaner.c
+@@ -436,7 +436,7 @@ static struct dm_cache_policy *wb_create(dm_cblock_t cache_size,
+ static struct dm_cache_policy_type wb_policy_type = {
+ 	.name = "cleaner",
+ 	.version = {1, 0, 0},
+-	.hint_size = 0,
++	.hint_size = 4,
+ 	.owner = THIS_MODULE,
+ 	.create = wb_create
+ };
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 0f48fed44a17..0d28c5b9d065 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -968,7 +968,8 @@ static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone);
+ 
+ /*
+  * Generate a new unfragmented bio with the given size
+- * This should never violate the device limitations
++ * This should never violate the device limitations (but only because
++ * max_segment_size is being constrained to PAGE_SIZE).
+  *
+  * This function may be called concurrently. If we allocate from the mempool
+  * concurrently, there is a possibility of deadlock. For example, if we have
+@@ -2058,9 +2059,20 @@ static int crypt_iterate_devices(struct dm_target *ti,
+ 	return fn(ti, cc->dev, cc->start, ti->len, data);
+ }
+ 
++static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
++{
++	/*
++	 * Unfortunate constraint that is required to avoid the potential
++	 * for exceeding underlying device's max_segments limits -- due to
++	 * crypt_alloc_buffer() possibly allocating pages for the encryption
++	 * bio that are not as physically contiguous as the original bio.
++	 */
++	limits->max_segment_size = PAGE_SIZE;
++}
++
+ static struct target_type crypt_target = {
+ 	.name   = "crypt",
+-	.version = {1, 14, 0},
++	.version = {1, 14, 1},
+ 	.module = THIS_MODULE,
+ 	.ctr    = crypt_ctr,
+ 	.dtr    = crypt_dtr,
+@@ -2072,6 +2084,7 @@ static struct target_type crypt_target = {
+ 	.message = crypt_message,
+ 	.merge  = crypt_merge,
+ 	.iterate_devices = crypt_iterate_devices,
++	.io_hints = crypt_io_hints,
+ };
+ 
+ static int __init dm_crypt_init(void)
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 2daa67793511..1257d484392a 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -329,8 +329,7 @@ static int validate_region_size(struct raid_set *rs, unsigned long region_size)
+ 		 */
+ 		if (min_region_size > (1 << 13)) {
+ 			/* If not a power of 2, make it the next power of 2 */
+-			if (min_region_size & (min_region_size - 1))
+-				region_size = 1 << fls(region_size);
++			region_size = roundup_pow_of_two(min_region_size);
+ 			DMINFO("Choosing default region size of %lu sectors",
+ 			       region_size);
+ 		} else {
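
The dm-raid hunk above replaces an open-coded rounding that tested min_region_size but shifted region_size with a call to roundup_pow_of_two(min_region_size). For reference, next-power-of-two rounding behaves as in the user-space sketch below; this is a simple loop variant for illustration, whereas the kernel helper computes the same result via fls().

  #include <stdio.h>

  /* Round @n up to the next power of two (assumes n >= 1). */
  static unsigned long roundup_pow_of_two_ul(unsigned long n)
  {
          unsigned long p = 1;

          while (p < n)
                  p <<= 1;
          return p;
  }

  int main(void)
  {
          unsigned long v[] = { 8192, 8193, 12000, 16384 };

          for (unsigned i = 0; i < sizeof(v) / sizeof(v[0]); i++)
                  printf("%lu -> %lu\n", v[i], roundup_pow_of_two_ul(v[i]));
          return 0;
  }
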
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index d2bbe8cc1e97..75aef240c2d1 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -4333,6 +4333,10 @@ static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ {
+ 	struct thin_c *tc = ti->private;
+ 	struct pool *pool = tc->pool;
++	struct queue_limits *pool_limits = dm_get_queue_limits(pool->pool_md);
++
++	if (!pool_limits->discard_granularity)
++		return; /* pool's discard support is disabled */
+ 
+ 	limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
+ 	limits->max_discard_sectors = 2048 * 1024 * 16; /* 16G */
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 0d7ab20c58df..3e32f4e31bbb 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2952,8 +2952,6 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
+ 
+ 	might_sleep();
+ 
+-	map = dm_get_live_table(md, &srcu_idx);
+-
+ 	spin_lock(&_minor_lock);
+ 	idr_replace(&_minor_idr, MINOR_ALLOCED, MINOR(disk_devt(dm_disk(md))));
+ 	set_bit(DMF_FREEING, &md->flags);
+@@ -2967,14 +2965,14 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
+ 	 * do not race with internal suspend.
+ 	 */
+ 	mutex_lock(&md->suspend_lock);
++	map = dm_get_live_table(md, &srcu_idx);
+ 	if (!dm_suspended_md(md)) {
+ 		dm_table_presuspend_targets(map);
+ 		dm_table_postsuspend_targets(map);
+ 	}
+-	mutex_unlock(&md->suspend_lock);
+-
+ 	/* dm_put_live_table must be before msleep, otherwise deadlock is possible */
+ 	dm_put_live_table(md, srcu_idx);
++	mutex_unlock(&md->suspend_lock);
+ 
+ 	/*
+ 	 * Rare, but there may be I/O requests still going to complete,
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index efb654eb5399..0875e5e7e09a 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -83,7 +83,7 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ 	char b[BDEVNAME_SIZE];
+ 	char b2[BDEVNAME_SIZE];
+ 	struct r0conf *conf = kzalloc(sizeof(*conf), GFP_KERNEL);
+-	bool discard_supported = false;
++	unsigned short blksize = 512;
+ 
+ 	if (!conf)
+ 		return -ENOMEM;
+@@ -98,6 +98,9 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ 		sector_div(sectors, mddev->chunk_sectors);
+ 		rdev1->sectors = sectors * mddev->chunk_sectors;
+ 
++		blksize = max(blksize, queue_logical_block_size(
++				      rdev1->bdev->bd_disk->queue));
++
+ 		rdev_for_each(rdev2, mddev) {
+ 			pr_debug("md/raid0:%s:   comparing %s(%llu)"
+ 				 " with %s(%llu)\n",
+@@ -134,6 +137,18 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ 	}
+ 	pr_debug("md/raid0:%s: FINAL %d zones\n",
+ 		 mdname(mddev), conf->nr_strip_zones);
++	/*
++	 * now since we have the hard sector sizes, we can make sure
++	 * chunk size is a multiple of that sector size
++	 */
++	if ((mddev->chunk_sectors << 9) % blksize) {
++		printk(KERN_ERR "md/raid0:%s: chunk_size of %d not multiple of block size %d\n",
++		       mdname(mddev),
++		       mddev->chunk_sectors << 9, blksize);
++		err = -EINVAL;
++		goto abort;
++	}
++
+ 	err = -ENOMEM;
+ 	conf->strip_zone = kzalloc(sizeof(struct strip_zone)*
+ 				conf->nr_strip_zones, GFP_KERNEL);
+@@ -188,19 +203,12 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ 		}
+ 		dev[j] = rdev1;
+ 
+-		if (mddev->queue)
+-			disk_stack_limits(mddev->gendisk, rdev1->bdev,
+-					  rdev1->data_offset << 9);
+-
+ 		if (rdev1->bdev->bd_disk->queue->merge_bvec_fn)
+ 			conf->has_merge_bvec = 1;
+ 
+ 		if (!smallest || (rdev1->sectors < smallest->sectors))
+ 			smallest = rdev1;
+ 		cnt++;
+-
+-		if (blk_queue_discard(bdev_get_queue(rdev1->bdev)))
+-			discard_supported = true;
+ 	}
+ 	if (cnt != mddev->raid_disks) {
+ 		printk(KERN_ERR "md/raid0:%s: too few disks (%d of %d) - "
+@@ -261,28 +269,6 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ 			 (unsigned long long)smallest->sectors);
+ 	}
+ 
+-	/*
+-	 * now since we have the hard sector sizes, we can make sure
+-	 * chunk size is a multiple of that sector size
+-	 */
+-	if ((mddev->chunk_sectors << 9) % queue_logical_block_size(mddev->queue)) {
+-		printk(KERN_ERR "md/raid0:%s: chunk_size of %d not valid\n",
+-		       mdname(mddev),
+-		       mddev->chunk_sectors << 9);
+-		goto abort;
+-	}
+-
+-	if (mddev->queue) {
+-		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
+-		blk_queue_io_opt(mddev->queue,
+-				 (mddev->chunk_sectors << 9) * mddev->raid_disks);
+-
+-		if (!discard_supported)
+-			queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
+-		else
+-			queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
+-	}
+-
+ 	pr_debug("md/raid0:%s: done.\n", mdname(mddev));
+ 	*private_conf = conf;
+ 
+@@ -433,12 +419,6 @@ static int raid0_run(struct mddev *mddev)
+ 	if (md_check_no_bitmap(mddev))
+ 		return -EINVAL;
+ 
+-	if (mddev->queue) {
+-		blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
+-		blk_queue_max_write_same_sectors(mddev->queue, mddev->chunk_sectors);
+-		blk_queue_max_discard_sectors(mddev->queue, mddev->chunk_sectors);
+-	}
+-
+ 	/* if private is not null, we are here after takeover */
+ 	if (mddev->private == NULL) {
+ 		ret = create_strip_zones(mddev, &conf);
+@@ -447,6 +427,29 @@ static int raid0_run(struct mddev *mddev)
+ 		mddev->private = conf;
+ 	}
+ 	conf = mddev->private;
++	if (mddev->queue) {
++		struct md_rdev *rdev;
++		bool discard_supported = false;
++
++		blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
++		blk_queue_max_write_same_sectors(mddev->queue, mddev->chunk_sectors);
++		blk_queue_max_discard_sectors(mddev->queue, mddev->chunk_sectors);
++
++		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
++		blk_queue_io_opt(mddev->queue,
++				 (mddev->chunk_sectors << 9) * mddev->raid_disks);
++
++		rdev_for_each(rdev, mddev) {
++			disk_stack_limits(mddev->gendisk, rdev->bdev,
++					  rdev->data_offset << 9);
++			if (blk_queue_discard(bdev_get_queue(rdev->bdev)))
++				discard_supported = true;
++		}
++		if (!discard_supported)
++			queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
++		else
++			queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
++	}
+ 
+ 	/* calculate array device size */
+ 	md_set_array_sectors(mddev, raid0_size(mddev, 0, 0));
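
Two things happen in the raid0 change: the chunk-size sanity check now runs only after the largest member logical block size has been collected, and the queue-limit setup (disk_stack_limits, io_min/io_opt, discard flag) moves from create_strip_zones() into raid0_run(). A minimal sketch of the check itself, with illustrative names:

static int chunk_size_valid(unsigned int chunk_sectors, unsigned short blksize)
{
	unsigned int chunk_bytes = chunk_sectors << 9;	/* 512-byte sectors */

	return (chunk_bytes % blksize) == 0;	/* whole number of blocks */
}
/* e.g. chunk_size_valid(128, 4096) is true: 64 KiB chunks on 4 KiB blocks */
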
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index 9e3fdbdc4037..2f4503a7f315 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -134,9 +134,11 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq)
+ 	int err = cmd->error;
+ 
+ 	/* Flag re-tuning needed on CRC errors */
+-	if (err == -EILSEQ || (mrq->sbc && mrq->sbc->error == -EILSEQ) ||
++	if ((cmd->opcode != MMC_SEND_TUNING_BLOCK &&
++	    cmd->opcode != MMC_SEND_TUNING_BLOCK_HS200) &&
++	    (err == -EILSEQ || (mrq->sbc && mrq->sbc->error == -EILSEQ) ||
+ 	    (mrq->data && mrq->data->error == -EILSEQ) ||
+-	    (mrq->stop && mrq->stop->error == -EILSEQ))
++	    (mrq->stop && mrq->stop->error == -EILSEQ)))
+ 		mmc_retune_needed(host);
+ 
+ 	if (err && cmd->retries && mmc_host_is_spi(host)) {
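
The added condition keeps errors seen on the tuning commands themselves, MMC_SEND_TUNING_BLOCK (CMD19) and MMC_SEND_TUNING_BLOCK_HS200 (CMD21), from immediately re-arming re-tuning. A compact restatement of the predicate, folding the sbc/data/stop checks into a single crc_error flag for brevity:

static bool should_flag_retune(unsigned int opcode, bool crc_error)
{
	bool tuning_cmd = (opcode == 19 || opcode == 21);	/* CMD19 / CMD21 */

	return crc_error && !tuning_cmd;
}
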
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index 99a9c9011c50..79979e9d5a09 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -457,7 +457,7 @@ int mmc_of_parse(struct mmc_host *host)
+ 					   0, &cd_gpio_invert);
+ 		if (!ret)
+ 			dev_info(host->parent, "Got CD GPIO\n");
+-		else if (ret != -ENOENT)
++		else if (ret != -ENOENT && ret != -ENOSYS)
+ 			return ret;
+ 
+ 		/*
+@@ -481,7 +481,7 @@ int mmc_of_parse(struct mmc_host *host)
+ 	ret = mmc_gpiod_request_ro(host, "wp", 0, false, 0, &ro_gpio_invert);
+ 	if (!ret)
+ 		dev_info(host->parent, "Got WP GPIO\n");
+-	else if (ret != -ENOENT)
++	else if (ret != -ENOENT && ret != -ENOSYS)
+ 		return ret;
+ 
+ 	if (of_property_read_bool(np, "disable-wp"))
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 40e9d8e45f25..e41fb7405426 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -99,6 +99,9 @@ struct idmac_desc {
+ 
+ 	__le32		des3;	/* buffer 2 physical address */
+ };
++
++/* Each descriptor can transfer up to 4KB of data in chained mode */
++#define DW_MCI_DESC_DATA_LENGTH	0x1000
+ #endif /* CONFIG_MMC_DW_IDMAC */
+ 
+ static bool dw_mci_reset(struct dw_mci *host);
+@@ -462,66 +465,96 @@ static void dw_mci_idmac_complete_dma(struct dw_mci *host)
+ static void dw_mci_translate_sglist(struct dw_mci *host, struct mmc_data *data,
+ 				    unsigned int sg_len)
+ {
++	unsigned int desc_len;
+ 	int i;
+ 	if (host->dma_64bit_address == 1) {
+-		struct idmac_desc_64addr *desc = host->sg_cpu;
++		struct idmac_desc_64addr *desc_first, *desc_last, *desc;
++
++		desc_first = desc_last = desc = host->sg_cpu;
+ 
+-		for (i = 0; i < sg_len; i++, desc++) {
++		for (i = 0; i < sg_len; i++) {
+ 			unsigned int length = sg_dma_len(&data->sg[i]);
+ 			u64 mem_addr = sg_dma_address(&data->sg[i]);
+ 
+-			/*
+-			 * Set the OWN bit and disable interrupts for this
+-			 * descriptor
+-			 */
+-			desc->des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC |
+-						IDMAC_DES0_CH;
+-			/* Buffer length */
+-			IDMAC_64ADDR_SET_BUFFER1_SIZE(desc, length);
+-
+-			/* Physical address to DMA to/from */
+-			desc->des4 = mem_addr & 0xffffffff;
+-			desc->des5 = mem_addr >> 32;
++			for ( ; length ; desc++) {
++				desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ?
++					   length : DW_MCI_DESC_DATA_LENGTH;
++
++				length -= desc_len;
++
++				/*
++				 * Set the OWN bit and disable interrupts
++				 * for this descriptor
++				 */
++				desc->des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC |
++							IDMAC_DES0_CH;
++
++				/* Buffer length */
++				IDMAC_64ADDR_SET_BUFFER1_SIZE(desc, desc_len);
++
++				/* Physical address to DMA to/from */
++				desc->des4 = mem_addr & 0xffffffff;
++				desc->des5 = mem_addr >> 32;
++
++				/* Update physical address for the next desc */
++				mem_addr += desc_len;
++
++				/* Save pointer to the last descriptor */
++				desc_last = desc;
++			}
+ 		}
+ 
+ 		/* Set first descriptor */
+-		desc = host->sg_cpu;
+-		desc->des0 |= IDMAC_DES0_FD;
++		desc_first->des0 |= IDMAC_DES0_FD;
+ 
+ 		/* Set last descriptor */
+-		desc = host->sg_cpu + (i - 1) *
+-				sizeof(struct idmac_desc_64addr);
+-		desc->des0 &= ~(IDMAC_DES0_CH | IDMAC_DES0_DIC);
+-		desc->des0 |= IDMAC_DES0_LD;
++		desc_last->des0 &= ~(IDMAC_DES0_CH | IDMAC_DES0_DIC);
++		desc_last->des0 |= IDMAC_DES0_LD;
+ 
+ 	} else {
+-		struct idmac_desc *desc = host->sg_cpu;
++		struct idmac_desc *desc_first, *desc_last, *desc;
++
++		desc_first = desc_last = desc = host->sg_cpu;
+ 
+-		for (i = 0; i < sg_len; i++, desc++) {
++		for (i = 0; i < sg_len; i++) {
+ 			unsigned int length = sg_dma_len(&data->sg[i]);
+ 			u32 mem_addr = sg_dma_address(&data->sg[i]);
+ 
+-			/*
+-			 * Set the OWN bit and disable interrupts for this
+-			 * descriptor
+-			 */
+-			desc->des0 = cpu_to_le32(IDMAC_DES0_OWN |
+-					IDMAC_DES0_DIC | IDMAC_DES0_CH);
+-			/* Buffer length */
+-			IDMAC_SET_BUFFER1_SIZE(desc, length);
++			for ( ; length ; desc++) {
++				desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ?
++					   length : DW_MCI_DESC_DATA_LENGTH;
++
++				length -= desc_len;
++
++				/*
++				 * Set the OWN bit and disable interrupts
++				 * for this descriptor
++				 */
++				desc->des0 = cpu_to_le32(IDMAC_DES0_OWN |
++							 IDMAC_DES0_DIC |
++							 IDMAC_DES0_CH);
++
++				/* Buffer length */
++				IDMAC_SET_BUFFER1_SIZE(desc, desc_len);
+ 
+-			/* Physical address to DMA to/from */
+-			desc->des2 = cpu_to_le32(mem_addr);
++				/* Physical address to DMA to/from */
++				desc->des2 = cpu_to_le32(mem_addr);
++
++				/* Update physical address for the next desc */
++				mem_addr += desc_len;
++
++				/* Save pointer to the last descriptor */
++				desc_last = desc;
++			}
+ 		}
+ 
+ 		/* Set first descriptor */
+-		desc = host->sg_cpu;
+-		desc->des0 |= cpu_to_le32(IDMAC_DES0_FD);
++		desc_first->des0 |= cpu_to_le32(IDMAC_DES0_FD);
+ 
+ 		/* Set last descriptor */
+-		desc = host->sg_cpu + (i - 1) * sizeof(struct idmac_desc);
+-		desc->des0 &= cpu_to_le32(~(IDMAC_DES0_CH | IDMAC_DES0_DIC));
+-		desc->des0 |= cpu_to_le32(IDMAC_DES0_LD);
++		desc_last->des0 &= cpu_to_le32(~(IDMAC_DES0_CH |
++					       IDMAC_DES0_DIC));
++		desc_last->des0 |= cpu_to_le32(IDMAC_DES0_LD);
+ 	}
+ 
+ 	wmb();
+@@ -2394,7 +2427,7 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
+ #ifdef CONFIG_MMC_DW_IDMAC
+ 		mmc->max_segs = host->ring_size;
+ 		mmc->max_blk_size = 65536;
+-		mmc->max_seg_size = 0x1000;
++		mmc->max_seg_size = DW_MCI_DESC_DATA_LENGTH;
+ 		mmc->max_req_size = mmc->max_seg_size * host->ring_size;
+ 		mmc->max_blk_count = mmc->max_req_size / 512;
+ #else
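
The rewritten dw_mci_translate_sglist() no longer assumes one IDMAC descriptor per scatterlist entry; each entry is split into DW_MCI_DESC_DATA_LENGTH (4 KiB) sized pieces and chained, and max_seg_size now advertises the same constant. A small sketch of the splitting arithmetic only, with illustrative names and no hardware access:

#define DESC_MAX	0x1000u		/* bytes one descriptor can carry */

static unsigned int descs_needed(unsigned int seg_len)
{
	return (seg_len + DESC_MAX - 1) / DESC_MAX;	/* round up */
}
/* descs_needed(0x2200) == 3: an 8.5 KiB segment spans three descriptors */
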
+diff --git a/drivers/mmc/host/sdhci-pxav3.c b/drivers/mmc/host/sdhci-pxav3.c
+index 946d37f94a31..f5edf9d3a18a 100644
+--- a/drivers/mmc/host/sdhci-pxav3.c
++++ b/drivers/mmc/host/sdhci-pxav3.c
+@@ -135,6 +135,7 @@ static int armada_38x_quirks(struct platform_device *pdev,
+ 	struct sdhci_pxa *pxa = pltfm_host->priv;
+ 	struct resource *res;
+ 
++	host->quirks &= ~SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN;
+ 	host->quirks |= SDHCI_QUIRK_MISSING_CAPS;
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ 					   "conf-sdio3");
+@@ -290,6 +291,9 @@ static void pxav3_set_uhs_signaling(struct sdhci_host *host, unsigned int uhs)
+ 		    uhs == MMC_TIMING_UHS_DDR50) {
+ 			reg_val &= ~SDIO3_CONF_CLK_INV;
+ 			reg_val |= SDIO3_CONF_SD_FB_CLK;
++		} else if (uhs == MMC_TIMING_MMC_HS) {
++			reg_val &= ~SDIO3_CONF_CLK_INV;
++			reg_val &= ~SDIO3_CONF_SD_FB_CLK;
+ 		} else {
+ 			reg_val |= SDIO3_CONF_CLK_INV;
+ 			reg_val &= ~SDIO3_CONF_SD_FB_CLK;
+@@ -398,7 +402,7 @@ static int sdhci_pxav3_probe(struct platform_device *pdev)
+ 	if (of_device_is_compatible(np, "marvell,armada-380-sdhci")) {
+ 		ret = armada_38x_quirks(pdev, host);
+ 		if (ret < 0)
+-			goto err_clk_get;
++			goto err_mbus_win;
+ 		ret = mv_conf_mbus_windows(pdev, mv_mbus_dram_info());
+ 		if (ret < 0)
+ 			goto err_mbus_win;
+diff --git a/drivers/mtd/nand/pxa3xx_nand.c b/drivers/mtd/nand/pxa3xx_nand.c
+index 1259cc558ce9..5465fa439c9e 100644
+--- a/drivers/mtd/nand/pxa3xx_nand.c
++++ b/drivers/mtd/nand/pxa3xx_nand.c
+@@ -1473,6 +1473,9 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd)
+ 	if (pdata->keep_config && !pxa3xx_nand_detect_config(info))
+ 		goto KEEP_CONFIG;
+ 
++	/* Set a default chunk size */
++	info->chunk_size = 512;
++
+ 	ret = pxa3xx_nand_sensing(info);
+ 	if (ret) {
+ 		dev_info(&info->pdev->dev, "There is no chip on cs %d!\n",
+diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
+index 6f93b2990d25..499b8e433d3d 100644
+--- a/drivers/mtd/nand/sunxi_nand.c
++++ b/drivers/mtd/nand/sunxi_nand.c
+@@ -138,6 +138,10 @@
+ #define NFC_ECC_MODE		GENMASK(15, 12)
+ #define NFC_RANDOM_SEED		GENMASK(30, 16)
+ 
++/* NFC_USER_DATA helper macros */
++#define NFC_BUF_TO_USER_DATA(buf)	((buf)[0] | ((buf)[1] << 8) | \
++					((buf)[2] << 16) | ((buf)[3] << 24))
++
+ #define NFC_DEFAULT_TIMEOUT_MS	1000
+ 
+ #define NFC_SRAM_SIZE		1024
+@@ -632,15 +636,9 @@ static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
+ 		offset = layout->eccpos[i * ecc->bytes] - 4 + mtd->writesize;
+ 
+ 		/* Fill OOB data in */
+-		if (oob_required) {
+-			tmp = 0xffffffff;
+-			memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE, &tmp,
+-				    4);
+-		} else {
+-			memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE,
+-				    chip->oob_poi + offset - mtd->writesize,
+-				    4);
+-		}
++		writel(NFC_BUF_TO_USER_DATA(chip->oob_poi +
++					    layout->oobfree[i].offset),
++		       nfc->regs + NFC_REG_USER_DATA_BASE);
+ 
+ 		chip->cmdfunc(mtd, NAND_CMD_RNDIN, offset, -1);
+ 
+@@ -770,14 +768,8 @@ static int sunxi_nfc_hw_syndrome_ecc_write_page(struct mtd_info *mtd,
+ 		offset += ecc->size;
+ 
+ 		/* Fill OOB data in */
+-		if (oob_required) {
+-			tmp = 0xffffffff;
+-			memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE, &tmp,
+-				    4);
+-		} else {
+-			memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE, oob,
+-				    4);
+-		}
++		writel(NFC_BUF_TO_USER_DATA(oob),
++		       nfc->regs + NFC_REG_USER_DATA_BASE);
+ 
+ 		tmp = NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | NFC_ACCESS_DIR |
+ 		      (1 << 30);
+@@ -1312,6 +1304,7 @@ static void sunxi_nand_chips_cleanup(struct sunxi_nfc *nfc)
+ 					node);
+ 		nand_release(&chip->mtd);
+ 		sunxi_nand_ecc_cleanup(&chip->nand.ecc);
++		list_del(&chip->node);
+ 	}
+ }
+ 
+diff --git a/drivers/mtd/ubi/io.c b/drivers/mtd/ubi/io.c
+index 5bbd1f094f4e..1fc23e48fe8e 100644
+--- a/drivers/mtd/ubi/io.c
++++ b/drivers/mtd/ubi/io.c
+@@ -926,6 +926,11 @@ static int validate_vid_hdr(const struct ubi_device *ubi,
+ 		goto bad;
+ 	}
+ 
++	if (data_size > ubi->leb_size) {
++		ubi_err(ubi, "bad data_size");
++		goto bad;
++	}
++
+ 	if (vol_type == UBI_VID_STATIC) {
+ 		/*
+ 		 * Although from high-level point of view static volumes may
+diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c
+index 80bdd5b88bac..d85c19762160 100644
+--- a/drivers/mtd/ubi/vtbl.c
++++ b/drivers/mtd/ubi/vtbl.c
+@@ -649,6 +649,7 @@ static int init_volumes(struct ubi_device *ubi,
+ 		if (ubi->corr_peb_count)
+ 			ubi_err(ubi, "%d PEBs are corrupted and not used",
+ 				ubi->corr_peb_count);
++		return -ENOSPC;
+ 	}
+ 	ubi->rsvd_pebs += reserved_pebs;
+ 	ubi->avail_pebs -= reserved_pebs;
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 275d9fb6fe5c..eb4489f9082f 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -1601,6 +1601,7 @@ int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
+ 		if (ubi->corr_peb_count)
+ 			ubi_err(ubi, "%d PEBs are corrupted and not used",
+ 				ubi->corr_peb_count);
++		err = -ENOSPC;
+ 		goto out_free;
+ 	}
+ 	ubi->avail_pebs -= reserved_pebs;
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 89d788d8f263..adfe1de78d99 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -4280,18 +4280,29 @@ static cycle_t e1000e_cyclecounter_read(const struct cyclecounter *cc)
+ 	struct e1000_adapter *adapter = container_of(cc, struct e1000_adapter,
+ 						     cc);
+ 	struct e1000_hw *hw = &adapter->hw;
++	u32 systimel_1, systimel_2, systimeh;
+ 	cycle_t systim, systim_next;
+-	/* SYSTIMH latching upon SYSTIML read does not work well. To fix that
+-	 * we don't want to allow overflow of SYSTIML and a change to SYSTIMH
+-	 * to occur between reads, so if we read a vale close to overflow, we
+-	 * wait for overflow to occur and read both registers when its safe.
++	/* SYSTIMH latching upon SYSTIML read does not work well.
++	 * This means that if SYSTIML overflows after we read it but before
++	 * we read SYSTIMH, the value of SYSTIMH has been incremented and we
++	 * will experience a huge non-linear increment in the systime value.
++	 * To fix that we test for overflow and, if one occurred, re-read systime.
+ 	 */
+-	u32 systim_overflow_latch_fix = 0x3FFFFFFF;
+-
+-	do {
+-		systim = (cycle_t)er32(SYSTIML);
+-	} while (systim > systim_overflow_latch_fix);
+-	systim |= (cycle_t)er32(SYSTIMH) << 32;
++	systimel_1 = er32(SYSTIML);
++	systimeh = er32(SYSTIMH);
++	systimel_2 = er32(SYSTIML);
++	/* Check for overflow. If there was no overflow, use the values */
++	if (systimel_1 < systimel_2) {
++		systim = (cycle_t)systimel_1;
++		systim |= (cycle_t)systimeh << 32;
++	} else {
++		/* There was an overflow, read again SYSTIMH, and use
++		 * systimel_2
++		 */
++		systimeh = er32(SYSTIMH);
++		systim = (cycle_t)systimel_2;
++		systim |= (cycle_t)systimeh << 32;
++	}
+ 
+ 	if ((hw->mac.type == e1000_82574) || (hw->mac.type == e1000_82583)) {
+ 		u64 incvalue, time_delta, rem, temp;
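
The new SYSTIM read drops the busy-wait on SYSTIML and instead detects a low-word rollover by reading low, high, low: if the two low reads are monotonic the pair is consistent, otherwise SYSTIMH is read again and paired with the second low value. The same pattern in isolation, with placeholder register accessors:

#include <stdint.h>

static uint64_t read_split_counter(uint32_t (*rd_lo)(void), uint32_t (*rd_hi)(void))
{
	uint32_t lo1 = rd_lo();
	uint32_t hi  = rd_hi();
	uint32_t lo2 = rd_lo();

	if (lo1 < lo2)				/* no rollover between low reads */
		return ((uint64_t)hi << 32) | lo1;

	hi = rd_hi();				/* rollover: hi may have ticked */
	return ((uint64_t)hi << 32) | lo2;
}
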
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 8d7b59689722..5bc9fca67957 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -2851,7 +2851,7 @@ static void igb_probe_vfs(struct igb_adapter *adapter)
+ 		return;
+ 
+ 	pci_sriov_set_totalvfs(pdev, 7);
+-	igb_pci_enable_sriov(pdev, max_vfs);
++	igb_enable_sriov(pdev, max_vfs);
+ 
+ #endif /* CONFIG_PCI_IOV */
+ }
+diff --git a/drivers/net/ethernet/via/Kconfig b/drivers/net/ethernet/via/Kconfig
+index 2f1264b882b9..d3d094742a7e 100644
+--- a/drivers/net/ethernet/via/Kconfig
++++ b/drivers/net/ethernet/via/Kconfig
+@@ -17,7 +17,7 @@ if NET_VENDOR_VIA
+ 
+ config VIA_RHINE
+ 	tristate "VIA Rhine support"
+-	depends on (PCI || OF_IRQ)
++	depends on PCI || (OF_IRQ && GENERIC_PCI_IOMAP)
+ 	depends on HAS_DMA
+ 	select CRC32
+ 	select MII
+diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c
+index 85bfa2acb801..32d9ff1b19dc 100644
+--- a/drivers/net/wireless/ath/ath10k/htc.c
++++ b/drivers/net/wireless/ath/ath10k/htc.c
+@@ -145,8 +145,10 @@ int ath10k_htc_send(struct ath10k_htc *htc,
+ 	skb_cb->eid = eid;
+ 	skb_cb->paddr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
+ 	ret = dma_mapping_error(dev, skb_cb->paddr);
+-	if (ret)
++	if (ret) {
++		ret = -EIO;
+ 		goto err_credits;
++	}
+ 
+ 	sg_item.transfer_id = ep->eid;
+ 	sg_item.transfer_context = skb;
+diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
+index a60ef7d1d5fc..7be3ce6e0ffa 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
+@@ -371,8 +371,10 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
+ 	skb_cb->paddr = dma_map_single(dev, msdu->data, msdu->len,
+ 				       DMA_TO_DEVICE);
+ 	res = dma_mapping_error(dev, skb_cb->paddr);
+-	if (res)
++	if (res) {
++		res = -EIO;
+ 		goto err_free_txdesc;
++	}
+ 
+ 	skb_put(txdesc, len);
+ 	cmd = (struct htt_cmd *)txdesc->data;
+@@ -456,8 +458,10 @@ int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
+ 	skb_cb->paddr = dma_map_single(dev, msdu->data, msdu->len,
+ 				       DMA_TO_DEVICE);
+ 	res = dma_mapping_error(dev, skb_cb->paddr);
+-	if (res)
++	if (res) {
++		res = -EIO;
+ 		goto err_free_txbuf;
++	}
+ 
+ 	switch (skb_cb->txmode) {
+ 	case ATH10K_HW_TXRX_RAW:
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 218b6af63447..0d3c474ff76d 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -591,11 +591,19 @@ ath10k_mac_get_any_chandef_iter(struct ieee80211_hw *hw,
+ static int ath10k_peer_create(struct ath10k *ar, u32 vdev_id, const u8 *addr,
+ 			      enum wmi_peer_type peer_type)
+ {
++	struct ath10k_vif *arvif;
++	int num_peers = 0;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&ar->conf_mutex);
+ 
+-	if (ar->num_peers >= ar->max_num_peers)
++	num_peers = ar->num_peers;
++
++	/* Each vdev consumes a peer entry as well */
++	list_for_each_entry(arvif, &ar->arvifs, list)
++		num_peers++;
++
++	if (num_peers >= ar->max_num_peers)
+ 		return -ENOBUFS;
+ 
+ 	ret = ath10k_wmi_peer_create(ar, vdev_id, addr, peer_type);
+@@ -2995,6 +3003,8 @@ void ath10k_mac_tx_unlock(struct ath10k *ar, int reason)
+ 						   IEEE80211_IFACE_ITER_RESUME_ALL,
+ 						   ath10k_mac_tx_unlock_iter,
+ 						   ar);
++
++	ieee80211_wake_queue(ar->hw, ar->hw->offchannel_tx_hw_queue);
+ }
+ 
+ void ath10k_mac_vif_tx_lock(struct ath10k_vif *arvif, int reason)
+@@ -3034,38 +3044,16 @@ static void ath10k_mac_vif_handle_tx_pause(struct ath10k_vif *arvif,
+ 
+ 	lockdep_assert_held(&ar->htt.tx_lock);
+ 
+-	switch (pause_id) {
+-	case WMI_TLV_TX_PAUSE_ID_MCC:
+-	case WMI_TLV_TX_PAUSE_ID_P2P_CLI_NOA:
+-	case WMI_TLV_TX_PAUSE_ID_P2P_GO_PS:
+-	case WMI_TLV_TX_PAUSE_ID_AP_PS:
+-	case WMI_TLV_TX_PAUSE_ID_IBSS_PS:
+-		switch (action) {
+-		case WMI_TLV_TX_PAUSE_ACTION_STOP:
+-			ath10k_mac_vif_tx_lock(arvif, pause_id);
+-			break;
+-		case WMI_TLV_TX_PAUSE_ACTION_WAKE:
+-			ath10k_mac_vif_tx_unlock(arvif, pause_id);
+-			break;
+-		default:
+-			ath10k_warn(ar, "received unknown tx pause action %d on vdev %i, ignoring\n",
+-				    action, arvif->vdev_id);
+-			break;
+-		}
++	switch (action) {
++	case WMI_TLV_TX_PAUSE_ACTION_STOP:
++		ath10k_mac_vif_tx_lock(arvif, pause_id);
++		break;
++	case WMI_TLV_TX_PAUSE_ACTION_WAKE:
++		ath10k_mac_vif_tx_unlock(arvif, pause_id);
+ 		break;
+-	case WMI_TLV_TX_PAUSE_ID_AP_PEER_PS:
+-	case WMI_TLV_TX_PAUSE_ID_AP_PEER_UAPSD:
+-	case WMI_TLV_TX_PAUSE_ID_STA_ADD_BA:
+-	case WMI_TLV_TX_PAUSE_ID_HOST:
+ 	default:
+-		/* FIXME: Some pause_ids aren't vdev specific. Instead they
+-		 * target peer_id and tid. Implementing these could improve
+-		 * traffic scheduling fairness across multiple connected
+-		 * stations in AP/IBSS modes.
+-		 */
+-		ath10k_dbg(ar, ATH10K_DBG_MAC,
+-			   "mac ignoring unsupported tx pause vdev %i id %d\n",
+-			   arvif->vdev_id, pause_id);
++		ath10k_warn(ar, "received unknown tx pause action %d on vdev %i, ignoring\n",
++			    action, arvif->vdev_id);
+ 		break;
+ 	}
+ }
+@@ -3082,12 +3070,15 @@ static void ath10k_mac_handle_tx_pause_iter(void *data, u8 *mac,
+ 	struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
+ 	struct ath10k_mac_tx_pause *arg = data;
+ 
++	if (arvif->vdev_id != arg->vdev_id)
++		return;
++
+ 	ath10k_mac_vif_handle_tx_pause(arvif, arg->pause_id, arg->action);
+ }
+ 
+-void ath10k_mac_handle_tx_pause(struct ath10k *ar, u32 vdev_id,
+-				enum wmi_tlv_tx_pause_id pause_id,
+-				enum wmi_tlv_tx_pause_action action)
++void ath10k_mac_handle_tx_pause_vdev(struct ath10k *ar, u32 vdev_id,
++				     enum wmi_tlv_tx_pause_id pause_id,
++				     enum wmi_tlv_tx_pause_action action)
+ {
+ 	struct ath10k_mac_tx_pause arg = {
+ 		.vdev_id = vdev_id,
+@@ -4080,6 +4071,11 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
+ 		       sizeof(arvif->bitrate_mask.control[i].vht_mcs));
+ 	}
+ 
++	if (ar->num_peers >= ar->max_num_peers) {
++		ath10k_warn(ar, "refusing vdev creation due to insufficient peer entry resources in firmware\n");
++		return -ENOBUFS;
++	}
++
+ 	if (ar->free_vdev_map == 0) {
+ 		ath10k_warn(ar, "Free vdev map is empty, no more interfaces allowed.\n");
+ 		ret = -EBUSY;
+@@ -4287,6 +4283,11 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
+ 		}
+ 	}
+ 
++	spin_lock_bh(&ar->htt.tx_lock);
++	if (!ar->tx_paused)
++		ieee80211_wake_queue(ar->hw, arvif->vdev_id);
++	spin_unlock_bh(&ar->htt.tx_lock);
++
+ 	mutex_unlock(&ar->conf_mutex);
+ 	return 0;
+ 
+@@ -5561,6 +5562,21 @@ static int ath10k_set_rts_threshold(struct ieee80211_hw *hw, u32 value)
+ 	return ret;
+ }
+ 
++static int ath10k_mac_op_set_frag_threshold(struct ieee80211_hw *hw, u32 value)
++{
++	/* Even though there's a WMI enum for the fragmentation threshold, no
++	 * known firmware actually implements it. Moreover it is not possible
++	 * to defer frame fragmentation to mac80211 because firmware clears the
++	 * "more fragments" bit in frame control, making it impossible for
++	 * remote devices to reassemble frames.
++	 *
++	 * Hence implement a dummy callback just to say fragmentation isn't
++	 * supported. This effectively prevents mac80211 from doing frame
++	 * fragmentation in software.
++	 */
++	return -EOPNOTSUPP;
++}
++
+ static void ath10k_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 			 u32 queues, bool drop)
+ {
+@@ -6395,6 +6411,7 @@ static const struct ieee80211_ops ath10k_ops = {
+ 	.remain_on_channel		= ath10k_remain_on_channel,
+ 	.cancel_remain_on_channel	= ath10k_cancel_remain_on_channel,
+ 	.set_rts_threshold		= ath10k_set_rts_threshold,
++	.set_frag_threshold		= ath10k_mac_op_set_frag_threshold,
+ 	.flush				= ath10k_flush,
+ 	.tx_last_beacon			= ath10k_tx_last_beacon,
+ 	.set_antenna			= ath10k_set_antenna,
+diff --git a/drivers/net/wireless/ath/ath10k/mac.h b/drivers/net/wireless/ath/ath10k/mac.h
+index b291f063705c..e3cefe4c7cfd 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.h
++++ b/drivers/net/wireless/ath/ath10k/mac.h
+@@ -61,9 +61,9 @@ int ath10k_mac_vif_chan(struct ieee80211_vif *vif,
+ 
+ void ath10k_mac_handle_beacon(struct ath10k *ar, struct sk_buff *skb);
+ void ath10k_mac_handle_beacon_miss(struct ath10k *ar, u32 vdev_id);
+-void ath10k_mac_handle_tx_pause(struct ath10k *ar, u32 vdev_id,
+-				enum wmi_tlv_tx_pause_id pause_id,
+-				enum wmi_tlv_tx_pause_action action);
++void ath10k_mac_handle_tx_pause_vdev(struct ath10k *ar, u32 vdev_id,
++				     enum wmi_tlv_tx_pause_id pause_id,
++				     enum wmi_tlv_tx_pause_action action);
+ 
+ u8 ath10k_mac_hw_rate_to_idx(const struct ieee80211_supported_band *sband,
+ 			     u8 hw_rate);
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index ea656e011a96..8c5cc1facc45 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -1546,8 +1546,10 @@ static int ath10k_pci_hif_exchange_bmi_msg(struct ath10k *ar,
+ 
+ 	req_paddr = dma_map_single(ar->dev, treq, req_len, DMA_TO_DEVICE);
+ 	ret = dma_mapping_error(ar->dev, req_paddr);
+-	if (ret)
++	if (ret) {
++		ret = -EIO;
+ 		goto err_dma;
++	}
+ 
+ 	if (resp && resp_len) {
+ 		tresp = kzalloc(*resp_len, GFP_KERNEL);
+@@ -1559,8 +1561,10 @@ static int ath10k_pci_hif_exchange_bmi_msg(struct ath10k *ar,
+ 		resp_paddr = dma_map_single(ar->dev, tresp, *resp_len,
+ 					    DMA_FROM_DEVICE);
+ 		ret = dma_mapping_error(ar->dev, resp_paddr);
+-		if (ret)
++		if (ret) {
++			ret = -EIO;
+ 			goto err_req;
++		}
+ 
+ 		xfer.wait_for_resp = true;
+ 		xfer.resp_len = 0;
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 8fdba3865c96..6f477e83099d 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -377,12 +377,34 @@ static int ath10k_wmi_tlv_event_tx_pause(struct ath10k *ar,
+ 		   "wmi tlv tx pause pause_id %u action %u vdev_map 0x%08x peer_id %u tid_map 0x%08x\n",
+ 		   pause_id, action, vdev_map, peer_id, tid_map);
+ 
+-	for (vdev_id = 0; vdev_map; vdev_id++) {
+-		if (!(vdev_map & BIT(vdev_id)))
+-			continue;
+-
+-		vdev_map &= ~BIT(vdev_id);
+-		ath10k_mac_handle_tx_pause(ar, vdev_id, pause_id, action);
++	switch (pause_id) {
++	case WMI_TLV_TX_PAUSE_ID_MCC:
++	case WMI_TLV_TX_PAUSE_ID_P2P_CLI_NOA:
++	case WMI_TLV_TX_PAUSE_ID_P2P_GO_PS:
++	case WMI_TLV_TX_PAUSE_ID_AP_PS:
++	case WMI_TLV_TX_PAUSE_ID_IBSS_PS:
++		for (vdev_id = 0; vdev_map; vdev_id++) {
++			if (!(vdev_map & BIT(vdev_id)))
++				continue;
++
++			vdev_map &= ~BIT(vdev_id);
++			ath10k_mac_handle_tx_pause_vdev(ar, vdev_id, pause_id,
++							action);
++		}
++		break;
++	case WMI_TLV_TX_PAUSE_ID_AP_PEER_PS:
++	case WMI_TLV_TX_PAUSE_ID_AP_PEER_UAPSD:
++	case WMI_TLV_TX_PAUSE_ID_STA_ADD_BA:
++	case WMI_TLV_TX_PAUSE_ID_HOST:
++		ath10k_dbg(ar, ATH10K_DBG_MAC,
++			   "mac ignoring unsupported tx pause id %d\n",
++			   pause_id);
++		break;
++	default:
++		ath10k_dbg(ar, ATH10K_DBG_MAC,
++			   "mac ignoring unknown tx pause vdev %d\n",
++			   pause_id);
++		break;
+ 	}
+ 
+ 	kfree(tb);
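
The pause handling is split: the WMI event handler now decides which pause IDs are per-vdev and walks the vdev bitmap only for those, while ath10k_mac_handle_tx_pause_vdev() just applies the stop or wake action. The bitmap walk used above, reduced to its essentials with a callback placeholder:

#include <stdint.h>

static void for_each_set_vdev(uint32_t vdev_map, void (*cb)(uint32_t vdev_id))
{
	uint32_t vdev_id;

	for (vdev_id = 0; vdev_map; vdev_id++) {
		if (!(vdev_map & (1u << vdev_id)))
			continue;
		vdev_map &= ~(1u << vdev_id);	/* clear so the loop terminates */
		cb(vdev_id);
	}
}
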
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 6c046c244705..8dd84c160cfd 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -2391,6 +2391,7 @@ void ath10k_wmi_event_host_swba(struct ath10k *ar, struct sk_buff *skb)
+ 				ath10k_warn(ar, "failed to map beacon: %d\n",
+ 					    ret);
+ 				dev_kfree_skb_any(bcn);
++				ret = -EIO;
+ 				goto skip;
+ 			}
+ 
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c b/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c
+index 1c6788aecc62..40d72312f3df 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c
+@@ -203,8 +203,10 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
+ 
+ 	/* Copy firmware into DMA-accessible memory */
+ 	fw = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL);
+-	if (!fw)
+-		return -ENOMEM;
++	if (!fw) {
++		status = -ENOMEM;
++		goto out;
++	}
+ 	len = fw_entry->size;
+ 
+ 	if (len % 4)
+@@ -217,6 +219,8 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
+ 
+ 	status = rsi_copy_to_card(common, fw, len, num_blocks);
+ 	kfree(fw);
++
++out:
+ 	release_firmware(fw_entry);
+ 	return status;
+ }
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb_ops.c b/drivers/net/wireless/rsi/rsi_91x_usb_ops.c
+index 30c2cf7fa93b..de4900862836 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb_ops.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb_ops.c
+@@ -148,8 +148,10 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
+ 
+ 	/* Copy firmware into DMA-accessible memory */
+ 	fw = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL);
+-	if (!fw)
+-		return -ENOMEM;
++	if (!fw) {
++		status = -ENOMEM;
++		goto out;
++	}
+ 	len = fw_entry->size;
+ 
+ 	if (len % 4)
+@@ -162,6 +164,8 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
+ 
+ 	status = rsi_copy_to_card(common, fw, len, num_blocks);
+ 	kfree(fw);
++
++out:
+ 	release_firmware(fw_entry);
+ 	return status;
+ }
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index f948c46d5132..5ff0cfd142ee 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -1348,7 +1348,8 @@ static void xennet_disconnect_backend(struct netfront_info *info)
+ 		queue->tx_evtchn = queue->rx_evtchn = 0;
+ 		queue->tx_irq = queue->rx_irq = 0;
+ 
+-		napi_synchronize(&queue->napi);
++		if (netif_running(info->netdev))
++			napi_synchronize(&queue->napi);
+ 
+ 		xennet_release_tx_bufs(queue);
+ 		xennet_release_rx_bufs(queue);
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index ade9eb917a4d..b796d1bd8988 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -86,6 +86,8 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector,
+ 	struct pmem_device *pmem = bdev->bd_disk->private_data;
+ 
+ 	pmem_do_bvec(pmem, page, PAGE_CACHE_SIZE, 0, rw, sector);
++	if (rw & WRITE)
++		wmb_pmem();
+ 	page_endio(page, rw & WRITE, 0);
+ 
+ 	return 0;
+diff --git a/drivers/pci/access.c b/drivers/pci/access.c
+index b965c12168b7..502a82ca1db0 100644
+--- a/drivers/pci/access.c
++++ b/drivers/pci/access.c
+@@ -442,7 +442,8 @@ static const struct pci_vpd_ops pci_vpd_pci22_ops = {
+ static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count,
+ 			       void *arg)
+ {
+-	struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn));
++	struct pci_dev *tdev = pci_get_slot(dev->bus,
++					    PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
+ 	ssize_t ret;
+ 
+ 	if (!tdev)
+@@ -456,7 +457,8 @@ static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count,
+ static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count,
+ 				const void *arg)
+ {
+-	struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn));
++	struct pci_dev *tdev = pci_get_slot(dev->bus,
++					    PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
+ 	ssize_t ret;
+ 
+ 	if (!tdev)
+@@ -473,22 +475,6 @@ static const struct pci_vpd_ops pci_vpd_f0_ops = {
+ 	.release = pci_vpd_pci22_release,
+ };
+ 
+-static int pci_vpd_f0_dev_check(struct pci_dev *dev)
+-{
+-	struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn));
+-	int ret = 0;
+-
+-	if (!tdev)
+-		return -ENODEV;
+-	if (!tdev->vpd || !tdev->multifunction ||
+-	    dev->class != tdev->class || dev->vendor != tdev->vendor ||
+-	    dev->device != tdev->device)
+-		ret = -ENODEV;
+-
+-	pci_dev_put(tdev);
+-	return ret;
+-}
+-
+ int pci_vpd_pci22_init(struct pci_dev *dev)
+ {
+ 	struct pci_vpd_pci22 *vpd;
+@@ -497,12 +483,7 @@ int pci_vpd_pci22_init(struct pci_dev *dev)
+ 	cap = pci_find_capability(dev, PCI_CAP_ID_VPD);
+ 	if (!cap)
+ 		return -ENODEV;
+-	if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) {
+-		int ret = pci_vpd_f0_dev_check(dev);
+ 
+-		if (ret)
+-			return ret;
+-	}
+ 	vpd = kzalloc(sizeof(*vpd), GFP_ATOMIC);
+ 	if (!vpd)
+ 		return -ENOMEM;
+diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
+index 6fbd3f2b5992..d3346d23963b 100644
+--- a/drivers/pci/bus.c
++++ b/drivers/pci/bus.c
+@@ -256,6 +256,8 @@ bool pci_bus_clip_resource(struct pci_dev *dev, int idx)
+ 
+ 		res->start = start;
+ 		res->end = end;
++		res->flags &= ~IORESOURCE_UNSET;
++		orig_res.flags &= ~IORESOURCE_UNSET;
+ 		dev_printk(KERN_DEBUG, &dev->dev, "%pR clipped to %pR\n",
+ 				 &orig_res, res);
+ 
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index dbd13854f21e..6b1c6a915daa 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -1906,11 +1906,27 @@ static void quirk_netmos(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_NETMOS, PCI_ANY_ID,
+ 			 PCI_CLASS_COMMUNICATION_SERIAL, 8, quirk_netmos);
+ 
++/*
++ * Quirk non-zero PCI functions to route VPD access through function 0 for
++ * devices that share VPD resources between functions.  The functions are
++ * expected to be identical devices.
++ */
+ static void quirk_f0_vpd_link(struct pci_dev *dev)
+ {
+-	if (!dev->multifunction || !PCI_FUNC(dev->devfn))
++	struct pci_dev *f0;
++
++	if (!PCI_FUNC(dev->devfn))
+ 		return;
+-	dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0;
++
++	f0 = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
++	if (!f0)
++		return;
++
++	if (f0->vpd && dev->class == f0->class &&
++	    dev->vendor == f0->vendor && dev->device == f0->device)
++		dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0;
++
++	pci_dev_put(f0);
+ }
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+ 			      PCI_CLASS_NETWORK_ETHERNET, 8, quirk_f0_vpd_link);
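
The quirk now looks up the real function 0 of the same slot with PCI_DEVFN(PCI_SLOT(dev->devfn), 0); the earlier code (and the removed pci_vpd_f0_dev_check()) passed a bare slot number where pci_get_slot() expects a devfn, and the identical-device check moves into the quirk itself. A short refresher on the devfn encoding, using illustrative macro names that mirror PCI_SLOT/PCI_FUNC/PCI_DEVFN:

#define SLOT_OF(devfn)		(((devfn) >> 3) & 0x1f)
#define FUNC_OF(devfn)		((devfn) & 0x07)
#define DEVFN_OF(slot, fn)	((((slot) & 0x1f) << 3) | ((fn) & 0x07))

/* devfn 0x51 is slot 0x0a, function 1; its function-0 sibling is
 * DEVFN_OF(SLOT_OF(0x51), 0) == 0x50 */
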
+diff --git a/drivers/pcmcia/sa1100_generic.c b/drivers/pcmcia/sa1100_generic.c
+index 803945259da8..42861cc70158 100644
+--- a/drivers/pcmcia/sa1100_generic.c
++++ b/drivers/pcmcia/sa1100_generic.c
+@@ -93,7 +93,6 @@ static int sa11x0_drv_pcmcia_remove(struct platform_device *dev)
+ 	for (i = 0; i < sinfo->nskt; i++)
+ 		soc_pcmcia_remove_one(&sinfo->skt[i]);
+ 
+-	clk_put(sinfo->clk);
+ 	kfree(sinfo);
+ 	return 0;
+ }
+diff --git a/drivers/pcmcia/sa11xx_base.c b/drivers/pcmcia/sa11xx_base.c
+index cf6de2c2b329..553d70a67f80 100644
+--- a/drivers/pcmcia/sa11xx_base.c
++++ b/drivers/pcmcia/sa11xx_base.c
+@@ -222,7 +222,7 @@ int sa11xx_drv_pcmcia_probe(struct device *dev, struct pcmcia_low_level *ops,
+ 	int i, ret = 0;
+ 	struct clk *clk;
+ 
+-	clk = clk_get(dev, NULL);
++	clk = devm_clk_get(dev, NULL);
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+ 
+@@ -251,7 +251,6 @@ int sa11xx_drv_pcmcia_probe(struct device *dev, struct pcmcia_low_level *ops,
+ 	if (ret) {
+ 		while (--i >= 0)
+ 			soc_pcmcia_remove_one(&sinfo->skt[i]);
+-		clk_put(clk);
+ 		kfree(sinfo);
+ 	} else {
+ 		dev_set_drvdata(dev, sinfo);
+diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
+index 3ad7b1fa24ce..6f4f310de946 100644
+--- a/drivers/platform/x86/toshiba_acpi.c
++++ b/drivers/platform/x86/toshiba_acpi.c
+@@ -2408,11 +2408,9 @@ static int toshiba_acpi_setup_keyboard(struct toshiba_acpi_dev *dev)
+ 	if (error)
+ 		return error;
+ 
+-	error = toshiba_hotkey_event_type_get(dev, &events_type);
+-	if (error) {
+-		pr_err("Unable to query Hotkey Event Type\n");
+-		return error;
+-	}
++	if (toshiba_hotkey_event_type_get(dev, &events_type))
++		pr_notice("Unable to query Hotkey Event Type\n");
++
+ 	dev->hotkey_event_type = events_type;
+ 
+ 	dev->hotkey_dev = input_allocate_device();
+diff --git a/drivers/power/avs/Kconfig b/drivers/power/avs/Kconfig
+index 7f3d389bd601..a67eeace6a89 100644
+--- a/drivers/power/avs/Kconfig
++++ b/drivers/power/avs/Kconfig
+@@ -13,7 +13,7 @@ menuconfig POWER_AVS
+ 
+ config ROCKCHIP_IODOMAIN
+         tristate "Rockchip IO domain support"
+-        depends on ARCH_ROCKCHIP && OF
++        depends on POWER_AVS && ARCH_ROCKCHIP && OF
+         help
+           Say y here to enable support io domains on Rockchip SoCs. It is
+           necessary for the io domain setting of the SoC to match the
+diff --git a/drivers/regulator/axp20x-regulator.c b/drivers/regulator/axp20x-regulator.c
+index 646829132b59..1dea0e8353e0 100644
+--- a/drivers/regulator/axp20x-regulator.c
++++ b/drivers/regulator/axp20x-regulator.c
+@@ -192,9 +192,9 @@ static const struct regulator_desc axp22x_regulators[] = {
+ 	AXP_DESC(AXP22X, DCDC3, "dcdc3", "vin3", 600, 1860, 20,
+ 		 AXP22X_DCDC3_V_OUT, 0x3f, AXP22X_PWR_OUT_CTRL1, BIT(3)),
+ 	AXP_DESC(AXP22X, DCDC4, "dcdc4", "vin4", 600, 1540, 20,
+-		 AXP22X_DCDC4_V_OUT, 0x3f, AXP22X_PWR_OUT_CTRL1, BIT(3)),
++		 AXP22X_DCDC4_V_OUT, 0x3f, AXP22X_PWR_OUT_CTRL1, BIT(4)),
+ 	AXP_DESC(AXP22X, DCDC5, "dcdc5", "vin5", 1000, 2550, 50,
+-		 AXP22X_DCDC5_V_OUT, 0x1f, AXP22X_PWR_OUT_CTRL1, BIT(4)),
++		 AXP22X_DCDC5_V_OUT, 0x1f, AXP22X_PWR_OUT_CTRL1, BIT(5)),
+ 	/* secondary switchable output of DCDC1 */
+ 	AXP_DESC_SW(AXP22X, DC1SW, "dc1sw", "dcdc1", 1600, 3400, 100,
+ 		    AXP22X_DCDC1_V_OUT, 0x1f, AXP22X_PWR_OUT_CTRL2, BIT(7)),
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 78387a6cbae5..5081533858f1 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1376,15 +1376,19 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ 		return 0;
+ 
+ 	r = regulator_dev_lookup(dev, rdev->supply_name, &ret);
+-	if (ret == -ENODEV) {
+-		/*
+-		 * No supply was specified for this regulator and
+-		 * there will never be one.
+-		 */
+-		return 0;
+-	}
+-
+ 	if (!r) {
++		if (ret == -ENODEV) {
++			/*
++			 * No supply was specified for this regulator and
++			 * there will never be one.
++			 */
++			return 0;
++		}
++
++		/* Did the lookup explicitly defer for us? */
++		if (ret == -EPROBE_DEFER)
++			return ret;
++
+ 		if (have_full_constraints()) {
+ 			r = dummy_regulator_rdev;
+ 		} else {
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index add419d6ff34..a56a7b243e91 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -212,6 +212,17 @@ static const struct file_operations twa_fops = {
+ 	.llseek		= noop_llseek,
+ };
+ 
++/*
++ * The controllers use an inline buffer instead of a mapped SGL for small,
++ * single entry buffers.  Note that we treat a zero-length transfer like
++ * a mapped SGL.
++ */
++static bool twa_command_mapped(struct scsi_cmnd *cmd)
++{
++	return scsi_sg_count(cmd) != 1 ||
++		scsi_bufflen(cmd) >= TW_MIN_SGL_LENGTH;
++}
++
+ /* This function will complete an aen request from the isr */
+ static int twa_aen_complete(TW_Device_Extension *tw_dev, int request_id)
+ {
+@@ -1339,7 +1350,8 @@ static irqreturn_t twa_interrupt(int irq, void *dev_instance)
+ 				}
+ 
+ 				/* Now complete the io */
+-				scsi_dma_unmap(cmd);
++				if (twa_command_mapped(cmd))
++					scsi_dma_unmap(cmd);
+ 				cmd->scsi_done(cmd);
+ 				tw_dev->state[request_id] = TW_S_COMPLETED;
+ 				twa_free_request_id(tw_dev, request_id);
+@@ -1582,7 +1594,8 @@ static int twa_reset_device_extension(TW_Device_Extension *tw_dev)
+ 				struct scsi_cmnd *cmd = tw_dev->srb[i];
+ 
+ 				cmd->result = (DID_RESET << 16);
+-				scsi_dma_unmap(cmd);
++				if (twa_command_mapped(cmd))
++					scsi_dma_unmap(cmd);
+ 				cmd->scsi_done(cmd);
+ 			}
+ 		}
+@@ -1765,12 +1778,14 @@ static int twa_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_
+ 	retval = twa_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL);
+ 	switch (retval) {
+ 	case SCSI_MLQUEUE_HOST_BUSY:
+-		scsi_dma_unmap(SCpnt);
++		if (twa_command_mapped(SCpnt))
++			scsi_dma_unmap(SCpnt);
+ 		twa_free_request_id(tw_dev, request_id);
+ 		break;
+ 	case 1:
+ 		SCpnt->result = (DID_ERROR << 16);
+-		scsi_dma_unmap(SCpnt);
++		if (twa_command_mapped(SCpnt))
++			scsi_dma_unmap(SCpnt);
+ 		done(SCpnt);
+ 		tw_dev->state[request_id] = TW_S_COMPLETED;
+ 		twa_free_request_id(tw_dev, request_id);
+@@ -1831,8 +1846,7 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
+ 		/* Map sglist from scsi layer to cmd packet */
+ 
+ 		if (scsi_sg_count(srb)) {
+-			if ((scsi_sg_count(srb) == 1) &&
+-			    (scsi_bufflen(srb) < TW_MIN_SGL_LENGTH)) {
++			if (!twa_command_mapped(srb)) {
+ 				if (srb->sc_data_direction == DMA_TO_DEVICE ||
+ 				    srb->sc_data_direction == DMA_BIDIRECTIONAL)
+ 					scsi_sg_copy_to_buffer(srb,
+@@ -1905,7 +1919,7 @@ static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int re
+ {
+ 	struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+ 
+-	if (scsi_bufflen(cmd) < TW_MIN_SGL_LENGTH &&
++	if (!twa_command_mapped(cmd) &&
+ 	    (cmd->sc_data_direction == DMA_FROM_DEVICE ||
+ 	     cmd->sc_data_direction == DMA_BIDIRECTIONAL)) {
+ 		if (scsi_sg_count(cmd) == 1) {
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index 1dafeb43333b..cab4e98b2b0e 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -5104,7 +5104,7 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
+ 	int rc;
+ 	struct ctlr_info *h;
+ 	struct hpsa_scsi_dev_t *dev;
+-	char msg[40];
++	char msg[48];
+ 
+ 	/* find the controller to which the command to be aborted was sent */
+ 	h = sdev_to_hba(scsicmd->device);
+@@ -5122,16 +5122,18 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
+ 
+ 	/* if controller locked up, we can guarantee command won't complete */
+ 	if (lockup_detected(h)) {
+-		sprintf(msg, "cmd %d RESET FAILED, lockup detected",
+-				hpsa_get_cmd_index(scsicmd));
++		snprintf(msg, sizeof(msg),
++			 "cmd %d RESET FAILED, lockup detected",
++			 hpsa_get_cmd_index(scsicmd));
+ 		hpsa_show_dev_msg(KERN_WARNING, h, dev, msg);
+ 		return FAILED;
+ 	}
+ 
+ 	/* this reset request might be the result of a lockup; check */
+ 	if (detect_controller_lockup(h)) {
+-		sprintf(msg, "cmd %d RESET FAILED, new lockup detected",
+-				hpsa_get_cmd_index(scsicmd));
++		snprintf(msg, sizeof(msg),
++			 "cmd %d RESET FAILED, new lockup detected",
++			 hpsa_get_cmd_index(scsicmd));
+ 		hpsa_show_dev_msg(KERN_WARNING, h, dev, msg);
+ 		return FAILED;
+ 	}
+@@ -5145,7 +5147,8 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
+ 	/* send a reset to the SCSI LUN which the command was sent to */
+ 	rc = hpsa_do_reset(h, dev, dev->scsi3addr, HPSA_RESET_TYPE_LUN,
+ 			   DEFAULT_REPLY_QUEUE);
+-	sprintf(msg, "reset %s", rc == 0 ? "completed successfully" : "failed");
++	snprintf(msg, sizeof(msg), "reset %s",
++		 rc == 0 ? "completed successfully" : "failed");
+ 	hpsa_show_dev_msg(KERN_WARNING, h, dev, msg);
+ 	return rc == 0 ? SUCCESS : FAILED;
+ }
+diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
+index a9aa38903efe..cccab6188328 100644
+--- a/drivers/scsi/ipr.c
++++ b/drivers/scsi/ipr.c
+@@ -4554,7 +4554,7 @@ static ssize_t ipr_store_raw_mode(struct device *dev,
+ 	spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ 	res = (struct ipr_resource_entry *)sdev->hostdata;
+ 	if (res) {
+-		if (ioa_cfg->sis64 && ipr_is_af_dasd_device(res)) {
++		if (ipr_is_af_dasd_device(res)) {
+ 			res->raw_mode = simple_strtoul(buf, NULL, 10);
+ 			len = strlen(buf);
+ 			if (res->sdev)
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 6457a8a0db9c..bf3d801ac5f9 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -2169,8 +2169,17 @@ int scsi_error_handler(void *data)
+ 	 * We never actually get interrupted because kthread_run
+ 	 * disables signal delivery for the created thread.
+ 	 */
+-	while (!kthread_should_stop()) {
++	while (true) {
++		/*
++		 * The sequence in kthread_stop() sets the stop flag first
++		 * then wakes the process.  To avoid missed wakeups, the task
++		 * should always be in a non running state before the stop
++		 * should always be in a non-running state before the stop
++		 * flag is checked.
+ 		set_current_state(TASK_INTERRUPTIBLE);
++		if (kthread_should_stop())
++			break;
++
+ 		if ((shost->host_failed == 0 && shost->host_eh_scheduled == 0) ||
+ 		    shost->host_failed != atomic_read(&shost->host_busy)) {
+ 			SCSI_LOG_ERROR_RECOVERY(1,
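
The loop restructuring follows the standard kthread shutdown idiom: mark the task sleeping first, then test kthread_should_stop(), so the flag-set-then-wake sequence in kthread_stop() cannot fall between the test and the sleep. The canonical shape of such a loop, with have_work() and handle_work() as placeholders for the driver's own logic:

static int example_thread(void *data)
{
	while (true) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (kthread_should_stop())
			break;

		if (!have_work(data)) {
			schedule();		/* sleep until woken */
			continue;
		}

		__set_current_state(TASK_RUNNING);
		handle_work(data);
	}
	__set_current_state(TASK_RUNNING);
	return 0;
}
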
+diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
+index c9357bb393d3..744596464d33 100644
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -386,14 +386,14 @@ static bool bcm2835_spi_can_dma(struct spi_master *master,
+ 	/* otherwise we only allow transfers within the same page
+ 	 * to avoid wasting time on dma_mapping when it is not practical
+ 	 */
+-	if (((size_t)tfr->tx_buf & PAGE_MASK) + tfr->len > PAGE_SIZE) {
++	if (((size_t)tfr->tx_buf & (PAGE_SIZE - 1)) + tfr->len > PAGE_SIZE) {
+ 		dev_warn_once(&spi->dev,
+ 			      "Unaligned spi tx-transfer bridging page\n");
+ 		return false;
+ 	}
+-	if (((size_t)tfr->rx_buf & PAGE_MASK) + tfr->len > PAGE_SIZE) {
++	if (((size_t)tfr->rx_buf & (PAGE_SIZE - 1)) + tfr->len > PAGE_SIZE) {
+ 		dev_warn_once(&spi->dev,
+-			      "Unaligned spi tx-transfer bridging page\n");
++			      "Unaligned spi rx-transfer bridging page\n");
+ 		return false;
+ 	}
+ 
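
The can_dma test asks whether a transfer stays within one page; the in-page offset is addr & (PAGE_SIZE - 1), whereas masking with PAGE_MASK keeps the page-aligned part of the address and so compared the wrong quantity (the rx warning text is also corrected). The check in isolation, as a small userspace sketch:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PG_SIZE	4096u

static bool crosses_page(uintptr_t addr, size_t len)
{
	return (addr & (PG_SIZE - 1)) + len > PG_SIZE;
}
/* crosses_page(0x1ffe, 4) is true; crosses_page(0x1000, 4096) is false */
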
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index 7293d6d875c5..8e4b1a7c37ce 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -643,6 +643,10 @@ static irqreturn_t ssp_int(int irq, void *dev_id)
+ 	if (!(sccr1_reg & SSCR1_TIE))
+ 		mask &= ~SSSR_TFS;
+ 
++	/* Ignore RX timeout interrupt if it is disabled */
++	if (!(sccr1_reg & SSCR1_TINTE))
++		mask &= ~SSSR_TINT;
++
+ 	if (!(status & mask))
+ 		return IRQ_NONE;
+ 
+diff --git a/drivers/spi/spi-xtensa-xtfpga.c b/drivers/spi/spi-xtensa-xtfpga.c
+index 2e32ea2f194f..be6155cba9de 100644
+--- a/drivers/spi/spi-xtensa-xtfpga.c
++++ b/drivers/spi/spi-xtensa-xtfpga.c
+@@ -34,13 +34,13 @@ struct xtfpga_spi {
+ static inline void xtfpga_spi_write32(const struct xtfpga_spi *spi,
+ 				      unsigned addr, u32 val)
+ {
+-	iowrite32(val, spi->regs + addr);
++	__raw_writel(val, spi->regs + addr);
+ }
+ 
+ static inline unsigned int xtfpga_spi_read32(const struct xtfpga_spi *spi,
+ 					     unsigned addr)
+ {
+-	return ioread32(spi->regs + addr);
++	return __raw_readl(spi->regs + addr);
+ }
+ 
+ static inline void xtfpga_spi_wait_busy(struct xtfpga_spi *xspi)
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index cf8b91b23a76..9ce2f156d382 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1437,8 +1437,7 @@ static struct class spi_master_class = {
+  *
+  * The caller is responsible for assigning the bus number and initializing
+  * the master's methods before calling spi_register_master(); and (after errors
+- * adding the device) calling spi_master_put() and kfree() to prevent a memory
+- * leak.
++ * adding the device) calling spi_master_put() to prevent a memory leak.
+  */
+ struct spi_master *spi_alloc_master(struct device *dev, unsigned size)
+ {
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index c7de64171c45..97aad8f91c2f 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -651,7 +651,8 @@ static int spidev_release(struct inode *inode, struct file *filp)
+ 		kfree(spidev->rx_buffer);
+ 		spidev->rx_buffer = NULL;
+ 
+-		spidev->speed_hz = spidev->spi->max_speed_hz;
++		if (spidev->spi)
++			spidev->speed_hz = spidev->spi->max_speed_hz;
+ 
+ 		/* ... after we unbound from the underlying device? */
+ 		spin_lock_irq(&spidev->spi_lock);
+diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
+index 6f4811263557..b71b1f2d98d5 100644
+--- a/drivers/staging/android/ion/ion.c
++++ b/drivers/staging/android/ion/ion.c
+@@ -1179,13 +1179,13 @@ struct ion_handle *ion_import_dma_buf(struct ion_client *client, int fd)
+ 		mutex_unlock(&client->lock);
+ 		goto end;
+ 	}
+-	mutex_unlock(&client->lock);
+ 
+ 	handle = ion_handle_create(client, buffer);
+-	if (IS_ERR(handle))
++	if (IS_ERR(handle)) {
++		mutex_unlock(&client->lock);
+ 		goto end;
++	}
+ 
+-	mutex_lock(&client->lock);
+ 	ret = ion_handle_add(client, handle);
+ 	mutex_unlock(&client->lock);
+ 	if (ret) {
+diff --git a/drivers/staging/speakup/fakekey.c b/drivers/staging/speakup/fakekey.c
+index 4299cf45f947..5e1f16c36b49 100644
+--- a/drivers/staging/speakup/fakekey.c
++++ b/drivers/staging/speakup/fakekey.c
+@@ -81,6 +81,7 @@ void speakup_fake_down_arrow(void)
+ 	__this_cpu_write(reporting_keystroke, true);
+ 	input_report_key(virt_keyboard, KEY_DOWN, PRESSED);
+ 	input_report_key(virt_keyboard, KEY_DOWN, RELEASED);
++	input_sync(virt_keyboard);
+ 	__this_cpu_write(reporting_keystroke, false);
+ 
+ 	/* reenable preemption */
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index fd092909a457..56cf1996f30f 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -341,7 +341,6 @@ static struct iscsi_np *iscsit_get_np(
+ 
+ struct iscsi_np *iscsit_add_np(
+ 	struct __kernel_sockaddr_storage *sockaddr,
+-	char *ip_str,
+ 	int network_transport)
+ {
+ 	struct sockaddr_in *sock_in;
+@@ -370,11 +369,9 @@ struct iscsi_np *iscsit_add_np(
+ 	np->np_flags |= NPF_IP_NETWORK;
+ 	if (sockaddr->ss_family == AF_INET6) {
+ 		sock_in6 = (struct sockaddr_in6 *)sockaddr;
+-		snprintf(np->np_ip, IPV6_ADDRESS_SPACE, "%s", ip_str);
+ 		np->np_port = ntohs(sock_in6->sin6_port);
+ 	} else {
+ 		sock_in = (struct sockaddr_in *)sockaddr;
+-		sprintf(np->np_ip, "%s", ip_str);
+ 		np->np_port = ntohs(sock_in->sin_port);
+ 	}
+ 
+@@ -411,8 +408,8 @@ struct iscsi_np *iscsit_add_np(
+ 	list_add_tail(&np->np_list, &g_np_list);
+ 	mutex_unlock(&np_lock);
+ 
+-	pr_debug("CORE[0] - Added Network Portal: %s:%hu on %s\n",
+-		np->np_ip, np->np_port, np->np_transport->name);
++	pr_debug("CORE[0] - Added Network Portal: %pISc:%hu on %s\n",
++		&np->np_sockaddr, np->np_port, np->np_transport->name);
+ 
+ 	return np;
+ }
+@@ -481,8 +478,8 @@ int iscsit_del_np(struct iscsi_np *np)
+ 	list_del(&np->np_list);
+ 	mutex_unlock(&np_lock);
+ 
+-	pr_debug("CORE[0] - Removed Network Portal: %s:%hu on %s\n",
+-		np->np_ip, np->np_port, np->np_transport->name);
++	pr_debug("CORE[0] - Removed Network Portal: %pISc:%hu on %s\n",
++		&np->np_sockaddr, np->np_port, np->np_transport->name);
+ 
+ 	iscsit_put_transport(np->np_transport);
+ 	kfree(np);
+@@ -3464,7 +3461,6 @@ iscsit_build_sendtargets_response(struct iscsi_cmd *cmd,
+ 						tpg_np_list) {
+ 				struct iscsi_np *np = tpg_np->tpg_np;
+ 				bool inaddr_any = iscsit_check_inaddr_any(np);
+-				char *fmt_str;
+ 
+ 				if (np->np_network_transport != network_transport)
+ 					continue;
+@@ -3492,15 +3488,18 @@ iscsit_build_sendtargets_response(struct iscsi_cmd *cmd,
+ 					}
+ 				}
+ 
+-				if (np->np_sockaddr.ss_family == AF_INET6)
+-					fmt_str = "TargetAddress=[%s]:%hu,%hu";
+-				else
+-					fmt_str = "TargetAddress=%s:%hu,%hu";
+-
+-				len = sprintf(buf, fmt_str,
+-					inaddr_any ? conn->local_ip : np->np_ip,
+-					np->np_port,
+-					tpg->tpgt);
++				if (inaddr_any) {
++					len = sprintf(buf, "TargetAddress="
++						      "%s:%hu,%hu",
++						      conn->local_ip,
++						      np->np_port,
++						      tpg->tpgt);
++				} else {
++					len = sprintf(buf, "TargetAddress="
++						      "%pISpc,%hu",
++						      &np->np_sockaddr,
++						      tpg->tpgt);
++				}
+ 				len += 1;
+ 
+ 				if ((len + payload_len) > buffer_len) {
+diff --git a/drivers/target/iscsi/iscsi_target.h b/drivers/target/iscsi/iscsi_target.h
+index 7d0f9c00d9c2..d294f030a097 100644
+--- a/drivers/target/iscsi/iscsi_target.h
++++ b/drivers/target/iscsi/iscsi_target.h
+@@ -13,7 +13,7 @@ extern int iscsit_deaccess_np(struct iscsi_np *, struct iscsi_portal_group *,
+ extern bool iscsit_check_np_match(struct __kernel_sockaddr_storage *,
+ 				struct iscsi_np *, int);
+ extern struct iscsi_np *iscsit_add_np(struct __kernel_sockaddr_storage *,
+-				char *, int);
++				int);
+ extern int iscsit_reset_np_thread(struct iscsi_np *, struct iscsi_tpg_np *,
+ 				struct iscsi_portal_group *, bool);
+ extern int iscsit_del_np(struct iscsi_np *);
+diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c
+index c1898c84b3d2..db3b9b986954 100644
+--- a/drivers/target/iscsi/iscsi_target_configfs.c
++++ b/drivers/target/iscsi/iscsi_target_configfs.c
+@@ -99,7 +99,7 @@ static ssize_t lio_target_np_store_sctp(
+ 		 * Use existing np->np_sockaddr for SCTP network portal reference
+ 		 */
+ 		tpg_np_sctp = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr,
+-					np->np_ip, tpg_np, ISCSI_SCTP_TCP);
++					tpg_np, ISCSI_SCTP_TCP);
+ 		if (!tpg_np_sctp || IS_ERR(tpg_np_sctp))
+ 			goto out;
+ 	} else {
+@@ -177,7 +177,7 @@ static ssize_t lio_target_np_store_iser(
+ 		}
+ 
+ 		tpg_np_iser = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr,
+-				np->np_ip, tpg_np, ISCSI_INFINIBAND);
++				tpg_np, ISCSI_INFINIBAND);
+ 		if (IS_ERR(tpg_np_iser)) {
+ 			rc = PTR_ERR(tpg_np_iser);
+ 			goto out;
+@@ -248,8 +248,8 @@ static struct se_tpg_np *lio_target_call_addnptotpg(
+ 			return ERR_PTR(-EINVAL);
+ 		}
+ 		str++; /* Skip over leading "[" */
+-		*str2 = '\0'; /* Terminate the IPv6 address */
+-		str2++; /* Skip over the "]" */
++		*str2 = '\0'; /* Terminate the unbracketed IPv6 address */
++		str2++; /* Skip over the \0 */
+ 		port_str = strstr(str2, ":");
+ 		if (!port_str) {
+ 			pr_err("Unable to locate \":port\""
+@@ -316,7 +316,7 @@ static struct se_tpg_np *lio_target_call_addnptotpg(
+ 	 * sys/kernel/config/iscsi/$IQN/$TPG/np/$IP:$PORT/
+ 	 *
+ 	 */
+-	tpg_np = iscsit_tpg_add_network_portal(tpg, &sockaddr, str, NULL,
++	tpg_np = iscsit_tpg_add_network_portal(tpg, &sockaddr, NULL,
+ 				ISCSI_TCP);
+ 	if (IS_ERR(tpg_np)) {
+ 		iscsit_put_tpg(tpg);
+@@ -344,8 +344,8 @@ static void lio_target_call_delnpfromtpg(
+ 
+ 	se_tpg = &tpg->tpg_se_tpg;
+ 	pr_debug("LIO_Target_ConfigFS: DEREGISTER -> %s TPGT: %hu"
+-		" PORTAL: %s:%hu\n", config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item),
+-		tpg->tpgt, tpg_np->tpg_np->np_ip, tpg_np->tpg_np->np_port);
++		" PORTAL: %pISc:%hu\n", config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item),
++		tpg->tpgt, &tpg_np->tpg_np->np_sockaddr, tpg_np->tpg_np->np_port);
+ 
+ 	ret = iscsit_tpg_del_network_portal(tpg, tpg_np);
+ 	if (ret < 0)
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 7e8f65e5448f..666c0739bfbe 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -823,8 +823,8 @@ static void iscsi_handle_login_thread_timeout(unsigned long data)
+ 	struct iscsi_np *np = (struct iscsi_np *) data;
+ 
+ 	spin_lock_bh(&np->np_thread_lock);
+-	pr_err("iSCSI Login timeout on Network Portal %s:%hu\n",
+-			np->np_ip, np->np_port);
++	pr_err("iSCSI Login timeout on Network Portal %pISc:%hu\n",
++			&np->np_sockaddr, np->np_port);
+ 
+ 	if (np->np_login_timer_flags & ISCSI_TF_STOP) {
+ 		spin_unlock_bh(&np->np_thread_lock);
+@@ -1302,8 +1302,8 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ 	spin_lock_bh(&np->np_thread_lock);
+ 	if (np->np_thread_state != ISCSI_NP_THREAD_ACTIVE) {
+ 		spin_unlock_bh(&np->np_thread_lock);
+-		pr_err("iSCSI Network Portal on %s:%hu currently not"
+-			" active.\n", np->np_ip, np->np_port);
++		pr_err("iSCSI Network Portal on %pISc:%hu currently not"
++			" active.\n", &np->np_sockaddr, np->np_port);
+ 		iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+ 				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+ 		goto new_sess_out;
+diff --git a/drivers/target/iscsi/iscsi_target_parameters.c b/drivers/target/iscsi/iscsi_target_parameters.c
+index e8a52f7d6204..51d1734d5390 100644
+--- a/drivers/target/iscsi/iscsi_target_parameters.c
++++ b/drivers/target/iscsi/iscsi_target_parameters.c
+@@ -407,6 +407,7 @@ int iscsi_create_default_params(struct iscsi_param_list **param_list_ptr)
+ 			TYPERANGE_UTF8, USE_INITIAL_ONLY);
+ 	if (!param)
+ 		goto out;
++
+ 	/*
+ 	 * Extra parameters for ISER from RFC-5046
+ 	 */
+@@ -496,9 +497,9 @@ int iscsi_set_keys_to_negotiate(
+ 		} else if (!strcmp(param->name, SESSIONTYPE)) {
+ 			SET_PSTATE_NEGOTIATE(param);
+ 		} else if (!strcmp(param->name, IFMARKER)) {
+-			SET_PSTATE_NEGOTIATE(param);
++			SET_PSTATE_REJECT(param);
+ 		} else if (!strcmp(param->name, OFMARKER)) {
+-			SET_PSTATE_NEGOTIATE(param);
++			SET_PSTATE_REJECT(param);
+ 		} else if (!strcmp(param->name, IFMARKINT)) {
+ 			SET_PSTATE_REJECT(param);
+ 		} else if (!strcmp(param->name, OFMARKINT)) {
+diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
+index 968068ffcb1c..de26bee4bddd 100644
+--- a/drivers/target/iscsi/iscsi_target_tpg.c
++++ b/drivers/target/iscsi/iscsi_target_tpg.c
+@@ -460,7 +460,6 @@ static bool iscsit_tpg_check_network_portal(
+ struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
+ 	struct iscsi_portal_group *tpg,
+ 	struct __kernel_sockaddr_storage *sockaddr,
+-	char *ip_str,
+ 	struct iscsi_tpg_np *tpg_np_parent,
+ 	int network_transport)
+ {
+@@ -470,8 +469,8 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
+ 	if (!tpg_np_parent) {
+ 		if (iscsit_tpg_check_network_portal(tpg->tpg_tiqn, sockaddr,
+ 				network_transport)) {
+-			pr_err("Network Portal: %s already exists on a"
+-				" different TPG on %s\n", ip_str,
++			pr_err("Network Portal: %pISc already exists on a"
++				" different TPG on %s\n", sockaddr,
+ 				tpg->tpg_tiqn->tiqn);
+ 			return ERR_PTR(-EEXIST);
+ 		}
+@@ -484,7 +483,7 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+-	np = iscsit_add_np(sockaddr, ip_str, network_transport);
++	np = iscsit_add_np(sockaddr, network_transport);
+ 	if (IS_ERR(np)) {
+ 		kfree(tpg_np);
+ 		return ERR_CAST(np);
+@@ -514,8 +513,8 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
+ 		spin_unlock(&tpg_np_parent->tpg_np_parent_lock);
+ 	}
+ 
+-	pr_debug("CORE[%s] - Added Network Portal: %s:%hu,%hu on %s\n",
+-		tpg->tpg_tiqn->tiqn, np->np_ip, np->np_port, tpg->tpgt,
++	pr_debug("CORE[%s] - Added Network Portal: %pISc:%hu,%hu on %s\n",
++		tpg->tpg_tiqn->tiqn, &np->np_sockaddr, np->np_port, tpg->tpgt,
+ 		np->np_transport->name);
+ 
+ 	return tpg_np;
+@@ -528,8 +527,8 @@ static int iscsit_tpg_release_np(
+ {
+ 	iscsit_clear_tpg_np_login_thread(tpg_np, tpg, true);
+ 
+-	pr_debug("CORE[%s] - Removed Network Portal: %s:%hu,%hu on %s\n",
+-		tpg->tpg_tiqn->tiqn, np->np_ip, np->np_port, tpg->tpgt,
++	pr_debug("CORE[%s] - Removed Network Portal: %pISc:%hu,%hu on %s\n",
++		tpg->tpg_tiqn->tiqn, &np->np_sockaddr, np->np_port, tpg->tpgt,
+ 		np->np_transport->name);
+ 
+ 	tpg_np->tpg_np = NULL;
+diff --git a/drivers/target/iscsi/iscsi_target_tpg.h b/drivers/target/iscsi/iscsi_target_tpg.h
+index 95ff5bdecd71..28abda89ea98 100644
+--- a/drivers/target/iscsi/iscsi_target_tpg.h
++++ b/drivers/target/iscsi/iscsi_target_tpg.h
+@@ -22,7 +22,7 @@ extern struct iscsi_node_attrib *iscsit_tpg_get_node_attrib(struct iscsi_session
+ extern void iscsit_tpg_del_external_nps(struct iscsi_tpg_np *);
+ extern struct iscsi_tpg_np *iscsit_tpg_locate_child_np(struct iscsi_tpg_np *, int);
+ extern struct iscsi_tpg_np *iscsit_tpg_add_network_portal(struct iscsi_portal_group *,
+-			struct __kernel_sockaddr_storage *, char *, struct iscsi_tpg_np *,
++			struct __kernel_sockaddr_storage *, struct iscsi_tpg_np *,
+ 			int);
+ extern int iscsit_tpg_del_network_portal(struct iscsi_portal_group *,
+ 			struct iscsi_tpg_np *);
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 09e682b1c549..8f1cd194f06a 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -427,8 +427,6 @@ void core_disable_device_list_for_node(
+ 
+ 	hlist_del_rcu(&orig->link);
+ 	clear_bit(DEF_PR_REG_ACTIVE, &orig->deve_flags);
+-	rcu_assign_pointer(orig->se_lun, NULL);
+-	rcu_assign_pointer(orig->se_lun_acl, NULL);
+ 	orig->lun_flags = 0;
+ 	orig->creation_time = 0;
+ 	orig->attach_count--;
+@@ -439,6 +437,9 @@ void core_disable_device_list_for_node(
+ 	kref_put(&orig->pr_kref, target_pr_kref_release);
+ 	wait_for_completion(&orig->pr_comp);
+ 
++	rcu_assign_pointer(orig->se_lun, NULL);
++	rcu_assign_pointer(orig->se_lun_acl, NULL);
++
+ 	kfree_rcu(orig, rcu_head);
+ 
+ 	core_scsi3_free_pr_reg_from_nacl(dev, nacl);
+diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
+index 5ab7100de17e..e7933115087a 100644
+--- a/drivers/target/target_core_pr.c
++++ b/drivers/target/target_core_pr.c
+@@ -618,7 +618,7 @@ static struct t10_pr_registration *__core_scsi3_do_alloc_registration(
+ 	struct se_device *dev,
+ 	struct se_node_acl *nacl,
+ 	struct se_lun *lun,
+-	struct se_dev_entry *deve,
++	struct se_dev_entry *dest_deve,
+ 	u64 mapped_lun,
+ 	unsigned char *isid,
+ 	u64 sa_res_key,
+@@ -640,7 +640,29 @@ static struct t10_pr_registration *__core_scsi3_do_alloc_registration(
+ 	INIT_LIST_HEAD(&pr_reg->pr_reg_atp_mem_list);
+ 	atomic_set(&pr_reg->pr_res_holders, 0);
+ 	pr_reg->pr_reg_nacl = nacl;
+-	pr_reg->pr_reg_deve = deve;
++	/*
++	 * For destination registrations for ALL_TG_PT=1 and SPEC_I_PT=1,
++	 * the se_dev_entry->pr_ref will have been already obtained by
++	 * core_get_se_deve_from_rtpi() or __core_scsi3_alloc_registration().
++	 *
++	 * Otherwise, locate se_dev_entry now and obtain a reference until
++	 * registration completes in __core_scsi3_add_registration().
++	 */
++	if (dest_deve) {
++		pr_reg->pr_reg_deve = dest_deve;
++	} else {
++		rcu_read_lock();
++		pr_reg->pr_reg_deve = target_nacl_find_deve(nacl, mapped_lun);
++		if (!pr_reg->pr_reg_deve) {
++			rcu_read_unlock();
++			pr_err("Unable to locate PR deve %s mapped_lun: %llu\n",
++				nacl->initiatorname, mapped_lun);
++			kmem_cache_free(t10_pr_reg_cache, pr_reg);
++			return NULL;
++		}
++		kref_get(&pr_reg->pr_reg_deve->pr_kref);
++		rcu_read_unlock();
++	}
+ 	pr_reg->pr_res_mapped_lun = mapped_lun;
+ 	pr_reg->pr_aptpl_target_lun = lun->unpacked_lun;
+ 	pr_reg->tg_pt_sep_rtpi = lun->lun_rtpi;
+@@ -936,17 +958,29 @@ static int __core_scsi3_check_aptpl_registration(
+ 		    !(strcmp(pr_reg->pr_tport, t_port)) &&
+ 		     (pr_reg->pr_reg_tpgt == tpgt) &&
+ 		     (pr_reg->pr_aptpl_target_lun == target_lun)) {
++			/*
++			 * Obtain the ->pr_reg_deve pointer + reference, that
++			 * is released by __core_scsi3_add_registration() below.
++			 */
++			rcu_read_lock();
++			pr_reg->pr_reg_deve = target_nacl_find_deve(nacl, mapped_lun);
++			if (!pr_reg->pr_reg_deve) {
++				pr_err("Unable to locate PR APTPL %s mapped_lun:"
++					" %llu\n", nacl->initiatorname, mapped_lun);
++				rcu_read_unlock();
++				continue;
++			}
++			kref_get(&pr_reg->pr_reg_deve->pr_kref);
++			rcu_read_unlock();
+ 
+ 			pr_reg->pr_reg_nacl = nacl;
+ 			pr_reg->tg_pt_sep_rtpi = lun->lun_rtpi;
+-
+ 			list_del(&pr_reg->pr_reg_aptpl_list);
+ 			spin_unlock(&pr_tmpl->aptpl_reg_lock);
+ 			/*
+ 			 * At this point all of the pointers in *pr_reg will
+ 			 * be setup, so go ahead and add the registration.
+ 			 */
+-
+ 			__core_scsi3_add_registration(dev, nacl, pr_reg, 0, 0);
+ 			/*
+ 			 * If this registration is the reservation holder,
+@@ -1044,18 +1078,11 @@ static void __core_scsi3_add_registration(
+ 
+ 	__core_scsi3_dump_registration(tfo, dev, nacl, pr_reg, register_type);
+ 	spin_unlock(&pr_tmpl->registration_lock);
+-
+-	rcu_read_lock();
+-	deve = pr_reg->pr_reg_deve;
+-	if (deve)
+-		set_bit(DEF_PR_REG_ACTIVE, &deve->deve_flags);
+-	rcu_read_unlock();
+-
+ 	/*
+ 	 * Skip extra processing for ALL_TG_PT=0 or REGISTER_AND_MOVE.
+ 	 */
+ 	if (!pr_reg->pr_reg_all_tg_pt || register_move)
+-		return;
++		goto out;
+ 	/*
+ 	 * Walk pr_reg->pr_reg_atp_list and add registrations for ALL_TG_PT=1
+ 	 * allocated in __core_scsi3_alloc_registration()
+@@ -1075,19 +1102,31 @@ static void __core_scsi3_add_registration(
+ 		__core_scsi3_dump_registration(tfo, dev, nacl_tmp, pr_reg_tmp,
+ 					       register_type);
+ 		spin_unlock(&pr_tmpl->registration_lock);
+-
++		/*
++		 * Drop configfs group dependency reference and deve->pr_kref
++		 * obtained from  __core_scsi3_alloc_registration() code.
++		 */
+ 		rcu_read_lock();
+ 		deve = pr_reg_tmp->pr_reg_deve;
+-		if (deve)
++		if (deve) {
+ 			set_bit(DEF_PR_REG_ACTIVE, &deve->deve_flags);
++			core_scsi3_lunacl_undepend_item(deve);
++			pr_reg_tmp->pr_reg_deve = NULL;
++		}
+ 		rcu_read_unlock();
+-
+-		/*
+-		 * Drop configfs group dependency reference from
+-		 * __core_scsi3_alloc_registration()
+-		 */
+-		core_scsi3_lunacl_undepend_item(pr_reg_tmp->pr_reg_deve);
+ 	}
++out:
++	/*
++	 * Drop deve->pr_kref obtained in __core_scsi3_do_alloc_registration()
++	 */
++	rcu_read_lock();
++	deve = pr_reg->pr_reg_deve;
++	if (deve) {
++		set_bit(DEF_PR_REG_ACTIVE, &deve->deve_flags);
++		kref_put(&deve->pr_kref, target_pr_kref_release);
++		pr_reg->pr_reg_deve = NULL;
++	}
++	rcu_read_unlock();
+ }
+ 
+ static int core_scsi3_alloc_registration(
+@@ -1785,9 +1824,11 @@ core_scsi3_decode_spec_i_port(
+ 			dest_node_acl->initiatorname, i_buf, (dest_se_deve) ?
+ 			dest_se_deve->mapped_lun : 0);
+ 
+-		if (!dest_se_deve)
++		if (!dest_se_deve) {
++			kref_put(&local_pr_reg->pr_reg_deve->pr_kref,
++				 target_pr_kref_release);
+ 			continue;
+-
++		}
+ 		core_scsi3_lunacl_undepend_item(dest_se_deve);
+ 		core_scsi3_nodeacl_undepend_item(dest_node_acl);
+ 		core_scsi3_tpg_undepend_item(dest_tpg);
+@@ -1823,9 +1864,11 @@ out:
+ 
+ 		kmem_cache_free(t10_pr_reg_cache, dest_pr_reg);
+ 
+-		if (!dest_se_deve)
++		if (!dest_se_deve) {
++			kref_put(&local_pr_reg->pr_reg_deve->pr_kref,
++				 target_pr_kref_release);
+ 			continue;
+-
++		}
+ 		core_scsi3_lunacl_undepend_item(dest_se_deve);
+ 		core_scsi3_nodeacl_undepend_item(dest_node_acl);
+ 		core_scsi3_tpg_undepend_item(dest_tpg);
+diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
+index 4515f52546f8..47fe94ee10b8 100644
+--- a/drivers/target/target_core_xcopy.c
++++ b/drivers/target/target_core_xcopy.c
+@@ -450,6 +450,8 @@ int target_xcopy_setup_pt(void)
+ 	memset(&xcopy_pt_sess, 0, sizeof(struct se_session));
+ 	INIT_LIST_HEAD(&xcopy_pt_sess.sess_list);
+ 	INIT_LIST_HEAD(&xcopy_pt_sess.sess_acl_list);
++	INIT_LIST_HEAD(&xcopy_pt_sess.sess_cmd_list);
++	spin_lock_init(&xcopy_pt_sess.sess_cmd_lock);
+ 
+ 	xcopy_pt_nacl.se_tpg = &xcopy_pt_tpg;
+ 	xcopy_pt_nacl.nacl_sess = &xcopy_pt_sess;
+@@ -644,7 +646,7 @@ static int target_xcopy_read_source(
+ 	pr_debug("XCOPY: Built READ_16: LBA: %llu Sectors: %u Length: %u\n",
+ 		(unsigned long long)src_lba, src_sectors, length);
+ 
+-	transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length,
++	transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, &xcopy_pt_sess, length,
+ 			      DMA_FROM_DEVICE, 0, &xpt_cmd->sense_buffer[0]);
+ 	xop->src_pt_cmd = xpt_cmd;
+ 
+@@ -704,7 +706,7 @@ static int target_xcopy_write_destination(
+ 	pr_debug("XCOPY: Built WRITE_16: LBA: %llu Sectors: %u Length: %u\n",
+ 		(unsigned long long)dst_lba, dst_sectors, length);
+ 
+-	transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length,
++	transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, &xcopy_pt_sess, length,
+ 			      DMA_TO_DEVICE, 0, &xpt_cmd->sense_buffer[0]);
+ 	xop->dst_pt_cmd = xpt_cmd;
+ 
+diff --git a/drivers/thermal/cpu_cooling.c b/drivers/thermal/cpu_cooling.c
+index 620dcd405ff6..42c6f71bdcc1 100644
+--- a/drivers/thermal/cpu_cooling.c
++++ b/drivers/thermal/cpu_cooling.c
+@@ -262,7 +262,9 @@ static int cpufreq_thermal_notifier(struct notifier_block *nb,
+  * efficiently.  Power is stored in mW, frequency in KHz.  The
+  * resulting table is in ascending order.
+  *
+- * Return: 0 on success, -E* on error.
++ * Return: 0 on success, -EINVAL if there are no OPPs for any CPUs,
++ * -ENOMEM if we run out of memory or -EAGAIN if an OPP was
++ * added/enabled while the function was executing.
+  */
+ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
+ 				 u32 capacitance)
+@@ -273,8 +275,6 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
+ 	int num_opps = 0, cpu, i, ret = 0;
+ 	unsigned long freq;
+ 
+-	rcu_read_lock();
+-
+ 	for_each_cpu(cpu, &cpufreq_device->allowed_cpus) {
+ 		dev = get_cpu_device(cpu);
+ 		if (!dev) {
+@@ -284,24 +284,20 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
+ 		}
+ 
+ 		num_opps = dev_pm_opp_get_opp_count(dev);
+-		if (num_opps > 0) {
++		if (num_opps > 0)
+ 			break;
+-		} else if (num_opps < 0) {
+-			ret = num_opps;
+-			goto unlock;
+-		}
++		else if (num_opps < 0)
++			return num_opps;
+ 	}
+ 
+-	if (num_opps == 0) {
+-		ret = -EINVAL;
+-		goto unlock;
+-	}
++	if (num_opps == 0)
++		return -EINVAL;
+ 
+ 	power_table = kcalloc(num_opps, sizeof(*power_table), GFP_KERNEL);
+-	if (!power_table) {
+-		ret = -ENOMEM;
+-		goto unlock;
+-	}
++	if (!power_table)
++		return -ENOMEM;
++
++	rcu_read_lock();
+ 
+ 	for (freq = 0, i = 0;
+ 	     opp = dev_pm_opp_find_freq_ceil(dev, &freq), !IS_ERR(opp);
+@@ -309,6 +305,12 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
+ 		u32 freq_mhz, voltage_mv;
+ 		u64 power;
+ 
++		if (i >= num_opps) {
++			rcu_read_unlock();
++			ret = -EAGAIN;
++			goto free_power_table;
++		}
++
+ 		freq_mhz = freq / 1000000;
+ 		voltage_mv = dev_pm_opp_get_voltage(opp) / 1000;
+ 
+@@ -326,17 +328,22 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
+ 		power_table[i].power = power;
+ 	}
+ 
+-	if (i == 0) {
++	rcu_read_unlock();
++
++	if (i != num_opps) {
+ 		ret = PTR_ERR(opp);
+-		goto unlock;
++		goto free_power_table;
+ 	}
+ 
+ 	cpufreq_device->cpu_dev = dev;
+ 	cpufreq_device->dyn_power_table = power_table;
+ 	cpufreq_device->dyn_power_table_entries = i;
+ 
+-unlock:
+-	rcu_read_unlock();
++	return 0;
++
++free_power_table:
++	kfree(power_table);
++
+ 	return ret;
+ }
+ 
+@@ -847,7 +854,7 @@ __cpufreq_cooling_register(struct device_node *np,
+ 	ret = get_idr(&cpufreq_idr, &cpufreq_dev->id);
+ 	if (ret) {
+ 		cool_dev = ERR_PTR(ret);
+-		goto free_table;
++		goto free_power_table;
+ 	}
+ 
+ 	snprintf(dev_name, sizeof(dev_name), "thermal-cpufreq-%d",
+@@ -889,6 +896,8 @@ __cpufreq_cooling_register(struct device_node *np,
+ 
+ remove_idr:
+ 	release_idr(&cpufreq_idr, cpufreq_dev->id);
++free_power_table:
++	kfree(cpufreq_dev->dyn_power_table);
+ free_table:
+ 	kfree(cpufreq_dev->freq_table);
+ free_time_in_idle_timestamp:
+@@ -1039,6 +1048,7 @@ void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
+ 
+ 	thermal_cooling_device_unregister(cpufreq_dev->cool_dev);
+ 	release_idr(&cpufreq_idr, cpufreq_dev->id);
++	kfree(cpufreq_dev->dyn_power_table);
+ 	kfree(cpufreq_dev->time_in_idle_timestamp);
+ 	kfree(cpufreq_dev->time_in_idle);
+ 	kfree(cpufreq_dev->freq_table);
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index ee8bfacf2071..afc1879f66e0 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -343,8 +343,7 @@ static void n_tty_packet_mode_flush(struct tty_struct *tty)
+ 		spin_lock_irqsave(&tty->ctrl_lock, flags);
+ 		tty->ctrl_status |= TIOCPKT_FLUSHREAD;
+ 		spin_unlock_irqrestore(&tty->ctrl_lock, flags);
+-		if (waitqueue_active(&tty->link->read_wait))
+-			wake_up_interruptible(&tty->link->read_wait);
++		wake_up_interruptible(&tty->link->read_wait);
+ 	}
+ }
+ 
+@@ -1382,8 +1381,7 @@ handle_newline:
+ 			put_tty_queue(c, ldata);
+ 			smp_store_release(&ldata->canon_head, ldata->read_head);
+ 			kill_fasync(&tty->fasync, SIGIO, POLL_IN);
+-			if (waitqueue_active(&tty->read_wait))
+-				wake_up_interruptible_poll(&tty->read_wait, POLLIN);
++			wake_up_interruptible_poll(&tty->read_wait, POLLIN);
+ 			return 0;
+ 		}
+ 	}
+@@ -1667,8 +1665,7 @@ static void __receive_buf(struct tty_struct *tty, const unsigned char *cp,
+ 
+ 	if ((read_cnt(ldata) >= ldata->minimum_to_wake) || L_EXTPROC(tty)) {
+ 		kill_fasync(&tty->fasync, SIGIO, POLL_IN);
+-		if (waitqueue_active(&tty->read_wait))
+-			wake_up_interruptible_poll(&tty->read_wait, POLLIN);
++		wake_up_interruptible_poll(&tty->read_wait, POLLIN);
+ 	}
+ }
+ 
+@@ -1887,10 +1884,8 @@ static void n_tty_set_termios(struct tty_struct *tty, struct ktermios *old)
+ 	}
+ 
+ 	/* The termios change make the tty ready for I/O */
+-	if (waitqueue_active(&tty->write_wait))
+-		wake_up_interruptible(&tty->write_wait);
+-	if (waitqueue_active(&tty->read_wait))
+-		wake_up_interruptible(&tty->read_wait);
++	wake_up_interruptible(&tty->write_wait);
++	wake_up_interruptible(&tty->read_wait);
+ }
+ 
+ /**
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index 37fff12dd4d0..c35d96ece8ff 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -326,6 +326,14 @@ configured less than Maximum supported fifo bytes */
+ 				  UART_FCR7_64BYTE,
+ 		.flags		= UART_CAP_FIFO,
+ 	},
++	[PORT_RT2880] = {
++		.name		= "Palmchip BK-3103",
++		.fifo_size	= 16,
++		.tx_loadsz	= 16,
++		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
++		.rxtrig_bytes	= {1, 4, 8, 14},
++		.flags		= UART_CAP_FIFO,
++	},
+ };
+ 
+ /* Uart divisor latch read */
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 2a8f528153e7..40326b342762 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -2641,7 +2641,7 @@ static int atmel_serial_probe(struct platform_device *pdev)
+ 	ret = atmel_init_gpios(port, &pdev->dev);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "Failed to initialize GPIOs.");
+-		goto err;
++		goto err_clear_bit;
+ 	}
+ 
+ 	ret = atmel_init_port(port, pdev);
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 57fc6ee12332..774df354af55 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -2136,8 +2136,24 @@ retry_open:
+ 	if (!noctty &&
+ 	    current->signal->leader &&
+ 	    !current->signal->tty &&
+-	    tty->session == NULL)
+-		__proc_set_tty(tty);
++	    tty->session == NULL) {
++		/*
++		 * Don't let a process that only has write access to the tty
++		 * obtain the privileges associated with having a tty as
++		 * controlling terminal (being able to reopen it with full
++		 * access through /dev/tty, being able to perform pushback).
++		 * Many distributions set the group of all ttys to "tty" and
++		 * grant write-only access to all terminals for setgid tty
++		 * binaries, which should not imply full privileges on all ttys.
++		 *
++		 * This could theoretically break old code that performs open()
++		 * on a write-only file descriptor. In that case, it might be
++		 * necessary to also permit this if
++		 * inode_permission(inode, MAY_READ) == 0.
++		 */
++		if (filp->f_mode & FMODE_READ)
++			__proc_set_tty(tty);
++	}
+ 	spin_unlock_irq(&current->sighand->siglock);
+ 	read_unlock(&tasklist_lock);
+ 	tty_unlock(tty);
+@@ -2426,7 +2442,7 @@ static int fionbio(struct file *file, int __user *p)
+  *		Takes ->siglock() when updating signal->tty
+  */
+ 
+-static int tiocsctty(struct tty_struct *tty, int arg)
++static int tiocsctty(struct tty_struct *tty, struct file *file, int arg)
+ {
+ 	int ret = 0;
+ 
+@@ -2460,6 +2476,13 @@ static int tiocsctty(struct tty_struct *tty, int arg)
+ 			goto unlock;
+ 		}
+ 	}
++
++	/* See the comment in tty_open(). */
++	if ((file->f_mode & FMODE_READ) == 0 && !capable(CAP_SYS_ADMIN)) {
++		ret = -EPERM;
++		goto unlock;
++	}
++
+ 	proc_set_tty(tty);
+ unlock:
+ 	read_unlock(&tasklist_lock);
+@@ -2852,7 +2875,7 @@ long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 		no_tty();
+ 		return 0;
+ 	case TIOCSCTTY:
+-		return tiocsctty(tty, arg);
++		return tiocsctty(tty, file, arg);
+ 	case TIOCGPGRP:
+ 		return tiocgpgrp(tty, real_tty, p);
+ 	case TIOCSPGRP:
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index 389f0e034259..fa774323ebda 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -56,7 +56,7 @@ static const struct of_device_id ci_hdrc_imx_dt_ids[] = {
+ 	{ .compatible = "fsl,imx27-usb", .data = &imx27_usb_data},
+ 	{ .compatible = "fsl,imx6q-usb", .data = &imx6q_usb_data},
+ 	{ .compatible = "fsl,imx6sl-usb", .data = &imx6sl_usb_data},
+-	{ .compatible = "fsl,imx6sx-usb", .data = &imx6sl_usb_data},
++	{ .compatible = "fsl,imx6sx-usb", .data = &imx6sx_usb_data},
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, ci_hdrc_imx_dt_ids);
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 764f668d45a9..6e53c24fa1cb 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -656,6 +656,44 @@ __acquires(hwep->lock)
+ 	return 0;
+ }
+ 
++static int _ep_set_halt(struct usb_ep *ep, int value, bool check_transfer)
++{
++	struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep);
++	int direction, retval = 0;
++	unsigned long flags;
++
++	if (ep == NULL || hwep->ep.desc == NULL)
++		return -EINVAL;
++
++	if (usb_endpoint_xfer_isoc(hwep->ep.desc))
++		return -EOPNOTSUPP;
++
++	spin_lock_irqsave(hwep->lock, flags);
++
++	if (value && hwep->dir == TX && check_transfer &&
++		!list_empty(&hwep->qh.queue) &&
++			!usb_endpoint_xfer_control(hwep->ep.desc)) {
++		spin_unlock_irqrestore(hwep->lock, flags);
++		return -EAGAIN;
++	}
++
++	direction = hwep->dir;
++	do {
++		retval |= hw_ep_set_halt(hwep->ci, hwep->num, hwep->dir, value);
++
++		if (!value)
++			hwep->wedge = 0;
++
++		if (hwep->type == USB_ENDPOINT_XFER_CONTROL)
++			hwep->dir = (hwep->dir == TX) ? RX : TX;
++
++	} while (hwep->dir != direction);
++
++	spin_unlock_irqrestore(hwep->lock, flags);
++	return retval;
++}
++
++
+ /**
+  * _gadget_stop_activity: stops all USB activity, flushes & disables all endpts
+  * @gadget: gadget
+@@ -1051,7 +1089,7 @@ __acquires(ci->lock)
+ 				num += ci->hw_ep_max / 2;
+ 
+ 			spin_unlock(&ci->lock);
+-			err = usb_ep_set_halt(&ci->ci_hw_ep[num].ep);
++			err = _ep_set_halt(&ci->ci_hw_ep[num].ep, 1, false);
+ 			spin_lock(&ci->lock);
+ 			if (!err)
+ 				isr_setup_status_phase(ci);
+@@ -1110,8 +1148,8 @@ delegate:
+ 
+ 	if (err < 0) {
+ 		spin_unlock(&ci->lock);
+-		if (usb_ep_set_halt(&hwep->ep))
+-			dev_err(ci->dev, "error: ep_set_halt\n");
++		if (_ep_set_halt(&hwep->ep, 1, false))
++			dev_err(ci->dev, "error: _ep_set_halt\n");
+ 		spin_lock(&ci->lock);
+ 	}
+ }
+@@ -1142,9 +1180,9 @@ __acquires(ci->lock)
+ 					err = isr_setup_status_phase(ci);
+ 				if (err < 0) {
+ 					spin_unlock(&ci->lock);
+-					if (usb_ep_set_halt(&hwep->ep))
++					if (_ep_set_halt(&hwep->ep, 1, false))
+ 						dev_err(ci->dev,
+-							"error: ep_set_halt\n");
++						"error: _ep_set_halt\n");
+ 					spin_lock(&ci->lock);
+ 				}
+ 			}
+@@ -1390,41 +1428,7 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req)
+  */
+ static int ep_set_halt(struct usb_ep *ep, int value)
+ {
+-	struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep);
+-	int direction, retval = 0;
+-	unsigned long flags;
+-
+-	if (ep == NULL || hwep->ep.desc == NULL)
+-		return -EINVAL;
+-
+-	if (usb_endpoint_xfer_isoc(hwep->ep.desc))
+-		return -EOPNOTSUPP;
+-
+-	spin_lock_irqsave(hwep->lock, flags);
+-
+-#ifndef STALL_IN
+-	/* g_file_storage MS compliant but g_zero fails chapter 9 compliance */
+-	if (value && hwep->type == USB_ENDPOINT_XFER_BULK && hwep->dir == TX &&
+-	    !list_empty(&hwep->qh.queue)) {
+-		spin_unlock_irqrestore(hwep->lock, flags);
+-		return -EAGAIN;
+-	}
+-#endif
+-
+-	direction = hwep->dir;
+-	do {
+-		retval |= hw_ep_set_halt(hwep->ci, hwep->num, hwep->dir, value);
+-
+-		if (!value)
+-			hwep->wedge = 0;
+-
+-		if (hwep->type == USB_ENDPOINT_XFER_CONTROL)
+-			hwep->dir = (hwep->dir == TX) ? RX : TX;
+-
+-	} while (hwep->dir != direction);
+-
+-	spin_unlock_irqrestore(hwep->lock, flags);
+-	return retval;
++	return _ep_set_halt(ep, value, true);
+ }
+ 
+ /**
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index b2a540b43f97..b9ddf0c1ffe5 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -112,7 +112,7 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ 				cfgno, inum, asnum, ep->desc.bEndpointAddress);
+ 		ep->ss_ep_comp.bmAttributes = 16;
+ 	} else if (usb_endpoint_xfer_isoc(&ep->desc) &&
+-			desc->bmAttributes > 2) {
++		   USB_SS_MULT(desc->bmAttributes) > 3) {
+ 		dev_warn(ddev, "Isoc endpoint has Mult of %d in "
+ 				"config %d interface %d altsetting %d ep %d: "
+ 				"setting to 3\n", desc->bmAttributes + 1,
+@@ -121,7 +121,8 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ 	}
+ 
+ 	if (usb_endpoint_xfer_isoc(&ep->desc))
+-		max_tx = (desc->bMaxBurst + 1) * (desc->bmAttributes + 1) *
++		max_tx = (desc->bMaxBurst + 1) *
++			(USB_SS_MULT(desc->bmAttributes)) *
+ 			usb_endpoint_maxp(&ep->desc);
+ 	else if (usb_endpoint_xfer_int(&ep->desc))
+ 		max_tx = usb_endpoint_maxp(&ep->desc) *
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index d85abfed84cc..f5a381945db2 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -54,6 +54,13 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT },
+ 	{ USB_DEVICE(0x046d, 0x0843), .driver_info = USB_QUIRK_DELAY_INIT },
+ 
++	/* Logitech ConferenceCam CC3000e */
++	{ USB_DEVICE(0x046d, 0x0847), .driver_info = USB_QUIRK_DELAY_INIT },
++	{ USB_DEVICE(0x046d, 0x0848), .driver_info = USB_QUIRK_DELAY_INIT },
++
++	/* Logitech PTZ Pro Camera */
++	{ USB_DEVICE(0x046d, 0x0853), .driver_info = USB_QUIRK_DELAY_INIT },
++
+ 	/* Logitech Quickcam Fusion */
+ 	{ USB_DEVICE(0x046d, 0x08c1), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+@@ -78,6 +85,12 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* Philips PSC805 audio device */
+ 	{ USB_DEVICE(0x0471, 0x0155), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
++	/* Plantronic Audio 655 DSP */
++	{ USB_DEVICE(0x047f, 0xc008), .driver_info = USB_QUIRK_RESET_RESUME },
++
++	/* Plantronic Audio 648 USB */
++	{ USB_DEVICE(0x047f, 0xc013), .driver_info = USB_QUIRK_RESET_RESUME },
++
+ 	/* Artisman Watchdog Dongle */
+ 	{ USB_DEVICE(0x04b4, 0x0526), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 9a8c936cd42c..41f841fa6c4d 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1498,10 +1498,10 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
+ 	 * use Event Data TRBs, and we don't chain in a link TRB on short
+ 	 * transfers, we're basically dividing by 1.
+ 	 *
+-	 * xHCI 1.0 specification indicates that the Average TRB Length should
+-	 * be set to 8 for control endpoints.
++	 * xHCI 1.0 and 1.1 specification indicates that the Average TRB Length
++	 * should be set to 8 for control endpoints.
+ 	 */
+-	if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version == 0x100)
++	if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version >= 0x100)
+ 		ep_ctx->tx_info |= cpu_to_le32(AVG_TRB_LENGTH_FOR_EP(8));
+ 	else
+ 		ep_ctx->tx_info |=
+@@ -1792,8 +1792,7 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
+ 	int size;
+ 	int i, j, num_ports;
+ 
+-	if (timer_pending(&xhci->cmd_timer))
+-		del_timer_sync(&xhci->cmd_timer);
++	del_timer_sync(&xhci->cmd_timer);
+ 
+ 	/* Free the Event Ring Segment Table and the actual Event Ring */
+ 	size = sizeof(struct xhci_erst_entry)*(xhci->erst.num_entries);
+@@ -2321,6 +2320,10 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ 
+ 	INIT_LIST_HEAD(&xhci->cmd_list);
+ 
++	/* init command timeout timer */
++	setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout,
++		    (unsigned long)xhci);
++
+ 	page_size = readl(&xhci->op_regs->page_size);
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ 			"Supported page size register = 0x%x", page_size);
+@@ -2505,10 +2508,6 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ 			"Wrote ERST address to ir_set 0.");
+ 	xhci_print_ir_set(xhci, 0);
+ 
+-	/* init command timeout timer */
+-	setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout,
+-		    (unsigned long)xhci);
+-
+ 	/*
+ 	 * XXX: Might need to set the Interrupter Moderation Register to
+ 	 * something other than the default (~1ms minimum between interrupts).
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 5590eac2b22d..c79d33676672 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -180,51 +180,6 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 				"QUIRK: Resetting on resume");
+ }
+ 
+-/*
+- * In some Intel xHCI controllers, in order to get D3 working,
+- * through a vendor specific SSIC CONFIG register at offset 0x883c,
+- * SSIC PORT need to be marked as "unused" before putting xHCI
+- * into D3. After D3 exit, the SSIC port need to be marked as "used".
+- * Without this change, xHCI might not enter D3 state.
+- * Make sure PME works on some Intel xHCI controllers by writing 1 to clear
+- * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4
+- */
+-static void xhci_pme_quirk(struct usb_hcd *hcd, bool suspend)
+-{
+-	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+-	struct pci_dev		*pdev = to_pci_dev(hcd->self.controller);
+-	u32 val;
+-	void __iomem *reg;
+-
+-	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+-		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) {
+-
+-		reg = (void __iomem *) xhci->cap_regs + PORT2_SSIC_CONFIG_REG2;
+-
+-		/* Notify SSIC that SSIC profile programming is not done */
+-		val = readl(reg) & ~PROG_DONE;
+-		writel(val, reg);
+-
+-		/* Mark SSIC port as unused(suspend) or used(resume) */
+-		val = readl(reg);
+-		if (suspend)
+-			val |= SSIC_PORT_UNUSED;
+-		else
+-			val &= ~SSIC_PORT_UNUSED;
+-		writel(val, reg);
+-
+-		/* Notify SSIC that SSIC profile programming is done */
+-		val = readl(reg) | PROG_DONE;
+-		writel(val, reg);
+-		readl(reg);
+-	}
+-
+-	reg = (void __iomem *) xhci->cap_regs + 0x80a4;
+-	val = readl(reg);
+-	writel(val | BIT(28), reg);
+-	readl(reg);
+-}
+-
+ #ifdef CONFIG_ACPI
+ static void xhci_pme_acpi_rtd3_enable(struct pci_dev *dev)
+ {
+@@ -345,6 +300,51 @@ static void xhci_pci_remove(struct pci_dev *dev)
+ }
+ 
+ #ifdef CONFIG_PM
++/*
++ * In some Intel xHCI controllers, in order to get D3 working,
++ * through a vendor specific SSIC CONFIG register at offset 0x883c,
++ * SSIC PORT need to be marked as "unused" before putting xHCI
++ * into D3. After D3 exit, the SSIC port need to be marked as "used".
++ * Without this change, xHCI might not enter D3 state.
++ * Make sure PME works on some Intel xHCI controllers by writing 1 to clear
++ * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4
++ */
++static void xhci_pme_quirk(struct usb_hcd *hcd, bool suspend)
++{
++	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
++	struct pci_dev		*pdev = to_pci_dev(hcd->self.controller);
++	u32 val;
++	void __iomem *reg;
++
++	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
++		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) {
++
++		reg = (void __iomem *) xhci->cap_regs + PORT2_SSIC_CONFIG_REG2;
++
++		/* Notify SSIC that SSIC profile programming is not done */
++		val = readl(reg) & ~PROG_DONE;
++		writel(val, reg);
++
++		/* Mark SSIC port as unused(suspend) or used(resume) */
++		val = readl(reg);
++		if (suspend)
++			val |= SSIC_PORT_UNUSED;
++		else
++			val &= ~SSIC_PORT_UNUSED;
++		writel(val, reg);
++
++		/* Notify SSIC that SSIC profile programming is done */
++		val = readl(reg) | PROG_DONE;
++		writel(val, reg);
++		readl(reg);
++	}
++
++	reg = (void __iomem *) xhci->cap_regs + 0x80a4;
++	val = readl(reg);
++	writel(val | BIT(28), reg);
++	readl(reg);
++}
++
+ static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup)
+ {
+ 	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 32f4d564494a..8aadf3def901 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -302,6 +302,15 @@ static int xhci_abort_cmd_ring(struct xhci_hcd *xhci)
+ 	ret = xhci_handshake(&xhci->op_regs->cmd_ring,
+ 			CMD_RING_RUNNING, 0, 5 * 1000 * 1000);
+ 	if (ret < 0) {
++		/* we are about to kill xhci, give it one more chance */
++		xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
++			      &xhci->op_regs->cmd_ring);
++		udelay(1000);
++		ret = xhci_handshake(&xhci->op_regs->cmd_ring,
++				     CMD_RING_RUNNING, 0, 3 * 1000 * 1000);
++		if (ret == 0)
++			return 0;
++
+ 		xhci_err(xhci, "Stopped the command ring failed, "
+ 				"maybe the host is dead\n");
+ 		xhci->xhc_state |= XHCI_STATE_DYING;
+@@ -3041,9 +3050,11 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 	struct xhci_td *td;
+ 	struct scatterlist *sg;
+ 	int num_sgs;
+-	int trb_buff_len, this_sg_len, running_total;
++	int trb_buff_len, this_sg_len, running_total, ret;
+ 	unsigned int total_packet_count;
++	bool zero_length_needed;
+ 	bool first_trb;
++	int last_trb_num;
+ 	u64 addr;
+ 	bool more_trbs_coming;
+ 
+@@ -3059,13 +3070,27 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 	total_packet_count = DIV_ROUND_UP(urb->transfer_buffer_length,
+ 			usb_endpoint_maxp(&urb->ep->desc));
+ 
+-	trb_buff_len = prepare_transfer(xhci, xhci->devs[slot_id],
++	ret = prepare_transfer(xhci, xhci->devs[slot_id],
+ 			ep_index, urb->stream_id,
+ 			num_trbs, urb, 0, mem_flags);
+-	if (trb_buff_len < 0)
+-		return trb_buff_len;
++	if (ret < 0)
++		return ret;
+ 
+ 	urb_priv = urb->hcpriv;
++
++	/* Deal with URB_ZERO_PACKET - need one more td/trb */
++	zero_length_needed = urb->transfer_flags & URB_ZERO_PACKET &&
++		urb_priv->length == 2;
++	if (zero_length_needed) {
++		num_trbs++;
++		xhci_dbg(xhci, "Creating zero length td.\n");
++		ret = prepare_transfer(xhci, xhci->devs[slot_id],
++				ep_index, urb->stream_id,
++				1, urb, 1, mem_flags);
++		if (ret < 0)
++			return ret;
++	}
++
+ 	td = urb_priv->td[0];
+ 
+ 	/*
+@@ -3095,6 +3120,7 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 		trb_buff_len = urb->transfer_buffer_length;
+ 
+ 	first_trb = true;
++	last_trb_num = zero_length_needed ? 2 : 1;
+ 	/* Queue the first TRB, even if it's zero-length */
+ 	do {
+ 		u32 field = 0;
+@@ -3112,12 +3138,15 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 		/* Chain all the TRBs together; clear the chain bit in the last
+ 		 * TRB to indicate it's the last TRB in the chain.
+ 		 */
+-		if (num_trbs > 1) {
++		if (num_trbs > last_trb_num) {
+ 			field |= TRB_CHAIN;
+-		} else {
+-			/* FIXME - add check for ZERO_PACKET flag before this */
++		} else if (num_trbs == last_trb_num) {
+ 			td->last_trb = ep_ring->enqueue;
+ 			field |= TRB_IOC;
++		} else if (zero_length_needed && num_trbs == 1) {
++			trb_buff_len = 0;
++			urb_priv->td[1]->last_trb = ep_ring->enqueue;
++			field |= TRB_IOC;
+ 		}
+ 
+ 		/* Only set interrupt on short packet for IN endpoints */
+@@ -3179,7 +3208,7 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 		if (running_total + trb_buff_len > urb->transfer_buffer_length)
+ 			trb_buff_len =
+ 				urb->transfer_buffer_length - running_total;
+-	} while (running_total < urb->transfer_buffer_length);
++	} while (num_trbs > 0);
+ 
+ 	check_trb_math(urb, num_trbs, running_total);
+ 	giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
+@@ -3197,7 +3226,9 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 	int num_trbs;
+ 	struct xhci_generic_trb *start_trb;
+ 	bool first_trb;
++	int last_trb_num;
+ 	bool more_trbs_coming;
++	bool zero_length_needed;
+ 	int start_cycle;
+ 	u32 field, length_field;
+ 
+@@ -3228,7 +3259,6 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 		num_trbs++;
+ 		running_total += TRB_MAX_BUFF_SIZE;
+ 	}
+-	/* FIXME: this doesn't deal with URB_ZERO_PACKET - need one more */
+ 
+ 	ret = prepare_transfer(xhci, xhci->devs[slot_id],
+ 			ep_index, urb->stream_id,
+@@ -3237,6 +3267,20 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 		return ret;
+ 
+ 	urb_priv = urb->hcpriv;
++
++	/* Deal with URB_ZERO_PACKET - need one more td/trb */
++	zero_length_needed = urb->transfer_flags & URB_ZERO_PACKET &&
++		urb_priv->length == 2;
++	if (zero_length_needed) {
++		num_trbs++;
++		xhci_dbg(xhci, "Creating zero length td.\n");
++		ret = prepare_transfer(xhci, xhci->devs[slot_id],
++				ep_index, urb->stream_id,
++				1, urb, 1, mem_flags);
++		if (ret < 0)
++			return ret;
++	}
++
+ 	td = urb_priv->td[0];
+ 
+ 	/*
+@@ -3258,7 +3302,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 		trb_buff_len = urb->transfer_buffer_length;
+ 
+ 	first_trb = true;
+-
++	last_trb_num = zero_length_needed ? 2 : 1;
+ 	/* Queue the first TRB, even if it's zero-length */
+ 	do {
+ 		u32 remainder = 0;
+@@ -3275,12 +3319,15 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 		/* Chain all the TRBs together; clear the chain bit in the last
+ 		 * TRB to indicate it's the last TRB in the chain.
+ 		 */
+-		if (num_trbs > 1) {
++		if (num_trbs > last_trb_num) {
+ 			field |= TRB_CHAIN;
+-		} else {
+-			/* FIXME - add check for ZERO_PACKET flag before this */
++		} else if (num_trbs == last_trb_num) {
+ 			td->last_trb = ep_ring->enqueue;
+ 			field |= TRB_IOC;
++		} else if (zero_length_needed && num_trbs == 1) {
++			trb_buff_len = 0;
++			urb_priv->td[1]->last_trb = ep_ring->enqueue;
++			field |= TRB_IOC;
+ 		}
+ 
+ 		/* Only set interrupt on short packet for IN endpoints */
+@@ -3318,7 +3365,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 		trb_buff_len = urb->transfer_buffer_length - running_total;
+ 		if (trb_buff_len > TRB_MAX_BUFF_SIZE)
+ 			trb_buff_len = TRB_MAX_BUFF_SIZE;
+-	} while (running_total < urb->transfer_buffer_length);
++	} while (num_trbs > 0);
+ 
+ 	check_trb_math(urb, num_trbs, running_total);
+ 	giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
+@@ -3385,8 +3432,8 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 	if (start_cycle == 0)
+ 		field |= 0x1;
+ 
+-	/* xHCI 1.0 6.4.1.2.1: Transfer Type field */
+-	if (xhci->hci_version == 0x100) {
++	/* xHCI 1.0/1.1 6.4.1.2.1: Transfer Type field */
++	if (xhci->hci_version >= 0x100) {
+ 		if (urb->transfer_buffer_length > 0) {
+ 			if (setup->bRequestType & USB_DIR_IN)
+ 				field |= TRB_TX_TYPE(TRB_DATA_IN);
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 526ebc0c7e72..d7b9f484d4e9 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -146,7 +146,8 @@ static int xhci_start(struct xhci_hcd *xhci)
+ 				"waited %u microseconds.\n",
+ 				XHCI_MAX_HALT_USEC);
+ 	if (!ret)
+-		xhci->xhc_state &= ~XHCI_STATE_HALTED;
++		xhci->xhc_state &= ~(XHCI_STATE_HALTED | XHCI_STATE_DYING);
++
+ 	return ret;
+ }
+ 
+@@ -654,15 +655,6 @@ int xhci_run(struct usb_hcd *hcd)
+ }
+ EXPORT_SYMBOL_GPL(xhci_run);
+ 
+-static void xhci_only_stop_hcd(struct usb_hcd *hcd)
+-{
+-	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+-
+-	spin_lock_irq(&xhci->lock);
+-	xhci_halt(xhci);
+-	spin_unlock_irq(&xhci->lock);
+-}
+-
+ /*
+  * Stop xHCI driver.
+  *
+@@ -677,12 +669,14 @@ void xhci_stop(struct usb_hcd *hcd)
+ 	u32 temp;
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ 
+-	if (!usb_hcd_is_primary_hcd(hcd)) {
+-		xhci_only_stop_hcd(xhci->shared_hcd);
++	if (xhci->xhc_state & XHCI_STATE_HALTED)
+ 		return;
+-	}
+ 
++	mutex_lock(&xhci->mutex);
+ 	spin_lock_irq(&xhci->lock);
++	xhci->xhc_state |= XHCI_STATE_HALTED;
++	xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
++
+ 	/* Make sure the xHC is halted for a USB3 roothub
+ 	 * (xhci_stop() could be called as part of failed init).
+ 	 */
+@@ -717,6 +711,7 @@ void xhci_stop(struct usb_hcd *hcd)
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ 			"xhci_stop completed - status = %x",
+ 			readl(&xhci->op_regs->status));
++	mutex_unlock(&xhci->mutex);
+ }
+ 
+ /*
+@@ -1340,6 +1335,11 @@ int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
+ 
+ 	if (usb_endpoint_xfer_isoc(&urb->ep->desc))
+ 		size = urb->number_of_packets;
++	else if (usb_endpoint_is_bulk_out(&urb->ep->desc) &&
++	    urb->transfer_buffer_length > 0 &&
++	    urb->transfer_flags & URB_ZERO_PACKET &&
++	    !(urb->transfer_buffer_length % usb_endpoint_maxp(&urb->ep->desc)))
++		size = 2;
+ 	else
+ 		size = 1;
+ 
+@@ -3788,6 +3788,9 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 
+ 	mutex_lock(&xhci->mutex);
+ 
++	if (xhci->xhc_state)	/* dying or halted */
++		goto out;
++
+ 	if (!udev->slot_id) {
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_address,
+ 				"Bad Slot ID %d", udev->slot_id);
+diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c
+index 3ad5d19e4d04..23c794813e6a 100644
+--- a/drivers/usb/misc/chaoskey.c
++++ b/drivers/usb/misc/chaoskey.c
+@@ -472,7 +472,7 @@ static int chaoskey_rng_read(struct hwrng *rng, void *data,
+ 	if (this_time > max)
+ 		this_time = max;
+ 
+-	memcpy(data, dev->buf, this_time);
++	memcpy(data, dev->buf + dev->used, this_time);
+ 
+ 	dev->used += this_time;
+ 
+diff --git a/drivers/usb/musb/musb_cppi41.c b/drivers/usb/musb/musb_cppi41.c
+index 4d1b44c232ee..d07cafb7d5f5 100644
+--- a/drivers/usb/musb/musb_cppi41.c
++++ b/drivers/usb/musb/musb_cppi41.c
+@@ -614,7 +614,7 @@ static int cppi41_dma_controller_start(struct cppi41_dma_controller *controller)
+ {
+ 	struct musb *musb = controller->musb;
+ 	struct device *dev = musb->controller;
+-	struct device_node *np = dev->of_node;
++	struct device_node *np = dev->parent->of_node;
+ 	struct cppi41_dma_channel *cppi41_channel;
+ 	int count;
+ 	int i;
+@@ -664,7 +664,7 @@ static int cppi41_dma_controller_start(struct cppi41_dma_controller *controller)
+ 		musb_dma->status = MUSB_DMA_STATUS_FREE;
+ 		musb_dma->max_len = SZ_4M;
+ 
+-		dc = dma_request_slave_channel(dev, str);
++		dc = dma_request_slave_channel(dev->parent, str);
+ 		if (!dc) {
+ 			dev_err(dev, "Failed to request %s.\n", str);
+ 			ret = -EPROBE_DEFER;
+@@ -695,7 +695,7 @@ cppi41_dma_controller_create(struct musb *musb, void __iomem *base)
+ 	struct cppi41_dma_controller *controller;
+ 	int ret = 0;
+ 
+-	if (!musb->controller->of_node) {
++	if (!musb->controller->parent->of_node) {
+ 		dev_err(musb->controller, "Need DT for the DMA engine.\n");
+ 		return NULL;
+ 	}
+diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
+index 1334a3de31b8..67325ec94894 100644
+--- a/drivers/usb/musb/musb_dsps.c
++++ b/drivers/usb/musb/musb_dsps.c
+@@ -225,8 +225,11 @@ static void dsps_musb_enable(struct musb *musb)
+ 
+ 	dsps_writel(reg_base, wrp->epintr_set, epmask);
+ 	dsps_writel(reg_base, wrp->coreintr_set, coremask);
+-	/* start polling for ID change. */
+-	mod_timer(&glue->timer, jiffies + msecs_to_jiffies(wrp->poll_timeout));
++	/* start polling for ID change in dual-role idle mode */
++	if (musb->xceiv->otg->state == OTG_STATE_B_IDLE &&
++			musb->port_mode == MUSB_PORT_MODE_DUAL_ROLE)
++		mod_timer(&glue->timer, jiffies +
++				msecs_to_jiffies(wrp->poll_timeout));
+ 	dsps_musb_try_idle(musb, 0);
+ }
+ 
+diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c
+index deee68eafb72..0cd85f2ccddd 100644
+--- a/drivers/usb/phy/phy-generic.c
++++ b/drivers/usb/phy/phy-generic.c
+@@ -230,7 +230,8 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop,
+ 		clk_rate = pdata->clk_rate;
+ 		needs_vcc = pdata->needs_vcc;
+ 		if (gpio_is_valid(pdata->gpio_reset)) {
+-			err = devm_gpio_request_one(dev, pdata->gpio_reset, 0,
++			err = devm_gpio_request_one(dev, pdata->gpio_reset,
++						    GPIOF_ACTIVE_LOW,
+ 						    dev_name(dev));
+ 			if (!err)
+ 				nop->gpiod_reset =
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 876423b8892c..7c8eb4c4c175 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -278,6 +278,10 @@ static void option_instat_callback(struct urb *urb);
+ #define ZTE_PRODUCT_MF622			0x0001
+ #define ZTE_PRODUCT_MF628			0x0015
+ #define ZTE_PRODUCT_MF626			0x0031
++#define ZTE_PRODUCT_ZM8620_X			0x0396
++#define ZTE_PRODUCT_ME3620_MBIM			0x0426
++#define ZTE_PRODUCT_ME3620_X			0x1432
++#define ZTE_PRODUCT_ME3620_L			0x1433
+ #define ZTE_PRODUCT_AC2726			0xfff1
+ #define ZTE_PRODUCT_MG880			0xfffd
+ #define ZTE_PRODUCT_CDMA_TECH			0xfffe
+@@ -544,6 +548,18 @@ static const struct option_blacklist_info zte_mc2716_z_blacklist = {
+ 	.sendsetup = BIT(1) | BIT(2) | BIT(3),
+ };
+ 
++static const struct option_blacklist_info zte_me3620_mbim_blacklist = {
++	.reserved = BIT(2) | BIT(3) | BIT(4),
++};
++
++static const struct option_blacklist_info zte_me3620_xl_blacklist = {
++	.reserved = BIT(3) | BIT(4) | BIT(5),
++};
++
++static const struct option_blacklist_info zte_zm8620_x_blacklist = {
++	.reserved = BIT(3) | BIT(4) | BIT(5),
++};
++
+ static const struct option_blacklist_info huawei_cdc12_blacklist = {
+ 	.reserved = BIT(1) | BIT(2),
+ };
+@@ -1591,6 +1607,14 @@ static const struct usb_device_id option_ids[] = {
+ 	 .driver_info = (kernel_ulong_t)&zte_ad3812_z_blacklist },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2716, 0xff, 0xff, 0xff),
+ 	 .driver_info = (kernel_ulong_t)&zte_mc2716_z_blacklist },
++	{ USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_L),
++	 .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist },
++	{ USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_MBIM),
++	 .driver_info = (kernel_ulong_t)&zte_me3620_mbim_blacklist },
++	{ USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_X),
++	 .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist },
++	{ USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ZM8620_X),
++	 .driver_info = (kernel_ulong_t)&zte_zm8620_x_blacklist },
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) },
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) },
+ 	{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) },
+diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c
+index 6c3734d2b45a..d3ea90bef84d 100644
+--- a/drivers/usb/serial/whiteheat.c
++++ b/drivers/usb/serial/whiteheat.c
+@@ -80,6 +80,8 @@ static int  whiteheat_firmware_download(struct usb_serial *serial,
+ static int  whiteheat_firmware_attach(struct usb_serial *serial);
+ 
+ /* function prototypes for the Connect Tech WhiteHEAT serial converter */
++static int whiteheat_probe(struct usb_serial *serial,
++				const struct usb_device_id *id);
+ static int  whiteheat_attach(struct usb_serial *serial);
+ static void whiteheat_release(struct usb_serial *serial);
+ static int  whiteheat_port_probe(struct usb_serial_port *port);
+@@ -116,6 +118,7 @@ static struct usb_serial_driver whiteheat_device = {
+ 	.description =		"Connect Tech - WhiteHEAT",
+ 	.id_table =		id_table_std,
+ 	.num_ports =		4,
++	.probe =		whiteheat_probe,
+ 	.attach =		whiteheat_attach,
+ 	.release =		whiteheat_release,
+ 	.port_probe =		whiteheat_port_probe,
+@@ -217,6 +220,34 @@ static int whiteheat_firmware_attach(struct usb_serial *serial)
+ /*****************************************************************************
+  * Connect Tech's White Heat serial driver functions
+  *****************************************************************************/
++
++static int whiteheat_probe(struct usb_serial *serial,
++				const struct usb_device_id *id)
++{
++	struct usb_host_interface *iface_desc;
++	struct usb_endpoint_descriptor *endpoint;
++	size_t num_bulk_in = 0;
++	size_t num_bulk_out = 0;
++	size_t min_num_bulk;
++	unsigned int i;
++
++	iface_desc = serial->interface->cur_altsetting;
++
++	for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) {
++		endpoint = &iface_desc->endpoint[i].desc;
++		if (usb_endpoint_is_bulk_in(endpoint))
++			++num_bulk_in;
++		if (usb_endpoint_is_bulk_out(endpoint))
++			++num_bulk_out;
++	}
++
++	min_num_bulk = COMMAND_PORT + 1;
++	if (num_bulk_in < min_num_bulk || num_bulk_out < min_num_bulk)
++		return -ENODEV;
++
++	return 0;
++}
++
+ static int whiteheat_attach(struct usb_serial *serial)
+ {
+ 	struct usb_serial_port *command_port;
+diff --git a/drivers/watchdog/imgpdc_wdt.c b/drivers/watchdog/imgpdc_wdt.c
+index 0f73621827ab..15ab07230960 100644
+--- a/drivers/watchdog/imgpdc_wdt.c
++++ b/drivers/watchdog/imgpdc_wdt.c
+@@ -316,6 +316,7 @@ static int pdc_wdt_remove(struct platform_device *pdev)
+ {
+ 	struct pdc_wdt_dev *pdc_wdt = platform_get_drvdata(pdev);
+ 
++	unregister_restart_handler(&pdc_wdt->restart_handler);
+ 	pdc_wdt_stop(&pdc_wdt->wdt_dev);
+ 	watchdog_unregister_device(&pdc_wdt->wdt_dev);
+ 	clk_disable_unprepare(pdc_wdt->wdt_clk);
+diff --git a/drivers/watchdog/sunxi_wdt.c b/drivers/watchdog/sunxi_wdt.c
+index a29afb37c48c..47bd8a14d01f 100644
+--- a/drivers/watchdog/sunxi_wdt.c
++++ b/drivers/watchdog/sunxi_wdt.c
+@@ -184,7 +184,7 @@ static int sunxi_wdt_start(struct watchdog_device *wdt_dev)
+ 	/* Set system reset function */
+ 	reg = readl(wdt_base + regs->wdt_cfg);
+ 	reg &= ~(regs->wdt_reset_mask);
+-	reg |= ~(regs->wdt_reset_val);
++	reg |= regs->wdt_reset_val;
+ 	writel(reg, wdt_base + regs->wdt_cfg);
+ 
+ 	/* Enable watchdog */
+diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
+index a1800c150839..08cb419eb4e6 100644
+--- a/drivers/xen/preempt.c
++++ b/drivers/xen/preempt.c
+@@ -31,7 +31,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
+ asmlinkage __visible void xen_maybe_preempt_hcall(void)
+ {
+ 	if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
+-		     && should_resched())) {
++		     && need_resched())) {
+ 		/*
+ 		 * Clear flag as we may be rescheduled on a different
+ 		 * cpu.
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 198243717da5..1170f8ce5e7f 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -1241,6 +1241,13 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
+ 				goto out_clear;
+ 			}
+ 			bd_set_size(bdev, (loff_t)bdev->bd_part->nr_sects << 9);
++			/*
++			 * If the partition is not aligned on a page
++			 * boundary, we can't do dax I/O to it.
++			 */
++			if ((bdev->bd_part->start_sect % (PAGE_SIZE / 512)) ||
++			    (bdev->bd_part->nr_sects % (PAGE_SIZE / 512)))
++				bdev->bd_inode->i_flags &= ~S_DAX;
+ 		}
+ 	} else {
+ 		if (bdev->bd_contains == bdev) {
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 02d05817cbdf..3fc4fec9b94e 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2798,7 +2798,8 @@ static int submit_extent_page(int rw, struct extent_io_tree *tree,
+ 			      bio_end_io_t end_io_func,
+ 			      int mirror_num,
+ 			      unsigned long prev_bio_flags,
+-			      unsigned long bio_flags)
++			      unsigned long bio_flags,
++			      bool force_bio_submit)
+ {
+ 	int ret = 0;
+ 	struct bio *bio;
+@@ -2816,6 +2817,7 @@ static int submit_extent_page(int rw, struct extent_io_tree *tree,
+ 			contig = bio_end_sector(bio) == sector;
+ 
+ 		if (prev_bio_flags != bio_flags || !contig ||
++		    force_bio_submit ||
+ 		    merge_bio(rw, tree, page, offset, page_size, bio, bio_flags) ||
+ 		    bio_add_page(bio, page, page_size, offset) < page_size) {
+ 			ret = submit_one_bio(rw, bio, mirror_num,
+@@ -2909,7 +2911,8 @@ static int __do_readpage(struct extent_io_tree *tree,
+ 			 get_extent_t *get_extent,
+ 			 struct extent_map **em_cached,
+ 			 struct bio **bio, int mirror_num,
+-			 unsigned long *bio_flags, int rw)
++			 unsigned long *bio_flags, int rw,
++			 u64 *prev_em_start)
+ {
+ 	struct inode *inode = page->mapping->host;
+ 	u64 start = page_offset(page);
+@@ -2957,6 +2960,7 @@ static int __do_readpage(struct extent_io_tree *tree,
+ 	}
+ 	while (cur <= end) {
+ 		unsigned long pnr = (last_byte >> PAGE_CACHE_SHIFT) + 1;
++		bool force_bio_submit = false;
+ 
+ 		if (cur >= last_byte) {
+ 			char *userpage;
+@@ -3007,6 +3011,49 @@ static int __do_readpage(struct extent_io_tree *tree,
+ 		block_start = em->block_start;
+ 		if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
+ 			block_start = EXTENT_MAP_HOLE;
++
++		/*
++		 * If we have a file range that points to a compressed extent
++		 * and it's followed by a consecutive file range that points to
++		 * to the same compressed extent (possibly with a different
++		 * offset and/or length, so it either points to the whole extent
++		 * or only part of it), we must make sure we do not submit a
++		 * single bio to populate the pages for the 2 ranges because
++		 * this makes the compressed extent read zero out the pages
++		 * belonging to the 2nd range. Imagine the following scenario:
++		 *
++		 *  File layout
++		 *  [0 - 8K]                     [8K - 24K]
++		 *    |                               |
++		 *    |                               |
++		 * points to extent X,         points to extent X,
++		 * offset 4K, length of 8K     offset 0, length 16K
++		 *
++		 * [extent X, compressed length = 4K uncompressed length = 16K]
++		 *
++		 * If the bio to read the compressed extent covers both ranges,
++		 * it will decompress extent X into the pages belonging to the
++		 * first range and then it will stop, zeroing out the remaining
++		 * pages that belong to the other range that points to extent X.
++		 * So here we make sure we submit 2 bios, one for the first
++		 * range and another one for the third range. Both will target
++		 * the same physical extent from disk, but we can't currently
++		 * make the compressed bio endio callback populate the pages
++		 * for both ranges because each compressed bio is tightly
++		 * coupled with a single extent map, and each range can have
++		 * an extent map with a different offset value relative to the
++		 * uncompressed data of our extent and different lengths. This
++		 * is a corner case so we prioritize correctness over
++		 * non-optimal behavior (submitting 2 bios for the same extent).
++		 */
++		if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) &&
++		    prev_em_start && *prev_em_start != (u64)-1 &&
++		    *prev_em_start != em->orig_start)
++			force_bio_submit = true;
++
++		if (prev_em_start)
++			*prev_em_start = em->orig_start;
++
+ 		free_extent_map(em);
+ 		em = NULL;
+ 
+@@ -3056,7 +3103,8 @@ static int __do_readpage(struct extent_io_tree *tree,
+ 					 bdev, bio, pnr,
+ 					 end_bio_extent_readpage, mirror_num,
+ 					 *bio_flags,
+-					 this_bio_flag);
++					 this_bio_flag,
++					 force_bio_submit);
+ 		if (!ret) {
+ 			nr++;
+ 			*bio_flags = this_bio_flag;
+@@ -3083,7 +3131,8 @@ static inline void __do_contiguous_readpages(struct extent_io_tree *tree,
+ 					     get_extent_t *get_extent,
+ 					     struct extent_map **em_cached,
+ 					     struct bio **bio, int mirror_num,
+-					     unsigned long *bio_flags, int rw)
++					     unsigned long *bio_flags, int rw,
++					     u64 *prev_em_start)
+ {
+ 	struct inode *inode;
+ 	struct btrfs_ordered_extent *ordered;
+@@ -3103,7 +3152,7 @@ static inline void __do_contiguous_readpages(struct extent_io_tree *tree,
+ 
+ 	for (index = 0; index < nr_pages; index++) {
+ 		__do_readpage(tree, pages[index], get_extent, em_cached, bio,
+-			      mirror_num, bio_flags, rw);
++			      mirror_num, bio_flags, rw, prev_em_start);
+ 		page_cache_release(pages[index]);
+ 	}
+ }
+@@ -3113,7 +3162,8 @@ static void __extent_readpages(struct extent_io_tree *tree,
+ 			       int nr_pages, get_extent_t *get_extent,
+ 			       struct extent_map **em_cached,
+ 			       struct bio **bio, int mirror_num,
+-			       unsigned long *bio_flags, int rw)
++			       unsigned long *bio_flags, int rw,
++			       u64 *prev_em_start)
+ {
+ 	u64 start = 0;
+ 	u64 end = 0;
+@@ -3134,7 +3184,7 @@ static void __extent_readpages(struct extent_io_tree *tree,
+ 						  index - first_index, start,
+ 						  end, get_extent, em_cached,
+ 						  bio, mirror_num, bio_flags,
+-						  rw);
++						  rw, prev_em_start);
+ 			start = page_start;
+ 			end = start + PAGE_CACHE_SIZE - 1;
+ 			first_index = index;
+@@ -3145,7 +3195,8 @@ static void __extent_readpages(struct extent_io_tree *tree,
+ 		__do_contiguous_readpages(tree, &pages[first_index],
+ 					  index - first_index, start,
+ 					  end, get_extent, em_cached, bio,
+-					  mirror_num, bio_flags, rw);
++					  mirror_num, bio_flags, rw,
++					  prev_em_start);
+ }
+ 
+ static int __extent_read_full_page(struct extent_io_tree *tree,
+@@ -3171,7 +3222,7 @@ static int __extent_read_full_page(struct extent_io_tree *tree,
+ 	}
+ 
+ 	ret = __do_readpage(tree, page, get_extent, NULL, bio, mirror_num,
+-			    bio_flags, rw);
++			    bio_flags, rw, NULL);
+ 	return ret;
+ }
+ 
+@@ -3197,7 +3248,7 @@ int extent_read_full_page_nolock(struct extent_io_tree *tree, struct page *page,
+ 	int ret;
+ 
+ 	ret = __do_readpage(tree, page, get_extent, NULL, &bio, mirror_num,
+-				      &bio_flags, READ);
++			    &bio_flags, READ, NULL);
+ 	if (bio)
+ 		ret = submit_one_bio(READ, bio, mirror_num, bio_flags);
+ 	return ret;
+@@ -3450,7 +3501,7 @@ static noinline_for_stack int __extent_writepage_io(struct inode *inode,
+ 						 sector, iosize, pg_offset,
+ 						 bdev, &epd->bio, max_nr,
+ 						 end_bio_extent_writepage,
+-						 0, 0, 0);
++						 0, 0, 0, false);
+ 			if (ret)
+ 				SetPageError(page);
+ 		}
+@@ -3752,7 +3803,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb,
+ 		ret = submit_extent_page(rw, tree, p, offset >> 9,
+ 					 PAGE_CACHE_SIZE, 0, bdev, &epd->bio,
+ 					 -1, end_bio_extent_buffer_writepage,
+-					 0, epd->bio_flags, bio_flags);
++					 0, epd->bio_flags, bio_flags, false);
+ 		epd->bio_flags = bio_flags;
+ 		if (ret) {
+ 			set_btree_ioerr(p);
+@@ -4156,6 +4207,7 @@ int extent_readpages(struct extent_io_tree *tree,
+ 	struct page *page;
+ 	struct extent_map *em_cached = NULL;
+ 	int nr = 0;
++	u64 prev_em_start = (u64)-1;
+ 
+ 	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
+ 		page = list_entry(pages->prev, struct page, lru);
+@@ -4172,12 +4224,12 @@ int extent_readpages(struct extent_io_tree *tree,
+ 		if (nr < ARRAY_SIZE(pagepool))
+ 			continue;
+ 		__extent_readpages(tree, pagepool, nr, get_extent, &em_cached,
+-				   &bio, 0, &bio_flags, READ);
++				   &bio, 0, &bio_flags, READ, &prev_em_start);
+ 		nr = 0;
+ 	}
+ 	if (nr)
+ 		__extent_readpages(tree, pagepool, nr, get_extent, &em_cached,
+-				   &bio, 0, &bio_flags, READ);
++				   &bio, 0, &bio_flags, READ, &prev_em_start);
+ 
+ 	if (em_cached)
+ 		free_extent_map(em_cached);
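
As background for the comment added to __do_readpage() above: two file ranges
end up pointing to the same compressed extent with different offsets typically
as the result of a clone/reflink operation. The following userspace sketch is
only illustrative (the path, the sizes and the assumption of a btrfs filesystem
mounted with -o compress are not part of the patch); it recreates the layout
from the comment by writing one compressible 16K extent and cloning part of it
to the start of the file. Reading both ranges in a single pass is what used to
produce the zero-filled pages that the force_bio_submit logic now avoids.

/*
 * Illustration only: recreate the two-ranges-sharing-one-compressed-extent
 * layout described in the __do_readpage() comment.  Assumes a btrfs
 * filesystem mounted with -o compress; path and sizes are hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/btrfs.h>

int main(void)
{
	char buf[16 * 1024];
	struct btrfs_ioctl_clone_range_args args;
	int fd = open("/mnt/btrfs/foo", O_RDWR | O_CREAT | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Compressible data at [8K, 24K) becomes compressed extent X. */
	memset(buf, 0xaa, sizeof(buf));
	pwrite(fd, buf, sizeof(buf), 8 * 1024);
	fsync(fd);

	/*
	 * Clone [12K, 20K) (offset 4K inside extent X) to [0, 8K): the first
	 * range now points to extent X at offset 4K, length 8K, while
	 * [8K, 24K) still points to extent X at offset 0, length 16K.
	 */
	memset(&args, 0, sizeof(args));
	args.src_fd = fd;
	args.src_offset = 12 * 1024;
	args.src_length = 8 * 1024;
	args.dest_offset = 0;
	if (ioctl(fd, BTRFS_IOC_CLONE_RANGE, &args))
		perror("BTRFS_IOC_CLONE_RANGE");

	close(fd);
	return 0;
}
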
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index e33dff356460..b54e63038b96 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -5051,7 +5051,8 @@ void btrfs_evict_inode(struct inode *inode)
+ 		goto no_delete;
+ 	}
+ 	/* do we really want it for ->i_nlink > 0 and zero btrfs_root_refs? */
+-	btrfs_wait_ordered_range(inode, 0, (u64)-1);
++	if (!special_file(inode->i_mode))
++		btrfs_wait_ordered_range(inode, 0, (u64)-1);
+ 
+ 	btrfs_free_io_failure_record(inode, 0, (u64)-1);
+ 
+diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c
+index aa0dc2573374..afa09fce8151 100644
+--- a/fs/cifs/cifsencrypt.c
++++ b/fs/cifs/cifsencrypt.c
+@@ -444,6 +444,48 @@ find_domain_name(struct cifs_ses *ses, const struct nls_table *nls_cp)
+ 	return 0;
+ }
+ 
++/* Server has provided av pairs/target info in the type 2 challenge
++ * packet and we have plucked it and stored within smb session.
++ * We parse that blob here to find the server given timestamp
++ * as part of ntlmv2 authentication (or local current time as
++ * default in case of failure)
++ */
++static __le64
++find_timestamp(struct cifs_ses *ses)
++{
++	unsigned int attrsize;
++	unsigned int type;
++	unsigned int onesize = sizeof(struct ntlmssp2_name);
++	unsigned char *blobptr;
++	unsigned char *blobend;
++	struct ntlmssp2_name *attrptr;
++
++	if (!ses->auth_key.len || !ses->auth_key.response)
++		return 0;
++
++	blobptr = ses->auth_key.response;
++	blobend = blobptr + ses->auth_key.len;
++
++	while (blobptr + onesize < blobend) {
++		attrptr = (struct ntlmssp2_name *) blobptr;
++		type = le16_to_cpu(attrptr->type);
++		if (type == NTLMSSP_AV_EOL)
++			break;
++		blobptr += 2; /* advance attr type */
++		attrsize = le16_to_cpu(attrptr->length);
++		blobptr += 2; /* advance attr size */
++		if (blobptr + attrsize > blobend)
++			break;
++		if (type == NTLMSSP_AV_TIMESTAMP) {
++			if (attrsize == sizeof(u64))
++				return *((__le64 *)blobptr);
++		}
++		blobptr += attrsize; /* advance attr value */
++	}
++
++	return cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME));
++}
++
+ static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
+ 			    const struct nls_table *nls_cp)
+ {
+@@ -641,6 +683,7 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
+ 	struct ntlmv2_resp *ntlmv2;
+ 	char ntlmv2_hash[16];
+ 	unsigned char *tiblob = NULL; /* target info blob */
++	__le64 rsp_timestamp;
+ 
+ 	if (ses->server->negflavor == CIFS_NEGFLAVOR_EXTENDED) {
+ 		if (!ses->domainName) {
+@@ -659,6 +702,12 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
+ 		}
+ 	}
+ 
++	/* Must be within 5 minutes of the server (or in range +/-2h
++	 * in case of Mac OS X), so simply carry over server timestamp
++	 * (as Windows 7 does)
++	 */
++	rsp_timestamp = find_timestamp(ses);
++
+ 	baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp);
+ 	tilen = ses->auth_key.len;
+ 	tiblob = ses->auth_key.response;
+@@ -675,8 +724,8 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
+ 			(ses->auth_key.response + CIFS_SESS_KEY_SIZE);
+ 	ntlmv2->blob_signature = cpu_to_le32(0x00000101);
+ 	ntlmv2->reserved = 0;
+-	/* Must be within 5 minutes of the server */
+-	ntlmv2->time = cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME));
++	ntlmv2->time = rsp_timestamp;
++
+ 	get_random_bytes(&ntlmv2->client_chal, sizeof(ntlmv2->client_chal));
+ 	ntlmv2->reserved2 = 0;
+ 
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index f621b44cb800..6b66dd5d1540 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2034,7 +2034,6 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs,
+ 	struct tcon_link *tlink = NULL;
+ 	struct cifs_tcon *tcon = NULL;
+ 	struct TCP_Server_Info *server;
+-	struct cifs_io_parms io_parms;
+ 
+ 	/*
+ 	 * To avoid spurious oplock breaks from server, in the case of
+@@ -2056,18 +2055,6 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs,
+ 			rc = -ENOSYS;
+ 		cifsFileInfo_put(open_file);
+ 		cifs_dbg(FYI, "SetFSize for attrs rc = %d\n", rc);
+-		if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) {
+-			unsigned int bytes_written;
+-
+-			io_parms.netfid = open_file->fid.netfid;
+-			io_parms.pid = open_file->pid;
+-			io_parms.tcon = tcon;
+-			io_parms.offset = 0;
+-			io_parms.length = attrs->ia_size;
+-			rc = CIFSSMBWrite(xid, &io_parms, &bytes_written,
+-					  NULL, NULL, 1);
+-			cifs_dbg(FYI, "Wrt seteof rc %d\n", rc);
+-		}
+ 	} else
+ 		rc = -EINVAL;
+ 
+@@ -2093,28 +2080,7 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs,
+ 	else
+ 		rc = -ENOSYS;
+ 	cifs_dbg(FYI, "SetEOF by path (setattrs) rc = %d\n", rc);
+-	if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) {
+-		__u16 netfid;
+-		int oplock = 0;
+ 
+-		rc = SMBLegacyOpen(xid, tcon, full_path, FILE_OPEN,
+-				   GENERIC_WRITE, CREATE_NOT_DIR, &netfid,
+-				   &oplock, NULL, cifs_sb->local_nls,
+-				   cifs_remap(cifs_sb));
+-		if (rc == 0) {
+-			unsigned int bytes_written;
+-
+-			io_parms.netfid = netfid;
+-			io_parms.pid = current->tgid;
+-			io_parms.tcon = tcon;
+-			io_parms.offset = 0;
+-			io_parms.length = attrs->ia_size;
+-			rc = CIFSSMBWrite(xid, &io_parms, &bytes_written, NULL,
+-					  NULL,  1);
+-			cifs_dbg(FYI, "wrt seteof rc %d\n", rc);
+-			CIFSSMBClose(xid, tcon, netfid);
+-		}
+-	}
+ 	if (tlink)
+ 		cifs_put_tlink(tlink);
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index df91bcf56d67..18da19f4f811 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -50,9 +50,13 @@ change_conf(struct TCP_Server_Info *server)
+ 		break;
+ 	default:
+ 		server->echoes = true;
+-		server->oplocks = true;
++		if (enable_oplocks) {
++			server->oplocks = true;
++			server->oplock_credits = 1;
++		} else
++			server->oplocks = false;
++
+ 		server->echo_credits = 1;
+-		server->oplock_credits = 1;
+ 	}
+ 	server->credits -= server->echo_credits + server->oplock_credits;
+ 	return 0;
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index b8b4f08ee094..60dd83164ed6 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -46,6 +46,7 @@
+ #include "smb2status.h"
+ #include "smb2glob.h"
+ #include "cifspdu.h"
++#include "cifs_spnego.h"
+ 
+ /*
+  *  The following table defines the expected "StructureSize" of SMB2 requests
+@@ -486,19 +487,15 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
+ 		cifs_dbg(FYI, "missing security blob on negprot\n");
+ 
+ 	rc = cifs_enable_signing(server, ses->sign);
+-#ifdef CONFIG_SMB2_ASN1  /* BB REMOVEME when updated asn1.c ready */
+ 	if (rc)
+ 		goto neg_exit;
+-	if (blob_length)
++	if (blob_length) {
+ 		rc = decode_negTokenInit(security_blob, blob_length, server);
+-	if (rc == 1)
+-		rc = 0;
+-	else if (rc == 0) {
+-		rc = -EIO;
+-		goto neg_exit;
++		if (rc == 1)
++			rc = 0;
++		else if (rc == 0)
++			rc = -EIO;
+ 	}
+-#endif
+-
+ neg_exit:
+ 	free_rsp_buf(resp_buftype, rsp);
+ 	return rc;
+@@ -592,7 +589,8 @@ SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses,
+ 	__le32 phase = NtLmNegotiate; /* NTLMSSP, if needed, is multistage */
+ 	struct TCP_Server_Info *server = ses->server;
+ 	u16 blob_length = 0;
+-	char *security_blob;
++	struct key *spnego_key = NULL;
++	char *security_blob = NULL;
+ 	char *ntlmssp_blob = NULL;
+ 	bool use_spnego = false; /* else use raw ntlmssp */
+ 
+@@ -620,7 +618,8 @@ SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses,
+ 	ses->ntlmssp->sesskey_per_smbsess = true;
+ 
+ 	/* FIXME: allow for other auth types besides NTLMSSP (e.g. krb5) */
+-	ses->sectype = RawNTLMSSP;
++	if (ses->sectype != Kerberos && ses->sectype != RawNTLMSSP)
++		ses->sectype = RawNTLMSSP;
+ 
+ ssetup_ntlmssp_authenticate:
+ 	if (phase == NtLmChallenge)
+@@ -649,7 +648,48 @@ ssetup_ntlmssp_authenticate:
+ 	iov[0].iov_base = (char *)req;
+ 	/* 4 for rfc1002 length field and 1 for pad */
+ 	iov[0].iov_len = get_rfc1002_length(req) + 4 - 1;
+-	if (phase == NtLmNegotiate) {
++
++	if (ses->sectype == Kerberos) {
++#ifdef CONFIG_CIFS_UPCALL
++		struct cifs_spnego_msg *msg;
++
++		spnego_key = cifs_get_spnego_key(ses);
++		if (IS_ERR(spnego_key)) {
++			rc = PTR_ERR(spnego_key);
++			spnego_key = NULL;
++			goto ssetup_exit;
++		}
++
++		msg = spnego_key->payload.data;
++		/*
++		 * check version field to make sure that cifs.upcall is
++		 * sending us a response in an expected form
++		 */
++		if (msg->version != CIFS_SPNEGO_UPCALL_VERSION) {
++			cifs_dbg(VFS,
++				  "bad cifs.upcall version. Expected %d got %d",
++				  CIFS_SPNEGO_UPCALL_VERSION, msg->version);
++			rc = -EKEYREJECTED;
++			goto ssetup_exit;
++		}
++		ses->auth_key.response = kmemdup(msg->data, msg->sesskey_len,
++						 GFP_KERNEL);
++		if (!ses->auth_key.response) {
++			cifs_dbg(VFS,
++				"Kerberos can't allocate (%u bytes) memory",
++				msg->sesskey_len);
++			rc = -ENOMEM;
++			goto ssetup_exit;
++		}
++		ses->auth_key.len = msg->sesskey_len;
++		blob_length = msg->secblob_len;
++		iov[1].iov_base = msg->data + msg->sesskey_len;
++		iov[1].iov_len = blob_length;
++#else
++		rc = -EOPNOTSUPP;
++		goto ssetup_exit;
++#endif /* CONFIG_CIFS_UPCALL */
++	} else if (phase == NtLmNegotiate) { /* if not krb5 must be ntlmssp */
+ 		ntlmssp_blob = kmalloc(sizeof(struct _NEGOTIATE_MESSAGE),
+ 				       GFP_KERNEL);
+ 		if (ntlmssp_blob == NULL) {
+@@ -672,6 +712,8 @@ ssetup_ntlmssp_authenticate:
+ 			/* with raw NTLMSSP we don't encapsulate in SPNEGO */
+ 			security_blob = ntlmssp_blob;
+ 		}
++		iov[1].iov_base = security_blob;
++		iov[1].iov_len = blob_length;
+ 	} else if (phase == NtLmAuthenticate) {
+ 		req->hdr.SessionId = ses->Suid;
+ 		ntlmssp_blob = kzalloc(sizeof(struct _NEGOTIATE_MESSAGE) + 500,
+@@ -699,6 +741,8 @@ ssetup_ntlmssp_authenticate:
+ 		} else {
+ 			security_blob = ntlmssp_blob;
+ 		}
++		iov[1].iov_base = security_blob;
++		iov[1].iov_len = blob_length;
+ 	} else {
+ 		cifs_dbg(VFS, "illegal ntlmssp phase\n");
+ 		rc = -EIO;
+@@ -710,8 +754,6 @@ ssetup_ntlmssp_authenticate:
+ 				cpu_to_le16(sizeof(struct smb2_sess_setup_req) -
+ 					    1 /* pad */ - 4 /* rfc1001 len */);
+ 	req->SecurityBufferLength = cpu_to_le16(blob_length);
+-	iov[1].iov_base = security_blob;
+-	iov[1].iov_len = blob_length;
+ 
+ 	inc_rfc1001_len(req, blob_length - 1 /* pad */);
+ 
+@@ -722,6 +764,7 @@ ssetup_ntlmssp_authenticate:
+ 
+ 	kfree(security_blob);
+ 	rsp = (struct smb2_sess_setup_rsp *)iov[0].iov_base;
++	ses->Suid = rsp->hdr.SessionId;
+ 	if (resp_buftype != CIFS_NO_BUFFER &&
+ 	    rsp->hdr.Status == STATUS_MORE_PROCESSING_REQUIRED) {
+ 		if (phase != NtLmNegotiate) {
+@@ -739,7 +782,6 @@ ssetup_ntlmssp_authenticate:
+ 		/* NTLMSSP Negotiate sent now processing challenge (response) */
+ 		phase = NtLmChallenge; /* process ntlmssp challenge */
+ 		rc = 0; /* MORE_PROCESSING is not an error here but expected */
+-		ses->Suid = rsp->hdr.SessionId;
+ 		rc = decode_ntlmssp_challenge(rsp->Buffer,
+ 				le16_to_cpu(rsp->SecurityBufferLength), ses);
+ 	}
+@@ -796,6 +838,10 @@ keygen_exit:
+ 		kfree(ses->auth_key.response);
+ 		ses->auth_key.response = NULL;
+ 	}
++	if (spnego_key) {
++		key_invalidate(spnego_key);
++		key_put(spnego_key);
++	}
+ 	kfree(ses->ntlmssp);
+ 
+ 	return rc;
+diff --git a/fs/dax.c b/fs/dax.c
+index a7f77e1fa18c..ef35a2014580 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -116,7 +116,8 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter,
+ 		unsigned len;
+ 		if (pos == max) {
+ 			unsigned blkbits = inode->i_blkbits;
+-			sector_t block = pos >> blkbits;
++			long page = pos >> PAGE_SHIFT;
++			sector_t block = page << (PAGE_SHIFT - blkbits);
+ 			unsigned first = pos - (block << blkbits);
+ 			long size;
+ 
+diff --git a/fs/dcache.c b/fs/dcache.c
+index 9b5fe503f6cb..e3b44ca75a1b 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -2926,6 +2926,13 @@ restart:
+ 
+ 		if (dentry == vfsmnt->mnt_root || IS_ROOT(dentry)) {
+ 			struct mount *parent = ACCESS_ONCE(mnt->mnt_parent);
++			/* Escaped? */
++			if (dentry != vfsmnt->mnt_root) {
++				bptr = *buffer;
++				blen = *buflen;
++				error = 3;
++				break;
++			}
+ 			/* Global root? */
+ 			if (mnt != parent) {
+ 				dentry = ACCESS_ONCE(mnt->mnt_mountpoint);
+diff --git a/fs/namei.c b/fs/namei.c
+index 1c2105ed20c5..36df4818a635 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -560,6 +560,24 @@ static int __nd_alloc_stack(struct nameidata *nd)
+ 	return 0;
+ }
+ 
++/**
++ * path_connected - Verify that a path->dentry is below path->mnt.mnt_root
++ * @path: nameidata to verify
++ *
++ * Rename can sometimes move a file or directory outside of a bind
++ * mount, path_connected allows those cases to be detected.
++ */
++static bool path_connected(const struct path *path)
++{
++	struct vfsmount *mnt = path->mnt;
++
++	/* Only bind mounts can have disconnected paths */
++	if (mnt->mnt_root == mnt->mnt_sb->s_root)
++		return true;
++
++	return is_subdir(path->dentry, mnt->mnt_root);
++}
++
+ static inline int nd_alloc_stack(struct nameidata *nd)
+ {
+ 	if (likely(nd->depth != EMBEDDED_LEVELS))
+@@ -1296,6 +1314,8 @@ static int follow_dotdot_rcu(struct nameidata *nd)
+ 				return -ECHILD;
+ 			nd->path.dentry = parent;
+ 			nd->seq = seq;
++			if (unlikely(!path_connected(&nd->path)))
++				return -ENOENT;
+ 			break;
+ 		} else {
+ 			struct mount *mnt = real_mount(nd->path.mnt);
+@@ -1396,7 +1416,7 @@ static void follow_mount(struct path *path)
+ 	}
+ }
+ 
+-static void follow_dotdot(struct nameidata *nd)
++static int follow_dotdot(struct nameidata *nd)
+ {
+ 	if (!nd->root.mnt)
+ 		set_root(nd);
+@@ -1412,6 +1432,8 @@ static void follow_dotdot(struct nameidata *nd)
+ 			/* rare case of legitimate dget_parent()... */
+ 			nd->path.dentry = dget_parent(nd->path.dentry);
+ 			dput(old);
++			if (unlikely(!path_connected(&nd->path)))
++				return -ENOENT;
+ 			break;
+ 		}
+ 		if (!follow_up(&nd->path))
+@@ -1419,6 +1441,7 @@ static void follow_dotdot(struct nameidata *nd)
+ 	}
+ 	follow_mount(&nd->path);
+ 	nd->inode = nd->path.dentry->d_inode;
++	return 0;
+ }
+ 
+ /*
+@@ -1535,8 +1558,6 @@ static int lookup_fast(struct nameidata *nd,
+ 		negative = d_is_negative(dentry);
+ 		if (read_seqcount_retry(&dentry->d_seq, seq))
+ 			return -ECHILD;
+-		if (negative)
+-			return -ENOENT;
+ 
+ 		/*
+ 		 * This sequence count validates that the parent had no
+@@ -1557,6 +1578,12 @@ static int lookup_fast(struct nameidata *nd,
+ 				goto unlazy;
+ 			}
+ 		}
++		/*
++		 * Note: do negative dentry check after revalidation in
++		 * case that drops it.
++		 */
++		if (negative)
++			return -ENOENT;
+ 		path->mnt = mnt;
+ 		path->dentry = dentry;
+ 		if (likely(__follow_mount_rcu(nd, path, inode, seqp)))
+@@ -1634,7 +1661,7 @@ static inline int handle_dots(struct nameidata *nd, int type)
+ 		if (nd->flags & LOOKUP_RCU) {
+ 			return follow_dotdot_rcu(nd);
+ 		} else
+-			follow_dotdot(nd);
++			return follow_dotdot(nd);
+ 	}
+ 	return 0;
+ }
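
The path_connected() helper introduced above exists because a rename() in the
underlying filesystem can move a directory out of a bind mount, after which
".." would otherwise keep walking above the mount root. A hedged illustration
of that escape follows; the paths are hypothetical, error handling is minimal,
mount(2) needs root, and none of it is taken from the patch. With the fix
applied, the final chdir("..") fails with ENOENT instead of silently leaving
the bind mount.

/*
 * Illustration only: move a directory out of a bind mount with rename()
 * and then try to walk ".." from inside it.
 */
#include <limits.h>
#include <stdio.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	char cwd[PATH_MAX];

	mkdir("/tmp/base", 0755);
	mkdir("/tmp/base/sub", 0755);
	mkdir("/tmp/base/sub/dir", 0755);
	mkdir("/tmp/bindmnt", 0755);

	/* Expose only the subdirectory through a bind mount. */
	if (mount("/tmp/base/sub", "/tmp/bindmnt", NULL, MS_BIND, NULL)) {
		perror("mount");
		return 1;
	}

	/* Enter the directory via the bind mount... */
	chdir("/tmp/bindmnt/dir");

	/* ...then move it outside the bind mount in the underlying tree. */
	rename("/tmp/base/sub/dir", "/tmp/base/dir");

	/*
	 * Our cwd is no longer below the bind mount's root.  Without the
	 * path_connected() check, ".." keeps walking up the underlying
	 * tree; with it, the lookup is refused.
	 */
	if (chdir("..") != 0)
		perror("chdir(..)");
	else if (getcwd(cwd, sizeof(cwd)))
		printf("now in %s\n", cwd);

	umount("/tmp/bindmnt");
	return 0;
}
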
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 029d688a969f..c56886829708 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -113,7 +113,8 @@ out:
+ 	return status;
+ }
+ 
+-static int nfs_delegation_claim_opens(struct inode *inode, const nfs4_stateid *stateid)
++static int nfs_delegation_claim_opens(struct inode *inode,
++		const nfs4_stateid *stateid, fmode_t type)
+ {
+ 	struct nfs_inode *nfsi = NFS_I(inode);
+ 	struct nfs_open_context *ctx;
+@@ -140,7 +141,7 @@ again:
+ 		/* Block nfs4_proc_unlck */
+ 		mutex_lock(&sp->so_delegreturn_mutex);
+ 		seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
+-		err = nfs4_open_delegation_recall(ctx, state, stateid);
++		err = nfs4_open_delegation_recall(ctx, state, stateid, type);
+ 		if (!err)
+ 			err = nfs_delegation_claim_locks(ctx, state, stateid);
+ 		if (!err && read_seqcount_retry(&sp->so_reclaim_seqcount, seq))
+@@ -411,7 +412,8 @@ static int nfs_end_delegation_return(struct inode *inode, struct nfs_delegation
+ 	do {
+ 		if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags))
+ 			break;
+-		err = nfs_delegation_claim_opens(inode, &delegation->stateid);
++		err = nfs_delegation_claim_opens(inode, &delegation->stateid,
++				delegation->type);
+ 		if (!issync || err != -EAGAIN)
+ 			break;
+ 		/*
+diff --git a/fs/nfs/delegation.h b/fs/nfs/delegation.h
+index e3c20a3ccc93..785c8525b576 100644
+--- a/fs/nfs/delegation.h
++++ b/fs/nfs/delegation.h
+@@ -54,7 +54,7 @@ void nfs_delegation_reap_unclaimed(struct nfs_client *clp);
+ 
+ /* NFSv4 delegation-related procedures */
+ int nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid, int issync);
+-int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state *state, const nfs4_stateid *stateid);
++int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state *state, const nfs4_stateid *stateid, fmode_t type);
+ int nfs4_lock_delegation_recall(struct file_lock *fl, struct nfs4_state *state, const nfs4_stateid *stateid);
+ bool nfs4_copy_delegation_stateid(nfs4_stateid *dst, struct inode *inode, fmode_t flags);
+ 
+diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
+index b34f2e228601..02ec07973bc4 100644
+--- a/fs/nfs/filelayout/filelayout.c
++++ b/fs/nfs/filelayout/filelayout.c
+@@ -629,23 +629,18 @@ out_put:
+ 	goto out;
+ }
+ 
+-static void filelayout_free_fh_array(struct nfs4_filelayout_segment *fl)
++static void _filelayout_free_lseg(struct nfs4_filelayout_segment *fl)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < fl->num_fh; i++) {
+-		if (!fl->fh_array[i])
+-			break;
+-		kfree(fl->fh_array[i]);
++	if (fl->fh_array) {
++		for (i = 0; i < fl->num_fh; i++) {
++			if (!fl->fh_array[i])
++				break;
++			kfree(fl->fh_array[i]);
++		}
++		kfree(fl->fh_array);
+ 	}
+-	kfree(fl->fh_array);
+-	fl->fh_array = NULL;
+-}
+-
+-static void
+-_filelayout_free_lseg(struct nfs4_filelayout_segment *fl)
+-{
+-	filelayout_free_fh_array(fl);
+ 	kfree(fl);
+ }
+ 
+@@ -716,21 +711,21 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
+ 		/* Do we want to use a mempool here? */
+ 		fl->fh_array[i] = kmalloc(sizeof(struct nfs_fh), gfp_flags);
+ 		if (!fl->fh_array[i])
+-			goto out_err_free;
++			goto out_err;
+ 
+ 		p = xdr_inline_decode(&stream, 4);
+ 		if (unlikely(!p))
+-			goto out_err_free;
++			goto out_err;
+ 		fl->fh_array[i]->size = be32_to_cpup(p++);
+ 		if (sizeof(struct nfs_fh) < fl->fh_array[i]->size) {
+ 			printk(KERN_ERR "NFS: Too big fh %d received %d\n",
+ 			       i, fl->fh_array[i]->size);
+-			goto out_err_free;
++			goto out_err;
+ 		}
+ 
+ 		p = xdr_inline_decode(&stream, fl->fh_array[i]->size);
+ 		if (unlikely(!p))
+-			goto out_err_free;
++			goto out_err;
+ 		memcpy(fl->fh_array[i]->data, p, fl->fh_array[i]->size);
+ 		dprintk("DEBUG: %s: fh len %d\n", __func__,
+ 			fl->fh_array[i]->size);
+@@ -739,8 +734,6 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
+ 	__free_page(scratch);
+ 	return 0;
+ 
+-out_err_free:
+-	filelayout_free_fh_array(fl);
+ out_err:
+ 	__free_page(scratch);
+ 	return -EIO;
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index d731bbf974aa..0f020e4d8421 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -175,10 +175,12 @@ loff_t nfs42_proc_llseek(struct file *filep, loff_t offset, int whence)
+ {
+ 	struct nfs_server *server = NFS_SERVER(file_inode(filep));
+ 	struct nfs4_exception exception = { };
+-	int err;
++	loff_t err;
+ 
+ 	do {
+ 		err = _nfs42_proc_llseek(filep, offset, whence);
++		if (err >= 0)
++			break;
+ 		if (err == -ENOTSUPP)
+ 			return -EOPNOTSUPP;
+ 		err = nfs4_handle_exception(server, err, &exception);
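
The nfs42_proc_llseek() hunk above changes err from int to loff_t because the
helper returns the resulting file offset on success; keeping that result in an
int truncates offsets beyond 2 GiB, and the retry loop now also stops as soon
as a non-negative offset comes back instead of handing it to
nfs4_handle_exception(). A standalone illustration of the truncation (the
values are made up):

#include <stdio.h>

int main(void)
{
	/* loff_t is a signed 64-bit type; 'long long' stands in for it here. */
	long long offset = 6LL << 30;	/* e.g. a SEEK_HOLE result at 6 GiB */
	int err = (int)offset;		/* what the old 'int err' would keep */

	printf("64-bit offset: %lld, stored in an int: %d\n", offset, err);
	return 0;
}
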
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 73c8204ad463..d2daacad3568 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1127,6 +1127,21 @@ static int nfs4_wait_for_completion_rpc_task(struct rpc_task *task)
+ 	return ret;
+ }
+ 
++static bool nfs4_mode_match_open_stateid(struct nfs4_state *state,
++		fmode_t fmode)
++{
++	switch(fmode & (FMODE_READ|FMODE_WRITE)) {
++	case FMODE_READ|FMODE_WRITE:
++		return state->n_rdwr != 0;
++	case FMODE_WRITE:
++		return state->n_wronly != 0;
++	case FMODE_READ:
++		return state->n_rdonly != 0;
++	}
++	WARN_ON_ONCE(1);
++	return false;
++}
++
+ static int can_open_cached(struct nfs4_state *state, fmode_t mode, int open_mode)
+ {
+ 	int ret = 0;
+@@ -1561,17 +1576,13 @@ static struct nfs4_opendata *nfs4_open_recoverdata_alloc(struct nfs_open_context
+ 	return opendata;
+ }
+ 
+-static int nfs4_open_recover_helper(struct nfs4_opendata *opendata, fmode_t fmode, struct nfs4_state **res)
++static int nfs4_open_recover_helper(struct nfs4_opendata *opendata,
++		fmode_t fmode)
+ {
+ 	struct nfs4_state *newstate;
+ 	int ret;
+ 
+-	if ((opendata->o_arg.claim == NFS4_OPEN_CLAIM_DELEGATE_CUR ||
+-	     opendata->o_arg.claim == NFS4_OPEN_CLAIM_DELEG_CUR_FH) &&
+-	    (opendata->o_arg.u.delegation_type & fmode) != fmode)
+-		/* This mode can't have been delegated, so we must have
+-		 * a valid open_stateid to cover it - not need to reclaim.
+-		 */
++	if (!nfs4_mode_match_open_stateid(opendata->state, fmode))
+ 		return 0;
+ 	opendata->o_arg.open_flags = 0;
+ 	opendata->o_arg.fmode = fmode;
+@@ -1587,14 +1598,14 @@ static int nfs4_open_recover_helper(struct nfs4_opendata *opendata, fmode_t fmod
+ 	newstate = nfs4_opendata_to_nfs4_state(opendata);
+ 	if (IS_ERR(newstate))
+ 		return PTR_ERR(newstate);
++	if (newstate != opendata->state)
++		ret = -ESTALE;
+ 	nfs4_close_state(newstate, fmode);
+-	*res = newstate;
+-	return 0;
++	return ret;
+ }
+ 
+ static int nfs4_open_recover(struct nfs4_opendata *opendata, struct nfs4_state *state)
+ {
+-	struct nfs4_state *newstate;
+ 	int ret;
+ 
+ 	/* Don't trigger recovery in nfs_test_and_clear_all_open_stateid */
+@@ -1605,27 +1616,15 @@ static int nfs4_open_recover(struct nfs4_opendata *opendata, struct nfs4_state *
+ 	clear_bit(NFS_DELEGATED_STATE, &state->flags);
+ 	clear_bit(NFS_OPEN_STATE, &state->flags);
+ 	smp_rmb();
+-	if (state->n_rdwr != 0) {
+-		ret = nfs4_open_recover_helper(opendata, FMODE_READ|FMODE_WRITE, &newstate);
+-		if (ret != 0)
+-			return ret;
+-		if (newstate != state)
+-			return -ESTALE;
+-	}
+-	if (state->n_wronly != 0) {
+-		ret = nfs4_open_recover_helper(opendata, FMODE_WRITE, &newstate);
+-		if (ret != 0)
+-			return ret;
+-		if (newstate != state)
+-			return -ESTALE;
+-	}
+-	if (state->n_rdonly != 0) {
+-		ret = nfs4_open_recover_helper(opendata, FMODE_READ, &newstate);
+-		if (ret != 0)
+-			return ret;
+-		if (newstate != state)
+-			return -ESTALE;
+-	}
++	ret = nfs4_open_recover_helper(opendata, FMODE_READ|FMODE_WRITE);
++	if (ret != 0)
++		return ret;
++	ret = nfs4_open_recover_helper(opendata, FMODE_WRITE);
++	if (ret != 0)
++		return ret;
++	ret = nfs4_open_recover_helper(opendata, FMODE_READ);
++	if (ret != 0)
++		return ret;
+ 	/*
+ 	 * We may have performed cached opens for all three recoveries.
+ 	 * Check if we need to update the current stateid.
+@@ -1749,18 +1748,32 @@ static int nfs4_handle_delegation_recall_error(struct nfs_server *server, struct
+ 	return err;
+ }
+ 
+-int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state *state, const nfs4_stateid *stateid)
++int nfs4_open_delegation_recall(struct nfs_open_context *ctx,
++		struct nfs4_state *state, const nfs4_stateid *stateid,
++		fmode_t type)
+ {
+ 	struct nfs_server *server = NFS_SERVER(state->inode);
+ 	struct nfs4_opendata *opendata;
+-	int err;
++	int err = 0;
+ 
+ 	opendata = nfs4_open_recoverdata_alloc(ctx, state,
+ 			NFS4_OPEN_CLAIM_DELEG_CUR_FH);
+ 	if (IS_ERR(opendata))
+ 		return PTR_ERR(opendata);
+ 	nfs4_stateid_copy(&opendata->o_arg.u.delegation, stateid);
+-	err = nfs4_open_recover(opendata, state);
++	clear_bit(NFS_DELEGATED_STATE, &state->flags);
++	switch (type & (FMODE_READ|FMODE_WRITE)) {
++	case FMODE_READ|FMODE_WRITE:
++	case FMODE_WRITE:
++		err = nfs4_open_recover_helper(opendata, FMODE_READ|FMODE_WRITE);
++		if (err)
++			break;
++		err = nfs4_open_recover_helper(opendata, FMODE_WRITE);
++		if (err)
++			break;
++	case FMODE_READ:
++		err = nfs4_open_recover_helper(opendata, FMODE_READ);
++	}
+ 	nfs4_opendata_put(opendata);
+ 	return nfs4_handle_delegation_recall_error(server, state, stateid, err);
+ }
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 7c5718ba625e..fe3ddd20ff89 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -508,7 +508,7 @@ size_t nfs_generic_pg_test(struct nfs_pageio_descriptor *desc,
+ 	 * for it without upsetting the slab allocator.
+ 	 */
+ 	if (((mirror->pg_count + req->wb_bytes) >> PAGE_SHIFT) *
+-			sizeof(struct page) > PAGE_SIZE)
++			sizeof(struct page *) > PAGE_SIZE)
+ 		return 0;
+ 
+ 	return min(mirror->pg_bsize - mirror->pg_count, (size_t)req->wb_bytes);
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index ae0ff7a11b40..01b8cc8e8cfc 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -72,6 +72,9 @@ void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio)
+ {
+ 	struct nfs_pgio_mirror *mirror;
+ 
++	if (pgio->pg_ops && pgio->pg_ops->pg_cleanup)
++		pgio->pg_ops->pg_cleanup(pgio);
++
+ 	pgio->pg_ops = &nfs_pgio_rw_ops;
+ 
+ 	/* read path should never have more than one mirror */
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index fdee9270ca15..b45b465bc205 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1223,7 +1223,7 @@ static int nfs_can_extend_write(struct file *file, struct page *page, struct ino
+ 		return 1;
+ 	if (!flctx || (list_empty_careful(&flctx->flc_flock) &&
+ 		       list_empty_careful(&flctx->flc_posix)))
+-		return 0;
++		return 1;
+ 
+ 	/* Check to see if there are whole file write locks */
+ 	ret = 0;
+@@ -1351,6 +1351,9 @@ void nfs_pageio_reset_write_mds(struct nfs_pageio_descriptor *pgio)
+ {
+ 	struct nfs_pgio_mirror *mirror;
+ 
++	if (pgio->pg_ops && pgio->pg_ops->pg_cleanup)
++		pgio->pg_ops->pg_cleanup(pgio);
++
+ 	pgio->pg_ops = &nfs_pgio_rw_ops;
+ 
+ 	nfs_pageio_stop_mirroring(pgio);
+diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
+index fdf4b41d0609..482cfd34472d 100644
+--- a/fs/ocfs2/dlm/dlmmaster.c
++++ b/fs/ocfs2/dlm/dlmmaster.c
+@@ -1439,6 +1439,7 @@ int dlm_master_request_handler(struct o2net_msg *msg, u32 len, void *data,
+ 	int found, ret;
+ 	int set_maybe;
+ 	int dispatch_assert = 0;
++	int dispatched = 0;
+ 
+ 	if (!dlm_grab(dlm))
+ 		return DLM_MASTER_RESP_NO;
+@@ -1658,15 +1659,18 @@ send_response:
+ 			mlog(ML_ERROR, "failed to dispatch assert master work\n");
+ 			response = DLM_MASTER_RESP_ERROR;
+ 			dlm_lockres_put(res);
+-		} else
++		} else {
++			dispatched = 1;
+ 			__dlm_lockres_grab_inflight_worker(dlm, res);
++		}
+ 		spin_unlock(&res->spinlock);
+ 	} else {
+ 		if (res)
+ 			dlm_lockres_put(res);
+ 	}
+ 
+-	dlm_put(dlm);
++	if (!dispatched)
++		dlm_put(dlm);
+ 	return response;
+ }
+ 
+@@ -2090,7 +2094,6 @@ int dlm_dispatch_assert_master(struct dlm_ctxt *dlm,
+ 
+ 
+ 	/* queue up work for dlm_assert_master_worker */
+-	dlm_grab(dlm);  /* get an extra ref for the work item */
+ 	dlm_init_work_item(dlm, item, dlm_assert_master_worker, NULL);
+ 	item->u.am.lockres = res; /* already have a ref */
+ 	/* can optionally ignore node numbers higher than this node */
+diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
+index ce12e0b1a31f..3d90ad7ff91f 100644
+--- a/fs/ocfs2/dlm/dlmrecovery.c
++++ b/fs/ocfs2/dlm/dlmrecovery.c
+@@ -1694,6 +1694,7 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
+ 	unsigned int hash;
+ 	int master = DLM_LOCK_RES_OWNER_UNKNOWN;
+ 	u32 flags = DLM_ASSERT_MASTER_REQUERY;
++	int dispatched = 0;
+ 
+ 	if (!dlm_grab(dlm)) {
+ 		/* since the domain has gone away on this
+@@ -1719,8 +1720,10 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
+ 				dlm_put(dlm);
+ 				/* sender will take care of this and retry */
+ 				return ret;
+-			} else
++			} else {
++				dispatched = 1;
+ 				__dlm_lockres_grab_inflight_worker(dlm, res);
++			}
+ 			spin_unlock(&res->spinlock);
+ 		} else {
+ 			/* put.. incase we are not the master */
+@@ -1730,7 +1733,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
+ 	}
+ 	spin_unlock(&dlm->spinlock);
+ 
+-	dlm_put(dlm);
++	if (!dispatched)
++		dlm_put(dlm);
+ 	return master;
+ }
+ 
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 96f3448b6eb4..fd65b3f1923c 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -652,11 +652,8 @@ int ubifs_init_security(struct inode *dentry, struct inode *inode,
+ {
+ 	int err;
+ 
+-	mutex_lock(&inode->i_mutex);
+ 	err = security_inode_init_security(inode, dentry, qstr,
+ 					   &init_xattrs, 0);
+-	mutex_unlock(&inode->i_mutex);
+-
+ 	if (err) {
+ 		struct ubifs_info *c = dentry->i_sb->s_fs_info;
+ 		ubifs_err(c, "cannot initialize security for inode %lu, error %d",
+diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
+index d0a7a4753db2..0bec580a4885 100644
+--- a/include/asm-generic/preempt.h
++++ b/include/asm-generic/preempt.h
+@@ -71,9 +71,10 @@ static __always_inline bool __preempt_count_dec_and_test(void)
+ /*
+  * Returns true when we need to resched and can (barring IRQ state).
+  */
+-static __always_inline bool should_resched(void)
++static __always_inline bool should_resched(int preempt_offset)
+ {
+-	return unlikely(!preempt_count() && tif_need_resched());
++	return unlikely(preempt_count() == preempt_offset &&
++			tif_need_resched());
+ }
+ 
+ #ifdef CONFIG_PREEMPT
+diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
+index 83bfb87f5bf1..e2aadbc7151f 100644
+--- a/include/asm-generic/qspinlock.h
++++ b/include/asm-generic/qspinlock.h
+@@ -111,8 +111,8 @@ static inline void queued_spin_unlock_wait(struct qspinlock *lock)
+ 		cpu_relax();
+ }
+ 
+-#ifndef virt_queued_spin_lock
+-static __always_inline bool virt_queued_spin_lock(struct qspinlock *lock)
++#ifndef virt_spin_lock
++static __always_inline bool virt_spin_lock(struct qspinlock *lock)
+ {
+ 	return false;
+ }
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 93755a629299..430c876ad717 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -463,31 +463,8 @@ struct cgroup_subsys {
+ 	unsigned int depends_on;
+ };
+ 
+-extern struct percpu_rw_semaphore cgroup_threadgroup_rwsem;
+-
+-/**
+- * cgroup_threadgroup_change_begin - threadgroup exclusion for cgroups
+- * @tsk: target task
+- *
+- * Called from threadgroup_change_begin() and allows cgroup operations to
+- * synchronize against threadgroup changes using a percpu_rw_semaphore.
+- */
+-static inline void cgroup_threadgroup_change_begin(struct task_struct *tsk)
+-{
+-	percpu_down_read(&cgroup_threadgroup_rwsem);
+-}
+-
+-/**
+- * cgroup_threadgroup_change_end - threadgroup exclusion for cgroups
+- * @tsk: target task
+- *
+- * Called from threadgroup_change_end().  Counterpart of
+- * cgroup_threadcgroup_change_begin().
+- */
+-static inline void cgroup_threadgroup_change_end(struct task_struct *tsk)
+-{
+-	percpu_up_read(&cgroup_threadgroup_rwsem);
+-}
++void cgroup_threadgroup_change_begin(struct task_struct *tsk);
++void cgroup_threadgroup_change_end(struct task_struct *tsk);
+ 
+ #else	/* CONFIG_CGROUPS */
+ 
+diff --git a/include/linux/init_task.h b/include/linux/init_task.h
+index e8493fee8160..bb9b075f0eb0 100644
+--- a/include/linux/init_task.h
++++ b/include/linux/init_task.h
+@@ -25,6 +25,13 @@
+ extern struct files_struct init_files;
+ extern struct fs_struct init_fs;
+ 
++#ifdef CONFIG_CGROUPS
++#define INIT_GROUP_RWSEM(sig)						\
++	.group_rwsem = __RWSEM_INITIALIZER(sig.group_rwsem),
++#else
++#define INIT_GROUP_RWSEM(sig)
++#endif
++
+ #ifdef CONFIG_CPUSETS
+ #define INIT_CPUSET_SEQ(tsk)							\
+ 	.mems_allowed_seq = SEQCNT_ZERO(tsk.mems_allowed_seq),
+@@ -48,6 +55,7 @@ extern struct fs_struct init_fs;
+ 	},								\
+ 	.cred_guard_mutex =						\
+ 		 __MUTEX_INITIALIZER(sig.cred_guard_mutex),		\
++	INIT_GROUP_RWSEM(sig)						\
+ }
+ 
+ extern struct nsproxy init_nsproxy;
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index bf6f117fcf4d..2b05068f5878 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -916,6 +916,27 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
+ #endif
+ }
+ 
++#ifdef CONFIG_MEMCG
++static inline struct mem_cgroup *page_memcg(struct page *page)
++{
++	return page->mem_cgroup;
++}
++
++static inline void set_page_memcg(struct page *page, struct mem_cgroup *memcg)
++{
++	page->mem_cgroup = memcg;
++}
++#else
++static inline struct mem_cgroup *page_memcg(struct page *page)
++{
++	return NULL;
++}
++
++static inline void set_page_memcg(struct page *page, struct mem_cgroup *memcg)
++{
++}
++#endif
++
+ /*
+  * Some inline functions in vmstat.h depend on page_zone()
+  */
+diff --git a/include/linux/preempt.h b/include/linux/preempt.h
+index 84991f185173..bea8dd8ff5e0 100644
+--- a/include/linux/preempt.h
++++ b/include/linux/preempt.h
+@@ -84,13 +84,21 @@
+  */
+ #define in_nmi()	(preempt_count() & NMI_MASK)
+ 
++/*
++ * The preempt_count offset after preempt_disable();
++ */
+ #if defined(CONFIG_PREEMPT_COUNT)
+-# define PREEMPT_DISABLE_OFFSET 1
++# define PREEMPT_DISABLE_OFFSET	PREEMPT_OFFSET
+ #else
+-# define PREEMPT_DISABLE_OFFSET 0
++# define PREEMPT_DISABLE_OFFSET	0
+ #endif
+ 
+ /*
++ * The preempt_count offset after spin_lock()
++ */
++#define PREEMPT_LOCK_OFFSET	PREEMPT_DISABLE_OFFSET
++
++/*
+  * The preempt_count offset needed for things like:
+  *
+  *  spin_lock_bh()
+@@ -103,7 +111,7 @@
+  *
+  * Work as expected.
+  */
+-#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_DISABLE_OFFSET)
++#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_LOCK_OFFSET)
+ 
+ /*
+  * Are we running in atomic context?  WARNING: this macro cannot
+@@ -124,7 +132,8 @@
+ #if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_PREEMPT_TRACER)
+ extern void preempt_count_add(int val);
+ extern void preempt_count_sub(int val);
+-#define preempt_count_dec_and_test() ({ preempt_count_sub(1); should_resched(); })
++#define preempt_count_dec_and_test() \
++	({ preempt_count_sub(1); should_resched(0); })
+ #else
+ #define preempt_count_add(val)	__preempt_count_add(val)
+ #define preempt_count_sub(val)	__preempt_count_sub(val)
+@@ -184,7 +193,7 @@ do { \
+ 
+ #define preempt_check_resched() \
+ do { \
+-	if (should_resched()) \
++	if (should_resched(0)) \
+ 		__preempt_schedule(); \
+ } while (0)
+ 
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 04b5ada460b4..bfca8aa215d1 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -754,6 +754,18 @@ struct signal_struct {
+ 	unsigned audit_tty_log_passwd;
+ 	struct tty_audit_buf *tty_audit_buf;
+ #endif
++#ifdef CONFIG_CGROUPS
++	/*
++	 * group_rwsem prevents new tasks from entering the threadgroup and
++	 * member tasks from exiting, more specifically, setting of
++	 * PF_EXITING.  fork and exit paths are protected with this rwsem
++	 * using threadgroup_change_begin/end().  Users which require
++	 * threadgroup to remain stable should use threadgroup_[un]lock()
++	 * which also takes care of exec path.  Currently, cgroup is the
++	 * only user.
++	 */
++	struct rw_semaphore group_rwsem;
++#endif
+ 
+ 	oom_flags_t oom_flags;
+ 	short oom_score_adj;		/* OOM kill score adjustment */
+@@ -2897,12 +2909,6 @@ extern int _cond_resched(void);
+ 
+ extern int __cond_resched_lock(spinlock_t *lock);
+ 
+-#ifdef CONFIG_PREEMPT_COUNT
+-#define PREEMPT_LOCK_OFFSET	PREEMPT_OFFSET
+-#else
+-#define PREEMPT_LOCK_OFFSET	0
+-#endif
+-
+ #define cond_resched_lock(lock) ({				\
+ 	___might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);\
+ 	__cond_resched_lock(lock);				\
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 79d85ddf8093..2f4c1f7aa7db 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -946,7 +946,7 @@ static inline int security_task_prctl(int option, unsigned long arg2,
+ 				      unsigned long arg4,
+ 				      unsigned long arg5)
+ {
+-	return cap_task_prctl(option, arg2, arg3, arg3, arg5);
++	return cap_task_prctl(option, arg2, arg3, arg4, arg5);
+ }
+ 
+ static inline void security_task_to_inode(struct task_struct *p, struct inode *inode)
+diff --git a/include/net/netfilter/br_netfilter.h b/include/net/netfilter/br_netfilter.h
+index bab824bde92c..d4c6b5f30acd 100644
+--- a/include/net/netfilter/br_netfilter.h
++++ b/include/net/netfilter/br_netfilter.h
+@@ -59,7 +59,7 @@ static inline unsigned int
+ br_nf_pre_routing_ipv6(const struct nf_hook_ops *ops, struct sk_buff *skb,
+ 		       const struct nf_hook_state *state)
+ {
+-	return NF_DROP;
++	return NF_ACCEPT;
+ }
+ #endif
+ 
+diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
+index 37cd3911d5c5..4023c4ce260f 100644
+--- a/include/net/netfilter/nf_conntrack.h
++++ b/include/net/netfilter/nf_conntrack.h
+@@ -292,6 +292,7 @@ extern unsigned int nf_conntrack_hash_rnd;
+ void init_nf_conntrack_hash_rnd(void);
+ 
+ struct nf_conn *nf_ct_tmpl_alloc(struct net *net, u16 zone, gfp_t flags);
++void nf_ct_tmpl_free(struct nf_conn *tmpl);
+ 
+ #define NF_CT_STAT_INC(net, count)	  __this_cpu_inc((net)->ct.stat->count)
+ #define NF_CT_STAT_INC_ATOMIC(net, count) this_cpu_inc((net)->ct.stat->count)
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 2a246680a6c3..aa8bee72c9d3 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -125,7 +125,7 @@ static inline enum nft_data_types nft_dreg_to_type(enum nft_registers reg)
+ 
+ static inline enum nft_registers nft_type_to_reg(enum nft_data_types type)
+ {
+-	return type == NFT_DATA_VERDICT ? NFT_REG_VERDICT : NFT_REG_1;
++	return type == NFT_DATA_VERDICT ? NFT_REG_VERDICT : NFT_REG_1 * NFT_REG_SIZE / NFT_REG32_SIZE;
+ }
+ 
+ unsigned int nft_parse_register(const struct nlattr *attr);
+diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
+index 0aedbb2c10e0..7e7f8875ac32 100644
+--- a/include/target/iscsi/iscsi_target_core.h
++++ b/include/target/iscsi/iscsi_target_core.h
+@@ -776,7 +776,6 @@ struct iscsi_np {
+ 	enum iscsi_timer_flags_table np_login_timer_flags;
+ 	u32			np_exports;
+ 	enum np_flags_table	np_flags;
+-	unsigned char		np_ip[IPV6_ADDRESS_SPACE];
+ 	u16			np_port;
+ 	spinlock_t		np_thread_lock;
+ 	struct completion	np_restart_comp;
+diff --git a/include/xen/interface/sched.h b/include/xen/interface/sched.h
+index 9ce083960a25..f18490985fc8 100644
+--- a/include/xen/interface/sched.h
++++ b/include/xen/interface/sched.h
+@@ -107,5 +107,13 @@ struct sched_watchdog {
+ #define SHUTDOWN_suspend    2  /* Clean up, save suspend info, kill.         */
+ #define SHUTDOWN_crash      3  /* Tell controller we've crashed.             */
+ #define SHUTDOWN_watchdog   4  /* Restart because watchdog time expired.     */
++/*
++ * Domain asked to perform 'soft reset' for it. The expected behavior is to
++ * reset internal Xen state for the domain returning it to the point where it
++ * was created but leaving the domain's memory contents and vCPU contexts
++ * intact. This will allow the domain to start over and set up all Xen specific
++ * interfaces again.
++ */
++#define SHUTDOWN_soft_reset 5
+ 
+ #endif /* __XEN_PUBLIC_SCHED_H__ */
+diff --git a/ipc/msg.c b/ipc/msg.c
+index 66c4f567eb73..1471db9a7e61 100644
+--- a/ipc/msg.c
++++ b/ipc/msg.c
+@@ -137,13 +137,6 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
+ 		return retval;
+ 	}
+ 
+-	/* ipc_addid() locks msq upon success. */
+-	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
+-	if (id < 0) {
+-		ipc_rcu_putref(msq, msg_rcu_free);
+-		return id;
+-	}
+-
+ 	msq->q_stime = msq->q_rtime = 0;
+ 	msq->q_ctime = get_seconds();
+ 	msq->q_cbytes = msq->q_qnum = 0;
+@@ -153,6 +146,13 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
+ 	INIT_LIST_HEAD(&msq->q_receivers);
+ 	INIT_LIST_HEAD(&msq->q_senders);
+ 
++	/* ipc_addid() locks msq upon success. */
++	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
++	if (id < 0) {
++		ipc_rcu_putref(msq, msg_rcu_free);
++		return id;
++	}
++
+ 	ipc_unlock_object(&msq->q_perm);
+ 	rcu_read_unlock();
+ 
+diff --git a/ipc/shm.c b/ipc/shm.c
+index 4aef24d91b63..0e61fd430547 100644
+--- a/ipc/shm.c
++++ b/ipc/shm.c
+@@ -551,12 +551,6 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
+ 	if (IS_ERR(file))
+ 		goto no_file;
+ 
+-	id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
+-	if (id < 0) {
+-		error = id;
+-		goto no_id;
+-	}
+-
+ 	shp->shm_cprid = task_tgid_vnr(current);
+ 	shp->shm_lprid = 0;
+ 	shp->shm_atim = shp->shm_dtim = 0;
+@@ -565,6 +559,13 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
+ 	shp->shm_nattch = 0;
+ 	shp->shm_file = file;
+ 	shp->shm_creator = current;
++
++	id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
++	if (id < 0) {
++		error = id;
++		goto no_id;
++	}
++
+ 	list_add(&shp->shm_clist, &current->sysvshm.shm_clist);
+ 
+ 	/*
+diff --git a/ipc/util.c b/ipc/util.c
+index be4230020a1f..0f401d94b7c6 100644
+--- a/ipc/util.c
++++ b/ipc/util.c
+@@ -237,6 +237,10 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size)
+ 	rcu_read_lock();
+ 	spin_lock(&new->lock);
+ 
++	current_euid_egid(&euid, &egid);
++	new->cuid = new->uid = euid;
++	new->gid = new->cgid = egid;
++
+ 	id = idr_alloc(&ids->ipcs_idr, new,
+ 		       (next_id < 0) ? 0 : ipcid_to_idx(next_id), 0,
+ 		       GFP_NOWAIT);
+@@ -249,10 +253,6 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size)
+ 
+ 	ids->in_use++;
+ 
+-	current_euid_egid(&euid, &egid);
+-	new->cuid = new->uid = euid;
+-	new->gid = new->cgid = egid;
+-
+ 	if (next_id < 0) {
+ 		new->seq = ids->seq++;
+ 		if (ids->seq > IPCID_SEQ_MAX)
+diff --git a/kernel/cgroup.c b/kernel/cgroup.c
+index c6c4240e7d28..fe6f855de3d1 100644
+--- a/kernel/cgroup.c
++++ b/kernel/cgroup.c
+@@ -46,7 +46,6 @@
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+ #include <linux/rwsem.h>
+-#include <linux/percpu-rwsem.h>
+ #include <linux/string.h>
+ #include <linux/sort.h>
+ #include <linux/kmod.h>
+@@ -104,8 +103,6 @@ static DEFINE_SPINLOCK(cgroup_idr_lock);
+  */
+ static DEFINE_SPINLOCK(release_agent_path_lock);
+ 
+-struct percpu_rw_semaphore cgroup_threadgroup_rwsem;
+-
+ #define cgroup_assert_mutex_or_rcu_locked()				\
+ 	rcu_lockdep_assert(rcu_read_lock_held() ||			\
+ 			   lockdep_is_held(&cgroup_mutex),		\
+@@ -870,6 +867,48 @@ static struct css_set *find_css_set(struct css_set *old_cset,
+ 	return cset;
+ }
+ 
++void cgroup_threadgroup_change_begin(struct task_struct *tsk)
++{
++	down_read(&tsk->signal->group_rwsem);
++}
++
++void cgroup_threadgroup_change_end(struct task_struct *tsk)
++{
++	up_read(&tsk->signal->group_rwsem);
++}
++
++/**
++ * threadgroup_lock - lock threadgroup
++ * @tsk: member task of the threadgroup to lock
++ *
++ * Lock the threadgroup @tsk belongs to.  No new task is allowed to enter
++ * and member tasks aren't allowed to exit (as indicated by PF_EXITING) or
++ * change ->group_leader/pid.  This is useful for cases where the threadgroup
++ * needs to stay stable across blockable operations.
++ *
++ * fork and exit explicitly call threadgroup_change_{begin|end}() for
++ * synchronization.  While held, no new task will be added to threadgroup
++ * and no existing live task will have its PF_EXITING set.
++ *
++ * de_thread() does threadgroup_change_{begin|end}() when a non-leader
++ * sub-thread becomes a new leader.
++ */
++static void threadgroup_lock(struct task_struct *tsk)
++{
++	down_write(&tsk->signal->group_rwsem);
++}
++
++/**
++ * threadgroup_unlock - unlock threadgroup
++ * @tsk: member task of the threadgroup to unlock
++ *
++ * Reverse threadgroup_lock().
++ */
++static inline void threadgroup_unlock(struct task_struct *tsk)
++{
++	up_write(&tsk->signal->group_rwsem);
++}
++
+ static struct cgroup_root *cgroup_root_from_kf(struct kernfs_root *kf_root)
+ {
+ 	struct cgroup *root_cgrp = kf_root->kn->priv;
+@@ -2066,9 +2105,9 @@ static void cgroup_task_migrate(struct cgroup *old_cgrp,
+ 	lockdep_assert_held(&css_set_rwsem);
+ 
+ 	/*
+-	 * We are synchronized through cgroup_threadgroup_rwsem against
+-	 * PF_EXITING setting such that we can't race against cgroup_exit()
+-	 * changing the css_set to init_css_set and dropping the old one.
++	 * We are synchronized through threadgroup_lock() against PF_EXITING
++	 * setting such that we can't race against cgroup_exit() changing the
++	 * css_set to init_css_set and dropping the old one.
+ 	 */
+ 	WARN_ON_ONCE(tsk->flags & PF_EXITING);
+ 	old_cset = task_css_set(tsk);
+@@ -2125,11 +2164,10 @@ static void cgroup_migrate_finish(struct list_head *preloaded_csets)
+  * @src_cset and add it to @preloaded_csets, which should later be cleaned
+  * up by cgroup_migrate_finish().
+  *
+- * This function may be called without holding cgroup_threadgroup_rwsem
+- * even if the target is a process.  Threads may be created and destroyed
+- * but as long as cgroup_mutex is not dropped, no new css_set can be put
+- * into play and the preloaded css_sets are guaranteed to cover all
+- * migrations.
++ * This function may be called without holding threadgroup_lock even if the
++ * target is a process.  Threads may be created and destroyed but as long
++ * as cgroup_mutex is not dropped, no new css_set can be put into play and
++ * the preloaded css_sets are guaranteed to cover all migrations.
+  */
+ static void cgroup_migrate_add_src(struct css_set *src_cset,
+ 				   struct cgroup *dst_cgrp,
+@@ -2232,7 +2270,7 @@ err:
+  * @threadgroup: whether @leader points to the whole process or a single task
+  *
+  * Migrate a process or task denoted by @leader to @cgrp.  If migrating a
+- * process, the caller must be holding cgroup_threadgroup_rwsem.  The
++ * process, the caller must be holding threadgroup_lock of @leader.  The
+  * caller is also responsible for invoking cgroup_migrate_add_src() and
+  * cgroup_migrate_prepare_dst() on the targets before invoking this
+  * function and following up with cgroup_migrate_finish().
+@@ -2360,7 +2398,7 @@ out_release_tset:
+  * @leader: the task or the leader of the threadgroup to be attached
+  * @threadgroup: attach the whole threadgroup?
+  *
+- * Call holding cgroup_mutex and cgroup_threadgroup_rwsem.
++ * Call holding cgroup_mutex and threadgroup_lock of @leader.
+  */
+ static int cgroup_attach_task(struct cgroup *dst_cgrp,
+ 			      struct task_struct *leader, bool threadgroup)
+@@ -2452,13 +2490,14 @@ static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf,
+ 	if (!cgrp)
+ 		return -ENODEV;
+ 
+-	percpu_down_write(&cgroup_threadgroup_rwsem);
++retry_find_task:
+ 	rcu_read_lock();
+ 	if (pid) {
+ 		tsk = find_task_by_vpid(pid);
+ 		if (!tsk) {
++			rcu_read_unlock();
+ 			ret = -ESRCH;
+-			goto out_unlock_rcu;
++			goto out_unlock_cgroup;
+ 		}
+ 	} else {
+ 		tsk = current;
+@@ -2474,23 +2513,37 @@ static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf,
+ 	 */
+ 	if (tsk == kthreadd_task || (tsk->flags & PF_NO_SETAFFINITY)) {
+ 		ret = -EINVAL;
+-		goto out_unlock_rcu;
++		rcu_read_unlock();
++		goto out_unlock_cgroup;
+ 	}
+ 
+ 	get_task_struct(tsk);
+ 	rcu_read_unlock();
+ 
++	threadgroup_lock(tsk);
++	if (threadgroup) {
++		if (!thread_group_leader(tsk)) {
++			/*
++			 * a race with de_thread from another thread's exec()
++			 * may strip us of our leadership, if this happens,
++			 * there is no choice but to throw this task away and
++			 * try again; this is
++			 * "double-double-toil-and-trouble-check locking".
++			 */
++			threadgroup_unlock(tsk);
++			put_task_struct(tsk);
++			goto retry_find_task;
++		}
++	}
++
+ 	ret = cgroup_procs_write_permission(tsk, cgrp, of);
+ 	if (!ret)
+ 		ret = cgroup_attach_task(cgrp, tsk, threadgroup);
+ 
+-	put_task_struct(tsk);
+-	goto out_unlock_threadgroup;
++	threadgroup_unlock(tsk);
+ 
+-out_unlock_rcu:
+-	rcu_read_unlock();
+-out_unlock_threadgroup:
+-	percpu_up_write(&cgroup_threadgroup_rwsem);
++	put_task_struct(tsk);
++out_unlock_cgroup:
+ 	cgroup_kn_unlock(of->kn);
+ 	return ret ?: nbytes;
+ }
+@@ -2635,8 +2688,6 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+ 
+-	percpu_down_write(&cgroup_threadgroup_rwsem);
+-
+ 	/* look up all csses currently attached to @cgrp's subtree */
+ 	down_read(&css_set_rwsem);
+ 	css_for_each_descendant_pre(css, cgroup_css(cgrp, NULL)) {
+@@ -2692,8 +2743,17 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
+ 				goto out_finish;
+ 			last_task = task;
+ 
++			threadgroup_lock(task);
++			/* raced against de_thread() from another thread? */
++			if (!thread_group_leader(task)) {
++				threadgroup_unlock(task);
++				put_task_struct(task);
++				continue;
++			}
++
+ 			ret = cgroup_migrate(src_cset->dfl_cgrp, task, true);
+ 
++			threadgroup_unlock(task);
+ 			put_task_struct(task);
+ 
+ 			if (WARN(ret, "cgroup: failed to update controllers for the default hierarchy (%d), further operations may crash or hang\n", ret))
+@@ -2703,7 +2763,6 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp)
+ 
+ out_finish:
+ 	cgroup_migrate_finish(&preloaded_csets);
+-	percpu_up_write(&cgroup_threadgroup_rwsem);
+ 	return ret;
+ }
+ 
+@@ -5013,7 +5072,6 @@ int __init cgroup_init(void)
+ 	unsigned long key;
+ 	int ssid, err;
+ 
+-	BUG_ON(percpu_init_rwsem(&cgroup_threadgroup_rwsem));
+ 	BUG_ON(cgroup_init_cftypes(NULL, cgroup_dfl_base_files));
+ 	BUG_ON(cgroup_init_cftypes(NULL, cgroup_legacy_base_files));
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 26a70dc7a915..e769c8c86f86 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1146,6 +1146,10 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk)
+ 	tty_audit_fork(sig);
+ 	sched_autogroup_fork(sig);
+ 
++#ifdef CONFIG_CGROUPS
++	init_rwsem(&sig->group_rwsem);
++#endif
++
+ 	sig->oom_score_adj = current->signal->oom_score_adj;
+ 	sig->oom_score_adj_min = current->signal->oom_score_adj_min;
+ 
+diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
+index 0e97c142ce40..4e6267a34440 100644
+--- a/kernel/irq/proc.c
++++ b/kernel/irq/proc.c
+@@ -12,6 +12,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/interrupt.h>
+ #include <linux/kernel_stat.h>
++#include <linux/mutex.h>
+ 
+ #include "internals.h"
+ 
+@@ -323,18 +324,29 @@ void register_handler_proc(unsigned int irq, struct irqaction *action)
+ 
+ void register_irq_proc(unsigned int irq, struct irq_desc *desc)
+ {
++	static DEFINE_MUTEX(register_lock);
+ 	char name [MAX_NAMELEN];
+ 
+-	if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip) || desc->dir)
++	if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip))
+ 		return;
+ 
++	/*
++	 * irq directories are registered only when a handler is
++	 * added, not when the descriptor is created, so multiple
++	 * tasks might try to register at the same time.
++	 */
++	mutex_lock(&register_lock);
++
++	if (desc->dir)
++		goto out_unlock;
++
+ 	memset(name, 0, MAX_NAMELEN);
+ 	sprintf(name, "%d", irq);
+ 
+ 	/* create /proc/irq/1234 */
+ 	desc->dir = proc_mkdir(name, root_irq_dir);
+ 	if (!desc->dir)
+-		return;
++		goto out_unlock;
+ 
+ #ifdef CONFIG_SMP
+ 	/* create /proc/irq/<irq>/smp_affinity */
+@@ -355,6 +367,9 @@ void register_irq_proc(unsigned int irq, struct irq_desc *desc)
+ 
+ 	proc_create_data("spurious", 0444, desc->dir,
+ 			 &irq_spurious_proc_fops, (void *)(long)irq);
++
++out_unlock:
++	mutex_unlock(&register_lock);
+ }
+ 
+ void unregister_irq_proc(unsigned int irq, struct irq_desc *desc)
+diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
+index 38c49202d532..8ed01611ae73 100644
+--- a/kernel/locking/qspinlock.c
++++ b/kernel/locking/qspinlock.c
+@@ -289,7 +289,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+ 	if (pv_enabled())
+ 		goto queue;
+ 
+-	if (virt_queued_spin_lock(lock))
++	if (virt_spin_lock(lock))
+ 		return;
+ 
+ 	/*
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index e9673433cc01..6776631676e0 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -2461,11 +2461,11 @@ static struct rq *finish_task_switch(struct task_struct *prev)
+ 	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
+ 	 * schedule one last time. The schedule call will never return, and
+ 	 * the scheduled task must drop that reference.
+-	 * The test for TASK_DEAD must occur while the runqueue locks are
+-	 * still held, otherwise prev could be scheduled on another cpu, die
+-	 * there before we look at prev->state, and then the reference would
+-	 * be dropped twice.
+-	 *		Manfred Spraul <manfred@colorfullife.com>
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_lock_switch), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
+ 	 */
+ 	prev_state = prev->state;
+ 	vtime_task_switch(prev);
+@@ -2614,13 +2614,20 @@ unsigned long nr_running(void)
+ 
+ /*
+  * Check if only the current task is running on the cpu.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible to use it correctly, for example:
++ *
++ * - from a non-preemptable section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
+  */
+ bool single_task_running(void)
+ {
+-	if (cpu_rq(smp_processor_id())->nr_running == 1)
+-		return true;
+-	else
+-		return false;
++	return raw_rq()->nr_running == 1;
+ }
+ EXPORT_SYMBOL(single_task_running);
+ 
+@@ -4492,7 +4499,7 @@ SYSCALL_DEFINE0(sched_yield)
+ 
+ int __sched _cond_resched(void)
+ {
+-	if (should_resched()) {
++	if (should_resched(0)) {
+ 		preempt_schedule_common();
+ 		return 1;
+ 	}
+@@ -4510,7 +4517,7 @@ EXPORT_SYMBOL(_cond_resched);
+  */
+ int __cond_resched_lock(spinlock_t *lock)
+ {
+-	int resched = should_resched();
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
+ 	int ret = 0;
+ 
+ 	lockdep_assert_held(lock);
+@@ -4532,7 +4539,7 @@ int __sched __cond_resched_softirq(void)
+ {
+ 	BUG_ON(!in_softirq());
+ 
+-	if (should_resched()) {
++	if (should_resched(SOFTIRQ_DISABLE_OFFSET)) {
+ 		local_bh_enable();
+ 		preempt_schedule_common();
+ 		local_bh_disable();
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 84d48790bb6d..08ab96b366bf 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1091,9 +1091,10 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
+ 	 * After ->on_cpu is cleared, the task can be moved to a different CPU.
+ 	 * We must ensure this doesn't happen until the switch is completely
+ 	 * finished.
++	 *
++	 * Pairs with the control dependency and rmb in try_to_wake_up().
+ 	 */
+-	smp_wmb();
+-	prev->on_cpu = 0;
++	smp_store_release(&prev->on_cpu, 0);
+ #endif
+ #ifdef CONFIG_DEBUG_SPINLOCK
+ 	/* this is a valid case when another task releases the spinlock */
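
Taken together, the rewritten finish_task_switch() comment and this finish_lock_switch() hunk replace the smp_wmb()/plain-store pair with smp_store_release(&prev->on_cpu, 0), so the store that publishes "prev is off the CPU" cannot be observed before the read of prev->state; the waker's control dependency plus rmb supply the matching ordering on the other side. The same release/acquire discipline can be sketched in portable C11 atomics (data and done are illustrative stand-ins, not kernel fields):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static int data;                        /* plays the role of prev->state */
static atomic_int done;                 /* plays the role of prev->on_cpu */

static void *producer(void *arg)
{
        data = 42;                      /* written before the release store */
        atomic_store_explicit(&done, 1, memory_order_release);
        return NULL;
}

static void *consumer(void *arg)
{
        while (!atomic_load_explicit(&done, memory_order_acquire))
                ;                       /* spin until the release is visible */
        printf("%d\n", data);           /* guaranteed to print 42 */
        return NULL;
}

int main(void)
{
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
}

A consumer that observes done == 1 through the acquire load is also guaranteed to see data == 42, which is exactly the ordering the scheduler needs between prev->state and prev->on_cpu.
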
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 841b72f720e8..3a38775b50c2 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -217,7 +217,7 @@ static void clocksource_watchdog(unsigned long data)
+ 			continue;
+ 
+ 		/* Check the deviation from the watchdog clocksource. */
+-		if ((abs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD)) {
++		if (abs64(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD) {
+ 			pr_warn("timekeeping watchdog: Marking clocksource '%s' as unstable because the skew is too large:\n",
+ 				cs->name);
+ 			pr_warn("                      '%s' wd_now: %llx wd_last: %llx mask: %llx\n",
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index bca3667a2de1..a20d4110e871 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -1607,7 +1607,7 @@ static __always_inline void timekeeping_freqadjust(struct timekeeper *tk,
+ 	negative = (tick_error < 0);
+ 
+ 	/* Sort out the magnitude of the correction */
+-	tick_error = abs(tick_error);
++	tick_error = abs64(tick_error);
+ 	for (adj = 0; tick_error > interval; adj++)
+ 		tick_error >>= 1;
+ 
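
Both timing hunks above switch from abs() to abs64() because, on 32-bit builds of this kernel, abs() evaluates a 64-bit argument as an int and silently truncates it, so a large clocksource skew or tick error could be misjudged. A small host-side demonstration of the truncation (my_abs64() is a local stand-in for the kernel helper, and the explicit (int) cast mimics what the old macro did on 32-bit):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

static int64_t my_abs64(int64_t x)
{
        return x < 0 ? -x : x;
}

int main(void)
{
        int64_t delta = -5000000000LL;          /* about -5 s, in nanoseconds */

        /* abs() works on int: the 64-bit value is truncated before the sign test */
        printf("abs((int)delta) = %d\n", abs((int)delta));
        printf("my_abs64(delta) = %lld\n", (long long)my_abs64(delta));
        return 0;
}

On typical compilers the first line prints a truncated, meaningless magnitude, while the second prints 5000000000 as intended.
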
+diff --git a/lib/iommu-common.c b/lib/iommu-common.c
+index ff19f66d3f7f..b1c93e94ca7a 100644
+--- a/lib/iommu-common.c
++++ b/lib/iommu-common.c
+@@ -21,8 +21,7 @@ static	DEFINE_PER_CPU(unsigned int, iommu_hash_common);
+ 
+ static inline bool need_flush(struct iommu_map_table *iommu)
+ {
+-	return (iommu->lazy_flush != NULL &&
+-		(iommu->flags & IOMMU_NEED_FLUSH) != 0);
++	return ((iommu->flags & IOMMU_NEED_FLUSH) != 0);
+ }
+ 
+ static inline void set_flush(struct iommu_map_table *iommu)
+@@ -211,7 +210,8 @@ unsigned long iommu_tbl_range_alloc(struct device *dev,
+ 			goto bail;
+ 		}
+ 	}
+-	if (n < pool->hint || need_flush(iommu)) {
++	if (iommu->lazy_flush &&
++	    (n < pool->hint || need_flush(iommu))) {
+ 		clear_flush(iommu);
+ 		iommu->lazy_flush(iommu);
+ 	}
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index a8c3087089d8..62c1ec5a9d31 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2974,6 +2974,14 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
+ 			continue;
+ 
+ 		/*
++		 * Shared VMAs have their own reserves and do not affect
++		 * MAP_PRIVATE accounting but it is possible that a shared
++		 * VMA is using the same page so check and skip such VMAs.
++		 */
++		if (iter_vma->vm_flags & VM_MAYSHARE)
++			continue;
++
++		/*
+ 		 * Unmap the page from other VMAs without their own reserves.
+ 		 * They get marked to be SIGKILLed if they fault in these
+ 		 * areas. This is because a future no-page fault on this VMA
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index acb93c554f6e..237d4686482d 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -806,12 +806,14 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz)
+ }
+ 
+ /*
++ * Return page count for single (non recursive) @memcg.
++ *
+  * Implementation Note: reading percpu statistics for memcg.
+  *
+  * Both of vmstat[] and percpu_counter has threshold and do periodic
+  * synchronization to implement "quick" read. There are trade-off between
+  * reading cost and precision of value. Then, we may have a chance to implement
+- * a periodic synchronizion of counter in memcg's counter.
++ * a periodic synchronization of counter in memcg's counter.
+  *
+  * But this _read() function is used for user interface now. The user accounts
+  * memory usage by memory cgroup and he _always_ requires exact value because
+@@ -821,17 +823,24 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz)
+  *
+  * If there are kernel internal actions which can make use of some not-exact
+  * value, and reading all cpu value can be performance bottleneck in some
+- * common workload, threashold and synchonization as vmstat[] should be
++ * common workload, threshold and synchronization as vmstat[] should be
+  * implemented.
+  */
+-static long mem_cgroup_read_stat(struct mem_cgroup *memcg,
+-				 enum mem_cgroup_stat_index idx)
++static unsigned long
++mem_cgroup_read_stat(struct mem_cgroup *memcg, enum mem_cgroup_stat_index idx)
+ {
+ 	long val = 0;
+ 	int cpu;
+ 
++	/* Per-cpu values can be negative, use a signed accumulator */
+ 	for_each_possible_cpu(cpu)
+ 		val += per_cpu(memcg->stat->count[idx], cpu);
++	/*
++	 * Summing races with updates, so val may be negative.  Avoid exposing
++	 * transient negative values.
++	 */
++	if (val < 0)
++		val = 0;
+ 	return val;
+ }
+ 
+@@ -1498,7 +1507,7 @@ void mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p)
+ 		for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) {
+ 			if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account)
+ 				continue;
+-			pr_cont(" %s:%ldKB", mem_cgroup_stat_names[i],
++			pr_cont(" %s:%luKB", mem_cgroup_stat_names[i],
+ 				K(mem_cgroup_read_stat(iter, i)));
+ 		}
+ 
+@@ -3119,14 +3128,11 @@ static unsigned long tree_stat(struct mem_cgroup *memcg,
+ 			       enum mem_cgroup_stat_index idx)
+ {
+ 	struct mem_cgroup *iter;
+-	long val = 0;
++	unsigned long val = 0;
+ 
+-	/* Per-cpu values can be negative, use a signed accumulator */
+ 	for_each_mem_cgroup_tree(iter, memcg)
+ 		val += mem_cgroup_read_stat(iter, idx);
+ 
+-	if (val < 0) /* race ? */
+-		val = 0;
+ 	return val;
+ }
+ 
+@@ -3469,7 +3475,7 @@ static int memcg_stat_show(struct seq_file *m, void *v)
+ 	for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) {
+ 		if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account)
+ 			continue;
+-		seq_printf(m, "%s %ld\n", mem_cgroup_stat_names[i],
++		seq_printf(m, "%s %lu\n", mem_cgroup_stat_names[i],
+ 			   mem_cgroup_read_stat(memcg, i) * PAGE_SIZE);
+ 	}
+ 
+@@ -3494,13 +3500,13 @@ static int memcg_stat_show(struct seq_file *m, void *v)
+ 			   (u64)memsw * PAGE_SIZE);
+ 
+ 	for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) {
+-		long long val = 0;
++		unsigned long long val = 0;
+ 
+ 		if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account)
+ 			continue;
+ 		for_each_mem_cgroup_tree(mi, memcg)
+ 			val += mem_cgroup_read_stat(mi, i) * PAGE_SIZE;
+-		seq_printf(m, "total_%s %lld\n", mem_cgroup_stat_names[i], val);
++		seq_printf(m, "total_%s %llu\n", mem_cgroup_stat_names[i], val);
+ 	}
+ 
+ 	for (i = 0; i < MEM_CGROUP_EVENTS_NSTATS; i++) {
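
The memcontrol.c changes make mem_cgroup_read_stat() accumulate the per-CPU counters in a signed variable and clamp a transiently negative sum to zero before returning it as an unsigned quantity, so the %lu/%llu format changes elsewhere in the file can never print a hugely wrapped value. Reduced to plain C (percpu_count[] and read_stat() are invented for illustration):

#include <stdio.h>

#define NR_CPUS 4

/* stand-in for the per-CPU statistic slots; updates race with the reader */
static long percpu_count[NR_CPUS] = { 7, -3, 2, -8 };

static unsigned long read_stat(void)
{
        long val = 0;                   /* signed: per-CPU values can be negative */
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                val += percpu_count[cpu];
        /* summing races with updates; never expose a transient negative total */
        if (val < 0)
                val = 0;
        return (unsigned long)val;
}

int main(void)
{
        printf("%lu\n", read_stat());   /* prints 0 rather than a huge wrapped value */
        return 0;
}
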
+diff --git a/mm/migrate.c b/mm/migrate.c
+index eb4267107d1f..fcb6204de108 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -734,6 +734,15 @@ static int move_to_new_page(struct page *newpage, struct page *page,
+ 	if (PageSwapBacked(page))
+ 		SetPageSwapBacked(newpage);
+ 
++	/*
++	 * Indirectly called below, migrate_page_copy() copies PG_dirty and thus
++	 * needs newpage's memcg set to transfer memcg dirty page accounting.
++	 * So perform memcg migration in two steps:
++	 * 1. set newpage->mem_cgroup (here)
++	 * 2. clear page->mem_cgroup (below)
++	 */
++	set_page_memcg(newpage, page_memcg(page));
++
+ 	mapping = page_mapping(page);
+ 	if (!mapping)
+ 		rc = migrate_page(mapping, newpage, page, mode);
+@@ -750,9 +759,10 @@ static int move_to_new_page(struct page *newpage, struct page *page,
+ 		rc = fallback_migrate_page(mapping, newpage, page, mode);
+ 
+ 	if (rc != MIGRATEPAGE_SUCCESS) {
++		set_page_memcg(newpage, NULL);
+ 		newpage->mapping = NULL;
+ 	} else {
+-		mem_cgroup_migrate(page, newpage, false);
++		set_page_memcg(page, NULL);
+ 		if (page_was_mapped)
+ 			remove_migration_ptes(page, newpage);
+ 		page->mapping = NULL;
+@@ -1068,7 +1078,7 @@ out:
+ 	if (rc != MIGRATEPAGE_SUCCESS && put_new_page)
+ 		put_new_page(new_hpage, private);
+ 	else
+-		put_page(new_hpage);
++		putback_active_hugepage(new_hpage);
+ 
+ 	if (result) {
+ 		if (rc)
+diff --git a/mm/slab.c b/mm/slab.c
+index bbd0b47dc6a9..ae360283029c 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -2190,9 +2190,16 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
+ 			size += BYTES_PER_WORD;
+ 	}
+ #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
+-	if (size >= kmalloc_size(INDEX_NODE + 1)
+-	    && cachep->object_size > cache_line_size()
+-	    && ALIGN(size, cachep->align) < PAGE_SIZE) {
++	/*
++	 * To activate debug pagealloc, off-slab management is necessary
++	 * requirement. In early phase of initialization, small sized slab
++	 * doesn't get initialized so it would not be possible. So, we need
++	 * to check size >= 256. It guarantees that all necessary small
++	 * sized slab is initialized in current slab initialization sequence.
++	 */
++	if (!slab_early_init && size >= kmalloc_size(INDEX_NODE) &&
++		size >= 256 && cachep->object_size > cache_line_size() &&
++		ALIGN(size, cachep->align) < PAGE_SIZE) {
+ 		cachep->obj_offset += PAGE_SIZE - ALIGN(size, cachep->align);
+ 		size = PAGE_SIZE;
+ 	}
+diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
+index 6d0b471eede8..cc7d87d64987 100644
+--- a/net/batman-adv/distributed-arp-table.c
++++ b/net/batman-adv/distributed-arp-table.c
+@@ -19,6 +19,7 @@
+ #include "main.h"
+ 
+ #include <linux/atomic.h>
++#include <linux/bitops.h>
+ #include <linux/byteorder/generic.h>
+ #include <linux/errno.h>
+ #include <linux/etherdevice.h>
+@@ -453,7 +454,7 @@ static bool batadv_is_orig_node_eligible(struct batadv_dat_candidate *res,
+ 	int j;
+ 
+ 	/* check if orig node candidate is running DAT */
+-	if (!(candidate->capabilities & BATADV_ORIG_CAPA_HAS_DAT))
++	if (!test_bit(BATADV_ORIG_CAPA_HAS_DAT, &candidate->capabilities))
+ 		goto out;
+ 
+ 	/* Check if this node has already been selected... */
+@@ -713,9 +714,9 @@ static void batadv_dat_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
+ 					   uint16_t tvlv_value_len)
+ {
+ 	if (flags & BATADV_TVLV_HANDLER_OGM_CIFNOTFND)
+-		orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_DAT;
++		clear_bit(BATADV_ORIG_CAPA_HAS_DAT, &orig->capabilities);
+ 	else
+-		orig->capabilities |= BATADV_ORIG_CAPA_HAS_DAT;
++		set_bit(BATADV_ORIG_CAPA_HAS_DAT, &orig->capabilities);
+ }
+ 
+ /**
+diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c
+index 7aa480b7edd0..68a9554961eb 100644
+--- a/net/batman-adv/multicast.c
++++ b/net/batman-adv/multicast.c
+@@ -19,6 +19,8 @@
+ #include "main.h"
+ 
+ #include <linux/atomic.h>
++#include <linux/bitops.h>
++#include <linux/bug.h>
+ #include <linux/byteorder/generic.h>
+ #include <linux/errno.h>
+ #include <linux/etherdevice.h>
+@@ -588,19 +590,26 @@ batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb,
+  *
+  * If the BATADV_MCAST_WANT_ALL_UNSNOOPABLES flag of this originator,
+  * orig, has toggled then this method updates counter and list accordingly.
++ *
++ * Caller needs to hold orig->mcast_handler_lock.
+  */
+ static void batadv_mcast_want_unsnoop_update(struct batadv_priv *bat_priv,
+ 					     struct batadv_orig_node *orig,
+ 					     uint8_t mcast_flags)
+ {
++	struct hlist_node *node = &orig->mcast_want_all_unsnoopables_node;
++	struct hlist_head *head = &bat_priv->mcast.want_all_unsnoopables_list;
++
+ 	/* switched from flag unset to set */
+ 	if (mcast_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES &&
+ 	    !(orig->mcast_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES)) {
+ 		atomic_inc(&bat_priv->mcast.num_want_all_unsnoopables);
+ 
+ 		spin_lock_bh(&bat_priv->mcast.want_lists_lock);
+-		hlist_add_head_rcu(&orig->mcast_want_all_unsnoopables_node,
+-				   &bat_priv->mcast.want_all_unsnoopables_list);
++		/* flag checks above + mcast_handler_lock prevents this */
++		WARN_ON(!hlist_unhashed(node));
++
++		hlist_add_head_rcu(node, head);
+ 		spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
+ 	/* switched from flag set to unset */
+ 	} else if (!(mcast_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES) &&
+@@ -608,7 +617,10 @@ static void batadv_mcast_want_unsnoop_update(struct batadv_priv *bat_priv,
+ 		atomic_dec(&bat_priv->mcast.num_want_all_unsnoopables);
+ 
+ 		spin_lock_bh(&bat_priv->mcast.want_lists_lock);
+-		hlist_del_rcu(&orig->mcast_want_all_unsnoopables_node);
++		/* flag checks above + mcast_handler_lock prevents this */
++		WARN_ON(hlist_unhashed(node));
++
++		hlist_del_init_rcu(node);
+ 		spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
+ 	}
+ }
+@@ -621,19 +633,26 @@ static void batadv_mcast_want_unsnoop_update(struct batadv_priv *bat_priv,
+  *
+  * If the BATADV_MCAST_WANT_ALL_IPV4 flag of this originator, orig, has
+  * toggled then this method updates counter and list accordingly.
++ *
++ * Caller needs to hold orig->mcast_handler_lock.
+  */
+ static void batadv_mcast_want_ipv4_update(struct batadv_priv *bat_priv,
+ 					  struct batadv_orig_node *orig,
+ 					  uint8_t mcast_flags)
+ {
++	struct hlist_node *node = &orig->mcast_want_all_ipv4_node;
++	struct hlist_head *head = &bat_priv->mcast.want_all_ipv4_list;
++
+ 	/* switched from flag unset to set */
+ 	if (mcast_flags & BATADV_MCAST_WANT_ALL_IPV4 &&
+ 	    !(orig->mcast_flags & BATADV_MCAST_WANT_ALL_IPV4)) {
+ 		atomic_inc(&bat_priv->mcast.num_want_all_ipv4);
+ 
+ 		spin_lock_bh(&bat_priv->mcast.want_lists_lock);
+-		hlist_add_head_rcu(&orig->mcast_want_all_ipv4_node,
+-				   &bat_priv->mcast.want_all_ipv4_list);
++		/* flag checks above + mcast_handler_lock prevents this */
++		WARN_ON(!hlist_unhashed(node));
++
++		hlist_add_head_rcu(node, head);
+ 		spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
+ 	/* switched from flag set to unset */
+ 	} else if (!(mcast_flags & BATADV_MCAST_WANT_ALL_IPV4) &&
+@@ -641,7 +660,10 @@ static void batadv_mcast_want_ipv4_update(struct batadv_priv *bat_priv,
+ 		atomic_dec(&bat_priv->mcast.num_want_all_ipv4);
+ 
+ 		spin_lock_bh(&bat_priv->mcast.want_lists_lock);
+-		hlist_del_rcu(&orig->mcast_want_all_ipv4_node);
++		/* flag checks above + mcast_handler_lock prevents this */
++		WARN_ON(hlist_unhashed(node));
++
++		hlist_del_init_rcu(node);
+ 		spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
+ 	}
+ }
+@@ -654,19 +676,26 @@ static void batadv_mcast_want_ipv4_update(struct batadv_priv *bat_priv,
+  *
+  * If the BATADV_MCAST_WANT_ALL_IPV6 flag of this originator, orig, has
+  * toggled then this method updates counter and list accordingly.
++ *
++ * Caller needs to hold orig->mcast_handler_lock.
+  */
+ static void batadv_mcast_want_ipv6_update(struct batadv_priv *bat_priv,
+ 					  struct batadv_orig_node *orig,
+ 					  uint8_t mcast_flags)
+ {
++	struct hlist_node *node = &orig->mcast_want_all_ipv6_node;
++	struct hlist_head *head = &bat_priv->mcast.want_all_ipv6_list;
++
+ 	/* switched from flag unset to set */
+ 	if (mcast_flags & BATADV_MCAST_WANT_ALL_IPV6 &&
+ 	    !(orig->mcast_flags & BATADV_MCAST_WANT_ALL_IPV6)) {
+ 		atomic_inc(&bat_priv->mcast.num_want_all_ipv6);
+ 
+ 		spin_lock_bh(&bat_priv->mcast.want_lists_lock);
+-		hlist_add_head_rcu(&orig->mcast_want_all_ipv6_node,
+-				   &bat_priv->mcast.want_all_ipv6_list);
++		/* flag checks above + mcast_handler_lock prevents this */
++		WARN_ON(!hlist_unhashed(node));
++
++		hlist_add_head_rcu(node, head);
+ 		spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
+ 	/* switched from flag set to unset */
+ 	} else if (!(mcast_flags & BATADV_MCAST_WANT_ALL_IPV6) &&
+@@ -674,7 +703,10 @@ static void batadv_mcast_want_ipv6_update(struct batadv_priv *bat_priv,
+ 		atomic_dec(&bat_priv->mcast.num_want_all_ipv6);
+ 
+ 		spin_lock_bh(&bat_priv->mcast.want_lists_lock);
+-		hlist_del_rcu(&orig->mcast_want_all_ipv6_node);
++		/* flag checks above + mcast_handler_lock prevents this */
++		WARN_ON(hlist_unhashed(node));
++
++		hlist_del_init_rcu(node);
+ 		spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
+ 	}
+ }
+@@ -697,39 +729,42 @@ static void batadv_mcast_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
+ 	uint8_t mcast_flags = BATADV_NO_FLAGS;
+ 	bool orig_initialized;
+ 
+-	orig_initialized = orig->capa_initialized & BATADV_ORIG_CAPA_HAS_MCAST;
++	if (orig_mcast_enabled && tvlv_value &&
++	    (tvlv_value_len >= sizeof(mcast_flags)))
++		mcast_flags = *(uint8_t *)tvlv_value;
++
++	spin_lock_bh(&orig->mcast_handler_lock);
++	orig_initialized = test_bit(BATADV_ORIG_CAPA_HAS_MCAST,
++				    &orig->capa_initialized);
+ 
+ 	/* If mcast support is turned on decrease the disabled mcast node
+ 	 * counter only if we had increased it for this node before. If this
+ 	 * is a completely new orig_node no need to decrease the counter.
+ 	 */
+ 	if (orig_mcast_enabled &&
+-	    !(orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST)) {
++	    !test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities)) {
+ 		if (orig_initialized)
+ 			atomic_dec(&bat_priv->mcast.num_disabled);
+-		orig->capabilities |= BATADV_ORIG_CAPA_HAS_MCAST;
++		set_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities);
+ 	/* If mcast support is being switched off or if this is an initial
+ 	 * OGM without mcast support then increase the disabled mcast
+ 	 * node counter.
+ 	 */
+ 	} else if (!orig_mcast_enabled &&
+-		   (orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST ||
++		   (test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities) ||
+ 		    !orig_initialized)) {
+ 		atomic_inc(&bat_priv->mcast.num_disabled);
+-		orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_MCAST;
++		clear_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities);
+ 	}
+ 
+-	orig->capa_initialized |= BATADV_ORIG_CAPA_HAS_MCAST;
+-
+-	if (orig_mcast_enabled && tvlv_value &&
+-	    (tvlv_value_len >= sizeof(mcast_flags)))
+-		mcast_flags = *(uint8_t *)tvlv_value;
++	set_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capa_initialized);
+ 
+ 	batadv_mcast_want_unsnoop_update(bat_priv, orig, mcast_flags);
+ 	batadv_mcast_want_ipv4_update(bat_priv, orig, mcast_flags);
+ 	batadv_mcast_want_ipv6_update(bat_priv, orig, mcast_flags);
+ 
+ 	orig->mcast_flags = mcast_flags;
++	spin_unlock_bh(&orig->mcast_handler_lock);
+ }
+ 
+ /**
+@@ -763,11 +798,15 @@ void batadv_mcast_purge_orig(struct batadv_orig_node *orig)
+ {
+ 	struct batadv_priv *bat_priv = orig->bat_priv;
+ 
+-	if (!(orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST) &&
+-	    orig->capa_initialized & BATADV_ORIG_CAPA_HAS_MCAST)
++	spin_lock_bh(&orig->mcast_handler_lock);
++
++	if (!test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities) &&
++	    test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capa_initialized))
+ 		atomic_dec(&bat_priv->mcast.num_disabled);
+ 
+ 	batadv_mcast_want_unsnoop_update(bat_priv, orig, BATADV_NO_FLAGS);
+ 	batadv_mcast_want_ipv4_update(bat_priv, orig, BATADV_NO_FLAGS);
+ 	batadv_mcast_want_ipv6_update(bat_priv, orig, BATADV_NO_FLAGS);
++
++	spin_unlock_bh(&orig->mcast_handler_lock);
+ }
+diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
+index f0a50f31d822..46604010dcd4 100644
+--- a/net/batman-adv/network-coding.c
++++ b/net/batman-adv/network-coding.c
+@@ -19,6 +19,7 @@
+ #include "main.h"
+ 
+ #include <linux/atomic.h>
++#include <linux/bitops.h>
+ #include <linux/byteorder/generic.h>
+ #include <linux/compiler.h>
+ #include <linux/debugfs.h>
+@@ -134,9 +135,9 @@ static void batadv_nc_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
+ 					  uint16_t tvlv_value_len)
+ {
+ 	if (flags & BATADV_TVLV_HANDLER_OGM_CIFNOTFND)
+-		orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_NC;
++		clear_bit(BATADV_ORIG_CAPA_HAS_NC, &orig->capabilities);
+ 	else
+-		orig->capabilities |= BATADV_ORIG_CAPA_HAS_NC;
++		set_bit(BATADV_ORIG_CAPA_HAS_NC, &orig->capabilities);
+ }
+ 
+ /**
+@@ -894,7 +895,7 @@ void batadv_nc_update_nc_node(struct batadv_priv *bat_priv,
+ 		goto out;
+ 
+ 	/* check if orig node is network coding enabled */
+-	if (!(orig_node->capabilities & BATADV_ORIG_CAPA_HAS_NC))
++	if (!test_bit(BATADV_ORIG_CAPA_HAS_NC, &orig_node->capabilities))
+ 		goto out;
+ 
+ 	/* accept ogms from 'good' neighbors and single hop neighbors */
+diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c
+index 018b7495ad84..32a0fcfab36d 100644
+--- a/net/batman-adv/originator.c
++++ b/net/batman-adv/originator.c
+@@ -696,8 +696,13 @@ struct batadv_orig_node *batadv_orig_node_new(struct batadv_priv *bat_priv,
+ 	orig_node->last_seen = jiffies;
+ 	reset_time = jiffies - 1 - msecs_to_jiffies(BATADV_RESET_PROTECTION_MS);
+ 	orig_node->bcast_seqno_reset = reset_time;
++
+ #ifdef CONFIG_BATMAN_ADV_MCAST
+ 	orig_node->mcast_flags = BATADV_NO_FLAGS;
++	INIT_HLIST_NODE(&orig_node->mcast_want_all_unsnoopables_node);
++	INIT_HLIST_NODE(&orig_node->mcast_want_all_ipv4_node);
++	INIT_HLIST_NODE(&orig_node->mcast_want_all_ipv6_node);
++	spin_lock_init(&orig_node->mcast_handler_lock);
+ #endif
+ 
+ 	/* create a vlan object for the "untagged" LAN */
+diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
+index a2fc843c2243..51cda3a7c51d 100644
+--- a/net/batman-adv/soft-interface.c
++++ b/net/batman-adv/soft-interface.c
+@@ -202,6 +202,7 @@ static int batadv_interface_tx(struct sk_buff *skb,
+ 	int gw_mode;
+ 	enum batadv_forw_mode forw_mode;
+ 	struct batadv_orig_node *mcast_single_orig = NULL;
++	int network_offset = ETH_HLEN;
+ 
+ 	if (atomic_read(&bat_priv->mesh_state) != BATADV_MESH_ACTIVE)
+ 		goto dropped;
+@@ -214,14 +215,18 @@ static int batadv_interface_tx(struct sk_buff *skb,
+ 	case ETH_P_8021Q:
+ 		vhdr = vlan_eth_hdr(skb);
+ 
+-		if (vhdr->h_vlan_encapsulated_proto != ethertype)
++		if (vhdr->h_vlan_encapsulated_proto != ethertype) {
++			network_offset += VLAN_HLEN;
+ 			break;
++		}
+ 
+ 		/* fall through */
+ 	case ETH_P_BATMAN:
+ 		goto dropped;
+ 	}
+ 
++	skb_set_network_header(skb, network_offset);
++
+ 	if (batadv_bla_tx(bat_priv, skb, vid))
+ 		goto dropped;
+ 
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 5809b39c1922..c9b26291ac4c 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -19,6 +19,7 @@
+ #include "main.h"
+ 
+ #include <linux/atomic.h>
++#include <linux/bitops.h>
+ #include <linux/bug.h>
+ #include <linux/byteorder/generic.h>
+ #include <linux/compiler.h>
+@@ -1882,7 +1883,7 @@ void batadv_tt_global_del_orig(struct batadv_priv *bat_priv,
+ 		}
+ 		spin_unlock_bh(list_lock);
+ 	}
+-	orig_node->capa_initialized &= ~BATADV_ORIG_CAPA_HAS_TT;
++	clear_bit(BATADV_ORIG_CAPA_HAS_TT, &orig_node->capa_initialized);
+ }
+ 
+ static bool batadv_tt_global_to_purge(struct batadv_tt_global_entry *tt_global,
+@@ -2841,7 +2842,7 @@ static void _batadv_tt_update_changes(struct batadv_priv *bat_priv,
+ 				return;
+ 		}
+ 	}
+-	orig_node->capa_initialized |= BATADV_ORIG_CAPA_HAS_TT;
++	set_bit(BATADV_ORIG_CAPA_HAS_TT, &orig_node->capa_initialized);
+ }
+ 
+ static void batadv_tt_fill_gtable(struct batadv_priv *bat_priv,
+@@ -3343,7 +3344,8 @@ static void batadv_tt_update_orig(struct batadv_priv *bat_priv,
+ 	bool has_tt_init;
+ 
+ 	tt_vlan = (struct batadv_tvlv_tt_vlan_data *)tt_buff;
+-	has_tt_init = orig_node->capa_initialized & BATADV_ORIG_CAPA_HAS_TT;
++	has_tt_init = test_bit(BATADV_ORIG_CAPA_HAS_TT,
++			       &orig_node->capa_initialized);
+ 
+ 	/* orig table not initialised AND first diff is in the OGM OR the ttvn
+ 	 * increased by one -> we can apply the attached changes
+diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
+index 67d63483618e..55610a805b53 100644
+--- a/net/batman-adv/types.h
++++ b/net/batman-adv/types.h
+@@ -221,6 +221,7 @@ struct batadv_orig_bat_iv {
+  * @batadv_dat_addr_t:  address of the orig node in the distributed hash
+  * @last_seen: time when last packet from this node was received
+  * @bcast_seqno_reset: time when the broadcast seqno window was reset
++ * @mcast_handler_lock: synchronizes mcast-capability and -flag changes
+  * @mcast_flags: multicast flags announced by the orig node
+  * @mcast_want_all_unsnoop_node: a list node for the
+  *  mcast.want_all_unsnoopables list
+@@ -268,13 +269,15 @@ struct batadv_orig_node {
+ 	unsigned long last_seen;
+ 	unsigned long bcast_seqno_reset;
+ #ifdef CONFIG_BATMAN_ADV_MCAST
++	/* synchronizes mcast tvlv specific orig changes */
++	spinlock_t mcast_handler_lock;
+ 	uint8_t mcast_flags;
+ 	struct hlist_node mcast_want_all_unsnoopables_node;
+ 	struct hlist_node mcast_want_all_ipv4_node;
+ 	struct hlist_node mcast_want_all_ipv6_node;
+ #endif
+-	uint8_t capabilities;
+-	uint8_t capa_initialized;
++	unsigned long capabilities;
++	unsigned long capa_initialized;
+ 	atomic_t last_ttvn;
+ 	unsigned char *tt_buff;
+ 	int16_t tt_buff_len;
+@@ -313,10 +316,10 @@ struct batadv_orig_node {
+  *  (= orig node announces a tvlv of type BATADV_TVLV_MCAST)
+  */
+ enum batadv_orig_capabilities {
+-	BATADV_ORIG_CAPA_HAS_DAT = BIT(0),
+-	BATADV_ORIG_CAPA_HAS_NC = BIT(1),
+-	BATADV_ORIG_CAPA_HAS_TT = BIT(2),
+-	BATADV_ORIG_CAPA_HAS_MCAST = BIT(3),
++	BATADV_ORIG_CAPA_HAS_DAT,
++	BATADV_ORIG_CAPA_HAS_NC,
++	BATADV_ORIG_CAPA_HAS_TT,
++	BATADV_ORIG_CAPA_HAS_MCAST,
+ };
+ 
+ /**
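
The batman-adv hunks above convert capabilities and capa_initialized from u8 masks manipulated with non-atomic |= and &= into unsigned long bitmaps indexed by bit number, so concurrent TVLV/OGM handlers can use set_bit(), clear_bit() and test_bit() without losing updates. A user-space approximation with GCC-style atomic builtins standing in for the kernel helpers (the my_* functions and the enum below only mirror the idea):

#include <stdio.h>
#include <stdbool.h>

enum capa_bits {                        /* bit numbers, as in the patched enum */
        CAPA_HAS_DAT,
        CAPA_HAS_NC,
        CAPA_HAS_TT,
        CAPA_HAS_MCAST,
};

static void my_set_bit(int nr, unsigned long *addr)
{
        __atomic_fetch_or(addr, 1UL << nr, __ATOMIC_RELAXED);
}

static void my_clear_bit(int nr, unsigned long *addr)
{
        __atomic_fetch_and(addr, ~(1UL << nr), __ATOMIC_RELAXED);
}

static bool my_test_bit(int nr, const unsigned long *addr)
{
        return (__atomic_load_n(addr, __ATOMIC_RELAXED) >> nr) & 1;
}

int main(void)
{
        unsigned long capabilities = 0;

        my_set_bit(CAPA_HAS_DAT, &capabilities);
        my_set_bit(CAPA_HAS_MCAST, &capabilities);
        my_clear_bit(CAPA_HAS_MCAST, &capabilities);
        printf("DAT=%d MCAST=%d\n",
               my_test_bit(CAPA_HAS_DAT, &capabilities),
               my_test_bit(CAPA_HAS_MCAST, &capabilities));
        return 0;
}

Because each update is a single atomic read-modify-write, two handlers toggling different capability bits at the same time can no longer clobber each other, which is what the plain |=/&= code allowed.
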
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index ad82324f710f..0510a577a7b5 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -2311,12 +2311,6 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
+ 	if (!conn)
+ 		return 1;
+ 
+-	chan = conn->smp;
+-	if (!chan) {
+-		BT_ERR("SMP security requested but not available");
+-		return 1;
+-	}
+-
+ 	if (!hci_dev_test_flag(hcon->hdev, HCI_LE_ENABLED))
+ 		return 1;
+ 
+@@ -2330,6 +2324,12 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
+ 		if (smp_ltk_encrypt(conn, hcon->pending_sec_level))
+ 			return 0;
+ 
++	chan = conn->smp;
++	if (!chan) {
++		BT_ERR("SMP security requested but not available");
++		return 1;
++	}
++
+ 	l2cap_chan_lock(chan);
+ 
+ 	/* If SMP is already in progress ignore this request */
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index afe905c208af..691b54fcaf2a 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -152,9 +152,13 @@ htable_bits(u32 hashsize)
+ #define SET_HOST_MASK(family)	(family == AF_INET ? 32 : 128)
+ 
+ #ifdef IP_SET_HASH_WITH_NET0
++/* cidr from 0 to SET_HOST_MASK() value and c = cidr + 1 */
+ #define NLEN(family)		(SET_HOST_MASK(family) + 1)
++#define CIDR_POS(c)		((c) - 1)
+ #else
++/* cidr from 1 to SET_HOST_MASK() value and c = cidr + 1 */
+ #define NLEN(family)		SET_HOST_MASK(family)
++#define CIDR_POS(c)		((c) - 2)
+ #endif
+ 
+ #else
+@@ -305,7 +309,7 @@ mtype_add_cidr(struct htype *h, u8 cidr, u8 nets_length, u8 n)
+ 		} else if (h->nets[i].cidr[n] < cidr) {
+ 			j = i;
+ 		} else if (h->nets[i].cidr[n] == cidr) {
+-			h->nets[cidr - 1].nets[n]++;
++			h->nets[CIDR_POS(cidr)].nets[n]++;
+ 			return;
+ 		}
+ 	}
+@@ -314,7 +318,7 @@ mtype_add_cidr(struct htype *h, u8 cidr, u8 nets_length, u8 n)
+ 			h->nets[i].cidr[n] = h->nets[i - 1].cidr[n];
+ 	}
+ 	h->nets[i].cidr[n] = cidr;
+-	h->nets[cidr - 1].nets[n] = 1;
++	h->nets[CIDR_POS(cidr)].nets[n] = 1;
+ }
+ 
+ static void
+@@ -325,8 +329,8 @@ mtype_del_cidr(struct htype *h, u8 cidr, u8 nets_length, u8 n)
+ 	for (i = 0; i < nets_length; i++) {
+ 		if (h->nets[i].cidr[n] != cidr)
+ 			continue;
+-		h->nets[cidr - 1].nets[n]--;
+-		if (h->nets[cidr - 1].nets[n] > 0)
++		h->nets[CIDR_POS(cidr)].nets[n]--;
++		if (h->nets[CIDR_POS(cidr)].nets[n] > 0)
+ 			return;
+ 		for (j = i; j < net_end && h->nets[j].cidr[n]; j++)
+ 			h->nets[j].cidr[n] = h->nets[j + 1].cidr[n];
+diff --git a/net/netfilter/ipset/ip_set_hash_netnet.c b/net/netfilter/ipset/ip_set_hash_netnet.c
+index 3c862c0a76d1..a93dfebffa81 100644
+--- a/net/netfilter/ipset/ip_set_hash_netnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netnet.c
+@@ -131,6 +131,13 @@ hash_netnet4_data_next(struct hash_netnet4_elem *next,
+ #define HOST_MASK	32
+ #include "ip_set_hash_gen.h"
+ 
++static void
++hash_netnet4_init(struct hash_netnet4_elem *e)
++{
++	e->cidr[0] = HOST_MASK;
++	e->cidr[1] = HOST_MASK;
++}
++
+ static int
+ hash_netnet4_kadt(struct ip_set *set, const struct sk_buff *skb,
+ 		  const struct xt_action_param *par,
+@@ -160,7 +167,7 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ {
+ 	const struct hash_netnet *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+-	struct hash_netnet4_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, };
++	struct hash_netnet4_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	u32 ip = 0, ip_to = 0, last;
+ 	u32 ip2 = 0, ip2_from = 0, ip2_to = 0, last2;
+@@ -169,6 +176,7 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	if (tb[IPSET_ATTR_LINENO])
+ 		*lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]);
+ 
++	hash_netnet4_init(&e);
+ 	if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] ||
+ 		     !ip_set_optattr_netorder(tb, IPSET_ATTR_CADT_FLAGS)))
+ 		return -IPSET_ERR_PROTOCOL;
+@@ -357,6 +365,13 @@ hash_netnet6_data_next(struct hash_netnet4_elem *next,
+ #define IP_SET_EMIT_CREATE
+ #include "ip_set_hash_gen.h"
+ 
++static void
++hash_netnet6_init(struct hash_netnet6_elem *e)
++{
++	e->cidr[0] = HOST_MASK;
++	e->cidr[1] = HOST_MASK;
++}
++
+ static int
+ hash_netnet6_kadt(struct ip_set *set, const struct sk_buff *skb,
+ 		  const struct xt_action_param *par,
+@@ -385,13 +400,14 @@ hash_netnet6_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
+ {
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+-	struct hash_netnet6_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, };
++	struct hash_netnet6_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+ 		*lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]);
+ 
++	hash_netnet6_init(&e);
+ 	if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] ||
+ 		     !ip_set_optattr_netorder(tb, IPSET_ATTR_CADT_FLAGS)))
+ 		return -IPSET_ERR_PROTOCOL;
+diff --git a/net/netfilter/ipset/ip_set_hash_netportnet.c b/net/netfilter/ipset/ip_set_hash_netportnet.c
+index 0c68734f5cc4..9a14c237830f 100644
+--- a/net/netfilter/ipset/ip_set_hash_netportnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netportnet.c
+@@ -142,6 +142,13 @@ hash_netportnet4_data_next(struct hash_netportnet4_elem *next,
+ #define HOST_MASK	32
+ #include "ip_set_hash_gen.h"
+ 
++static void
++hash_netportnet4_init(struct hash_netportnet4_elem *e)
++{
++	e->cidr[0] = HOST_MASK;
++	e->cidr[1] = HOST_MASK;
++}
++
+ static int
+ hash_netportnet4_kadt(struct ip_set *set, const struct sk_buff *skb,
+ 		      const struct xt_action_param *par,
+@@ -175,7 +182,7 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ {
+ 	const struct hash_netportnet *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+-	struct hash_netportnet4_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, };
++	struct hash_netportnet4_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	u32 ip = 0, ip_to = 0, ip_last, p = 0, port, port_to;
+ 	u32 ip2_from = 0, ip2_to = 0, ip2_last, ip2;
+@@ -185,6 +192,7 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	if (tb[IPSET_ATTR_LINENO])
+ 		*lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]);
+ 
++	hash_netportnet4_init(&e);
+ 	if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] ||
+ 		     !ip_set_attr_netorder(tb, IPSET_ATTR_PORT) ||
+ 		     !ip_set_optattr_netorder(tb, IPSET_ATTR_PORT_TO) ||
+@@ -412,6 +420,13 @@ hash_netportnet6_data_next(struct hash_netportnet4_elem *next,
+ #define IP_SET_EMIT_CREATE
+ #include "ip_set_hash_gen.h"
+ 
++static void
++hash_netportnet6_init(struct hash_netportnet6_elem *e)
++{
++	e->cidr[0] = HOST_MASK;
++	e->cidr[1] = HOST_MASK;
++}
++
+ static int
+ hash_netportnet6_kadt(struct ip_set *set, const struct sk_buff *skb,
+ 		      const struct xt_action_param *par,
+@@ -445,7 +460,7 @@ hash_netportnet6_uadt(struct ip_set *set, struct nlattr *tb[],
+ {
+ 	const struct hash_netportnet *h = set->data;
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+-	struct hash_netportnet6_elem e = { .cidr = { HOST_MASK, HOST_MASK, }, };
++	struct hash_netportnet6_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	u32 port, port_to;
+ 	bool with_ports = false;
+@@ -454,6 +469,7 @@ hash_netportnet6_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	if (tb[IPSET_ATTR_LINENO])
+ 		*lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]);
+ 
++	hash_netportnet6_init(&e);
+ 	if (unlikely(!tb[IPSET_ATTR_IP] || !tb[IPSET_ATTR_IP2] ||
+ 		     !ip_set_attr_netorder(tb, IPSET_ATTR_PORT) ||
+ 		     !ip_set_optattr_netorder(tb, IPSET_ATTR_PORT_TO) ||
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 3c20d02aee73..0625a42df108 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -320,12 +320,13 @@ out_free:
+ }
+ EXPORT_SYMBOL_GPL(nf_ct_tmpl_alloc);
+ 
+-static void nf_ct_tmpl_free(struct nf_conn *tmpl)
++void nf_ct_tmpl_free(struct nf_conn *tmpl)
+ {
+ 	nf_ct_ext_destroy(tmpl);
+ 	nf_ct_ext_free(tmpl);
+ 	kfree(tmpl);
+ }
++EXPORT_SYMBOL_GPL(nf_ct_tmpl_free);
+ 
+ static void
+ destroy_conntrack(struct nf_conntrack *nfct)
+diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c
+index 675d12c69e32..a5d41dfa9f05 100644
+--- a/net/netfilter/nf_log.c
++++ b/net/netfilter/nf_log.c
+@@ -107,12 +107,17 @@ EXPORT_SYMBOL(nf_log_register);
+ 
+ void nf_log_unregister(struct nf_logger *logger)
+ {
++	const struct nf_logger *log;
+ 	int i;
+ 
+ 	mutex_lock(&nf_log_mutex);
+-	for (i = 0; i < NFPROTO_NUMPROTO; i++)
+-		RCU_INIT_POINTER(loggers[i][logger->type], NULL);
++	for (i = 0; i < NFPROTO_NUMPROTO; i++) {
++		log = nft_log_dereference(loggers[i][logger->type]);
++		if (log == logger)
++			RCU_INIT_POINTER(loggers[i][logger->type], NULL);
++	}
+ 	mutex_unlock(&nf_log_mutex);
++	synchronize_rcu();
+ }
+ EXPORT_SYMBOL(nf_log_unregister);
+ 
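
The nf_log_unregister() fix above only clears a slot when it still points at the logger being removed, then calls synchronize_rcu() so readers that dereferenced the old pointer have finished before the caller may free it. The compare-before-clear half of that pattern, sketched with C11 atomic pointers (struct logger, slots[] and unregister_logger() are made-up names):

#include <stdatomic.h>
#include <stdio.h>

struct logger { const char *name; };

#define NPROTO 4
static _Atomic(struct logger *) slots[NPROTO];

/* only clear slots that actually point at the logger being removed */
static void unregister_logger(struct logger *lg)
{
        int i;

        for (i = 0; i < NPROTO; i++) {
                struct logger *cur = atomic_load(&slots[i]);

                if (cur == lg)
                        atomic_store(&slots[i], NULL);
        }
        /* the kernel additionally calls synchronize_rcu() here, so in-flight
         * readers are done before 'lg' may be freed */
}

int main(void)
{
        struct logger a = { "a" }, b = { "b" };

        atomic_store(&slots[0], &a);
        atomic_store(&slots[1], &b);
        unregister_logger(&a);
        printf("%p %p\n", (void *)atomic_load(&slots[0]),
               (void *)atomic_load(&slots[1]));
        return 0;
}

The comparison is the important part: blindly writing NULL (as the old code did) would also wipe a different logger that had been registered in the meantime.
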
+diff --git a/net/netfilter/nf_synproxy_core.c b/net/netfilter/nf_synproxy_core.c
+index d7f168527903..d6ee8f8b19b6 100644
+--- a/net/netfilter/nf_synproxy_core.c
++++ b/net/netfilter/nf_synproxy_core.c
+@@ -378,7 +378,7 @@ static int __net_init synproxy_net_init(struct net *net)
+ err3:
+ 	free_percpu(snet->stats);
+ err2:
+-	nf_conntrack_free(ct);
++	nf_ct_tmpl_free(ct);
+ err1:
+ 	return err;
+ }
+diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
+index 0c0e8ecf02ab..70277b11f742 100644
+--- a/net/netfilter/nfnetlink.c
++++ b/net/netfilter/nfnetlink.c
+@@ -444,6 +444,7 @@ done:
+ static void nfnetlink_rcv(struct sk_buff *skb)
+ {
+ 	struct nlmsghdr *nlh = nlmsg_hdr(skb);
++	u_int16_t res_id;
+ 	int msglen;
+ 
+ 	if (nlh->nlmsg_len < NLMSG_HDRLEN ||
+@@ -468,7 +469,12 @@ static void nfnetlink_rcv(struct sk_buff *skb)
+ 
+ 		nfgenmsg = nlmsg_data(nlh);
+ 		skb_pull(skb, msglen);
+-		nfnetlink_rcv_batch(skb, nlh, nfgenmsg->res_id);
++		/* Work around old nft using host byte order */
++		if (nfgenmsg->res_id == NFNL_SUBSYS_NFTABLES)
++			res_id = NFNL_SUBSYS_NFTABLES;
++		else
++			res_id = ntohs(nfgenmsg->res_id);
++		nfnetlink_rcv_batch(skb, nlh, res_id);
+ 	} else {
+ 		netlink_rcv_skb(skb, &nfnetlink_rcv_msg);
+ 	}
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 66def315eb56..9c8fab00164b 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -619,6 +619,13 @@ struct nft_xt {
+ 
+ static struct nft_expr_type nft_match_type;
+ 
++static bool nft_match_cmp(const struct xt_match *match,
++			  const char *name, u32 rev, u32 family)
++{
++	return strcmp(match->name, name) == 0 && match->revision == rev &&
++	       (match->family == NFPROTO_UNSPEC || match->family == family);
++}
++
+ static const struct nft_expr_ops *
+ nft_match_select_ops(const struct nft_ctx *ctx,
+ 		     const struct nlattr * const tb[])
+@@ -626,7 +633,7 @@ nft_match_select_ops(const struct nft_ctx *ctx,
+ 	struct nft_xt *nft_match;
+ 	struct xt_match *match;
+ 	char *mt_name;
+-	__u32 rev, family;
++	u32 rev, family;
+ 
+ 	if (tb[NFTA_MATCH_NAME] == NULL ||
+ 	    tb[NFTA_MATCH_REV] == NULL ||
+@@ -641,8 +648,7 @@ nft_match_select_ops(const struct nft_ctx *ctx,
+ 	list_for_each_entry(nft_match, &nft_match_list, head) {
+ 		struct xt_match *match = nft_match->ops.data;
+ 
+-		if (strcmp(match->name, mt_name) == 0 &&
+-		    match->revision == rev && match->family == family) {
++		if (nft_match_cmp(match, mt_name, rev, family)) {
+ 			if (!try_module_get(match->me))
+ 				return ERR_PTR(-ENOENT);
+ 
+@@ -693,6 +699,13 @@ static LIST_HEAD(nft_target_list);
+ 
+ static struct nft_expr_type nft_target_type;
+ 
++static bool nft_target_cmp(const struct xt_target *tg,
++			   const char *name, u32 rev, u32 family)
++{
++	return strcmp(tg->name, name) == 0 && tg->revision == rev &&
++	       (tg->family == NFPROTO_UNSPEC || tg->family == family);
++}
++
+ static const struct nft_expr_ops *
+ nft_target_select_ops(const struct nft_ctx *ctx,
+ 		      const struct nlattr * const tb[])
+@@ -700,7 +713,7 @@ nft_target_select_ops(const struct nft_ctx *ctx,
+ 	struct nft_xt *nft_target;
+ 	struct xt_target *target;
+ 	char *tg_name;
+-	__u32 rev, family;
++	u32 rev, family;
+ 
+ 	if (tb[NFTA_TARGET_NAME] == NULL ||
+ 	    tb[NFTA_TARGET_REV] == NULL ||
+@@ -715,8 +728,7 @@ nft_target_select_ops(const struct nft_ctx *ctx,
+ 	list_for_each_entry(nft_target, &nft_target_list, head) {
+ 		struct xt_target *target = nft_target->ops.data;
+ 
+-		if (strcmp(target->name, tg_name) == 0 &&
+-		    target->revision == rev && target->family == family) {
++		if (nft_target_cmp(target, tg_name, rev, family)) {
+ 			if (!try_module_get(target->me))
+ 				return ERR_PTR(-ENOENT);
+ 
+diff --git a/net/netfilter/xt_CT.c b/net/netfilter/xt_CT.c
+index 43ddeee404e9..f3377ce1ff18 100644
+--- a/net/netfilter/xt_CT.c
++++ b/net/netfilter/xt_CT.c
+@@ -233,7 +233,7 @@ out:
+ 	return 0;
+ 
+ err3:
+-	nf_conntrack_free(ct);
++	nf_ct_tmpl_free(ct);
+ err2:
+ 	nf_ct_l3proto_module_put(par->family);
+ err1:
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+index d25cd430f9ff..95412abc95b0 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+@@ -384,6 +384,7 @@ static int send_reply(struct svcxprt_rdma *rdma,
+ 		      int byte_count)
+ {
+ 	struct ib_send_wr send_wr;
++	u32 xdr_off;
+ 	int sge_no;
+ 	int sge_bytes;
+ 	int page_no;
+@@ -418,8 +419,8 @@ static int send_reply(struct svcxprt_rdma *rdma,
+ 	ctxt->direction = DMA_TO_DEVICE;
+ 
+ 	/* Map the payload indicated by 'byte_count' */
++	xdr_off = 0;
+ 	for (sge_no = 1; byte_count && sge_no < vec->count; sge_no++) {
+-		int xdr_off = 0;
+ 		sge_bytes = min_t(size_t, vec->sge[sge_no].iov_len, byte_count);
+ 		byte_count -= sge_bytes;
+ 		ctxt->sge[sge_no].addr =
+@@ -457,6 +458,13 @@ static int send_reply(struct svcxprt_rdma *rdma,
+ 	}
+ 	rqstp->rq_next_page = rqstp->rq_respages + 1;
+ 
++	/* The loop above bumps sc_dma_used for each sge. The
++	 * xdr_buf.tail gets a separate sge, but resides in the
++	 * same page as xdr_buf.head. Don't count it twice.
++	 */
++	if (sge_no > ctxt->count)
++		atomic_dec(&rdma->sc_dma_used);
++
+ 	if (sge_no > rdma->sc_max_sge) {
+ 		pr_err("svcrdma: Too many sges (%d)\n", sge_no);
+ 		goto err;
+diff --git a/sound/arm/Kconfig b/sound/arm/Kconfig
+index 885683a3b0bd..e0406211716b 100644
+--- a/sound/arm/Kconfig
++++ b/sound/arm/Kconfig
+@@ -9,6 +9,14 @@ menuconfig SND_ARM
+ 	  Drivers that are implemented on ASoC can be found in
+ 	  "ALSA for SoC audio support" section.
+ 
++config SND_PXA2XX_LIB
++	tristate
++	select SND_AC97_CODEC if SND_PXA2XX_LIB_AC97
++	select SND_DMAENGINE_PCM
++
++config SND_PXA2XX_LIB_AC97
++	bool
++
+ if SND_ARM
+ 
+ config SND_ARMAACI
+@@ -21,13 +29,6 @@ config SND_PXA2XX_PCM
+ 	tristate
+ 	select SND_PCM
+ 
+-config SND_PXA2XX_LIB
+-	tristate
+-	select SND_AC97_CODEC if SND_PXA2XX_LIB_AC97
+-
+-config SND_PXA2XX_LIB_AC97
+-	bool
+-
+ config SND_PXA2XX_AC97
+ 	tristate "AC97 driver for the Intel PXA2xx chip"
+ 	depends on ARCH_PXA
+diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
+index 477742cb70a2..58c0aad37284 100644
+--- a/sound/pci/hda/hda_tegra.c
++++ b/sound/pci/hda/hda_tegra.c
+@@ -73,6 +73,7 @@ struct hda_tegra {
+ 	struct clk *hda2codec_2x_clk;
+ 	struct clk *hda2hdmi_clk;
+ 	void __iomem *regs;
++	struct work_struct probe_work;
+ };
+ 
+ #ifdef CONFIG_PM
+@@ -294,7 +295,9 @@ static int hda_tegra_dev_disconnect(struct snd_device *device)
+ static int hda_tegra_dev_free(struct snd_device *device)
+ {
+ 	struct azx *chip = device->device_data;
++	struct hda_tegra *hda = container_of(chip, struct hda_tegra, chip);
+ 
++	cancel_work_sync(&hda->probe_work);
+ 	if (azx_bus(chip)->chip_init) {
+ 		azx_stop_all_streams(chip);
+ 		azx_stop_chip(chip);
+@@ -426,6 +429,9 @@ static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev)
+ /*
+  * constructor
+  */
++
++static void hda_tegra_probe_work(struct work_struct *work);
++
+ static int hda_tegra_create(struct snd_card *card,
+ 			    unsigned int driver_caps,
+ 			    struct hda_tegra *hda)
+@@ -452,6 +458,8 @@ static int hda_tegra_create(struct snd_card *card,
+ 	chip->single_cmd = false;
+ 	chip->snoop = true;
+ 
++	INIT_WORK(&hda->probe_work, hda_tegra_probe_work);
++
+ 	err = azx_bus_init(chip, NULL, &hda_tegra_io_ops);
+ 	if (err < 0)
+ 		return err;
+@@ -499,6 +507,21 @@ static int hda_tegra_probe(struct platform_device *pdev)
+ 	card->private_data = chip;
+ 
+ 	dev_set_drvdata(&pdev->dev, card);
++	schedule_work(&hda->probe_work);
++
++	return 0;
++
++out_free:
++	snd_card_free(card);
++	return err;
++}
++
++static void hda_tegra_probe_work(struct work_struct *work)
++{
++	struct hda_tegra *hda = container_of(work, struct hda_tegra, probe_work);
++	struct azx *chip = &hda->chip;
++	struct platform_device *pdev = to_platform_device(hda->dev);
++	int err;
+ 
+ 	err = hda_tegra_first_init(chip, pdev);
+ 	if (err < 0)
+@@ -520,11 +543,8 @@ static int hda_tegra_probe(struct platform_device *pdev)
+ 	chip->running = 1;
+ 	snd_hda_set_power_save(&chip->bus, power_save * 1000);
+ 
+-	return 0;
+-
+-out_free:
+-	snd_card_free(card);
+-	return err;
++ out_free:
++	return; /* no error return from async probe */
+ }
+ 
+ static int hda_tegra_remove(struct platform_device *pdev)
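
The hda_tegra changes split the probe into a fast registration path and a deferred worker, and the new cancel_work_sync() in dev_free guarantees the worker is not still running when the device is torn down. A rough user-space analogue of that lifecycle using a thread in place of a workqueue (struct device_ctx, probe_work() and the helper names are illustrative only):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct device_ctx {
        pthread_t probe_thread;
        bool probe_started;
};

static void *probe_work(void *arg)
{
        /* the slow part of probing (hardware init, codec setup, ...) runs
         * here, off the registration path */
        printf("async probe finished\n");
        return NULL;
}

static int device_register_fast(struct device_ctx *ctx)
{
        /* quick, non-blocking setup, then kick off the rest asynchronously */
        ctx->probe_started =
                pthread_create(&ctx->probe_thread, NULL, probe_work, ctx) == 0;
        return ctx->probe_started ? 0 : -1;
}

static void device_free(struct device_ctx *ctx)
{
        /* like cancel_work_sync(): never tear down while probe still runs */
        if (ctx->probe_started)
                pthread_join(ctx->probe_thread, NULL);
}

int main(void)
{
        struct device_ctx ctx = { 0 };

        device_register_fast(&ctx);
        device_free(&ctx);
        return 0;
}
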
+diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
+index 584a0343ab0c..85813de26da8 100644
+--- a/sound/pci/hda/patch_cirrus.c
++++ b/sound/pci/hda/patch_cirrus.c
+@@ -633,6 +633,7 @@ static const struct snd_pci_quirk cs4208_mac_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x106b, 0x5e00, "MacBookPro 11,2", CS4208_MBP11),
+ 	SND_PCI_QUIRK(0x106b, 0x7100, "MacBookAir 6,1", CS4208_MBA6),
+ 	SND_PCI_QUIRK(0x106b, 0x7200, "MacBookAir 6,2", CS4208_MBA6),
++	SND_PCI_QUIRK(0x106b, 0x7b00, "MacBookPro 12,1", CS4208_MBP11),
+ 	{} /* terminator */
+ };
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index c8f01ccc2513..6a66139871c6 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4188,6 +4188,24 @@ static void alc_fixup_disable_aamix(struct hda_codec *codec,
+ 	}
+ }
+ 
++/* fixup for Thinkpad docks: add dock pins, avoid HP parser fixup */
++static void alc_fixup_tpt440_dock(struct hda_codec *codec,
++				  const struct hda_fixup *fix, int action)
++{
++	static const struct hda_pintbl pincfgs[] = {
++		{ 0x16, 0x21211010 }, /* dock headphone */
++		{ 0x19, 0x21a11010 }, /* dock mic */
++		{ }
++	};
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++		spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
++		codec->power_save_node = 0; /* avoid click noises */
++		snd_hda_apply_pincfgs(codec, pincfgs);
++	}
++}
++
+ static void alc_shutup_dell_xps13(struct hda_codec *codec)
+ {
+ 	struct alc_spec *spec = codec->spec;
+@@ -4562,7 +4580,6 @@ enum {
+ 	ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC,
+ 	ALC293_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC292_FIXUP_TPT440_DOCK,
+-	ALC292_FIXUP_TPT440_DOCK2,
+ 	ALC283_FIXUP_BXBT2807_MIC,
+ 	ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED,
+ 	ALC282_FIXUP_ASPIRE_V5_PINS,
+@@ -5029,17 +5046,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ 	},
+ 	[ALC292_FIXUP_TPT440_DOCK] = {
+ 		.type = HDA_FIXUP_FUNC,
+-		.v.func = alc269_fixup_pincfg_no_hp_to_lineout,
+-		.chained = true,
+-		.chain_id = ALC292_FIXUP_TPT440_DOCK2
+-	},
+-	[ALC292_FIXUP_TPT440_DOCK2] = {
+-		.type = HDA_FIXUP_PINS,
+-		.v.pins = (const struct hda_pintbl[]) {
+-			{ 0x16, 0x21211010 }, /* dock headphone */
+-			{ 0x19, 0x21a11010 }, /* dock mic */
+-			{ }
+-		},
++		.v.func = alc_fixup_tpt440_dock,
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
+ 	},
+@@ -5299,6 +5306,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad T440", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad X240", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x17aa, 0x2223, "ThinkPad T550", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index 9d947aef2c8b..def5cc8dff02 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -4520,7 +4520,11 @@ static int patch_stac92hd73xx(struct hda_codec *codec)
+ 		return err;
+ 
+ 	spec = codec->spec;
+-	codec->power_save_node = 1;
++	/* enable power_save_node only for new 92HD89xx chips, as it causes
++	 * click noises on old 92HD73xx chips.
++	 */
++	if ((codec->core.vendor_id & 0xfffffff0) != 0x111d7670)
++		codec->power_save_node = 1;
+ 	spec->linear_tone_beep = 0;
+ 	spec->gen.mixer_nid = 0x1d;
+ 	spec->have_spdif_mux = 1;
+diff --git a/sound/soc/au1x/db1200.c b/sound/soc/au1x/db1200.c
+index 58c3164802b8..8c907ebea189 100644
+--- a/sound/soc/au1x/db1200.c
++++ b/sound/soc/au1x/db1200.c
+@@ -129,6 +129,8 @@ static struct snd_soc_dai_link db1300_i2s_dai = {
+ 	.cpu_dai_name	= "au1xpsc_i2s.2",
+ 	.platform_name	= "au1xpsc-pcm.2",
+ 	.codec_name	= "wm8731.0-001b",
++	.dai_fmt	= SND_SOC_DAIFMT_LEFT_J | SND_SOC_DAIFMT_NB_NF |
++			  SND_SOC_DAIFMT_CBM_CFM,
+ 	.ops		= &db1200_i2s_wm8731_ops,
+ };
+ 
+@@ -146,6 +148,8 @@ static struct snd_soc_dai_link db1550_i2s_dai = {
+ 	.cpu_dai_name	= "au1xpsc_i2s.3",
+ 	.platform_name	= "au1xpsc-pcm.3",
+ 	.codec_name	= "wm8731.0-001b",
++	.dai_fmt	= SND_SOC_DAIFMT_LEFT_J | SND_SOC_DAIFMT_NB_NF |
++			  SND_SOC_DAIFMT_CBM_CFM,
+ 	.ops		= &db1200_i2s_wm8731_ops,
+ };
+ 
+diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
+index e673f6ceb521..7c411297bfdd 100644
+--- a/sound/soc/codecs/sgtl5000.c
++++ b/sound/soc/codecs/sgtl5000.c
+@@ -1377,8 +1377,8 @@ static int sgtl5000_probe(struct snd_soc_codec *codec)
+ 			sgtl5000->micbias_resistor << SGTL5000_BIAS_R_SHIFT);
+ 
+ 	snd_soc_update_bits(codec, SGTL5000_CHIP_MIC_CTRL,
+-			SGTL5000_BIAS_R_MASK,
+-			sgtl5000->micbias_voltage << SGTL5000_BIAS_R_SHIFT);
++			SGTL5000_BIAS_VOLT_MASK,
++			sgtl5000->micbias_voltage << SGTL5000_BIAS_VOLT_SHIFT);
+ 	/*
+ 	 * disable DAP
+ 	 * TODO:
+diff --git a/sound/soc/codecs/tas2552.c b/sound/soc/codecs/tas2552.c
+index 4f25a7d0efa2..b3e5685aca1e 100644
+--- a/sound/soc/codecs/tas2552.c
++++ b/sound/soc/codecs/tas2552.c
+@@ -551,7 +551,7 @@ static struct snd_soc_dai_driver tas2552_dai[] = {
+ /*
+  * DAC digital volumes. From -7 to 24 dB in 1 dB steps
+  */
+-static DECLARE_TLV_DB_SCALE(dac_tlv, -7, 100, 0);
++static DECLARE_TLV_DB_SCALE(dac_tlv, -700, 100, 0);
+ 
+ static const char * const tas2552_din_source_select[] = {
+ 	"Muted",
+diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c
+index a3e97b46b64e..0d28e3b356f6 100644
+--- a/sound/soc/dwc/designware_i2s.c
++++ b/sound/soc/dwc/designware_i2s.c
+@@ -131,10 +131,10 @@ static inline void i2s_clear_irqs(struct dw_i2s_dev *dev, u32 stream)
+ 
+ 	if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ 		for (i = 0; i < 4; i++)
+-			i2s_write_reg(dev->i2s_base, TOR(i), 0);
++			i2s_read_reg(dev->i2s_base, TOR(i));
+ 	} else {
+ 		for (i = 0; i < 4; i++)
+-			i2s_write_reg(dev->i2s_base, ROR(i), 0);
++			i2s_read_reg(dev->i2s_base, ROR(i));
+ 	}
+ }
+ 
+diff --git a/sound/soc/pxa/Kconfig b/sound/soc/pxa/Kconfig
+index 39cea80846c3..f2bf8661dd21 100644
+--- a/sound/soc/pxa/Kconfig
++++ b/sound/soc/pxa/Kconfig
+@@ -1,7 +1,6 @@
+ config SND_PXA2XX_SOC
+ 	tristate "SoC Audio for the Intel PXA2xx chip"
+ 	depends on ARCH_PXA
+-	select SND_ARM
+ 	select SND_PXA2XX_LIB
+ 	help
+ 	  Say Y or M if you want to add support for codecs attached to
+@@ -25,7 +24,6 @@ config SND_PXA2XX_AC97
+ config SND_PXA2XX_SOC_AC97
+ 	tristate
+ 	select AC97_BUS
+-	select SND_ARM
+ 	select SND_PXA2XX_LIB_AC97
+ 	select SND_SOC_AC97_BUS
+ 
+diff --git a/sound/soc/pxa/pxa2xx-ac97.c b/sound/soc/pxa/pxa2xx-ac97.c
+index 1f6054650991..9e4b04e0fbd1 100644
+--- a/sound/soc/pxa/pxa2xx-ac97.c
++++ b/sound/soc/pxa/pxa2xx-ac97.c
+@@ -49,7 +49,7 @@ static struct snd_ac97_bus_ops pxa2xx_ac97_ops = {
+ 	.reset	= pxa2xx_ac97_cold_reset,
+ };
+ 
+-static unsigned long pxa2xx_ac97_pcm_stereo_in_req = 12;
++static unsigned long pxa2xx_ac97_pcm_stereo_in_req = 11;
+ static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_in = {
+ 	.addr		= __PREG(PCDR),
+ 	.addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
+@@ -57,7 +57,7 @@ static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_in = {
+ 	.filter_data	= &pxa2xx_ac97_pcm_stereo_in_req,
+ };
+ 
+-static unsigned long pxa2xx_ac97_pcm_stereo_out_req = 11;
++static unsigned long pxa2xx_ac97_pcm_stereo_out_req = 12;
+ static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_out = {
+ 	.addr		= __PREG(PCDR),
+ 	.addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
+diff --git a/sound/synth/emux/emux_oss.c b/sound/synth/emux/emux_oss.c
+index 82e350e9501c..ac75816ada7c 100644
+--- a/sound/synth/emux/emux_oss.c
++++ b/sound/synth/emux/emux_oss.c
+@@ -69,7 +69,8 @@ snd_emux_init_seq_oss(struct snd_emux *emu)
+ 	struct snd_seq_oss_reg *arg;
+ 	struct snd_seq_device *dev;
+ 
+-	if (snd_seq_device_new(emu->card, 0, SNDRV_SEQ_DEV_ID_OSS,
++	/* using device#1 here for avoiding conflicts with OPL3 */
++	if (snd_seq_device_new(emu->card, 1, SNDRV_SEQ_DEV_ID_OSS,
+ 			       sizeof(struct snd_seq_oss_reg), &dev) < 0)
+ 		return;
+ 
+diff --git a/tools/lguest/lguest.c b/tools/lguest/lguest.c
+index e44052483ed9..80159e6811c2 100644
+--- a/tools/lguest/lguest.c
++++ b/tools/lguest/lguest.c
+@@ -125,7 +125,11 @@ struct device_list {
+ /* The list of Guest devices, based on command line arguments. */
+ static struct device_list devices;
+ 
+-struct virtio_pci_cfg_cap {
++/*
++ * Just like struct virtio_pci_cfg_cap in uapi/linux/virtio_pci.h,
++ * but uses a u32 explicitly for the data.
++ */
++struct virtio_pci_cfg_cap_u32 {
+ 	struct virtio_pci_cap cap;
+ 	u32 pci_cfg_data; /* Data for BAR access. */
+ };
+@@ -157,7 +161,7 @@ struct pci_config {
+ 	struct virtio_pci_notify_cap notify;
+ 	struct virtio_pci_cap isr;
+ 	struct virtio_pci_cap device;
+-	struct virtio_pci_cfg_cap cfg_access;
++	struct virtio_pci_cfg_cap_u32 cfg_access;
+ };
+ 
+ /* The device structure describes a single device. */
+@@ -1291,7 +1295,7 @@ static struct device *dev_and_reg(u32 *reg)
+  * only fault if they try to write with some invalid bar/offset/length.
+  */
+ static bool valid_bar_access(struct device *d,
+-			     struct virtio_pci_cfg_cap *cfg_access)
++			     struct virtio_pci_cfg_cap_u32 *cfg_access)
+ {
+ 	/* We only have 1 bar (BAR0) */
+ 	if (cfg_access->cap.bar != 0)
+diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
+index cc25f059ab3d..a843bee66a4f 100644
+--- a/tools/lib/traceevent/event-parse.c
++++ b/tools/lib/traceevent/event-parse.c
+@@ -3721,7 +3721,7 @@ static void print_str_arg(struct trace_seq *s, void *data, int size,
+ 	struct format_field *field;
+ 	struct printk_map *printk;
+ 	long long val, fval;
+-	unsigned long addr;
++	unsigned long long addr;
+ 	char *str;
+ 	unsigned char *hex;
+ 	int print;
+@@ -3754,13 +3754,30 @@ static void print_str_arg(struct trace_seq *s, void *data, int size,
+ 		 */
+ 		if (!(field->flags & FIELD_IS_ARRAY) &&
+ 		    field->size == pevent->long_size) {
+-			addr = *(unsigned long *)(data + field->offset);
++
++			/* Handle heterogeneous recording and processing
++			 * architectures
++			 *
++			 * CASE I:
++			 * Traces recorded on 32-bit devices (32-bit
++			 * addressing) and processed on 64-bit devices:
++			 * In this case, only 32 bits should be read.
++			 *
++			 * CASE II:
++			 * Traces recorded on 64 bit devices and processed
++			 * on 32-bit devices:
++			 * In this case, 64 bits must be read.
++			 */
++			addr = (pevent->long_size == 8) ?
++				*(unsigned long long *)(data + field->offset) :
++				(unsigned long long)*(unsigned int *)(data + field->offset);
++
+ 			/* Check if it matches a print format */
+ 			printk = find_printk(pevent, addr);
+ 			if (printk)
+ 				trace_seq_puts(s, printk->printk);
+ 			else
+-				trace_seq_printf(s, "%lx", addr);
++				trace_seq_printf(s, "%llx", addr);
+ 			break;
+ 		}
+ 		str = malloc(len + 1);
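
This event-parse.c hunk widens a pointer-sized trace field using the long size recorded in the trace, not the parser's own, so 32-bit traces decode correctly on 64-bit hosts and vice versa. A simplified host-side sketch of the same decode (read_field_addr() and the raw[] buffer are invented; it also assumes a little-endian recording, whereas the real parser additionally byte-swaps when needed):

#include <stdio.h>
#include <string.h>

/* decode a pointer-sized trace field given the *recorded* long size */
static unsigned long long read_field_addr(const void *data, int long_size)
{
        unsigned long long addr;

        if (long_size == 8) {
                memcpy(&addr, data, 8);         /* 64-bit recorder */
        } else {
                unsigned int addr32;

                memcpy(&addr32, data, 4);       /* 32-bit recorder */
                addr = addr32;                  /* zero-extend, never sign-extend */
        }
        return addr;
}

int main(void)
{
        unsigned char raw[8] = { 0x78, 0x56, 0x34, 0x12, 0, 0, 0, 0 };

        printf("%llx\n", read_field_addr(raw, 4));      /* 12345678 */
        printf("%llx\n", read_field_addr(raw, 8));      /* 12345678 */
        return 0;
}
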
+diff --git a/tools/perf/arch/alpha/Build b/tools/perf/arch/alpha/Build
+new file mode 100644
+index 000000000000..1bb8bf6d7fd4
+--- /dev/null
++++ b/tools/perf/arch/alpha/Build
+@@ -0,0 +1 @@
++# empty
+diff --git a/tools/perf/arch/mips/Build b/tools/perf/arch/mips/Build
+new file mode 100644
+index 000000000000..1bb8bf6d7fd4
+--- /dev/null
++++ b/tools/perf/arch/mips/Build
+@@ -0,0 +1 @@
++# empty
+diff --git a/tools/perf/arch/parisc/Build b/tools/perf/arch/parisc/Build
+new file mode 100644
+index 000000000000..1bb8bf6d7fd4
+--- /dev/null
++++ b/tools/perf/arch/parisc/Build
+@@ -0,0 +1 @@
++# empty
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index d99d850e1444..ef355fc0e870 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -694,7 +694,7 @@ static void abs_printout(int id, int nr, struct perf_evsel *evsel, double avg)
+ static void print_aggr(char *prefix)
+ {
+ 	struct perf_evsel *counter;
+-	int cpu, cpu2, s, s2, id, nr;
++	int cpu, s, s2, id, nr;
+ 	double uval;
+ 	u64 ena, run, val;
+ 
+@@ -707,8 +707,7 @@ static void print_aggr(char *prefix)
+ 			val = ena = run = 0;
+ 			nr = 0;
+ 			for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
+-				cpu2 = perf_evsel__cpus(counter)->map[cpu];
+-				s2 = aggr_get_id(evsel_list->cpus, cpu2);
++				s2 = aggr_get_id(perf_evsel__cpus(counter), cpu);
+ 				if (s2 != id)
+ 					continue;
+ 				val += perf_counts(counter->counts, cpu, 0)->val;
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index 03ace57a800c..4215cc155041 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -1442,7 +1442,7 @@ static int process_nrcpus(struct perf_file_section *section __maybe_unused,
+ 	if (ph->needs_swap)
+ 		nr = bswap_32(nr);
+ 
+-	ph->env.nr_cpus_online = nr;
++	ph->env.nr_cpus_avail = nr;
+ 
+ 	ret = readn(fd, &nr, sizeof(nr));
+ 	if (ret != sizeof(nr))
+@@ -1451,7 +1451,7 @@ static int process_nrcpus(struct perf_file_section *section __maybe_unused,
+ 	if (ph->needs_swap)
+ 		nr = bswap_32(nr);
+ 
+-	ph->env.nr_cpus_avail = nr;
++	ph->env.nr_cpus_online = nr;
+ 	return 0;
+ }
+ 
+diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
+index 6f28d53d4e46..f298c696e24f 100644
+--- a/tools/perf/util/hist.c
++++ b/tools/perf/util/hist.c
+@@ -151,6 +151,9 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
+ 	hists__new_col_len(hists, HISTC_LOCAL_WEIGHT, 12);
+ 	hists__new_col_len(hists, HISTC_GLOBAL_WEIGHT, 12);
+ 
++	if (h->srcline)
++		hists__new_col_len(hists, HISTC_SRCLINE, strlen(h->srcline));
++
+ 	if (h->transaction)
+ 		hists__new_col_len(hists, HISTC_TRANSACTION,
+ 				   hist_entry__transaction_len());
+diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
+index 591905a02b92..9cd70819c795 100644
+--- a/tools/perf/util/parse-events.y
++++ b/tools/perf/util/parse-events.y
+@@ -255,7 +255,7 @@ PE_PMU_EVENT_PRE '-' PE_PMU_EVENT_SUF sep_dc
+ 	list_add_tail(&term->list, head);
+ 
+ 	ALLOC_LIST(list);
+-	ABORT_ON(parse_events_add_pmu(list, &data->idx, "cpu", head));
++	ABORT_ON(parse_events_add_pmu(data, list, "cpu", head));
+ 	parse_events__free_terms(head);
+ 	$$ = list;
+ }
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index 381f23a443c7..ae6351db6de4 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -274,12 +274,13 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
+ 	int ret = 0;
+ 
+ 	if (module) {
+-		list_for_each_entry(dso, &host_machine->dsos.head, node) {
+-			if (!dso->kernel)
+-				continue;
+-			if (strncmp(dso->short_name + 1, module,
+-				    dso->short_name_len - 2) == 0)
+-				goto found;
++		char module_name[128];
++
++		snprintf(module_name, sizeof(module_name), "[%s]", module);
++		map = map_groups__find_by_name(&host_machine->kmaps, MAP__FUNCTION, module_name);
++		if (map) {
++			dso = map->dso;
++			goto found;
+ 		}
+ 		pr_debug("Failed to find module %s.\n", module);
+ 		return -ENOENT;
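
The hunk above replaces a linear scan over every kernel DSO with a direct lookup of the module's map, keyed by the bracketed short name under which module maps are registered. A tiny sketch of that name construction (module_map_name is a made-up helper for illustration):

        #include <stdio.h>

        /* "ext4" -> "[ext4]", the short-name form used for kernel module maps. */
        static void module_map_name(const char *module, char *buf, size_t len)
        {
                snprintf(buf, len, "[%s]", module);
        }
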
+diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h
+index 31db6ee7db54..cd55c6db421d 100644
+--- a/tools/perf/util/probe-event.h
++++ b/tools/perf/util/probe-event.h
+@@ -106,6 +106,8 @@ struct variable_list {
+ 	struct strlist			*vars;	/* Available variables */
+ };
+ 
++struct map;
++
+ /* Command string to events */
+ extern int parse_perf_probe_command(const char *cmd,
+ 				    struct perf_probe_event *pev);
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 65f7e389ae09..333858821ab0 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -1260,8 +1260,6 @@ out_close:
+ static int kcore__init(struct kcore *kcore, char *filename, int elfclass,
+ 		       bool temp)
+ {
+-	GElf_Ehdr *ehdr;
+-
+ 	kcore->elfclass = elfclass;
+ 
+ 	if (temp)
+@@ -1278,9 +1276,7 @@ static int kcore__init(struct kcore *kcore, char *filename, int elfclass,
+ 	if (!gelf_newehdr(kcore->elf, elfclass))
+ 		goto out_end;
+ 
+-	ehdr = gelf_getehdr(kcore->elf, &kcore->ehdr);
+-	if (!ehdr)
+-		goto out_end;
++	memset(&kcore->ehdr, 0, sizeof(GElf_Ehdr));
+ 
+ 	return 0;
+ 
+@@ -1337,23 +1333,18 @@ static int kcore__copy_hdr(struct kcore *from, struct kcore *to, size_t count)
+ static int kcore__add_phdr(struct kcore *kcore, int idx, off_t offset,
+ 			   u64 addr, u64 len)
+ {
+-	GElf_Phdr gphdr;
+-	GElf_Phdr *phdr;
+-
+-	phdr = gelf_getphdr(kcore->elf, idx, &gphdr);
+-	if (!phdr)
+-		return -1;
+-
+-	phdr->p_type	= PT_LOAD;
+-	phdr->p_flags	= PF_R | PF_W | PF_X;
+-	phdr->p_offset	= offset;
+-	phdr->p_vaddr	= addr;
+-	phdr->p_paddr	= 0;
+-	phdr->p_filesz	= len;
+-	phdr->p_memsz	= len;
+-	phdr->p_align	= page_size;
+-
+-	if (!gelf_update_phdr(kcore->elf, idx, phdr))
++	GElf_Phdr phdr = {
++		.p_type		= PT_LOAD,
++		.p_flags	= PF_R | PF_W | PF_X,
++		.p_offset	= offset,
++		.p_vaddr	= addr,
++		.p_paddr	= 0,
++		.p_filesz	= len,
++		.p_memsz	= len,
++		.p_align	= page_size,
++	};
++
++	if (!gelf_update_phdr(kcore->elf, idx, &phdr))
+ 		return -1;
+ 
+ 	return 0;
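
Both hunks above stop reading headers back out of a freshly created ELF object: the ELF header is simply zeroed, and the program header is now built locally and handed straight to gelf_update_phdr(). A standalone sketch of that pattern (assumes libelf's gelf.h; error handling trimmed):

        #include <gelf.h>
        #include <stdint.h>

        static int add_load_phdr(Elf *elf, int idx, off_t offset,
                                 uint64_t addr, uint64_t len, uint64_t align)
        {
                GElf_Phdr phdr = {
                        .p_type   = PT_LOAD,
                        .p_flags  = PF_R | PF_W | PF_X,
                        .p_offset = offset,
                        .p_vaddr  = addr,
                        .p_filesz = len,
                        .p_memsz  = len,
                        .p_align  = align,
                };

                /* gelf_update_phdr() returns zero on failure. */
                return gelf_update_phdr(elf, idx, &phdr) ? 0 : -1;
        }
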
+diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
+index 9ff4193dfa49..79db45336e3a 100644
+--- a/virt/kvm/eventfd.c
++++ b/virt/kvm/eventfd.c
+@@ -771,40 +771,14 @@ static enum kvm_bus ioeventfd_bus_from_flags(__u32 flags)
+ 	return KVM_MMIO_BUS;
+ }
+ 
+-static int
+-kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
++static int kvm_assign_ioeventfd_idx(struct kvm *kvm,
++				enum kvm_bus bus_idx,
++				struct kvm_ioeventfd *args)
+ {
+-	enum kvm_bus              bus_idx;
+-	struct _ioeventfd        *p;
+-	struct eventfd_ctx       *eventfd;
+-	int                       ret;
+-
+-	bus_idx = ioeventfd_bus_from_flags(args->flags);
+-	/* must be natural-word sized, or 0 to ignore length */
+-	switch (args->len) {
+-	case 0:
+-	case 1:
+-	case 2:
+-	case 4:
+-	case 8:
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	/* check for range overflow */
+-	if (args->addr + args->len < args->addr)
+-		return -EINVAL;
+ 
+-	/* check for extra flags that we don't understand */
+-	if (args->flags & ~KVM_IOEVENTFD_VALID_FLAG_MASK)
+-		return -EINVAL;
+-
+-	/* ioeventfd with no length can't be combined with DATAMATCH */
+-	if (!args->len &&
+-	    args->flags & (KVM_IOEVENTFD_FLAG_PIO |
+-			   KVM_IOEVENTFD_FLAG_DATAMATCH))
+-		return -EINVAL;
++	struct eventfd_ctx *eventfd;
++	struct _ioeventfd *p;
++	int ret;
+ 
+ 	eventfd = eventfd_ctx_fdget(args->fd);
+ 	if (IS_ERR(eventfd))
+@@ -843,16 +817,6 @@ kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
+ 	if (ret < 0)
+ 		goto unlock_fail;
+ 
+-	/* When length is ignored, MMIO is also put on a separate bus, for
+-	 * faster lookups.
+-	 */
+-	if (!args->len && !(args->flags & KVM_IOEVENTFD_FLAG_PIO)) {
+-		ret = kvm_io_bus_register_dev(kvm, KVM_FAST_MMIO_BUS,
+-					      p->addr, 0, &p->dev);
+-		if (ret < 0)
+-			goto register_fail;
+-	}
+-
+ 	kvm->buses[bus_idx]->ioeventfd_count++;
+ 	list_add_tail(&p->list, &kvm->ioeventfds);
+ 
+@@ -860,8 +824,6 @@ kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
+ 
+ 	return 0;
+ 
+-register_fail:
+-	kvm_io_bus_unregister_dev(kvm, bus_idx, &p->dev);
+ unlock_fail:
+ 	mutex_unlock(&kvm->slots_lock);
+ 
+@@ -873,14 +835,13 @@ fail:
+ }
+ 
+ static int
+-kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
++kvm_deassign_ioeventfd_idx(struct kvm *kvm, enum kvm_bus bus_idx,
++			   struct kvm_ioeventfd *args)
+ {
+-	enum kvm_bus              bus_idx;
+ 	struct _ioeventfd        *p, *tmp;
+ 	struct eventfd_ctx       *eventfd;
+ 	int                       ret = -ENOENT;
+ 
+-	bus_idx = ioeventfd_bus_from_flags(args->flags);
+ 	eventfd = eventfd_ctx_fdget(args->fd);
+ 	if (IS_ERR(eventfd))
+ 		return PTR_ERR(eventfd);
+@@ -901,10 +862,6 @@ kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
+ 			continue;
+ 
+ 		kvm_io_bus_unregister_dev(kvm, bus_idx, &p->dev);
+-		if (!p->length) {
+-			kvm_io_bus_unregister_dev(kvm, KVM_FAST_MMIO_BUS,
+-						  &p->dev);
+-		}
+ 		kvm->buses[bus_idx]->ioeventfd_count--;
+ 		ioeventfd_release(p);
+ 		ret = 0;
+@@ -918,6 +875,71 @@ kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
+ 	return ret;
+ }
+ 
++static int kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
++{
++	enum kvm_bus bus_idx = ioeventfd_bus_from_flags(args->flags);
++	int ret = kvm_deassign_ioeventfd_idx(kvm, bus_idx, args);
++
++	if (!args->len && bus_idx == KVM_MMIO_BUS)
++		kvm_deassign_ioeventfd_idx(kvm, KVM_FAST_MMIO_BUS, args);
++
++	return ret;
++}
++
++static int
++kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
++{
++	enum kvm_bus              bus_idx;
++	int ret;
++
++	bus_idx = ioeventfd_bus_from_flags(args->flags);
++	/* must be natural-word sized, or 0 to ignore length */
++	switch (args->len) {
++	case 0:
++	case 1:
++	case 2:
++	case 4:
++	case 8:
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	/* check for range overflow */
++	if (args->addr + args->len < args->addr)
++		return -EINVAL;
++
++	/* check for extra flags that we don't understand */
++	if (args->flags & ~KVM_IOEVENTFD_VALID_FLAG_MASK)
++		return -EINVAL;
++
++	/* ioeventfd with no length can't be combined with DATAMATCH */
++	if (!args->len &&
++	    args->flags & (KVM_IOEVENTFD_FLAG_PIO |
++			   KVM_IOEVENTFD_FLAG_DATAMATCH))
++		return -EINVAL;
++
++	ret = kvm_assign_ioeventfd_idx(kvm, bus_idx, args);
++	if (ret)
++		goto fail;
++
++	/* When length is ignored, MMIO is also put on a separate bus, for
++	 * faster lookups.
++	 */
++	if (!args->len && bus_idx == KVM_MMIO_BUS) {
++		ret = kvm_assign_ioeventfd_idx(kvm, KVM_FAST_MMIO_BUS, args);
++		if (ret < 0)
++			goto fast_fail;
++	}
++
++	return 0;
++
++fast_fail:
++	kvm_deassign_ioeventfd_idx(kvm, bus_idx, args);
++fail:
++	return ret;
++}
++
+ int
+ kvm_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
+ {
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 8b8a44453670..5a2a78a91d58 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -3080,10 +3080,25 @@ static void kvm_io_bus_destroy(struct kvm_io_bus *bus)
+ static inline int kvm_io_bus_cmp(const struct kvm_io_range *r1,
+ 				 const struct kvm_io_range *r2)
+ {
+-	if (r1->addr < r2->addr)
++	gpa_t addr1 = r1->addr;
++	gpa_t addr2 = r2->addr;
++
++	if (addr1 < addr2)
+ 		return -1;
+-	if (r1->addr + r1->len > r2->addr + r2->len)
++
++	/* If r2->len == 0, match the exact address.  If r2->len != 0,
++	 * accept any overlapping write.  Any order is acceptable for
++	 * overlapping ranges, because kvm_io_bus_get_first_dev ensures
++	 * we process all of them.
++	 */
++	if (r2->len) {
++		addr1 += r1->len;
++		addr2 += r2->len;
++	}
++
++	if (addr1 > addr2)
+ 		return 1;
++
+ 	return 0;
+ }
+ 
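
The comparator above now distinguishes the two kinds of lookup key, as the in-code comment explains: a key with len == 0 matches only the exact start address, while a key with a non-zero length matches any overlapping range. A plain-C mock-up of that ordering rule (struct range and range_cmp are invented here purely for illustration):

        #include <stdint.h>
        #include <stdio.h>

        struct range { uint64_t addr; uint64_t len; };

        /* Same rule as the patched kvm_io_bus_cmp(): r2 is the search key. */
        static int range_cmp(const struct range *r1, const struct range *r2)
        {
                uint64_t addr1 = r1->addr, addr2 = r2->addr;

                if (addr1 < addr2)
                        return -1;
                if (r2->len) {                  /* non-zero key: compare range ends */
                        addr1 += r1->len;
                        addr2 += r2->len;
                }
                if (addr1 > addr2)
                        return 1;
                return 0;                       /* exact-address match or overlap */
        }

        int main(void)
        {
                struct range dev0 = { .addr = 0x1000, .len = 0 };   /* zero-length registration */
                struct range dev4 = { .addr = 0x0ff8, .len = 16 };
                struct range wr   = { .addr = 0x1000, .len = 4 };   /* incoming 4-byte write */
                struct range ex   = { .addr = 0x1000, .len = 0 };   /* exact-address probe */

                printf("%d\n", range_cmp(&dev0, &wr));  /*  0: overlapping write matches */
                printf("%d\n", range_cmp(&dev0, &ex));  /*  0: same start address matches */
                printf("%d\n", range_cmp(&dev4, &ex));  /* -1: zero-length key wants the exact start */
                return 0;
        }
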



Thread overview: 17+ messages
2015-10-23 17:14 Mike Pagano [this message]
  -- strict thread matches above, loose matches on Subject: below --
2015-12-15 11:15 [gentoo-commits] proj/linux-patches:4.2 commit in: / Mike Pagano
2015-12-11 14:31 Mike Pagano
2015-11-10  0:58 Mike Pagano
2015-11-05 23:30 Mike Pagano
2015-10-27 13:36 Mike Pagano
2015-10-23 17:19 Mike Pagano
2015-10-03 16:12 Mike Pagano
2015-09-29 19:16 Mike Pagano
2015-09-29 17:51 Mike Pagano
2015-09-28 23:44 Mike Pagano
2015-09-28 16:49 Mike Pagano
2015-09-22 11:43 Mike Pagano
2015-09-21 22:19 Mike Pagano
2015-09-15 12:31 Mike Pagano
2015-09-02 16:34 Mike Pagano
2015-08-19 14:58 Mike Pagano
