public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-08-22 15:06 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-08-22 15:06 UTC (permalink / raw)
  To: gentoo-commits

commit:     3588a4790a924cb888ff5717434c6db1b034cfa3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 22 15:06:29 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug 22 15:06:29 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3588a479

Gentoo Linux support config settings and defaults. Patch to add support for namespace user.pax.* on tmpfs. Patch to enable link security restrictions by default.
Patch to ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs. Patch to enable control of the unaligned access control policy from sysctl.

 0000_README                                        |  24 ++
 1500_XATTR_USER_PREFIX.patch                       |  69 ++++
 ...ble-link-security-restrictions-by-default.patch |  22 ++
 2900_dev-root-proc-mount-fix.patch                 |  38 ++
 4400_alpha-sysctl-uac.patch                        | 142 +++++++
 ...-additional-cpu-optimizations-for-gcc-4.9.patch | 426 +++++++++++++++++++++
 6 files changed, 721 insertions(+)

diff --git a/0000_README b/0000_README
index 9018993..777f7c8 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,30 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1500_XATTR_USER_PREFIX.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc:   Support for namespace user.pax.* on tmpfs.
+
+Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
+From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc:   Enable link security restrictions by default.
+
+Patch:  2900_dev-root-proc-mount-fix.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=438380
+Desc:   Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
+
+Patch:  4400_alpha-sysctl-uac.patch
+From:   Tobias Klausmann (klausman@gentoo.org) and http://bugs.gentoo.org/show_bug.cgi?id=217323 
+Desc:   Enable control of the unaligned access control policy from sysctl
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
+
+Patch:  5000_enable-additional-cpu-optimizations-for-gcc.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch enables gcc < v4.9 optimizations for additional CPUs.
+
+Patch:  5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
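
The numbered patches listed above are what gentoo-sources applies on top of the
vanilla tree, in ascending numeric order. A minimal sketch of applying them by
hand; the kernel source directory and the patch checkout path are placeholders:

    cd /usr/src/linux-4.8
    for p in /path/to/linux-patches/*.patch; do
        patch -p1 < "$p"
    done

0000_README itself is not a patch and is excluded by the .patch glob above.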

diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..bacd032
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,69 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags.  The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed on all Gentoo systems, even non-hardened ones, so that
+XATTR_PAX flags are preserved for users who might build packages using
+portage on a tmpfs filesystem with a non-hardened kernel and then switch
+to a hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs.  Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT  "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+ 
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+ 
+ #endif /* _UAPI_LINUX_XATTR_H */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 440e2a7..c377172 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2667,6 +2667,14 @@ static int shmem_xattr_handler_set(const struct xattr_handler *handler,
+ 	struct shmem_inode_info *info = SHMEM_I(d_inode(dentry));
+ 
+ 	name = xattr_full_name(handler, name);
++
++	if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++		if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++			return -EOPNOTSUPP;
++		if (size > 8)
++			return -EINVAL;
++	}
++
+ 	return simple_xattr_set(&info->xattrs, name, value, size, flags);
+ }
+ 
+@@ -2682,6 +2690,12 @@ static const struct xattr_handler shmem_trusted_xattr_handler = {
+ 	.set = shmem_xattr_handler_set,
+ };
+ 
++static const struct xattr_handler shmem_user_xattr_handler = {
++	.prefix = XATTR_USER_PREFIX,
++	.get = shmem_xattr_handler_get,
++	.set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ 	&posix_acl_access_xattr_handler,
+@@ -2689,6 +2703,7 @@ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #endif
+ 	&shmem_security_xattr_handler,
+ 	&shmem_trusted_xattr_handler,
++	&shmem_user_xattr_handler,
+ 	NULL
+ };
+ 
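
A quick way to exercise the behaviour added above, assuming CONFIG_TMPFS_XATTR
is enabled and the attr userspace tools (setfattr/getfattr) are installed; the
mount point, file name and flag value are arbitrary examples:

    mount -t tmpfs tmpfs /mnt/test
    touch /mnt/test/demo
    setfattr -n user.pax.flags -v "em" /mnt/test/demo          # accepted, value <= 8 bytes
    getfattr -n user.pax.flags /mnt/test/demo
    setfattr -n user.pax.flags -v "123456789" /mnt/test/demo   # fails with EINVAL (> 8 bytes)
    setfattr -n user.foo -v "x" /mnt/test/demo                 # fails with EOPNOTSUPP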

diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..639fb3c
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,22 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -651,8 +651,8 @@ static inline void put_link(struct namei
+ 	path_put(link);
+ }
+ 
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+ 
+ /**
+  * may_follow_link - Check symlink following for unsafe situations
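
With the defaults flipped to 1 as above, the behaviour can still be inspected
or reverted at runtime through the fs.protected_* sysctls; a sketch:

    sysctl fs.protected_symlinks fs.protected_hardlinks
    # temporarily restore the old (unrestricted) behaviour:
    sysctl -w fs.protected_symlinks=0 fs.protected_hardlinks=0
    # or persistently via /etc/sysctl.conf:
    #   fs.protected_symlinks = 0
    #   fs.protected_hardlinks = 0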

diff --git a/2900_dev-root-proc-mount-fix.patch b/2900_dev-root-proc-mount-fix.patch
new file mode 100644
index 0000000..60af1eb
--- /dev/null
+++ b/2900_dev-root-proc-mount-fix.patch
@@ -0,0 +1,38 @@
+--- a/init/do_mounts.c	2015-08-19 10:27:16.753852576 -0400
++++ b/init/do_mounts.c	2015-08-19 10:34:25.473850353 -0400
+@@ -490,7 +490,11 @@ void __init change_floppy(char *fmt, ...
+ 	va_start(args, fmt);
+ 	vsprintf(buf, fmt, args);
+ 	va_end(args);
+-	fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++	if (saved_root_name[0])
++		fd = sys_open(saved_root_name, O_RDWR | O_NDELAY, 0);
++	else
++		fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++
+ 	if (fd >= 0) {
+ 		sys_ioctl(fd, FDEJECT, 0);
+ 		sys_close(fd);
+@@ -534,11 +538,17 @@ void __init mount_root(void)
+ #endif
+ #ifdef CONFIG_BLOCK
+ 	{
+-		int err = create_dev("/dev/root", ROOT_DEV);
+-
+-		if (err < 0)
+-			pr_emerg("Failed to create /dev/root: %d\n", err);
+-		mount_block_root("/dev/root", root_mountflags);
++		if (saved_root_name[0] == '/') {
++	       	int err = create_dev(saved_root_name, ROOT_DEV);
++			if (err < 0)
++				pr_emerg("Failed to create %s: %d\n", saved_root_name, err);
++			mount_block_root(saved_root_name, root_mountflags);
++		} else {
++			int err = create_dev("/dev/root", ROOT_DEV);
++			if (err < 0)
++				pr_emerg("Failed to create /dev/root: %d\n", err);
++			mount_block_root("/dev/root", root_mountflags);
++		}
+ 	}
+ #endif
+ }
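
The visible effect of the change above is that the real root device name shows
up in /proc/mounts when booting without an initramfs; a quick check (the device
name below is only an example):

    grep ' / ' /proc/mounts
    # patched:   /dev/sda3 / ext4 rw,noatime 0 0
    # unpatched: /dev/root / ext4 rw,noatime 0 0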

diff --git a/4400_alpha-sysctl-uac.patch b/4400_alpha-sysctl-uac.patch
new file mode 100644
index 0000000..d42b4ed
--- /dev/null
+++ b/4400_alpha-sysctl-uac.patch
@@ -0,0 +1,142 @@
+diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
+index 7f312d8..1eb686b 100644
+--- a/arch/alpha/Kconfig
++++ b/arch/alpha/Kconfig
+@@ -697,6 +697,33 @@ config HZ
+ 	default 1200 if HZ_1200
+ 	default 1024
+
++config ALPHA_UAC_SYSCTL
++       bool "Configure UAC policy via sysctl"
++       depends on SYSCTL
++       default y
++       ---help---
++         Configuring the UAC (unaligned access control) policy on a Linux
++         system usually involves setting a compile time define. If you say
++         Y here, you will be able to modify the UAC policy at runtime using
++         the /proc interface.
++
++         The UAC policy defines the action Linux should take when an
++         unaligned memory access occurs. The action can include printing a
++         warning message (NOPRINT), sending a signal to the offending
++         program to help developers debug their applications (SIGBUS), or
++         disabling the transparent fixing (NOFIX).
++
++         The sysctls will be initialized to the compile-time defined UAC
++         policy. You can change these manually, or with the sysctl(8)
++         userspace utility.
++
++         To disable the warning messages at runtime, you would use
++
++           echo 1 > /proc/sys/kernel/uac/noprint
++
++         This is pretty harmless. Say Y if you're not sure.
++
++
+ source "drivers/pci/Kconfig"
+ source "drivers/eisa/Kconfig"
+
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index 74aceea..cb35d80 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -103,6 +103,49 @@ static char * ireg_name[] = {"v0", "t0", "t1", "t2", "t3", "t4", "t5", "t6",
+ 			   "t10", "t11", "ra", "pv", "at", "gp", "sp", "zero"};
+ #endif
+
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++
++#include <linux/sysctl.h>
++
++static int enabled_noprint = 0;
++static int enabled_sigbus = 0;
++static int enabled_nofix = 0;
++
++struct ctl_table uac_table[] = {
++       {
++               .procname       = "noprint",
++               .data           = &enabled_noprint,
++               .maxlen         = sizeof (int),
++               .mode           = 0644,
++               .proc_handler = &proc_dointvec,
++       },
++       {
++               .procname       = "sigbus",
++               .data           = &enabled_sigbus,
++               .maxlen         = sizeof (int),
++               .mode           = 0644,
++               .proc_handler = &proc_dointvec,
++       },
++       {
++               .procname       = "nofix",
++               .data           = &enabled_nofix,
++               .maxlen         = sizeof (int),
++               .mode           = 0644,
++               .proc_handler = &proc_dointvec,
++       },
++       { }
++};
++
++static int __init init_uac_sysctl(void)
++{
++   /* Initialize sysctls with the #defined UAC policy */
++   enabled_noprint = (test_thread_flag (TS_UAC_NOPRINT)) ? 1 : 0;
++   enabled_sigbus = (test_thread_flag (TS_UAC_SIGBUS)) ? 1 : 0;
++   enabled_nofix = (test_thread_flag (TS_UAC_NOFIX)) ? 1 : 0;
++   return 0;
++}
++#endif
++
+ static void
+ dik_show_code(unsigned int *pc)
+ {
+@@ -785,7 +828,12 @@ do_entUnaUser(void __user * va, unsigned long opcode,
+ 	/* Check the UAC bits to decide what the user wants us to do
+ 	   with the unaliged access.  */
+
++#ifndef CONFIG_ALPHA_UAC_SYSCTL
+ 	if (!(current_thread_info()->status & TS_UAC_NOPRINT)) {
++#else  /* CONFIG_ALPHA_UAC_SYSCTL */
++	if (!(current_thread_info()->status & TS_UAC_NOPRINT) &&
++	    !(enabled_noprint)) {
++#endif /* CONFIG_ALPHA_UAC_SYSCTL */
+ 		if (__ratelimit(&ratelimit)) {
+ 			printk("%s(%d): unaligned trap at %016lx: %p %lx %ld\n",
+ 			       current->comm, task_pid_nr(current),
+@@ -1090,3 +1138,6 @@ trap_init(void)
+ 	wrent(entSys, 5);
+ 	wrent(entDbg, 6);
+ }
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++       __initcall(init_uac_sysctl);
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 87b2fc3..55021a8 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -152,6 +152,11 @@ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
++
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++extern struct ctl_table uac_table[];
++#endif
++
+ #ifdef CONFIG_SPARC
+ #endif
+
+@@ -1844,6 +1849,13 @@ static struct ctl_table debug_table[] = {
+ 		.extra2		= &one,
+ 	},
+ #endif
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++	{
++	        .procname   = "uac",
++		.mode       = 0555,
++	        .child      = uac_table,
++	 },
++#endif /* CONFIG_ALPHA_UAC_SYSCTL */
+ 	{ }
+ };
+
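
Note that the Kconfig help text above refers to /proc/sys/kernel/uac/, while
the sysctl.c hunk registers the uac table under debug_table, so on a built
kernel the knobs may surface under /proc/sys/debug/uac/ instead. A sketch of
using them, assuming the debug path:

    # inspect the current unaligned-access policy
    cat /proc/sys/debug/uac/noprint /proc/sys/debug/uac/sigbus /proc/sys/debug/uac/nofix
    # silence unaligned-access warnings at runtime
    echo 1 > /proc/sys/debug/uac/noprint
    # equivalently via sysctl(8)
    sysctl -w debug.uac.noprint=1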

diff --git a/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
new file mode 100644
index 0000000..d9729b2
--- /dev/null
+++ b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
@@ -0,0 +1,426 @@
+WARNING - this version of the patch works with version 4.9+ of gcc and with
+kernel version 3.15.x+ and should NOT be applied when compiling on older
+versions due to name changes of the flags with the 4.9 release of gcc.
+Use the older version of this patch hosted on the same github for older
+versions of gcc. For example:
+
+corei7 --> nehalem
+corei7-avx --> sandybridge
+core-avx-i --> ivybridge
+core-avx2 --> haswell
+
+For more, see: https://gcc.gnu.org/gcc-4.9/changes.html
+
+It also changes 'atom' to 'bonnell' in accordance with the gcc v4.9 changes.
+Note that upstream is using the deprecated 'march=atom' flag, while I believe it
+should use the newer 'march=bonnell' flag for Atom processors.
+
+I have made that change to this patch set as well.  See the following kernel
+bug report to see if I'm right: https://bugzilla.kernel.org/show_bug.cgi?id=77461
+
+This patch will expand the number of microarchitectures to include newer
+processors including: AMD K10-family, AMD Family 10h (Barcelona), AMD Family
+14h (Bobcat), AMD Family 15h (Bulldozer), AMD Family 15h (Piledriver), AMD
+Family 15h (Steamroller), Family 16h (Jaguar), Intel 1st Gen Core i3/i5/i7
+(Nehalem), Intel 1.5 Gen Core i3/i5/i7 (Westmere), Intel 2nd Gen Core i3/i5/i7
+(Sandybridge), Intel 3rd Gen Core i3/i5/i7 (Ivybridge), Intel 4th Gen Core
+i3/i5/i7 (Haswell), Intel 5th Gen Core i3/i5/i7 (Broadwell), and the low power
+Silvermont series of Atom processors (Silvermont). It also offers the compiler
+the 'native' flag.
+
+Small but real speed increases are measurable using a kernel-compilation (make)
+benchmark comparing a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=3.15
+gcc version >=4.9
+
+--- a/arch/x86/include/asm/module.h	2015-08-30 14:34:09.000000000 -0400
++++ b/arch/x86/include/asm/module.h	2015-11-06 14:18:24.234941036 -0500
+@@ -15,6 +15,24 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -33,6 +51,22 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu	2015-08-30 14:34:09.000000000 -0400
++++ b/arch/x86/Kconfig.cpu	2015-11-06 14:20:14.948369244 -0500
+@@ -137,9 +137,8 @@ config MPENTIUM4
+ 		-Paxville
+ 		-Dempsey
+ 
+-
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -147,7 +146,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -155,12 +154,69 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	---help---
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+ 
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	---help---
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	---help---
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	---help---
++	  Select this for AMD Barcelona and newer processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	---help---
++	  Select this for AMD Bobcat processors.
++
++	  Enables -march=btver1
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	---help---
++	  Select this for AMD Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	---help---
++	  Select this for AMD Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	---help---
++	  Select this for AMD Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	---help---
++	  Select this for AMD Jaguar processors.
++
++	  Enables -march=btver2
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -251,8 +307,17 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	---help---
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
+ 	---help---
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -260,14 +325,71 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+ 
+-config MATOM
+-	bool "Intel Atom"
++	  Enables -march=core2
++
++config MNEHALEM
++	bool "Intel Nehalem"
+ 	---help---
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	---help---
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	---help---
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	---help---
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	---help---
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	---help---
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	---help---
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	---help---
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -276,6 +398,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++   GCC 4.2 and above support -march=native, which automatically detects
++   the optimum settings to use based on your processor. -march=native 
++   also detects and applies additional settings beyond -march specific
++   to your CPU, (eg. -msse4). Unless you have a specific reason not to
++   (e.g. distcc cross-compiling), you should probably be using
++   -march=native rather than anything listed below.
++
++   Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -300,7 +435,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ 	default "4" if MELAN || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -331,11 +466,11 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+@@ -359,17 +494,17 @@ config X86_P6_NOP
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM) || X86_64
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM
++	depends on X86_PAE || X86_64 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
+ 
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+--- a/arch/x86/Makefile	2015-08-30 14:34:09.000000000 -0400
++++ b/arch/x86/Makefile	2015-11-06 14:21:05.708983344 -0500
+@@ -94,13 +94,38 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MNEHALEM) += \
++                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++        cflags-$(CONFIG_MWESTMERE) += \
++                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++        cflags-$(CONFIG_MSILVERMONT) += \
++                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++        cflags-$(CONFIG_MSANDYBRIDGE) += \
++                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++        cflags-$(CONFIG_MIVYBRIDGE) += \
++                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++        cflags-$(CONFIG_MHASWELL) += \
++                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++        cflags-$(CONFIG_MBROADWELL) += \
++                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++        cflags-$(CONFIG_MSKYLAKE) += \
++                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         KBUILD_CFLAGS += $(cflags-y)
+ 
+--- a/arch/x86/Makefile_32.cpu	2015-08-30 14:34:09.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu	2015-11-06 14:21:43.604429077 -0500
+@@ -23,7 +23,16 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,8 +41,16 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+-	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ 
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN)		+= -march=i486
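
To see what MNATIVE (-march=native) resolves to on a given build host, gcc can
report the effective target options; a sketch, assuming gcc >= 4.9:

    gcc -march=native -Q --help=target | grep -- '-march='
    # e.g. prints "-march=   haswell" on a 4th-gen Core machine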



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-09-26 10:35 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-09-26 10:35 UTC (permalink / raw)
  To: gentoo-commits

commit:     b60690c2015d9946308c9442b50d8343983f1bfd
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 26 10:35:24 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 26 10:35:24 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b60690c2

Update gentoo kconfig patch to remove DEVPTS_MULTIPLE_INSTANCES. See kernel upstream commit: eedf265aa003b4781de24cfed40a655a664457e6.

 4567_distro-Gentoo-Kconfig.patch | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 499b21f..cf5a20c 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,15 +1,14 @@
---- a/Kconfig	2016-07-01 19:22:17.117439707 -0400
-+++ b/Kconfig	2016-07-01 19:21:54.371440596 -0400
-@@ -8,4 +8,6 @@ config SRCARCH
- 	string
+--- a/Kconfig	2016-08-30 14:30:48.508361013 -0400
++++ b/Kconfig	2016-08-30 14:31:40.718683061 -0400
+@@ -9,3 +9,5 @@ config SRCARCH
  	option env="SRCARCH"
  
-+source "distro/Kconfig"
-+
  source "arch/$SRCARCH/Kconfig"
---- /dev/null	2016-07-01 11:23:26.087932647 -0400
-+++ b/distro/Kconfig	2016-07-01 19:32:35.581415519 -0400
-@@ -0,0 +1,134 @@
++
++source "distro/Kconfig"
+--- /dev/null	2016-08-30 01:47:09.760073185 -0400
++++ b/distro/Kconfig	2016-08-30 14:32:21.378933599 -0400
+@@ -0,0 +1,133 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -112,7 +111,6 @@
 +	select AUTOFS4_FS
 +	select BLK_DEV_BSG
 +	select CGROUPS
-+	select DEVPTS_MULTIPLE_INSTANCES
 +	select EPOLL
 +	select FANOTIFY
 +	select FHANDLE



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-09-26 10:37 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-09-26 10:37 UTC (permalink / raw)
  To: gentoo-commits

commit:     37185eb0b6d421615685085052f14f7cd937cb2d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 26 10:36:56 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 26 10:36:56 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=37185eb0

Rename gcc optimization patch for more clarity.

 0000_README                                                             | 2 +-
 ...-4.9.patch => 5010_enable-additional-cpu-optimizations-for-gcc.patch | 0
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/0000_README b/0000_README
index 777f7c8..db75eae 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,6 @@ Patch:  5000_enable-additional-cpu-optimizations-for-gcc.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch enables gcc < v4.9 optimizations for additional CPUs.
 
-Patch:  5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
+Patch:  5010_enable-additional-cpu-optimizations-for-gcc.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.

diff --git a/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch b/5010_enable-additional-cpu-optimizations-for-gcc.patch
similarity index 100%
rename from 5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
rename to 5010_enable-additional-cpu-optimizations-for-gcc.patch



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-10-08 19:50 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-10-08 19:50 UTC (permalink / raw)
  To: gentoo-commits

commit:     2a460f07ad824ea67abac1d6d7626046a89d8322
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct  8 19:50:27 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Oct  8 19:50:27 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2a460f07

Linux patch 4.8.1

 0000_README            |   4 +
 1000_linux-4.8.1.patch | 252 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 256 insertions(+)

diff --git a/0000_README b/0000_README
index db75eae..55d306f 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-4.8.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1000_linux-4.8.1.patch b/1000_linux-4.8.1.patch
new file mode 100644
index 0000000..870f17f
--- /dev/null
+++ b/1000_linux-4.8.1.patch
@@ -0,0 +1,252 @@
+diff --git a/Makefile b/Makefile
+index 80b8671d5c46..75db9f3988f3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
+index 91fff48d0f57..2751ff9c0934 100644
+--- a/arch/arm64/kernel/debug-monitors.c
++++ b/arch/arm64/kernel/debug-monitors.c
+@@ -435,8 +435,10 @@ NOKPROBE_SYMBOL(kernel_active_single_step);
+ /* ptrace API */
+ void user_enable_single_step(struct task_struct *task)
+ {
+-	set_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP);
+-	set_regs_spsr_ss(task_pt_regs(task));
++	struct thread_info *ti = task_thread_info(task);
++
++	if (!test_and_set_ti_thread_flag(ti, TIF_SINGLESTEP))
++		set_regs_spsr_ss(task_pt_regs(task));
+ }
+ NOKPROBE_SYMBOL(user_enable_single_step);
+ 
+diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
+index 0c1a77cafe14..4c281df16816 100644
+--- a/drivers/staging/fbtft/fbtft-core.c
++++ b/drivers/staging/fbtft/fbtft-core.c
+@@ -391,11 +391,11 @@ static void fbtft_update_display(struct fbtft_par *par, unsigned start_line,
+ 
+ 	if (unlikely(timeit)) {
+ 		ts_end = ktime_get();
+-		if (ktime_to_ns(par->update_time))
++		if (!ktime_to_ns(par->update_time))
+ 			par->update_time = ts_start;
+ 
+-		par->update_time = ts_start;
+ 		fps = ktime_us_delta(ts_start, par->update_time);
++		par->update_time = ts_start;
+ 		fps = fps ? 1000000 / fps : 0;
+ 
+ 		throughput = ktime_us_delta(ts_end, ts_start);
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 917a55c4480d..ffe9f8875311 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -141,6 +141,7 @@ static void usbtmc_delete(struct kref *kref)
+ 	struct usbtmc_device_data *data = to_usbtmc_data(kref);
+ 
+ 	usb_put_dev(data->usb_dev);
++	kfree(data);
+ }
+ 
+ static int usbtmc_open(struct inode *inode, struct file *filp)
+@@ -1379,7 +1380,7 @@ static int usbtmc_probe(struct usb_interface *intf,
+ 
+ 	dev_dbg(&intf->dev, "%s called\n", __func__);
+ 
+-	data = devm_kzalloc(&intf->dev, sizeof(*data), GFP_KERNEL);
++	data = kmalloc(sizeof(*data), GFP_KERNEL);
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/usb/misc/legousbtower.c b/drivers/usb/misc/legousbtower.c
+index 7771be3ac178..4dd531ac5a7f 100644
+--- a/drivers/usb/misc/legousbtower.c
++++ b/drivers/usb/misc/legousbtower.c
+@@ -898,24 +898,6 @@ static int tower_probe (struct usb_interface *interface, const struct usb_device
+ 	dev->interrupt_in_interval = interrupt_in_interval ? interrupt_in_interval : dev->interrupt_in_endpoint->bInterval;
+ 	dev->interrupt_out_interval = interrupt_out_interval ? interrupt_out_interval : dev->interrupt_out_endpoint->bInterval;
+ 
+-	/* we can register the device now, as it is ready */
+-	usb_set_intfdata (interface, dev);
+-
+-	retval = usb_register_dev (interface, &tower_class);
+-
+-	if (retval) {
+-		/* something prevented us from registering this driver */
+-		dev_err(idev, "Not able to get a minor for this device.\n");
+-		usb_set_intfdata (interface, NULL);
+-		goto error;
+-	}
+-	dev->minor = interface->minor;
+-
+-	/* let the user know what node this device is now attached to */
+-	dev_info(&interface->dev, "LEGO USB Tower #%d now attached to major "
+-		 "%d minor %d\n", (dev->minor - LEGO_USB_TOWER_MINOR_BASE),
+-		 USB_MAJOR, dev->minor);
+-
+ 	/* get the firmware version and log it */
+ 	result = usb_control_msg (udev,
+ 				  usb_rcvctrlpipe(udev, 0),
+@@ -936,6 +918,23 @@ static int tower_probe (struct usb_interface *interface, const struct usb_device
+ 		 get_version_reply.minor,
+ 		 le16_to_cpu(get_version_reply.build_no));
+ 
++	/* we can register the device now, as it is ready */
++	usb_set_intfdata (interface, dev);
++
++	retval = usb_register_dev (interface, &tower_class);
++
++	if (retval) {
++		/* something prevented us from registering this driver */
++		dev_err(idev, "Not able to get a minor for this device.\n");
++		usb_set_intfdata (interface, NULL);
++		goto error;
++	}
++	dev->minor = interface->minor;
++
++	/* let the user know what node this device is now attached to */
++	dev_info(&interface->dev, "LEGO USB Tower #%d now attached to major "
++		 "%d minor %d\n", (dev->minor - LEGO_USB_TOWER_MINOR_BASE),
++		 USB_MAJOR, dev->minor);
+ 
+ exit:
+ 	return retval;
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 4d6a5c672a3d..54a4de0efdba 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -118,6 +118,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x8411) }, /* Kyocera GPS Module */
+ 	{ USB_DEVICE(0x10C4, 0x8418) }, /* IRZ Automation Teleport SG-10 GSM/GPRS Modem */
+ 	{ USB_DEVICE(0x10C4, 0x846E) }, /* BEI USB Sensor Interface (VCP) */
++	{ USB_DEVICE(0x10C4, 0x8470) }, /* Juniper Networks BX Series System Console */
+ 	{ USB_DEVICE(0x10C4, 0x8477) }, /* Balluff RFID */
+ 	{ USB_DEVICE(0x10C4, 0x84B6) }, /* Starizona Hyperion */
+ 	{ USB_DEVICE(0x10C4, 0x85EA) }, /* AC-Services IBUS-IF */
+diff --git a/drivers/usb/usbip/vudc_rx.c b/drivers/usb/usbip/vudc_rx.c
+index 344bd9473475..e429b59f6f8a 100644
+--- a/drivers/usb/usbip/vudc_rx.c
++++ b/drivers/usb/usbip/vudc_rx.c
+@@ -142,7 +142,7 @@ static int v_recv_cmd_submit(struct vudc *udc,
+ 	urb_p->urb->status = -EINPROGRESS;
+ 
+ 	/* FIXME: more pipe setup to please usbip_common */
+-	urb_p->urb->pipe &= ~(11 << 30);
++	urb_p->urb->pipe &= ~(3 << 30);
+ 	switch (urb_p->ep->type) {
+ 	case USB_ENDPOINT_XFER_BULK:
+ 		urb_p->urb->pipe |= (PIPE_BULK << 30);
+diff --git a/include/linux/swap.h b/include/linux/swap.h
+index 4a529c984a3f..e1d761463243 100644
+--- a/include/linux/swap.h
++++ b/include/linux/swap.h
+@@ -257,7 +257,7 @@ static inline void workingset_node_pages_inc(struct radix_tree_node *node)
+ 
+ static inline void workingset_node_pages_dec(struct radix_tree_node *node)
+ {
+-	VM_BUG_ON(!workingset_node_pages(node));
++	VM_WARN_ON_ONCE(!workingset_node_pages(node));
+ 	node->count--;
+ }
+ 
+@@ -273,7 +273,7 @@ static inline void workingset_node_shadows_inc(struct radix_tree_node *node)
+ 
+ static inline void workingset_node_shadows_dec(struct radix_tree_node *node)
+ {
+-	VM_BUG_ON(!workingset_node_shadows(node));
++	VM_WARN_ON_ONCE(!workingset_node_shadows(node));
+ 	node->count -= 1U << RADIX_TREE_COUNT_SHIFT;
+ }
+ 
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 56fefbd85782..ed62748a6d55 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -261,6 +261,7 @@ enum {
+ 	CXT_FIXUP_HP_530,
+ 	CXT_FIXUP_CAP_MIX_AMP_5047,
+ 	CXT_FIXUP_MUTE_LED_EAPD,
++	CXT_FIXUP_HP_SPECTRE,
+ };
+ 
+ /* for hda_fixup_thinkpad_acpi() */
+@@ -765,6 +766,14 @@ static const struct hda_fixup cxt_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cxt_fixup_mute_led_eapd,
+ 	},
++	[CXT_FIXUP_HP_SPECTRE] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			/* enable NID 0x1d for the speaker on top */
++			{ 0x1d, 0x91170111 },
++			{ }
++		}
++	},
+ };
+ 
+ static const struct snd_pci_quirk cxt5045_fixups[] = {
+@@ -814,6 +823,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x0543, "Acer Aspire One 522", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x054c, "Acer Aspire 3830TG", CXT_FIXUP_ASPIRE_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x054f, "Acer Aspire 4830T", CXT_FIXUP_ASPIRE_DMIC),
++	SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
+ 	SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
+ 	SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 575cefd8cc4a..bd481ac23faf 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5806,6 +5806,13 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{0x14, 0x90170110}, \
+ 	{0x15, 0x0221401f}
+ 
++#define ALC295_STANDARD_PINS \
++	{0x12, 0xb7a60130}, \
++	{0x14, 0x90170110}, \
++	{0x17, 0x21014020}, \
++	{0x18, 0x21a19030}, \
++	{0x21, 0x04211020}
++
+ #define ALC298_STANDARD_PINS \
+ 	{0x12, 0x90a60130}, \
+ 	{0x21, 0x03211020}
+@@ -5846,6 +5853,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x14, 0x90170120},
+ 		{0x21, 0x02211030}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		{0x14, 0x90170110},
++		{0x1b, 0x02011020},
++		{0x21, 0x0221101f}),
++	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		{0x14, 0x90170130},
+ 		{0x1b, 0x01014020},
+ 		{0x21, 0x0221103f}),
+@@ -5911,6 +5922,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x14, 0x90170120},
+ 		{0x21, 0x02211030}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		{0x12, 0xb7a60130},
++		{0x14, 0x90170110},
++		{0x21, 0x02211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC256_STANDARD_PINS),
+ 	SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC280_FIXUP_HP_GPIO4,
+ 		{0x12, 0x90a60130},
+@@ -6021,6 +6036,8 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 	SND_HDA_PIN_QUIRK(0x10ec0293, 0x1028, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC292_STANDARD_PINS,
+ 		{0x13, 0x90a60140}),
++	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
++		ALC295_STANDARD_PINS),
+ 	SND_HDA_PIN_QUIRK(0x10ec0298, 0x1028, "Dell", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC298_STANDARD_PINS,
+ 		{0x17, 0x90170110}),



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-10-11  0:07 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-10-11  0:07 UTC (permalink / raw)
  To: gentoo-commits

commit:     87cd8ce6b13f62532e383db6302117fd51ed9f62
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 11 00:07:31 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Oct 11 00:07:31 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=87cd8ce6

Bootsplash ported by Uladzimir Bely. (Bug #596126)

 0000_README           |    4 +
 4200_fbcondecor.patch | 2095 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2099 insertions(+)
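
The Documentation/fb/fbcondecor.txt added below describes the kernel ->
userspace call convention for the helper: <protocol version> <command>
<virtual console> <framebuffer> <theme>. A minimal, hypothetical helper
skeleton along those lines (the real helper ships with splashutils; the
fbcondecor_ctl mention is only illustrative):

    #!/bin/sh
    # /sbin/fbcondecor_helper sketch -- argument layout per fbcondecor protocol v2
    proto="$1" cmd="$2" vc="$3" fb="$4" theme="$5"
    case "$cmd" in
        init|modechange)
            # pick a background image for $theme at the current resolution, then
            # issue FBIOCONDECOR_SETCFG / SETPIC / SETSTATE (e.g. via fbcondecor_ctl)
            ;;
        getpic)
            # respond with an FBIOCONDECOR_SETPIC ioctl for console $vc
            ;;
    esac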

diff --git a/0000_README b/0000_README
index 55d306f..4af14fd 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  2900_dev-root-proc-mount-fix.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=438380
 Desc:   Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
 
+Patch:  4200_fbcondecor.patch
+From:   http://www.mepiscommunity.org/fbcondecor
+Desc:   Bootsplash ported by Uladzimir Bely. (Bug #596126)
+
 Patch:  4400_alpha-sysctl-uac.patch
 From:   Tobias Klausmann (klausman@gentoo.org) and http://bugs.gentoo.org/show_bug.cgi?id=217323 
 Desc:   Enable control of the unaligned access control policy from sysctl

diff --git a/4200_fbcondecor.patch b/4200_fbcondecor.patch
new file mode 100644
index 0000000..f7d9879
--- /dev/null
+++ b/4200_fbcondecor.patch
@@ -0,0 +1,2095 @@
+diff --git a/Documentation/fb/00-INDEX b/Documentation/fb/00-INDEX
+index fe85e7c..2230930 100644
+--- a/Documentation/fb/00-INDEX
++++ b/Documentation/fb/00-INDEX
+@@ -23,6 +23,8 @@ ep93xx-fb.txt
+ 	- info on the driver for EP93xx LCD controller.
+ fbcon.txt
+ 	- intro to and usage guide for the framebuffer console (fbcon).
++fbcondecor.txt
++	- info on the Framebuffer Console Decoration
+ framebuffer.txt
+ 	- introduction to frame buffer devices.
+ gxfb.txt
+diff --git a/Documentation/fb/fbcondecor.txt b/Documentation/fb/fbcondecor.txt
+new file mode 100644
+index 0000000..637209e
+--- /dev/null
++++ b/Documentation/fb/fbcondecor.txt
+@@ -0,0 +1,207 @@
++What is it?
++-----------
++
++The framebuffer decorations are a kernel feature which allows displaying a
++background picture on selected consoles.
++
++What do I need to get it to work?
++---------------------------------
++
++To get fbcondecor up-and-running you will have to:
++ 1) get a copy of splashutils [1] or a similar program
++ 2) get some fbcondecor themes
++ 3) build the kernel helper program
++ 4) build your kernel with the FB_CON_DECOR option enabled.
++
++To get fbcondecor operational right after fbcon initialization is finished, you
++will have to include a theme and the kernel helper into your initramfs image.
++Please refer to splashutils documentation for instructions on how to do that.
++
++[1] The splashutils package can be downloaded from:
++    http://github.com/alanhaggai/fbsplash
++
++The userspace helper
++--------------------
++
++The userspace fbcondecor helper (by default: /sbin/fbcondecor_helper) is called by the
++kernel whenever an important event occurs and the kernel needs some kind of
++job to be carried out. Important events include console switches and video
++mode switches (the kernel requests background images and configuration
++parameters for the current console). The fbcondecor helper must be accessible at
++all times. If it's not, fbcondecor will be switched off automatically.
++
++It's possible to set the path to the fbcondecor helper by writing it to
++/proc/sys/kernel/fbcondecor.
++
++*****************************************************************************
++
++The information below is mostly technical stuff. There's probably no need to
++read it unless you plan to develop a userspace helper.
++
++The fbcondecor protocol
++-----------------------
++
++The fbcondecor protocol defines a communication interface between the kernel and
++the userspace fbcondecor helper.
++
++The kernel side is responsible for:
++
++ * rendering console text, using an image as a background (instead of a
++   standard solid color fbcon uses),
++ * accepting commands from the user via ioctls on the fbcondecor device,
++ * calling the userspace helper to set things up as soon as the fb subsystem
++   is initialized.
++
++The userspace helper is responsible for everything else, including parsing
++configuration files, decompressing the image files whenever the kernel needs
++it, and communicating with the kernel if necessary.
++
++The fbcondecor protocol specifies how communication is done in both ways:
++kernel->userspace and userspace->helper.
++
++Kernel -> Userspace
++-------------------
++
++The kernel communicates with the userspace helper by calling it and specifying
++the task to be done in a series of arguments.
++
++The arguments follow the pattern:
++<fbcondecor protocol version> <command> <parameters>
++
++All commands defined in fbcondecor protocol v2 have the following parameters:
++ virtual console
++ framebuffer number
++ theme
++
++Fbcondecor protocol v1 specified an additional 'fbcondecor mode' after the
++framebuffer number. Fbcondecor protocol v1 is deprecated and should not be used.
++
++Fbcondecor protocol v2 specifies the following commands:
++
++getpic
++------
++ The kernel issues this command to request image data. It's up to the
++ userspace  helper to find a background image appropriate for the specified
++ theme and the current resolution. The userspace helper should respond by
++ issuing the FBIOCONDECOR_SETPIC ioctl.
++
++init
++----
++ The kernel issues this command after the fbcondecor device is created and
++ the fbcondecor interface is initialized. Upon receiving 'init', the userspace
++ helper should parse the kernel command line (/proc/cmdline) or otherwise
++ decide whether fbcondecor is to be activated.
++
++ To activate fbcondecor on the first console the helper should issue the
++ FBIOCONDECOR_SETCFG, FBIOCONDECOR_SETPIC and FBIOCONDECOR_SETSTATE commands,
++ in the above-mentioned order.
++
++ When the userspace helper is called in an early phase of the boot process
++ (right after the initialization of fbcon), no filesystems will be mounted.
++ The helper program should mount sysfs and then create the appropriate
++ framebuffer, fbcondecor and tty0 devices (if they don't already exist) to get
++ current display settings and to be able to communicate with the kernel side.
++ It should probably also mount the procfs to be able to parse the kernel
++ command line parameters.
++
++ Note that the console sem is not held when the kernel calls fbcondecor_helper
++ with the 'init' command. The fbcondecor helper should perform all ioctls with
++ origin set to FBCON_DECOR_IO_ORIG_USER.
++
++modechange
++----------
++ The kernel issues this command on a mode change. The helper's response should
++ be similar to the response to the 'init' command. Note that this time the
++ console sem is held and all ioctls must be performed with origin set to
++ FBCON_DECOR_IO_ORIG_KERNEL.
++
++
++Userspace -> Kernel
++-------------------
++
++Userspace programs can communicate with fbcondecor via ioctls on the
++fbcondecor device. These ioctls are to be used by both the userspace helper
++(called only by the kernel) and userspace configuration tools (run by the users).
++
++The fbcondecor helper should set the origin field to FBCON_DECOR_IO_ORIG_KERNEL
++when doing the appropriate ioctls. All userspace configuration tools should
++use FBCON_DECOR_IO_ORIG_USER. Failure to set the appropriate value in the origin
++field when performing ioctls from the kernel helper will most likely result
++in a console deadlock.
++
++FBCON_DECOR_IO_ORIG_KERNEL instructs fbcondecor not to try to acquire the console
++semaphore. Not surprisingly, FBCON_DECOR_IO_ORIG_USER instructs it to acquire
++the console sem.
++
++The framebuffer console decoration provides the following ioctls (all defined in
++linux/fb.h):
++
++FBIOCONDECOR_SETPIC
++description: loads a background picture for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct fb_image*
++notes:
++If called for consoles other than the current foreground one, the picture data
++will be ignored.
++
++If the current virtual console is running in a 8-bpp mode, the cmap substruct
++of fb_image has to be filled appropriately: start should be set to 16 (first
++16 colors are reserved for fbcon), len to a value <= 240 and red, green and
++blue should point to valid cmap data. The transp field is ignored. The fields
++dx, dy, bg_color, fg_color in fb_image are ignored as well.
++
++FBIOCONDECOR_SETCFG
++description: sets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++notes: The structure has to be filled with valid data.
++
++FBIOCONDECOR_GETCFG
++description: gets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++
++FBIOCONDECOR_SETSTATE
++description: sets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++          values: 0 = disabled, 1 = enabled.
++
++FBIOCONDECOR_GETSTATE
++description: gets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++          values: as in FBIOCONDECOR_SETSTATE
++
++Info on used structures:
++
++Definition of struct vc_decor can be found in linux/console_decor.h. It's
++heavily commented. Note that the 'theme' field should point to a string
++no longer than FBCON_DECOR_THEME_LEN. When FBIOCONDECOR_GETCFG call is
++performed, the theme field should point to a char buffer of length
++FBCON_DECOR_THEME_LEN.
++
++Definition of struct fbcon_decor_iowrapper can be found in linux/fb.h.
++The fields in this struct have the following meaning:
++
++vc:
++Virtual console number.
++
++origin:
++Specifies if the ioctl is performed as a response to a kernel request. The
++fbcondecor helper should set this field to FBCON_DECOR_IO_ORIG_KERNEL, userspace
++programs should set it to FBCON_DECOR_IO_ORIG_USER. This field is necessary to
++avoid console semaphore deadlocks.
++
++data:
++Pointer to a data structure appropriate for the performed ioctl. Type of
++the data struct is specified in the ioctls description.
++
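++As a minimal sketch, an FBIOCONDECOR_GETCFG call could be wrapped as shown
++below. It assumes an already-open /dev/fbcondecor descriptor, the headers from
++the earlier sketch, and that the vc_decor definition from linux/console_decor.h
++is available to the tool.
++
++  static int get_decor_cfg(int fd, unsigned short vc_num, struct vc_decor *cfg,
++                           char theme[FBCON_DECOR_THEME_LEN])
++  {
++          struct fbcon_decor_iowrapper wrapper = {
++                  .vc = vc_num,
++                  .origin = FBCON_DECOR_IO_ORIG_USER,
++                  .data = cfg,
++          };
++
++          /* The kernel copies the theme name into this caller-supplied buffer. */
++          cfg->theme = theme;
++
++          return ioctl(fd, FBIOCONDECOR_GETCFG, &wrapper);
++  }
++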
++*****************************************************************************
++
++Credit
++------
++
++Original 'bootsplash' project & implementation by:
++  Volker Poplawski <volker@poplawski.de>, Stefan Reinauer <stepan@suse.de>,
++  Steffen Winterfeldt <snwint@suse.de>, Michael Schroeder <mls@suse.de>,
++  Ken Wimer <wimer@suse.de>.
++
++Fbcondecor, fbcondecor protocol design, current implementation & docs by:
++  Michal Januszewski <michalj+fbcondecor@gmail.com>
++
+diff --git a/drivers/Makefile b/drivers/Makefile
+index 53abb4a..1721aee 100644
+--- a/drivers/Makefile
++++ b/drivers/Makefile
+@@ -17,6 +17,10 @@ obj-y				+= pwm/
+ obj-$(CONFIG_PCI)		+= pci/
+ obj-$(CONFIG_PARISC)		+= parisc/
+ obj-$(CONFIG_RAPIDIO)		+= rapidio/
++# tty/ comes before char/ so that the VT console is the boot-time
++# default.
++obj-y				+= tty/
++obj-y				+= char/
+ obj-y				+= video/
+ obj-y				+= idle/
+ 
+@@ -45,11 +49,6 @@ obj-$(CONFIG_REGULATOR)		+= regulator/
+ # reset controllers early, since gpu drivers might rely on them to initialize
+ obj-$(CONFIG_RESET_CONTROLLER)	+= reset/
+ 
+-# tty/ comes before char/ so that the VT console is the boot-time
+-# default.
+-obj-y				+= tty/
+-obj-y				+= char/
+-
+ # iommu/ comes before gpu as gpu are using iommu controllers
+ obj-$(CONFIG_IOMMU_SUPPORT)	+= iommu/
+ 
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index 38da6e2..fe58152 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -130,6 +130,19 @@ config FRAMEBUFFER_CONSOLE_ROTATION
+          such that other users of the framebuffer will remain normally
+          oriented.
+ 
++config FB_CON_DECOR
++	bool "Support for the Framebuffer Console Decorations"
++	depends on FRAMEBUFFER_CONSOLE=y && !FB_TILEBLITTING
++	default n
++	---help---
++	  This option enables support for framebuffer console decorations which
++	  makes it possible to display images in the background of the system
++	  consoles.  Note that userspace utilities are necessary in order to take
++	  advantage of these features. Refer to Documentation/fb/fbcondecor.txt
++	  for more information.
++
++	  If unsure, say N.
++
+ config STI_CONSOLE
+         bool "STI text console"
+         depends on PARISC
+diff --git a/drivers/video/console/Makefile b/drivers/video/console/Makefile
+index 43bfa48..cc104b6 100644
+--- a/drivers/video/console/Makefile
++++ b/drivers/video/console/Makefile
+@@ -16,4 +16,5 @@ obj-$(CONFIG_FRAMEBUFFER_CONSOLE)     += fbcon_rotate.o fbcon_cw.o fbcon_ud.o \
+                                          fbcon_ccw.o
+ endif
+ 
++obj-$(CONFIG_FB_CON_DECOR)     	  += fbcondecor.o cfbcondecor.o
+ obj-$(CONFIG_FB_STI)              += sticore.o
+diff --git a/drivers/video/console/bitblit.c b/drivers/video/console/bitblit.c
+index dbfe4ee..14da307 100644
+--- a/drivers/video/console/bitblit.c
++++ b/drivers/video/console/bitblit.c
+@@ -18,6 +18,7 @@
+ #include <linux/console.h>
+ #include <asm/types.h>
+ #include "fbcon.h"
++#include "fbcondecor.h"
+ 
+ /*
+  * Accelerated handlers.
+@@ -55,6 +56,13 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ 	area.height = height * vc->vc_font.height;
+ 	area.width = width * vc->vc_font.width;
+ 
++	if (fbcon_decor_active(info, vc)) {
++		area.sx += vc->vc_decor.tx;
++		area.sy += vc->vc_decor.ty;
++		area.dx += vc->vc_decor.tx;
++		area.dy += vc->vc_decor.ty;
++	}
++
+ 	info->fbops->fb_copyarea(info, &area);
+ }
+ 
+@@ -379,11 +387,15 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 	cursor.image.depth = 1;
+ 	cursor.rop = ROP_XOR;
+ 
+-	if (info->fbops->fb_cursor)
+-		err = info->fbops->fb_cursor(info, &cursor);
++	if (fbcon_decor_active(info, vc)) {
++		fbcon_decor_cursor(info, &cursor);
++	} else {
++		if (info->fbops->fb_cursor)
++			err = info->fbops->fb_cursor(info, &cursor);
+ 
+-	if (err)
+-		soft_cursor(info, &cursor);
++		if (err)
++			soft_cursor(info, &cursor);
++	}
+ 
+ 	ops->cursor_reset = 0;
+ }
+diff --git a/drivers/video/console/cfbcondecor.c b/drivers/video/console/cfbcondecor.c
+new file mode 100644
+index 0000000..c262540
+--- /dev/null
++++ b/drivers/video/console/cfbcondecor.c
+@@ -0,0 +1,473 @@
++/*
++ *  linux/drivers/video/cfbcon_decor.c -- Framebuffer decor render functions
++ *
++ *  Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ *  Code based upon "Bootdecor" (C) 2001-2003
++ *       Volker Poplawski <volker@poplawski.de>,
++ *       Stefan Reinauer <stepan@suse.de>,
++ *       Steffen Winterfeldt <snwint@suse.de>,
++ *       Michael Schroeder <mls@suse.de>,
++ *       Ken Wimer <wimer@suse.de>.
++ *
++ *  This file is subject to the terms and conditions of the GNU General Public
++ *  License.  See the file COPYING in the main directory of this archive for
++ *  more details.
++ */
++#include <linux/module.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/selection.h>
++#include <linux/slab.h>
++#include <linux/vt_kern.h>
++#include <asm/irq.h>
++
++#include "fbcon.h"
++#include "fbcondecor.h"
++
++#define parse_pixel(shift, bpp, type)						\
++	do {									\
++		if (d & (0x80 >> (shift)))					\
++			dd2[(shift)] = fgx;					\
++		else								\
++			dd2[(shift)] = transparent ? *(type *)decor_src : bgx;	\
++		decor_src += (bpp);						\
++	} while (0)								\
++
++extern int get_color(struct vc_data *vc, struct fb_info *info,
++		     u16 c, int is_fg);
++
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc)
++{
++	int i, j, k;
++	int minlen = min(min(info->var.red.length, info->var.green.length),
++			     info->var.blue.length);
++	u32 col;
++
++	for (j = i = 0; i < 16; i++) {
++		k = color_table[i];
++
++		col = ((vc->vc_palette[j++]  >> (8-minlen))
++			<< info->var.red.offset);
++		col |= ((vc->vc_palette[j++] >> (8-minlen))
++			<< info->var.green.offset);
++		col |= ((vc->vc_palette[j++] >> (8-minlen))
++			<< info->var.blue.offset);
++			((u32 *)info->pseudo_palette)[k] = col;
++	}
++}
++
++void fbcon_decor_renderc(struct fb_info *info, int ypos, int xpos, int height,
++		      int width, u8 *src, u32 fgx, u32 bgx, u8 transparent)
++{
++	unsigned int x, y;
++	u32 dd;
++	int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++	unsigned int d = ypos * info->fix.line_length + xpos * bytespp;
++	unsigned int ds = (ypos * info->var.xres + xpos) * bytespp;
++	u16 dd2[4];
++
++	u8 *decor_src = (u8 *)(info->bgdecor.data + ds);
++	u8 *dst = (u8 *)(info->screen_base + d);
++
++	if ((ypos + height) > info->var.yres || (xpos + width) > info->var.xres)
++		return;
++
++	for (y = 0; y < height; y++) {
++		switch (info->var.bits_per_pixel) {
++
++		case 32:
++			for (x = 0; x < width; x++) {
++
++				if ((x & 7) == 0)
++					d = *src++;
++				if (d & 0x80)
++					dd = fgx;
++				else
++					dd = transparent ?
++					     *(u32 *)decor_src : bgx;
++
++				d <<= 1;
++				decor_src += 4;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++			break;
++		case 24:
++			for (x = 0; x < width; x++) {
++
++				if ((x & 7) == 0)
++					d = *src++;
++				if (d & 0x80)
++					dd = fgx;
++				else
++					dd = transparent ?
++					     (*(u32 *)decor_src & 0xffffff) : bgx;
++
++				d <<= 1;
++				decor_src += 3;
++#ifdef __LITTLE_ENDIAN
++				fb_writew(dd & 0xffff, dst);
++				dst += 2;
++				fb_writeb((dd >> 16), dst);
++#else
++				fb_writew(dd >> 8, dst);
++				dst += 2;
++				fb_writeb(dd & 0xff, dst);
++#endif
++				dst++;
++			}
++			break;
++		case 16:
++			for (x = 0; x < width; x += 2) {
++				if ((x & 7) == 0)
++					d = *src++;
++
++				parse_pixel(0, 2, u16);
++				parse_pixel(1, 2, u16);
++#ifdef __LITTLE_ENDIAN
++				dd = dd2[0] | (dd2[1] << 16);
++#else
++				dd = dd2[1] | (dd2[0] << 16);
++#endif
++				d <<= 2;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++			break;
++
++		case 8:
++			for (x = 0; x < width; x += 4) {
++				if ((x & 7) == 0)
++					d = *src++;
++
++				parse_pixel(0, 1, u8);
++				parse_pixel(1, 1, u8);
++				parse_pixel(2, 1, u8);
++				parse_pixel(3, 1, u8);
++
++#ifdef __LITTLE_ENDIAN
++				dd = dd2[0] | (dd2[1] << 8) | (dd2[2] << 16) | (dd2[3] << 24);
++#else
++				dd = dd2[3] | (dd2[2] << 8) | (dd2[1] << 16) | (dd2[0] << 24);
++#endif
++				d <<= 4;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++		}
++
++		dst += info->fix.line_length - width * bytespp;
++		decor_src += (info->var.xres - width) * bytespp;
++	}
++}
++
++#define cc2cx(a)						\
++	((info->fix.visual == FB_VISUAL_TRUECOLOR ||		\
++		info->fix.visual == FB_VISUAL_DIRECTCOLOR) ?	\
++			((u32 *)info->pseudo_palette)[a] : a)
++
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info,
++		   const unsigned short *s, int count, int yy, int xx)
++{
++	unsigned short charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff;
++	struct fbcon_ops *ops = info->fbcon_par;
++	int fg_color, bg_color, transparent;
++	u8 *src;
++	u32 bgx, fgx;
++	u16 c = scr_readw(s);
++
++	fg_color = get_color(vc, info, c, 1);
++	bg_color = get_color(vc, info, c, 0);
++
++	/* Don't paint the background image if console is blanked */
++	transparent = ops->blank_state ? 0 :
++		(vc->vc_decor.bg_color == bg_color);
++
++	xx = xx * vc->vc_font.width + vc->vc_decor.tx;
++	yy = yy * vc->vc_font.height + vc->vc_decor.ty;
++
++	fgx = cc2cx(fg_color);
++	bgx = cc2cx(bg_color);
++
++	while (count--) {
++		c = scr_readw(s++);
++		src = vc->vc_font.data + (c & charmask) * vc->vc_font.height *
++		      ((vc->vc_font.width + 7) >> 3);
++
++		fbcon_decor_renderc(info, yy, xx, vc->vc_font.height,
++			       vc->vc_font.width, src, fgx, bgx, transparent);
++		xx += vc->vc_font.width;
++	}
++}
++
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor)
++{
++	int i;
++	unsigned int dsize, s_pitch;
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct vc_data *vc;
++	u8 *src;
++
++	/* we really don't need any cursors while the console is blanked */
++	if (info->state != FBINFO_STATE_RUNNING || ops->blank_state)
++		return;
++
++	vc = vc_cons[ops->currcon].d;
++
++	src = kmalloc(64 + sizeof(struct fb_image), GFP_ATOMIC);
++	if (!src)
++		return;
++
++	s_pitch = (cursor->image.width + 7) >> 3;
++	dsize = s_pitch * cursor->image.height;
++	if (cursor->enable) {
++		switch (cursor->rop) {
++		case ROP_XOR:
++			for (i = 0; i < dsize; i++)
++				src[i] = cursor->image.data[i] ^ cursor->mask[i];
++			break;
++		case ROP_COPY:
++		default:
++			for (i = 0; i < dsize; i++)
++				src[i] = cursor->image.data[i] & cursor->mask[i];
++			break;
++		}
++	} else
++		memcpy(src, cursor->image.data, dsize);
++
++	fbcon_decor_renderc(info,
++			cursor->image.dy + vc->vc_decor.ty,
++			cursor->image.dx + vc->vc_decor.tx,
++			cursor->image.height,
++			cursor->image.width,
++			(u8 *)src,
++			cc2cx(cursor->image.fg_color),
++			cc2cx(cursor->image.bg_color),
++			cursor->image.bg_color == vc->vc_decor.bg_color);
++
++	kfree(src);
++}
++
++static void decorset(u8 *dst, int height, int width, int dstbytes,
++				u32 bgx, int bpp)
++{
++	int i;
++
++	if (bpp == 8)
++		bgx |= bgx << 8;
++	if (bpp == 16 || bpp == 8)
++		bgx |= bgx << 16;
++
++	while (height-- > 0) {
++		u8 *p = dst;
++
++		switch (bpp) {
++
++		case 32:
++			for (i = 0; i < width; i++) {
++				fb_writel(bgx, p); p += 4;
++			}
++			break;
++		case 24:
++			for (i = 0; i < width; i++) {
++#ifdef __LITTLE_ENDIAN
++				fb_writew((bgx & 0xffff), (u16 *)p); p += 2;
++				fb_writeb((bgx >> 16), p++);
++#else
++				fb_writew((bgx >> 8), (u16 *)p); p += 2;
++				fb_writeb((bgx & 0xff), p++);
++#endif
++			}
++			break;
++		case 16:
++			for (i = 0; i < width/4; i++) {
++				fb_writel(bgx, p); p += 4;
++				fb_writel(bgx, p); p += 4;
++			}
++			if (width & 2) {
++				fb_writel(bgx, p); p += 4;
++			}
++			if (width & 1)
++				fb_writew(bgx, (u16 *)p);
++			break;
++		case 8:
++			for (i = 0; i < width/4; i++) {
++				fb_writel(bgx, p); p += 4;
++			}
++
++			if (width & 2) {
++				fb_writew(bgx, p); p += 2;
++			}
++			if (width & 1)
++				fb_writeb(bgx, (u8 *)p);
++			break;
++
++		}
++		dst += dstbytes;
++	}
++}
++
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes,
++		   int srclinebytes, int bpp)
++{
++	int i;
++
++	while (height-- > 0) {
++		u32 *p = (u32 *)dst;
++		u32 *q = (u32 *)src;
++
++		switch (bpp) {
++
++		case 32:
++			for (i = 0; i < width; i++)
++				fb_writel(*q++, p++);
++			break;
++		case 24:
++			for (i = 0; i < (width * 3 / 4); i++)
++				fb_writel(*q++, p++);
++			if ((width * 3) % 4) {
++				if (width & 2) {
++					fb_writeb(*(u8 *)q, (u8 *)p);
++				} else if (width & 1) {
++					fb_writew(*(u16 *)q, (u16 *)p);
++					fb_writeb(*(u8 *)((u16 *)q + 1),
++							(u8 *)((u16 *)p + 2));
++				}
++			}
++			break;
++		case 16:
++			for (i = 0; i < width/4; i++) {
++				fb_writel(*q++, p++);
++				fb_writel(*q++, p++);
++			}
++			if (width & 2)
++				fb_writel(*q++, p++);
++			if (width & 1)
++				fb_writew(*(u16 *)q, (u16 *)p);
++			break;
++		case 8:
++			for (i = 0; i < width/4; i++)
++				fb_writel(*q++, p++);
++
++			if (width & 2) {
++				fb_writew(*(u16 *)q, (u16 *)p);
++				q = (u32 *) ((u16 *)q + 1);
++				p = (u32 *) ((u16 *)p + 1);
++			}
++			if (width & 1)
++				fb_writeb(*(u8 *)q, (u8 *)p);
++			break;
++		}
++
++		dst += linebytes;
++		src += srclinebytes;
++	}
++}
++
++static void decorfill(struct fb_info *info, int sy, int sx, int height,
++		       int width)
++{
++	int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++	int d  = sy * info->fix.line_length + sx * bytespp;
++	int ds = (sy * info->var.xres + sx) * bytespp;
++
++	fbcon_decor_copy((u8 *)(info->screen_base + d), (u8 *)(info->bgdecor.data + ds),
++		    height, width, info->fix.line_length, info->var.xres * bytespp,
++		    info->var.bits_per_pixel);
++}
++
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx,
++		    int height, int width)
++{
++	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
++	struct fbcon_ops *ops = info->fbcon_par;
++	u8 *dst;
++	int transparent, bg_color = attr_bgcol_ec(bgshift, vc, info);
++
++	transparent = (vc->vc_decor.bg_color == bg_color);
++	sy = sy * vc->vc_font.height + vc->vc_decor.ty;
++	sx = sx * vc->vc_font.width + vc->vc_decor.tx;
++	height *= vc->vc_font.height;
++	width *= vc->vc_font.width;
++
++	/* Don't paint the background image if console is blanked */
++	if (transparent && !ops->blank_state) {
++		decorfill(info, sy, sx, height, width);
++	} else {
++		dst = (u8 *)(info->screen_base + sy * info->fix.line_length +
++			     sx * ((info->var.bits_per_pixel + 7) >> 3));
++		decorset(dst, height, width, info->fix.line_length, cc2cx(bg_color),
++			  info->var.bits_per_pixel);
++	}
++}
++
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info,
++			    int bottom_only)
++{
++	unsigned int tw = vc->vc_cols*vc->vc_font.width;
++	unsigned int th = vc->vc_rows*vc->vc_font.height;
++
++	if (!bottom_only) {
++		/* top margin */
++		decorfill(info, 0, 0, vc->vc_decor.ty, info->var.xres);
++		/* left margin */
++		decorfill(info, vc->vc_decor.ty, 0, th, vc->vc_decor.tx);
++		/* right margin */
++		decorfill(info, vc->vc_decor.ty, vc->vc_decor.tx + tw, th,
++			   info->var.xres - vc->vc_decor.tx - tw);
++	}
++	decorfill(info, vc->vc_decor.ty + th, 0,
++		   info->var.yres - vc->vc_decor.ty - th, info->var.xres);
++}
++
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y,
++			   int sx, int dx, int width)
++{
++	u16 *d = (u16 *) (vc->vc_origin + vc->vc_size_row * y + dx * 2);
++	u16 *s = d + (dx - sx);
++	u16 *start = d;
++	u16 *ls = d;
++	u16 *le = d + width;
++	u16 c;
++	int x = dx;
++	u16 attr = 1;
++
++	do {
++		c = scr_readw(d);
++		if (attr != (c & 0xff00)) {
++			attr = c & 0xff00;
++			if (d > start) {
++				fbcon_decor_putcs(vc, info, start, d - start, y, x);
++				x += d - start;
++				start = d;
++			}
++		}
++		if (s >= ls && s < le && c == scr_readw(s)) {
++			if (d > start) {
++				fbcon_decor_putcs(vc, info, start, d - start, y, x);
++				x += d - start + 1;
++				start = d + 1;
++			} else {
++				x++;
++				start++;
++			}
++		}
++		s++;
++		d++;
++	} while (d < le);
++	if (d > start)
++		fbcon_decor_putcs(vc, info, start, d - start, y, x);
++}
++
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank)
++{
++	if (blank) {
++		decorset((u8 *)info->screen_base, info->var.yres, info->var.xres,
++			  info->fix.line_length, 0, info->var.bits_per_pixel);
++	} else {
++		update_screen(vc);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++}
++
+diff --git a/drivers/video/console/fbcon.c b/drivers/video/console/fbcon.c
+index b87f5cf..ce44538 100644
+--- a/drivers/video/console/fbcon.c
++++ b/drivers/video/console/fbcon.c
+@@ -79,6 +79,7 @@
+ #include <asm/irq.h>
+ 
+ #include "fbcon.h"
++#include "../console/fbcondecor.h"
+ 
+ #ifdef FBCONDEBUG
+ #  define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __func__ , ## args)
+@@ -94,7 +95,7 @@ enum {
+ 
+ static struct display fb_display[MAX_NR_CONSOLES];
+ 
+-static signed char con2fb_map[MAX_NR_CONSOLES];
++signed char con2fb_map[MAX_NR_CONSOLES];
+ static signed char con2fb_map_boot[MAX_NR_CONSOLES];
+ 
+ static int logo_lines;
+@@ -282,7 +283,7 @@ static inline int fbcon_is_inactive(struct vc_data *vc, struct fb_info *info)
+ 		!vt_force_oops_output(vc);
+ }
+ 
+-static int get_color(struct vc_data *vc, struct fb_info *info,
++int get_color(struct vc_data *vc, struct fb_info *info,
+ 	      u16 c, int is_fg)
+ {
+ 	int depth = fb_get_color_depth(&info->var, &info->fix);
+@@ -546,6 +547,9 @@ static int do_fbcon_takeover(int show_logo)
+ 		info_idx = -1;
+ 	} else {
+ 		fbcon_has_console_bind = 1;
++#ifdef CONFIG_FB_CON_DECOR
++		fbcon_decor_init();
++#endif
+ 	}
+ 
+ 	return err;
+@@ -1005,6 +1009,12 @@ static const char *fbcon_startup(void)
+ 	rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 	cols /= vc->vc_font.width;
+ 	rows /= vc->vc_font.height;
++
++	if (fbcon_decor_active(info, vc)) {
++		cols = vc->vc_decor.twidth / vc->vc_font.width;
++		rows = vc->vc_decor.theight / vc->vc_font.height;
++	}
++
+ 	vc_resize(vc, cols, rows);
+ 
+ 	DPRINTK("mode:   %s\n", info->fix.id);
+@@ -1034,7 +1044,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 	cap = info->flags;
+ 
+ 	if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW ||
+-	    (info->fix.type == FB_TYPE_TEXT))
++	    (info->fix.type == FB_TYPE_TEXT) || fbcon_decor_active(info, vc))
+ 		logo = 0;
+ 
+ 	if (var_to_display(p, &info->var, info))
+@@ -1259,6 +1269,11 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height,
+ 		fbcon_clear_margins(vc, 0);
+ 	}
+ 
++	if (fbcon_decor_active(info, vc)) {
++		fbcon_decor_clear(vc, info, sy, sx, height, width);
++		return;
++	}
++
+ 	/* Split blits that cross physical y_wrap boundary */
+ 
+ 	y_break = p->vrows - p->yscroll;
+@@ -1278,10 +1293,15 @@ static void fbcon_putcs(struct vc_data *vc, const unsigned short *s,
+ 	struct display *p = &fb_display[vc->vc_num];
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 
+-	if (!fbcon_is_inactive(vc, info))
+-		ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
+-			   get_color(vc, info, scr_readw(s), 1),
+-			   get_color(vc, info, scr_readw(s), 0));
++	if (!fbcon_is_inactive(vc, info)) {
++
++		if (fbcon_decor_active(info, vc))
++			fbcon_decor_putcs(vc, info, s, count, ypos, xpos);
++		else
++			ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
++				   get_color(vc, info, scr_readw(s), 1),
++				   get_color(vc, info, scr_readw(s), 0));
++	}
+ }
+ 
+ static void fbcon_putc(struct vc_data *vc, int c, int ypos, int xpos)
+@@ -1297,8 +1317,12 @@ static void fbcon_clear_margins(struct vc_data *vc, int bottom_only)
+ 	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 
+-	if (!fbcon_is_inactive(vc, info))
+-		ops->clear_margins(vc, info, bottom_only);
++	if (!fbcon_is_inactive(vc, info)) {
++		if (fbcon_decor_active(info, vc))
++			fbcon_decor_clear_margins(vc, info, bottom_only);
++		else
++			ops->clear_margins(vc, info, bottom_only);
++	}
+ }
+ 
+ static void fbcon_cursor(struct vc_data *vc, int mode)
+@@ -1819,7 +1843,7 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
+ 			count = vc->vc_rows;
+ 		if (softback_top)
+ 			fbcon_softback_note(vc, t, count);
+-		if (logo_shown >= 0)
++		if (logo_shown >= 0 || fbcon_decor_active(info, vc))
+ 			goto redraw_up;
+ 		switch (p->scrollmode) {
+ 		case SCROLL_MOVE:
+@@ -1912,6 +1936,8 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
+ 			count = vc->vc_rows;
+ 		if (logo_shown >= 0)
+ 			goto redraw_down;
++		if (fbcon_decor_active(info, vc))
++			goto redraw_down;
+ 		switch (p->scrollmode) {
+ 		case SCROLL_MOVE:
+ 			fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
+@@ -2060,6 +2086,13 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int s
+ 		}
+ 		return;
+ 	}
++
++	if (fbcon_decor_active(info, vc) && sy == dy && height == 1) {
++		/* must use slower redraw bmove to keep background pic intact */
++		fbcon_decor_bmove_redraw(vc, info, sy, sx, dx, width);
++		return;
++	}
++
+ 	ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
+ 		   height, width);
+ }
+@@ -2130,8 +2163,8 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ 	var.yres = virt_h * virt_fh;
+ 	x_diff = info->var.xres - var.xres;
+ 	y_diff = info->var.yres - var.yres;
+-	if (x_diff < 0 || x_diff > virt_fw ||
+-	    y_diff < 0 || y_diff > virt_fh) {
++	if ((x_diff < 0 || x_diff > virt_fw ||
++		y_diff < 0 || y_diff > virt_fh) && !vc->vc_decor.state) {
+ 		const struct fb_videomode *mode;
+ 
+ 		DPRINTK("attempting resize %ix%i\n", var.xres, var.yres);
+@@ -2167,6 +2200,22 @@ static int fbcon_switch(struct vc_data *vc)
+ 
+ 	info = registered_fb[con2fb_map[vc->vc_num]];
+ 	ops = info->fbcon_par;
++	prev_console = ops->currcon;
++	if (prev_console != -1)
++		old_info = registered_fb[con2fb_map[prev_console]];
++
++#ifdef CONFIG_FB_CON_DECOR
++	if (!fbcon_decor_active_vc(vc) && info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++		struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++		if (vc_curr && fbcon_decor_active_vc(vc_curr)) {
++			// Clear the screen to avoid displaying funky colors
++			// during palette updates.
++			memset((u8 *)info->screen_base + info->fix.line_length * info->var.yoffset,
++			       0, info->var.yres * info->fix.line_length);
++		}
++	}
++#endif
+ 
+ 	if (softback_top) {
+ 		if (softback_lines)
+@@ -2185,9 +2234,6 @@ static int fbcon_switch(struct vc_data *vc)
+ 		logo_shown = FBCON_LOGO_CANSHOW;
+ 	}
+ 
+-	prev_console = ops->currcon;
+-	if (prev_console != -1)
+-		old_info = registered_fb[con2fb_map[prev_console]];
+ 	/*
+ 	 * FIXME: If we have multiple fbdev's loaded, we need to
+ 	 * update all info->currcon.  Perhaps, we can place this
+@@ -2231,6 +2277,18 @@ static int fbcon_switch(struct vc_data *vc)
+ 			fbcon_del_cursor_timer(old_info);
+ 	}
+ 
++	if (fbcon_decor_active_vc(vc)) {
++		struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++		if (!vc_curr->vc_decor.theme ||
++			strcmp(vc->vc_decor.theme, vc_curr->vc_decor.theme) ||
++			(fbcon_decor_active_nores(info, vc_curr) &&
++			 !fbcon_decor_active(info, vc_curr))) {
++			fbcon_decor_disable(vc, 0);
++			fbcon_decor_call_helper("modechange", vc->vc_num);
++		}
++	}
++
+ 	if (fbcon_is_inactive(vc, info) ||
+ 	    ops->blank_state != FB_BLANK_UNBLANK)
+ 		fbcon_del_cursor_timer(info);
+@@ -2339,15 +2397,20 @@ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch)
+ 		}
+ 	}
+ 
+- 	if (!fbcon_is_inactive(vc, info)) {
++	if (!fbcon_is_inactive(vc, info)) {
+ 		if (ops->blank_state != blank) {
+ 			ops->blank_state = blank;
+ 			fbcon_cursor(vc, blank ? CM_ERASE : CM_DRAW);
+ 			ops->cursor_flash = (!blank);
+ 
+-			if (!(info->flags & FBINFO_MISC_USEREVENT))
+-				if (fb_blank(info, blank))
+-					fbcon_generic_blank(vc, info, blank);
++			if (!(info->flags & FBINFO_MISC_USEREVENT)) {
++				if (fb_blank(info, blank)) {
++					if (fbcon_decor_active(info, vc))
++						fbcon_decor_blank(vc, info, blank);
++					else
++						fbcon_generic_blank(vc, info, blank);
++				}
++			}
+ 		}
+ 
+ 		if (!blank)
+@@ -2522,13 +2585,22 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ 	}
+ 
+ 	if (resize) {
++		/* reset wrap/pan */
+ 		int cols, rows;
+ 
+ 		cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
++
++		if (fbcon_decor_active(info, vc)) {
++			info->var.xoffset = info->var.yoffset = p->yscroll = 0;
++			cols = vc->vc_decor.twidth;
++			rows = vc->vc_decor.theight;
++		}
+ 		cols /= w;
+ 		rows /= h;
++
+ 		vc_resize(vc, cols, rows);
++
+ 		if (con_is_visible(vc) && softback_buf)
+ 			fbcon_update_softback(vc);
+ 	} else if (con_is_visible(vc)
+@@ -2657,7 +2729,11 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+ 	int i, j, k, depth;
+ 	u8 val;
+ 
+-	if (fbcon_is_inactive(vc, info))
++	if (fbcon_is_inactive(vc, info)
++#ifdef CONFIG_FB_CON_DECOR
++			|| vc->vc_num != fg_console
++#endif
++		)
+ 		return;
+ 
+ 	if (!con_is_visible(vc))
+@@ -2683,7 +2759,47 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+ 	} else
+ 		fb_copy_cmap(fb_default_cmap(1 << depth), &palette_cmap);
+ 
+-	fb_set_cmap(&palette_cmap, info);
++	if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++	    info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++
++		u16 *red, *green, *blue;
++		int minlen = min(min(info->var.red.length, info->var.green.length),
++				     info->var.blue.length);
++
++		struct fb_cmap cmap = {
++			.start = 0,
++			.len = (1 << minlen),
++			.red = NULL,
++			.green = NULL,
++			.blue = NULL,
++			.transp = NULL
++		};
++
++		red = kmalloc(256 * sizeof(u16) * 3, GFP_KERNEL);
++
++		if (!red)
++			goto out;
++
++		green = red + 256;
++		blue = green + 256;
++		cmap.red = red;
++		cmap.green = green;
++		cmap.blue = blue;
++
++		for (i = 0; i < cmap.len; i++)
++			red[i] = green[i] = blue[i] = (0xffff * i)/(cmap.len-1);
++
++		fb_set_cmap(&cmap, info);
++		fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++		kfree(red);
++
++		return;
++
++	} else if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++		   info->var.bits_per_pixel == 8 && info->bgdecor.cmap.red != NULL)
++		fb_set_cmap(&info->bgdecor.cmap, info);
++
++out:	fb_set_cmap(&palette_cmap, info);
+ }
+ 
+ static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
+@@ -2908,7 +3024,14 @@ static void fbcon_modechanged(struct fb_info *info)
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 		cols /= vc->vc_font.width;
+ 		rows /= vc->vc_font.height;
+-		vc_resize(vc, cols, rows);
++
++		if (!fbcon_decor_active_nores(info, vc)) {
++			vc_resize(vc, cols, rows);
++		} else {
++			fbcon_decor_disable(vc, 0);
++			fbcon_decor_call_helper("modechange", vc->vc_num);
++		}
++
+ 		updatescrollmode(p, info, vc);
+ 		scrollback_max = 0;
+ 		scrollback_current = 0;
+@@ -2953,7 +3076,8 @@ static void fbcon_set_all_vcs(struct fb_info *info)
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 		cols /= vc->vc_font.width;
+ 		rows /= vc->vc_font.height;
+-		vc_resize(vc, cols, rows);
++		if (!fbcon_decor_active_nores(info, vc))
++			vc_resize(vc, cols, rows);
+ 	}
+ 
+ 	if (fg != -1)
+@@ -3594,6 +3718,7 @@ static void fbcon_exit(void)
+ 		}
+ 	}
+ 
++	fbcon_decor_exit();
+ 	fbcon_has_exited = 1;
+ }
+ 
+diff --git a/drivers/video/console/fbcondecor.c b/drivers/video/console/fbcondecor.c
+new file mode 100644
+index 0000000..65cc0d3
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.c
+@@ -0,0 +1,549 @@
++/*
++ *  linux/drivers/video/console/fbcondecor.c -- Framebuffer console decorations
++ *
++ *  Copyright (C) 2004-2009 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ *  Code based upon "Bootsplash" (C) 2001-2003
++ *       Volker Poplawski <volker@poplawski.de>,
++ *       Stefan Reinauer <stepan@suse.de>,
++ *       Steffen Winterfeldt <snwint@suse.de>,
++ *       Michael Schroeder <mls@suse.de>,
++ *       Ken Wimer <wimer@suse.de>.
++ *
++ *  Compat ioctl support by Thorsten Klein <TK@Thorsten-Klein.de>.
++ *
++ *  This file is subject to the terms and conditions of the GNU General Public
++ *  License.  See the file COPYING in the main directory of this archive for
++ *  more details.
++ *
++ */
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/string.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/vt_kern.h>
++#include <linux/vmalloc.h>
++#include <linux/unistd.h>
++#include <linux/syscalls.h>
++#include <linux/init.h>
++#include <linux/proc_fs.h>
++#include <linux/workqueue.h>
++#include <linux/kmod.h>
++#include <linux/miscdevice.h>
++#include <linux/device.h>
++#include <linux/fs.h>
++#include <linux/compat.h>
++#include <linux/console.h>
++
++#include <linux/uaccess.h>
++#include <asm/irq.h>
++
++#include "fbcon.h"
++#include "fbcondecor.h"
++
++extern signed char con2fb_map[];
++static int fbcon_decor_enable(struct vc_data *vc);
++
++static int initialized;
++
++char fbcon_decor_path[KMOD_PATH_LEN] = "/sbin/fbcondecor_helper";
++EXPORT_SYMBOL(fbcon_decor_path);
++
++int fbcon_decor_call_helper(char *cmd, unsigned short vc)
++{
++	char *envp[] = {
++		"HOME=/",
++		"PATH=/sbin:/bin",
++		NULL
++	};
++
++	char tfb[5];
++	char tcons[5];
++	unsigned char fb = (int) con2fb_map[vc];
++
++	char *argv[] = {
++		fbcon_decor_path,
++		"2",
++		cmd,
++		tcons,
++		tfb,
++		vc_cons[vc].d->vc_decor.theme,
++		NULL
++	};
++
++	snprintf(tfb, 5, "%d", fb);
++	snprintf(tcons, 5, "%d", vc);
++
++	return call_usermodehelper(fbcon_decor_path, argv, envp, UMH_WAIT_EXEC);
++}
++
++/* Disables fbcondecor on a virtual console; called with console sem held. */
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw)
++{
++	struct fb_info *info;
++
++	if (!vc->vc_decor.state)
++		return -EINVAL;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL)
++		return -EINVAL;
++
++	vc->vc_decor.state = 0;
++	vc_resize(vc, info->var.xres / vc->vc_font.width,
++		  info->var.yres / vc->vc_font.height);
++
++	if (fg_console == vc->vc_num && redraw) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++	}
++
++	printk(KERN_INFO "fbcondecor: switched decor state to 'off' on console %d\n",
++			 vc->vc_num);
++
++	return 0;
++}
++
++/* Enables fbcondecor on a virtual console; called with console sem held. */
++static int fbcon_decor_enable(struct vc_data *vc)
++{
++	struct fb_info *info;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (vc->vc_decor.twidth == 0 || vc->vc_decor.theight == 0 ||
++	    info == NULL || vc->vc_decor.state || (!info->bgdecor.data &&
++	    vc->vc_num == fg_console))
++		return -EINVAL;
++
++	vc->vc_decor.state = 1;
++	vc_resize(vc, vc->vc_decor.twidth / vc->vc_font.width,
++		  vc->vc_decor.theight / vc->vc_font.height);
++
++	if (fg_console == vc->vc_num) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++
++	printk(KERN_INFO "fbcondecor: switched decor state to 'on' on console %d\n",
++			 vc->vc_num);
++
++	return 0;
++}
++
++static inline int fbcon_decor_ioctl_dosetstate(struct vc_data *vc, unsigned int state, unsigned char origin)
++{
++	int ret;
++
++	console_lock();
++	if (!state)
++		ret = fbcon_decor_disable(vc, 1);
++	else
++		ret = fbcon_decor_enable(vc);
++	console_unlock();
++
++	return ret;
++}
++
++static inline void fbcon_decor_ioctl_dogetstate(struct vc_data *vc, unsigned int *state)
++{
++	*state = vc->vc_decor.state;
++}
++
++static int fbcon_decor_ioctl_dosetcfg(struct vc_data *vc, struct vc_decor *cfg, unsigned char origin)
++{
++	struct fb_info *info;
++	int len;
++	char *tmp;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL || !cfg->twidth || !cfg->theight ||
++	    cfg->tx + cfg->twidth  > info->var.xres ||
++	    cfg->ty + cfg->theight > info->var.yres)
++		return -EINVAL;
++
++	len = strlen_user(cfg->theme);
++	if (!len || len > FBCON_DECOR_THEME_LEN)
++		return -EINVAL;
++	tmp = kmalloc(len, GFP_KERNEL);
++	if (!tmp)
++		return -ENOMEM;
++	if (copy_from_user(tmp, (void __user *)cfg->theme, len))
++		return -EFAULT;
++	cfg->theme = tmp;
++	cfg->state = 0;
++
++	console_lock();
++	if (vc->vc_decor.state)
++		fbcon_decor_disable(vc, 1);
++	kfree(vc->vc_decor.theme);
++	vc->vc_decor = *cfg;
++	console_unlock();
++
++	printk(KERN_INFO "fbcondecor: console %d using theme '%s'\n",
++			 vc->vc_num, vc->vc_decor.theme);
++	return 0;
++}
++
++static int fbcon_decor_ioctl_dogetcfg(struct vc_data *vc,
++					struct vc_decor *decor)
++{
++	char __user *tmp;
++
++	tmp = decor->theme;
++	*decor = vc->vc_decor;
++	decor->theme = tmp;
++
++	if (vc->vc_decor.theme) {
++		if (copy_to_user(tmp, vc->vc_decor.theme,
++					strlen(vc->vc_decor.theme) + 1))
++			return -EFAULT;
++	} else
++		if (put_user(0, tmp))
++			return -EFAULT;
++
++	return 0;
++}
++
++static int fbcon_decor_ioctl_dosetpic(struct vc_data *vc, struct fb_image *img,
++						unsigned char origin)
++{
++	struct fb_info *info;
++	int len;
++	u8 *tmp;
++
++	if (vc->vc_num != fg_console)
++		return -EINVAL;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL)
++		return -EINVAL;
++
++	if (img->width != info->var.xres || img->height != info->var.yres) {
++		printk(KERN_ERR "fbcondecor: picture dimensions mismatch\n");
++		printk(KERN_ERR "%dx%d vs %dx%d\n", img->width, img->height,
++				info->var.xres, info->var.yres);
++		return -EINVAL;
++	}
++
++	if (img->depth != info->var.bits_per_pixel) {
++		printk(KERN_ERR "fbcondecor: picture depth mismatch\n");
++		return -EINVAL;
++	}
++
++	if (img->depth == 8) {
++		if (!img->cmap.len || !img->cmap.red || !img->cmap.green ||
++		    !img->cmap.blue)
++			return -EINVAL;
++
++		tmp = vmalloc(img->cmap.len * 3 * 2);
++		if (!tmp)
++			return -ENOMEM;
++
++		if (copy_from_user(tmp,
++				(void __user *)img->cmap.red,
++						(img->cmap.len << 1)) ||
++			copy_from_user(tmp + (img->cmap.len << 1),
++				(void __user *)img->cmap.green,
++						(img->cmap.len << 1)) ||
++			copy_from_user(tmp + (img->cmap.len << 2),
++				(void __user *)img->cmap.blue,
++						(img->cmap.len << 1))) {
++			vfree(tmp);
++			return -EFAULT;
++		}
++
++		img->cmap.transp = NULL;
++		img->cmap.red = (u16 *)tmp;
++		img->cmap.green = img->cmap.red + img->cmap.len;
++		img->cmap.blue = img->cmap.green + img->cmap.len;
++	} else {
++		img->cmap.red = NULL;
++	}
++
++	len = ((img->depth + 7) >> 3) * img->width * img->height;
++
++	/*
++	 * Allocate an additional byte so that we never go outside of the
++	 * buffer boundaries in the rendering functions in a 24 bpp mode.
++	 */
++	tmp = vmalloc(len + 1);
++
++	if (!tmp)
++		goto out;
++
++	if (copy_from_user(tmp, (void __user *)img->data, len))
++		goto out;
++
++	img->data = tmp;
++
++	console_lock();
++
++	if (info->bgdecor.data)
++		vfree((u8 *)info->bgdecor.data);
++	if (info->bgdecor.cmap.red)
++		vfree(info->bgdecor.cmap.red);
++
++	info->bgdecor = *img;
++
++	if (fbcon_decor_active_vc(vc) && fg_console == vc->vc_num) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++
++	console_unlock();
++
++	return 0;
++
++out:
++	if (img->cmap.red)
++		vfree(img->cmap.red);
++
++	if (tmp)
++		vfree(tmp);
++	return -ENOMEM;
++}
++
++static long fbcon_decor_ioctl(struct file *filp, u_int cmd, u_long arg)
++{
++	struct fbcon_decor_iowrapper __user *wrapper = (void __user *) arg;
++	struct vc_data *vc = NULL;
++	unsigned short vc_num = 0;
++	unsigned char origin = 0;
++	void __user *data = NULL;
++
++	if (!access_ok(VERIFY_READ, wrapper,
++			sizeof(struct fbcon_decor_iowrapper)))
++		return -EFAULT;
++
++	__get_user(vc_num, &wrapper->vc);
++	__get_user(origin, &wrapper->origin);
++	__get_user(data, &wrapper->data);
++
++	if (!vc_cons_allocated(vc_num))
++		return -EINVAL;
++
++	vc = vc_cons[vc_num].d;
++
++	switch (cmd) {
++	case FBIOCONDECOR_SETPIC:
++	{
++		struct fb_image img;
++
++		if (copy_from_user(&img, (struct fb_image __user *)data, sizeof(struct fb_image)))
++			return -EFAULT;
++
++		return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++	}
++	case FBIOCONDECOR_SETCFG:
++	{
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++			return -EFAULT;
++
++		return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++	}
++	case FBIOCONDECOR_GETCFG:
++	{
++		int rval;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++			return -EFAULT;
++
++		rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++		if (copy_to_user(data, &cfg, sizeof(struct vc_decor)))
++			return -EFAULT;
++		return rval;
++	}
++	case FBIOCONDECOR_SETSTATE:
++	{
++		unsigned int state = 0;
++
++		if (get_user(state, (unsigned int __user *)data))
++			return -EFAULT;
++		return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++	}
++	case FBIOCONDECOR_GETSTATE:
++	{
++		unsigned int state = 0;
++
++		fbcon_decor_ioctl_dogetstate(vc, &state);
++		return put_user(state, (unsigned int __user *)data);
++	}
++
++	default:
++		return -ENOIOCTLCMD;
++	}
++}
++
++#ifdef CONFIG_COMPAT
++
++static long fbcon_decor_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
++{
++	struct fbcon_decor_iowrapper32 __user *wrapper = (void __user *)arg;
++	struct vc_data *vc = NULL;
++	unsigned short vc_num = 0;
++	unsigned char origin = 0;
++	compat_uptr_t data_compat = 0;
++	void __user *data = NULL;
++
++	if (!access_ok(VERIFY_READ, wrapper,
++			sizeof(struct fbcon_decor_iowrapper32)))
++		return -EFAULT;
++
++	__get_user(vc_num, &wrapper->vc);
++	__get_user(origin, &wrapper->origin);
++	__get_user(data_compat, &wrapper->data);
++	data = compat_ptr(data_compat);
++
++	if (!vc_cons_allocated(vc_num))
++		return -EINVAL;
++
++	vc = vc_cons[vc_num].d;
++
++	switch (cmd) {
++	case FBIOCONDECOR_SETPIC32:
++	{
++		struct fb_image32 img_compat;
++		struct fb_image img;
++
++		if (copy_from_user(&img_compat, (struct fb_image32 __user *)data, sizeof(struct fb_image32)))
++			return -EFAULT;
++
++		fb_image_from_compat(img, img_compat);
++
++		return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++	}
++
++	case FBIOCONDECOR_SETCFG32:
++	{
++		struct vc_decor32 cfg_compat;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++			return -EFAULT;
++
++		vc_decor_from_compat(cfg, cfg_compat);
++
++		return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++	}
++
++	case FBIOCONDECOR_GETCFG32:
++	{
++		int rval;
++		struct vc_decor32 cfg_compat;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++			return -EFAULT;
++		cfg.theme = compat_ptr(cfg_compat.theme);
++
++		rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++		vc_decor_to_compat(cfg_compat, cfg);
++
++		if (copy_to_user((struct vc_decor32 __user *)data, &cfg_compat, sizeof(struct vc_decor32)))
++			return -EFAULT;
++		return rval;
++	}
++
++	case FBIOCONDECOR_SETSTATE32:
++	{
++		compat_uint_t state_compat = 0;
++		unsigned int state = 0;
++
++		if (get_user(state_compat, (compat_uint_t __user *)data))
++			return -EFAULT;
++
++		state = (unsigned int)state_compat;
++
++		return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++	}
++
++	case FBIOCONDECOR_GETSTATE32:
++	{
++		compat_uint_t state_compat = 0;
++		unsigned int state = 0;
++
++		fbcon_decor_ioctl_dogetstate(vc, &state);
++		state_compat = (compat_uint_t)state;
++
++		return put_user(state_compat, (compat_uint_t __user *)data);
++	}
++
++	default:
++		return -ENOIOCTLCMD;
++	}
++}
++#else
++  #define fbcon_decor_compat_ioctl NULL
++#endif
++
++static struct file_operations fbcon_decor_ops = {
++	.owner = THIS_MODULE,
++	.unlocked_ioctl = fbcon_decor_ioctl,
++	.compat_ioctl = fbcon_decor_compat_ioctl
++};
++
++static struct miscdevice fbcon_decor_dev = {
++	.minor = MISC_DYNAMIC_MINOR,
++	.name = "fbcondecor",
++	.fops = &fbcon_decor_ops
++};
++
++void fbcon_decor_reset(void)
++{
++	int i;
++
++	for (i = 0; i < num_registered_fb; i++) {
++		registered_fb[i]->bgdecor.data = NULL;
++		registered_fb[i]->bgdecor.cmap.red = NULL;
++	}
++
++	for (i = 0; i < MAX_NR_CONSOLES && vc_cons[i].d; i++) {
++		vc_cons[i].d->vc_decor.state = vc_cons[i].d->vc_decor.twidth =
++						vc_cons[i].d->vc_decor.theight = 0;
++		vc_cons[i].d->vc_decor.theme = NULL;
++	}
++}
++
++int fbcon_decor_init(void)
++{
++	int i;
++
++	fbcon_decor_reset();
++
++	if (initialized)
++		return 0;
++
++	i = misc_register(&fbcon_decor_dev);
++	if (i) {
++		printk(KERN_ERR "fbcondecor: failed to register device\n");
++		return i;
++	}
++
++	fbcon_decor_call_helper("init", 0);
++	initialized = 1;
++	return 0;
++}
++
++int fbcon_decor_exit(void)
++{
++	fbcon_decor_reset();
++	return 0;
++}
+diff --git a/drivers/video/console/fbcondecor.h b/drivers/video/console/fbcondecor.h
+new file mode 100644
+index 0000000..c49386c
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.h
+@@ -0,0 +1,77 @@
++/*
++ *  linux/drivers/video/console/fbcondecor.h -- Framebuffer Console Decoration headers
++ *
++ *  Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ */
++
++#ifndef __FBCON_DECOR_H
++#define __FBCON_DECOR_H
++
++#ifndef _LINUX_FB_H
++#include <linux/fb.h>
++#endif
++
++/* This is needed for vc_cons in fbcmap.c */
++#include <linux/vt_kern.h>
++
++struct fb_cursor;
++struct fb_info;
++struct vc_data;
++
++#ifdef CONFIG_FB_CON_DECOR
++/* fbcondecor.c */
++int fbcon_decor_init(void);
++int fbcon_decor_exit(void);
++int fbcon_decor_call_helper(char *cmd, unsigned short cons);
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw);
++
++/* cfbcondecor.c */
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx);
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor);
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width);
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only);
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank);
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width);
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes, int srclinesbytes, int bpp);
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc);
++
++/* vt.c */
++void acquire_console_sem(void);
++void release_console_sem(void);
++void do_unblank_screen(int entering_gfx);
++
++/* struct vc_data *y */
++#define fbcon_decor_active_vc(y) (y->vc_decor.state && y->vc_decor.theme)
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active_nores(x, y) (x->bgdecor.data && fbcon_decor_active_vc(y))
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active(x, y) (fbcon_decor_active_nores(x, y) &&	\
++				x->bgdecor.width == x->var.xres &&	\
++				x->bgdecor.height == x->var.yres &&	\
++				x->bgdecor.depth == x->var.bits_per_pixel)
++
++#else /* CONFIG_FB_CON_DECOR */
++
++static inline void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx) {}
++static inline void fbcon_decor_putc(struct vc_data *vc, struct fb_info *info, int c, int ypos, int xpos) {}
++static inline void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor) {}
++static inline void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width) {}
++static inline void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only) {}
++static inline void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank) {}
++static inline void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width) {}
++static inline void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc) {}
++static inline int fbcon_decor_call_helper(char *cmd, unsigned short cons) { return 0; }
++static inline int fbcon_decor_init(void) { return 0; }
++static inline int fbcon_decor_exit(void) { return 0; }
++static inline int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw) { return 0; }
++
++#define fbcon_decor_active_vc(y) (0)
++#define fbcon_decor_active_nores(x, y) (0)
++#define fbcon_decor_active(x, y) (0)
++
++#endif /* CONFIG_FB_CON_DECOR */
++
++#endif /* __FBCON_DECOR_H */
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 88b008f..c84113d 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -1216,7 +1216,6 @@ config FB_MATROX
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+ 	select FB_CFB_IMAGEBLIT
+-	select FB_TILEBLITTING
+ 	select FB_MACMODES if PPC_PMAC
+ 	---help---
+ 	  Say Y here if you have a Matrox Millennium, Matrox Millennium II,
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index f89245b..c2c12ce 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -17,6 +17,8 @@
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
+ 
++#include "../../console/fbcondecor.h"
++
+ static u16 red2[] __read_mostly = {
+     0x0000, 0xaaaa
+ };
+@@ -254,9 +256,12 @@ int fb_set_cmap(struct fb_cmap *cmap, struct fb_info *info)
+ 				break;
+ 		}
+ 	}
+-	if (rc == 0)
++	if (rc == 0) {
+ 		fb_copy_cmap(cmap, &info->cmap);
+-
++		if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++		    info->fix.visual == FB_VISUAL_DIRECTCOLOR)
++			fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++	}
+ 	return rc;
+ }
+ 
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 76c1ad9..fafc0af 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1251,15 +1251,6 @@ struct fb_fix_screeninfo32 {
+ 	u16			reserved[3];
+ };
+ 
+-struct fb_cmap32 {
+-	u32			start;
+-	u32			len;
+-	compat_caddr_t	red;
+-	compat_caddr_t	green;
+-	compat_caddr_t	blue;
+-	compat_caddr_t	transp;
+-};
+-
+ static int fb_getput_cmap(struct fb_info *info, unsigned int cmd,
+ 			  unsigned long arg)
+ {
+diff --git a/include/linux/console_decor.h b/include/linux/console_decor.h
+new file mode 100644
+index 0000000..1514355
+--- /dev/null
++++ b/include/linux/console_decor.h
+@@ -0,0 +1,46 @@
++#ifndef _LINUX_CONSOLE_DECOR_H_
++#define _LINUX_CONSOLE_DECOR_H_ 1
++
++/* A structure used by the framebuffer console decorations (drivers/video/console/fbcondecor.c) */
++struct vc_decor {
++	__u8 bg_color;				/* The color that is to be treated as transparent */
++	__u8 state;				/* Current decor state: 0 = off, 1 = on */
++	__u16 tx, ty;				/* Top left corner coordinates of the text field */
++	__u16 twidth, theight;			/* Width and height of the text field */
++	char *theme;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++
++struct vc_decor32 {
++	__u8 bg_color;				/* The color that is to be treated as transparent */
++	__u8 state;				/* Current decor state: 0 = off, 1 = on */
++	__u16 tx, ty;				/* Top left corner coordinates of the text field */
++	__u16 twidth, theight;			/* Width and height of the text field */
++	compat_uptr_t theme;
++};
++
++#define vc_decor_from_compat(to, from) \
++	(to).bg_color = (from).bg_color; \
++	(to).state    = (from).state; \
++	(to).tx       = (from).tx; \
++	(to).ty       = (from).ty; \
++	(to).twidth   = (from).twidth; \
++	(to).theight  = (from).theight; \
++	(to).theme    = compat_ptr((from).theme)
++
++#define vc_decor_to_compat(to, from) \
++	(to).bg_color = (from).bg_color; \
++	(to).state    = (from).state; \
++	(to).tx       = (from).tx; \
++	(to).ty       = (from).ty; \
++	(to).twidth   = (from).twidth; \
++	(to).theight  = (from).theight; \
++	(to).theme    = ptr_to_compat((from).theme)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#endif
+diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
+index 6fd3c90..c649555 100644
+--- a/include/linux/console_struct.h
++++ b/include/linux/console_struct.h
+@@ -20,6 +20,7 @@ struct vt_struct;
+ struct uni_pagedir;
+ 
+ #define NPAR 16
++#include <linux/console_decor.h>
+ 
+ /*
+  * Example: vc_data of a console that was scrolled 3 lines down.
+@@ -140,6 +141,8 @@ struct vc_data {
+ 	struct uni_pagedir *vc_uni_pagedir;
+ 	struct uni_pagedir **vc_uni_pagedir_loc; /* [!] Location of uni_pagedir variable for this console */
+ 	bool vc_panic_force_write; /* when oops/panic this VC can accept forced output/blanking */
++
++	struct vc_decor vc_decor;
+ 	/* additional information is in vt_kern.h */
+ };
+ 
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index a964d07..672cc64 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -238,6 +238,34 @@ struct fb_deferred_io {
+ };
+ #endif
+ 
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_image32 {
++	__u32 dx;			/* Where to place image */
++	__u32 dy;
++	__u32 width;			/* Size of image */
++	__u32 height;
++	__u32 fg_color;			/* Only used when a mono bitmap */
++	__u32 bg_color;
++	__u8  depth;			/* Depth of the image */
++	const compat_uptr_t data;	/* Pointer to image data */
++	struct fb_cmap32 cmap;		/* color map info */
++};
++
++#define fb_image_from_compat(to, from) \
++	(to).dx       = (from).dx; \
++	(to).dy       = (from).dy; \
++	(to).width    = (from).width; \
++	(to).height   = (from).height; \
++	(to).fg_color = (from).fg_color; \
++	(to).bg_color = (from).bg_color; \
++	(to).depth    = (from).depth; \
++	(to).data     = compat_ptr((from).data); \
++	fb_cmap_from_compat((to).cmap, (from).cmap)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /*
+  * Frame buffer operations
+  *
+@@ -508,6 +536,9 @@ struct fb_info {
+ #define FBINFO_STATE_SUSPENDED	1
+ 	u32 state;			/* Hardware state i.e suspend */
+ 	void *fbcon_par;                /* fbcon use-only private area */
++
++	struct fb_image bgdecor;
++
+ 	/* From here on everything is device dependent */
+ 	void *par;
+ 	/* we need the PCI or similar aperture base/size not
+diff --git a/include/uapi/linux/fb.h b/include/uapi/linux/fb.h
+index fb795c3..4b57c67 100644
+--- a/include/uapi/linux/fb.h
++++ b/include/uapi/linux/fb.h
+@@ -8,6 +8,23 @@
+ 
+ #define FB_MAX			32	/* sufficient for now */
+ 
++struct fbcon_decor_iowrapper {
++	unsigned short vc;		/* Virtual console */
++	unsigned char origin;		/* Point of origin of the request */
++	void *data;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++struct fbcon_decor_iowrapper32 {
++	unsigned short vc;		/* Virtual console */
++	unsigned char origin;		/* Point of origin of the request */
++	compat_uptr_t data;
++};
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /* ioctls
+    0x46 is 'F'								*/
+ #define FBIOGET_VSCREENINFO	0x4600
+@@ -35,6 +52,25 @@
+ #define FBIOGET_DISPINFO        0x4618
+ #define FBIO_WAITFORVSYNC	_IOW('F', 0x20, __u32)
+ 
++#define FBIOCONDECOR_SETCFG	_IOWR('F', 0x19, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETCFG	_IOR('F', 0x1A, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETSTATE	_IOWR('F', 0x1B, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETSTATE	_IOR('F', 0x1C, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETPIC	_IOWR('F', 0x1D, struct fbcon_decor_iowrapper)
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#define FBIOCONDECOR_SETCFG32	_IOWR('F', 0x19, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETCFG32	_IOR('F', 0x1A, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETSTATE32	_IOWR('F', 0x1B, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETSTATE32	_IOR('F', 0x1C, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETPIC32	_IOWR('F', 0x1D, struct fbcon_decor_iowrapper32)
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#define FBCON_DECOR_THEME_LEN		128	/* Maximum length of a theme name */
++#define FBCON_DECOR_IO_ORIG_KERNEL	0	/* Kernel ioctl origin */
++#define FBCON_DECOR_IO_ORIG_USER	1	/* User ioctl origin */
++
+ #define FB_TYPE_PACKED_PIXELS		0	/* Packed Pixels	*/
+ #define FB_TYPE_PLANES			1	/* Non interleaved planes */
+ #define FB_TYPE_INTERLEAVED_PLANES	2	/* Interleaved planes	*/
+@@ -277,6 +313,29 @@ struct fb_var_screeninfo {
+ 	__u32 reserved[4];		/* Reserved for future compatibility */
+ };
+ 
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_cmap32 {
++	__u32 start;
++	__u32 len;			/* Number of entries */
++	compat_uptr_t red;		/* Red values	*/
++	compat_uptr_t green;
++	compat_uptr_t blue;
++	compat_uptr_t transp;		/* transparency, can be NULL */
++};
++
++#define fb_cmap_from_compat(to, from) \
++	(to).start  = (from).start; \
++	(to).len    = (from).len; \
++	(to).red    = compat_ptr((from).red); \
++	(to).green  = compat_ptr((from).green); \
++	(to).blue   = compat_ptr((from).blue); \
++	(to).transp = compat_ptr((from).transp)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++
+ struct fb_cmap {
+ 	__u32 start;			/* First entry	*/
+ 	__u32 len;			/* Number of entries */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 6ee416e..d2c2425 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -149,6 +149,10 @@ static const int cap_last_cap = CAP_LAST_CAP;
+ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #endif
+ 
++#ifdef CONFIG_FB_CON_DECOR
++extern char fbcon_decor_path[];
++#endif
++
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
+@@ -266,6 +270,15 @@ static struct ctl_table sysctl_base_table[] = {
+ 		.mode		= 0555,
+ 		.child		= dev_table,
+ 	},
++#ifdef CONFIG_FB_CON_DECOR
++	{
++		.procname	= "fbcondecor",
++		.data		= &fbcon_decor_path,
++		.maxlen		= KMOD_PATH_LEN,
++		.mode		= 0644,
++		.proc_handler	= &proc_dostring,
++	},
++#endif
+ 	{ }
+ };
+ 


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-10-16 19:21 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-10-16 19:21 UTC (permalink / raw
  To: gentoo-commits

commit:     29a5a3247fd5e7a469a377914052a120ef0e4d05
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Oct 16 19:21:08 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Oct 16 19:21:08 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=29a5a324

Linux patch 4.8.2

 0000_README            |    4 +
 1001_linux-4.8.2.patch | 1841 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1845 insertions(+)

diff --git a/0000_README b/0000_README
index 4af14fd..07a39ba 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-4.8.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.1
 
+Patch:  1001_linux-4.8.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-4.8.2.patch b/1001_linux-4.8.2.patch
new file mode 100644
index 0000000..353b6a8
--- /dev/null
+++ b/1001_linux-4.8.2.patch
@@ -0,0 +1,1841 @@
+diff --git a/Documentation/virtual/kvm/devices/vcpu.txt b/Documentation/virtual/kvm/devices/vcpu.txt
+index c04165868faf..02f50686c418 100644
+--- a/Documentation/virtual/kvm/devices/vcpu.txt
++++ b/Documentation/virtual/kvm/devices/vcpu.txt
+@@ -30,4 +30,6 @@ Returns: -ENODEV: PMUv3 not supported
+                  attribute
+          -EBUSY: PMUv3 already initialized
+ 
+-Request the initialization of the PMUv3.
++Request the initialization of the PMUv3.  This must be done after creating the
++in-kernel irqchip.  Creating a PMU with a userspace irqchip is currently not
++supported.
+diff --git a/Makefile b/Makefile
+index 75db9f3988f3..bf6e44a421df 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arm/boot/dts/armada-390.dtsi b/arch/arm/boot/dts/armada-390.dtsi
+index 094e39c66039..6cd18d8aaac7 100644
+--- a/arch/arm/boot/dts/armada-390.dtsi
++++ b/arch/arm/boot/dts/armada-390.dtsi
+@@ -47,6 +47,8 @@
+ #include "armada-39x.dtsi"
+ 
+ / {
++	compatible = "marvell,armada390";
++
+ 	soc {
+ 		internal-regs {
+ 			pinctrl@18000 {
+@@ -54,4 +56,5 @@
+ 				reg = <0x18000 0x20>;
+ 			};
+ 		};
++	};
+ };
+diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
+index 74a9b6c394f5..9dc83b09d987 100644
+--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
+@@ -5,6 +5,7 @@
+ #include <dt-bindings/reset/qcom,gcc-msm8960.h>
+ #include <dt-bindings/clock/qcom,mmcc-msm8960.h>
+ #include <dt-bindings/soc/qcom,gsbi.h>
++#include <dt-bindings/interrupt-controller/irq.h>
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ / {
+ 	model = "Qualcomm APQ8064";
+@@ -559,22 +560,50 @@
+ 					compatible = "qcom,pm8921-gpio",
+ 						     "qcom,ssbi-gpio";
+ 					reg = <0x150>;
+-					interrupts = <192 1>, <193 1>, <194 1>,
+-						     <195 1>, <196 1>, <197 1>,
+-						     <198 1>, <199 1>, <200 1>,
+-						     <201 1>, <202 1>, <203 1>,
+-						     <204 1>, <205 1>, <206 1>,
+-						     <207 1>, <208 1>, <209 1>,
+-						     <210 1>, <211 1>, <212 1>,
+-						     <213 1>, <214 1>, <215 1>,
+-						     <216 1>, <217 1>, <218 1>,
+-						     <219 1>, <220 1>, <221 1>,
+-						     <222 1>, <223 1>, <224 1>,
+-						     <225 1>, <226 1>, <227 1>,
+-						     <228 1>, <229 1>, <230 1>,
+-						     <231 1>, <232 1>, <233 1>,
+-						     <234 1>, <235 1>;
+-
++					interrupts = <192 IRQ_TYPE_NONE>,
++						     <193 IRQ_TYPE_NONE>,
++						     <194 IRQ_TYPE_NONE>,
++						     <195 IRQ_TYPE_NONE>,
++						     <196 IRQ_TYPE_NONE>,
++						     <197 IRQ_TYPE_NONE>,
++						     <198 IRQ_TYPE_NONE>,
++						     <199 IRQ_TYPE_NONE>,
++						     <200 IRQ_TYPE_NONE>,
++						     <201 IRQ_TYPE_NONE>,
++						     <202 IRQ_TYPE_NONE>,
++						     <203 IRQ_TYPE_NONE>,
++						     <204 IRQ_TYPE_NONE>,
++						     <205 IRQ_TYPE_NONE>,
++						     <206 IRQ_TYPE_NONE>,
++						     <207 IRQ_TYPE_NONE>,
++						     <208 IRQ_TYPE_NONE>,
++						     <209 IRQ_TYPE_NONE>,
++						     <210 IRQ_TYPE_NONE>,
++						     <211 IRQ_TYPE_NONE>,
++						     <212 IRQ_TYPE_NONE>,
++						     <213 IRQ_TYPE_NONE>,
++						     <214 IRQ_TYPE_NONE>,
++						     <215 IRQ_TYPE_NONE>,
++						     <216 IRQ_TYPE_NONE>,
++						     <217 IRQ_TYPE_NONE>,
++						     <218 IRQ_TYPE_NONE>,
++						     <219 IRQ_TYPE_NONE>,
++						     <220 IRQ_TYPE_NONE>,
++						     <221 IRQ_TYPE_NONE>,
++						     <222 IRQ_TYPE_NONE>,
++						     <223 IRQ_TYPE_NONE>,
++						     <224 IRQ_TYPE_NONE>,
++						     <225 IRQ_TYPE_NONE>,
++						     <226 IRQ_TYPE_NONE>,
++						     <227 IRQ_TYPE_NONE>,
++						     <228 IRQ_TYPE_NONE>,
++						     <229 IRQ_TYPE_NONE>,
++						     <230 IRQ_TYPE_NONE>,
++						     <231 IRQ_TYPE_NONE>,
++						     <232 IRQ_TYPE_NONE>,
++						     <233 IRQ_TYPE_NONE>,
++						     <234 IRQ_TYPE_NONE>,
++						     <235 IRQ_TYPE_NONE>;
+ 					gpio-controller;
+ 					#gpio-cells = <2>;
+ 
+@@ -587,9 +616,18 @@
+ 					gpio-controller;
+ 					#gpio-cells = <2>;
+ 					interrupts =
+-					<128 1>, <129 1>, <130 1>, <131 1>,
+-					<132 1>, <133 1>, <134 1>, <135 1>,
+-					<136 1>, <137 1>, <138 1>, <139 1>;
++					<128 IRQ_TYPE_NONE>,
++					<129 IRQ_TYPE_NONE>,
++					<130 IRQ_TYPE_NONE>,
++					<131 IRQ_TYPE_NONE>,
++					<132 IRQ_TYPE_NONE>,
++					<133 IRQ_TYPE_NONE>,
++					<134 IRQ_TYPE_NONE>,
++					<135 IRQ_TYPE_NONE>,
++					<136 IRQ_TYPE_NONE>,
++					<137 IRQ_TYPE_NONE>,
++					<138 IRQ_TYPE_NONE>,
++					<139 IRQ_TYPE_NONE>;
+ 				};
+ 
+ 				rtc@11d {
+diff --git a/arch/arm/boot/dts/qcom-msm8660.dtsi b/arch/arm/boot/dts/qcom-msm8660.dtsi
+index acbe71febe13..8c65e0d82559 100644
+--- a/arch/arm/boot/dts/qcom-msm8660.dtsi
++++ b/arch/arm/boot/dts/qcom-msm8660.dtsi
+@@ -2,6 +2,7 @@
+ 
+ /include/ "skeleton.dtsi"
+ 
++#include <dt-bindings/interrupt-controller/irq.h>
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ #include <dt-bindings/clock/qcom,gcc-msm8660.h>
+ #include <dt-bindings/soc/qcom,gsbi.h>
+@@ -159,21 +160,50 @@
+ 						     "qcom,ssbi-gpio";
+ 					reg = <0x150>;
+ 					interrupt-parent = <&pmicintc>;
+-					interrupts = <192 1>, <193 1>, <194 1>,
+-						     <195 1>, <196 1>, <197 1>,
+-						     <198 1>, <199 1>, <200 1>,
+-						     <201 1>, <202 1>, <203 1>,
+-						     <204 1>, <205 1>, <206 1>,
+-						     <207 1>, <208 1>, <209 1>,
+-						     <210 1>, <211 1>, <212 1>,
+-						     <213 1>, <214 1>, <215 1>,
+-						     <216 1>, <217 1>, <218 1>,
+-						     <219 1>, <220 1>, <221 1>,
+-						     <222 1>, <223 1>, <224 1>,
+-						     <225 1>, <226 1>, <227 1>,
+-						     <228 1>, <229 1>, <230 1>,
+-						     <231 1>, <232 1>, <233 1>,
+-						     <234 1>, <235 1>;
++					interrupts = <192 IRQ_TYPE_NONE>,
++						     <193 IRQ_TYPE_NONE>,
++						     <194 IRQ_TYPE_NONE>,
++						     <195 IRQ_TYPE_NONE>,
++						     <196 IRQ_TYPE_NONE>,
++						     <197 IRQ_TYPE_NONE>,
++						     <198 IRQ_TYPE_NONE>,
++						     <199 IRQ_TYPE_NONE>,
++						     <200 IRQ_TYPE_NONE>,
++						     <201 IRQ_TYPE_NONE>,
++						     <202 IRQ_TYPE_NONE>,
++						     <203 IRQ_TYPE_NONE>,
++						     <204 IRQ_TYPE_NONE>,
++						     <205 IRQ_TYPE_NONE>,
++						     <206 IRQ_TYPE_NONE>,
++						     <207 IRQ_TYPE_NONE>,
++						     <208 IRQ_TYPE_NONE>,
++						     <209 IRQ_TYPE_NONE>,
++						     <210 IRQ_TYPE_NONE>,
++						     <211 IRQ_TYPE_NONE>,
++						     <212 IRQ_TYPE_NONE>,
++						     <213 IRQ_TYPE_NONE>,
++						     <214 IRQ_TYPE_NONE>,
++						     <215 IRQ_TYPE_NONE>,
++						     <216 IRQ_TYPE_NONE>,
++						     <217 IRQ_TYPE_NONE>,
++						     <218 IRQ_TYPE_NONE>,
++						     <219 IRQ_TYPE_NONE>,
++						     <220 IRQ_TYPE_NONE>,
++						     <221 IRQ_TYPE_NONE>,
++						     <222 IRQ_TYPE_NONE>,
++						     <223 IRQ_TYPE_NONE>,
++						     <224 IRQ_TYPE_NONE>,
++						     <225 IRQ_TYPE_NONE>,
++						     <226 IRQ_TYPE_NONE>,
++						     <227 IRQ_TYPE_NONE>,
++						     <228 IRQ_TYPE_NONE>,
++						     <229 IRQ_TYPE_NONE>,
++						     <230 IRQ_TYPE_NONE>,
++						     <231 IRQ_TYPE_NONE>,
++						     <232 IRQ_TYPE_NONE>,
++						     <233 IRQ_TYPE_NONE>,
++						     <234 IRQ_TYPE_NONE>,
++						     <235 IRQ_TYPE_NONE>;
+ 					gpio-controller;
+ 					#gpio-cells = <2>;
+ 
+@@ -187,9 +217,18 @@
+ 					#gpio-cells = <2>;
+ 					interrupt-parent = <&pmicintc>;
+ 					interrupts =
+-					<128 1>, <129 1>, <130 1>, <131 1>,
+-					<132 1>, <133 1>, <134 1>, <135 1>,
+-					<136 1>, <137 1>, <138 1>, <139 1>;
++					<128 IRQ_TYPE_NONE>,
++					<129 IRQ_TYPE_NONE>,
++					<130 IRQ_TYPE_NONE>,
++					<131 IRQ_TYPE_NONE>,
++					<132 IRQ_TYPE_NONE>,
++					<133 IRQ_TYPE_NONE>,
++					<134 IRQ_TYPE_NONE>,
++					<135 IRQ_TYPE_NONE>,
++					<136 IRQ_TYPE_NONE>,
++					<137 IRQ_TYPE_NONE>,
++					<138 IRQ_TYPE_NONE>,
++					<139 IRQ_TYPE_NONE>;
+ 				};
+ 
+ 				pwrkey@1c {
+diff --git a/arch/arm/include/asm/delay.h b/arch/arm/include/asm/delay.h
+index b7a428154355..b1ce037e4380 100644
+--- a/arch/arm/include/asm/delay.h
++++ b/arch/arm/include/asm/delay.h
+@@ -10,7 +10,7 @@
+ #include <asm/param.h>	/* HZ */
+ 
+ #define MAX_UDELAY_MS	2
+-#define UDELAY_MULT	UL(2047 * HZ + 483648 * HZ / 1000000)
++#define UDELAY_MULT	UL(2147 * HZ + 483648 * HZ / 1000000)
+ #define UDELAY_SHIFT	31
+ 
+ #ifndef __ASSEMBLY__
+diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
+index d9751a4769e7..d34fd72172b6 100644
+--- a/arch/arm64/kernel/stacktrace.c
++++ b/arch/arm64/kernel/stacktrace.c
+@@ -43,6 +43,9 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
+ 	unsigned long fp = frame->fp;
+ 	unsigned long irq_stack_ptr;
+ 
++	if (!tsk)
++		tsk = current;
++
+ 	/*
+ 	 * Switching between stacks is valid when tracing current and in
+ 	 * non-preemptible context.
+@@ -67,7 +70,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
+ 	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
+ 
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+-	if (tsk && tsk->ret_stack &&
++	if (tsk->ret_stack &&
+ 			(frame->pc == (unsigned long)return_to_handler)) {
+ 		/*
+ 		 * This is a case where function graph tracer has
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index e04f83873af7..df06750846de 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -142,6 +142,11 @@ static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
+ 	unsigned long irq_stack_ptr;
+ 	int skip;
+ 
++	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
++
++	if (!tsk)
++		tsk = current;
++
+ 	/*
+ 	 * Switching between stacks is valid when tracing current and in
+ 	 * non-preemptible context.
+@@ -151,11 +156,6 @@ static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
+ 	else
+ 		irq_stack_ptr = 0;
+ 
+-	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
+-
+-	if (!tsk)
+-		tsk = current;
+-
+ 	if (tsk == current) {
+ 		frame.fp = (unsigned long)__builtin_frame_address(0);
+ 		frame.sp = current_stack_pointer;
+diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
+index e788515f766b..43853ec6e160 100644
+--- a/arch/mips/kvm/emulate.c
++++ b/arch/mips/kvm/emulate.c
+@@ -846,6 +846,47 @@ enum emulation_result kvm_mips_emul_tlbr(struct kvm_vcpu *vcpu)
+ 	return EMULATE_FAIL;
+ }
+ 
++/**
++ * kvm_mips_invalidate_guest_tlb() - Indicates a change in guest MMU map.
++ * @vcpu:	VCPU with changed mappings.
++ * @tlb:	TLB entry being removed.
++ *
++ * This is called to indicate a single change in guest MMU mappings, so that we
++ * can arrange TLB flushes on this and other CPUs.
++ */
++static void kvm_mips_invalidate_guest_tlb(struct kvm_vcpu *vcpu,
++					  struct kvm_mips_tlb *tlb)
++{
++	int cpu, i;
++	bool user;
++
++	/* No need to flush for entries which are already invalid */
++	if (!((tlb->tlb_lo[0] | tlb->tlb_lo[1]) & ENTRYLO_V))
++		return;
++	/* User address space doesn't need flushing for KSeg2/3 changes */
++	user = tlb->tlb_hi < KVM_GUEST_KSEG0;
++
++	preempt_disable();
++
++	/*
++	 * Probe the shadow host TLB for the entry being overwritten, if one
++	 * matches, invalidate it
++	 */
++	kvm_mips_host_tlb_inv(vcpu, tlb->tlb_hi);
++
++	/* Invalidate the whole ASID on other CPUs */
++	cpu = smp_processor_id();
++	for_each_possible_cpu(i) {
++		if (i == cpu)
++			continue;
++		if (user)
++			vcpu->arch.guest_user_asid[i] = 0;
++		vcpu->arch.guest_kernel_asid[i] = 0;
++	}
++
++	preempt_enable();
++}
++
+ /* Write Guest TLB Entry @ Index */
+ enum emulation_result kvm_mips_emul_tlbwi(struct kvm_vcpu *vcpu)
+ {
+@@ -865,11 +906,8 @@ enum emulation_result kvm_mips_emul_tlbwi(struct kvm_vcpu *vcpu)
+ 	}
+ 
+ 	tlb = &vcpu->arch.guest_tlb[index];
+-	/*
+-	 * Probe the shadow host TLB for the entry being overwritten, if one
+-	 * matches, invalidate it
+-	 */
+-	kvm_mips_host_tlb_inv(vcpu, tlb->tlb_hi);
++
++	kvm_mips_invalidate_guest_tlb(vcpu, tlb);
+ 
+ 	tlb->tlb_mask = kvm_read_c0_guest_pagemask(cop0);
+ 	tlb->tlb_hi = kvm_read_c0_guest_entryhi(cop0);
+@@ -898,11 +936,7 @@ enum emulation_result kvm_mips_emul_tlbwr(struct kvm_vcpu *vcpu)
+ 
+ 	tlb = &vcpu->arch.guest_tlb[index];
+ 
+-	/*
+-	 * Probe the shadow host TLB for the entry being overwritten, if one
+-	 * matches, invalidate it
+-	 */
+-	kvm_mips_host_tlb_inv(vcpu, tlb->tlb_hi);
++	kvm_mips_invalidate_guest_tlb(vcpu, tlb);
+ 
+ 	tlb->tlb_mask = kvm_read_c0_guest_pagemask(cop0);
+ 	tlb->tlb_hi = kvm_read_c0_guest_entryhi(cop0);
+@@ -1026,6 +1060,7 @@ enum emulation_result kvm_mips_emulate_CP0(union mips_instruction inst,
+ 	enum emulation_result er = EMULATE_DONE;
+ 	u32 rt, rd, sel;
+ 	unsigned long curr_pc;
++	int cpu, i;
+ 
+ 	/*
+ 	 * Update PC and hold onto current PC in case there is
+@@ -1135,8 +1170,16 @@ enum emulation_result kvm_mips_emulate_CP0(union mips_instruction inst,
+ 							& KVM_ENTRYHI_ASID,
+ 						nasid);
+ 
++					preempt_disable();
+ 					/* Blow away the shadow host TLBs */
+ 					kvm_mips_flush_host_tlb(1);
++					cpu = smp_processor_id();
++					for_each_possible_cpu(i)
++						if (i != cpu) {
++							vcpu->arch.guest_user_asid[i] = 0;
++							vcpu->arch.guest_kernel_asid[i] = 0;
++						}
++					preempt_enable();
+ 				}
+ 				kvm_write_c0_guest_entryhi(cop0,
+ 							   vcpu->arch.gprs[rt]);
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index f69f40f1519a..978dada662ae 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -737,6 +737,7 @@
+ #define   MMCR0_FCHV	0x00000001UL /* freeze conditions in hypervisor mode */
+ #define SPRN_MMCR1	798
+ #define SPRN_MMCR2	785
++#define SPRN_UMMCR2	769
+ #define SPRN_MMCRA	0x312
+ #define   MMCRA_SDSYNC	0x80000000UL /* SDAR synced with SIAR */
+ #define   MMCRA_SDAR_DCACHE_MISS 0x40000000UL
+diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
+index 2afdb9c0937d..729f8faa95c5 100644
+--- a/arch/powerpc/kvm/book3s_emulate.c
++++ b/arch/powerpc/kvm/book3s_emulate.c
+@@ -498,6 +498,7 @@ int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val)
+ 	case SPRN_MMCR0:
+ 	case SPRN_MMCR1:
+ 	case SPRN_MMCR2:
++	case SPRN_UMMCR2:
+ #endif
+ 		break;
+ unprivileged:
+@@ -640,6 +641,7 @@ int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val
+ 	case SPRN_MMCR0:
+ 	case SPRN_MMCR1:
+ 	case SPRN_MMCR2:
++	case SPRN_UMMCR2:
+ 	case SPRN_TIR:
+ #endif
+ 		*spr_val = 0;
+diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
+index 02b4672f7347..df3f2706d3e5 100644
+--- a/arch/powerpc/kvm/booke.c
++++ b/arch/powerpc/kvm/booke.c
+@@ -2038,7 +2038,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
+ 		if (type == KVMPPC_DEBUG_NONE)
+ 			continue;
+ 
+-		if (type & !(KVMPPC_DEBUG_WATCH_READ |
++		if (type & ~(KVMPPC_DEBUG_WATCH_READ |
+ 			     KVMPPC_DEBUG_WATCH_WRITE |
+ 			     KVMPPC_DEBUG_BREAKPOINT))
+ 			return -EINVAL;
+diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
+index ae55a43e09c0..19f30a814f54 100644
+--- a/arch/x86/include/asm/fpu/xstate.h
++++ b/arch/x86/include/asm/fpu/xstate.h
+@@ -27,11 +27,12 @@
+ 				 XFEATURE_MASK_YMM | \
+ 				 XFEATURE_MASK_OPMASK | \
+ 				 XFEATURE_MASK_ZMM_Hi256 | \
+-				 XFEATURE_MASK_Hi16_ZMM	 | \
+-				 XFEATURE_MASK_PKRU)
++				 XFEATURE_MASK_Hi16_ZMM)
+ 
+ /* Supported features which require eager state saving */
+-#define XFEATURE_MASK_EAGER	(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR)
++#define XFEATURE_MASK_EAGER	(XFEATURE_MASK_BNDREGS | \
++				 XFEATURE_MASK_BNDCSR | \
++				 XFEATURE_MASK_PKRU)
+ 
+ /* All currently supported features */
+ #define XCNTXT_MASK	(XFEATURE_MASK_LAZY | XFEATURE_MASK_EAGER)
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index 627719475457..9ae5ab80a497 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -56,8 +56,8 @@
+ #define INTEL_FAM6_ATOM_SILVERMONT1	0x37 /* BayTrail/BYT / Valleyview */
+ #define INTEL_FAM6_ATOM_SILVERMONT2	0x4D /* Avaton/Rangely */
+ #define INTEL_FAM6_ATOM_AIRMONT		0x4C /* CherryTrail / Braswell */
+-#define INTEL_FAM6_ATOM_MERRIFIELD1	0x4A /* Tangier */
+-#define INTEL_FAM6_ATOM_MERRIFIELD2	0x5A /* Annidale */
++#define INTEL_FAM6_ATOM_MERRIFIELD	0x4A /* Tangier */
++#define INTEL_FAM6_ATOM_MOOREFIELD	0x5A /* Annidale */
+ #define INTEL_FAM6_ATOM_GOLDMONT	0x5C
+ #define INTEL_FAM6_ATOM_DENVERTON	0x5F /* Goldmont Microserver */
+ 
+diff --git a/arch/x86/include/asm/mpspec.h b/arch/x86/include/asm/mpspec.h
+index b07233b64578..c2f94dcc92ce 100644
+--- a/arch/x86/include/asm/mpspec.h
++++ b/arch/x86/include/asm/mpspec.h
+@@ -6,7 +6,6 @@
+ #include <asm/x86_init.h>
+ #include <asm/apicdef.h>
+ 
+-extern int apic_version[];
+ extern int pic_mode;
+ 
+ #ifdef CONFIG_X86_32
+@@ -40,6 +39,7 @@ extern int mp_bus_id_to_type[MAX_MP_BUSSES];
+ extern DECLARE_BITMAP(mp_bus_not_pci, MAX_MP_BUSSES);
+ 
+ extern unsigned int boot_cpu_physical_apicid;
++extern u8 boot_cpu_apic_version;
+ extern unsigned long mp_lapic_addr;
+ 
+ #ifdef CONFIG_X86_LOCAL_APIC
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 90d84c3eee53..fbd19444403f 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -182,7 +182,7 @@ static int acpi_register_lapic(int id, u32 acpiid, u8 enabled)
+ 	}
+ 
+ 	if (boot_cpu_physical_apicid != -1U)
+-		ver = apic_version[boot_cpu_physical_apicid];
++		ver = boot_cpu_apic_version;
+ 
+ 	cpu = generic_processor_info(id, ver);
+ 	if (cpu >= 0)
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index f3e9b2df4b16..076c315cdf18 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -64,6 +64,8 @@ unsigned disabled_cpus;
+ unsigned int boot_cpu_physical_apicid = -1U;
+ EXPORT_SYMBOL_GPL(boot_cpu_physical_apicid);
+ 
++u8 boot_cpu_apic_version;
++
+ /*
+  * The highest APIC ID seen during enumeration.
+  */
+@@ -1816,8 +1818,7 @@ void __init init_apic_mappings(void)
+ 		 * since smp_sanity_check is prepared for such a case
+ 		 * and disable smp mode
+ 		 */
+-		apic_version[new_apicid] =
+-			 GET_APIC_VERSION(apic_read(APIC_LVR));
++		boot_cpu_apic_version = GET_APIC_VERSION(apic_read(APIC_LVR));
+ 	}
+ }
+ 
+@@ -1832,13 +1833,10 @@ void __init register_lapic_address(unsigned long address)
+ 	}
+ 	if (boot_cpu_physical_apicid == -1U) {
+ 		boot_cpu_physical_apicid  = read_apic_id();
+-		apic_version[boot_cpu_physical_apicid] =
+-			 GET_APIC_VERSION(apic_read(APIC_LVR));
++		boot_cpu_apic_version = GET_APIC_VERSION(apic_read(APIC_LVR));
+ 	}
+ }
+ 
+-int apic_version[MAX_LOCAL_APIC];
+-
+ /*
+  * Local APIC interrupts
+  */
+@@ -2130,11 +2128,10 @@ int generic_processor_info(int apicid, int version)
+ 			   cpu, apicid);
+ 		version = 0x10;
+ 	}
+-	apic_version[apicid] = version;
+ 
+-	if (version != apic_version[boot_cpu_physical_apicid]) {
++	if (version != boot_cpu_apic_version) {
+ 		pr_warning("BIOS bug: APIC version mismatch, boot CPU: %x, CPU %d: version %x\n",
+-			apic_version[boot_cpu_physical_apicid], cpu, version);
++			boot_cpu_apic_version, cpu, version);
+ 	}
+ 
+ 	physid_set(apicid, phys_cpu_present_map);
+@@ -2277,7 +2274,7 @@ int __init APIC_init_uniprocessor(void)
+ 	 * Complain if the BIOS pretends there is one.
+ 	 */
+ 	if (!boot_cpu_has(X86_FEATURE_APIC) &&
+-	    APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid])) {
++	    APIC_INTEGRATED(boot_cpu_apic_version)) {
+ 		pr_err("BIOS bug, local APIC 0x%x not detected!...\n",
+ 			boot_cpu_physical_apicid);
+ 		return -1;
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 7491f417a8e4..48e6d84f173e 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -1593,7 +1593,7 @@ void __init setup_ioapic_ids_from_mpc(void)
+ 	 * no meaning without the serial APIC bus.
+ 	 */
+ 	if (!(boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+-		|| APIC_XAPIC(apic_version[boot_cpu_physical_apicid]))
++		|| APIC_XAPIC(boot_cpu_apic_version))
+ 		return;
+ 	setup_ioapic_ids_from_mpc_nocheck();
+ }
+@@ -2423,7 +2423,7 @@ static int io_apic_get_unique_id(int ioapic, int apic_id)
+ static u8 io_apic_unique_id(int idx, u8 id)
+ {
+ 	if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
+-	    !APIC_XAPIC(apic_version[boot_cpu_physical_apicid]))
++	    !APIC_XAPIC(boot_cpu_apic_version))
+ 		return io_apic_get_unique_id(idx, id);
+ 	else
+ 		return id;
+diff --git a/arch/x86/kernel/apic/probe_32.c b/arch/x86/kernel/apic/probe_32.c
+index 7c43e716c158..563096267ca2 100644
+--- a/arch/x86/kernel/apic/probe_32.c
++++ b/arch/x86/kernel/apic/probe_32.c
+@@ -152,7 +152,7 @@ early_param("apic", parse_apic);
+ 
+ void __init default_setup_apic_routing(void)
+ {
+-	int version = apic_version[boot_cpu_physical_apicid];
++	int version = boot_cpu_apic_version;
+ 
+ 	if (num_possible_cpus() > 8) {
+ 		switch (boot_cpu_data.x86_vendor) {
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 6066d945c40e..5d30c5e42bb1 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -661,11 +661,28 @@ void irq_complete_move(struct irq_cfg *cfg)
+  */
+ void irq_force_complete_move(struct irq_desc *desc)
+ {
+-	struct irq_data *irqdata = irq_desc_get_irq_data(desc);
+-	struct apic_chip_data *data = apic_chip_data(irqdata);
+-	struct irq_cfg *cfg = data ? &data->cfg : NULL;
++	struct irq_data *irqdata;
++	struct apic_chip_data *data;
++	struct irq_cfg *cfg;
+ 	unsigned int cpu;
+ 
++	/*
++	 * The function is called for all descriptors regardless of which
++	 * irqdomain they belong to. For example if an IRQ is provided by
++	 * an irq_chip as part of a GPIO driver, the chip data for that
++	 * descriptor is specific to the irq_chip in question.
++	 *
++	 * Check first that the chip_data is what we expect
++	 * (apic_chip_data) before touching it any further.
++	 */
++	irqdata = irq_domain_get_irq_data(x86_vector_domain,
++					  irq_desc_get_irq(desc));
++	if (!irqdata)
++		return;
++
++	data = apic_chip_data(irqdata);
++	cfg = data ? &data->cfg : NULL;
++
+ 	if (!cfg)
+ 		return;
+ 
+diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
+index 621b501f8935..8a90f1517837 100644
+--- a/arch/x86/kernel/e820.c
++++ b/arch/x86/kernel/e820.c
+@@ -348,7 +348,7 @@ int __init sanitize_e820_map(struct e820entry *biosmap, int max_nr_map,
+ 		 * continue building up new bios map based on this
+ 		 * information
+ 		 */
+-		if (current_type != last_type || current_type == E820_PRAM) {
++		if (current_type != last_type) {
+ 			if (last_type != 0)	 {
+ 				new_bios[new_bios_entry].size =
+ 					change_point[chgidx]->addr - last_addr;
+@@ -754,7 +754,7 @@ u64 __init early_reserve_e820(u64 size, u64 align)
+ /*
+  * Find the highest page frame number we have available
+  */
+-static unsigned long __init e820_end_pfn(unsigned long limit_pfn)
++static unsigned long __init e820_end_pfn(unsigned long limit_pfn, unsigned type)
+ {
+ 	int i;
+ 	unsigned long last_pfn = 0;
+@@ -765,11 +765,7 @@ static unsigned long __init e820_end_pfn(unsigned long limit_pfn)
+ 		unsigned long start_pfn;
+ 		unsigned long end_pfn;
+ 
+-		/*
+-		 * Persistent memory is accounted as ram for purposes of
+-		 * establishing max_pfn and mem_map.
+-		 */
+-		if (ei->type != E820_RAM && ei->type != E820_PRAM)
++		if (ei->type != type)
+ 			continue;
+ 
+ 		start_pfn = ei->addr >> PAGE_SHIFT;
+@@ -794,12 +790,12 @@ static unsigned long __init e820_end_pfn(unsigned long limit_pfn)
+ }
+ unsigned long __init e820_end_of_ram_pfn(void)
+ {
+-	return e820_end_pfn(MAX_ARCH_PFN);
++	return e820_end_pfn(MAX_ARCH_PFN, E820_RAM);
+ }
+ 
+ unsigned long __init e820_end_of_low_ram_pfn(void)
+ {
+-	return e820_end_pfn(1UL << (32-PAGE_SHIFT));
++	return e820_end_pfn(1UL << (32 - PAGE_SHIFT), E820_RAM);
+ }
+ 
+ static void early_panic(char *msg)
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 63236d8f84bf..a21068e49dac 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -110,12 +110,13 @@ void __show_regs(struct pt_regs *regs, int all)
+ 	get_debugreg(d7, 7);
+ 
+ 	/* Only print out debug registers if they are in their non-default state. */
+-	if ((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) &&
+-	    (d6 == DR6_RESERVED) && (d7 == 0x400))
+-		return;
+-
+-	printk(KERN_DEFAULT "DR0: %016lx DR1: %016lx DR2: %016lx\n", d0, d1, d2);
+-	printk(KERN_DEFAULT "DR3: %016lx DR6: %016lx DR7: %016lx\n", d3, d6, d7);
++	if (!((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) &&
++	    (d6 == DR6_RESERVED) && (d7 == 0x400))) {
++		printk(KERN_DEFAULT "DR0: %016lx DR1: %016lx DR2: %016lx\n",
++		       d0, d1, d2);
++		printk(KERN_DEFAULT "DR3: %016lx DR6: %016lx DR7: %016lx\n",
++		       d3, d6, d7);
++	}
+ 
+ 	if (boot_cpu_has(X86_FEATURE_OSPKE))
+ 		printk(KERN_DEFAULT "PKRU: %08x\n", read_pkru());
+diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
+index f79576a541ff..a1606eadd9ce 100644
+--- a/arch/x86/kernel/ptrace.c
++++ b/arch/x86/kernel/ptrace.c
+@@ -173,8 +173,8 @@ unsigned long kernel_stack_pointer(struct pt_regs *regs)
+ 		return sp;
+ 
+ 	prev_esp = (u32 *)(context);
+-	if (prev_esp)
+-		return (unsigned long)prev_esp;
++	if (*prev_esp)
++		return (unsigned long)*prev_esp;
+ 
+ 	return (unsigned long)regs;
+ }
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 4296beb8fdd3..82b17373b66a 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -690,7 +690,7 @@ wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip)
+ 	 * Give the other CPU some time to accept the IPI.
+ 	 */
+ 	udelay(200);
+-	if (APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid])) {
++	if (APIC_INTEGRATED(boot_cpu_apic_version)) {
+ 		maxlvt = lapic_get_maxlvt();
+ 		if (maxlvt > 3)			/* Due to the Pentium erratum 3AP.  */
+ 			apic_write(APIC_ESR, 0);
+@@ -717,7 +717,7 @@ wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
+ 	/*
+ 	 * Be paranoid about clearing APIC errors.
+ 	 */
+-	if (APIC_INTEGRATED(apic_version[phys_apicid])) {
++	if (APIC_INTEGRATED(boot_cpu_apic_version)) {
+ 		if (maxlvt > 3)		/* Due to the Pentium erratum 3AP.  */
+ 			apic_write(APIC_ESR, 0);
+ 		apic_read(APIC_ESR);
+@@ -756,7 +756,7 @@ wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
+ 	 * Determine this based on the APIC version.
+ 	 * If we don't have an integrated APIC, don't send the STARTUP IPIs.
+ 	 */
+-	if (APIC_INTEGRATED(apic_version[phys_apicid]))
++	if (APIC_INTEGRATED(boot_cpu_apic_version))
+ 		num_starts = 2;
+ 	else
+ 		num_starts = 0;
+@@ -994,7 +994,7 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
+ 		/*
+ 		 * Be paranoid about clearing APIC errors.
+ 		*/
+-		if (APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid])) {
++		if (APIC_INTEGRATED(boot_cpu_apic_version)) {
+ 			apic_write(APIC_ESR, 0);
+ 			apic_read(APIC_ESR);
+ 		}
+@@ -1249,7 +1249,7 @@ static int __init smp_sanity_check(unsigned max_cpus)
+ 	/*
+ 	 * If we couldn't find a local APIC, then get out of here now!
+ 	 */
+-	if (APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid]) &&
++	if (APIC_INTEGRATED(boot_cpu_apic_version) &&
+ 	    !boot_cpu_has(X86_FEATURE_APIC)) {
+ 		if (!disable_apic) {
+ 			pr_err("BIOS bug, local APIC #%d not detected!...\n",
+@@ -1406,9 +1406,21 @@ __init void prefill_possible_map(void)
+ {
+ 	int i, possible;
+ 
+-	/* no processor from mptable or madt */
+-	if (!num_processors)
+-		num_processors = 1;
++	/* No boot processor was found in mptable or ACPI MADT */
++	if (!num_processors) {
++		int apicid = boot_cpu_physical_apicid;
++		int cpu = hard_smp_processor_id();
++
++		pr_warn("Boot CPU (id %d) not listed by BIOS\n", cpu);
++
++		/* Make sure boot cpu is enumerated */
++		if (apic->cpu_present_to_apicid(0) == BAD_APICID &&
++		    apic->apic_id_valid(apicid))
++			generic_processor_info(apicid, boot_cpu_apic_version);
++
++		if (!num_processors)
++			num_processors = 1;
++	}
+ 
+ 	i = setup_max_cpus ?: 1;
+ 	if (setup_possible_cpus == -1) {
+diff --git a/arch/x86/platform/atom/punit_atom_debug.c b/arch/x86/platform/atom/punit_atom_debug.c
+index 8ff7b9355416..d49d3be81953 100644
+--- a/arch/x86/platform/atom/punit_atom_debug.c
++++ b/arch/x86/platform/atom/punit_atom_debug.c
+@@ -155,7 +155,7 @@ static void punit_dbgfs_unregister(void)
+ 
+ static const struct x86_cpu_id intel_punit_cpu_ids[] = {
+ 	ICPU(INTEL_FAM6_ATOM_SILVERMONT1, punit_device_byt),
+-	ICPU(INTEL_FAM6_ATOM_MERRIFIELD1, punit_device_tng),
++	ICPU(INTEL_FAM6_ATOM_MERRIFIELD,  punit_device_tng),
+ 	ICPU(INTEL_FAM6_ATOM_AIRMONT,	  punit_device_cht),
+ 	{}
+ };
+diff --git a/arch/x86/platform/intel-mid/pwr.c b/arch/x86/platform/intel-mid/pwr.c
+index c901a3423772..6eca0f6fe57d 100644
+--- a/arch/x86/platform/intel-mid/pwr.c
++++ b/arch/x86/platform/intel-mid/pwr.c
+@@ -354,7 +354,7 @@ static int mid_pwr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	return 0;
+ }
+ 
+-static int mid_set_initial_state(struct mid_pwr *pwr)
++static int mid_set_initial_state(struct mid_pwr *pwr, const u32 *states)
+ {
+ 	unsigned int i, j;
+ 	int ret;
+@@ -379,10 +379,10 @@ static int mid_set_initial_state(struct mid_pwr *pwr)
+ 	 * NOTE: The actual device mapping is provided by a platform at run
+ 	 * time using vendor capability of PCI configuration space.
+ 	 */
+-	mid_pwr_set_state(pwr, 0, 0xffffffff);
+-	mid_pwr_set_state(pwr, 1, 0xffffffff);
+-	mid_pwr_set_state(pwr, 2, 0xffffffff);
+-	mid_pwr_set_state(pwr, 3, 0xffffffff);
++	mid_pwr_set_state(pwr, 0, states[0]);
++	mid_pwr_set_state(pwr, 1, states[1]);
++	mid_pwr_set_state(pwr, 2, states[2]);
++	mid_pwr_set_state(pwr, 3, states[3]);
+ 
+ 	/* Send command to SCU */
+ 	ret = mid_pwr_wait_for_cmd(pwr, CMD_SET_CFG);
+@@ -397,13 +397,41 @@ static int mid_set_initial_state(struct mid_pwr *pwr)
+ 	return 0;
+ }
+ 
+-static const struct mid_pwr_device_info mid_info = {
+-	.set_initial_state = mid_set_initial_state,
++static int pnw_set_initial_state(struct mid_pwr *pwr)
++{
++	/* On Penwell SRAM must stay powered on */
++	const u32 states[] = {
++		0xf00fffff,		/* PM_SSC(0) */
++		0xffffffff,		/* PM_SSC(1) */
++		0xffffffff,		/* PM_SSC(2) */
++		0xffffffff,		/* PM_SSC(3) */
++	};
++	return mid_set_initial_state(pwr, states);
++}
++
++static int tng_set_initial_state(struct mid_pwr *pwr)
++{
++	const u32 states[] = {
++		0xffffffff,		/* PM_SSC(0) */
++		0xffffffff,		/* PM_SSC(1) */
++		0xffffffff,		/* PM_SSC(2) */
++		0xffffffff,		/* PM_SSC(3) */
++	};
++	return mid_set_initial_state(pwr, states);
++}
++
++static const struct mid_pwr_device_info pnw_info = {
++	.set_initial_state = pnw_set_initial_state,
++};
++
++static const struct mid_pwr_device_info tng_info = {
++	.set_initial_state = tng_set_initial_state,
+ };
+ 
++/* This table should be in sync with the one in drivers/pci/pci-mid.c */
+ static const struct pci_device_id mid_pwr_pci_ids[] = {
+-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_PENWELL), (kernel_ulong_t)&mid_info },
+-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_TANGIER), (kernel_ulong_t)&mid_info },
++	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_PENWELL), (kernel_ulong_t)&pnw_info },
++	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_TANGIER), (kernel_ulong_t)&tng_info },
+ 	{}
+ };
+ 
+diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
+index 0b4d04c8ab4d..62284035be84 100644
+--- a/arch/x86/xen/smp.c
++++ b/arch/x86/xen/smp.c
+@@ -87,6 +87,12 @@ static void cpu_bringup(void)
+ 	cpu_data(cpu).x86_max_cores = 1;
+ 	set_cpu_sibling_map(cpu);
+ 
++	/*
++	 * identify_cpu() may have set logical_pkg_id to -1 due
++	 * to incorrect phys_proc_id. Let's re-compute it.
++	 */
++	topology_update_package_map(apic->cpu_present_to_apicid(cpu), cpu);
++
+ 	xen_setup_cpu_clockevents();
+ 
+ 	notify_cpu_starting(cpu);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 811f9b97e360..d4d55f60cd81 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -251,6 +251,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x0cf3, 0xe300), .driver_info = BTUSB_QCA_ROME },
+ 	{ USB_DEVICE(0x0cf3, 0xe360), .driver_info = BTUSB_QCA_ROME },
+ 	{ USB_DEVICE(0x0489, 0xe092), .driver_info = BTUSB_QCA_ROME },
++	{ USB_DEVICE(0x04ca, 0x3011), .driver_info = BTUSB_QCA_ROME },
+ 
+ 	/* Broadcom BCM2035 */
+ 	{ USB_DEVICE(0x0a5c, 0x2009), .driver_info = BTUSB_BCM92035 },
+diff --git a/drivers/char/tpm/tpm-dev.c b/drivers/char/tpm/tpm-dev.c
+index f5d452151c6b..912ad30be585 100644
+--- a/drivers/char/tpm/tpm-dev.c
++++ b/drivers/char/tpm/tpm-dev.c
+@@ -145,7 +145,7 @@ static ssize_t tpm_write(struct file *file, const char __user *buf,
+ 		return -EPIPE;
+ 	}
+ 	out_size = tpm_transmit(priv->chip, priv->data_buffer,
+-				sizeof(priv->data_buffer));
++				sizeof(priv->data_buffer), 0);
+ 
+ 	tpm_put_ops(priv->chip);
+ 	if (out_size < 0) {
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 1abe2d7a2610..aef20ee2331a 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -330,8 +330,8 @@ EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration);
+ /*
+  * Internal kernel interface to transmit TPM commands
+  */
+-ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
+-		     size_t bufsiz)
++ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz,
++		     unsigned int flags)
+ {
+ 	ssize_t rc;
+ 	u32 count, ordinal;
+@@ -350,7 +350,8 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
+ 		return -E2BIG;
+ 	}
+ 
+-	mutex_lock(&chip->tpm_mutex);
++	if (!(flags & TPM_TRANSMIT_UNLOCKED))
++		mutex_lock(&chip->tpm_mutex);
+ 
+ 	rc = chip->ops->send(chip, (u8 *) buf, count);
+ 	if (rc < 0) {
+@@ -393,20 +394,21 @@ out_recv:
+ 		dev_err(&chip->dev,
+ 			"tpm_transmit: tpm_recv: error %zd\n", rc);
+ out:
+-	mutex_unlock(&chip->tpm_mutex);
++	if (!(flags & TPM_TRANSMIT_UNLOCKED))
++		mutex_unlock(&chip->tpm_mutex);
+ 	return rc;
+ }
+ 
+ #define TPM_DIGEST_SIZE 20
+ #define TPM_RET_CODE_IDX 6
+ 
+-ssize_t tpm_transmit_cmd(struct tpm_chip *chip, void *cmd,
+-			 int len, const char *desc)
++ssize_t tpm_transmit_cmd(struct tpm_chip *chip, const void *cmd,
++			 int len, unsigned int flags, const char *desc)
+ {
+-	struct tpm_output_header *header;
++	const struct tpm_output_header *header;
+ 	int err;
+ 
+-	len = tpm_transmit(chip, (u8 *) cmd, len);
++	len = tpm_transmit(chip, (const u8 *)cmd, len, flags);
+ 	if (len <  0)
+ 		return len;
+ 	else if (len < TPM_HEADER_SIZE)
+@@ -453,7 +455,8 @@ ssize_t tpm_getcap(struct tpm_chip *chip, __be32 subcap_id, cap_t *cap,
+ 		tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
+ 		tpm_cmd.params.getcap_in.subcap = subcap_id;
+ 	}
+-	rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, desc);
++	rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, 0,
++			      desc);
+ 	if (!rc)
+ 		*cap = tpm_cmd.params.getcap_out.cap;
+ 	return rc;
+@@ -469,7 +472,7 @@ void tpm_gen_interrupt(struct tpm_chip *chip)
+ 	tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
+ 	tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_TIMEOUT;
+ 
+-	rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
++	rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, 0,
+ 			      "attempting to determine the timeouts");
+ }
+ EXPORT_SYMBOL_GPL(tpm_gen_interrupt);
+@@ -490,7 +493,7 @@ static int tpm_startup(struct tpm_chip *chip, __be16 startup_type)
+ 	start_cmd.header.in = tpm_startup_header;
+ 
+ 	start_cmd.params.startup_in.startup_type = startup_type;
+-	return tpm_transmit_cmd(chip, &start_cmd, TPM_INTERNAL_RESULT_SIZE,
++	return tpm_transmit_cmd(chip, &start_cmd, TPM_INTERNAL_RESULT_SIZE, 0,
+ 				"attempting to start the TPM");
+ }
+ 
+@@ -521,7 +524,8 @@ int tpm_get_timeouts(struct tpm_chip *chip)
+ 	tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
+ 	tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
+ 	tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_TIMEOUT;
+-	rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, NULL);
++	rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, 0,
++			      NULL);
+ 
+ 	if (rc == TPM_ERR_INVALID_POSTINIT) {
+ 		/* The TPM is not started, we are the first to talk to it.
+@@ -535,7 +539,7 @@ int tpm_get_timeouts(struct tpm_chip *chip)
+ 		tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
+ 		tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_TIMEOUT;
+ 		rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
+-				  NULL);
++				      0, NULL);
+ 	}
+ 	if (rc) {
+ 		dev_err(&chip->dev,
+@@ -596,7 +600,7 @@ duration:
+ 	tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
+ 	tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_DURATION;
+ 
+-	rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
++	rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, 0,
+ 			      "attempting to determine the durations");
+ 	if (rc)
+ 		return rc;
+@@ -652,7 +656,7 @@ static int tpm_continue_selftest(struct tpm_chip *chip)
+ 	struct tpm_cmd_t cmd;
+ 
+ 	cmd.header.in = continue_selftest_header;
+-	rc = tpm_transmit_cmd(chip, &cmd, CONTINUE_SELFTEST_RESULT_SIZE,
++	rc = tpm_transmit_cmd(chip, &cmd, CONTINUE_SELFTEST_RESULT_SIZE, 0,
+ 			      "continue selftest");
+ 	return rc;
+ }
+@@ -672,7 +676,7 @@ int tpm_pcr_read_dev(struct tpm_chip *chip, int pcr_idx, u8 *res_buf)
+ 
+ 	cmd.header.in = pcrread_header;
+ 	cmd.params.pcrread_in.pcr_idx = cpu_to_be32(pcr_idx);
+-	rc = tpm_transmit_cmd(chip, &cmd, READ_PCR_RESULT_SIZE,
++	rc = tpm_transmit_cmd(chip, &cmd, READ_PCR_RESULT_SIZE, 0,
+ 			      "attempting to read a pcr value");
+ 
+ 	if (rc == 0)
+@@ -770,7 +774,7 @@ int tpm_pcr_extend(u32 chip_num, int pcr_idx, const u8 *hash)
+ 	cmd.header.in = pcrextend_header;
+ 	cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(pcr_idx);
+ 	memcpy(cmd.params.pcrextend_in.hash, hash, TPM_DIGEST_SIZE);
+-	rc = tpm_transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE,
++	rc = tpm_transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE, 0,
+ 			      "attempting extend a PCR value");
+ 
+ 	tpm_put_ops(chip);
+@@ -809,7 +813,7 @@ int tpm_do_selftest(struct tpm_chip *chip)
+ 		/* Attempt to read a PCR value */
+ 		cmd.header.in = pcrread_header;
+ 		cmd.params.pcrread_in.pcr_idx = cpu_to_be32(0);
+-		rc = tpm_transmit(chip, (u8 *) &cmd, READ_PCR_RESULT_SIZE);
++		rc = tpm_transmit(chip, (u8 *) &cmd, READ_PCR_RESULT_SIZE, 0);
+ 		/* Some buggy TPMs will not respond to tpm_tis_ready() for
+ 		 * around 300ms while the self test is ongoing, keep trying
+ 		 * until the self test duration expires. */
+@@ -879,7 +883,7 @@ int tpm_send(u32 chip_num, void *cmd, size_t buflen)
+ 	if (chip == NULL)
+ 		return -ENODEV;
+ 
+-	rc = tpm_transmit_cmd(chip, cmd, buflen, "attempting tpm_cmd");
++	rc = tpm_transmit_cmd(chip, cmd, buflen, 0, "attempting tpm_cmd");
+ 
+ 	tpm_put_ops(chip);
+ 	return rc;
+@@ -981,14 +985,15 @@ int tpm_pm_suspend(struct device *dev)
+ 		cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(tpm_suspend_pcr);
+ 		memcpy(cmd.params.pcrextend_in.hash, dummy_hash,
+ 		       TPM_DIGEST_SIZE);
+-		rc = tpm_transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE,
++		rc = tpm_transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE, 0,
+ 				      "extending dummy pcr before suspend");
+ 	}
+ 
+ 	/* now do the actual savestate */
+ 	for (try = 0; try < TPM_RETRY; try++) {
+ 		cmd.header.in = savestate_header;
+-		rc = tpm_transmit_cmd(chip, &cmd, SAVESTATE_RESULT_SIZE, NULL);
++		rc = tpm_transmit_cmd(chip, &cmd, SAVESTATE_RESULT_SIZE, 0,
++				      NULL);
+ 
+ 		/*
+ 		 * If the TPM indicates that it is too busy to respond to
+@@ -1072,8 +1077,8 @@ int tpm_get_random(u32 chip_num, u8 *out, size_t max)
+ 		tpm_cmd.params.getrandom_in.num_bytes = cpu_to_be32(num_bytes);
+ 
+ 		err = tpm_transmit_cmd(chip, &tpm_cmd,
+-				   TPM_GETRANDOM_RESULT_SIZE + num_bytes,
+-				   "attempting get random");
++				       TPM_GETRANDOM_RESULT_SIZE + num_bytes,
++				       0, "attempting get random");
+ 		if (err)
+ 			break;
+ 
+diff --git a/drivers/char/tpm/tpm-sysfs.c b/drivers/char/tpm/tpm-sysfs.c
+index b46cf70c8b16..e1f7236c115c 100644
+--- a/drivers/char/tpm/tpm-sysfs.c
++++ b/drivers/char/tpm/tpm-sysfs.c
+@@ -39,7 +39,7 @@ static ssize_t pubek_show(struct device *dev, struct device_attribute *attr,
+ 	struct tpm_chip *chip = to_tpm_chip(dev);
+ 
+ 	tpm_cmd.header.in = tpm_readpubek_header;
+-	err = tpm_transmit_cmd(chip, &tpm_cmd, READ_PUBEK_RESULT_SIZE,
++	err = tpm_transmit_cmd(chip, &tpm_cmd, READ_PUBEK_RESULT_SIZE, 0,
+ 			       "attempting to read the PUBEK");
+ 	if (err)
+ 		goto out;
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 3e32d5bd2dc6..b0585e99da49 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -476,12 +476,16 @@ extern dev_t tpm_devt;
+ extern const struct file_operations tpm_fops;
+ extern struct idr dev_nums_idr;
+ 
++enum tpm_transmit_flags {
++	TPM_TRANSMIT_UNLOCKED	= BIT(0),
++};
++
++ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz,
++		     unsigned int flags);
++ssize_t tpm_transmit_cmd(struct tpm_chip *chip, const void *cmd, int len,
++			 unsigned int flags, const char *desc);
+ ssize_t tpm_getcap(struct tpm_chip *chip, __be32 subcap_id, cap_t *cap,
+ 		   const char *desc);
+-ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
+-		     size_t bufsiz);
+-ssize_t tpm_transmit_cmd(struct tpm_chip *chip, void *cmd, int len,
+-			 const char *desc);
+ extern int tpm_get_timeouts(struct tpm_chip *);
+ extern void tpm_gen_interrupt(struct tpm_chip *);
+ int tpm1_auto_startup(struct tpm_chip *chip);
+diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
+index 0c75c3f1689f..ef5a58b986f6 100644
+--- a/drivers/char/tpm/tpm2-cmd.c
++++ b/drivers/char/tpm/tpm2-cmd.c
+@@ -282,7 +282,7 @@ int tpm2_pcr_read(struct tpm_chip *chip, int pcr_idx, u8 *res_buf)
+ 	       sizeof(cmd.params.pcrread_in.pcr_select));
+ 	cmd.params.pcrread_in.pcr_select[pcr_idx >> 3] = 1 << (pcr_idx & 0x7);
+ 
+-	rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd),
++	rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0,
+ 			      "attempting to read a pcr value");
+ 	if (rc == 0) {
+ 		buf = cmd.params.pcrread_out.digest;
+@@ -330,7 +330,7 @@ int tpm2_pcr_extend(struct tpm_chip *chip, int pcr_idx, const u8 *hash)
+ 	cmd.params.pcrextend_in.hash_alg = cpu_to_be16(TPM2_ALG_SHA1);
+ 	memcpy(cmd.params.pcrextend_in.digest, hash, TPM_DIGEST_SIZE);
+ 
+-	rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd),
++	rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0,
+ 			      "attempting extend a PCR value");
+ 
+ 	return rc;
+@@ -376,7 +376,7 @@ int tpm2_get_random(struct tpm_chip *chip, u8 *out, size_t max)
+ 		cmd.header.in = tpm2_getrandom_header;
+ 		cmd.params.getrandom_in.size = cpu_to_be16(num_bytes);
+ 
+-		err = tpm_transmit_cmd(chip, &cmd, sizeof(cmd),
++		err = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0,
+ 				       "attempting get random");
+ 		if (err)
+ 			break;
+@@ -434,12 +434,12 @@ static void tpm2_buf_append_auth(struct tpm_buf *buf, u32 session_handle,
+ }
+ 
+ /**
+- * tpm2_seal_trusted() - seal a trusted key
+- * @chip_num: A specific chip number for the request or TPM_ANY_NUM
+- * @options: authentication values and other options
++ * tpm2_seal_trusted() - seal the payload of a trusted key
++ * @chip_num: TPM chip to use
+  * @payload: the key data in clear and encrypted form
++ * @options: authentication values and other options
+  *
+- * Returns < 0 on error and 0 on success.
++ * Return: < 0 on error and 0 on success.
+  */
+ int tpm2_seal_trusted(struct tpm_chip *chip,
+ 		      struct trusted_key_payload *payload,
+@@ -512,7 +512,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
+ 		goto out;
+ 	}
+ 
+-	rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, "sealing data");
++	rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 0, "sealing data");
+ 	if (rc)
+ 		goto out;
+ 
+@@ -538,10 +538,18 @@ out:
+ 	return rc;
+ }
+ 
+-static int tpm2_load(struct tpm_chip *chip,
+-		     struct trusted_key_payload *payload,
+-		     struct trusted_key_options *options,
+-		     u32 *blob_handle)
++/**
++ * tpm2_load_cmd() - execute a TPM2_Load command
++ * @chip_num: TPM chip to use
++ * @payload: the key data in clear and encrypted form
++ * @options: authentication values and other options
++ *
++ * Return: same as with tpm_transmit_cmd
++ */
++static int tpm2_load_cmd(struct tpm_chip *chip,
++			 struct trusted_key_payload *payload,
++			 struct trusted_key_options *options,
++			 u32 *blob_handle, unsigned int flags)
+ {
+ 	struct tpm_buf buf;
+ 	unsigned int private_len;
+@@ -576,7 +584,7 @@ static int tpm2_load(struct tpm_chip *chip,
+ 		goto out;
+ 	}
+ 
+-	rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, "loading blob");
++	rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, flags, "loading blob");
+ 	if (!rc)
+ 		*blob_handle = be32_to_cpup(
+ 			(__be32 *) &buf.data[TPM_HEADER_SIZE]);
+@@ -590,7 +598,16 @@ out:
+ 	return rc;
+ }
+ 
+-static void tpm2_flush_context(struct tpm_chip *chip, u32 handle)
++/**
++ * tpm2_flush_context_cmd() - execute a TPM2_FlushContext command
++ * @chip_num: TPM chip to use
++ * @payload: the key data in clear and encrypted form
++ * @options: authentication values and other options
++ *
++ * Return: same as with tpm_transmit_cmd
++ */
++static void tpm2_flush_context_cmd(struct tpm_chip *chip, u32 handle,
++				   unsigned int flags)
+ {
+ 	struct tpm_buf buf;
+ 	int rc;
+@@ -604,7 +621,8 @@ static void tpm2_flush_context(struct tpm_chip *chip, u32 handle)
+ 
+ 	tpm_buf_append_u32(&buf, handle);
+ 
+-	rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, "flushing context");
++	rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, flags,
++			      "flushing context");
+ 	if (rc)
+ 		dev_warn(&chip->dev, "0x%08x was not flushed, rc=%d\n", handle,
+ 			 rc);
+@@ -612,10 +630,18 @@ static void tpm2_flush_context(struct tpm_chip *chip, u32 handle)
+ 	tpm_buf_destroy(&buf);
+ }
+ 
+-static int tpm2_unseal(struct tpm_chip *chip,
+-		       struct trusted_key_payload *payload,
+-		       struct trusted_key_options *options,
+-		       u32 blob_handle)
++/**
++ * tpm2_unseal_cmd() - execute a TPM2_Unload command
++ * @chip_num: TPM chip to use
++ * @payload: the key data in clear and encrypted form
++ * @options: authentication values and other options
++ *
++ * Return: same as with tpm_transmit_cmd
++ */
++static int tpm2_unseal_cmd(struct tpm_chip *chip,
++			   struct trusted_key_payload *payload,
++			   struct trusted_key_options *options,
++			   u32 blob_handle, unsigned int flags)
+ {
+ 	struct tpm_buf buf;
+ 	u16 data_len;
+@@ -635,7 +661,7 @@ static int tpm2_unseal(struct tpm_chip *chip,
+ 			     options->blobauth /* hmac */,
+ 			     TPM_DIGEST_SIZE);
+ 
+-	rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, "unsealing");
++	rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, flags, "unsealing");
+ 	if (rc > 0)
+ 		rc = -EPERM;
+ 
+@@ -654,12 +680,12 @@ static int tpm2_unseal(struct tpm_chip *chip,
+ }
+ 
+ /**
+- * tpm_unseal_trusted() - unseal a trusted key
+- * @chip_num: A specific chip number for the request or TPM_ANY_NUM
+- * @options: authentication values and other options
++ * tpm_unseal_trusted() - unseal the payload of a trusted key
++ * @chip_num: TPM chip to use
+  * @payload: the key data in clear and encrypted form
++ * @options: authentication values and other options
+  *
+- * Returns < 0 on error and 0 on success.
++ * Return: < 0 on error and 0 on success.
+  */
+ int tpm2_unseal_trusted(struct tpm_chip *chip,
+ 			struct trusted_key_payload *payload,
+@@ -668,14 +694,17 @@ int tpm2_unseal_trusted(struct tpm_chip *chip,
+ 	u32 blob_handle;
+ 	int rc;
+ 
+-	rc = tpm2_load(chip, payload, options, &blob_handle);
++	mutex_lock(&chip->tpm_mutex);
++	rc = tpm2_load_cmd(chip, payload, options, &blob_handle,
++			   TPM_TRANSMIT_UNLOCKED);
+ 	if (rc)
+-		return rc;
+-
+-	rc = tpm2_unseal(chip, payload, options, blob_handle);
+-
+-	tpm2_flush_context(chip, blob_handle);
++		goto out;
+ 
++	rc = tpm2_unseal_cmd(chip, payload, options, blob_handle,
++			     TPM_TRANSMIT_UNLOCKED);
++	tpm2_flush_context_cmd(chip, blob_handle, TPM_TRANSMIT_UNLOCKED);
++out:
++	mutex_unlock(&chip->tpm_mutex);
+ 	return rc;
+ }
+ 
+@@ -701,7 +730,7 @@ ssize_t tpm2_get_tpm_pt(struct tpm_chip *chip, u32 property_id,  u32 *value,
+ 	cmd.params.get_tpm_pt_in.property_id = cpu_to_be32(property_id);
+ 	cmd.params.get_tpm_pt_in.property_cnt = cpu_to_be32(1);
+ 
+-	rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), desc);
++	rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, desc);
+ 	if (!rc)
+ 		*value = be32_to_cpu(cmd.params.get_tpm_pt_out.value);
+ 
+@@ -735,7 +764,7 @@ static int tpm2_startup(struct tpm_chip *chip, u16 startup_type)
+ 	cmd.header.in = tpm2_startup_header;
+ 
+ 	cmd.params.startup_in.startup_type = cpu_to_be16(startup_type);
+-	return tpm_transmit_cmd(chip, &cmd, sizeof(cmd),
++	return tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0,
+ 				"attempting to start the TPM");
+ }
+ 
+@@ -763,7 +792,7 @@ void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type)
+ 	cmd.header.in = tpm2_shutdown_header;
+ 	cmd.params.startup_in.startup_type = cpu_to_be16(shutdown_type);
+ 
+-	rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), "stopping the TPM");
++	rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, "stopping the TPM");
+ 
+ 	/* In places where shutdown command is sent there's no much we can do
+ 	 * except print the error code on a system failure.
+@@ -828,7 +857,7 @@ static int tpm2_start_selftest(struct tpm_chip *chip, bool full)
+ 	cmd.header.in = tpm2_selftest_header;
+ 	cmd.params.selftest_in.full_test = full;
+ 
+-	rc = tpm_transmit_cmd(chip, &cmd, TPM2_SELF_TEST_IN_SIZE,
++	rc = tpm_transmit_cmd(chip, &cmd, TPM2_SELF_TEST_IN_SIZE, 0,
+ 			      "continue selftest");
+ 
+ 	/* At least some prototype chips seem to give RC_TESTING error
+@@ -880,7 +909,7 @@ static int tpm2_do_selftest(struct tpm_chip *chip)
+ 		cmd.params.pcrread_in.pcr_select[1] = 0x00;
+ 		cmd.params.pcrread_in.pcr_select[2] = 0x00;
+ 
+-		rc = tpm_transmit_cmd(chip, (u8 *) &cmd, sizeof(cmd), NULL);
++		rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, NULL);
+ 		if (rc < 0)
+ 			break;
+ 
+@@ -928,7 +957,7 @@ int tpm2_probe(struct tpm_chip *chip)
+ 	cmd.params.get_tpm_pt_in.property_id = cpu_to_be32(0x100);
+ 	cmd.params.get_tpm_pt_in.property_cnt = cpu_to_be32(1);
+ 
+-	rc = tpm_transmit(chip, (const char *) &cmd, sizeof(cmd));
++	rc = tpm_transmit(chip, (const u8 *)&cmd, sizeof(cmd), 0);
+ 	if (rc <  0)
+ 		return rc;
+ 	else if (rc < TPM_HEADER_SIZE)
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 018c382554ba..1801f382377e 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -142,6 +142,11 @@ static int crb_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+ 	int rc = 0;
+ 
++	/* Zero the cancel register so that the next command will not get
++	 * canceled.
++	 */
++	iowrite32(0, &priv->cca->cancel);
++
+ 	if (len > ioread32(&priv->cca->cmd_size)) {
+ 		dev_err(&chip->dev,
+ 			"invalid command count value %x %zx\n",
+@@ -175,8 +180,6 @@ static void crb_cancel(struct tpm_chip *chip)
+ 
+ 	if ((priv->flags & CRB_FL_ACPI_START) && crb_do_acpi_start(chip))
+ 		dev_err(&chip->dev, "ACPI Start failed\n");
+-
+-	iowrite32(0, &priv->cca->cancel);
+ }
+ 
+ static bool crb_req_canceled(struct tpm_chip *chip, u8 status)
+diff --git a/drivers/cpuidle/cpuidle-arm.c b/drivers/cpuidle/cpuidle-arm.c
+index 4ba3d3fe142f..f440d385ed34 100644
+--- a/drivers/cpuidle/cpuidle-arm.c
++++ b/drivers/cpuidle/cpuidle-arm.c
+@@ -121,6 +121,7 @@ static int __init arm_idle_init(void)
+ 		dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ 		if (!dev) {
+ 			pr_err("Failed to allocate cpuidle device\n");
++			ret = -ENOMEM;
+ 			goto out_fail;
+ 		}
+ 		dev->cpu = cpu;
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index 2d1fb6420592..580f4f280d6b 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -1549,6 +1549,7 @@ config MFD_WM8350
+ config MFD_WM8350_I2C
+ 	bool "Wolfson Microelectronics WM8350 with I2C"
+ 	select MFD_WM8350
++	select REGMAP_I2C
+ 	depends on I2C=y
+ 	help
+ 	  The WM8350 is an integrated audio and power management
+diff --git a/drivers/mfd/atmel-hlcdc.c b/drivers/mfd/atmel-hlcdc.c
+index eca7ea69b81c..4b15b0840f16 100644
+--- a/drivers/mfd/atmel-hlcdc.c
++++ b/drivers/mfd/atmel-hlcdc.c
+@@ -50,8 +50,9 @@ static int regmap_atmel_hlcdc_reg_write(void *context, unsigned int reg,
+ 	if (reg <= ATMEL_HLCDC_DIS) {
+ 		u32 status;
+ 
+-		readl_poll_timeout(hregmap->regs + ATMEL_HLCDC_SR, status,
+-				   !(status & ATMEL_HLCDC_SIP), 1, 100);
++		readl_poll_timeout_atomic(hregmap->regs + ATMEL_HLCDC_SR,
++					  status, !(status & ATMEL_HLCDC_SIP),
++					  1, 100);
+ 	}
+ 
+ 	writel(val, hregmap->regs + reg);
+diff --git a/drivers/mfd/rtsx_usb.c b/drivers/mfd/rtsx_usb.c
+index dbd907d7170e..691dab791f7a 100644
+--- a/drivers/mfd/rtsx_usb.c
++++ b/drivers/mfd/rtsx_usb.c
+@@ -46,9 +46,6 @@ static void rtsx_usb_sg_timed_out(unsigned long data)
+ 
+ 	dev_dbg(&ucr->pusb_intf->dev, "%s: sg transfer timed out", __func__);
+ 	usb_sg_cancel(&ucr->current_sg);
+-
+-	/* we know the cancellation is caused by time-out */
+-	ucr->current_sg.status = -ETIMEDOUT;
+ }
+ 
+ static int rtsx_usb_bulk_transfer_sglist(struct rtsx_ucr *ucr,
+@@ -67,12 +64,15 @@ static int rtsx_usb_bulk_transfer_sglist(struct rtsx_ucr *ucr,
+ 	ucr->sg_timer.expires = jiffies + msecs_to_jiffies(timeout);
+ 	add_timer(&ucr->sg_timer);
+ 	usb_sg_wait(&ucr->current_sg);
+-	del_timer_sync(&ucr->sg_timer);
++	if (!del_timer_sync(&ucr->sg_timer))
++		ret = -ETIMEDOUT;
++	else
++		ret = ucr->current_sg.status;
+ 
+ 	if (act_len)
+ 		*act_len = ucr->current_sg.bytes;
+ 
+-	return ucr->current_sg.status;
++	return ret;
+ }
+ 
+ int rtsx_usb_transfer_data(struct rtsx_ucr *ucr, unsigned int pipe,
+diff --git a/drivers/pci/pci-mid.c b/drivers/pci/pci-mid.c
+index c878aa71173b..55f453de562e 100644
+--- a/drivers/pci/pci-mid.c
++++ b/drivers/pci/pci-mid.c
+@@ -60,8 +60,13 @@ static struct pci_platform_pm_ops mid_pci_platform_pm = {
+ 
+ #define ICPU(model)	{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, }
+ 
++/*
++ * This table should be in sync with the one in
++ * arch/x86/platform/intel-mid/pwr.c.
++ */
+ static const struct x86_cpu_id lpss_cpu_ids[] = {
+-	ICPU(INTEL_FAM6_ATOM_MERRIFIELD1),
++	ICPU(INTEL_FAM6_ATOM_PENWELL),
++	ICPU(INTEL_FAM6_ATOM_MERRIFIELD),
+ 	{}
+ };
+ 
+diff --git a/drivers/phy/phy-sun4i-usb.c b/drivers/phy/phy-sun4i-usb.c
+index 8c7eb335622e..4d74ca9186c7 100644
+--- a/drivers/phy/phy-sun4i-usb.c
++++ b/drivers/phy/phy-sun4i-usb.c
+@@ -40,6 +40,7 @@
+ #include <linux/power_supply.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/reset.h>
++#include <linux/spinlock.h>
+ #include <linux/usb/of.h>
+ #include <linux/workqueue.h>
+ 
+@@ -112,7 +113,7 @@ struct sun4i_usb_phy_data {
+ 	void __iomem *base;
+ 	const struct sun4i_usb_phy_cfg *cfg;
+ 	enum usb_dr_mode dr_mode;
+-	struct mutex mutex;
++	spinlock_t reg_lock; /* guard access to phyctl reg */
+ 	struct sun4i_usb_phy {
+ 		struct phy *phy;
+ 		void __iomem *pmu;
+@@ -179,9 +180,10 @@ static void sun4i_usb_phy_write(struct sun4i_usb_phy *phy, u32 addr, u32 data,
+ 	struct sun4i_usb_phy_data *phy_data = to_sun4i_usb_phy_data(phy);
+ 	u32 temp, usbc_bit = BIT(phy->index * 2);
+ 	void __iomem *phyctl = phy_data->base + phy_data->cfg->phyctl_offset;
++	unsigned long flags;
+ 	int i;
+ 
+-	mutex_lock(&phy_data->mutex);
++	spin_lock_irqsave(&phy_data->reg_lock, flags);
+ 
+ 	if (phy_data->cfg->type == sun8i_a33_phy) {
+ 		/* A33 needs us to set phyctl to 0 explicitly */
+@@ -218,7 +220,8 @@ static void sun4i_usb_phy_write(struct sun4i_usb_phy *phy, u32 addr, u32 data,
+ 
+ 		data >>= 1;
+ 	}
+-	mutex_unlock(&phy_data->mutex);
++
++	spin_unlock_irqrestore(&phy_data->reg_lock, flags);
+ }
+ 
+ static void sun4i_usb_phy_passby(struct sun4i_usb_phy *phy, int enable)
+@@ -577,7 +580,7 @@ static int sun4i_usb_phy_probe(struct platform_device *pdev)
+ 	if (!data)
+ 		return -ENOMEM;
+ 
+-	mutex_init(&data->mutex);
++	spin_lock_init(&data->reg_lock);
+ 	INIT_DELAYED_WORK(&data->detect, sun4i_usb_phy0_id_vbus_det_scan);
+ 	dev_set_drvdata(dev, data);
+ 	data->cfg = of_device_get_match_data(dev);
+diff --git a/drivers/powercap/intel_rapl.c b/drivers/powercap/intel_rapl.c
+index fbab29dfa793..243b233ff31b 100644
+--- a/drivers/powercap/intel_rapl.c
++++ b/drivers/powercap/intel_rapl.c
+@@ -1154,8 +1154,8 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
+ 
+ 	RAPL_CPU(INTEL_FAM6_ATOM_SILVERMONT1,	rapl_defaults_byt),
+ 	RAPL_CPU(INTEL_FAM6_ATOM_AIRMONT,	rapl_defaults_cht),
+-	RAPL_CPU(INTEL_FAM6_ATOM_MERRIFIELD1,	rapl_defaults_tng),
+-	RAPL_CPU(INTEL_FAM6_ATOM_MERRIFIELD2,	rapl_defaults_ann),
++	RAPL_CPU(INTEL_FAM6_ATOM_MERRIFIELD,	rapl_defaults_tng),
++	RAPL_CPU(INTEL_FAM6_ATOM_MOOREFIELD,	rapl_defaults_ann),
+ 	RAPL_CPU(INTEL_FAM6_ATOM_GOLDMONT,	rapl_defaults_core),
+ 	RAPL_CPU(INTEL_FAM6_ATOM_DENVERTON,	rapl_defaults_core),
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 122e64df2f4d..68544618982e 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -348,7 +348,8 @@ static int dwc3_send_clear_stall_ep_cmd(struct dwc3_ep *dep)
+ 	 * IN transfers due to a mishandled error condition. Synopsys
+ 	 * STAR 9000614252.
+ 	 */
+-	if (dep->direction && (dwc->revision >= DWC3_REVISION_260A))
++	if (dep->direction && (dwc->revision >= DWC3_REVISION_260A) &&
++	    (dwc->gadget.speed >= USB_SPEED_SUPER))
+ 		cmd |= DWC3_DEPCMD_CLEARPENDIN;
+ 
+ 	memset(&params, 0, sizeof(params));
+diff --git a/drivers/usb/storage/usb.c b/drivers/usb/storage/usb.c
+index ef2d8cde6ef7..8c5f0115166a 100644
+--- a/drivers/usb/storage/usb.c
++++ b/drivers/usb/storage/usb.c
+@@ -1070,17 +1070,17 @@ int usb_stor_probe2(struct us_data *us)
+ 	result = usb_stor_acquire_resources(us);
+ 	if (result)
+ 		goto BadDevice;
++	usb_autopm_get_interface_no_resume(us->pusb_intf);
+ 	snprintf(us->scsi_name, sizeof(us->scsi_name), "usb-storage %s",
+ 					dev_name(&us->pusb_intf->dev));
+ 	result = scsi_add_host(us_to_host(us), dev);
+ 	if (result) {
+ 		dev_warn(dev,
+ 				"Unable to add the scsi host\n");
+-		goto BadDevice;
++		goto HostAddErr;
+ 	}
+ 
+ 	/* Submit the delayed_work for SCSI-device scanning */
+-	usb_autopm_get_interface_no_resume(us->pusb_intf);
+ 	set_bit(US_FLIDX_SCAN_PENDING, &us->dflags);
+ 
+ 	if (delay_use > 0)
+@@ -1090,6 +1090,8 @@ int usb_stor_probe2(struct us_data *us)
+ 	return 0;
+ 
+ 	/* We come here if there are any problems */
++HostAddErr:
++	usb_autopm_put_interface_no_suspend(us->pusb_intf);
+ BadDevice:
+ 	usb_stor_dbg(us, "storage_probe() failed\n");
+ 	release_everything(us);
+diff --git a/include/linux/mfd/88pm80x.h b/include/linux/mfd/88pm80x.h
+index d409ceb2231e..c118a7ec94d6 100644
+--- a/include/linux/mfd/88pm80x.h
++++ b/include/linux/mfd/88pm80x.h
+@@ -350,7 +350,7 @@ static inline int pm80x_dev_suspend(struct device *dev)
+ 	int irq = platform_get_irq(pdev, 0);
+ 
+ 	if (device_may_wakeup(dev))
+-		set_bit((1 << irq), &chip->wu_flag);
++		set_bit(irq, &chip->wu_flag);
+ 
+ 	return 0;
+ }
+@@ -362,7 +362,7 @@ static inline int pm80x_dev_resume(struct device *dev)
+ 	int irq = platform_get_irq(pdev, 0);
+ 
+ 	if (device_may_wakeup(dev))
+-		clear_bit((1 << irq), &chip->wu_flag);
++		clear_bit(irq, &chip->wu_flag);
+ 
+ 	return 0;
+ }
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index e07fb093f819..37dec7e3db43 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -403,8 +403,11 @@ static __always_inline u64 __ktime_get_fast_ns(struct tk_fast *tkf)
+ 		tkr = tkf->base + (seq & 0x01);
+ 		now = ktime_to_ns(tkr->base);
+ 
+-		now += clocksource_delta(tkr->read(tkr->clock),
+-					 tkr->cycle_last, tkr->mask);
++		now += timekeeping_delta_to_ns(tkr,
++				clocksource_delta(
++					tkr->read(tkr->clock),
++					tkr->cycle_last,
++					tkr->mask));
+ 	} while (read_seqcount_retry(&tkf->seq, seq));
+ 
+ 	return now;
+diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
+index 4b9b4a4e1b89..ef1e4e701780 100644
+--- a/security/integrity/ima/ima_appraise.c
++++ b/security/integrity/ima/ima_appraise.c
+@@ -190,7 +190,7 @@ int ima_appraise_measurement(enum ima_hooks func,
+ {
+ 	static const char op[] = "appraise_data";
+ 	char *cause = "unknown";
+-	struct dentry *dentry = file->f_path.dentry;
++	struct dentry *dentry = file_dentry(file);
+ 	struct inode *inode = d_backing_inode(dentry);
+ 	enum integrity_status status = INTEGRITY_UNKNOWN;
+ 	int rc = xattr_len, hash_start = 0;
+@@ -295,7 +295,7 @@ out:
+  */
+ void ima_update_xattr(struct integrity_iint_cache *iint, struct file *file)
+ {
+-	struct dentry *dentry = file->f_path.dentry;
++	struct dentry *dentry = file_dentry(file);
+ 	int rc = 0;
+ 
+ 	/* do not collect and update hash for digital signatures */
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 596ef616ac21..423d111b3b94 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -228,7 +228,7 @@ static int process_measurement(struct file *file, char *buf, loff_t size,
+ 	if ((action & IMA_APPRAISE_SUBMASK) ||
+ 		    strcmp(template_desc->name, IMA_TEMPLATE_IMA_NAME) != 0)
+ 		/* read 'security.ima' */
+-		xattr_len = ima_read_xattr(file->f_path.dentry, &xattr_value);
++		xattr_len = ima_read_xattr(file_dentry(file), &xattr_value);
+ 
+ 	hash_algo = ima_get_hash_algo(xattr_value, xattr_len);
+ 
+diff --git a/sound/pci/ali5451/ali5451.c b/sound/pci/ali5451/ali5451.c
+index 36470af7eda7..92b819e4f729 100644
+--- a/sound/pci/ali5451/ali5451.c
++++ b/sound/pci/ali5451/ali5451.c
+@@ -1408,6 +1408,7 @@ snd_ali_playback_pointer(struct snd_pcm_substream *substream)
+ 	spin_unlock(&codec->reg_lock);
+ 	dev_dbg(codec->card->dev, "playback pointer returned cso=%xh.\n", cso);
+ 
++	cso %= runtime->buffer_size;
+ 	return cso;
+ }
+ 
+@@ -1428,6 +1429,7 @@ static snd_pcm_uframes_t snd_ali_pointer(struct snd_pcm_substream *substream)
+ 	cso = inw(ALI_REG(codec, ALI_CSO_ALPHA_FMS + 2));
+ 	spin_unlock(&codec->reg_lock);
+ 
++	cso %= runtime->buffer_size;
+ 	return cso;
+ }
+ 
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index 81b7da8e56d3..183311cb849e 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -29,7 +29,7 @@
+ /*
+ 	This is Line 6's MIDI manufacturer ID.
+ */
+-const unsigned char line6_midi_id[] = {
++const unsigned char line6_midi_id[3] = {
+ 	0x00, 0x01, 0x0c
+ };
+ EXPORT_SYMBOL_GPL(line6_midi_id);
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index f6c3bf79af9a..04991b009132 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -1831,6 +1831,7 @@ void snd_usb_mixer_rc_memory_change(struct usb_mixer_interface *mixer,
+ }
+ 
+ static void snd_dragonfly_quirk_db_scale(struct usb_mixer_interface *mixer,
++					 struct usb_mixer_elem_info *cval,
+ 					 struct snd_kcontrol *kctl)
+ {
+ 	/* Approximation using 10 ranges based on output measurement on hw v1.2.
+@@ -1848,10 +1849,19 @@ static void snd_dragonfly_quirk_db_scale(struct usb_mixer_interface *mixer,
+ 		41, 50, TLV_DB_MINMAX_ITEM(-441, 0),
+ 	);
+ 
+-	usb_audio_info(mixer->chip, "applying DragonFly dB scale quirk\n");
+-	kctl->tlv.p = scale;
+-	kctl->vd[0].access |= SNDRV_CTL_ELEM_ACCESS_TLV_READ;
+-	kctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_TLV_CALLBACK;
++	if (cval->min == 0 && cval->max == 50) {
++		usb_audio_info(mixer->chip, "applying DragonFly dB scale quirk (0-50 variant)\n");
++		kctl->tlv.p = scale;
++		kctl->vd[0].access |= SNDRV_CTL_ELEM_ACCESS_TLV_READ;
++		kctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_TLV_CALLBACK;
++
++	} else if (cval->min == 0 && cval->max <= 1000) {
++		/* Some other clearly broken DragonFly variant.
++		 * At least a 0..53 variant (hw v1.0) exists.
++		 */
++		usb_audio_info(mixer->chip, "ignoring too narrow dB range on a DragonFly device");
++		kctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_TLV_CALLBACK;
++	}
+ }
+ 
+ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+@@ -1860,8 +1870,8 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ {
+ 	switch (mixer->chip->usb_id) {
+ 	case USB_ID(0x21b4, 0x0081): /* AudioQuest DragonFly */
+-		if (unitid == 7 && cval->min == 0 && cval->max == 50)
+-			snd_dragonfly_quirk_db_scale(mixer, kctl);
++		if (unitid == 7 && cval->control == UAC_FU_VOLUME)
++			snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
+ 		break;
+ 	}
+ }
+diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
+index a027569facfa..6e9c40eea208 100644
+--- a/virt/kvm/arm/pmu.c
++++ b/virt/kvm/arm/pmu.c
+@@ -423,6 +423,14 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
+ 	if (!kvm_arm_support_pmu_v3())
+ 		return -ENODEV;
+ 
++	/*
++	 * We currently require an in-kernel VGIC to use the PMU emulation,
++	 * because we do not support forwarding PMU overflow interrupts to
++	 * userspace yet.
++	 */
++	if (!irqchip_in_kernel(vcpu->kvm) || !vgic_initialized(vcpu->kvm))
++		return -ENODEV;
++
+ 	if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features) ||
+ 	    !kvm_arm_pmu_irq_initialized(vcpu))
+ 		return -ENXIO;
+diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
+index e83b7fe4baae..b465ac6d5d48 100644
+--- a/virt/kvm/arm/vgic/vgic.c
++++ b/virt/kvm/arm/vgic/vgic.c
+@@ -645,6 +645,9 @@ next:
+ /* Sync back the hardware VGIC state into our emulation after a guest's run. */
+ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
+ {
++	if (unlikely(!vgic_initialized(vcpu->kvm)))
++		return;
++
+ 	vgic_process_maintenance_interrupt(vcpu);
+ 	vgic_fold_lr_state(vcpu);
+ 	vgic_prune_ap_list(vcpu);
+@@ -653,6 +656,9 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
+ /* Flush our emulation state into the GIC hardware before entering the guest. */
+ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
+ {
++	if (unlikely(!vgic_initialized(vcpu->kvm)))
++		return;
++
+ 	spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock);
+ 	vgic_flush_lr_state(vcpu);
+ 	spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-10-21 11:11 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-10-21 11:11 UTC (permalink / raw
  To: gentoo-commits

commit:     b2af285eb4601a6aa04bd1b1d14c211a1408e39e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Oct 21 11:11:37 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Oct 21 11:11:37 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b2af285e

Linux patch 4.8.3

 0000_README            |   4 ++
 1002_linux-4.8.3.patch | 125 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 129 insertions(+)

diff --git a/0000_README b/0000_README
index 07a39ba..f814c9e 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-4.8.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.2
 
+Patch:  1002_linux-4.8.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-4.8.3.patch b/1002_linux-4.8.3.patch
new file mode 100644
index 0000000..36a0827
--- /dev/null
+++ b/1002_linux-4.8.3.patch
@@ -0,0 +1,125 @@
+diff --git a/Makefile b/Makefile
+index bf6e44a421df..42eb45c86a42 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index ec6381e57eb7..258a3f9a2519 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -246,10 +246,6 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ 
+ 	shost->dma_dev = dma_dev;
+ 
+-	error = device_add(&shost->shost_gendev);
+-	if (error)
+-		goto out_destroy_freelist;
+-
+ 	/*
+ 	 * Increase usage count temporarily here so that calling
+ 	 * scsi_autopm_put_host() will trigger runtime idle if there is
+@@ -260,6 +256,10 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ 	pm_runtime_enable(&shost->shost_gendev);
+ 	device_enable_async_suspend(&shost->shost_gendev);
+ 
++	error = device_add(&shost->shost_gendev);
++	if (error)
++		goto out_destroy_freelist;
++
+ 	scsi_host_set_state(shost, SHOST_RUNNING);
+ 	get_device(shost->shost_gendev.parent);
+ 
+@@ -309,6 +309,10 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+  out_del_gendev:
+ 	device_del(&shost->shost_gendev);
+  out_destroy_freelist:
++	device_disable_async_suspend(&shost->shost_gendev);
++	pm_runtime_disable(&shost->shost_gendev);
++	pm_runtime_set_suspended(&shost->shost_gendev);
++	pm_runtime_put_noidle(&shost->shost_gendev);
+ 	scsi_destroy_command_freelist(shost);
+  out_destroy_tags:
+ 	if (shost_use_blk_mq(shost))
+diff --git a/fs/xfs/xfs_xattr.c b/fs/xfs/xfs_xattr.c
+index ea62245fee26..62900938f26d 100644
+--- a/fs/xfs/xfs_xattr.c
++++ b/fs/xfs/xfs_xattr.c
+@@ -147,6 +147,7 @@ __xfs_xattr_put_listent(
+ 	arraytop = context->count + prefix_len + namelen + 1;
+ 	if (arraytop > context->firstu) {
+ 		context->count = -1;	/* insufficient space */
++		context->seen_enough = 1;
+ 		return 0;
+ 	}
+ 	offset = (char *)context->alist + context->count;
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index ef815b9cd426..277cd39a6399 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2234,6 +2234,7 @@ static inline struct page *follow_page(struct vm_area_struct *vma,
+ #define FOLL_TRIED	0x800	/* a retry, previous pass started an IO */
+ #define FOLL_MLOCK	0x1000	/* lock present pages */
+ #define FOLL_REMOTE	0x2000	/* we are working on non-current tsk/mm */
++#define FOLL_COW	0x4000	/* internal GUP flag */
+ 
+ typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
+ 			void *data);
+diff --git a/include/media/rcar-fcp.h b/include/media/rcar-fcp.h
+index 4c7fc77eaf29..8723f05c6321 100644
+--- a/include/media/rcar-fcp.h
++++ b/include/media/rcar-fcp.h
+@@ -29,7 +29,7 @@ static inline struct rcar_fcp_device *rcar_fcp_get(const struct device_node *np)
+ static inline void rcar_fcp_put(struct rcar_fcp_device *fcp) { }
+ static inline int rcar_fcp_enable(struct rcar_fcp_device *fcp)
+ {
+-	return -ENOSYS;
++	return 0;
+ }
+ static inline void rcar_fcp_disable(struct rcar_fcp_device *fcp) { }
+ #endif
+diff --git a/mm/gup.c b/mm/gup.c
+index 96b2b2fd0fbd..22cc22e7432f 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -60,6 +60,16 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
+ 	return -EEXIST;
+ }
+ 
++/*
++ * FOLL_FORCE can write to even unwritable pte's, but only
++ * after we've gone through a COW cycle and they are dirty.
++ */
++static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
++{
++	return pte_write(pte) ||
++		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
++}
++
+ static struct page *follow_page_pte(struct vm_area_struct *vma,
+ 		unsigned long address, pmd_t *pmd, unsigned int flags)
+ {
+@@ -95,7 +105,7 @@ retry:
+ 	}
+ 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
+ 		goto no_page;
+-	if ((flags & FOLL_WRITE) && !pte_write(pte)) {
++	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+ 		pte_unmap_unlock(ptep, ptl);
+ 		return NULL;
+ 	}
+@@ -412,7 +422,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
+ 	 * reCOWed by userspace write).
+ 	 */
+ 	if ((ret & VM_FAULT_WRITE) && !(vma->vm_flags & VM_WRITE))
+-		*flags &= ~FOLL_WRITE;
++	        *flags |= FOLL_COW;
+ 	return 0;
+ }
+ 



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-10-22 13:08 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-10-22 13:08 UTC (permalink / raw
  To: gentoo-commits

commit:     586e8ad56c51f3844347707c9b20aa666796fbdf
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 22 13:08:18 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Oct 22 13:08:18 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=586e8ad5

Linux patch 4.8.4

 0000_README            |    4 +
 1003_linux-4.8.4.patch | 2264 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2268 insertions(+)

diff --git a/0000_README b/0000_README
index f814c9e..5a8b43e 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-4.8.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.3
 
+Patch:  1003_linux-4.8.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-4.8.4.patch b/1003_linux-4.8.4.patch
new file mode 100644
index 0000000..bb2930c
--- /dev/null
+++ b/1003_linux-4.8.4.patch
@@ -0,0 +1,2264 @@
+diff --git a/MAINTAINERS b/MAINTAINERS
+index f593300e310b..babaf8261941 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -12951,11 +12951,10 @@ F:	arch/x86/xen/*swiotlb*
+ F:	drivers/xen/*swiotlb*
+ 
+ XFS FILESYSTEM
+-P:	Silicon Graphics Inc
+ M:	Dave Chinner <david@fromorbit.com>
+-M:	xfs@oss.sgi.com
+-L:	xfs@oss.sgi.com
+-W:	http://oss.sgi.com/projects/xfs
++M:	linux-xfs@vger.kernel.org
++L:	linux-xfs@vger.kernel.org
++W:	http://xfs.org/
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs.git
+ S:	Supported
+ F:	Documentation/filesystems/xfs.txt
+diff --git a/Makefile b/Makefile
+index 42eb45c86a42..82a36ab540a4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arc/include/asm/irqflags-arcv2.h b/arch/arc/include/asm/irqflags-arcv2.h
+index d1ec7f6b31e0..e880dfa3fcd3 100644
+--- a/arch/arc/include/asm/irqflags-arcv2.h
++++ b/arch/arc/include/asm/irqflags-arcv2.h
+@@ -112,7 +112,7 @@ static inline long arch_local_save_flags(void)
+ 	 */
+ 	temp = (1 << 5) |
+ 		((!!(temp & STATUS_IE_MASK)) << CLRI_STATUS_IE_BIT) |
+-		(temp & CLRI_STATUS_E_MASK);
++		((temp >> 1) & CLRI_STATUS_E_MASK);
+ 	return temp;
+ }
+ 
+diff --git a/arch/arc/kernel/intc-arcv2.c b/arch/arc/kernel/intc-arcv2.c
+index 6c24faf48b16..62b59409a5d9 100644
+--- a/arch/arc/kernel/intc-arcv2.c
++++ b/arch/arc/kernel/intc-arcv2.c
+@@ -74,7 +74,7 @@ void arc_init_IRQ(void)
+ 	tmp = read_aux_reg(0xa);
+ 	tmp |= STATUS_AD_MASK | (irq_prio << 1);
+ 	tmp &= ~STATUS_IE_MASK;
+-	asm volatile("flag %0	\n"::"r"(tmp));
++	asm volatile("kflag %0	\n"::"r"(tmp));
+ }
+ 
+ static void arcv2_irq_mask(struct irq_data *data)
+diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
+index cc2f6dbd4303..5e24d880306c 100644
+--- a/block/cfq-iosched.c
++++ b/block/cfq-iosched.c
+@@ -3042,7 +3042,6 @@ static struct request *cfq_check_fifo(struct cfq_queue *cfqq)
+ 	if (ktime_get_ns() < rq->fifo_time)
+ 		rq = NULL;
+ 
+-	cfq_log_cfqq(cfqq->cfqd, cfqq, "fifo=%p", rq);
+ 	return rq;
+ }
+ 
+@@ -3420,6 +3419,9 @@ static bool cfq_may_dispatch(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+ {
+ 	unsigned int max_dispatch;
+ 
++	if (cfq_cfqq_must_dispatch(cfqq))
++		return true;
++
+ 	/*
+ 	 * Drain async requests before we start sync IO
+ 	 */
+@@ -3511,15 +3513,20 @@ static bool cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+ 
+ 	BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list));
+ 
++	rq = cfq_check_fifo(cfqq);
++	if (rq)
++		cfq_mark_cfqq_must_dispatch(cfqq);
++
+ 	if (!cfq_may_dispatch(cfqd, cfqq))
+ 		return false;
+ 
+ 	/*
+ 	 * follow expired path, else get first next available
+ 	 */
+-	rq = cfq_check_fifo(cfqq);
+ 	if (!rq)
+ 		rq = cfqq->next_rq;
++	else
++		cfq_log_cfqq(cfqq->cfqd, cfqq, "fifo=%p", rq);
+ 
+ 	/*
+ 	 * insert request into driver dispatch list
+@@ -3989,7 +3996,7 @@ cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
+ 	 * if the new request is sync, but the currently running queue is
+ 	 * not, let the sync request have priority.
+ 	 */
+-	if (rq_is_sync(rq) && !cfq_cfqq_sync(cfqq))
++	if (rq_is_sync(rq) && !cfq_cfqq_sync(cfqq) && !cfq_cfqq_must_dispatch(cfqq))
+ 		return true;
+ 
+ 	/*
+diff --git a/crypto/async_tx/async_pq.c b/crypto/async_tx/async_pq.c
+index 08b3ac68952b..f83de99d7d71 100644
+--- a/crypto/async_tx/async_pq.c
++++ b/crypto/async_tx/async_pq.c
+@@ -368,8 +368,6 @@ async_syndrome_val(struct page **blocks, unsigned int offset, int disks,
+ 
+ 		dma_set_unmap(tx, unmap);
+ 		async_tx_submit(chan, tx, submit);
+-
+-		return tx;
+ 	} else {
+ 		struct page *p_src = P(blocks, disks);
+ 		struct page *q_src = Q(blocks, disks);
+@@ -424,9 +422,11 @@ async_syndrome_val(struct page **blocks, unsigned int offset, int disks,
+ 		submit->cb_param = cb_param_orig;
+ 		submit->flags = flags_orig;
+ 		async_tx_sync_epilog(submit);
+-
+-		return NULL;
++		tx = NULL;
+ 	}
++	dmaengine_unmap_put(unmap);
++
++	return tx;
+ }
+ EXPORT_SYMBOL_GPL(async_syndrome_val);
+ 
+diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c
+index bac70995e064..12ad3e3a84e3 100644
+--- a/crypto/ghash-generic.c
++++ b/crypto/ghash-generic.c
+@@ -14,24 +14,13 @@
+ 
+ #include <crypto/algapi.h>
+ #include <crypto/gf128mul.h>
++#include <crypto/ghash.h>
+ #include <crypto/internal/hash.h>
+ #include <linux/crypto.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ 
+-#define GHASH_BLOCK_SIZE	16
+-#define GHASH_DIGEST_SIZE	16
+-
+-struct ghash_ctx {
+-	struct gf128mul_4k *gf128;
+-};
+-
+-struct ghash_desc_ctx {
+-	u8 buffer[GHASH_BLOCK_SIZE];
+-	u32 bytes;
+-};
+-
+ static int ghash_init(struct shash_desc *desc)
+ {
+ 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index e1d5ea6d5e40..2accf784534e 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -2689,6 +2689,9 @@ static void acpi_nfit_notify(struct acpi_device *adev, u32 event)
+ 
+ 	dev_dbg(dev, "%s: event: %d\n", __func__, event);
+ 
++	if (event != NFIT_NOTIFY_UPDATE)
++		return;
++
+ 	device_lock(dev);
+ 	if (!dev->driver) {
+ 		/* dev->driver may be null if we're being removed */
+diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h
+index e894ded24d99..51d23f130d86 100644
+--- a/drivers/acpi/nfit/nfit.h
++++ b/drivers/acpi/nfit/nfit.h
+@@ -78,6 +78,10 @@ enum {
+ 	NFIT_ARS_TIMEOUT = 90,
+ };
+ 
++enum nfit_root_notifiers {
++	NFIT_NOTIFY_UPDATE = 0x80,
++};
++
+ struct nfit_spa {
+ 	struct list_head list;
+ 	struct nd_region *nd_region;
+diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
+index d799662f19eb..261420ddfe66 100644
+--- a/drivers/base/dma-mapping.c
++++ b/drivers/base/dma-mapping.c
+@@ -334,7 +334,7 @@ void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags)
+ 		return;
+ 	}
+ 
+-	unmap_kernel_range((unsigned long)cpu_addr, size);
++	unmap_kernel_range((unsigned long)cpu_addr, PAGE_ALIGN(size));
+ 	vunmap(cpu_addr);
+ }
+ #endif
+diff --git a/drivers/clk/mvebu/cp110-system-controller.c b/drivers/clk/mvebu/cp110-system-controller.c
+index 7fa42d6b2b92..f2303da7fda7 100644
+--- a/drivers/clk/mvebu/cp110-system-controller.c
++++ b/drivers/clk/mvebu/cp110-system-controller.c
+@@ -81,13 +81,6 @@ enum {
+ #define CP110_GATE_EIP150		25
+ #define CP110_GATE_EIP197		26
+ 
+-static struct clk *cp110_clks[CP110_CLK_NUM];
+-
+-static struct clk_onecell_data cp110_clk_data = {
+-	.clks = cp110_clks,
+-	.clk_num = CP110_CLK_NUM,
+-};
+-
+ struct cp110_gate_clk {
+ 	struct clk_hw hw;
+ 	struct regmap *regmap;
+@@ -142,6 +135,8 @@ static struct clk *cp110_register_gate(const char *name,
+ 	if (!gate)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	memset(&init, 0, sizeof(init));
++
+ 	init.name = name;
+ 	init.ops = &cp110_gate_ops;
+ 	init.parent_names = &parent_name;
+@@ -194,7 +189,8 @@ static int cp110_syscon_clk_probe(struct platform_device *pdev)
+ 	struct regmap *regmap;
+ 	struct device_node *np = pdev->dev.of_node;
+ 	const char *ppv2_name, *apll_name, *core_name, *eip_name, *nand_name;
+-	struct clk *clk;
++	struct clk_onecell_data *cp110_clk_data;
++	struct clk *clk, **cp110_clks;
+ 	u32 nand_clk_ctrl;
+ 	int i, ret;
+ 
+@@ -207,6 +203,20 @@ static int cp110_syscon_clk_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
++	cp110_clks = devm_kcalloc(&pdev->dev, sizeof(struct clk *),
++				  CP110_CLK_NUM, GFP_KERNEL);
++	if (!cp110_clks)
++		return -ENOMEM;
++
++	cp110_clk_data = devm_kzalloc(&pdev->dev,
++				      sizeof(*cp110_clk_data),
++				      GFP_KERNEL);
++	if (!cp110_clk_data)
++		return -ENOMEM;
++
++	cp110_clk_data->clks = cp110_clks;
++	cp110_clk_data->clk_num = CP110_CLK_NUM;
++
+ 	/* Register the APLL which is the root of the clk tree */
+ 	of_property_read_string_index(np, "core-clock-output-names",
+ 				      CP110_CORE_APLL, &apll_name);
+@@ -334,10 +344,12 @@ static int cp110_syscon_clk_probe(struct platform_device *pdev)
+ 		cp110_clks[CP110_MAX_CORE_CLOCKS + i] = clk;
+ 	}
+ 
+-	ret = of_clk_add_provider(np, cp110_of_clk_get, &cp110_clk_data);
++	ret = of_clk_add_provider(np, cp110_of_clk_get, cp110_clk_data);
+ 	if (ret)
+ 		goto fail_clk_add;
+ 
++	platform_set_drvdata(pdev, cp110_clks);
++
+ 	return 0;
+ 
+ fail_clk_add:
+@@ -364,6 +376,7 @@ fail0:
+ 
+ static int cp110_syscon_clk_remove(struct platform_device *pdev)
+ {
++	struct clk **cp110_clks = platform_get_drvdata(pdev);
+ 	int i;
+ 
+ 	of_clk_del_provider(pdev->dev.of_node);
+diff --git a/drivers/crypto/vmx/ghash.c b/drivers/crypto/vmx/ghash.c
+index 6c999cb01b80..27a94a119009 100644
+--- a/drivers/crypto/vmx/ghash.c
++++ b/drivers/crypto/vmx/ghash.c
+@@ -26,16 +26,13 @@
+ #include <linux/hardirq.h>
+ #include <asm/switch_to.h>
+ #include <crypto/aes.h>
++#include <crypto/ghash.h>
+ #include <crypto/scatterwalk.h>
+ #include <crypto/internal/hash.h>
+ #include <crypto/b128ops.h>
+ 
+ #define IN_INTERRUPT in_interrupt()
+ 
+-#define GHASH_BLOCK_SIZE (16)
+-#define GHASH_DIGEST_SIZE (16)
+-#define GHASH_KEY_LEN (16)
+-
+ void gcm_init_p8(u128 htable[16], const u64 Xi[2]);
+ void gcm_gmult_p8(u64 Xi[2], const u128 htable[16]);
+ void gcm_ghash_p8(u64 Xi[2], const u128 htable[16],
+@@ -55,16 +52,11 @@ struct p8_ghash_desc_ctx {
+ 
+ static int p8_ghash_init_tfm(struct crypto_tfm *tfm)
+ {
+-	const char *alg;
++	const char *alg = "ghash-generic";
+ 	struct crypto_shash *fallback;
+ 	struct crypto_shash *shash_tfm = __crypto_shash_cast(tfm);
+ 	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
+ 
+-	if (!(alg = crypto_tfm_alg_name(tfm))) {
+-		printk(KERN_ERR "Failed to get algorithm name.\n");
+-		return -ENOENT;
+-	}
+-
+ 	fallback = crypto_alloc_shash(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
+ 	if (IS_ERR(fallback)) {
+ 		printk(KERN_ERR
+@@ -78,10 +70,18 @@ static int p8_ghash_init_tfm(struct crypto_tfm *tfm)
+ 	crypto_shash_set_flags(fallback,
+ 			       crypto_shash_get_flags((struct crypto_shash
+ 						       *) tfm));
+-	ctx->fallback = fallback;
+ 
+-	shash_tfm->descsize = sizeof(struct p8_ghash_desc_ctx)
+-	    + crypto_shash_descsize(fallback);
++	/* Check if the descsize defined in the algorithm is still enough. */
++	if (shash_tfm->descsize < sizeof(struct p8_ghash_desc_ctx)
++	    + crypto_shash_descsize(fallback)) {
++		printk(KERN_ERR
++		       "Desc size of the fallback implementation (%s) does not match the expected value: %lu vs %u\n",
++		       alg,
++		       shash_tfm->descsize - sizeof(struct p8_ghash_desc_ctx),
++		       crypto_shash_descsize(fallback));
++		return -EINVAL;
++	}
++	ctx->fallback = fallback;
+ 
+ 	return 0;
+ }
+@@ -113,7 +113,7 @@ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
+ {
+ 	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(tfm));
+ 
+-	if (keylen != GHASH_KEY_LEN)
++	if (keylen != GHASH_BLOCK_SIZE)
+ 		return -EINVAL;
+ 
+ 	preempt_disable();
+@@ -211,7 +211,8 @@ struct shash_alg p8_ghash_alg = {
+ 	.update = p8_ghash_update,
+ 	.final = p8_ghash_final,
+ 	.setkey = p8_ghash_setkey,
+-	.descsize = sizeof(struct p8_ghash_desc_ctx),
++	.descsize = sizeof(struct p8_ghash_desc_ctx)
++		+ sizeof(struct ghash_desc_ctx),
+ 	.base = {
+ 		 .cra_name = "ghash",
+ 		 .cra_driver_name = "p8_ghash",
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drm_bus.c b/drivers/gpu/drm/virtio/virtgpu_drm_bus.c
+index 7f0e93f87a55..88a39165edd5 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drm_bus.c
++++ b/drivers/gpu/drm/virtio/virtgpu_drm_bus.c
+@@ -27,6 +27,16 @@
+ 
+ #include "virtgpu_drv.h"
+ 
++int drm_virtio_set_busid(struct drm_device *dev, struct drm_master *master)
++{
++	struct pci_dev *pdev = dev->pdev;
++
++	if (pdev) {
++		return drm_pci_set_busid(dev, master);
++	}
++	return 0;
++}
++
+ static void virtio_pci_kick_out_firmware_fb(struct pci_dev *pci_dev)
+ {
+ 	struct apertures_struct *ap;
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
+index c13f70cfc461..5820b7020ae5 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
++++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
+@@ -117,6 +117,7 @@ static const struct file_operations virtio_gpu_driver_fops = {
+ 
+ static struct drm_driver driver = {
+ 	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME | DRIVER_RENDER | DRIVER_ATOMIC,
++	.set_busid = drm_virtio_set_busid,
+ 	.load = virtio_gpu_driver_load,
+ 	.unload = virtio_gpu_driver_unload,
+ 	.open = virtio_gpu_driver_open,
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
+index b18ef3111f0c..acf556a35cb2 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
++++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
+@@ -49,6 +49,7 @@
+ #define DRIVER_PATCHLEVEL 1
+ 
+ /* virtgpu_drm_bus.c */
++int drm_virtio_set_busid(struct drm_device *dev, struct drm_master *master);
+ int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev);
+ 
+ struct virtio_gpu_object {
+diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c
+index 5da190e6011b..bcf76c33726b 100644
+--- a/drivers/infiniband/hw/hfi1/rc.c
++++ b/drivers/infiniband/hw/hfi1/rc.c
+@@ -932,8 +932,10 @@ void hfi1_send_rc_ack(struct hfi1_ctxtdata *rcd, struct rvt_qp *qp,
+ 	return;
+ 
+ queue_ack:
+-	this_cpu_inc(*ibp->rvp.rc_qacks);
+ 	spin_lock_irqsave(&qp->s_lock, flags);
++	if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK))
++		goto unlock;
++	this_cpu_inc(*ibp->rvp.rc_qacks);
+ 	qp->s_flags |= RVT_S_ACK_PENDING | RVT_S_RESP_PENDING;
+ 	qp->s_nak_state = qp->r_nak_state;
+ 	qp->s_ack_psn = qp->r_ack_psn;
+@@ -942,6 +944,7 @@ queue_ack:
+ 
+ 	/* Schedule the send tasklet. */
+ 	hfi1_schedule_send(qp);
++unlock:
+ 	spin_unlock_irqrestore(&qp->s_lock, flags);
+ }
+ 
+diff --git a/drivers/misc/mei/amthif.c b/drivers/misc/mei/amthif.c
+index a039a5df6f21..fd9271bc1a11 100644
+--- a/drivers/misc/mei/amthif.c
++++ b/drivers/misc/mei/amthif.c
+@@ -67,8 +67,12 @@ int mei_amthif_host_init(struct mei_device *dev, struct mei_me_client *me_cl)
+ 	struct mei_cl *cl = &dev->iamthif_cl;
+ 	int ret;
+ 
+-	if (mei_cl_is_connected(cl))
+-		return 0;
++	mutex_lock(&dev->device_lock);
++
++	if (mei_cl_is_connected(cl)) {
++		ret = 0;
++		goto out;
++	}
+ 
+ 	dev->iamthif_state = MEI_IAMTHIF_IDLE;
+ 
+@@ -77,11 +81,13 @@ int mei_amthif_host_init(struct mei_device *dev, struct mei_me_client *me_cl)
+ 	ret = mei_cl_link(cl);
+ 	if (ret < 0) {
+ 		dev_err(dev->dev, "amthif: failed cl_link %d\n", ret);
+-		return ret;
++		goto out;
+ 	}
+ 
+ 	ret = mei_cl_connect(cl, me_cl, NULL);
+ 
++out:
++	mutex_unlock(&dev->device_lock);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index 1f33fea9299f..e094df3cf2d5 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -983,12 +983,10 @@ void mei_cl_bus_rescan_work(struct work_struct *work)
+ 		container_of(work, struct mei_device, bus_rescan_work);
+ 	struct mei_me_client *me_cl;
+ 
+-	mutex_lock(&bus->device_lock);
+ 	me_cl = mei_me_cl_by_uuid(bus, &mei_amthif_guid);
+ 	if (me_cl)
+ 		mei_amthif_host_init(bus, me_cl);
+ 	mei_me_cl_put(me_cl);
+-	mutex_unlock(&bus->device_lock);
+ 
+ 	mei_cl_bus_rescan(bus);
+ }
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index d0b3a1bb82ca..dad15b6c66dd 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -11360,6 +11360,12 @@ static pci_ers_result_t i40e_pci_error_detected(struct pci_dev *pdev,
+ 
+ 	dev_info(&pdev->dev, "%s: error %d\n", __func__, error);
+ 
++	if (!pf) {
++		dev_info(&pdev->dev,
++			 "Cannot recover - error happened during device probe\n");
++		return PCI_ERS_RESULT_DISCONNECT;
++	}
++
+ 	/* shutdown all operations */
+ 	if (!test_bit(__I40E_SUSPENDED, &pf->state)) {
+ 		rtnl_lock();
+diff --git a/drivers/net/wireless/ath/carl9170/debug.c b/drivers/net/wireless/ath/carl9170/debug.c
+index 6808db433283..ec3a64e5d2bb 100644
+--- a/drivers/net/wireless/ath/carl9170/debug.c
++++ b/drivers/net/wireless/ath/carl9170/debug.c
+@@ -75,7 +75,8 @@ static ssize_t carl9170_debugfs_read(struct file *file, char __user *userbuf,
+ 
+ 	if (!ar)
+ 		return -ENODEV;
+-	dfops = container_of(file->f_op, struct carl9170_debugfs_fops, fops);
++	dfops = container_of(debugfs_real_fops(file),
++			     struct carl9170_debugfs_fops, fops);
+ 
+ 	if (!dfops->read)
+ 		return -ENOSYS;
+@@ -127,7 +128,8 @@ static ssize_t carl9170_debugfs_write(struct file *file,
+ 
+ 	if (!ar)
+ 		return -ENODEV;
+-	dfops = container_of(file->f_op, struct carl9170_debugfs_fops, fops);
++	dfops = container_of(debugfs_real_fops(file),
++			     struct carl9170_debugfs_fops, fops);
+ 
+ 	if (!dfops->write)
+ 		return -ENOSYS;
+diff --git a/drivers/net/wireless/broadcom/b43/debugfs.c b/drivers/net/wireless/broadcom/b43/debugfs.c
+index b4bcd94aff6c..77046384dd80 100644
+--- a/drivers/net/wireless/broadcom/b43/debugfs.c
++++ b/drivers/net/wireless/broadcom/b43/debugfs.c
+@@ -524,7 +524,8 @@ static ssize_t b43_debugfs_read(struct file *file, char __user *userbuf,
+ 		goto out_unlock;
+ 	}
+ 
+-	dfops = container_of(file->f_op, struct b43_debugfs_fops, fops);
++	dfops = container_of(debugfs_real_fops(file),
++			     struct b43_debugfs_fops, fops);
+ 	if (!dfops->read) {
+ 		err = -ENOSYS;
+ 		goto out_unlock;
+@@ -585,7 +586,8 @@ static ssize_t b43_debugfs_write(struct file *file,
+ 		goto out_unlock;
+ 	}
+ 
+-	dfops = container_of(file->f_op, struct b43_debugfs_fops, fops);
++	dfops = container_of(debugfs_real_fops(file),
++			     struct b43_debugfs_fops, fops);
+ 	if (!dfops->write) {
+ 		err = -ENOSYS;
+ 		goto out_unlock;
+diff --git a/drivers/net/wireless/broadcom/b43legacy/debugfs.c b/drivers/net/wireless/broadcom/b43legacy/debugfs.c
+index 090910ea259e..82ef56ed7ca1 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/debugfs.c
++++ b/drivers/net/wireless/broadcom/b43legacy/debugfs.c
+@@ -221,7 +221,8 @@ static ssize_t b43legacy_debugfs_read(struct file *file, char __user *userbuf,
+ 		goto out_unlock;
+ 	}
+ 
+-	dfops = container_of(file->f_op, struct b43legacy_debugfs_fops, fops);
++	dfops = container_of(debugfs_real_fops(file),
++			     struct b43legacy_debugfs_fops, fops);
+ 	if (!dfops->read) {
+ 		err = -ENOSYS;
+ 		goto out_unlock;
+@@ -287,7 +288,8 @@ static ssize_t b43legacy_debugfs_write(struct file *file,
+ 		goto out_unlock;
+ 	}
+ 
+-	dfops = container_of(file->f_op, struct b43legacy_debugfs_fops, fops);
++	dfops = container_of(debugfs_real_fops(file),
++			     struct b43legacy_debugfs_fops, fops);
+ 	if (!dfops->write) {
+ 		err = -ENOSYS;
+ 		goto out_unlock;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index b8aec5e5ef93..abaf003a5b39 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -2533,7 +2533,7 @@ static void brcmf_fill_bss_param(struct brcmf_if *ifp, struct station_info *si)
+ 				     WL_BSS_INFO_MAX);
+ 	if (err) {
+ 		brcmf_err("Failed to get bss info (%d)\n", err);
+-		return;
++		goto out_kfree;
+ 	}
+ 	si->filled |= BIT(NL80211_STA_INFO_BSS_PARAM);
+ 	si->bss_param.beacon_interval = le16_to_cpu(buf->bss_le.beacon_period);
+@@ -2545,6 +2545,9 @@ static void brcmf_fill_bss_param(struct brcmf_if *ifp, struct station_info *si)
+ 		si->bss_param.flags |= BSS_PARAM_FLAGS_SHORT_PREAMBLE;
+ 	if (capability & WLAN_CAPABILITY_SHORT_SLOT_TIME)
+ 		si->bss_param.flags |= BSS_PARAM_FLAGS_SHORT_SLOT_TIME;
++
++out_kfree:
++	kfree(buf);
+ }
+ 
+ static s32
+@@ -3884,11 +3887,11 @@ brcmf_cfg80211_del_pmksa(struct wiphy *wiphy, struct net_device *ndev,
+ 	if (!check_vif_up(ifp->vif))
+ 		return -EIO;
+ 
+-	brcmf_dbg(CONN, "del_pmksa - PMK bssid = %pM\n", &pmksa->bssid);
++	brcmf_dbg(CONN, "del_pmksa - PMK bssid = %pM\n", pmksa->bssid);
+ 
+ 	npmk = le32_to_cpu(cfg->pmk_list.npmk);
+ 	for (i = 0; i < npmk; i++)
+-		if (!memcmp(&pmksa->bssid, &pmk[i].bssid, ETH_ALEN))
++		if (!memcmp(pmksa->bssid, pmk[i].bssid, ETH_ALEN))
+ 			break;
+ 
+ 	if ((npmk > 0) && (i < npmk)) {
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.c
+index 7e269f9aa607..63664442e687 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.c
+@@ -234,13 +234,20 @@ static void brcmf_flowring_block(struct brcmf_flowring *flow, u16 flowid,
+ 
+ void brcmf_flowring_delete(struct brcmf_flowring *flow, u16 flowid)
+ {
++	struct brcmf_bus *bus_if = dev_get_drvdata(flow->dev);
+ 	struct brcmf_flowring_ring *ring;
++	struct brcmf_if *ifp;
+ 	u16 hash_idx;
++	u8 ifidx;
+ 	struct sk_buff *skb;
+ 
+ 	ring = flow->rings[flowid];
+ 	if (!ring)
+ 		return;
++
++	ifidx = brcmf_flowring_ifidx_get(flow, flowid);
++	ifp = brcmf_get_ifp(bus_if->drvr, ifidx);
++
+ 	brcmf_flowring_block(flow, flowid, false);
+ 	hash_idx = ring->hash_id;
+ 	flow->hash[hash_idx].ifidx = BRCMF_FLOWRING_INVALID_IFIDX;
+@@ -249,7 +256,7 @@ void brcmf_flowring_delete(struct brcmf_flowring *flow, u16 flowid)
+ 
+ 	skb = skb_dequeue(&ring->skblist);
+ 	while (skb) {
+-		brcmu_pkt_buf_free_skb(skb);
++		brcmf_txfinalize(ifp, skb, false);
+ 		skb = skb_dequeue(&ring->skblist);
+ 	}
+ 
+diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c
+index 7640498964a5..3d53d636b17b 100644
+--- a/drivers/scsi/arcmsr/arcmsr_hba.c
++++ b/drivers/scsi/arcmsr/arcmsr_hba.c
+@@ -2388,15 +2388,23 @@ static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb,
+ 	}
+ 	case ARCMSR_MESSAGE_WRITE_WQBUFFER: {
+ 		unsigned char *ver_addr;
+-		int32_t user_len, cnt2end;
++		uint32_t user_len;
++		int32_t cnt2end;
+ 		uint8_t *pQbuffer, *ptmpuserbuffer;
++
++		user_len = pcmdmessagefld->cmdmessage.Length;
++		if (user_len > ARCMSR_API_DATA_BUFLEN) {
++			retvalue = ARCMSR_MESSAGE_FAIL;
++			goto message_out;
++		}
++
+ 		ver_addr = kmalloc(ARCMSR_API_DATA_BUFLEN, GFP_ATOMIC);
+ 		if (!ver_addr) {
+ 			retvalue = ARCMSR_MESSAGE_FAIL;
+ 			goto message_out;
+ 		}
+ 		ptmpuserbuffer = ver_addr;
+-		user_len = pcmdmessagefld->cmdmessage.Length;
++
+ 		memcpy(ptmpuserbuffer,
+ 			pcmdmessagefld->messagedatabuffer, user_len);
+ 		spin_lock_irqsave(&acb->wqbuffer_lock, flags);
+diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
+index ab67ec4b6bd6..79c9860a165f 100644
+--- a/drivers/scsi/ibmvscsi/ibmvfc.c
++++ b/drivers/scsi/ibmvscsi/ibmvfc.c
+@@ -717,7 +717,6 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
+ 	spin_lock_irqsave(vhost->host->host_lock, flags);
+ 	vhost->state = IBMVFC_NO_CRQ;
+ 	vhost->logged_in = 0;
+-	ibmvfc_set_host_action(vhost, IBMVFC_HOST_ACTION_NONE);
+ 
+ 	/* Clean out the queue */
+ 	memset(crq->msgs, 0, PAGE_SIZE);
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index e19969614203..b022f5a01e63 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -462,7 +462,7 @@ static int dw8250_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	data->pclk = devm_clk_get(&pdev->dev, "apb_pclk");
+-	if (IS_ERR(data->clk) && PTR_ERR(data->clk) == -EPROBE_DEFER) {
++	if (IS_ERR(data->pclk) && PTR_ERR(data->pclk) == -EPROBE_DEFER) {
+ 		err = -EPROBE_DEFER;
+ 		goto err_clk;
+ 	}
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index bdfa659b9606..858a54633664 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1414,12 +1414,8 @@ static void __do_stop_tx_rs485(struct uart_8250_port *p)
+ 	if (!(p->port.rs485.flags & SER_RS485_RX_DURING_TX)) {
+ 		serial8250_clear_fifos(p);
+ 
+-		serial8250_rpm_get(p);
+-
+ 		p->ier |= UART_IER_RLSI | UART_IER_RDI;
+ 		serial_port_out(&p->port, UART_IER, p->ier);
+-
+-		serial8250_rpm_put(p);
+ 	}
+ }
+ 
+@@ -1429,6 +1425,7 @@ static void serial8250_em485_handle_stop_tx(unsigned long arg)
+ 	struct uart_8250_em485 *em485 = p->em485;
+ 	unsigned long flags;
+ 
++	serial8250_rpm_get(p);
+ 	spin_lock_irqsave(&p->port.lock, flags);
+ 	if (em485 &&
+ 	    em485->active_timer == &em485->stop_tx_timer) {
+@@ -1436,6 +1433,7 @@ static void serial8250_em485_handle_stop_tx(unsigned long arg)
+ 		em485->active_timer = NULL;
+ 	}
+ 	spin_unlock_irqrestore(&p->port.lock, flags);
++	serial8250_rpm_put(p);
+ }
+ 
+ static void __stop_tx_rs485(struct uart_8250_port *p)
+@@ -1475,7 +1473,7 @@ static inline void __stop_tx(struct uart_8250_port *p)
+ 		unsigned char lsr = serial_in(p, UART_LSR);
+ 		/*
+ 		 * To provide required timeing and allow FIFO transfer,
+-		 * __stop_tx_rs485 must be called only when both FIFO and
++		 * __stop_tx_rs485() must be called only when both FIFO and
+ 		 * shift register are empty. It is for device driver to enable
+ 		 * interrupt on TEMT.
+ 		 */
+@@ -1484,9 +1482,10 @@ static inline void __stop_tx(struct uart_8250_port *p)
+ 
+ 		del_timer(&em485->start_tx_timer);
+ 		em485->active_timer = NULL;
++
++		__stop_tx_rs485(p);
+ 	}
+ 	__do_stop_tx(p);
+-	__stop_tx_rs485(p);
+ }
+ 
+ static void serial8250_stop_tx(struct uart_port *port)
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 2eaa18ddef61..8bbde52db376 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -1929,6 +1929,9 @@ static void atmel_shutdown(struct uart_port *port)
+ {
+ 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+ 
++	/* Disable modem control lines interrupts */
++	atmel_disable_ms(port);
++
+ 	/* Disable interrupts at device level */
+ 	atmel_uart_writel(port, ATMEL_US_IDR, -1);
+ 
+@@ -1979,8 +1982,6 @@ static void atmel_shutdown(struct uart_port *port)
+ 	 */
+ 	free_irq(port->irq, port);
+ 
+-	atmel_port->ms_irq_enabled = false;
+-
+ 	atmel_flush_buffer(port);
+ }
+ 
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 0df2b1c091ae..615c0279a1a6 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -740,12 +740,13 @@ static unsigned int imx_get_hwmctrl(struct imx_port *sport)
+ {
+ 	unsigned int tmp = TIOCM_DSR;
+ 	unsigned usr1 = readl(sport->port.membase + USR1);
++	unsigned usr2 = readl(sport->port.membase + USR2);
+ 
+ 	if (usr1 & USR1_RTSS)
+ 		tmp |= TIOCM_CTS;
+ 
+ 	/* in DCE mode DCDIN is always 0 */
+-	if (!(usr1 & USR2_DCDIN))
++	if (!(usr2 & USR2_DCDIN))
+ 		tmp |= TIOCM_CAR;
+ 
+ 	if (sport->dte_mode)
+diff --git a/fs/attr.c b/fs/attr.c
+index 42bb42bb3c72..3c42cab06b5d 100644
+--- a/fs/attr.c
++++ b/fs/attr.c
+@@ -202,6 +202,21 @@ int notify_change(struct dentry * dentry, struct iattr * attr, struct inode **de
+ 			return -EPERM;
+ 	}
+ 
++	/*
++	 * If utimes(2) and friends are called with times == NULL (or both
++	 * times are UTIME_NOW), then we need to check for write permission
++	 */
++	if (ia_valid & ATTR_TOUCH) {
++		if (IS_IMMUTABLE(inode))
++			return -EPERM;
++
++		if (!inode_owner_or_capable(inode)) {
++			error = inode_permission(inode, MAY_WRITE);
++			if (error)
++				return error;
++		}
++	}
++
+ 	if ((ia_valid & ATTR_MODE)) {
+ 		umode_t amode = attr->ia_mode;
+ 		/* Flag setting protected by i_mutex */
+diff --git a/fs/autofs4/waitq.c b/fs/autofs4/waitq.c
+index 431fd7ee3488..e44271dfceb6 100644
+--- a/fs/autofs4/waitq.c
++++ b/fs/autofs4/waitq.c
+@@ -431,8 +431,8 @@ int autofs4_wait(struct autofs_sb_info *sbi,
+ 		memcpy(&wq->name, &qstr, sizeof(struct qstr));
+ 		wq->dev = autofs4_get_dev(sbi);
+ 		wq->ino = autofs4_get_ino(sbi);
+-		wq->uid = current_uid();
+-		wq->gid = current_gid();
++		wq->uid = current_real_cred()->uid;
++		wq->gid = current_real_cred()->gid;
+ 		wq->pid = pid;
+ 		wq->tgid = tgid;
+ 		wq->status = -EINTR; /* Status return if interrupted */
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 029db6e1105c..60a850ee8c78 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -698,7 +698,7 @@ int btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
+ 
+ 			ret = btrfs_map_bio(root, comp_bio, mirror_num, 0);
+ 			if (ret) {
+-				bio->bi_error = ret;
++				comp_bio->bi_error = ret;
+ 				bio_endio(comp_bio);
+ 			}
+ 
+@@ -728,7 +728,7 @@ int btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
+ 
+ 	ret = btrfs_map_bio(root, comp_bio, mirror_num, 0);
+ 	if (ret) {
+-		bio->bi_error = ret;
++		comp_bio->bi_error = ret;
+ 		bio_endio(comp_bio);
+ 	}
+ 
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 33fe03551105..791e47ce9d27 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -251,7 +251,8 @@ struct btrfs_super_block {
+ #define BTRFS_FEATURE_COMPAT_SAFE_CLEAR		0ULL
+ 
+ #define BTRFS_FEATURE_COMPAT_RO_SUPP			\
+-	(BTRFS_FEATURE_COMPAT_RO_FREE_SPACE_TREE)
++	(BTRFS_FEATURE_COMPAT_RO_FREE_SPACE_TREE |	\
++	 BTRFS_FEATURE_COMPAT_RO_FREE_SPACE_TREE_VALID)
+ 
+ #define BTRFS_FEATURE_COMPAT_RO_SAFE_SET	0ULL
+ #define BTRFS_FEATURE_COMPAT_RO_SAFE_CLEAR	0ULL
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 54bc8c7c6bcd..3dede6d53bad 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2566,6 +2566,7 @@ int open_ctree(struct super_block *sb,
+ 	int num_backups_tried = 0;
+ 	int backup_index = 0;
+ 	int max_active;
++	int clear_free_space_tree = 0;
+ 
+ 	tree_root = fs_info->tree_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
+ 	chunk_root = fs_info->chunk_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
+@@ -3129,6 +3130,26 @@ retry_root_backup:
+ 	if (sb->s_flags & MS_RDONLY)
+ 		return 0;
+ 
++	if (btrfs_test_opt(fs_info, CLEAR_CACHE) &&
++	    btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) {
++		clear_free_space_tree = 1;
++	} else if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE) &&
++		   !btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID)) {
++		btrfs_warn(fs_info, "free space tree is invalid");
++		clear_free_space_tree = 1;
++	}
++
++	if (clear_free_space_tree) {
++		btrfs_info(fs_info, "clearing free space tree");
++		ret = btrfs_clear_free_space_tree(fs_info);
++		if (ret) {
++			btrfs_warn(fs_info,
++				   "failed to clear free space tree: %d", ret);
++			close_ctree(tree_root);
++			return ret;
++		}
++	}
++
+ 	if (btrfs_test_opt(tree_root->fs_info, FREE_SPACE_TREE) &&
+ 	    !btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) {
+ 		btrfs_info(fs_info, "creating free space tree");
+@@ -3166,18 +3187,6 @@ retry_root_backup:
+ 
+ 	btrfs_qgroup_rescan_resume(fs_info);
+ 
+-	if (btrfs_test_opt(tree_root->fs_info, CLEAR_CACHE) &&
+-	    btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) {
+-		btrfs_info(fs_info, "clearing free space tree");
+-		ret = btrfs_clear_free_space_tree(fs_info);
+-		if (ret) {
+-			btrfs_warn(fs_info,
+-				"failed to clear free space tree: %d", ret);
+-			close_ctree(tree_root);
+-			return ret;
+-		}
+-	}
+-
+ 	if (!fs_info->uuid_root) {
+ 		btrfs_info(fs_info, "creating UUID tree");
+ 		ret = btrfs_create_uuid_tree(fs_info);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 44fe66b53c8b..c3ec30dea9a5 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -5524,17 +5524,45 @@ void copy_extent_buffer(struct extent_buffer *dst, struct extent_buffer *src,
+ 	}
+ }
+ 
+-/*
+- * The extent buffer bitmap operations are done with byte granularity because
+- * bitmap items are not guaranteed to be aligned to a word and therefore a
+- * single word in a bitmap may straddle two pages in the extent buffer.
+- */
+-#define BIT_BYTE(nr) ((nr) / BITS_PER_BYTE)
+-#define BYTE_MASK ((1 << BITS_PER_BYTE) - 1)
+-#define BITMAP_FIRST_BYTE_MASK(start) \
+-	((BYTE_MASK << ((start) & (BITS_PER_BYTE - 1))) & BYTE_MASK)
+-#define BITMAP_LAST_BYTE_MASK(nbits) \
+-	(BYTE_MASK >> (-(nbits) & (BITS_PER_BYTE - 1)))
++void le_bitmap_set(u8 *map, unsigned int start, int len)
++{
++	u8 *p = map + BIT_BYTE(start);
++	const unsigned int size = start + len;
++	int bits_to_set = BITS_PER_BYTE - (start % BITS_PER_BYTE);
++	u8 mask_to_set = BITMAP_FIRST_BYTE_MASK(start);
++
++	while (len - bits_to_set >= 0) {
++		*p |= mask_to_set;
++		len -= bits_to_set;
++		bits_to_set = BITS_PER_BYTE;
++		mask_to_set = ~(u8)0;
++		p++;
++	}
++	if (len) {
++		mask_to_set &= BITMAP_LAST_BYTE_MASK(size);
++		*p |= mask_to_set;
++	}
++}
++
++void le_bitmap_clear(u8 *map, unsigned int start, int len)
++{
++	u8 *p = map + BIT_BYTE(start);
++	const unsigned int size = start + len;
++	int bits_to_clear = BITS_PER_BYTE - (start % BITS_PER_BYTE);
++	u8 mask_to_clear = BITMAP_FIRST_BYTE_MASK(start);
++
++	while (len - bits_to_clear >= 0) {
++		*p &= ~mask_to_clear;
++		len -= bits_to_clear;
++		bits_to_clear = BITS_PER_BYTE;
++		mask_to_clear = ~(u8)0;
++		p++;
++	}
++	if (len) {
++		mask_to_clear &= BITMAP_LAST_BYTE_MASK(size);
++		*p &= ~mask_to_clear;
++	}
++}
+ 
+ /*
+  * eb_bitmap_offset() - calculate the page and offset of the byte containing the
+@@ -5578,7 +5606,7 @@ static inline void eb_bitmap_offset(struct extent_buffer *eb,
+ int extent_buffer_test_bit(struct extent_buffer *eb, unsigned long start,
+ 			   unsigned long nr)
+ {
+-	char *kaddr;
++	u8 *kaddr;
+ 	struct page *page;
+ 	unsigned long i;
+ 	size_t offset;
+@@ -5600,13 +5628,13 @@ int extent_buffer_test_bit(struct extent_buffer *eb, unsigned long start,
+ void extent_buffer_bitmap_set(struct extent_buffer *eb, unsigned long start,
+ 			      unsigned long pos, unsigned long len)
+ {
+-	char *kaddr;
++	u8 *kaddr;
+ 	struct page *page;
+ 	unsigned long i;
+ 	size_t offset;
+ 	const unsigned int size = pos + len;
+ 	int bits_to_set = BITS_PER_BYTE - (pos % BITS_PER_BYTE);
+-	unsigned int mask_to_set = BITMAP_FIRST_BYTE_MASK(pos);
++	u8 mask_to_set = BITMAP_FIRST_BYTE_MASK(pos);
+ 
+ 	eb_bitmap_offset(eb, start, pos, &i, &offset);
+ 	page = eb->pages[i];
+@@ -5617,7 +5645,7 @@ void extent_buffer_bitmap_set(struct extent_buffer *eb, unsigned long start,
+ 		kaddr[offset] |= mask_to_set;
+ 		len -= bits_to_set;
+ 		bits_to_set = BITS_PER_BYTE;
+-		mask_to_set = ~0U;
++		mask_to_set = ~(u8)0;
+ 		if (++offset >= PAGE_SIZE && len > 0) {
+ 			offset = 0;
+ 			page = eb->pages[++i];
+@@ -5642,13 +5670,13 @@ void extent_buffer_bitmap_set(struct extent_buffer *eb, unsigned long start,
+ void extent_buffer_bitmap_clear(struct extent_buffer *eb, unsigned long start,
+ 				unsigned long pos, unsigned long len)
+ {
+-	char *kaddr;
++	u8 *kaddr;
+ 	struct page *page;
+ 	unsigned long i;
+ 	size_t offset;
+ 	const unsigned int size = pos + len;
+ 	int bits_to_clear = BITS_PER_BYTE - (pos % BITS_PER_BYTE);
+-	unsigned int mask_to_clear = BITMAP_FIRST_BYTE_MASK(pos);
++	u8 mask_to_clear = BITMAP_FIRST_BYTE_MASK(pos);
+ 
+ 	eb_bitmap_offset(eb, start, pos, &i, &offset);
+ 	page = eb->pages[i];
+@@ -5659,7 +5687,7 @@ void extent_buffer_bitmap_clear(struct extent_buffer *eb, unsigned long start,
+ 		kaddr[offset] &= ~mask_to_clear;
+ 		len -= bits_to_clear;
+ 		bits_to_clear = BITS_PER_BYTE;
+-		mask_to_clear = ~0U;
++		mask_to_clear = ~(u8)0;
+ 		if (++offset >= PAGE_SIZE && len > 0) {
+ 			offset = 0;
+ 			page = eb->pages[++i];
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index 28cd88fccc7e..1cf4e4226fc8 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -59,6 +59,28 @@
+  */
+ #define EXTENT_PAGE_PRIVATE 1
+ 
++/*
++ * The extent buffer bitmap operations are done with byte granularity instead of
++ * word granularity for two reasons:
++ * 1. The bitmaps must be little-endian on disk.
++ * 2. Bitmap items are not guaranteed to be aligned to a word and therefore a
++ *    single word in a bitmap may straddle two pages in the extent buffer.
++ */
++#define BIT_BYTE(nr) ((nr) / BITS_PER_BYTE)
++#define BYTE_MASK ((1 << BITS_PER_BYTE) - 1)
++#define BITMAP_FIRST_BYTE_MASK(start) \
++	((BYTE_MASK << ((start) & (BITS_PER_BYTE - 1))) & BYTE_MASK)
++#define BITMAP_LAST_BYTE_MASK(nbits) \
++	(BYTE_MASK >> (-(nbits) & (BITS_PER_BYTE - 1)))
++
++static inline int le_test_bit(int nr, const u8 *addr)
++{
++	return 1U & (addr[BIT_BYTE(nr)] >> (nr & (BITS_PER_BYTE-1)));
++}
++
++extern void le_bitmap_set(u8 *map, unsigned int start, int len);
++extern void le_bitmap_clear(u8 *map, unsigned int start, int len);
++
+ struct extent_state;
+ struct btrfs_root;
+ struct btrfs_io_bio;
+diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
+index 87e7e3d3e676..ea605ffd0e03 100644
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -151,7 +151,7 @@ static inline u32 free_space_bitmap_size(u64 size, u32 sectorsize)
+ 	return DIV_ROUND_UP((u32)div_u64(size, sectorsize), BITS_PER_BYTE);
+ }
+ 
+-static unsigned long *alloc_bitmap(u32 bitmap_size)
++static u8 *alloc_bitmap(u32 bitmap_size)
+ {
+ 	void *mem;
+ 
+@@ -180,8 +180,7 @@ int convert_free_space_to_bitmaps(struct btrfs_trans_handle *trans,
+ 	struct btrfs_free_space_info *info;
+ 	struct btrfs_key key, found_key;
+ 	struct extent_buffer *leaf;
+-	unsigned long *bitmap;
+-	char *bitmap_cursor;
++	u8 *bitmap, *bitmap_cursor;
+ 	u64 start, end;
+ 	u64 bitmap_range, i;
+ 	u32 bitmap_size, flags, expected_extent_count;
+@@ -231,7 +230,7 @@ int convert_free_space_to_bitmaps(struct btrfs_trans_handle *trans,
+ 						block_group->sectorsize);
+ 				last = div_u64(found_key.objectid + found_key.offset - start,
+ 					       block_group->sectorsize);
+-				bitmap_set(bitmap, first, last - first);
++				le_bitmap_set(bitmap, first, last - first);
+ 
+ 				extent_count++;
+ 				nr++;
+@@ -269,7 +268,7 @@ int convert_free_space_to_bitmaps(struct btrfs_trans_handle *trans,
+ 		goto out;
+ 	}
+ 
+-	bitmap_cursor = (char *)bitmap;
++	bitmap_cursor = bitmap;
+ 	bitmap_range = block_group->sectorsize * BTRFS_FREE_SPACE_BITMAP_BITS;
+ 	i = start;
+ 	while (i < end) {
+@@ -318,7 +317,7 @@ int convert_free_space_to_extents(struct btrfs_trans_handle *trans,
+ 	struct btrfs_free_space_info *info;
+ 	struct btrfs_key key, found_key;
+ 	struct extent_buffer *leaf;
+-	unsigned long *bitmap;
++	u8 *bitmap;
+ 	u64 start, end;
+ 	/* Initialize to silence GCC. */
+ 	u64 extent_start = 0;
+@@ -362,7 +361,7 @@ int convert_free_space_to_extents(struct btrfs_trans_handle *trans,
+ 				break;
+ 			} else if (found_key.type == BTRFS_FREE_SPACE_BITMAP_KEY) {
+ 				unsigned long ptr;
+-				char *bitmap_cursor;
++				u8 *bitmap_cursor;
+ 				u32 bitmap_pos, data_size;
+ 
+ 				ASSERT(found_key.objectid >= start);
+@@ -372,7 +371,7 @@ int convert_free_space_to_extents(struct btrfs_trans_handle *trans,
+ 				bitmap_pos = div_u64(found_key.objectid - start,
+ 						     block_group->sectorsize *
+ 						     BITS_PER_BYTE);
+-				bitmap_cursor = ((char *)bitmap) + bitmap_pos;
++				bitmap_cursor = bitmap + bitmap_pos;
+ 				data_size = free_space_bitmap_size(found_key.offset,
+ 								   block_group->sectorsize);
+ 
+@@ -409,7 +408,7 @@ int convert_free_space_to_extents(struct btrfs_trans_handle *trans,
+ 	offset = start;
+ 	bitnr = 0;
+ 	while (offset < end) {
+-		bit = !!test_bit(bitnr, bitmap);
++		bit = !!le_test_bit(bitnr, bitmap);
+ 		if (prev_bit == 0 && bit == 1) {
+ 			extent_start = offset;
+ 		} else if (prev_bit == 1 && bit == 0) {
+@@ -1183,6 +1182,7 @@ int btrfs_create_free_space_tree(struct btrfs_fs_info *fs_info)
+ 	}
+ 
+ 	btrfs_set_fs_compat_ro(fs_info, FREE_SPACE_TREE);
++	btrfs_set_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID);
+ 	fs_info->creating_free_space_tree = 0;
+ 
+ 	ret = btrfs_commit_transaction(trans, tree_root);
+@@ -1251,6 +1251,7 @@ int btrfs_clear_free_space_tree(struct btrfs_fs_info *fs_info)
+ 		return PTR_ERR(trans);
+ 
+ 	btrfs_clear_fs_compat_ro(fs_info, FREE_SPACE_TREE);
++	btrfs_clear_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID);
+ 	fs_info->free_space_root = NULL;
+ 
+ 	ret = clear_free_space_tree(trans, free_space_root);
+diff --git a/fs/cachefiles/interface.c b/fs/cachefiles/interface.c
+index ce5f345d70f5..e7f16a77a22a 100644
+--- a/fs/cachefiles/interface.c
++++ b/fs/cachefiles/interface.c
+@@ -253,6 +253,8 @@ static void cachefiles_drop_object(struct fscache_object *_object)
+ 	struct cachefiles_object *object;
+ 	struct cachefiles_cache *cache;
+ 	const struct cred *saved_cred;
++	struct inode *inode;
++	blkcnt_t i_blocks = 0;
+ 
+ 	ASSERT(_object);
+ 
+@@ -279,6 +281,10 @@ static void cachefiles_drop_object(struct fscache_object *_object)
+ 		    _object != cache->cache.fsdef
+ 		    ) {
+ 			_debug("- retire object OBJ%x", object->fscache.debug_id);
++			inode = d_backing_inode(object->dentry);
++			if (inode)
++				i_blocks = inode->i_blocks;
++
+ 			cachefiles_begin_secure(cache, &saved_cred);
+ 			cachefiles_delete_object(cache, object);
+ 			cachefiles_end_secure(cache, saved_cred);
+@@ -292,7 +298,7 @@ static void cachefiles_drop_object(struct fscache_object *_object)
+ 
+ 	/* note that the object is now inactive */
+ 	if (test_bit(CACHEFILES_OBJECT_ACTIVE, &object->flags))
+-		cachefiles_mark_object_inactive(cache, object);
++		cachefiles_mark_object_inactive(cache, object, i_blocks);
+ 
+ 	dput(object->dentry);
+ 	object->dentry = NULL;
+diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
+index 2fcde1a34b7c..cd1effee8a49 100644
+--- a/fs/cachefiles/internal.h
++++ b/fs/cachefiles/internal.h
+@@ -160,7 +160,8 @@ extern char *cachefiles_cook_key(const u8 *raw, int keylen, uint8_t type);
+  * namei.c
+  */
+ extern void cachefiles_mark_object_inactive(struct cachefiles_cache *cache,
+-					    struct cachefiles_object *object);
++					    struct cachefiles_object *object,
++					    blkcnt_t i_blocks);
+ extern int cachefiles_delete_object(struct cachefiles_cache *cache,
+ 				    struct cachefiles_object *object);
+ extern int cachefiles_walk_to_object(struct cachefiles_object *parent,
+diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
+index 3f7c2cd41f8f..c6ee4b5fb7e6 100644
+--- a/fs/cachefiles/namei.c
++++ b/fs/cachefiles/namei.c
+@@ -261,10 +261,9 @@ requeue:
+  * Mark an object as being inactive.
+  */
+ void cachefiles_mark_object_inactive(struct cachefiles_cache *cache,
+-				     struct cachefiles_object *object)
++				     struct cachefiles_object *object,
++				     blkcnt_t i_blocks)
+ {
+-	blkcnt_t i_blocks = d_backing_inode(object->dentry)->i_blocks;
+-
+ 	write_lock(&cache->active_lock);
+ 	rb_erase(&object->active_node, &cache->active_nodes);
+ 	clear_bit(CACHEFILES_OBJECT_ACTIVE, &object->flags);
+@@ -707,7 +706,8 @@ mark_active_timed_out:
+ 
+ check_error:
+ 	_debug("check error %d", ret);
+-	cachefiles_mark_object_inactive(cache, object);
++	cachefiles_mark_object_inactive(
++		cache, object, d_backing_inode(object->dentry)->i_blocks);
+ release_dentry:
+ 	dput(object->dentry);
+ 	object->dentry = NULL;
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index 592059f88e04..309f4e9b2419 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -97,9 +97,6 @@ EXPORT_SYMBOL_GPL(debugfs_use_file_finish);
+ 
+ #define F_DENTRY(filp) ((filp)->f_path.dentry)
+ 
+-#define REAL_FOPS_DEREF(dentry)					\
+-	((const struct file_operations *)(dentry)->d_fsdata)
+-
+ static int open_proxy_open(struct inode *inode, struct file *filp)
+ {
+ 	const struct dentry *dentry = F_DENTRY(filp);
+@@ -112,7 +109,7 @@ static int open_proxy_open(struct inode *inode, struct file *filp)
+ 		goto out;
+ 	}
+ 
+-	real_fops = REAL_FOPS_DEREF(dentry);
++	real_fops = debugfs_real_fops(filp);
+ 	real_fops = fops_get(real_fops);
+ 	if (!real_fops) {
+ 		/* Huh? Module did not clean up after itself at exit? */
+@@ -143,7 +140,7 @@ static ret_type full_proxy_ ## name(proto)				\
+ {									\
+ 	const struct dentry *dentry = F_DENTRY(filp);			\
+ 	const struct file_operations *real_fops =			\
+-		REAL_FOPS_DEREF(dentry);				\
++		debugfs_real_fops(filp);				\
+ 	int srcu_idx;							\
+ 	ret_type r;							\
+ 									\
+@@ -176,7 +173,7 @@ static unsigned int full_proxy_poll(struct file *filp,
+ 				struct poll_table_struct *wait)
+ {
+ 	const struct dentry *dentry = F_DENTRY(filp);
+-	const struct file_operations *real_fops = REAL_FOPS_DEREF(dentry);
++	const struct file_operations *real_fops = debugfs_real_fops(filp);
+ 	int srcu_idx;
+ 	unsigned int r = 0;
+ 
+@@ -193,7 +190,7 @@ static unsigned int full_proxy_poll(struct file *filp,
+ static int full_proxy_release(struct inode *inode, struct file *filp)
+ {
+ 	const struct dentry *dentry = F_DENTRY(filp);
+-	const struct file_operations *real_fops = REAL_FOPS_DEREF(dentry);
++	const struct file_operations *real_fops = debugfs_real_fops(filp);
+ 	const struct file_operations *proxy_fops = filp->f_op;
+ 	int r = 0;
+ 
+@@ -241,7 +238,7 @@ static int full_proxy_open(struct inode *inode, struct file *filp)
+ 		goto out;
+ 	}
+ 
+-	real_fops = REAL_FOPS_DEREF(dentry);
++	real_fops = debugfs_real_fops(filp);
+ 	real_fops = fops_get(real_fops);
+ 	if (!real_fops) {
+ 		/* Huh? Module did not cleanup after itself at exit? */
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index 963016c8f3d1..609998de533e 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -1656,16 +1656,12 @@ void dlm_lowcomms_stop(void)
+ 	mutex_lock(&connections_lock);
+ 	dlm_allow_conn = 0;
+ 	foreach_conn(stop_conn);
++	clean_writequeues();
++	foreach_conn(free_conn);
+ 	mutex_unlock(&connections_lock);
+ 
+ 	work_stop();
+ 
+-	mutex_lock(&connections_lock);
+-	clean_writequeues();
+-
+-	foreach_conn(free_conn);
+-
+-	mutex_unlock(&connections_lock);
+ 	kmem_cache_destroy(con_cache);
+ }
+ 
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index d7ccb7f51dfc..7f69347bd5a5 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -5734,6 +5734,9 @@ int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
+ 			up_write(&EXT4_I(inode)->i_data_sem);
+ 			goto out_stop;
+ 		}
++	} else {
++		ext4_ext_drop_refs(path);
++		kfree(path);
+ 	}
+ 
+ 	ret = ext4_es_remove_extent(inode, offset_lblk,
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index c6ea25a190f8..f4cdc647ecfc 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -647,11 +647,19 @@ found:
+ 		/*
+ 		 * We have to zeroout blocks before inserting them into extent
+ 		 * status tree. Otherwise someone could look them up there and
+-		 * use them before they are really zeroed.
++		 * use them before they are really zeroed. We also have to
++		 * unmap metadata before zeroing as otherwise writeback can
++		 * overwrite zeros with stale data from block device.
+ 		 */
+ 		if (flags & EXT4_GET_BLOCKS_ZERO &&
+ 		    map->m_flags & EXT4_MAP_MAPPED &&
+ 		    map->m_flags & EXT4_MAP_NEW) {
++			ext4_lblk_t i;
++
++			for (i = 0; i < map->m_len; i++) {
++				unmap_underlying_metadata(inode->i_sb->s_bdev,
++							  map->m_pblk + i);
++			}
+ 			ret = ext4_issue_zeroout(inode, map->m_lblk,
+ 						 map->m_pblk, map->m_len);
+ 			if (ret) {
+@@ -1649,6 +1657,8 @@ static void mpage_release_unused_pages(struct mpage_da_data *mpd,
+ 			BUG_ON(!PageLocked(page));
+ 			BUG_ON(PageWriteback(page));
+ 			if (invalidate) {
++				if (page_mapped(page))
++					clear_page_dirty_for_io(page);
+ 				block_invalidatepage(page, 0, PAGE_SIZE);
+ 				ClearPageUptodate(page);
+ 			}
+@@ -3890,7 +3900,7 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
+ }
+ 
+ /*
+- * ext4_punch_hole: punches a hole in a file by releaseing the blocks
++ * ext4_punch_hole: punches a hole in a file by releasing the blocks
+  * associated with the given offset and length
+  *
+  * @inode:  File inode
+@@ -3919,7 +3929,7 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ 	 * Write out all dirty pages to avoid race conditions
+ 	 * Then release them.
+ 	 */
+-	if (mapping->nrpages && mapping_tagged(mapping, PAGECACHE_TAG_DIRTY)) {
++	if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY)) {
+ 		ret = filemap_write_and_wait_range(mapping, offset,
+ 						   offset + length - 1);
+ 		if (ret)
+@@ -4814,14 +4824,14 @@ static int ext4_do_update_inode(handle_t *handle,
+  * Fix up interoperability with old kernels. Otherwise, old inodes get
+  * re-used with the upper 16 bits of the uid/gid intact
+  */
+-		if (!ei->i_dtime) {
++		if (ei->i_dtime && list_empty(&ei->i_orphan)) {
++			raw_inode->i_uid_high = 0;
++			raw_inode->i_gid_high = 0;
++		} else {
+ 			raw_inode->i_uid_high =
+ 				cpu_to_le16(high_16_bits(i_uid));
+ 			raw_inode->i_gid_high =
+ 				cpu_to_le16(high_16_bits(i_gid));
+-		} else {
+-			raw_inode->i_uid_high = 0;
+-			raw_inode->i_gid_high = 0;
+ 		}
+ 	} else {
+ 		raw_inode->i_uid_low = cpu_to_le16(fs_high2lowuid(i_uid));
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index a920c5d29fac..6fc14def0c70 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -598,6 +598,13 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	if (ext4_encrypted_inode(orig_inode) ||
++	    ext4_encrypted_inode(donor_inode)) {
++		ext4_msg(orig_inode->i_sb, KERN_ERR,
++			 "Online defrag not supported for encrypted files");
++		return -EOPNOTSUPP;
++	}
++
+ 	/* Protect orig and donor inodes against a truncate */
+ 	lock_two_nondirectories(orig_inode, donor_inode);
+ 
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 34c0142caf6a..7e2f8c3c11ce 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2044,33 +2044,31 @@ static int make_indexed_dir(handle_t *handle, struct ext4_filename *fname,
+ 	frame->entries = entries;
+ 	frame->at = entries;
+ 	frame->bh = bh;
+-	bh = bh2;
+ 
+ 	retval = ext4_handle_dirty_dx_node(handle, dir, frame->bh);
+ 	if (retval)
+ 		goto out_frames;	
+-	retval = ext4_handle_dirty_dirent_node(handle, dir, bh);
++	retval = ext4_handle_dirty_dirent_node(handle, dir, bh2);
+ 	if (retval)
+ 		goto out_frames;	
+ 
+-	de = do_split(handle,dir, &bh, frame, &fname->hinfo);
++	de = do_split(handle,dir, &bh2, frame, &fname->hinfo);
+ 	if (IS_ERR(de)) {
+ 		retval = PTR_ERR(de);
+ 		goto out_frames;
+ 	}
+-	dx_release(frames);
+ 
+-	retval = add_dirent_to_buf(handle, fname, dir, inode, de, bh);
+-	brelse(bh);
+-	return retval;
++	retval = add_dirent_to_buf(handle, fname, dir, inode, de, bh2);
+ out_frames:
+ 	/*
+ 	 * Even if the block split failed, we have to properly write
+ 	 * out all the changes we did so far. Otherwise we can end up
+ 	 * with corrupted filesystem.
+ 	 */
+-	ext4_mark_inode_dirty(handle, dir);
++	if (retval)
++		ext4_mark_inode_dirty(handle, dir);
+ 	dx_release(frames);
++	brelse(bh2);
+ 	return retval;
+ }
+ 
+diff --git a/fs/ext4/symlink.c b/fs/ext4/symlink.c
+index 4d83d9e05f2e..04a7850a0d45 100644
+--- a/fs/ext4/symlink.c
++++ b/fs/ext4/symlink.c
+@@ -65,13 +65,12 @@ static const char *ext4_encrypted_get_link(struct dentry *dentry,
+ 	res = fscrypt_fname_alloc_buffer(inode, cstr.len, &pstr);
+ 	if (res)
+ 		goto errout;
++	paddr = pstr.name;
+ 
+ 	res = fscrypt_fname_disk_to_usr(inode, 0, 0, &cstr, &pstr);
+ 	if (res < 0)
+ 		goto errout;
+ 
+-	paddr = pstr.name;
+-
+ 	/* Null-terminate the name */
+ 	if (res <= pstr.len)
+ 		paddr[res] = '\0';
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index c47b7780ce37..4ff9251e9d3a 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1702,14 +1702,46 @@ error:
+ static int fuse_setattr(struct dentry *entry, struct iattr *attr)
+ {
+ 	struct inode *inode = d_inode(entry);
++	struct file *file = (attr->ia_valid & ATTR_FILE) ? attr->ia_file : NULL;
++	int ret;
+ 
+ 	if (!fuse_allow_current_process(get_fuse_conn(inode)))
+ 		return -EACCES;
+ 
+-	if (attr->ia_valid & ATTR_FILE)
+-		return fuse_do_setattr(inode, attr, attr->ia_file);
+-	else
+-		return fuse_do_setattr(inode, attr, NULL);
++	if (attr->ia_valid & (ATTR_KILL_SUID | ATTR_KILL_SGID)) {
++		int kill;
++
++		attr->ia_valid &= ~(ATTR_KILL_SUID | ATTR_KILL_SGID |
++				    ATTR_MODE);
++		/*
++		 * ia_mode calculation may have used stale i_mode.  Refresh and
++		 * recalculate.
++		 */
++		ret = fuse_do_getattr(inode, NULL, file);
++		if (ret)
++			return ret;
++
++		attr->ia_mode = inode->i_mode;
++		kill = should_remove_suid(entry);
++		if (kill & ATTR_KILL_SUID) {
++			attr->ia_valid |= ATTR_MODE;
++			attr->ia_mode &= ~S_ISUID;
++		}
++		if (kill & ATTR_KILL_SGID) {
++			attr->ia_valid |= ATTR_MODE;
++			attr->ia_mode &= ~S_ISGID;
++		}
++	}
++	if (!attr->ia_valid)
++		return 0;
++
++	ret = fuse_do_setattr(inode, attr, file);
++	if (!ret) {
++		/* Directory mode changed, may need to revalidate access */
++		if (d_is_dir(entry) && (attr->ia_valid & ATTR_MODE))
++			fuse_invalidate_entry_cache(entry);
++	}
++	return ret;
+ }
+ 
+ static int fuse_getattr(struct vfsmount *mnt, struct dentry *entry,
+@@ -1801,6 +1833,23 @@ static ssize_t fuse_getxattr(struct dentry *entry, struct inode *inode,
+ 	return ret;
+ }
+ 
++static int fuse_verify_xattr_list(char *list, size_t size)
++{
++	size_t origsize = size;
++
++	while (size) {
++		size_t thislen = strnlen(list, size);
++
++		if (!thislen || thislen == size)
++			return -EIO;
++
++		size -= thislen + 1;
++		list += thislen + 1;
++	}
++
++	return origsize;
++}
++
+ static ssize_t fuse_listxattr(struct dentry *entry, char *list, size_t size)
+ {
+ 	struct inode *inode = d_inode(entry);
+@@ -1836,6 +1885,8 @@ static ssize_t fuse_listxattr(struct dentry *entry, char *list, size_t size)
+ 	ret = fuse_simple_request(fc, &args);
+ 	if (!ret && !size)
+ 		ret = outarg.size;
++	if (ret > 0 && size)
++		ret = fuse_verify_xattr_list(list, ret);
+ 	if (ret == -ENOSYS) {
+ 		fc->no_listxattr = 1;
+ 		ret = -EOPNOTSUPP;
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index b5bc3e249163..3d8246a9faa4 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -159,6 +159,7 @@ static void wait_transaction_locked(journal_t *journal)
+ 	read_unlock(&journal->j_state_lock);
+ 	if (need_to_start)
+ 		jbd2_log_start_commit(journal, tid);
++	jbd2_might_wait_for_commit(journal);
+ 	schedule();
+ 	finish_wait(&journal->j_wait_transaction_locked, &wait);
+ }
+@@ -182,8 +183,6 @@ static int add_transaction_credits(journal_t *journal, int blocks,
+ 	int needed;
+ 	int total = blocks + rsv_blocks;
+ 
+-	jbd2_might_wait_for_commit(journal);
+-
+ 	/*
+ 	 * If the current transaction is locked down for commit, wait
+ 	 * for the lock to be released.
+@@ -214,6 +213,7 @@ static int add_transaction_credits(journal_t *journal, int blocks,
+ 		if (atomic_read(&journal->j_reserved_credits) + total >
+ 		    journal->j_max_transaction_buffers) {
+ 			read_unlock(&journal->j_state_lock);
++			jbd2_might_wait_for_commit(journal);
+ 			wait_event(journal->j_wait_reserved,
+ 				   atomic_read(&journal->j_reserved_credits) + total <=
+ 				   journal->j_max_transaction_buffers);
+@@ -238,6 +238,7 @@ static int add_transaction_credits(journal_t *journal, int blocks,
+ 	if (jbd2_log_space_left(journal) < jbd2_space_needed(journal)) {
+ 		atomic_sub(total, &t->t_outstanding_credits);
+ 		read_unlock(&journal->j_state_lock);
++		jbd2_might_wait_for_commit(journal);
+ 		write_lock(&journal->j_state_lock);
+ 		if (jbd2_log_space_left(journal) < jbd2_space_needed(journal))
+ 			__jbd2_log_wait_for_space(journal);
+@@ -255,6 +256,7 @@ static int add_transaction_credits(journal_t *journal, int blocks,
+ 		sub_reserved_credits(journal, rsv_blocks);
+ 		atomic_sub(total, &t->t_outstanding_credits);
+ 		read_unlock(&journal->j_state_lock);
++		jbd2_might_wait_for_commit(journal);
+ 		wait_event(journal->j_wait_reserved,
+ 			 atomic_read(&journal->j_reserved_credits) + rsv_blocks
+ 			 <= journal->j_max_transaction_buffers / 2);
+diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
+index 7a4a85a6821e..74d5ddd26296 100644
+--- a/fs/reiserfs/super.c
++++ b/fs/reiserfs/super.c
+@@ -190,7 +190,15 @@ static int remove_save_link_only(struct super_block *s,
+ static int reiserfs_quota_on_mount(struct super_block *, int);
+ #endif
+ 
+-/* look for uncompleted unlinks and truncates and complete them */
++/*
++ * Look for uncompleted unlinks and truncates and complete them
++ *
++ * Called with superblock write locked.  If quotas are enabled, we have to
++ * release/retake lest we call dquot_quota_on_mount(), proceed to
++ * schedule_on_each_cpu() in invalidate_bdev() and deadlock waiting for the per
++ * cpu worklets to complete flush_async_commits() that in turn wait for the
++ * superblock write lock.
++ */
+ static int finish_unfinished(struct super_block *s)
+ {
+ 	INITIALIZE_PATH(path);
+@@ -237,7 +245,9 @@ static int finish_unfinished(struct super_block *s)
+ 				quota_enabled[i] = 0;
+ 				continue;
+ 			}
++			reiserfs_write_unlock(s);
+ 			ret = reiserfs_quota_on_mount(s, i);
++			reiserfs_write_lock(s);
+ 			if (ret < 0)
+ 				reiserfs_warning(s, "reiserfs-2500",
+ 						 "cannot turn on journaled "
+diff --git a/fs/utimes.c b/fs/utimes.c
+index 794f5f5b1fb5..ba54b9e648c9 100644
+--- a/fs/utimes.c
++++ b/fs/utimes.c
+@@ -87,21 +87,7 @@ static int utimes_common(struct path *path, struct timespec *times)
+ 		 */
+ 		newattrs.ia_valid |= ATTR_TIMES_SET;
+ 	} else {
+-		/*
+-		 * If times is NULL (or both times are UTIME_NOW),
+-		 * then we need to check permissions, because
+-		 * inode_change_ok() won't do it.
+-		 */
+-		error = -EPERM;
+-                if (IS_IMMUTABLE(inode))
+-			goto mnt_drop_write_and_out;
+-
+-		error = -EACCES;
+-		if (!inode_owner_or_capable(inode)) {
+-			error = inode_permission(inode, MAY_WRITE);
+-			if (error)
+-				goto mnt_drop_write_and_out;
+-		}
++		newattrs.ia_valid |= ATTR_TOUCH;
+ 	}
+ retry_deleg:
+ 	inode_lock(inode);
+@@ -113,7 +99,6 @@ retry_deleg:
+ 			goto retry_deleg;
+ 	}
+ 
+-mnt_drop_write_and_out:
+ 	mnt_drop_write(path->mnt);
+ out:
+ 	return error;
+diff --git a/include/crypto/ghash.h b/include/crypto/ghash.h
+new file mode 100644
+index 000000000000..2a61c9bbab8f
+--- /dev/null
++++ b/include/crypto/ghash.h
+@@ -0,0 +1,23 @@
++/*
++ * Common values for GHASH algorithms
++ */
++
++#ifndef __CRYPTO_GHASH_H__
++#define __CRYPTO_GHASH_H__
++
++#include <linux/types.h>
++#include <crypto/gf128mul.h>
++
++#define GHASH_BLOCK_SIZE	16
++#define GHASH_DIGEST_SIZE	16
++
++struct ghash_ctx {
++	struct gf128mul_4k *gf128;
++};
++
++struct ghash_desc_ctx {
++	u8 buffer[GHASH_BLOCK_SIZE];
++	u32 bytes;
++};
++
++#endif
+diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
+index 1438e2322d5c..4d3f0d1aec73 100644
+--- a/include/linux/debugfs.h
++++ b/include/linux/debugfs.h
+@@ -45,6 +45,23 @@ extern struct dentry *arch_debugfs_dir;
+ 
+ extern struct srcu_struct debugfs_srcu;
+ 
++/**
++ * debugfs_real_fops - getter for the real file operation
++ * @filp: a pointer to a struct file
++ *
++ * Must only be called under the protection established by
++ * debugfs_use_file_start().
++ */
++static inline const struct file_operations *debugfs_real_fops(struct file *filp)
++	__must_hold(&debugfs_srcu)
++{
++	/*
++	 * Neither the pointer to the struct file_operations, nor its
++	 * contents ever change -- srcu_dereference() is not needed here.
++	 */
++	return filp->f_path.dentry->d_fsdata;
++}
++
+ #if defined(CONFIG_DEBUG_FS)
+ 
+ struct dentry *debugfs_create_file(const char *name, umode_t mode,
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 901e25d495cc..7c391366fb43 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -224,6 +224,7 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
+ #define ATTR_KILL_PRIV	(1 << 14)
+ #define ATTR_OPEN	(1 << 15) /* Truncating from open(O_TRUNC) */
+ #define ATTR_TIMES_SET	(1 << 16)
++#define ATTR_TOUCH	(1 << 17)
+ 
+ /*
+  * Whiteout is represented by a char device.  The following constants define the
+diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
+index 4c45105dece3..52b97db93830 100644
+--- a/include/linux/radix-tree.h
++++ b/include/linux/radix-tree.h
+@@ -280,9 +280,9 @@ bool __radix_tree_delete_node(struct radix_tree_root *root,
+ 			      struct radix_tree_node *node);
+ void *radix_tree_delete_item(struct radix_tree_root *, unsigned long, void *);
+ void *radix_tree_delete(struct radix_tree_root *, unsigned long);
+-struct radix_tree_node *radix_tree_replace_clear_tags(
+-				struct radix_tree_root *root,
+-				unsigned long index, void *entry);
++void radix_tree_clear_tags(struct radix_tree_root *root,
++			   struct radix_tree_node *node,
++			   void **slot);
+ unsigned int radix_tree_gang_lookup(struct radix_tree_root *root,
+ 			void **results, unsigned long first_index,
+ 			unsigned int max_items);
+diff --git a/include/linux/sem.h b/include/linux/sem.h
+index 976ce3a19f1b..d0efd6e6c20a 100644
+--- a/include/linux/sem.h
++++ b/include/linux/sem.h
+@@ -21,6 +21,7 @@ struct sem_array {
+ 	struct list_head	list_id;	/* undo requests on this array */
+ 	int			sem_nsems;	/* no. of semaphores in array */
+ 	int			complex_count;	/* pending complex operations */
++	bool			complex_mode;	/* no parallel simple ops */
+ };
+ 
+ #ifdef CONFIG_SYSVIPC
+diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
+index ac5eacd3055b..db4c253f8011 100644
+--- a/include/uapi/linux/btrfs.h
++++ b/include/uapi/linux/btrfs.h
+@@ -239,7 +239,17 @@ struct btrfs_ioctl_fs_info_args {
+  * Used by:
+  * struct btrfs_ioctl_feature_flags
+  */
+-#define BTRFS_FEATURE_COMPAT_RO_FREE_SPACE_TREE	(1ULL << 0)
++#define BTRFS_FEATURE_COMPAT_RO_FREE_SPACE_TREE		(1ULL << 0)
++/*
++ * Older kernels (< 4.9) on big-endian systems produced broken free space tree
++ * bitmaps, and btrfs-progs also used to corrupt the free space tree (versions
++ * < 4.7.3).  If this bit is clear, then the free space tree cannot be trusted.
++ * btrfs-progs can also intentionally clear this bit to ask the kernel to
++ * rebuild the free space tree, however this might not work on older kernels
++ * that do not know about this bit. If not sure, clear the cache manually on
++ * first mount when booting older kernel versions.
++ */
++#define BTRFS_FEATURE_COMPAT_RO_FREE_SPACE_TREE_VALID	(1ULL << 1)
+ 
+ #define BTRFS_FEATURE_INCOMPAT_MIXED_BACKREF	(1ULL << 0)
+ #define BTRFS_FEATURE_INCOMPAT_DEFAULT_SUBVOL	(1ULL << 1)
+diff --git a/ipc/sem.c b/ipc/sem.c
+index 7c9d4f7683c0..5e318c5f749d 100644
+--- a/ipc/sem.c
++++ b/ipc/sem.c
+@@ -162,14 +162,21 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it);
+ 
+ /*
+  * Locking:
++ * a) global sem_lock() for read/write
+  *	sem_undo.id_next,
+  *	sem_array.complex_count,
+- *	sem_array.pending{_alter,_cont},
+- *	sem_array.sem_undo: global sem_lock() for read/write
+- *	sem_undo.proc_next: only "current" is allowed to read/write that field.
++ *	sem_array.complex_mode
++ *	sem_array.pending{_alter,_const},
++ *	sem_array.sem_undo
+  *
++ * b) global or semaphore sem_lock() for read/write:
+  *	sem_array.sem_base[i].pending_{const,alter}:
+- *		global or semaphore sem_lock() for read/write
++ *	sem_array.complex_mode (for read)
++ *
++ * c) special:
++ *	sem_undo_list.list_proc:
++ *	* undo_list->lock for write
++ *	* rcu for read
+  */
+ 
+ #define sc_semmsl	sem_ctls[0]
+@@ -260,30 +267,61 @@ static void sem_rcu_free(struct rcu_head *head)
+ }
+ 
+ /*
+- * Wait until all currently ongoing simple ops have completed.
++ * Enter the mode suitable for non-simple operations:
+  * Caller must own sem_perm.lock.
+- * New simple ops cannot start, because simple ops first check
+- * that sem_perm.lock is free.
+- * that a) sem_perm.lock is free and b) complex_count is 0.
+  */
+-static void sem_wait_array(struct sem_array *sma)
++static void complexmode_enter(struct sem_array *sma)
+ {
+ 	int i;
+ 	struct sem *sem;
+ 
+-	if (sma->complex_count)  {
+-		/* The thread that increased sma->complex_count waited on
+-		 * all sem->lock locks. Thus we don't need to wait again.
+-		 */
++	if (sma->complex_mode)  {
++		/* We are already in complex_mode. Nothing to do */
+ 		return;
+ 	}
+ 
++	/* We need a full barrier after setting complex_mode:
++	 * The write to complex_mode must be visible
++	 * before we read the first sem->lock spinlock state.
++	 */
++	smp_store_mb(sma->complex_mode, true);
++
+ 	for (i = 0; i < sma->sem_nsems; i++) {
+ 		sem = sma->sem_base + i;
+ 		spin_unlock_wait(&sem->lock);
+ 	}
++	/*
++	 * spin_unlock_wait() is not a memory barrier, it is only a
++	 * control barrier. The code must pair with spin_unlock(&sem->lock),
++	 * thus just the control barrier is insufficient.
++	 *
++	 * smp_rmb() is sufficient, as writes cannot pass the control barrier.
++	 */
++	smp_rmb();
++}
++
++/*
++ * Try to leave the mode that disallows simple operations:
++ * Caller must own sem_perm.lock.
++ */
++static void complexmode_tryleave(struct sem_array *sma)
++{
++	if (sma->complex_count)  {
++		/* Complex ops are sleeping.
++		 * We must stay in complex mode
++		 */
++		return;
++	}
++	/*
++	 * Immediately after setting complex_mode to false,
++	 * a simple op can start. Thus: all memory writes
++	 * performed by the current operation must be visible
++	 * before we set complex_mode to false.
++	 */
++	smp_store_release(&sma->complex_mode, false);
+ }
+ 
++#define SEM_GLOBAL_LOCK	(-1)
+ /*
+  * If the request contains only one semaphore operation, and there are
+  * no complex transactions pending, lock only the semaphore involved.
+@@ -300,56 +338,42 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
+ 		/* Complex operation - acquire a full lock */
+ 		ipc_lock_object(&sma->sem_perm);
+ 
+-		/* And wait until all simple ops that are processed
+-		 * right now have dropped their locks.
+-		 */
+-		sem_wait_array(sma);
+-		return -1;
++		/* Prevent parallel simple ops */
++		complexmode_enter(sma);
++		return SEM_GLOBAL_LOCK;
+ 	}
+ 
+ 	/*
+ 	 * Only one semaphore affected - try to optimize locking.
+-	 * The rules are:
+-	 * - optimized locking is possible if no complex operation
+-	 *   is either enqueued or processed right now.
+-	 * - The test for enqueued complex ops is simple:
+-	 *      sma->complex_count != 0
+-	 * - Testing for complex ops that are processed right now is
+-	 *   a bit more difficult. Complex ops acquire the full lock
+-	 *   and first wait that the running simple ops have completed.
+-	 *   (see above)
+-	 *   Thus: If we own a simple lock and the global lock is free
+-	 *	and complex_count is now 0, then it will stay 0 and
+-	 *	thus just locking sem->lock is sufficient.
++	 * Optimized locking is possible if no complex operation
++	 * is either enqueued or processed right now.
++	 *
++	 * Both facts are tracked by complex_mode.
+ 	 */
+ 	sem = sma->sem_base + sops->sem_num;
+ 
+-	if (sma->complex_count == 0) {
++	/*
++	 * Initial check for complex_mode. Just an optimization,
++	 * no locking, no memory barrier.
++	 */
++	if (!sma->complex_mode) {
+ 		/*
+ 		 * It appears that no complex operation is around.
+ 		 * Acquire the per-semaphore lock.
+ 		 */
+ 		spin_lock(&sem->lock);
+ 
+-		/* Then check that the global lock is free */
+-		if (!spin_is_locked(&sma->sem_perm.lock)) {
+-			/*
+-			 * We need a memory barrier with acquire semantics,
+-			 * otherwise we can race with another thread that does:
+-			 *	complex_count++;
+-			 *	spin_unlock(sem_perm.lock);
+-			 */
+-			smp_acquire__after_ctrl_dep();
++		/*
++		 * See 51d7d5205d33
++		 * ("powerpc: Add smp_mb() to arch_spin_is_locked()"):
++		 * A full barrier is required: the write of sem->lock
++		 * must be visible before the read is executed
++		 */
++		smp_mb();
+ 
+-			/*
+-			 * Now repeat the test of complex_count:
+-			 * It can't change anymore until we drop sem->lock.
+-			 * Thus: if is now 0, then it will stay 0.
+-			 */
+-			if (sma->complex_count == 0) {
+-				/* fast path successful! */
+-				return sops->sem_num;
+-			}
++		if (!smp_load_acquire(&sma->complex_mode)) {
++			/* fast path successful! */
++			return sops->sem_num;
+ 		}
+ 		spin_unlock(&sem->lock);
+ 	}
+@@ -369,15 +393,16 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
+ 		/* Not a false alarm, thus complete the sequence for a
+ 		 * full lock.
+ 		 */
+-		sem_wait_array(sma);
+-		return -1;
++		complexmode_enter(sma);
++		return SEM_GLOBAL_LOCK;
+ 	}
+ }
+ 
+ static inline void sem_unlock(struct sem_array *sma, int locknum)
+ {
+-	if (locknum == -1) {
++	if (locknum == SEM_GLOBAL_LOCK) {
+ 		unmerge_queues(sma);
++		complexmode_tryleave(sma);
+ 		ipc_unlock_object(&sma->sem_perm);
+ 	} else {
+ 		struct sem *sem = sma->sem_base + locknum;
+@@ -529,6 +554,7 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
+ 	}
+ 
+ 	sma->complex_count = 0;
++	sma->complex_mode = true; /* dropped by sem_unlock below */
+ 	INIT_LIST_HEAD(&sma->pending_alter);
+ 	INIT_LIST_HEAD(&sma->pending_const);
+ 	INIT_LIST_HEAD(&sma->list_id);
+@@ -2184,10 +2210,10 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it)
+ 	/*
+ 	 * The proc interface isn't aware of sem_lock(), it calls
+ 	 * ipc_lock_object() directly (in sysvipc_find_ipc).
+-	 * In order to stay compatible with sem_lock(), we must wait until
+-	 * all simple semop() calls have left their critical regions.
++	 * In order to stay compatible with sem_lock(), we must
++	 * enter / leave complex_mode.
+ 	 */
+-	sem_wait_array(sma);
++	complexmode_enter(sma);
+ 
+ 	sem_otime = get_semotime(sma);
+ 
+@@ -2204,6 +2230,8 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it)
+ 		   sem_otime,
+ 		   sma->sem_ctime);
+ 
++	complexmode_tryleave(sma);
++
+ 	return 0;
+ }
+ #endif
+diff --git a/lib/radix-tree.c b/lib/radix-tree.c
+index 91f0727e3cad..8e6d552c40dd 100644
+--- a/lib/radix-tree.c
++++ b/lib/radix-tree.c
+@@ -1583,15 +1583,10 @@ void *radix_tree_delete(struct radix_tree_root *root, unsigned long index)
+ }
+ EXPORT_SYMBOL(radix_tree_delete);
+ 
+-struct radix_tree_node *radix_tree_replace_clear_tags(
+-			struct radix_tree_root *root,
+-			unsigned long index, void *entry)
++void radix_tree_clear_tags(struct radix_tree_root *root,
++			   struct radix_tree_node *node,
++			   void **slot)
+ {
+-	struct radix_tree_node *node;
+-	void **slot;
+-
+-	__radix_tree_lookup(root, index, &node, &slot);
+-
+ 	if (node) {
+ 		unsigned int tag, offset = get_slot_offset(node, slot);
+ 		for (tag = 0; tag < RADIX_TREE_MAX_TAGS; tag++)
+@@ -1600,9 +1595,6 @@ struct radix_tree_node *radix_tree_replace_clear_tags(
+ 		/* Clear root node tags */
+ 		root->gfp_mask &= __GFP_BITS_MASK;
+ 	}
+-
+-	radix_tree_replace_slot(slot, entry);
+-	return node;
+ }
+ 
+ /**
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 2d0986a64f1f..ced9ef6c06b0 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -169,33 +169,35 @@ static int page_cache_tree_insert(struct address_space *mapping,
+ static void page_cache_tree_delete(struct address_space *mapping,
+ 				   struct page *page, void *shadow)
+ {
+-	struct radix_tree_node *node;
+ 	int i, nr = PageHuge(page) ? 1 : hpage_nr_pages(page);
+ 
+ 	VM_BUG_ON_PAGE(!PageLocked(page), page);
+ 	VM_BUG_ON_PAGE(PageTail(page), page);
+ 	VM_BUG_ON_PAGE(nr != 1 && shadow, page);
+ 
+-	if (shadow) {
+-		mapping->nrexceptional += nr;
+-		/*
+-		 * Make sure the nrexceptional update is committed before
+-		 * the nrpages update so that final truncate racing
+-		 * with reclaim does not see both counters 0 at the
+-		 * same time and miss a shadow entry.
+-		 */
+-		smp_wmb();
+-	}
+-	mapping->nrpages -= nr;
+-
+ 	for (i = 0; i < nr; i++) {
+-		node = radix_tree_replace_clear_tags(&mapping->page_tree,
+-				page->index + i, shadow);
++		struct radix_tree_node *node;
++		void **slot;
++
++		__radix_tree_lookup(&mapping->page_tree, page->index + i,
++				    &node, &slot);
++
++		radix_tree_clear_tags(&mapping->page_tree, node, slot);
++
+ 		if (!node) {
+ 			VM_BUG_ON_PAGE(nr != 1, page);
+-			return;
++			/*
++			 * We need a node to properly account shadow
++			 * entries. Don't plant any without. XXX
++			 */
++			shadow = NULL;
+ 		}
+ 
++		radix_tree_replace_slot(slot, shadow);
++
++		if (!node)
++			break;
++
+ 		workingset_node_pages_dec(node);
+ 		if (shadow)
+ 			workingset_node_shadows_inc(node);
+@@ -219,6 +221,18 @@ static void page_cache_tree_delete(struct address_space *mapping,
+ 					&node->private_list);
+ 		}
+ 	}
++
++	if (shadow) {
++		mapping->nrexceptional += nr;
++		/*
++		 * Make sure the nrexceptional update is committed before
++		 * the nrpages update so that final truncate racing
++		 * with reclaim does not see both counters 0 at the
++		 * same time and miss a shadow entry.
++		 */
++		smp_wmb();
++	}
++	mapping->nrpages -= nr;
+ }
+ 
+ /*
+@@ -619,7 +633,6 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
+ 		__delete_from_page_cache(old, NULL);
+ 		error = page_cache_tree_insert(mapping, new, NULL);
+ 		BUG_ON(error);
+-		mapping->nrpages++;
+ 
+ 		/*
+ 		 * hugetlb pages do not participate in page cache accounting.
+@@ -1674,6 +1687,10 @@ static ssize_t do_generic_file_read(struct file *filp, loff_t *ppos,
+ 	unsigned int prev_offset;
+ 	int error = 0;
+ 
++	if (unlikely(*ppos >= inode->i_sb->s_maxbytes))
++		return -EINVAL;
++	iov_iter_truncate(iter, inode->i_sb->s_maxbytes);
++
+ 	index = *ppos >> PAGE_SHIFT;
+ 	prev_index = ra->prev_pos >> PAGE_SHIFT;
+ 	prev_offset = ra->prev_pos & (PAGE_SIZE-1);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 87e11d8ad536..603bdd01ec2c 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1443,13 +1443,14 @@ static void dissolve_free_huge_page(struct page *page)
+ {
+ 	spin_lock(&hugetlb_lock);
+ 	if (PageHuge(page) && !page_count(page)) {
+-		struct hstate *h = page_hstate(page);
+-		int nid = page_to_nid(page);
+-		list_del(&page->lru);
++		struct page *head = compound_head(page);
++		struct hstate *h = page_hstate(head);
++		int nid = page_to_nid(head);
++		list_del(&head->lru);
+ 		h->free_huge_pages--;
+ 		h->free_huge_pages_node[nid]--;
+ 		h->max_huge_pages--;
+-		update_and_free_page(h, page);
++		update_and_free_page(h, head);
+ 	}
+ 	spin_unlock(&hugetlb_lock);
+ }
+@@ -1457,7 +1458,8 @@ static void dissolve_free_huge_page(struct page *page)
+ /*
+  * Dissolve free hugepages in a given pfn range. Used by memory hotplug to
+  * make specified memory blocks removable from the system.
+- * Note that start_pfn should aligned with (minimum) hugepage size.
++ * Note that this will dissolve a free gigantic hugepage completely, if any
++ * part of it lies within the given range.
+  */
+ void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
+ {
+@@ -1466,7 +1468,6 @@ void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
+ 	if (!hugepages_supported())
+ 		return;
+ 
+-	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << minimum_order));
+ 	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
+ 		dissolve_free_huge_page(pfn_to_page(pfn));
+ }
+diff --git a/sound/soc/codecs/nau8825.c b/sound/soc/codecs/nau8825.c
+index 2e59a85e360b..ff566376da40 100644
+--- a/sound/soc/codecs/nau8825.c
++++ b/sound/soc/codecs/nau8825.c
+@@ -1907,7 +1907,7 @@ static int nau8825_calc_fll_param(unsigned int fll_in, unsigned int fs,
+ 	/* Calculate the FLL 10-bit integer input and the FLL 16-bit fractional
+ 	 * input based on FDCO, FREF and FLL ratio.
+ 	 */
+-	fvco = div_u64(fvco << 16, fref * fll_param->ratio);
++	fvco = div_u64(fvco_max << 16, fref * fll_param->ratio);
+ 	fll_param->fll_int = (fvco >> 16) & 0x3FF;
+ 	fll_param->fll_frac = fvco & 0xFFFF;
+ 	return 0;
+diff --git a/sound/soc/intel/atom/sst/sst_pvt.c b/sound/soc/intel/atom/sst/sst_pvt.c
+index adb32fefd693..b1e6b8f34a6a 100644
+--- a/sound/soc/intel/atom/sst/sst_pvt.c
++++ b/sound/soc/intel/atom/sst/sst_pvt.c
+@@ -279,17 +279,15 @@ int sst_prepare_and_post_msg(struct intel_sst_drv *sst,
+ 
+ 	if (response) {
+ 		ret = sst_wait_timeout(sst, block);
+-		if (ret < 0) {
++		if (ret < 0)
+ 			goto out;
+-		} else if(block->data) {
+-			if (!data)
+-				goto out;
+-			*data = kzalloc(block->size, GFP_KERNEL);
+-			if (!(*data)) {
++
++		if (data && block->data) {
++			*data = kmemdup(block->data, block->size, GFP_KERNEL);
++			if (!*data) {
+ 				ret = -ENOMEM;
+ 				goto out;
+-			} else
+-				memcpy(data, (void *) block->data, block->size);
++			}
+ 		}
+ 	}
+ out:



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-10-23 13:59 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-10-23 13:59 UTC (permalink / raw
  To: gentoo-commits

commit:     5f366798d7e0ab4865e24b35e1f3af8b438d9d4b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Oct 23 13:59:25 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Oct 23 13:59:25 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5f366798

BFQ patchset for 4.8 v7r11.

 0000_README                                        |   16 +
 ...oups-kconfig-build-bits-for-BFQ-v7r11-4.8.patch |  103 +
 ...ntroduce-the-BFQ-v7r11-I-O-sched-for-4.8.patch1 | 7110 +++++++++++++++++++
 ...arly-Queue-Merge-EQM-to-BFQ-v7r11-for-4.8.patch | 1101 +++
 ...-BFQ-v7r11-for-4.8.0-into-BFQ-v8r3-for-4.patch1 | 7392 ++++++++++++++++++++
 5 files changed, 15722 insertions(+)

diff --git a/0000_README b/0000_README
index 5a8b43e..c65f1ed 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,22 @@ Patch:  5000_enable-additional-cpu-optimizations-for-gcc.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch enables gcc < v4.9 optimizations for additional CPUs.
 
+Patch:  5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r11-4.8.patch
+From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc:   BFQ v7r11 patch 1 for 4.8: Build, cgroups and kconfig bits
+
+Patch:  5002_block-introduce-the-BFQ-v7r11-I-O-sched-for-4.8.patch1
+From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc:   BFQ v7r11 patch 2 for 4.8: BFQ Scheduler
+
+Patch:  5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r11-for-4.8.patch
+From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc:   BFQ v7r11 patch 3 for 4.8: Early Queue Merge (EQM)
+
+Patch:  5004_blkck-bfq-turn-BFQ-v7r11-for-4.8.0-into-BFQ-v8r3-for-4.patch1
+From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc:   BFQ v8r3 patch 4 for 4.8: Turn BFQ v7r11 into BFQ v8r3
+
 Patch:  5010_enable-additional-cpu-optimizations-for-gcc.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.

diff --git a/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r11-4.8.patch b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r11-4.8.patch
new file mode 100644
index 0000000..35cd1ce
--- /dev/null
+++ b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r11-4.8.patch
@@ -0,0 +1,103 @@
+From f2ebe596e7d72e96e0fb2be87be90f0b96e6f1b3 Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@unimore.it>
+Date: Tue, 7 Apr 2015 13:39:12 +0200
+Subject: [PATCH 1/4] block: cgroups, kconfig, build bits for BFQ-v7r11-4.8.0
+
+Update Kconfig.iosched and do the related Makefile changes to include
+kernel configuration options for BFQ. Also increase the number of
+policies supported by the blkio controller so that BFQ can add its
+own.
+
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini@google.com>
+---
+ block/Kconfig.iosched  | 32 ++++++++++++++++++++++++++++++++
+ block/Makefile         |  1 +
+ include/linux/blkdev.h |  2 +-
+ 3 files changed, 34 insertions(+), 1 deletion(-)
+
+diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
+index 421bef9..0ee5f0f 100644
+--- a/block/Kconfig.iosched
++++ b/block/Kconfig.iosched
+@@ -39,6 +39,27 @@ config CFQ_GROUP_IOSCHED
+ 	---help---
+ 	  Enable group IO scheduling in CFQ.
+ 
++config IOSCHED_BFQ
++	tristate "BFQ I/O scheduler"
++	default n
++	---help---
++	  The BFQ I/O scheduler tries to distribute bandwidth among
++	  all processes according to their weights.
++	  It aims at distributing the bandwidth as desired, independently of
++	  the disk parameters and with any workload. It also tries to
++	  guarantee low latency to interactive and soft real-time
++	  applications. If compiled built-in (saying Y here), BFQ can
++	  be configured to support hierarchical scheduling.
++
++config CGROUP_BFQIO
++	bool "BFQ hierarchical scheduling support"
++	depends on CGROUPS && IOSCHED_BFQ=y
++	default n
++	---help---
++	  Enable hierarchical scheduling in BFQ, using the cgroups
++	  filesystem interface.  The name of the subsystem will be
++	  bfqio.
++
+ choice
+ 	prompt "Default I/O scheduler"
+ 	default DEFAULT_CFQ
+@@ -52,6 +73,16 @@ choice
+ 	config DEFAULT_CFQ
+ 		bool "CFQ" if IOSCHED_CFQ=y
+ 
++	config DEFAULT_BFQ
++		bool "BFQ" if IOSCHED_BFQ=y
++		help
++		  Selects BFQ as the default I/O scheduler which will be
++		  used by default for all block devices.
++		  The BFQ I/O scheduler aims at distributing the bandwidth
++		  as desired, independently of the disk parameters and with
++		  any workload. It also tries to guarantee low latency to
++		  interactive and soft real-time applications.
++
+ 	config DEFAULT_NOOP
+ 		bool "No-op"
+ 
+@@ -61,6 +92,7 @@ config DEFAULT_IOSCHED
+ 	string
+ 	default "deadline" if DEFAULT_DEADLINE
+ 	default "cfq" if DEFAULT_CFQ
++	default "bfq" if DEFAULT_BFQ
+ 	default "noop" if DEFAULT_NOOP
+ 
+ endmenu
+diff --git a/block/Makefile b/block/Makefile
+index 9eda232..4a36683 100644
+--- a/block/Makefile
++++ b/block/Makefile
+@@ -18,6 +18,7 @@ obj-$(CONFIG_BLK_DEV_THROTTLING)	+= blk-throttle.o
+ obj-$(CONFIG_IOSCHED_NOOP)	+= noop-iosched.o
+ obj-$(CONFIG_IOSCHED_DEADLINE)	+= deadline-iosched.o
+ obj-$(CONFIG_IOSCHED_CFQ)	+= cfq-iosched.o
++obj-$(CONFIG_IOSCHED_BFQ)	+= bfq-iosched.o
+ 
+ obj-$(CONFIG_BLOCK_COMPAT)	+= compat_ioctl.o
+ obj-$(CONFIG_BLK_CMDLINE_PARSER)	+= cmdline-parser.o
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index e79055c..931ff1e 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -45,7 +45,7 @@ struct pr_ops;
+  * Maximum number of blkcg policies allowed to be registered concurrently.
+  * Defined here to simplify include dependency.
+  */
+-#define BLKCG_MAX_POLS		2
++#define BLKCG_MAX_POLS		3
+ 
+ typedef void (rq_end_io_fn)(struct request *, int);
+ 
+-- 
+2.7.4 (Apple Git-66)
+

diff --git a/5002_block-introduce-the-BFQ-v7r11-I-O-sched-for-4.8.patch1 b/5002_block-introduce-the-BFQ-v7r11-I-O-sched-for-4.8.patch1
new file mode 100644
index 0000000..7cc8ce1
--- /dev/null
+++ b/5002_block-introduce-the-BFQ-v7r11-I-O-sched-for-4.8.patch1
@@ -0,0 +1,7110 @@
+From d9af6fcc4167cbb8433b10bbf3663c8297487f52 Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@unimore.it>
+Date: Thu, 9 May 2013 19:10:02 +0200
+Subject: [PATCH 2/4] block: introduce the BFQ-v7r11 I/O sched, to be ported to
+ 4.8.0
+
+The general structure is borrowed from CFQ, as is much of the code for
+handling I/O contexts. Over time, several useful features have been
+ported from CFQ as well (details in the changelog in README.BFQ). A
+(bfq_)queue is associated to each task doing I/O on a device, and each
+time a scheduling decision has to be made a queue is selected and served
+until it expires.
+
+    - Slices are given in the service domain: tasks are assigned
+      budgets, measured in number of sectors. Once it gets the disk, a task
+      must however consume its assigned budget within a configurable
+      maximum time (by default, the maximum possible value of the
+      budgets is automatically computed to comply with this timeout).
+      This allows the desired latency vs "throughput boosting" tradeoff
+      to be set.
+
+    - Budgets are scheduled according to a variant of WF2Q+, implemented
+      using an augmented rb-tree to take eligibility into account while
+      preserving an O(log N) overall complexity.
+
+    - A low-latency tunable is provided; if enabled, both interactive
+      and soft real-time applications are guaranteed a very low latency.
+
+    - Latency guarantees are preserved also in the presence of NCQ.
+
+    - Also with flash-based devices, a high throughput is achieved
+      while still preserving latency guarantees.
+
+    - BFQ features Early Queue Merge (EQM), a sort of fusion of the
+      cooperating-queue-merging and the preemption mechanisms present
+      in CFQ. EQM is in fact a unified mechanism that tries to get a
+      sequential read pattern, and hence a high throughput, with any
+      set of processes performing interleaved I/O over a contiguous
+      sequence of sectors.
+
+    - BFQ supports full hierarchical scheduling, exporting a cgroups
+      interface.  Since each node has a full scheduler, each group can
+      be assigned its own weight.
+
+    - If the cgroups interface is not used, only I/O priorities can be
+      assigned to processes, with ioprio values mapped to weights
+      with the relation weight = IOPRIO_BE_NR - ioprio.
+
+    - ioprio classes are served in strict priority order, i.e., lower
+      priority queues are not served as long as there are higher
+      priority queues.  Among queues in the same class the bandwidth is
+      distributed in proportion to the weight of each queue. A very
+      thin extra bandwidth is however guaranteed to the Idle class, to
+      prevent it from starving.
+
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini@google.com>
+---
+ block/Kconfig.iosched |    6 +-
+ block/bfq-cgroup.c    | 1186 ++++++++++++++++
+ block/bfq-ioc.c       |   36 +
+ block/bfq-iosched.c   | 3763 +++++++++++++++++++++++++++++++++++++++++++++++++
+ block/bfq-sched.c     | 1199 ++++++++++++++++
+ block/bfq.h           |  801 +++++++++++
+ 6 files changed, 6987 insertions(+), 4 deletions(-)
+ create mode 100644 block/bfq-cgroup.c
+ create mode 100644 block/bfq-ioc.c
+ create mode 100644 block/bfq-iosched.c
+ create mode 100644 block/bfq-sched.c
+ create mode 100644 block/bfq.h
+
+diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
+index 0ee5f0f..f78cd1a 100644
+--- a/block/Kconfig.iosched
++++ b/block/Kconfig.iosched
+@@ -51,14 +51,12 @@ config IOSCHED_BFQ
+ 	  applications. If compiled built-in (saying Y here), BFQ can
+ 	  be configured to support hierarchical scheduling.
+ 
+-config CGROUP_BFQIO
++config BFQ_GROUP_IOSCHED
+ 	bool "BFQ hierarchical scheduling support"
+ 	depends on CGROUPS && IOSCHED_BFQ=y
+ 	default n
+ 	---help---
+-	  Enable hierarchical scheduling in BFQ, using the cgroups
+-	  filesystem interface.  The name of the subsystem will be
+-	  bfqio.
++	  Enable hierarchical scheduling in BFQ, using the blkio controller.
+ 
+ choice
+ 	prompt "Default I/O scheduler"
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+new file mode 100644
+index 0000000..8b08a57
+--- /dev/null
++++ b/block/bfq-cgroup.c
+@@ -0,0 +1,1186 @@
++/*
++ * BFQ: CGROUPS support.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
++ * file.
++ */
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++
++/* bfqg stats flags */
++enum bfqg_stats_flags {
++	BFQG_stats_waiting = 0,
++	BFQG_stats_idling,
++	BFQG_stats_empty,
++};
++
++#define BFQG_FLAG_FNS(name)						\
++static void bfqg_stats_mark_##name(struct bfqg_stats *stats)	\
++{									\
++	stats->flags |= (1 << BFQG_stats_##name);			\
++}									\
++static void bfqg_stats_clear_##name(struct bfqg_stats *stats)	\
++{									\
++	stats->flags &= ~(1 << BFQG_stats_##name);			\
++}									\
++static int bfqg_stats_##name(struct bfqg_stats *stats)		\
++{									\
++	return (stats->flags & (1 << BFQG_stats_##name)) != 0;		\
++}									\
++
++BFQG_FLAG_FNS(waiting)
++BFQG_FLAG_FNS(idling)
++BFQG_FLAG_FNS(empty)
++#undef BFQG_FLAG_FNS
++
++/* This should be called with the queue_lock held. */
++static void bfqg_stats_update_group_wait_time(struct bfqg_stats *stats)
++{
++	unsigned long long now;
++
++	if (!bfqg_stats_waiting(stats))
++		return;
++
++	now = sched_clock();
++	if (time_after64(now, stats->start_group_wait_time))
++		blkg_stat_add(&stats->group_wait_time,
++			      now - stats->start_group_wait_time);
++	bfqg_stats_clear_waiting(stats);
++}
++
++/* This should be called with the queue_lock held. */
++static void bfqg_stats_set_start_group_wait_time(struct bfq_group *bfqg,
++						 struct bfq_group *curr_bfqg)
++{
++	struct bfqg_stats *stats = &bfqg->stats;
++
++	if (bfqg_stats_waiting(stats))
++		return;
++	if (bfqg == curr_bfqg)
++		return;
++	stats->start_group_wait_time = sched_clock();
++	bfqg_stats_mark_waiting(stats);
++}
++
++/* This should be called with the queue_lock held. */
++static void bfqg_stats_end_empty_time(struct bfqg_stats *stats)
++{
++	unsigned long long now;
++
++	if (!bfqg_stats_empty(stats))
++		return;
++
++	now = sched_clock();
++	if (time_after64(now, stats->start_empty_time))
++		blkg_stat_add(&stats->empty_time,
++			      now - stats->start_empty_time);
++	bfqg_stats_clear_empty(stats);
++}
++
++static void bfqg_stats_update_dequeue(struct bfq_group *bfqg)
++{
++	blkg_stat_add(&bfqg->stats.dequeue, 1);
++}
++
++static void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg)
++{
++	struct bfqg_stats *stats = &bfqg->stats;
++
++	if (blkg_rwstat_total(&stats->queued))
++		return;
++
++	/*
++	 * group is already marked empty. This can happen if bfqq got new
++	 * request in parent group and moved to this group while being added
++	 * to service tree. Just ignore the event and move on.
++	 */
++	if (bfqg_stats_empty(stats))
++		return;
++
++	stats->start_empty_time = sched_clock();
++	bfqg_stats_mark_empty(stats);
++}
++
++static void bfqg_stats_update_idle_time(struct bfq_group *bfqg)
++{
++	struct bfqg_stats *stats = &bfqg->stats;
++
++	if (bfqg_stats_idling(stats)) {
++		unsigned long long now = sched_clock();
++
++		if (time_after64(now, stats->start_idle_time))
++			blkg_stat_add(&stats->idle_time,
++				      now - stats->start_idle_time);
++		bfqg_stats_clear_idling(stats);
++	}
++}
++
++static void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg)
++{
++	struct bfqg_stats *stats = &bfqg->stats;
++
++	stats->start_idle_time = sched_clock();
++	bfqg_stats_mark_idling(stats);
++}
++
++static void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg)
++{
++	struct bfqg_stats *stats = &bfqg->stats;
++
++	blkg_stat_add(&stats->avg_queue_size_sum,
++		      blkg_rwstat_total(&stats->queued));
++	blkg_stat_add(&stats->avg_queue_size_samples, 1);
++	bfqg_stats_update_group_wait_time(stats);
++}
++
++static struct blkcg_policy blkcg_policy_bfq;
++
++/*
++ * blk-cgroup policy-related handlers
++ * The following functions help in converting between blk-cgroup
++ * internal structures and BFQ-specific structures.
++ */
++
++static struct bfq_group *pd_to_bfqg(struct blkg_policy_data *pd)
++{
++	return pd ? container_of(pd, struct bfq_group, pd) : NULL;
++}
++
++static struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg)
++{
++	return pd_to_blkg(&bfqg->pd);
++}
++
++static struct bfq_group *blkg_to_bfqg(struct blkcg_gq *blkg)
++{
++	struct blkg_policy_data *pd = blkg_to_pd(blkg, &blkcg_policy_bfq);
++
++	BUG_ON(!pd);
++
++	return pd_to_bfqg(pd);
++}
++
++/*
++ * bfq_group handlers
++ * The following functions help in navigating the bfq_group hierarchy
++ * by allowing to find the parent of a bfq_group or the bfq_group
++ * associated to a bfq_queue.
++ */
++
++static struct bfq_group *bfqg_parent(struct bfq_group *bfqg)
++{
++	struct blkcg_gq *pblkg = bfqg_to_blkg(bfqg)->parent;
++
++	return pblkg ? blkg_to_bfqg(pblkg) : NULL;
++}
++
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq)
++{
++	struct bfq_entity *group_entity = bfqq->entity.parent;
++
++	return group_entity ? container_of(group_entity, struct bfq_group,
++					   entity) :
++			      bfqq->bfqd->root_group;
++}
++
++/*
++ * The following two functions handle get and put of a bfq_group by
++ * wrapping the related blk-cgroup hooks.
++ */
++
++static void bfqg_get(struct bfq_group *bfqg)
++{
++	return blkg_get(bfqg_to_blkg(bfqg));
++}
++
++static void bfqg_put(struct bfq_group *bfqg)
++{
++	return blkg_put(bfqg_to_blkg(bfqg));
++}
++
++static void bfqg_stats_update_io_add(struct bfq_group *bfqg,
++				     struct bfq_queue *bfqq,
++				     int rw)
++{
++	blkg_rwstat_add(&bfqg->stats.queued, rw, 1);
++	bfqg_stats_end_empty_time(&bfqg->stats);
++	if (!(bfqq == ((struct bfq_data *)bfqg->bfqd)->in_service_queue))
++		bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq));
++}
++
++static void bfqg_stats_update_io_remove(struct bfq_group *bfqg, int rw)
++{
++	blkg_rwstat_add(&bfqg->stats.queued, rw, -1);
++}
++
++static void bfqg_stats_update_io_merged(struct bfq_group *bfqg, int rw)
++{
++	blkg_rwstat_add(&bfqg->stats.merged, rw, 1);
++}
++
++static void bfqg_stats_update_dispatch(struct bfq_group *bfqg,
++					      uint64_t bytes, int rw)
++{
++	blkg_stat_add(&bfqg->stats.sectors, bytes >> 9);
++	blkg_rwstat_add(&bfqg->stats.serviced, rw, 1);
++	blkg_rwstat_add(&bfqg->stats.service_bytes, rw, bytes);
++}
++
++static void bfqg_stats_update_completion(struct bfq_group *bfqg,
++			uint64_t start_time, uint64_t io_start_time, int rw)
++{
++	struct bfqg_stats *stats = &bfqg->stats;
++	unsigned long long now = sched_clock();
++
++	if (time_after64(now, io_start_time))
++		blkg_rwstat_add(&stats->service_time, rw, now - io_start_time);
++	if (time_after64(io_start_time, start_time))
++		blkg_rwstat_add(&stats->wait_time, rw,
++				io_start_time - start_time);
++}
++
++/* @stats = 0 */
++static void bfqg_stats_reset(struct bfqg_stats *stats)
++{
++	if (!stats)
++		return;
++
++	/* queued stats shouldn't be cleared */
++	blkg_rwstat_reset(&stats->service_bytes);
++	blkg_rwstat_reset(&stats->serviced);
++	blkg_rwstat_reset(&stats->merged);
++	blkg_rwstat_reset(&stats->service_time);
++	blkg_rwstat_reset(&stats->wait_time);
++	blkg_stat_reset(&stats->time);
++	blkg_stat_reset(&stats->unaccounted_time);
++	blkg_stat_reset(&stats->avg_queue_size_sum);
++	blkg_stat_reset(&stats->avg_queue_size_samples);
++	blkg_stat_reset(&stats->dequeue);
++	blkg_stat_reset(&stats->group_wait_time);
++	blkg_stat_reset(&stats->idle_time);
++	blkg_stat_reset(&stats->empty_time);
++}
++
++/* @to += @from */
++static void bfqg_stats_merge(struct bfqg_stats *to, struct bfqg_stats *from)
++{
++	if (!to || !from)
++		return;
++
++	/* queued stats shouldn't be cleared */
++	blkg_rwstat_add_aux(&to->service_bytes, &from->service_bytes);
++	blkg_rwstat_add_aux(&to->serviced, &from->serviced);
++	blkg_rwstat_add_aux(&to->merged, &from->merged);
++	blkg_rwstat_add_aux(&to->service_time, &from->service_time);
++	blkg_rwstat_add_aux(&to->wait_time, &from->wait_time);
++	blkg_stat_add_aux(&from->time, &from->time);
++	blkg_stat_add_aux(&to->unaccounted_time, &from->unaccounted_time);
++	blkg_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum);
++	blkg_stat_add_aux(&to->avg_queue_size_samples,
++			  &from->avg_queue_size_samples);
++	blkg_stat_add_aux(&to->dequeue, &from->dequeue);
++	blkg_stat_add_aux(&to->group_wait_time, &from->group_wait_time);
++	blkg_stat_add_aux(&to->idle_time, &from->idle_time);
++	blkg_stat_add_aux(&to->empty_time, &from->empty_time);
++}
++
++/*
++ * Transfer @bfqg's stats to its parent's dead_stats so that the ancestors'
++ * recursive stats can still account for the amount used by this bfqg after
++ * it's gone.
++ */
++static void bfqg_stats_xfer_dead(struct bfq_group *bfqg)
++{
++	struct bfq_group *parent;
++
++	if (!bfqg) /* root_group */
++		return;
++
++	parent = bfqg_parent(bfqg);
++
++	lockdep_assert_held(bfqg_to_blkg(bfqg)->q->queue_lock);
++
++	if (unlikely(!parent))
++		return;
++
++	bfqg_stats_merge(&parent->dead_stats, &bfqg->stats);
++	bfqg_stats_merge(&parent->dead_stats, &bfqg->dead_stats);
++	bfqg_stats_reset(&bfqg->stats);
++	bfqg_stats_reset(&bfqg->dead_stats);
++}
++
++static void bfq_init_entity(struct bfq_entity *entity,
++			    struct bfq_group *bfqg)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	entity->weight = entity->new_weight;
++	entity->orig_weight = entity->new_weight;
++	if (bfqq) {
++		bfqq->ioprio = bfqq->new_ioprio;
++		bfqq->ioprio_class = bfqq->new_ioprio_class;
++		bfqg_get(bfqg);
++	}
++	entity->parent = bfqg->my_entity;
++	entity->sched_data = &bfqg->sched_data;
++}
++
++static void bfqg_stats_exit(struct bfqg_stats *stats)
++{
++	blkg_rwstat_exit(&stats->service_bytes);
++	blkg_rwstat_exit(&stats->serviced);
++	blkg_rwstat_exit(&stats->merged);
++	blkg_rwstat_exit(&stats->service_time);
++	blkg_rwstat_exit(&stats->wait_time);
++	blkg_rwstat_exit(&stats->queued);
++	blkg_stat_exit(&stats->sectors);
++	blkg_stat_exit(&stats->time);
++	blkg_stat_exit(&stats->unaccounted_time);
++	blkg_stat_exit(&stats->avg_queue_size_sum);
++	blkg_stat_exit(&stats->avg_queue_size_samples);
++	blkg_stat_exit(&stats->dequeue);
++	blkg_stat_exit(&stats->group_wait_time);
++	blkg_stat_exit(&stats->idle_time);
++	blkg_stat_exit(&stats->empty_time);
++}
++
++static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
++{
++	if (blkg_rwstat_init(&stats->service_bytes, gfp) ||
++	    blkg_rwstat_init(&stats->serviced, gfp) ||
++	    blkg_rwstat_init(&stats->merged, gfp) ||
++	    blkg_rwstat_init(&stats->service_time, gfp) ||
++	    blkg_rwstat_init(&stats->wait_time, gfp) ||
++	    blkg_rwstat_init(&stats->queued, gfp) ||
++	    blkg_stat_init(&stats->sectors, gfp) ||
++	    blkg_stat_init(&stats->time, gfp) ||
++	    blkg_stat_init(&stats->unaccounted_time, gfp) ||
++	    blkg_stat_init(&stats->avg_queue_size_sum, gfp) ||
++	    blkg_stat_init(&stats->avg_queue_size_samples, gfp) ||
++	    blkg_stat_init(&stats->dequeue, gfp) ||
++	    blkg_stat_init(&stats->group_wait_time, gfp) ||
++	    blkg_stat_init(&stats->idle_time, gfp) ||
++	    blkg_stat_init(&stats->empty_time, gfp)) {
++		bfqg_stats_exit(stats);
++		return -ENOMEM;
++	}
++
++	return 0;
++}
++
++static struct bfq_group_data *cpd_to_bfqgd(struct blkcg_policy_data *cpd)
++{
++	return cpd ? container_of(cpd, struct bfq_group_data, pd) : NULL;
++}
++
++static struct bfq_group_data *blkcg_to_bfqgd(struct blkcg *blkcg)
++{
++	return cpd_to_bfqgd(blkcg_to_cpd(blkcg, &blkcg_policy_bfq));
++}
++
++static void bfq_cpd_init(struct blkcg_policy_data *cpd)
++{
++	struct bfq_group_data *d = cpd_to_bfqgd(cpd);
++
++	d->weight = BFQ_DEFAULT_GRP_WEIGHT;
++}
++
++static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
++{
++	struct bfq_group *bfqg;
++
++	bfqg = kzalloc_node(sizeof(*bfqg), gfp, node);
++	if (!bfqg)
++		return NULL;
++
++	if (bfqg_stats_init(&bfqg->stats, gfp) ||
++	    bfqg_stats_init(&bfqg->dead_stats, gfp)) {
++		kfree(bfqg);
++		return NULL;
++	}
++
++	return &bfqg->pd;
++}
++
++static void bfq_group_set_parent(struct bfq_group *bfqg,
++					struct bfq_group *parent)
++{
++	struct bfq_entity *entity;
++
++	BUG_ON(!parent);
++	BUG_ON(!bfqg);
++	BUG_ON(bfqg == parent);
++
++	entity = &bfqg->entity;
++	entity->parent = parent->my_entity;
++	entity->sched_data = &parent->sched_data;
++}
++
++static void bfq_pd_init(struct blkg_policy_data *pd)
++{
++	struct blkcg_gq *blkg = pd_to_blkg(pd);
++	struct bfq_group *bfqg = blkg_to_bfqg(blkg);
++	struct bfq_data *bfqd = blkg->q->elevator->elevator_data;
++	struct bfq_entity *entity = &bfqg->entity;
++	struct bfq_group_data *d = blkcg_to_bfqgd(blkg->blkcg);
++
++	entity->orig_weight = entity->weight = entity->new_weight = d->weight;
++	entity->my_sched_data = &bfqg->sched_data;
++	bfqg->my_entity = entity; /*
++				   * the root_group's will be set to NULL
++				   * in bfq_init_queue()
++				   */
++	bfqg->bfqd = bfqd;
++	bfqg->active_entities = 0;
++}
++
++static void bfq_pd_free(struct blkg_policy_data *pd)
++{
++	struct bfq_group *bfqg = pd_to_bfqg(pd);
++
++	bfqg_stats_exit(&bfqg->stats);
++	bfqg_stats_exit(&bfqg->dead_stats);
++
++	kfree(bfqg);
++}
++
++/* offset delta from bfqg->stats to bfqg->dead_stats */
++static const int dead_stats_off_delta = offsetof(struct bfq_group, dead_stats) -
++					offsetof(struct bfq_group, stats);
++
++/* to be used by recursive prfill, sums live and dead stats recursively */
++static u64 bfqg_stat_pd_recursive_sum(struct blkg_policy_data *pd, int off)
++{
++	u64 sum = 0;
++
++	sum += blkg_stat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq, off);
++	sum += blkg_stat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq,
++				       off + dead_stats_off_delta);
++	return sum;
++}
++
++/* to be used by recursive prfill, sums live and dead rwstats recursively */
++static struct blkg_rwstat
++bfqg_rwstat_pd_recursive_sum(struct blkg_policy_data *pd, int off)
++{
++	struct blkg_rwstat a, b;
++
++	a = blkg_rwstat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq, off);
++	b = blkg_rwstat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq,
++				      off + dead_stats_off_delta);
++	blkg_rwstat_add_aux(&a, &b);
++	return a;
++}
++
++static void bfq_pd_reset_stats(struct blkg_policy_data *pd)
++{
++	struct bfq_group *bfqg = pd_to_bfqg(pd);
++
++	bfqg_stats_reset(&bfqg->stats);
++	bfqg_stats_reset(&bfqg->dead_stats);
++}
++
++static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
++					      struct blkcg *blkcg)
++{
++	struct request_queue *q = bfqd->queue;
++	struct bfq_group *bfqg = NULL, *parent;
++	struct bfq_entity *entity = NULL;
++
++	assert_spin_locked(bfqd->queue->queue_lock);
++
++	/* avoid lookup for the common case where there's no blkcg */
++	if (blkcg == &blkcg_root) {
++		bfqg = bfqd->root_group;
++	} else {
++		struct blkcg_gq *blkg;
++
++		blkg = blkg_lookup_create(blkcg, q);
++		if (!IS_ERR(blkg))
++			bfqg = blkg_to_bfqg(blkg);
++		else /* fallback to root_group */
++			bfqg = bfqd->root_group;
++	}
++
++	BUG_ON(!bfqg);
++
++	/*
++	 * Update chain of bfq_groups as we might be handling a leaf group
++	 * which, along with some of its relatives, has not been hooked yet
++	 * to the private hierarchy of BFQ.
++	 */
++	entity = &bfqg->entity;
++	for_each_entity(entity) {
++		bfqg = container_of(entity, struct bfq_group, entity);
++		BUG_ON(!bfqg);
++		if (bfqg != bfqd->root_group) {
++			parent = bfqg_parent(bfqg);
++			if (!parent)
++				parent = bfqd->root_group;
++			BUG_ON(!parent);
++			bfq_group_set_parent(bfqg, parent);
++		}
++	}
++
++	return bfqg;
++}
++
++/**
++ * bfq_bfqq_move - migrate @bfqq to @bfqg.
++ * @bfqd: queue descriptor.
++ * @bfqq: the queue to move.
++ * @entity: @bfqq's entity.
++ * @bfqg: the group to move to.
++ *
++ * Move @bfqq to @bfqg, deactivating it from its old group and reactivating
++ * it on the new one.  Avoid putting the entity on the old group idle tree.
++ *
++ * Must be called under the queue lock; the cgroup owning @bfqg must
++ * not disappear (by now this just means that we are called under
++ * rcu_read_lock()).
++ */
++static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			  struct bfq_entity *entity, struct bfq_group *bfqg)
++{
++	int busy, resume;
++
++	busy = bfq_bfqq_busy(bfqq);
++	resume = !RB_EMPTY_ROOT(&bfqq->sort_list);
++
++	BUG_ON(resume && !entity->on_st);
++	BUG_ON(busy && !resume && entity->on_st &&
++	       bfqq != bfqd->in_service_queue);
++
++	if (busy) {
++		BUG_ON(atomic_read(&bfqq->ref) < 2);
++
++		if (!resume)
++			bfq_del_bfqq_busy(bfqd, bfqq, 0);
++		else
++			bfq_deactivate_bfqq(bfqd, bfqq, 0);
++	} else if (entity->on_st)
++		bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
++	bfqg_put(bfqq_group(bfqq));
++
++	/*
++	 * Here we use a reference to bfqg.  We don't need a refcounter
++	 * as the cgroup reference will not be dropped, so that its
++	 * destroy() callback will not be invoked.
++	 */
++	entity->parent = bfqg->my_entity;
++	entity->sched_data = &bfqg->sched_data;
++	bfqg_get(bfqg);
++
++	if (busy) {
++		if (resume)
++			bfq_activate_bfqq(bfqd, bfqq);
++	}
++
++	if (!bfqd->in_service_queue && !bfqd->rq_in_driver)
++		bfq_schedule_dispatch(bfqd);
++}
++
++/**
++ * __bfq_bic_change_cgroup - move @bic to @cgroup.
++ * @bfqd: the queue descriptor.
++ * @bic: the bic to move.
++ * @blkcg: the blk-cgroup to move to.
++ *
++ * Move bic to blkcg, assuming that bfqd->queue is locked; the caller
++ * has to make sure that the reference to cgroup is valid across the call.
++ *
++ * NOTE: an alternative approach might have been to store the current
++ * cgroup in bfqq and getting a reference to it, reducing the lookup
++ * time here, at the price of slightly more complex code.
++ */
++static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
++						struct bfq_io_cq *bic,
++						struct blkcg *blkcg)
++{
++	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
++	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
++	struct bfq_group *bfqg;
++	struct bfq_entity *entity;
++
++	lockdep_assert_held(bfqd->queue->queue_lock);
++
++	bfqg = bfq_find_alloc_group(bfqd, blkcg);
++	if (async_bfqq) {
++		entity = &async_bfqq->entity;
++
++		if (entity->sched_data != &bfqg->sched_data) {
++			bic_set_bfqq(bic, NULL, 0);
++			bfq_log_bfqq(bfqd, async_bfqq,
++				     "bic_change_group: %p %d",
++				     async_bfqq, atomic_read(&async_bfqq->ref));
++			bfq_put_queue(async_bfqq);
++		}
++	}
++
++	if (sync_bfqq) {
++		entity = &sync_bfqq->entity;
++		if (entity->sched_data != &bfqg->sched_data)
++			bfq_bfqq_move(bfqd, sync_bfqq, entity, bfqg);
++	}
++
++	return bfqg;
++}
++
++static void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
++{
++	struct bfq_data *bfqd = bic_to_bfqd(bic);
++	struct blkcg *blkcg;
++	struct bfq_group *bfqg = NULL;
++	uint64_t id;
++
++	rcu_read_lock();
++	blkcg = bio_blkcg(bio);
++	id = blkcg->css.serial_nr;
++	rcu_read_unlock();
++
++	/*
++	 * Check whether blkcg has changed.  The condition may trigger
++	 * spuriously on a newly created cic but there's no harm.
++	 */
++	if (unlikely(!bfqd) || likely(bic->blkcg_id == id))
++		return;
++
++	bfqg = __bfq_bic_change_cgroup(bfqd, bic, blkcg);
++	BUG_ON(!bfqg);
++	bic->blkcg_id = id;
++}
++
++/**
++ * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
++ * @st: the service tree being flushed.
++ */
++static void bfq_flush_idle_tree(struct bfq_service_tree *st)
++{
++	struct bfq_entity *entity = st->first_idle;
++
++	for (; entity ; entity = st->first_idle)
++		__bfq_deactivate_entity(entity, 0);
++}
++
++/**
++ * bfq_reparent_leaf_entity - move leaf entity to the root_group.
++ * @bfqd: the device data structure with the root group.
++ * @entity: the entity to move.
++ */
++static void bfq_reparent_leaf_entity(struct bfq_data *bfqd,
++				     struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	BUG_ON(!bfqq);
++	bfq_bfqq_move(bfqd, bfqq, entity, bfqd->root_group);
++}
++
++/**
++ * bfq_reparent_active_entities - move to the root group all active
++ *                                entities.
++ * @bfqd: the device data structure with the root group.
++ * @bfqg: the group to move from.
++ * @st: the service tree with the entities.
++ *
++ * Needs queue_lock to be taken and reference to be valid over the call.
++ */
++static void bfq_reparent_active_entities(struct bfq_data *bfqd,
++					 struct bfq_group *bfqg,
++					 struct bfq_service_tree *st)
++{
++	struct rb_root *active = &st->active;
++	struct bfq_entity *entity = NULL;
++
++	if (!RB_EMPTY_ROOT(&st->active))
++		entity = bfq_entity_of(rb_first(active));
++
++	for (; entity ; entity = bfq_entity_of(rb_first(active)))
++		bfq_reparent_leaf_entity(bfqd, entity);
++
++	if (bfqg->sched_data.in_service_entity)
++		bfq_reparent_leaf_entity(bfqd,
++			bfqg->sched_data.in_service_entity);
++}
++
++/**
++ * bfq_pd_offline - deactivate the group represented by @pd.
++ * @pd: the policy data of the group being offlined.
++ *
++ * Make sure that the group behind @pd is no longer referenced from its
++ * parent.  blkio already grabs the queue_lock for us, so there is no need
++ * to use RCU-based magic.
++ */
++static void bfq_pd_offline(struct blkg_policy_data *pd)
++{
++	struct bfq_service_tree *st;
++	struct bfq_group *bfqg;
++	struct bfq_data *bfqd;
++	struct bfq_entity *entity;
++	int i;
++
++	BUG_ON(!pd);
++	bfqg = pd_to_bfqg(pd);
++	BUG_ON(!bfqg);
++	bfqd = bfqg->bfqd;
++	BUG_ON(bfqd && !bfqd->root_group);
++
++	entity = bfqg->my_entity;
++
++	if (!entity) /* root group */
++		return;
++
++	/*
++	 * Empty all service_trees belonging to this group before
++	 * deactivating the group itself.
++	 */
++	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) {
++		BUG_ON(!bfqg->sched_data.service_tree);
++		st = bfqg->sched_data.service_tree + i;
++		/*
++		 * The idle tree may still contain bfq_queues belonging
++		 * to exited tasks because they never migrated to a different
++		 * cgroup from the one being destroyed now.  No one else
++		 * can access them so it's safe to act without any lock.
++		 */
++		bfq_flush_idle_tree(st);
++
++		/*
++		 * It may happen that some queues are still active
++		 * (busy) upon group destruction (if the corresponding
++		 * processes have been forced to terminate). We move
++		 * all the leaf entities corresponding to these queues
++		 * to the root_group.
++		 * Also, it may happen that the group has an entity
++		 * in service, which is disconnected from the active
++		 * tree: it must be moved, too.
++		 * There is no need to put the sync queues, as the
++		 * scheduler has taken no reference.
++		 */
++		bfq_reparent_active_entities(bfqd, bfqg, st);
++		BUG_ON(!RB_EMPTY_ROOT(&st->active));
++		BUG_ON(!RB_EMPTY_ROOT(&st->idle));
++	}
++	BUG_ON(bfqg->sched_data.next_in_service);
++	BUG_ON(bfqg->sched_data.in_service_entity);
++
++	__bfq_deactivate_entity(entity, 0);
++	bfq_put_async_queues(bfqd, bfqg);
++	BUG_ON(entity->tree);
++
++	bfqg_stats_xfer_dead(bfqg);
++}
++
++static void bfq_end_wr_async(struct bfq_data *bfqd)
++{
++	struct blkcg_gq *blkg;
++
++	list_for_each_entry(blkg, &bfqd->queue->blkg_list, q_node) {
++		struct bfq_group *bfqg = blkg_to_bfqg(blkg);
++
++		bfq_end_wr_async_queues(bfqd, bfqg);
++	}
++	bfq_end_wr_async_queues(bfqd, bfqd->root_group);
++}
++
++static u64 bfqio_cgroup_weight_read(struct cgroup_subsys_state *css,
++				       struct cftype *cftype)
++{
++	struct blkcg *blkcg = css_to_blkcg(css);
++	struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
++	int ret = -EINVAL;
++
++	spin_lock_irq(&blkcg->lock);
++	ret = bfqgd->weight;
++	spin_unlock_irq(&blkcg->lock);
++
++	return ret;
++}
++
++static int bfqio_cgroup_weight_read_dfl(struct seq_file *sf, void *v)
++{
++	struct blkcg *blkcg = css_to_blkcg(seq_css(sf));
++	struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
++
++	spin_lock_irq(&blkcg->lock);
++	seq_printf(sf, "%u\n", bfqgd->weight);
++	spin_unlock_irq(&blkcg->lock);
++
++	return 0;
++}
++
++static int bfqio_cgroup_weight_write(struct cgroup_subsys_state *css,
++					struct cftype *cftype,
++					u64 val)
++{
++	struct blkcg *blkcg = css_to_blkcg(css);
++	struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
++	struct blkcg_gq *blkg;
++	int ret = -EINVAL;
++
++	if (val < BFQ_MIN_WEIGHT || val > BFQ_MAX_WEIGHT)
++		return ret;
++
++	ret = 0;
++	spin_lock_irq(&blkcg->lock);
++	bfqgd->weight = (unsigned short)val;
++	hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
++		struct bfq_group *bfqg = blkg_to_bfqg(blkg);
++
++		if (!bfqg)
++			continue;
++		/*
++		 * Setting the prio_changed flag of the entity
++		 * to 1 with new_weight == weight would re-set
++		 * the value of the weight to its ioprio mapping.
++		 * Set the flag only if necessary.
++		 */
++		if ((unsigned short)val != bfqg->entity.new_weight) {
++			bfqg->entity.new_weight = (unsigned short)val;
++			/*
++			 * Make sure that the above new value has been
++			 * stored in bfqg->entity.new_weight before
++			 * setting the prio_changed flag. In fact,
++			 * this flag may be read asynchronously (in
++			 * critical sections protected by a different
++			 * lock than that held here), and finding this
++			 * flag set may cause the execution of the code
++			 * for updating parameters whose value may
++			 * depend also on bfqg->entity.new_weight (in
++			 * __bfq_entity_update_weight_prio).
++			 * This barrier makes sure that the new value
++			 * of bfqg->entity.new_weight is correctly
++			 * seen in that code.
++			 */
++			smp_wmb();
++			bfqg->entity.prio_changed = 1;
++		}
++	}
++	spin_unlock_irq(&blkcg->lock);
++
++	return ret;
++}
++
++static ssize_t bfqio_cgroup_weight_write_dfl(struct kernfs_open_file *of,
++					     char *buf, size_t nbytes,
++					     loff_t off)
++{
++	/* First unsigned long found in the file is used */
++	return bfqio_cgroup_weight_write(of_css(of), NULL,
++					 simple_strtoull(strim(buf), NULL, 0));
++}
++
++static int bfqg_print_stat(struct seq_file *sf, void *v)
++{
++	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_stat,
++			  &blkcg_policy_bfq, seq_cft(sf)->private, false);
++	return 0;
++}
++
++static int bfqg_print_rwstat(struct seq_file *sf, void *v)
++{
++	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_rwstat,
++			  &blkcg_policy_bfq, seq_cft(sf)->private, true);
++	return 0;
++}
++
++static u64 bfqg_prfill_stat_recursive(struct seq_file *sf,
++				      struct blkg_policy_data *pd, int off)
++{
++	u64 sum = bfqg_stat_pd_recursive_sum(pd, off);
++
++	return __blkg_prfill_u64(sf, pd, sum);
++}
++
++static u64 bfqg_prfill_rwstat_recursive(struct seq_file *sf,
++					struct blkg_policy_data *pd, int off)
++{
++	struct blkg_rwstat sum = bfqg_rwstat_pd_recursive_sum(pd, off);
++
++	return __blkg_prfill_rwstat(sf, pd, &sum);
++}
++
++static int bfqg_print_stat_recursive(struct seq_file *sf, void *v)
++{
++	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)),
++			  bfqg_prfill_stat_recursive, &blkcg_policy_bfq,
++			  seq_cft(sf)->private, false);
++	return 0;
++}
++
++static int bfqg_print_rwstat_recursive(struct seq_file *sf, void *v)
++{
++	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)),
++			  bfqg_prfill_rwstat_recursive, &blkcg_policy_bfq,
++			  seq_cft(sf)->private, true);
++	return 0;
++}
++
++static u64 bfqg_prfill_avg_queue_size(struct seq_file *sf,
++				      struct blkg_policy_data *pd, int off)
++{
++	struct bfq_group *bfqg = pd_to_bfqg(pd);
++	u64 samples = blkg_stat_read(&bfqg->stats.avg_queue_size_samples);
++	u64 v = 0;
++
++	if (samples) {
++		v = blkg_stat_read(&bfqg->stats.avg_queue_size_sum);
++		v = div64_u64(v, samples);
++	}
++	__blkg_prfill_u64(sf, pd, v);
++	return 0;
++}
++
++/* print avg_queue_size */
++static int bfqg_print_avg_queue_size(struct seq_file *sf, void *v)
++{
++	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)),
++			  bfqg_prfill_avg_queue_size, &blkcg_policy_bfq,
++			  0, false);
++	return 0;
++}
++
++static struct bfq_group *
++bfq_create_group_hierarchy(struct bfq_data *bfqd, int node)
++{
++	int ret;
++
++	ret = blkcg_activate_policy(bfqd->queue, &blkcg_policy_bfq);
++	if (ret)
++		return NULL;
++
++	return blkg_to_bfqg(bfqd->queue->root_blkg);
++}
++
++static struct blkcg_policy_data *bfq_cpd_alloc(gfp_t gfp)
++{
++	struct bfq_group_data *bgd;
++
++	bgd = kzalloc(sizeof(*bgd), GFP_KERNEL);
++	if (!bgd)
++		return NULL;
++	return &bgd->pd;
++}
++
++static void bfq_cpd_free(struct blkcg_policy_data *cpd)
++{
++	kfree(cpd_to_bfqgd(cpd));
++}
++
++static struct cftype bfqio_files_dfl[] = {
++	{
++		.name = "weight",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = bfqio_cgroup_weight_read_dfl,
++		.write = bfqio_cgroup_weight_write_dfl,
++	},
++	{} /* terminate */
++};
++
++static struct cftype bfqio_files[] = {
++	{
++		.name = "bfq.weight",
++		.read_u64 = bfqio_cgroup_weight_read,
++		.write_u64 = bfqio_cgroup_weight_write,
++	},
++	/* statistics, cover only the tasks in the bfqg */
++	{
++		.name = "bfq.time",
++		.private = offsetof(struct bfq_group, stats.time),
++		.seq_show = bfqg_print_stat,
++	},
++	{
++		.name = "bfq.sectors",
++		.private = offsetof(struct bfq_group, stats.sectors),
++		.seq_show = bfqg_print_stat,
++	},
++	{
++		.name = "bfq.io_service_bytes",
++		.private = offsetof(struct bfq_group, stats.service_bytes),
++		.seq_show = bfqg_print_rwstat,
++	},
++	{
++		.name = "bfq.io_serviced",
++		.private = offsetof(struct bfq_group, stats.serviced),
++		.seq_show = bfqg_print_rwstat,
++	},
++	{
++		.name = "bfq.io_service_time",
++		.private = offsetof(struct bfq_group, stats.service_time),
++		.seq_show = bfqg_print_rwstat,
++	},
++	{
++		.name = "bfq.io_wait_time",
++		.private = offsetof(struct bfq_group, stats.wait_time),
++		.seq_show = bfqg_print_rwstat,
++	},
++	{
++		.name = "bfq.io_merged",
++		.private = offsetof(struct bfq_group, stats.merged),
++		.seq_show = bfqg_print_rwstat,
++	},
++	{
++		.name = "bfq.io_queued",
++		.private = offsetof(struct bfq_group, stats.queued),
++		.seq_show = bfqg_print_rwstat,
++	},
++
++	/* the same statistics, covering the bfqg and its descendants */
++	{
++		.name = "bfq.time_recursive",
++		.private = offsetof(struct bfq_group, stats.time),
++		.seq_show = bfqg_print_stat_recursive,
++	},
++	{
++		.name = "bfq.sectors_recursive",
++		.private = offsetof(struct bfq_group, stats.sectors),
++		.seq_show = bfqg_print_stat_recursive,
++	},
++	{
++		.name = "bfq.io_service_bytes_recursive",
++		.private = offsetof(struct bfq_group, stats.service_bytes),
++		.seq_show = bfqg_print_rwstat_recursive,
++	},
++	{
++		.name = "bfq.io_serviced_recursive",
++		.private = offsetof(struct bfq_group, stats.serviced),
++		.seq_show = bfqg_print_rwstat_recursive,
++	},
++	{
++		.name = "bfq.io_service_time_recursive",
++		.private = offsetof(struct bfq_group, stats.service_time),
++		.seq_show = bfqg_print_rwstat_recursive,
++	},
++	{
++		.name = "bfq.io_wait_time_recursive",
++		.private = offsetof(struct bfq_group, stats.wait_time),
++		.seq_show = bfqg_print_rwstat_recursive,
++	},
++	{
++		.name = "bfq.io_merged_recursive",
++		.private = offsetof(struct bfq_group, stats.merged),
++		.seq_show = bfqg_print_rwstat_recursive,
++	},
++	{
++		.name = "bfq.io_queued_recursive",
++		.private = offsetof(struct bfq_group, stats.queued),
++		.seq_show = bfqg_print_rwstat_recursive,
++	},
++	{
++		.name = "bfq.avg_queue_size",
++		.seq_show = bfqg_print_avg_queue_size,
++	},
++	{
++		.name = "bfq.group_wait_time",
++		.private = offsetof(struct bfq_group, stats.group_wait_time),
++		.seq_show = bfqg_print_stat,
++	},
++	{
++		.name = "bfq.idle_time",
++		.private = offsetof(struct bfq_group, stats.idle_time),
++		.seq_show = bfqg_print_stat,
++	},
++	{
++		.name = "bfq.empty_time",
++		.private = offsetof(struct bfq_group, stats.empty_time),
++		.seq_show = bfqg_print_stat,
++	},
++	{
++		.name = "bfq.dequeue",
++		.private = offsetof(struct bfq_group, stats.dequeue),
++		.seq_show = bfqg_print_stat,
++	},
++	{
++		.name = "bfq.unaccounted_time",
++		.private = offsetof(struct bfq_group, stats.unaccounted_time),
++		.seq_show = bfqg_print_stat,
++	},
++	{ }	/* terminate */
++};
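++
++/*
++ * Illustrative usage sketch: on the legacy (v1) cgroup hierarchy the
++ * cftypes above are exposed with the usual controller-name prefix, so the
++ * per-group weight defined in bfqio_files is typically reachable as
++ * blkio.bfq.weight, e.g.
++ *
++ *   echo 500 > /sys/fs/cgroup/blkio/mygroup/blkio.bfq.weight
++ *
++ * (the mount point and the "mygroup" name are just example values).
++ */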
++
++static struct blkcg_policy blkcg_policy_bfq = {
++	.dfl_cftypes            = bfqio_files_dfl,
++	.legacy_cftypes		= bfqio_files,
++
++	.pd_alloc_fn		= bfq_pd_alloc,
++	.pd_init_fn		= bfq_pd_init,
++	.pd_offline_fn		= bfq_pd_offline,
++	.pd_free_fn		= bfq_pd_free,
++	.pd_reset_stats_fn	= bfq_pd_reset_stats,
++
++	.cpd_alloc_fn		= bfq_cpd_alloc,
++	.cpd_init_fn		= bfq_cpd_init,
++	.cpd_bind_fn		= bfq_cpd_init,
++	.cpd_free_fn		= bfq_cpd_free,
++};
++
++#else
++
++static void bfq_init_entity(struct bfq_entity *entity,
++			    struct bfq_group *bfqg)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	entity->weight = entity->new_weight;
++	entity->orig_weight = entity->new_weight;
++	if (bfqq) {
++		bfqq->ioprio = bfqq->new_ioprio;
++		bfqq->ioprio_class = bfqq->new_ioprio_class;
++	}
++	entity->sched_data = &bfqg->sched_data;
++}
++
++static struct bfq_group *
++bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
++{
++	struct bfq_data *bfqd = bic_to_bfqd(bic);
++
++	return bfqd->root_group;
++}
++
++static void bfq_bfqq_move(struct bfq_data *bfqd,
++			  struct bfq_queue *bfqq,
++			  struct bfq_entity *entity,
++			  struct bfq_group *bfqg)
++{
++}
++
++static void bfq_end_wr_async(struct bfq_data *bfqd)
++{
++	bfq_end_wr_async_queues(bfqd, bfqd->root_group);
++}
++
++static void bfq_disconnect_groups(struct bfq_data *bfqd)
++{
++	bfq_put_async_queues(bfqd, bfqd->root_group);
++}
++
++static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
++					      struct blkcg *blkcg)
++{
++	return bfqd->root_group;
++}
++
++static struct bfq_group *
++bfq_create_group_hierarchy(struct bfq_data *bfqd, int node)
++{
++	struct bfq_group *bfqg;
++	int i;
++
++	bfqg = kmalloc_node(sizeof(*bfqg), GFP_KERNEL | __GFP_ZERO, node);
++	if (!bfqg)
++		return NULL;
++
++	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
++		bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++
++	return bfqg;
++}
++#endif
+diff --git a/block/bfq-ioc.c b/block/bfq-ioc.c
+new file mode 100644
+index 0000000..fb7bb8f
+--- /dev/null
++++ b/block/bfq-ioc.c
+@@ -0,0 +1,36 @@
++/*
++ * BFQ: I/O context handling.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++/**
++ * icq_to_bic - convert iocontext queue structure to bfq_io_cq.
++ * @icq: the iocontext queue.
++ */
++static struct bfq_io_cq *icq_to_bic(struct io_cq *icq)
++{
++	/* bic->icq is the first member, %NULL will convert to %NULL */
++	return container_of(icq, struct bfq_io_cq, icq);
++}
++
++/**
++ * bfq_bic_lookup - search into @ioc a bic associated to @bfqd.
++ * @bfqd: the lookup key.
++ * @ioc: the io_context of the process doing I/O.
++ *
++ * Queue lock must be held.
++ */
++static struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd,
++					struct io_context *ioc)
++{
++	if (ioc)
++		return icq_to_bic(ioc_lookup_icq(ioc, bfqd->queue));
++	return NULL;
++}
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+new file mode 100644
+index 0000000..85e2169
+--- /dev/null
++++ b/block/bfq-iosched.c
+@@ -0,0 +1,3763 @@
++/*
++ * Budget Fair Queueing (BFQ) disk scheduler.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
++ * file.
++ *
++ * BFQ is a proportional-share storage-I/O scheduling algorithm based on
++ * the slice-by-slice service scheme of CFQ. But BFQ assigns budgets,
++ * measured in number of sectors, to processes instead of time slices. The
++ * device is not granted to the in-service process for a given time slice,
++ * but until it has exhausted its assigned budget. This change from the time
++ * to the service domain allows BFQ to distribute the device throughput
++ * among processes as desired, without any distortion due to ZBR, workload
++ * fluctuations or other factors. BFQ uses an ad hoc internal scheduler,
++ * called B-WF2Q+, to schedule processes according to their budgets. More
++ * precisely, BFQ schedules queues associated to processes. Thanks to the
++ * accurate policy of B-WF2Q+, BFQ can afford to assign high budgets to
++ * I/O-bound processes issuing sequential requests (to boost the
++ * throughput), and yet guarantee a low latency to interactive and soft
++ * real-time applications.
++ *
++ * BFQ is described in [1], where a reference to the initial, more
++ * theoretical paper on BFQ can also be found. In that paper the interested
++ * reader can find full details on the main algorithm, as well as the
++ * formulas of the guarantees and formal proofs of all the properties.
++ * With respect to the version of BFQ presented in these papers, this
++ * implementation adds a few more heuristics, such as the one that
++ * guarantees a low latency to soft real-time applications, and a
++ * hierarchical extension based on H-WF2Q+.
++ *
++ * B-WF2Q+ is based on WF2Q+, that is described in [2], together with
++ * H-WF2Q+, while the augmented tree used to implement B-WF2Q+ with O(log N)
++ * complexity derives from the one introduced with EEVDF in [3].
++ *
++ * [1] P. Valente and M. Andreolini, ``Improving Application Responsiveness
++ *     with the BFQ Disk I/O Scheduler'',
++ *     Proceedings of the 5th Annual International Systems and Storage
++ *     Conference (SYSTOR '12), June 2012.
++ *
++ * http://algogroup.unimo.it/people/paolo/disk_sched/bf1-v1-suite-results.pdf
++ *
++ * [2] Jon C.R. Bennett and H. Zhang, ``Hierarchical Packet Fair Queueing
++ *     Algorithms,'' IEEE/ACM Transactions on Networking, 5(5):675-689,
++ *     Oct 1997.
++ *
++ * http://www.cs.cmu.edu/~hzhang/papers/TON-97-Oct.ps.gz
++ *
++ * [3] I. Stoica and H. Abdel-Wahab, ``Earliest Eligible Virtual Deadline
++ *     First: A Flexible and Accurate Mechanism for Proportional Share
++ *     Resource Allocation,'' technical report.
++ *
++ * http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf
++ */
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/blkdev.h>
++#include <linux/cgroup.h>
++#include <linux/elevator.h>
++#include <linux/jiffies.h>
++#include <linux/rbtree.h>
++#include <linux/ioprio.h>
++#include "bfq.h"
++#include "blk.h"
++
++/* Expiration time of sync (0) and async (1) requests, in jiffies. */
++static const int bfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
++
++/* Maximum backwards seek, in KiB. */
++static const int bfq_back_max = 16 * 1024;
++
++/* Penalty of a backwards seek, in number of sectors. */
++static const int bfq_back_penalty = 2;
++
++/* Idling period duration, in jiffies. */
++static int bfq_slice_idle = HZ / 125;
++
++/* Minimum number of assigned budgets for which stats are safe to compute. */
++static const int bfq_stats_min_budgets = 194;
++
++/* Default maximum budget values, in sectors and number of requests. */
++static const int bfq_default_max_budget = 16 * 1024;
++static const int bfq_max_budget_async_rq = 4;
++
++/*
++ * Async to sync throughput distribution is controlled as follows:
++ * when an async request is served, the entity is charged the number
++ * of sectors of the request, multiplied by the factor below
++ */
++static const int bfq_async_charge_factor = 10;
++
++/* Default timeout values, in jiffies, approximating CFQ defaults. */
++static const int bfq_timeout_sync = HZ / 8;
++static int bfq_timeout_async = HZ / 25;
++
++struct kmem_cache *bfq_pool;
++
++/* Below this threshold (in ms), we consider thinktime immediate. */
++#define BFQ_MIN_TT		2
++
++/* hw_tag detection: parallel requests threshold and min samples needed. */
++#define BFQ_HW_QUEUE_THRESHOLD	4
++#define BFQ_HW_QUEUE_SAMPLES	32
++
++#define BFQQ_SEEK_THR	 (sector_t)(8 * 1024)
++#define BFQQ_SEEKY(bfqq) ((bfqq)->seek_mean > BFQQ_SEEK_THR)
++
++/* Min samples used for peak rate estimation (for autotuning). */
++#define BFQ_PEAK_RATE_SAMPLES	32
++
++/* Shift used for peak rate fixed precision calculations. */
++#define BFQ_RATE_SHIFT		16
++
++/*
++ * By default, BFQ computes the duration of the weight raising for
++ * interactive applications automatically, using the following formula:
++ * duration = (R / r) * T, where r is the peak rate of the device, and
++ * R and T are two reference parameters.
++ * In particular, R is the peak rate of the reference device (see below),
++ * and T is a reference time: given the systems that are likely to be
++ * installed on the reference device according to its speed class, T is
++ * about the maximum time needed, under BFQ and while reading two files in
++ * parallel, to load typical large applications on these systems.
++ * In practice, the slower/faster the device at hand is, the more/less it
++ * takes to load applications with respect to the reference device.
++ * Accordingly, the longer/shorter BFQ grants weight raising to interactive
++ * applications.
++ *
++ * BFQ uses four different reference pairs (R, T), depending on:
++ * . whether the device is rotational or non-rotational;
++ * . whether the device is slow, such as old or portable HDDs, as well as
++ *   SD cards, or fast, such as newer HDDs and SSDs.
++ *
++ * The device's speed class is dynamically (re)detected in
++ * bfq_update_peak_rate() every time the estimated peak rate is updated.
++ *
++ * In the following definitions, R_slow[0]/R_fast[0] and T_slow[0]/T_fast[0]
++ * are the reference values for a slow/fast rotational device, whereas
++ * R_slow[1]/R_fast[1] and T_slow[1]/T_fast[1] are the reference values for
++ * a slow/fast non-rotational device. Finally, device_speed_thresh are the
++ * thresholds used to switch between speed classes.
++ * Both the reference peak rates and the thresholds are measured in
++ * sectors/usec, left-shifted by BFQ_RATE_SHIFT.
++ */
++static int R_slow[2] = {1536, 10752};
++static int R_fast[2] = {17415, 34791};
++/*
++ * To improve readability, a conversion function is used to initialize the
++ * following arrays, which entails that they can be initialized only in a
++ * function.
++ */
++static int T_slow[2];
++static int T_fast[2];
++static int device_speed_thresh[2];
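++
++/*
++ * A purely illustrative instance of the duration formula above (the
++ * numbers are hypothetical and not taken from the tables): with a
++ * reference pair R = 10000 (in the left-shifted sectors/usec units used
++ * here) and T = 4000 jiffies, a device whose estimated peak rate r is
++ * 5000 would obtain a weight-raising duration of
++ * (R / r) * T = 2 * 4000 = 8000 jiffies, i.e. twice the reference time,
++ * as it is assumed to be twice as slow as the reference device.
++ */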
++
++#define BFQ_SERVICE_TREE_INIT	((struct bfq_service_tree)		\
++				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
++
++#define RQ_BIC(rq)		((struct bfq_io_cq *) (rq)->elv.priv[0])
++#define RQ_BFQQ(rq)		((rq)->elv.priv[1])
++
++static void bfq_schedule_dispatch(struct bfq_data *bfqd);
++
++#include "bfq-ioc.c"
++#include "bfq-sched.c"
++#include "bfq-cgroup.c"
++
++#define bfq_class_idle(bfqq)	((bfqq)->ioprio_class == IOPRIO_CLASS_IDLE)
++#define bfq_class_rt(bfqq)	((bfqq)->ioprio_class == IOPRIO_CLASS_RT)
++
++#define bfq_sample_valid(samples)	((samples) > 80)
++
++/*
++ * We regard a request as SYNC, if either it's a read or has the SYNC bit
++ * set (in which case it could also be a direct WRITE).
++ */
++static int bfq_bio_sync(struct bio *bio)
++{
++	if (bio_data_dir(bio) == READ || (bio->bi_rw & REQ_SYNC))
++		return 1;
++
++	return 0;
++}
++
++/*
++ * Scheduler run of queue, if there are requests pending and no one in the
++ * driver that will restart queueing.
++ */
++static void bfq_schedule_dispatch(struct bfq_data *bfqd)
++{
++	if (bfqd->queued != 0) {
++		bfq_log(bfqd, "schedule dispatch");
++		kblockd_schedule_work(&bfqd->unplug_work);
++	}
++}
++
++/*
++ * Lifted from AS - choose which of rq1 and rq2 is best served now.
++ * We choose the request that is closest to the head right now.  Distance
++ * behind the head is penalized and only allowed to a certain extent.
++ */
++static struct request *bfq_choose_req(struct bfq_data *bfqd,
++				      struct request *rq1,
++				      struct request *rq2,
++				      sector_t last)
++{
++	sector_t s1, s2, d1 = 0, d2 = 0;
++	unsigned long back_max;
++#define BFQ_RQ1_WRAP	0x01 /* request 1 wraps */
++#define BFQ_RQ2_WRAP	0x02 /* request 2 wraps */
++	unsigned int wrap = 0; /* bit mask: requests behind the disk head? */
++
++	if (!rq1 || rq1 == rq2)
++		return rq2;
++	if (!rq2)
++		return rq1;
++
++	if (rq_is_sync(rq1) && !rq_is_sync(rq2))
++		return rq1;
++	else if (rq_is_sync(rq2) && !rq_is_sync(rq1))
++		return rq2;
++	if ((rq1->cmd_flags & REQ_META) && !(rq2->cmd_flags & REQ_META))
++		return rq1;
++	else if ((rq2->cmd_flags & REQ_META) && !(rq1->cmd_flags & REQ_META))
++		return rq2;
++
++	s1 = blk_rq_pos(rq1);
++	s2 = blk_rq_pos(rq2);
++
++	/*
++	 * By definition, 1KiB is 2 sectors.
++	 */
++	back_max = bfqd->bfq_back_max * 2;
++
++	/*
++	 * Strict one way elevator _except_ in the case where we allow
++	 * short backward seeks which are biased as twice the cost of a
++	 * similar forward seek.
++	 */
++	if (s1 >= last)
++		d1 = s1 - last;
++	else if (s1 + back_max >= last)
++		d1 = (last - s1) * bfqd->bfq_back_penalty;
++	else
++		wrap |= BFQ_RQ1_WRAP;
++
++	if (s2 >= last)
++		d2 = s2 - last;
++	else if (s2 + back_max >= last)
++		d2 = (last - s2) * bfqd->bfq_back_penalty;
++	else
++		wrap |= BFQ_RQ2_WRAP;
++
++	/* Found required data */
++
++	/*
++	 * By doing switch() on the bit mask "wrap" we avoid having to
++	 * check two variables for all permutations: --> faster!
++	 */
++	switch (wrap) {
++	case 0: /* common case for CFQ: rq1 and rq2 not wrapped */
++		if (d1 < d2)
++			return rq1;
++		else if (d2 < d1)
++			return rq2;
++
++		if (s1 >= s2)
++			return rq1;
++		else
++			return rq2;
++
++	case BFQ_RQ2_WRAP:
++		return rq1;
++	case BFQ_RQ1_WRAP:
++		return rq2;
++	case (BFQ_RQ1_WRAP|BFQ_RQ2_WRAP): /* both rqs wrapped */
++	default:
++		/*
++		 * Since both rqs are wrapped,
++		 * start with the one that's further behind head
++		 * (--> only *one* back seek required),
++		 * since back seek takes more time than forward.
++		 */
++		if (s1 <= s2)
++			return rq1;
++		else
++			return rq2;
++	}
++}
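++
++/*
++ * Illustrative example of the distance computation above (hypothetical
++ * numbers): with the default back_max (16384 KiB -> 32768 sectors) and
++ * back_penalty (2), if the head is at sector 1000, rq1 starts at 1016 and
++ * rq2 at 996, then d1 = 16 while d2 = (1000 - 996) * 2 = 8, so the short
++ * backward seek to rq2 is still preferred over the longer forward one.
++ */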
++
++/*
++ * Tell whether there are active queues or groups with differentiated weights.
++ */
++static bool bfq_differentiated_weights(struct bfq_data *bfqd)
++{
++	/*
++	 * For weights to differ, at least one of the trees must contain
++	 * at least two nodes.
++	 */
++	return (!RB_EMPTY_ROOT(&bfqd->queue_weights_tree) &&
++		(bfqd->queue_weights_tree.rb_node->rb_left ||
++		 bfqd->queue_weights_tree.rb_node->rb_right)
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	       ) ||
++	       (!RB_EMPTY_ROOT(&bfqd->group_weights_tree) &&
++		(bfqd->group_weights_tree.rb_node->rb_left ||
++		 bfqd->group_weights_tree.rb_node->rb_right)
++#endif
++	       );
++}
++
++/*
++ * The following function returns true if every queue must receive the
++ * same share of the throughput (this condition is used when deciding
++ * whether idling may be disabled, see the comments in the function
++ * bfq_bfqq_may_idle()).
++ *
++ * Such a scenario occurs when:
++ * 1) all active queues have the same weight,
++ * 2) all active groups at the same level in the groups tree have the same
++ *    weight,
++ * 3) all active groups at the same level in the groups tree have the same
++ *    number of children.
++ *
++ * Unfortunately, keeping the necessary state for evaluating exactly the
++ * above symmetry conditions would be quite complex and time-consuming.
++ * Therefore this function evaluates, instead, the following stronger
++ * sub-conditions, for which it is much easier to maintain the needed
++ * state:
++ * 1) all active queues have the same weight,
++ * 2) all active groups have the same weight,
++ * 3) all active groups have at most one active child each.
++ * In particular, the last two conditions are always true if hierarchical
++ * support and the cgroups interface are not enabled, thus no state needs
++ * to be maintained in this case.
++ */
++static bool bfq_symmetric_scenario(struct bfq_data *bfqd)
++{
++	return
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		!bfqd->active_numerous_groups &&
++#endif
++		!bfq_differentiated_weights(bfqd);
++}
++
++/*
++ * If the weight-counter tree passed as input contains no counter for
++ * the weight of the input entity, then add that counter; otherwise just
++ * increment the existing counter.
++ *
++ * Note that weight-counter trees contain few nodes in mostly symmetric
++ * scenarios. For example, if all queues have the same weight, then the
++ * weight-counter tree for the queues may contain at most one node.
++ * This holds even if low_latency is on, because weight-raised queues
++ * are not inserted in the tree.
++ * In most scenarios, the rate at which nodes are created/destroyed
++ * should be low too.
++ */
++static void bfq_weights_tree_add(struct bfq_data *bfqd,
++				 struct bfq_entity *entity,
++				 struct rb_root *root)
++{
++	struct rb_node **new = &(root->rb_node), *parent = NULL;
++
++	/*
++	 * Do not insert if the entity is already associated with a
++	 * counter, which happens if:
++	 *   1) the entity is associated with a queue,
++	 *   2) a request arrival has caused the queue to become both
++	 *      non-weight-raised, and hence change its weight, and
++	 *      backlogged; in this respect, each of the two events
++	 *      causes an invocation of this function,
++	 *   3) this is the invocation of this function caused by the
++	 *      second event. This second invocation is actually useless,
++	 *      and we handle this fact by exiting immediately. More
++	 *      efficient or clearer solutions might possibly be adopted.
++	 */
++	if (entity->weight_counter)
++		return;
++
++	while (*new) {
++		struct bfq_weight_counter *__counter = container_of(*new,
++						struct bfq_weight_counter,
++						weights_node);
++		parent = *new;
++
++		if (entity->weight == __counter->weight) {
++			entity->weight_counter = __counter;
++			goto inc_counter;
++		}
++		if (entity->weight < __counter->weight)
++			new = &((*new)->rb_left);
++		else
++			new = &((*new)->rb_right);
++	}
++
++	entity->weight_counter = kzalloc(sizeof(struct bfq_weight_counter),
++					 GFP_ATOMIC);
++	if (!entity->weight_counter)
++		return; /* no counter on allocation failure: skip accounting */
++	entity->weight_counter->weight = entity->weight;
++	rb_link_node(&entity->weight_counter->weights_node, parent, new);
++	rb_insert_color(&entity->weight_counter->weights_node, root);
++
++inc_counter:
++	entity->weight_counter->num_active++;
++}
++
++/*
++ * Decrement the weight counter associated with the entity, and, if the
++ * counter reaches 0, remove the counter from the tree.
++ * See the comments to the function bfq_weights_tree_add() for considerations
++ * about overhead.
++ */
++static void bfq_weights_tree_remove(struct bfq_data *bfqd,
++				    struct bfq_entity *entity,
++				    struct rb_root *root)
++{
++	if (!entity->weight_counter)
++		return;
++
++	BUG_ON(RB_EMPTY_ROOT(root));
++	BUG_ON(entity->weight_counter->weight != entity->weight);
++
++	BUG_ON(!entity->weight_counter->num_active);
++	entity->weight_counter->num_active--;
++	if (entity->weight_counter->num_active > 0)
++		goto reset_entity_pointer;
++
++	rb_erase(&entity->weight_counter->weights_node, root);
++	kfree(entity->weight_counter);
++
++reset_entity_pointer:
++	entity->weight_counter = NULL;
++}
++
++static struct request *bfq_find_next_rq(struct bfq_data *bfqd,
++					struct bfq_queue *bfqq,
++					struct request *last)
++{
++	struct rb_node *rbnext = rb_next(&last->rb_node);
++	struct rb_node *rbprev = rb_prev(&last->rb_node);
++	struct request *next = NULL, *prev = NULL;
++
++	BUG_ON(RB_EMPTY_NODE(&last->rb_node));
++
++	if (rbprev)
++		prev = rb_entry_rq(rbprev);
++
++	if (rbnext)
++		next = rb_entry_rq(rbnext);
++	else {
++		rbnext = rb_first(&bfqq->sort_list);
++		if (rbnext && rbnext != &last->rb_node)
++			next = rb_entry_rq(rbnext);
++	}
++
++	return bfq_choose_req(bfqd, next, prev, blk_rq_pos(last));
++}
++
++/* see the definition of bfq_async_charge_factor for details */
++static unsigned long bfq_serv_to_charge(struct request *rq,
++					struct bfq_queue *bfqq)
++{
++	return blk_rq_sectors(rq) *
++		(1 + ((!bfq_bfqq_sync(bfqq)) * (bfqq->wr_coeff == 1) *
++		bfq_async_charge_factor));
++}
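++
++/*
++ * For instance (the values follow directly from the expression above): a
++ * sync request is charged exactly blk_rq_sectors(rq); an async request
++ * from a non-weight-raised queue (wr_coeff == 1) is charged
++ * blk_rq_sectors(rq) * (1 + bfq_async_charge_factor), i.e. 11 times its
++ * size with the default factor of 10; an async request from a
++ * weight-raised queue is again charged just its size.
++ */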
++
++/**
++ * bfq_updated_next_req - update the queue after a new next_rq selection.
++ * @bfqd: the device data the queue belongs to.
++ * @bfqq: the queue to update.
++ *
++ * If the first request of a queue changes we make sure that the queue
++ * has enough budget to serve at least its first request (if the
++ * request has grown).  We do this because if the queue has not enough
++ * budget for its first request, it has to go through two dispatch
++ * rounds to actually get it dispatched.
++ */
++static void bfq_updated_next_req(struct bfq_data *bfqd,
++				 struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++	struct request *next_rq = bfqq->next_rq;
++	unsigned long new_budget;
++
++	if (!next_rq)
++		return;
++
++	if (bfqq == bfqd->in_service_queue)
++		/*
++		 * In order not to break guarantees, budgets cannot be
++		 * changed after an entity has been selected.
++		 */
++		return;
++
++	BUG_ON(entity->tree != &st->active);
++	BUG_ON(entity == entity->sched_data->in_service_entity);
++
++	new_budget = max_t(unsigned long, bfqq->max_budget,
++			   bfq_serv_to_charge(next_rq, bfqq));
++	if (entity->budget != new_budget) {
++		entity->budget = new_budget;
++		bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu",
++					 new_budget);
++		bfq_activate_bfqq(bfqd, bfqq);
++	}
++}
++
++static unsigned int bfq_wr_duration(struct bfq_data *bfqd)
++{
++	u64 dur;
++
++	if (bfqd->bfq_wr_max_time > 0)
++		return bfqd->bfq_wr_max_time;
++
++	dur = bfqd->RT_prod;
++	do_div(dur, bfqd->peak_rate);
++
++	return dur;
++}
++
++/* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
++static void bfq_reset_burst_list(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	struct bfq_queue *item;
++	struct hlist_node *n;
++
++	hlist_for_each_entry_safe(item, n, &bfqd->burst_list, burst_list_node)
++		hlist_del_init(&item->burst_list_node);
++	hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
++	bfqd->burst_size = 1;
++}
++
++/* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */
++static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	/* Increment burst size to take into account also bfqq */
++	bfqd->burst_size++;
++
++	if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) {
++		struct bfq_queue *pos, *bfqq_item;
++		struct hlist_node *n;
++
++		/*
++		 * Enough queues have been activated shortly after each
++		 * other to consider this burst as large.
++		 */
++		bfqd->large_burst = true;
++
++		/*
++		 * We can now mark all queues in the burst list as
++		 * belonging to a large burst.
++		 */
++		hlist_for_each_entry(bfqq_item, &bfqd->burst_list,
++				     burst_list_node)
++			bfq_mark_bfqq_in_large_burst(bfqq_item);
++		bfq_mark_bfqq_in_large_burst(bfqq);
++
++		/*
++		 * From now on, and until the current burst finishes, any
++		 * new queue being activated shortly after the last queue
++		 * was inserted in the burst can be immediately marked as
++		 * belonging to a large burst. So the burst list is not
++		 * needed any more. Remove it.
++		 */
++		hlist_for_each_entry_safe(pos, n, &bfqd->burst_list,
++					  burst_list_node)
++			hlist_del_init(&pos->burst_list_node);
++	} else /* burst not yet large: add bfqq to the burst list */
++		hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
++}
++
++/*
++ * If many queues happen to become active shortly after each other, then,
++ * to help the processes associated to these queues get their job done as
++ * soon as possible, it is usually better to not grant either weight-raising
++ * or device idling to these queues. In this comment we describe, firstly,
++ * the reasons why this fact holds, and, secondly, the next function, which
++ * implements the main steps needed to properly mark these queues so that
++ * they can then be treated in a different way.
++ *
++ * As for the terminology, we say that a queue becomes active, i.e.,
++ * switches from idle to backlogged, either when it is created (as a
++ * consequence of the arrival of an I/O request), or, if already existing,
++ * when a new request for the queue arrives while the queue is idle.
++ * Bursts of activations, i.e., activations of different queues occurring
++ * shortly after each other, are typically caused by services or applications
++ * that spawn or reactivate many parallel threads/processes. Examples are
++ * systemd during boot or git grep.
++ *
++ * These services or applications benefit mostly from a high throughput:
++ * the quicker the requests of the activated queues are cumulatively served,
++ * the sooner the target job of these queues gets completed. As a consequence,
++ * weight-raising any of these queues, which also implies idling the device
++ * for it, is almost always counterproductive: in most cases it just lowers
++ * throughput.
++ *
++ * On the other hand, a burst of activations may be also caused by the start
++ * of an application that does not consist in a lot of parallel I/O-bound
++ * threads. In fact, with a complex application, the burst may be just a
++ * consequence of the fact that several processes need to be executed to
++ * start-up the application. To start an application as quickly as possible,
++ * the best thing to do is to privilege the I/O related to the application
++ * with respect to all other I/O. Therefore, the best strategy to start as
++ * quickly as possible an application that causes a burst of activations is
++ * to weight-raise all the queues activated during the burst. This is the
++ * exact opposite of the best strategy for the other type of bursts.
++ *
++ * In the end, to take the best action for each of the two cases, the two
++ * types of bursts need to be distinguished. Fortunately, this seems
++ * relatively easy to do, by looking at the sizes of the bursts. In
++ * particular, we found a threshold such that bursts with a larger size
++ * than that threshold are apparently caused only by services or commands
++ * such as systemd or git grep. For brevity, hereafter we call just 'large'
++ * these bursts. BFQ *does not* weight-raise queues whose activations occur
++ * in a large burst. In addition, for each of these queues BFQ performs or
++ * does not perform idling depending on which choice boosts the throughput
++ * most. The exact choice depends on the device and request pattern at
++ * hand.
++ *
++ * Turning back to the next function, it implements all the steps needed
++ * to detect the occurrence of a large burst and to properly mark all the
++ * queues belonging to it (so that they can then be treated in a different
++ * way). This goal is achieved by maintaining a special "burst list" that
++ * holds, temporarily, the queues that belong to the burst in progress. The
++ * list is then used to mark these queues as belonging to a large burst if
++ * the burst does become large. The main steps are the following.
++ *
++ * . when the very first queue is activated, the queue is inserted into the
++ *   list (as it could be the first queue in a possible burst)
++ *
++ * . if the current burst has not yet become large, and a queue Q that does
++ *   not yet belong to the burst is activated shortly after the last time
++ *   at which a new queue entered the burst list, then the function appends
++ *   Q to the burst list
++ *
++ * . if, as a consequence of the previous step, the burst size reaches
++ *   the large-burst threshold, then
++ *
++ *     . all the queues in the burst list are marked as belonging to a
++ *       large burst
++ *
++ *     . the burst list is deleted; in fact, the burst list already served
++ *       its purpose (keeping temporarily track of the queues in a burst,
++ *       so as to be able to mark them as belonging to a large burst in the
++ *       previous sub-step), and now is not needed any more
++ *
++ *     . the device enters a large-burst mode
++ *
++ * . if a queue Q that does not belong to the burst is activated while
++ *   the device is in large-burst mode and shortly after the last time
++ *   at which a queue either entered the burst list or was marked as
++ *   belonging to the current large burst, then Q is immediately marked
++ *   as belonging to a large burst.
++ *
++ * . if a queue Q that does not belong to the burst is activated only a
++ *   while later, i.e., not shortly after the last time at which a queue
++ *   either entered the burst list or was marked as belonging to the
++ *   current large burst, then the current burst is deemed finished and:
++ *
++ *        . the large-burst mode is reset if set
++ *
++ *        . the burst list is emptied
++ *
++ *        . Q is inserted in the burst list, as Q may be the first queue
++ *          in a possible new burst (then the burst list contains just Q
++ *          after this step).
++ */
++static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			     bool idle_for_long_time)
++{
++	/*
++	 * If bfqq happened to be activated in a burst, but has been idle
++	 * for at least as long as an interactive queue, then we assume
++	 * that, in the overall I/O initiated in the burst, the I/O
++	 * associated to bfqq is finished. So bfqq does not need to be
++	 * treated as a queue belonging to a burst anymore. Accordingly,
++	 * we reset bfqq's in_large_burst flag if set, and remove bfqq
++	 * from the burst list if it's there. However, we do not decrement
++	 * burst_size, because the fact that bfqq does not need to belong
++	 * to the burst list any more does not invalidate the fact that
++	 * bfqq may have been activated during the current burst.
++	 */
++	if (idle_for_long_time) {
++		hlist_del_init(&bfqq->burst_list_node);
++		bfq_clear_bfqq_in_large_burst(bfqq);
++	}
++
++	/*
++	 * If bfqq is already in the burst list or is part of a large
++	 * burst, then there is nothing else to do.
++	 */
++	if (!hlist_unhashed(&bfqq->burst_list_node) ||
++	    bfq_bfqq_in_large_burst(bfqq))
++		return;
++
++	/*
++	 * If bfqq's activation happens late enough, then the current
++	 * burst is finished, and related data structures must be reset.
++	 *
++	 * In this respect, consider the special case where bfqq is the very
++	 * first queue being activated. In this case, last_ins_in_burst is
++	 * not yet significant when we get here. But it is easy to verify
++	 * that, whether or not the following condition is true, bfqq will
++	 * end up being inserted into the burst list. In particular the
++	 * list will happen to contain only bfqq. And this is exactly what
++	 * has to happen, as bfqq may be the first queue in a possible
++	 * burst.
++	 */
++	if (time_is_before_jiffies(bfqd->last_ins_in_burst +
++	    bfqd->bfq_burst_interval)) {
++		bfqd->large_burst = false;
++		bfq_reset_burst_list(bfqd, bfqq);
++		return;
++	}
++
++	/*
++	 * If we get here, then bfqq is being activated shortly after the
++	 * last queue. So, if the current burst is also large, we can mark
++	 * bfqq as belonging to this large burst immediately.
++	 */
++	if (bfqd->large_burst) {
++		bfq_mark_bfqq_in_large_burst(bfqq);
++		return;
++	}
++
++	/*
++	 * If we get here, then a large-burst state has not yet been
++	 * reached, but bfqq is being activated shortly after the last
++	 * queue. Then we add bfqq to the burst.
++	 */
++	bfq_add_to_burst(bfqd, bfqq);
++}
++
++static void bfq_add_request(struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_entity *entity = &bfqq->entity;
++	struct bfq_data *bfqd = bfqq->bfqd;
++	struct request *next_rq, *prev;
++	unsigned long old_wr_coeff = bfqq->wr_coeff;
++	bool interactive = false;
++
++	bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq));
++	bfqq->queued[rq_is_sync(rq)]++;
++	bfqd->queued++;
++
++	elv_rb_add(&bfqq->sort_list, rq);
++
++	/*
++	 * Check if this request is a better next-serve candidate.
++	 */
++	prev = bfqq->next_rq;
++	next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position);
++	BUG_ON(!next_rq);
++	bfqq->next_rq = next_rq;
++
++	if (!bfq_bfqq_busy(bfqq)) {
++		bool soft_rt, in_burst,
++		     idle_for_long_time = time_is_before_jiffies(
++						bfqq->budget_timeout +
++						bfqd->bfq_wr_min_idle_time);
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		bfqg_stats_update_io_add(bfqq_group(RQ_BFQQ(rq)), bfqq,
++					 rq->cmd_flags);
++#endif
++		if (bfq_bfqq_sync(bfqq)) {
++			bool already_in_burst =
++			   !hlist_unhashed(&bfqq->burst_list_node) ||
++			   bfq_bfqq_in_large_burst(bfqq);
++			bfq_handle_burst(bfqd, bfqq, idle_for_long_time);
++			/*
++			 * If bfqq was not already in the current burst,
++			 * then, at this point, bfqq either has been
++			 * added to the current burst or has caused the
++			 * current burst to terminate. In particular, in
++			 * the second case, bfqq has become the first
++			 * queue in a possible new burst.
++			 * In both cases last_ins_in_burst needs to be
++			 * moved forward.
++			 */
++			if (!already_in_burst)
++				bfqd->last_ins_in_burst = jiffies;
++		}
++
++		in_burst = bfq_bfqq_in_large_burst(bfqq);
++		soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
++			!in_burst &&
++			time_is_before_jiffies(bfqq->soft_rt_next_start);
++		interactive = !in_burst && idle_for_long_time;
++		entity->budget = max_t(unsigned long, bfqq->max_budget,
++				       bfq_serv_to_charge(next_rq, bfqq));
++
++		if (!bfq_bfqq_IO_bound(bfqq)) {
++			if (time_before(jiffies,
++					RQ_BIC(rq)->ttime.last_end_request +
++					bfqd->bfq_slice_idle)) {
++				bfqq->requests_within_timer++;
++				if (bfqq->requests_within_timer >=
++				    bfqd->bfq_requests_within_timer)
++					bfq_mark_bfqq_IO_bound(bfqq);
++			} else
++				bfqq->requests_within_timer = 0;
++		}
++
++		if (!bfqd->low_latency)
++			goto add_bfqq_busy;
++
++		/*
++		 * If the queue:
++		 * - is not being boosted,
++		 * - has been idle for enough time,
++		 * - is not a sync queue or is linked to a bfq_io_cq (it is
++		 *   shared "for its nature" or it is not shared and its
++		 *   requests have not been redirected to a shared queue)
++		 * start a weight-raising period.
++		 */
++		if (old_wr_coeff == 1 && (interactive || soft_rt) &&
++		    (!bfq_bfqq_sync(bfqq) || bfqq->bic)) {
++			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++			if (interactive)
++				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++			else
++				bfqq->wr_cur_max_time =
++					bfqd->bfq_wr_rt_max_time;
++			bfq_log_bfqq(bfqd, bfqq,
++				     "wrais starting at %lu, rais_max_time %u",
++				     jiffies,
++				     jiffies_to_msecs(bfqq->wr_cur_max_time));
++		} else if (old_wr_coeff > 1) {
++			if (interactive)
++				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++			else if (in_burst ||
++				 (bfqq->wr_cur_max_time ==
++				  bfqd->bfq_wr_rt_max_time &&
++				  !soft_rt)) {
++				bfqq->wr_coeff = 1;
++				bfq_log_bfqq(bfqd, bfqq,
++					"wrais ending at %lu, rais_max_time %u",
++					jiffies,
++					jiffies_to_msecs(bfqq->
++						wr_cur_max_time));
++			} else if (time_before(
++					bfqq->last_wr_start_finish +
++					bfqq->wr_cur_max_time,
++					jiffies +
++					bfqd->bfq_wr_rt_max_time) &&
++				   soft_rt) {
++				/*
++				 *
++				 * The remaining weight-raising time is lower
++				 * than bfqd->bfq_wr_rt_max_time, which means
++				 * that the application is enjoying weight
++				 * raising either because deemed soft-rt in
++				 * the near past, or because deemed interactive
++				 * long ago.
++				 * In both cases, resetting now the current
++				 * remaining weight-raising time for the
++				 * application to the weight-raising duration
++				 * for soft rt applications would not cause any
++				 * latency increase for the application (as the
++				 * new duration would be higher than the
++				 * remaining time).
++				 *
++				 * In addition, the application is now meeting
++				 * the requirements for being deemed soft rt.
++				 * In the end we can correctly and safely
++				 * (re)charge the weight-raising duration for
++				 * the application with the weight-raising
++				 * duration for soft rt applications.
++				 *
++				 * In particular, doing this recharge now, i.e.,
++				 * before the weight-raising period for the
++				 * application finishes, reduces the probability
++				 * of the following negative scenario:
++				 * 1) the weight of a soft rt application is
++				 *    raised at startup (as for any newly
++				 *    created application),
++				 * 2) since the application is not interactive,
++				 *    at a certain time weight-raising is
++				 *    stopped for the application,
++				 * 3) at that time the application happens to
++				 *    still have pending requests, and hence
++				 *    is destined to not have a chance to be
++				 *    deemed soft rt before these requests are
++				 *    completed (see the comments to the
++				 *    function bfq_bfqq_softrt_next_start()
++				 *    for details on soft rt detection),
++				 * 4) these pending requests experience a high
++				 *    latency because the application is not
++				 *    weight-raised while they are pending.
++				 */
++				bfqq->last_wr_start_finish = jiffies;
++				bfqq->wr_cur_max_time =
++					bfqd->bfq_wr_rt_max_time;
++			}
++		}
++		if (old_wr_coeff != bfqq->wr_coeff)
++			entity->prio_changed = 1;
++add_bfqq_busy:
++		bfqq->last_idle_bklogged = jiffies;
++		bfqq->service_from_backlogged = 0;
++		bfq_clear_bfqq_softrt_update(bfqq);
++		bfq_add_bfqq_busy(bfqd, bfqq);
++	} else {
++		if (bfqd->low_latency && old_wr_coeff == 1 && !rq_is_sync(rq) &&
++		    time_is_before_jiffies(
++				bfqq->last_wr_start_finish +
++				bfqd->bfq_wr_min_inter_arr_async)) {
++			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++			bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++
++			bfqd->wr_busy_queues++;
++			entity->prio_changed = 1;
++			bfq_log_bfqq(bfqd, bfqq,
++			    "non-idle wrais starting at %lu, rais_max_time %u",
++			    jiffies,
++			    jiffies_to_msecs(bfqq->wr_cur_max_time));
++		}
++		if (prev != bfqq->next_rq)
++			bfq_updated_next_req(bfqd, bfqq);
++	}
++
++	if (bfqd->low_latency &&
++		(old_wr_coeff == 1 || bfqq->wr_coeff == 1 || interactive))
++		bfqq->last_wr_start_finish = jiffies;
++}
++
++static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd,
++					  struct bio *bio)
++{
++	struct task_struct *tsk = current;
++	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq;
++
++	bic = bfq_bic_lookup(bfqd, tsk->io_context);
++	if (!bic)
++		return NULL;
++
++	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++	if (bfqq)
++		return elv_rb_find(&bfqq->sort_list, bio_end_sector(bio));
++
++	return NULL;
++}
++
++static void bfq_activate_request(struct request_queue *q, struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++
++	bfqd->rq_in_driver++;
++	bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
++	bfq_log(bfqd, "activate_request: new bfqd->last_position %llu",
++		(unsigned long long) bfqd->last_position);
++}
++
++static void bfq_deactivate_request(struct request_queue *q, struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++
++	BUG_ON(bfqd->rq_in_driver == 0);
++	bfqd->rq_in_driver--;
++}
++
++static void bfq_remove_request(struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_data *bfqd = bfqq->bfqd;
++	const int sync = rq_is_sync(rq);
++
++	if (bfqq->next_rq == rq) {
++		bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq);
++		bfq_updated_next_req(bfqd, bfqq);
++	}
++
++	if (rq->queuelist.prev != &rq->queuelist)
++		list_del_init(&rq->queuelist);
++	BUG_ON(bfqq->queued[sync] == 0);
++	bfqq->queued[sync]--;
++	bfqd->queued--;
++	elv_rb_del(&bfqq->sort_list, rq);
++
++	if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
++		if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue)
++			bfq_del_bfqq_busy(bfqd, bfqq, 1);
++		/*
++		 * Remove queue from request-position tree as it is empty.
++		 */
++		if (bfqq->pos_root) {
++			rb_erase(&bfqq->pos_node, bfqq->pos_root);
++			bfqq->pos_root = NULL;
++		}
++	}
++
++	if (rq->cmd_flags & REQ_META) {
++		BUG_ON(bfqq->meta_pending == 0);
++		bfqq->meta_pending--;
++	}
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	bfqg_stats_update_io_remove(bfqq_group(bfqq), rq->cmd_flags);
++#endif
++}
++
++static int bfq_merge(struct request_queue *q, struct request **req,
++		     struct bio *bio)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct request *__rq;
++
++	__rq = bfq_find_rq_fmerge(bfqd, bio);
++	if (__rq && elv_rq_merge_ok(__rq, bio)) {
++		*req = __rq;
++		return ELEVATOR_FRONT_MERGE;
++	}
++
++	return ELEVATOR_NO_MERGE;
++}
++
++static void bfq_merged_request(struct request_queue *q, struct request *req,
++			       int type)
++{
++	if (type == ELEVATOR_FRONT_MERGE &&
++	    rb_prev(&req->rb_node) &&
++	    blk_rq_pos(req) <
++	    blk_rq_pos(container_of(rb_prev(&req->rb_node),
++				    struct request, rb_node))) {
++		struct bfq_queue *bfqq = RQ_BFQQ(req);
++		struct bfq_data *bfqd = bfqq->bfqd;
++		struct request *prev, *next_rq;
++
++		/* Reposition request in its sort_list */
++		elv_rb_del(&bfqq->sort_list, req);
++		elv_rb_add(&bfqq->sort_list, req);
++		/* Choose next request to be served for bfqq */
++		prev = bfqq->next_rq;
++		next_rq = bfq_choose_req(bfqd, bfqq->next_rq, req,
++					 bfqd->last_position);
++		BUG_ON(!next_rq);
++		bfqq->next_rq = next_rq;
++	}
++}
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static void bfq_bio_merged(struct request_queue *q, struct request *req,
++			   struct bio *bio)
++{
++	bfqg_stats_update_io_merged(bfqq_group(RQ_BFQQ(req)), bio->bi_rw);
++}
++#endif
++
++static void bfq_merged_requests(struct request_queue *q, struct request *rq,
++				struct request *next)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq), *next_bfqq = RQ_BFQQ(next);
++
++	/*
++	 * If next and rq belong to the same bfq_queue and next is older
++	 * than rq, then reposition rq in the fifo (by substituting next
++	 * with rq). Otherwise, if next and rq belong to different
++	 * bfq_queues, never reposition rq: in fact, we would have to
++	 * reposition it with respect to next's position in its own fifo,
++	 * which would most certainly be too expensive with respect to
++	 * the benefits.
++	 */
++	if (bfqq == next_bfqq &&
++	    !list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
++	    time_before(next->fifo_time, rq->fifo_time)) {
++		list_del_init(&rq->queuelist);
++		list_replace_init(&next->queuelist, &rq->queuelist);
++		rq->fifo_time = next->fifo_time;
++	}
++
++	if (bfqq->next_rq == next)
++		bfqq->next_rq = rq;
++
++	bfq_remove_request(next);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	bfqg_stats_update_io_merged(bfqq_group(bfqq), next->cmd_flags);
++#endif
++}
++
++/* Must be called with bfqq != NULL */
++static void bfq_bfqq_end_wr(struct bfq_queue *bfqq)
++{
++	BUG_ON(!bfqq);
++	if (bfq_bfqq_busy(bfqq))
++		bfqq->bfqd->wr_busy_queues--;
++	bfqq->wr_coeff = 1;
++	bfqq->wr_cur_max_time = 0;
++	/* Trigger a weight change on the next activation of the queue */
++	bfqq->entity.prio_changed = 1;
++}
++
++static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
++				    struct bfq_group *bfqg)
++{
++	int i, j;
++
++	for (i = 0; i < 2; i++)
++		for (j = 0; j < IOPRIO_BE_NR; j++)
++			if (bfqg->async_bfqq[i][j])
++				bfq_bfqq_end_wr(bfqg->async_bfqq[i][j]);
++	if (bfqg->async_idle_bfqq)
++		bfq_bfqq_end_wr(bfqg->async_idle_bfqq);
++}
++
++static void bfq_end_wr(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq;
++
++	spin_lock_irq(bfqd->queue->queue_lock);
++
++	list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list)
++		bfq_bfqq_end_wr(bfqq);
++	list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list)
++		bfq_bfqq_end_wr(bfqq);
++	bfq_end_wr_async(bfqd);
++
++	spin_unlock_irq(bfqd->queue->queue_lock);
++}
++
++static int bfq_allow_merge(struct request_queue *q, struct request *rq,
++			   struct bio *bio)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_io_cq *bic;
++
++	/*
++	 * Disallow merge of a sync bio into an async request.
++	 */
++	if (bfq_bio_sync(bio) && !rq_is_sync(rq))
++		return 0;
++
++	/*
++	 * Lookup the bfqq that this bio will be queued with. Allow
++	 * merge only if rq is queued there.
++	 * Queue lock is held here.
++	 */
++	bic = bfq_bic_lookup(bfqd, current->io_context);
++	if (!bic)
++		return 0;
++
++	return bic_to_bfqq(bic, bfq_bio_sync(bio)) == RQ_BFQQ(rq);
++}
++
++static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
++				       struct bfq_queue *bfqq)
++{
++	if (bfqq) {
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		bfqg_stats_update_avg_queue_size(bfqq_group(bfqq));
++#endif
++		bfq_mark_bfqq_must_alloc(bfqq);
++		bfq_mark_bfqq_budget_new(bfqq);
++		bfq_clear_bfqq_fifo_expire(bfqq);
++
++		bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
++
++		bfq_log_bfqq(bfqd, bfqq,
++			     "set_in_service_queue, cur-budget = %d",
++			     bfqq->entity.budget);
++	}
++
++	bfqd->in_service_queue = bfqq;
++}
++
++/*
++ * Get and set a new queue for service.
++ */
++static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq = bfq_get_next_queue(bfqd);
++
++	__bfq_set_in_service_queue(bfqd, bfqq);
++	return bfqq;
++}
++
++/*
++ * If enough samples have been computed, return the current max budget
++ * stored in bfqd, which is dynamically updated according to the
++ * estimated disk peak rate; otherwise return the default max budget
++ */
++static int bfq_max_budget(struct bfq_data *bfqd)
++{
++	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
++		return bfq_default_max_budget;
++	else
++		return bfqd->bfq_max_budget;
++}
++
++/*
++ * Return min budget, which is a fraction of the current or default
++ * max budget (trying with 1/32)
++ */
++static int bfq_min_budget(struct bfq_data *bfqd)
++{
++	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
++		return bfq_default_max_budget / 32;
++	else
++		return bfqd->bfq_max_budget / 32;
++}
++
++static void bfq_arm_slice_timer(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq = bfqd->in_service_queue;
++	struct bfq_io_cq *bic;
++	unsigned long sl;
++
++	BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++
++	/* Processes have exited, don't wait. */
++	bic = bfqd->in_service_bic;
++	if (!bic || atomic_read(&bic->icq.ioc->active_ref) == 0)
++		return;
++
++	bfq_mark_bfqq_wait_request(bfqq);
++
++	/*
++	 * We don't want to idle for seeks, but we do want to allow
++	 * fair distribution of slice time for a process doing back-to-back
++	 * seeks. So allow a little bit of time for it to submit a new rq.
++	 *
++	 * To prevent processes with (partly) seeky workloads from
++	 * being too ill-treated, grant them a small fraction of the
++	 * assigned budget before reducing the waiting time to
++	 * BFQ_MIN_TT. This happened to help reduce latency.
++	 */
++	sl = bfqd->bfq_slice_idle;
++	/*
++	 * Unless the queue is being weight-raised or the scenario is
++	 * asymmetric, grant only minimum idle time if the queue either
++	 * has been seeky for long enough or has already proved to be
++	 * constantly seeky.
++	 */
++	if (bfq_sample_valid(bfqq->seek_samples) &&
++	    ((BFQQ_SEEKY(bfqq) && bfqq->entity.service >
++				  bfq_max_budget(bfqq->bfqd) / 8) ||
++	      bfq_bfqq_constantly_seeky(bfqq)) && bfqq->wr_coeff == 1 &&
++	    bfq_symmetric_scenario(bfqd))
++		sl = min(sl, msecs_to_jiffies(BFQ_MIN_TT));
++	else if (bfqq->wr_coeff > 1)
++		sl = sl * 3;
++	bfqd->last_idling_start = ktime_get();
++	mod_timer(&bfqd->idle_slice_timer, jiffies + sl);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	bfqg_stats_set_start_idle_time(bfqq_group(bfqq));
++#endif
++	bfq_log(bfqd, "arm idle: %u/%u ms",
++		jiffies_to_msecs(sl), jiffies_to_msecs(bfqd->bfq_slice_idle));
++}
++
++/*
++ * Set the maximum time for the in-service queue to consume its
++ * budget. This prevents seeky processes from lowering the disk
++ * throughput (always guaranteed with a time slice scheme as in CFQ).
++ */
++static void bfq_set_budget_timeout(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq = bfqd->in_service_queue;
++	unsigned int timeout_coeff;
++
++	if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
++		timeout_coeff = 1;
++	else
++		timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
++
++	bfqd->last_budget_start = ktime_get();
++
++	bfq_clear_bfqq_budget_new(bfqq);
++	bfqq->budget_timeout = jiffies +
++		bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] * timeout_coeff;
++
++	bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u",
++		jiffies_to_msecs(bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] *
++		timeout_coeff));
++}
++
++/*
++ * Move request from internal lists to the request queue dispatch list.
++ */
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	/*
++	 * For consistency, the next instruction should have been executed
++	 * after removing the request from the queue and dispatching it.
++	 * Instead, we execute this instruction before bfq_remove_request()
++	 * (and hence introduce a temporary inconsistency), for efficiency.
++	 * In fact, in a forced_dispatch, this prevents two counters related
++	 * to bfqq->dispatched from being uselessly decremented if bfqq
++	 * is not in service, and then incremented again after
++	 * incrementing bfqq->dispatched.
++	 */
++	bfqq->dispatched++;
++	bfq_remove_request(rq);
++	elv_dispatch_sort(q, rq);
++
++	if (bfq_bfqq_sync(bfqq))
++		bfqd->sync_flight++;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	bfqg_stats_update_dispatch(bfqq_group(bfqq), blk_rq_bytes(rq),
++				   rq->cmd_flags);
++#endif
++}
++
++/*
++ * Return expired entry, or NULL to just start from scratch in rbtree.
++ */
++static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
++{
++	struct request *rq = NULL;
++
++	if (bfq_bfqq_fifo_expire(bfqq))
++		return NULL;
++
++	bfq_mark_bfqq_fifo_expire(bfqq);
++
++	if (list_empty(&bfqq->fifo))
++		return NULL;
++
++	rq = rq_entry_fifo(bfqq->fifo.next);
++
++	if (time_before(jiffies, rq->fifo_time))
++		return NULL;
++
++	return rq;
++}
++
++static int bfq_bfqq_budget_left(struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	return entity->budget - entity->service;
++}
++
++static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	BUG_ON(bfqq != bfqd->in_service_queue);
++
++	__bfq_bfqd_reset_in_service(bfqd);
++
++	if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
++		/*
++		 * Overloading budget_timeout field to store the time
++		 * at which the queue remains with no backlog; used by
++		 * the weight-raising mechanism.
++		 */
++		bfqq->budget_timeout = jiffies;
++		bfq_del_bfqq_busy(bfqd, bfqq, 1);
++	} else
++		bfq_activate_bfqq(bfqd, bfqq);
++}
++
++/**
++ * __bfq_bfqq_recalc_budget - try to adapt the budget to the @bfqq behavior.
++ * @bfqd: device data.
++ * @bfqq: queue to update.
++ * @reason: reason for expiration.
++ *
++ * Handle the feedback on @bfqq budget at queue expiration.
++ * See the body for detailed comments.
++ */
++static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
++				     struct bfq_queue *bfqq,
++				     enum bfqq_expiration reason)
++{
++	struct request *next_rq;
++	int budget, min_budget;
++
++	budget = bfqq->max_budget;
++	min_budget = bfq_min_budget(bfqd);
++
++	BUG_ON(bfqq != bfqd->in_service_queue);
++
++	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %d, budg left %d",
++		bfqq->entity.budget, bfq_bfqq_budget_left(bfqq));
++	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last max_budg %d, min budg %d",
++		budget, bfq_min_budget(bfqd));
++	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d",
++		bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->in_service_queue));
++
++	if (bfq_bfqq_sync(bfqq)) {
++		switch (reason) {
++		/*
++		 * Caveat: in all the following cases we trade latency
++		 * for throughput.
++		 */
++		case BFQ_BFQQ_TOO_IDLE:
++			/*
++			 * This is the only case where we may reduce
++			 * the budget: if there is no request of the
++			 * process still waiting for completion, then
++			 * we assume (tentatively) that the timer has
++			 * expired because the batch of requests of
++			 * the process could have been served with a
++			 * smaller budget.  Hence, betting that
++			 * the process will behave in the same way when it
++			 * becomes backlogged again, we reduce its
++			 * next budget.  As long as we guess right,
++			 * this budget cut reduces the latency
++			 * experienced by the process.
++			 *
++			 * However, if there are still outstanding
++			 * requests, then the process may have not yet
++			 * issued its next request just because it is
++			 * still waiting for the completion of some of
++			 * the still outstanding ones.  So in this
++			 * subcase we do not reduce its budget, on the
++			 * contrary we increase it to possibly boost
++			 * the throughput, as discussed in the
++			 * comments to the BUDGET_TIMEOUT case.
++			 */
++			if (bfqq->dispatched > 0) /* still outstanding reqs */
++				budget = min(budget * 2, bfqd->bfq_max_budget);
++			else {
++				if (budget > 5 * min_budget)
++					budget -= 4 * min_budget;
++				else
++					budget = min_budget;
++			}
++			break;
++		case BFQ_BFQQ_BUDGET_TIMEOUT:
++			/*
++			 * We double the budget here because: 1) it
++			 * gives the chance to boost the throughput if
++			 * this is not a seeky process (which may have
++			 * bumped into this timeout because of, e.g.,
++			 * ZBR), 2) together with charge_full_budget
++			 * it helps give seeky processes higher
++			 * timestamps, and hence be served less
++			 * frequently.
++			 */
++			budget = min(budget * 2, bfqd->bfq_max_budget);
++			break;
++		case BFQ_BFQQ_BUDGET_EXHAUSTED:
++			/*
++			 * The process still has backlog, and did not
++			 * let either the budget timeout or the disk
++			 * idling timeout expire. Hence it is not
++			 * seeky, has a short thinktime and may be
++			 * happy with a higher budget too. So
++			 * definitely increase the budget of this good
++			 * candidate to boost the disk throughput.
++			 */
++			budget = min(budget * 4, bfqd->bfq_max_budget);
++			break;
++		case BFQ_BFQQ_NO_MORE_REQUESTS:
++		       /*
++			* Leave the budget unchanged.
++			*/
++		default:
++			return;
++		}
++	} else
++		/*
++		 * Async queues always get the maximum possible budget
++		 * (their ability to dispatch is limited by
++		 * @bfqd->bfq_max_budget_async_rq).
++		 */
++		budget = bfqd->bfq_max_budget;
++
++	bfqq->max_budget = budget;
++
++	if (bfqd->budgets_assigned >= bfq_stats_min_budgets &&
++	    !bfqd->bfq_user_max_budget)
++		bfqq->max_budget = min(bfqq->max_budget, bfqd->bfq_max_budget);
++
++	/*
++	 * Make sure that we have enough budget for the next request.
++	 * Since the finish time of the bfqq must be kept in sync with
++	 * the budget, be sure to call __bfq_bfqq_expire() after the
++	 * update.
++	 */
++	next_rq = bfqq->next_rq;
++	if (next_rq)
++		bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget,
++					    bfq_serv_to_charge(next_rq, bfqq));
++	else
++		bfqq->entity.budget = bfqq->max_budget;
++
++	bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %d",
++			next_rq ? blk_rq_sectors(next_rq) : 0,
++			bfqq->entity.budget);
++}
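
The feedback above can be condensed into a short, self-contained sketch showing how a queue's max_budget evolves across successive expirations. This is only an illustrative userspace restatement: the simplified reason codes, the helper name and the numeric values below are assumptions made for the example, not part of the patch.

#include <stdio.h>

/* simplified stand-ins for the expiration reasons handled above */
enum reason { TOO_IDLE_NO_REQS, TOO_IDLE_WITH_REQS, TIMEOUT, EXHAUSTED };

static int next_budget(int budget, int min_budget, int max_budget, enum reason r)
{
	switch (r) {
	case TOO_IDLE_NO_REQS:	/* only case where the budget shrinks */
		return budget > 5 * min_budget ? budget - 4 * min_budget
					       : min_budget;
	case TOO_IDLE_WITH_REQS:
	case TIMEOUT:		/* double, capped at the device max budget */
		return budget * 2 < max_budget ? budget * 2 : max_budget;
	case EXHAUSTED:		/* quadruple, capped at the device max budget */
		return budget * 4 < max_budget ? budget * 4 : max_budget;
	}
	return budget;		/* no-more-requests and default: unchanged */
}

int main(void)
{
	int budget = 2048, min_b = 512, max_b = 16384;
	enum reason history[] = { EXHAUSTED, TIMEOUT, TOO_IDLE_NO_REQS };
	unsigned int i;

	for (i = 0; i < 3; i++) {
		budget = next_budget(budget, min_b, max_b, history[i]);
		printf("after expiration %u: budget = %d sectors\n", i, budget);
	}
	return 0;
}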
++
++static unsigned long bfq_calc_max_budget(u64 peak_rate, u64 timeout)
++{
++	unsigned long max_budget;
++
++	/*
++	 * The max_budget calculated when autotuning is equal to the
++	 * number of sectors transferred in timeout_sync at the
++	 * estimated peak rate.
++	 */
++	max_budget = (unsigned long)(peak_rate * 1000 *
++				     timeout >> BFQ_RATE_SHIFT);
++
++	return max_budget;
++}
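
For concreteness: peak_rate is kept in fixed point as sectors per microsecond shifted left by BFQ_RATE_SHIFT, and timeout is in milliseconds, so the factor 1000 converts it to microseconds and the final shift drops the fixed point. Below is a minimal userspace sketch; the BFQ_RATE_SHIFT value of 16, the roughly 100 MiB/s device and the 125 ms sync timeout are assumptions picked for illustration only.

#include <stdio.h>
#include <stdint.h>

#define BFQ_RATE_SHIFT 16	/* assumed value, for illustration only */

static unsigned long calc_max_budget(uint64_t peak_rate, uint64_t timeout_ms)
{
	/* (sectors/usec << SHIFT) * usecs in the timeout, then drop the shift */
	return (unsigned long)(peak_rate * 1000 * timeout_ms >> BFQ_RATE_SHIFT);
}

int main(void)
{
	/* ~100 MiB/s is roughly 0.2 sectors per microsecond (512 B sectors) */
	uint64_t peak_rate = (uint64_t)(0.2 * (1 << BFQ_RATE_SHIFT));

	printf("autotuned max budget: %lu sectors\n",
	       calc_max_budget(peak_rate, 125));
	return 0;
}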
++
++/*
++ * In addition to updating the peak rate, checks whether the process
++ * is "slow", and returns true if so. This slow flag is used, in addition
++ * to the budget timeout, to reduce the amount of service provided to
++ * seeky processes, and hence reduce their chances to lower the
++ * throughput. See the code for more details.
++ */
++static bool bfq_update_peak_rate(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++				 bool compensate, enum bfqq_expiration reason)
++{
++	u64 bw, usecs, expected, timeout;
++	ktime_t delta;
++	int update = 0;
++
++	if (!bfq_bfqq_sync(bfqq) || bfq_bfqq_budget_new(bfqq))
++		return false;
++
++	if (compensate)
++		delta = bfqd->last_idling_start;
++	else
++		delta = ktime_get();
++	delta = ktime_sub(delta, bfqd->last_budget_start);
++	usecs = ktime_to_us(delta);
++
++	/* Don't trust short/unrealistic values. */
++	if (usecs < 100 || usecs >= LONG_MAX)
++		return false;
++
++	/*
++	 * Calculate the bandwidth for the last slice.  We use a 64 bit
++	 * value to store the peak rate, in sectors per usec in fixed
++	 * point math.  We do so to have enough precision in the estimate
++	 * and to avoid overflows.
++	 */
++	bw = (u64)bfqq->entity.service << BFQ_RATE_SHIFT;
++	do_div(bw, (unsigned long)usecs);
++
++	timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++
++	/*
++	 * Use only long (> 20ms) intervals to filter out spikes for
++	 * the peak rate estimation.
++	 */
++	if (usecs > 20000) {
++		if (bw > bfqd->peak_rate ||
++		   (!BFQQ_SEEKY(bfqq) &&
++		    reason == BFQ_BFQQ_BUDGET_TIMEOUT)) {
++			bfq_log(bfqd, "measured bw =%llu", bw);
++			/*
++			 * To smooth oscillations use a low-pass filter with
++			 * alpha=7/8, i.e.,
++			 * new_rate = (7/8) * old_rate + (1/8) * bw
++			 */
++			do_div(bw, 8);
++			if (bw == 0)
++				return 0;
++			bfqd->peak_rate *= 7;
++			do_div(bfqd->peak_rate, 8);
++			bfqd->peak_rate += bw;
++			update = 1;
++			bfq_log(bfqd, "new peak_rate=%llu", bfqd->peak_rate);
++		}
++
++		update |= bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES - 1;
++
++		if (bfqd->peak_rate_samples < BFQ_PEAK_RATE_SAMPLES)
++			bfqd->peak_rate_samples++;
++
++		if (bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES &&
++		    update) {
++			int dev_type = blk_queue_nonrot(bfqd->queue);
++
++			if (bfqd->bfq_user_max_budget == 0) {
++				bfqd->bfq_max_budget =
++					bfq_calc_max_budget(bfqd->peak_rate,
++							    timeout);
++				bfq_log(bfqd, "new max_budget=%d",
++					bfqd->bfq_max_budget);
++			}
++			if (bfqd->device_speed == BFQ_BFQD_FAST &&
++			    bfqd->peak_rate < device_speed_thresh[dev_type]) {
++				bfqd->device_speed = BFQ_BFQD_SLOW;
++				bfqd->RT_prod = R_slow[dev_type] *
++						T_slow[dev_type];
++			} else if (bfqd->device_speed == BFQ_BFQD_SLOW &&
++			    bfqd->peak_rate > device_speed_thresh[dev_type]) {
++				bfqd->device_speed = BFQ_BFQD_FAST;
++				bfqd->RT_prod = R_fast[dev_type] *
++						T_fast[dev_type];
++			}
++		}
++	}
++
++	/*
++	 * If the process has been served for a too short time
++	 * interval to let its possible sequential accesses prevail on
++	 * the initial seek time needed to move the disk head on the
++	 * first sector it requested, then give the process a chance
++	 * and for the moment return false.
++	 */
++	if (bfqq->entity.budget <= bfq_max_budget(bfqd) / 8)
++		return false;
++
++	/*
++	 * A process is considered ``slow'' (i.e., seeky, so that we
++	 * cannot treat it fairly in the service domain, as it would
++	 * slow down the other processes too much) if, when a slice
++	 * ends for whatever reason, it has received service at a
++	 * rate that would not be high enough to complete the budget
++	 * before the budget timeout expiration.
++	 */
++	expected = bw * 1000 * timeout >> BFQ_RATE_SHIFT;
++
++	/*
++	 * Caveat: processes doing IO in the slower disk zones will
++	 * tend to be slow(er) even if not seeky. And the estimated
++	 * peak rate will actually be an average over the disk
++	 * surface. Hence, to not be too harsh with unlucky processes,
++	 * we keep a budget/3 margin of safety before declaring a
++	 * process slow.
++	 */
++	return expected > (4 * bfqq->entity.budget) / 3;
++}
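
The smoothing step above is an exponential moving average with alpha = 1/8. The sketch below isolates just that arithmetic in userspace C; plain division stands in for do_div(), the gating on interval length, seekiness and expiration reason is omitted, and the helper name and sample values are illustrative only.

#include <stdio.h>
#include <stdint.h>

/* new_rate = (7/8) * old_rate + (1/8) * sample, as in the filter above */
static uint64_t filter_peak_rate(uint64_t peak_rate, uint64_t sample)
{
	sample /= 8;
	if (sample == 0)
		return peak_rate;	/* sample too small to contribute */
	return peak_rate * 7 / 8 + sample;
}

int main(void)
{
	uint64_t rate = 0;
	uint64_t samples[] = { 800, 1200, 1000, 950, 4000 /* spike */, 1000 };
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		rate = filter_peak_rate(rate, samples[i]);
		printf("sample %u: bw %4llu -> rate %llu\n", i,
		       (unsigned long long)samples[i],
		       (unsigned long long)rate);
	}
	return 0;
}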
++
++/*
++ * To be deemed as soft real-time, an application must meet two
++ * requirements. First, the application must not require an average
++ * bandwidth higher than the approximate bandwidth required to play back or
++ * record a compressed high-definition video.
++ * The next function is invoked on the completion of the last request of a
++ * batch, to compute the next-start time instant, soft_rt_next_start, such
++ * that, if the next request of the application does not arrive before
++ * soft_rt_next_start, then the above requirement on the bandwidth is met.
++ *
++ * The second requirement is that the request pattern of the application is
++ * isochronous, i.e., that, after issuing a request or a batch of requests,
++ * the application stops issuing new requests until all its pending requests
++ * have been completed. After that, the application may issue a new batch,
++ * and so on.
++ * For this reason the next function is invoked to compute
++ * soft_rt_next_start only for applications that meet this requirement,
++ * whereas soft_rt_next_start is set to infinity for applications that do
++ * not.
++ *
++ * Unfortunately, even a greedy application may happen to behave in an
++ * isochronous way if the CPU load is high. In fact, the application may
++ * stop issuing requests while the CPUs are busy serving other processes,
++ * then restart, then stop again for a while, and so on. In addition, if
++ * the disk achieves a low enough throughput with the request pattern
++ * issued by the application (e.g., because the request pattern is random
++ * and/or the device is slow), then the application may meet the above
++ * bandwidth requirement too. To prevent such a greedy application from
++ * being deemed soft real-time, a further rule is used in the computation of
++ * soft_rt_next_start: soft_rt_next_start must be higher than the current
++ * time plus the maximum time for which the arrival of a request is waited
++ * for when a sync queue becomes idle, namely bfqd->bfq_slice_idle.
++ * This filters out greedy applications, as the latter issue instead their
++ * next request as soon as possible after the last one has been completed
++ * (in contrast, when a batch of requests is completed, a soft real-time
++ * application spends some time processing data).
++ *
++ * Unfortunately, the last filter may easily generate false positives if
++ * only bfqd->bfq_slice_idle is used as a reference time interval and one
++ * or both the following cases occur:
++ * 1) HZ is so low that the duration of a jiffy is comparable to or higher
++ *    than bfqd->bfq_slice_idle. This happens, e.g., on slow devices with
++ *    HZ=100.
++ * 2) jiffies, instead of increasing at a constant rate, may stop increasing
++ *    for a while, then suddenly 'jump' by several units to recover the lost
++ *    increments. This seems to happen, e.g., inside virtual machines.
++ * To address this issue, we do not use as a reference time interval just
++ * bfqd->bfq_slice_idle, but bfqd->bfq_slice_idle plus a few jiffies. In
++ * particular we add the minimum number of jiffies for which the filter
++ * seems to be quite precise also in embedded systems and KVM/QEMU virtual
++ * machines.
++ */
++static unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd,
++						struct bfq_queue *bfqq)
++{
++	return max(bfqq->last_idle_bklogged +
++		   HZ * bfqq->service_from_backlogged /
++		   bfqd->bfq_wr_max_softrt_rate,
++		   jiffies + bfqq->bfqd->bfq_slice_idle + 4);
++}
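
Numerically, the first argument of max() above is the earliest instant at which issuing the next batch still keeps the average rate within bfq_wr_max_softrt_rate, and the second argument is the slice-idle-plus-a-few-jiffies guard discussed in the comment. Here is a hedged userspace sketch with illustrative values; HZ, the timestamps and the assumption that the rate is expressed in sectors per second are all made up for the example.

#include <stdio.h>

#define HZ 250			/* assumed tick rate, for illustration */

static unsigned long softrt_next_start(unsigned long last_idle_bklogged,
				       unsigned long service_sectors,
				       unsigned long max_rate_sect_per_sec,
				       unsigned long now_jiffies,
				       unsigned long slice_idle_jiffies)
{
	unsigned long bw_bound = last_idle_bklogged +
		HZ * service_sectors / max_rate_sect_per_sec;
	unsigned long idle_bound = now_jiffies + slice_idle_jiffies + 4;

	return bw_bound > idle_bound ? bw_bound : idle_bound;
}

int main(void)
{
	/*
	 * 2000 sectors served since the queue last emptied at t = 10000,
	 * a 7000 sectors/s soft-rt cap, current time 10050, 2-jiffy idle slice.
	 */
	printf("soft_rt_next_start = %lu (jiffies)\n",
	       softrt_next_start(10000, 2000, 7000, 10050, 2));
	return 0;
}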
++
++/*
++ * Return the largest-possible time instant such that, for as long as possible,
++ * the current time will be lower than this time instant according to the macro
++ * time_is_before_jiffies().
++ */
++static unsigned long bfq_infinity_from_now(unsigned long now)
++{
++	return now + ULONG_MAX / 2;
++}
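
The value now + ULONG_MAX / 2 acts as "infinity" because the jiffies comparison macros work on the signed difference of two wrapping counters, so a timestamp half the counter range ahead stays in the future for as long as arithmetically possible. The small userspace sketch below demonstrates that comparison; the local time_before() only mimics the spirit of the kernel macro and is not taken from this patch.

#include <stdio.h>
#include <limits.h>

/* wrap-safe "a strictly earlier than b", in the spirit of the kernel macro */
static int time_before(unsigned long a, unsigned long b)
{
	return (long)(a - b) < 0;
}

int main(void)
{
	unsigned long now = 123456UL;
	unsigned long inf = now + ULONG_MAX / 2;	/* "infinity" from now */

	printf("now before inf:        %d\n", time_before(now, inf));
	printf("now + 10^6 before inf: %d\n", time_before(now + 1000000UL, inf));
	printf("inf before now:        %d\n", time_before(inf, now));
	return 0;
}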
++
++/**
++ * bfq_bfqq_expire - expire a queue.
++ * @bfqd: device owning the queue.
++ * @bfqq: the queue to expire.
++ * @compensate: if true, compensate for the time spent idling.
++ * @reason: the reason causing the expiration.
++ *
++ *
++ * If the process associated to the queue is slow (i.e., seeky), or in
++ * case of budget timeout, or, finally, if it is async, we
++ * artificially charge it an entire budget (independently of the
++ * actual service it received). As a consequence, the queue will get
++ * higher timestamps than the correct ones upon reactivation, and
++ * hence it will be rescheduled as if it had received more service
++ * than what it actually received. In the end, this class of processes
++ * will receive less service in proportion to how slowly they consume
++ * their budgets (and hence how seriously they tend to lower the
++ * throughput).
++ *
++ * In contrast, when a queue expires because it has been idling for
++ * too long or because it exhausted its budget, we do not touch the
++ * amount of service it has received. Hence when the queue will be
++ * reactivated and its timestamps updated, the latter will be in sync
++ * with the actual service received by the queue until expiration.
++ *
++ * Charging a full budget to the first type of queues and the exact
++ * service to the others has the effect of using the WF2Q+ policy to
++ * schedule the former on a timeslice basis, without violating the
++ * service domain guarantees of the latter.
++ */
++static void bfq_bfqq_expire(struct bfq_data *bfqd,
++			    struct bfq_queue *bfqq,
++			    bool compensate,
++			    enum bfqq_expiration reason)
++{
++	bool slow;
++
++	BUG_ON(bfqq != bfqd->in_service_queue);
++
++	/*
++	 * Update disk peak rate for autotuning and check whether the
++	 * process is slow (see bfq_update_peak_rate).
++	 */
++	slow = bfq_update_peak_rate(bfqd, bfqq, compensate, reason);
++
++	/*
++	 * As above explained, 'punish' slow (i.e., seeky), timed-out
++	 * and async queues, to favor sequential sync workloads.
++	 *
++	 * Processes doing I/O in the slower disk zones will tend to be
++	 * slow(er) even if not seeky. Hence, since the estimated peak
++	 * rate is actually an average over the disk surface, these
++	 * processes may timeout just for bad luck. To avoid punishing
++	 * them we do not charge a full budget to a process that
++	 * succeeded in consuming at least 2/3 of its budget.
++	 */
++	if (slow || (reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++		     bfq_bfqq_budget_left(bfqq) >=  bfqq->entity.budget / 3))
++		bfq_bfqq_charge_full_budget(bfqq);
++
++	bfqq->service_from_backlogged += bfqq->entity.service;
++
++	if (BFQQ_SEEKY(bfqq) && reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++	    !bfq_bfqq_constantly_seeky(bfqq)) {
++		bfq_mark_bfqq_constantly_seeky(bfqq);
++		if (!blk_queue_nonrot(bfqd->queue))
++			bfqd->const_seeky_busy_in_flight_queues++;
++	}
++
++	if (reason == BFQ_BFQQ_TOO_IDLE &&
++	    bfqq->entity.service <= 2 * bfqq->entity.budget / 10)
++		bfq_clear_bfqq_IO_bound(bfqq);
++
++	if (bfqd->low_latency && bfqq->wr_coeff == 1)
++		bfqq->last_wr_start_finish = jiffies;
++
++	if (bfqd->low_latency && bfqd->bfq_wr_max_softrt_rate > 0 &&
++	    RB_EMPTY_ROOT(&bfqq->sort_list)) {
++		/*
++		 * If we get here, and there are no outstanding requests,
++		 * then the request pattern is isochronous (see the comments
++		 * to the function bfq_bfqq_softrt_next_start()). Hence we
++		 * can compute soft_rt_next_start. If, instead, the queue
++		 * still has outstanding requests, then we have to wait
++		 * for the completion of all the outstanding requests to
++		 * discover whether the request pattern is actually
++		 * isochronous.
++		 */
++		if (bfqq->dispatched == 0)
++			bfqq->soft_rt_next_start =
++				bfq_bfqq_softrt_next_start(bfqd, bfqq);
++		else {
++			/*
++			 * The application is still waiting for the
++			 * completion of one or more requests:
++			 * prevent it from possibly being incorrectly
++			 * deemed as soft real-time by setting its
++			 * soft_rt_next_start to infinity. In fact,
++			 * without this assignment, the application
++			 * would be incorrectly deemed as soft
++			 * real-time if:
++			 * 1) it issued a new request before the
++			 *    completion of all its in-flight
++			 *    requests, and
++			 * 2) at that time, its soft_rt_next_start
++			 *    happened to be in the past.
++			 */
++			bfqq->soft_rt_next_start =
++				bfq_infinity_from_now(jiffies);
++			/*
++			 * Schedule an update of soft_rt_next_start to when
++			 * the task may be discovered to be isochronous.
++			 */
++			bfq_mark_bfqq_softrt_update(bfqq);
++		}
++	}
++
++	bfq_log_bfqq(bfqd, bfqq,
++		"expire (%d, slow %d, num_disp %d, idle_win %d)", reason,
++		slow, bfqq->dispatched, bfq_bfqq_idle_window(bfqq));
++
++	/*
++	 * Increase, decrease or leave budget unchanged according to
++	 * reason.
++	 */
++	__bfq_bfqq_recalc_budget(bfqd, bfqq, reason);
++	__bfq_bfqq_expire(bfqd, bfqq);
++}
++
++/*
++ * Budget timeout is not implemented through a dedicated timer, but
++ * just checked on request arrivals and completions, as well as on
++ * idle timer expirations.
++ */
++static bool bfq_bfqq_budget_timeout(struct bfq_queue *bfqq)
++{
++	if (bfq_bfqq_budget_new(bfqq) ||
++	    time_before(jiffies, bfqq->budget_timeout))
++		return false;
++	return true;
++}
++
++/*
++ * If we expire a queue that is waiting for the arrival of a new
++ * request, we may prevent the fictitious timestamp back-shifting that
++ * allows the guarantees of the queue to be preserved (see [1] for
++ * this tricky aspect). Hence we return true only if this condition
++ * does not hold, or if the queue is slow enough to deserve only to be
++ * kicked off for preserving a high throughput.
++ */
++static bool bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq)
++{
++	bfq_log_bfqq(bfqq->bfqd, bfqq,
++		"may_budget_timeout: wait_request %d left %d timeout %d",
++		bfq_bfqq_wait_request(bfqq),
++			bfq_bfqq_budget_left(bfqq) >=  bfqq->entity.budget / 3,
++		bfq_bfqq_budget_timeout(bfqq));
++
++	return (!bfq_bfqq_wait_request(bfqq) ||
++		bfq_bfqq_budget_left(bfqq) >=  bfqq->entity.budget / 3)
++		&&
++		bfq_bfqq_budget_timeout(bfqq);
++}
++
++/*
++ * For a queue that becomes empty, device idling is allowed only if
++ * this function returns true for that queue. As a consequence, since
++ * device idling plays a critical role for both throughput boosting
++ * and service guarantees, the return value of this function plays a
++ * critical role as well.
++ *
++ * In a nutshell, this function returns true only if idling is
++ * beneficial for throughput or, even if detrimental for throughput,
++ * idling is however necessary to preserve service guarantees (low
++ * latency, desired throughput distribution, ...). In particular, on
++ * NCQ-capable devices, this function tries to return false, so as to
++ * help keep the drives' internal queues full, whenever this helps the
++ * device boost the throughput without causing any service-guarantee
++ * issue.
++ *
++ * In more detail, the return value of this function is obtained by,
++ * first, computing a number of boolean variables that take into
++ * account throughput and service-guarantee issues, and, then,
++ * combining these variables in a logical expression. Most of the
++ * issues taken into account are not trivial. We discuss these issues
++ * while introducing the variables.
++ */
++static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
++{
++	struct bfq_data *bfqd = bfqq->bfqd;
++	bool idling_boosts_thr, idling_boosts_thr_without_issues,
++		all_queues_seeky, on_hdd_and_not_all_queues_seeky,
++		idling_needed_for_service_guarantees,
++		asymmetric_scenario;
++
++	/*
++	 * The next variable takes into account the cases where idling
++	 * boosts the throughput.
++	 *
++	 * The value of the variable is computed considering, first, that
++	 * idling is virtually always beneficial for the throughput if:
++	 * (a) the device is not NCQ-capable, or
++	 * (b) regardless of the presence of NCQ, the device is rotational
++	 *     and the request pattern for bfqq is I/O-bound and sequential.
++	 *
++	 * Secondly, and in contrast to the above item (b), idling an
++	 * NCQ-capable flash-based device would not boost the
++	 * throughput even with sequential I/O; rather it would lower
++	 * the throughput in proportion to how fast the device
++	 * is. Accordingly, the next variable is true if any of the
++	 * above conditions (a) and (b) is true, and, in particular,
++	 * happens to be false if bfqd is an NCQ-capable flash-based
++	 * device.
++	 */
++	idling_boosts_thr = !bfqd->hw_tag ||
++		(!blk_queue_nonrot(bfqd->queue) && bfq_bfqq_IO_bound(bfqq) &&
++		 bfq_bfqq_idle_window(bfqq));
++
++	/*
++	 * The value of the next variable,
++	 * idling_boosts_thr_without_issues, is equal to that of
++	 * idling_boosts_thr, unless a special case holds. In this
++	 * special case, described below, idling may cause problems to
++	 * weight-raised queues.
++	 *
++	 * When the request pool is saturated (e.g., in the presence
++	 * of write hogs), if the processes associated with
++	 * non-weight-raised queues ask for requests at a lower rate,
++	 * then processes associated with weight-raised queues have a
++	 * higher probability to get a request from the pool
++	 * immediately (or at least soon) when they need one. Thus
++	 * they have a higher probability to actually get a fraction
++	 * of the device throughput proportional to their high
++	 * weight. This is especially true with NCQ-capable drives,
++	 * which enqueue several requests in advance, and further
++	 * reorder internally-queued requests.
++	 *
++	 * For this reason, we force to false the value of
++	 * idling_boosts_thr_without_issues if there are weight-raised
++	 * busy queues. In this case, and if bfqq is not weight-raised,
++	 * this guarantees that the device is not idled for bfqq (if,
++	 * instead, bfqq is weight-raised, then idling will be
++	 * guaranteed by another variable, see below). Combined with
++	 * the timestamping rules of BFQ (see [1] for details), this
++	 * behavior causes bfqq, and hence any sync non-weight-raised
++	 * queue, to get a lower number of requests served, and thus
++	 * to ask for a lower number of requests from the request
++	 * pool, before the busy weight-raised queues get served
++	 * again. This often mitigates starvation problems in the
++	 * presence of heavy write workloads and NCQ, thereby
++	 * guaranteeing a higher application and system responsiveness
++	 * in these hostile scenarios.
++	 */
++	idling_boosts_thr_without_issues = idling_boosts_thr &&
++		bfqd->wr_busy_queues == 0;
++
++	/*
++	 * There are then two cases where idling must be performed not
++	 * for throughput concerns, but to preserve service
++	 * guarantees. In the description of these cases, we say, for
++	 * short, that a queue is sequential/random if the process
++	 * associated to the queue issues sequential/random requests
++	 * (in the second case the queue may be tagged as seeky or
++	 * even constantly_seeky).
++	 *
++	 * To introduce the first case, we note that, since
++	 * bfq_bfqq_idle_window(bfqq) is false if the device is
++	 * NCQ-capable and bfqq is random (see
++	 * bfq_update_idle_window()), then, from the above two
++	 * assignments it follows that
++	 * idling_boosts_thr_without_issues is false if the device is
++	 * NCQ-capable and bfqq is random. Therefore, for this case,
++	 * device idling would never be allowed if we used just
++	 * idling_boosts_thr_without_issues to decide whether to allow
++	 * it. And, beneficially, this would imply that throughput
++	 * would always be boosted also with random I/O on NCQ-capable
++	 * HDDs.
++	 *
++	 * But we must be careful on this point, to avoid an unfair
++	 * treatment for bfqq. In fact, because of the same above
++	 * assignments, idling_boosts_thr_without_issues is, on the
++	 * other hand, true if 1) the device is an HDD and bfqq is
++	 * sequential, and 2) there are no busy weight-raised
++	 * queues. As a consequence, if we used just
++	 * idling_boosts_thr_without_issues to decide whether to idle
++	 * the device, then with an HDD we might easily bump into a
++	 * scenario where queues that are sequential and I/O-bound
++	 * would enjoy idling, whereas random queues would not. The
++	 * latter might then get a low share of the device throughput,
++	 * simply because the former would get many requests served
++	 * after being set as in service, while the latter would not.
++	 *
++	 * To address this issue, we start by setting to true a
++	 * sentinel variable, on_hdd_and_not_all_queues_seeky, if the
++	 * device is rotational and not all queues with pending or
++	 * in-flight requests are constantly seeky (i.e., there are
++	 * active sequential queues, and bfqq might then be mistreated
++	 * if it does not enjoy idling because it is random).
++	 */
++	all_queues_seeky = bfq_bfqq_constantly_seeky(bfqq) &&
++			   bfqd->busy_in_flight_queues ==
++			   bfqd->const_seeky_busy_in_flight_queues;
++
++	on_hdd_and_not_all_queues_seeky =
++		!blk_queue_nonrot(bfqd->queue) && !all_queues_seeky;
++
++	/*
++	 * To introduce the second case where idling needs to be
++	 * performed to preserve service guarantees, we can note that
++	 * allowing the drive to enqueue more than one request at a
++	 * time, and hence delegating de facto final scheduling
++	 * decisions to the drive's internal scheduler, causes loss of
++	 * control on the actual request service order. In particular,
++	 * the critical situation is when requests from different
++	 * processes happen to be present, at the same time, in the
++	 * internal queue(s) of the drive. In such a situation, the
++	 * drive, by deciding the service order of the
++	 * internally-queued requests, does determine also the actual
++	 * throughput distribution among these processes. But the
++	 * drive typically has no notion or concern about per-process
++	 * throughput distribution, and makes its decisions only on a
++	 * per-request basis. Therefore, the service distribution
++	 * enforced by the drive's internal scheduler is likely to
++	 * coincide with the desired device-throughput distribution
++	 * only in a completely symmetric scenario where:
++	 * (i)  each of these processes must get the same throughput as
++	 *      the others;
++	 * (ii) all these processes have the same I/O pattern
++	 *      (either sequential or random).
++	 * In fact, in such a scenario, the drive will tend to treat
++	 * the requests of each of these processes in about the same
++	 * way as the requests of the others, and thus to provide
++	 * each of these processes with about the same throughput
++	 * (which is exactly the desired throughput distribution). In
++	 * contrast, in any asymmetric scenario, device idling is
++	 * certainly needed to guarantee that bfqq receives its
++	 * assigned fraction of the device throughput (see [1] for
++	 * details).
++	 *
++	 * We address this issue by controlling, actually, only the
++	 * symmetry sub-condition (i), i.e., provided that
++	 * sub-condition (i) holds, idling is not performed,
++	 * regardless of whether sub-condition (ii) holds. In other
++	 * words, only if sub-condition (i) holds, then idling is
++	 * allowed, and the device tends to be prevented from queueing
++	 * many requests, possibly of several processes. The reason
++	 * for not controlling also sub-condition (ii) is that, first,
++	 * in the case of an HDD, the asymmetry in terms of types of
++	 * I/O patterns is already taken into account in the above
++	 * sentinel variable
++	 * on_hdd_and_not_all_queues_seeky. Secondly, in the case of a
++	 * flash-based device, we prefer however to privilege
++	 * throughput (and idling lowers throughput for this type of
++	 * devices), for the following reasons:
++	 * 1) differently from HDDs, the service time of random
++	 *    requests is not orders of magnitudes lower than the service
++	 *    time of sequential requests; thus, even if processes doing
++	 *    sequential I/O get a preferential treatment with respect to
++	 *    others doing random I/O, the consequences are not as
++	 *    dramatic as with HDDs;
++	 * 2) if a process doing random I/O does need strong
++	 *    throughput guarantees, it is hopefully already being
++	 *    weight-raised, or the user is likely to have assigned it a
++	 *    higher weight than the other processes (and thus
++	 *    sub-condition (i) is likely to be false, which triggers
++	 *    idling).
++	 *
++	 * According to the above considerations, the next variable is
++	 * true (only) if sub-condition (i) holds. To compute the
++	 * value of this variable, we not only use the return value of
++	 * the function bfq_symmetric_scenario(), but also check
++	 * whether bfqq is being weight-raised, because
++	 * bfq_symmetric_scenario() does not take into account also
++	 * weight-raised queues (see comments to
++	 * bfq_weights_tree_add()).
++	 *
++	 * As a side note, it is worth considering that the above
++	 * device-idling countermeasures may however fail in the
++	 * following unlucky scenario: if idling is (correctly)
++	 * disabled in a time period during which all symmetry
++	 * sub-conditions hold, and hence the device is allowed to
++	 * enqueue many requests, but at some later point in time some
++	 * sub-condition ceases to hold, then it may become impossible
++	 * to let requests be served in the desired order until all
++	 * the requests already queued in the device have been served.
++	 */
++	asymmetric_scenario = bfqq->wr_coeff > 1 ||
++		!bfq_symmetric_scenario(bfqd);
++
++	/*
++	 * Finally, there is a case where maximizing throughput is the
++	 * best choice even if it may cause unfairness toward
++	 * bfqq. Such a case is when bfqq became active in a burst of
++	 * queue activations. Queues that became active during a large
++	 * burst benefit only from throughput, as discussed in the
++	 * comments to bfq_handle_burst. Thus, if bfqq became active
++	 * in a burst and not idling the device maximizes throughput,
++	 * then the device must not be idled, because not idling the
++	 * device provides bfqq and all other queues in the burst with
++	 * maximum benefit. Combining this and the two cases above, we
++	 * can now establish when idling is actually needed to
++	 * preserve service guarantees.
++	 */
++	idling_needed_for_service_guarantees =
++		(on_hdd_and_not_all_queues_seeky || asymmetric_scenario) &&
++		!bfq_bfqq_in_large_burst(bfqq);
++
++	/*
++	 * We have now all the components we need to compute the return
++	 * value of the function, which is true only if both the following
++	 * conditions hold:
++	 * 1) bfqq is sync, because idling makes sense only for sync queues;
++	 * 2) idling either boosts the throughput (without issues), or
++	 *    is necessary to preserve service guarantees.
++	 */
++	return bfq_bfqq_sync(bfqq) &&
++		(idling_boosts_thr_without_issues ||
++		 idling_needed_for_service_guarantees);
++}
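
The discussion above ultimately combines a handful of booleans. As a compact, hedged restatement of that logical structure only (a standalone sketch in which every input is passed in explicitly instead of being read from bfqd/bfqq, so names and the example scenario are illustrative):

#include <stdio.h>
#include <stdbool.h>

static bool may_idle(bool sync, bool hw_tag, bool nonrot, bool io_bound,
		     bool idle_window, bool wr_busy_queues,
		     bool all_queues_seeky, bool weight_raised,
		     bool symmetric, bool in_large_burst)
{
	bool idling_boosts_thr = !hw_tag ||
				 (!nonrot && io_bound && idle_window);
	bool idling_boosts_thr_without_issues = idling_boosts_thr &&
						!wr_busy_queues;
	bool on_hdd_and_not_all_queues_seeky = !nonrot && !all_queues_seeky;
	bool asymmetric_scenario = weight_raised || !symmetric;
	bool idling_needed_for_service_guarantees =
		(on_hdd_and_not_all_queues_seeky || asymmetric_scenario) &&
		!in_large_burst;

	return sync && (idling_boosts_thr_without_issues ||
			idling_needed_for_service_guarantees);
}

int main(void)
{
	/* sync, sequential queue on an NCQ-capable SSD, symmetric scenario */
	printf("idle? %d\n", may_idle(true, true, true, true, false,
				      false, false, false, true, false));
	return 0;
}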
++
++/*
++ * If the in-service queue is empty but the function bfq_bfqq_may_idle
++ * returns true, then:
++ * 1) the queue must remain in service and cannot be expired, and
++ * 2) the device must be idled to wait for the possible arrival of a new
++ *    request for the queue.
++ * See the comments to the function bfq_bfqq_may_idle for the reasons
++ * why performing device idling is the best choice to boost the throughput
++ * and preserve service guarantees when bfq_bfqq_may_idle itself
++ * returns true.
++ */
++static bool bfq_bfqq_must_idle(struct bfq_queue *bfqq)
++{
++	struct bfq_data *bfqd = bfqq->bfqd;
++
++	return RB_EMPTY_ROOT(&bfqq->sort_list) && bfqd->bfq_slice_idle != 0 &&
++	       bfq_bfqq_may_idle(bfqq);
++}
++
++/*
++ * Select a queue for service.  If we have a current queue in service,
++ * check whether to continue servicing it, or retrieve and set a new one.
++ */
++static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq;
++	struct request *next_rq;
++	enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
++
++	bfqq = bfqd->in_service_queue;
++	if (!bfqq)
++		goto new_queue;
++
++	bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
++
++	if (bfq_may_expire_for_budg_timeout(bfqq) &&
++	    !timer_pending(&bfqd->idle_slice_timer) &&
++	    !bfq_bfqq_must_idle(bfqq))
++		goto expire;
++
++	next_rq = bfqq->next_rq;
++	/*
++	 * If bfqq has requests queued and it has enough budget left to
++	 * serve them, keep the queue, otherwise expire it.
++	 */
++	if (next_rq) {
++		if (bfq_serv_to_charge(next_rq, bfqq) >
++			bfq_bfqq_budget_left(bfqq)) {
++			reason = BFQ_BFQQ_BUDGET_EXHAUSTED;
++			goto expire;
++		} else {
++			/*
++			 * The idle timer may be pending because we may
++			 * not disable disk idling even when a new request
++			 * arrives.
++			 */
++			if (timer_pending(&bfqd->idle_slice_timer)) {
++				/*
++				 * If we get here: 1) at least a new request
++				 * has arrived but we have not disabled the
++				 * timer because the request was too small,
++				 * 2) then the block layer has unplugged
++				 * the device, causing the dispatch to be
++				 * invoked.
++				 *
++				 * Since the device is unplugged, now the
++				 * requests are probably large enough to
++				 * provide a reasonable throughput.
++				 * So we disable idling.
++				 */
++				bfq_clear_bfqq_wait_request(bfqq);
++				del_timer(&bfqd->idle_slice_timer);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++				bfqg_stats_update_idle_time(bfqq_group(bfqq));
++#endif
++			}
++			goto keep_queue;
++		}
++	}
++
++	/*
++	 * No requests pending. However, if the in-service queue is idling
++	 * for a new request, or has requests waiting for a completion and
++	 * may idle after their completion, then keep it anyway.
++	 */
++	if (timer_pending(&bfqd->idle_slice_timer) ||
++	    (bfqq->dispatched != 0 && bfq_bfqq_may_idle(bfqq))) {
++		bfqq = NULL;
++		goto keep_queue;
++	}
++
++	reason = BFQ_BFQQ_NO_MORE_REQUESTS;
++expire:
++	bfq_bfqq_expire(bfqd, bfqq, false, reason);
++new_queue:
++	bfqq = bfq_set_in_service_queue(bfqd);
++	bfq_log(bfqd, "select_queue: new queue %d returned",
++		bfqq ? bfqq->pid : 0);
++keep_queue:
++	return bfqq;
++}
++
++static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	if (bfqq->wr_coeff > 1) { /* queue is being weight-raised */
++		bfq_log_bfqq(bfqd, bfqq,
++			"raising period dur %u/%u msec, old coeff %u, w %d(%d)",
++			jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
++			jiffies_to_msecs(bfqq->wr_cur_max_time),
++			bfqq->wr_coeff,
++			bfqq->entity.weight, bfqq->entity.orig_weight);
++
++		BUG_ON(bfqq != bfqd->in_service_queue && entity->weight !=
++		       entity->orig_weight * bfqq->wr_coeff);
++		if (entity->prio_changed)
++			bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
++
++		/*
++		 * If the queue was activated in a burst, or
++		 * too much time has elapsed from the beginning
++		 * of this weight-raising period, then end weight
++		 * raising.
++		 */
++		if (bfq_bfqq_in_large_burst(bfqq) ||
++		    time_is_before_jiffies(bfqq->last_wr_start_finish +
++					   bfqq->wr_cur_max_time)) {
++			bfqq->last_wr_start_finish = jiffies;
++			bfq_log_bfqq(bfqd, bfqq,
++				     "wrais ending at %lu, rais_max_time %u",
++				     bfqq->last_wr_start_finish,
++				     jiffies_to_msecs(bfqq->wr_cur_max_time));
++			bfq_bfqq_end_wr(bfqq);
++		}
++	}
++	/* Update weight both if it must be raised and if it must be lowered */
++	if ((entity->weight > entity->orig_weight) != (bfqq->wr_coeff > 1))
++		__bfq_entity_update_weight_prio(
++			bfq_entity_service_tree(entity),
++			entity);
++}
++
++/*
++ * Dispatch one request from bfqq, moving it to the request queue
++ * dispatch list.
++ */
++static int bfq_dispatch_request(struct bfq_data *bfqd,
++				struct bfq_queue *bfqq)
++{
++	int dispatched = 0;
++	struct request *rq;
++	unsigned long service_to_charge;
++
++	BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
++
++	/* Follow expired path, else get first next available. */
++	rq = bfq_check_fifo(bfqq);
++	if (!rq)
++		rq = bfqq->next_rq;
++	service_to_charge = bfq_serv_to_charge(rq, bfqq);
++
++	if (service_to_charge > bfq_bfqq_budget_left(bfqq)) {
++		/*
++		 * This may happen if the next rq is chosen in fifo order
++		 * instead of sector order. The budget is properly
++		 * dimensioned to be always sufficient to serve the next
++		 * request only if it is chosen in sector order. The reason
++		 * is that it would be quite inefficient and of little use
++		 * to always make sure that the budget is large enough to
++		 * serve even the possible next rq in fifo order.
++		 * In fact, requests are seldom served in fifo order.
++		 *
++		 * Expire the queue for budget exhaustion, and make sure
++		 * that the next act_budget is enough to serve the next
++		 * request, even if it comes from the fifo expired path.
++		 */
++		bfqq->next_rq = rq;
++		/*
++		 * Since this dispatch has failed, make sure that
++		 * a new one will be performed
++		 */
++		if (!bfqd->rq_in_driver)
++			bfq_schedule_dispatch(bfqd);
++		goto expire;
++	}
++
++	/* Finally, insert request into driver dispatch list. */
++	bfq_bfqq_served(bfqq, service_to_charge);
++	bfq_dispatch_insert(bfqd->queue, rq);
++
++	bfq_update_wr_data(bfqd, bfqq);
++
++	bfq_log_bfqq(bfqd, bfqq,
++			"dispatched %u sec req (%llu), budg left %d",
++			blk_rq_sectors(rq),
++			(unsigned long long) blk_rq_pos(rq),
++			bfq_bfqq_budget_left(bfqq));
++
++	dispatched++;
++
++	if (!bfqd->in_service_bic) {
++		atomic_long_inc(&RQ_BIC(rq)->icq.ioc->refcount);
++		bfqd->in_service_bic = RQ_BIC(rq);
++	}
++
++	if (bfqd->busy_queues > 1 && ((!bfq_bfqq_sync(bfqq) &&
++	    dispatched >= bfqd->bfq_max_budget_async_rq) ||
++	    bfq_class_idle(bfqq)))
++		goto expire;
++
++	return dispatched;
++
++expire:
++	bfq_bfqq_expire(bfqd, bfqq, false, BFQ_BFQQ_BUDGET_EXHAUSTED);
++	return dispatched;
++}
++
++static int __bfq_forced_dispatch_bfqq(struct bfq_queue *bfqq)
++{
++	int dispatched = 0;
++
++	while (bfqq->next_rq) {
++		bfq_dispatch_insert(bfqq->bfqd->queue, bfqq->next_rq);
++		dispatched++;
++	}
++
++	BUG_ON(!list_empty(&bfqq->fifo));
++	return dispatched;
++}
++
++/*
++ * Drain our current requests.
++ * Used for barriers and when switching io schedulers on-the-fly.
++ */
++static int bfq_forced_dispatch(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq, *n;
++	struct bfq_service_tree *st;
++	int dispatched = 0;
++
++	bfqq = bfqd->in_service_queue;
++	if (bfqq)
++		__bfq_bfqq_expire(bfqd, bfqq);
++
++	/*
++	 * Loop through classes, and be careful to leave the scheduler
++	 * in a consistent state, as feedback mechanisms and vtime
++	 * updates cannot be disabled during the process.
++	 */
++	list_for_each_entry_safe(bfqq, n, &bfqd->active_list, bfqq_list) {
++		st = bfq_entity_service_tree(&bfqq->entity);
++
++		dispatched += __bfq_forced_dispatch_bfqq(bfqq);
++		bfqq->max_budget = bfq_max_budget(bfqd);
++
++		bfq_forget_idle(st);
++	}
++
++	BUG_ON(bfqd->busy_queues != 0);
++
++	return dispatched;
++}
++
++static int bfq_dispatch_requests(struct request_queue *q, int force)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_queue *bfqq;
++	int max_dispatch;
++
++	bfq_log(bfqd, "dispatch requests: %d busy queues", bfqd->busy_queues);
++	if (bfqd->busy_queues == 0)
++		return 0;
++
++	if (unlikely(force))
++		return bfq_forced_dispatch(bfqd);
++
++	bfqq = bfq_select_queue(bfqd);
++	if (!bfqq)
++		return 0;
++
++	if (bfq_class_idle(bfqq))
++		max_dispatch = 1;
++
++	if (!bfq_bfqq_sync(bfqq))
++		max_dispatch = bfqd->bfq_max_budget_async_rq;
++
++	if (!bfq_bfqq_sync(bfqq) && bfqq->dispatched >= max_dispatch) {
++		if (bfqd->busy_queues > 1)
++			return 0;
++		if (bfqq->dispatched >= 4 * max_dispatch)
++			return 0;
++	}
++
++	if (bfqd->sync_flight != 0 && !bfq_bfqq_sync(bfqq))
++		return 0;
++
++	bfq_clear_bfqq_wait_request(bfqq);
++	BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++
++	if (!bfq_dispatch_request(bfqd, bfqq))
++		return 0;
++
++	bfq_log_bfqq(bfqd, bfqq, "dispatched %s request",
++			bfq_bfqq_sync(bfqq) ? "sync" : "async");
++
++	return 1;
++}
++
++/*
++ * Task holds one reference to the queue, dropped when task exits.  Each rq
++ * in-flight on this queue also holds a reference, dropped when rq is freed.
++ *
++ * Queue lock must be held here.
++ */
++static void bfq_put_queue(struct bfq_queue *bfqq)
++{
++	struct bfq_data *bfqd = bfqq->bfqd;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	struct bfq_group *bfqg = bfqq_group(bfqq);
++#endif
++
++	BUG_ON(atomic_read(&bfqq->ref) <= 0);
++
++	bfq_log_bfqq(bfqd, bfqq, "put_queue: %p %d", bfqq,
++		     atomic_read(&bfqq->ref));
++	if (!atomic_dec_and_test(&bfqq->ref))
++		return;
++
++	BUG_ON(rb_first(&bfqq->sort_list));
++	BUG_ON(bfqq->allocated[READ] + bfqq->allocated[WRITE] != 0);
++	BUG_ON(bfqq->entity.tree);
++	BUG_ON(bfq_bfqq_busy(bfqq));
++	BUG_ON(bfqd->in_service_queue == bfqq);
++
++	if (bfq_bfqq_sync(bfqq))
++		/*
++		 * The fact that this queue is being destroyed does not
++		 * invalidate the fact that it may have been activated
++		 * during the current burst. As a consequence, although
++		 * the queue no longer exists, and hence must be removed
++		 * from the burst list if it is there, the burst size
++		 * must not be decremented.
++		 */
++		hlist_del_init(&bfqq->burst_list_node);
++
++	bfq_log_bfqq(bfqd, bfqq, "put_queue: %p freed", bfqq);
++
++	kmem_cache_free(bfq_pool, bfqq);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	bfqg_put(bfqg);
++#endif
++}
++
++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	if (bfqq == bfqd->in_service_queue) {
++		__bfq_bfqq_expire(bfqd, bfqq);
++		bfq_schedule_dispatch(bfqd);
++	}
++
++	bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq,
++		     atomic_read(&bfqq->ref));
++
++	bfq_put_queue(bfqq);
++}
++
++static void bfq_init_icq(struct io_cq *icq)
++{
++	struct bfq_io_cq *bic = icq_to_bic(icq);
++
++	bic->ttime.last_end_request = jiffies;
++}
++
++static void bfq_exit_icq(struct io_cq *icq)
++{
++	struct bfq_io_cq *bic = icq_to_bic(icq);
++	struct bfq_data *bfqd = bic_to_bfqd(bic);
++
++	if (bic->bfqq[BLK_RW_ASYNC]) {
++		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_ASYNC]);
++		bic->bfqq[BLK_RW_ASYNC] = NULL;
++	}
++
++	if (bic->bfqq[BLK_RW_SYNC]) {
++		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
++		bic->bfqq[BLK_RW_SYNC] = NULL;
++	}
++}
++
++/*
++ * Update the entity prio values; note that the new values will not
++ * be used until the next (re)activation.
++ */
++static void
++bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++{
++	struct task_struct *tsk = current;
++	int ioprio_class;
++
++	ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
++	switch (ioprio_class) {
++	default:
++		dev_err(bfqq->bfqd->queue->backing_dev_info.dev,
++			"bfq: bad prio class %d\n", ioprio_class);
++	case IOPRIO_CLASS_NONE:
++		/*
++		 * No prio set, inherit CPU scheduling settings.
++		 */
++		bfqq->new_ioprio = task_nice_ioprio(tsk);
++		bfqq->new_ioprio_class = task_nice_ioclass(tsk);
++		break;
++	case IOPRIO_CLASS_RT:
++		bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++		bfqq->new_ioprio_class = IOPRIO_CLASS_RT;
++		break;
++	case IOPRIO_CLASS_BE:
++		bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++		bfqq->new_ioprio_class = IOPRIO_CLASS_BE;
++		break;
++	case IOPRIO_CLASS_IDLE:
++		bfqq->new_ioprio_class = IOPRIO_CLASS_IDLE;
++		bfqq->new_ioprio = 7;
++		bfq_clear_bfqq_idle_window(bfqq);
++		break;
++	}
++
++	if (bfqq->new_ioprio < 0 || bfqq->new_ioprio >= IOPRIO_BE_NR) {
++		pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n",
++			bfqq->new_ioprio);
++		BUG();
++	}
++
++	bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio);
++	bfqq->entity.prio_changed = 1;
++}
++
++static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
++{
++	struct bfq_data *bfqd;
++	struct bfq_queue *bfqq, *new_bfqq;
++	unsigned long uninitialized_var(flags);
++	int ioprio = bic->icq.ioc->ioprio;
++
++	bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data),
++				   &flags);
++	/*
++	 * This condition may trigger on a newly created bic; be sure to
++	 * drop the lock before returning.
++	 */
++	if (unlikely(!bfqd) || likely(bic->ioprio == ioprio))
++		goto out;
++
++	bic->ioprio = ioprio;
++
++	bfqq = bic->bfqq[BLK_RW_ASYNC];
++	if (bfqq) {
++		new_bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic,
++					 GFP_ATOMIC);
++		if (new_bfqq) {
++			bic->bfqq[BLK_RW_ASYNC] = new_bfqq;
++			bfq_log_bfqq(bfqd, bfqq,
++				     "check_ioprio_change: bfqq %p %d",
++				     bfqq, atomic_read(&bfqq->ref));
++			bfq_put_queue(bfqq);
++		}
++	}
++
++	bfqq = bic->bfqq[BLK_RW_SYNC];
++	if (bfqq)
++		bfq_set_next_ioprio_data(bfqq, bic);
++
++out:
++	bfq_put_bfqd_unlock(bfqd, &flags);
++}
++
++static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			  struct bfq_io_cq *bic, pid_t pid, int is_sync)
++{
++	RB_CLEAR_NODE(&bfqq->entity.rb_node);
++	INIT_LIST_HEAD(&bfqq->fifo);
++	INIT_HLIST_NODE(&bfqq->burst_list_node);
++
++	atomic_set(&bfqq->ref, 0);
++	bfqq->bfqd = bfqd;
++
++	if (bic)
++		bfq_set_next_ioprio_data(bfqq, bic);
++
++	if (is_sync) {
++		if (!bfq_class_idle(bfqq))
++			bfq_mark_bfqq_idle_window(bfqq);
++		bfq_mark_bfqq_sync(bfqq);
++	} else
++		bfq_clear_bfqq_sync(bfqq);
++	bfq_mark_bfqq_IO_bound(bfqq);
++
++	/* Tentative initial value to trade off between throughput and latency */
++	bfqq->max_budget = (2 * bfq_max_budget(bfqd)) / 3;
++	bfqq->pid = pid;
++
++	bfqq->wr_coeff = 1;
++	bfqq->last_wr_start_finish = 0;
++	/*
++	 * Set to the value for which bfqq will not be deemed as
++	 * soft rt when it becomes backlogged.
++	 */
++	bfqq->soft_rt_next_start = bfq_infinity_from_now(jiffies);
++}
++
++static struct bfq_queue *bfq_find_alloc_queue(struct bfq_data *bfqd,
++					      struct bio *bio, int is_sync,
++					      struct bfq_io_cq *bic,
++					      gfp_t gfp_mask)
++{
++	struct bfq_group *bfqg;
++	struct bfq_queue *bfqq, *new_bfqq = NULL;
++	struct blkcg *blkcg;
++
++retry:
++	rcu_read_lock();
++
++	blkcg = bio_blkcg(bio);
++	bfqg = bfq_find_alloc_group(bfqd, blkcg);
++	/* bic always exists here */
++	bfqq = bic_to_bfqq(bic, is_sync);
++
++	/*
++	 * Always try a new alloc if we fall back to the OOM bfqq
++	 * originally, since it should just be a temporary situation.
++	 */
++	if (!bfqq || bfqq == &bfqd->oom_bfqq) {
++		bfqq = NULL;
++		if (new_bfqq) {
++			bfqq = new_bfqq;
++			new_bfqq = NULL;
++		} else if (gfpflags_allow_blocking(gfp_mask)) {
++			rcu_read_unlock();
++			spin_unlock_irq(bfqd->queue->queue_lock);
++			new_bfqq = kmem_cache_alloc_node(bfq_pool,
++					gfp_mask | __GFP_ZERO,
++					bfqd->queue->node);
++			spin_lock_irq(bfqd->queue->queue_lock);
++			if (new_bfqq)
++				goto retry;
++		} else {
++			bfqq = kmem_cache_alloc_node(bfq_pool,
++					gfp_mask | __GFP_ZERO,
++					bfqd->queue->node);
++		}
++
++		if (bfqq) {
++			bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
++				      is_sync);
++			bfq_init_entity(&bfqq->entity, bfqg);
++			bfq_log_bfqq(bfqd, bfqq, "allocated");
++		} else {
++			bfqq = &bfqd->oom_bfqq;
++			bfq_log_bfqq(bfqd, bfqq, "using oom bfqq");
++		}
++	}
++
++	if (new_bfqq)
++		kmem_cache_free(bfq_pool, new_bfqq);
++
++	rcu_read_unlock();
++
++	return bfqq;
++}
++
++static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
++					       struct bfq_group *bfqg,
++					       int ioprio_class, int ioprio)
++{
++	switch (ioprio_class) {
++	case IOPRIO_CLASS_RT:
++		return &bfqg->async_bfqq[0][ioprio];
++	case IOPRIO_CLASS_NONE:
++		ioprio = IOPRIO_NORM;
++		/* fall through */
++	case IOPRIO_CLASS_BE:
++		return &bfqg->async_bfqq[1][ioprio];
++	case IOPRIO_CLASS_IDLE:
++		return &bfqg->async_idle_bfqq;
++	default:
++		BUG();
++	}
++}
++
++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
++				       struct bio *bio, int is_sync,
++				       struct bfq_io_cq *bic, gfp_t gfp_mask)
++{
++	const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++	const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
++	struct bfq_queue **async_bfqq = NULL;
++	struct bfq_queue *bfqq = NULL;
++
++	if (!is_sync) {
++		struct blkcg *blkcg;
++		struct bfq_group *bfqg;
++
++		rcu_read_lock();
++		blkcg = bio_blkcg(bio);
++		rcu_read_unlock();
++		bfqg = bfq_find_alloc_group(bfqd, blkcg);
++		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
++						  ioprio);
++		bfqq = *async_bfqq;
++	}
++
++	if (!bfqq)
++		bfqq = bfq_find_alloc_queue(bfqd, bio, is_sync, bic, gfp_mask);
++
++	/*
++	 * Pin the queue now that it's allocated, scheduler exit will
++	 * prune it.
++	 */
++	if (!is_sync && !(*async_bfqq)) {
++		atomic_inc(&bfqq->ref);
++		bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		*async_bfqq = bfqq;
++	}
++
++	atomic_inc(&bfqq->ref);
++	bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq,
++		     atomic_read(&bfqq->ref));
++	return bfqq;
++}
++
++static void bfq_update_io_thinktime(struct bfq_data *bfqd,
++				    struct bfq_io_cq *bic)
++{
++	unsigned long elapsed = jiffies - bic->ttime.last_end_request;
++	unsigned long ttime = min(elapsed, 2UL * bfqd->bfq_slice_idle);
++
++	bic->ttime.ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8;
++	bic->ttime.ttime_total = (7*bic->ttime.ttime_total + 256*ttime) / 8;
++	bic->ttime.ttime_mean = (bic->ttime.ttime_total + 128) /
++				bic->ttime.ttime_samples;
++}
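++
++/*
++ * Illustrative note added in editing, not part of the original BFQ patch:
++ * the three statements above keep a fixed-point (scale 256) exponential
++ * moving average with decay 7/8.  A minimal user-space sketch of the same
++ * arithmetic, assuming a constant per-request think time of 4 jiffies:
++ *
++ *	unsigned long samples = 0, total = 0, i;
++ *
++ *	for (i = 0; i < 64; i++) {
++ *		samples = (7 * samples + 256) / 8;	// approaches 256
++ *		total = (7 * total + 256 * 4) / 8;	// approaches 256 * 4
++ *	}
++ *	// (total + 128) / samples == 4, i.e. ttime_mean tracks the recent
++ *	// per-request think time, in jiffies.
++ */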
++
++static void bfq_update_io_seektime(struct bfq_data *bfqd,
++				   struct bfq_queue *bfqq,
++				   struct request *rq)
++{
++	sector_t sdist;
++	u64 total;
++
++	if (bfqq->last_request_pos < blk_rq_pos(rq))
++		sdist = blk_rq_pos(rq) - bfqq->last_request_pos;
++	else
++		sdist = bfqq->last_request_pos - blk_rq_pos(rq);
++
++	/*
++	 * Don't allow the seek distance to get too large from the
++	 * odd fragment, pagein, etc.
++	 */
++	if (bfqq->seek_samples == 0) /* first request, not really a seek */
++		sdist = 0;
++	else if (bfqq->seek_samples <= 60) /* second & third seek */
++		sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*1024);
++	else
++		sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*64);
++
++	bfqq->seek_samples = (7*bfqq->seek_samples + 256) / 8;
++	bfqq->seek_total = (7*bfqq->seek_total + (u64)256*sdist) / 8;
++	total = bfqq->seek_total + (bfqq->seek_samples/2);
++	do_div(total, bfqq->seek_samples);
++	bfqq->seek_mean = (sector_t)total;
++
++	bfq_log_bfqq(bfqd, bfqq, "dist=%llu mean=%llu", (u64)sdist,
++			(u64)bfqq->seek_mean);
++}
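++
++/*
++ * Illustrative note added in editing, not part of the original BFQ patch:
++ * distances here are in 512-byte sectors, so the clamps above cap a single
++ * sample at roughly 4 * seek_mean plus 1 GiB (2*1024*1024 sectors) while
++ * only a few samples have been collected, and plus 64 MiB (2*1024*64
++ * sectors) afterwards.  The moving average then mirrors the think-time
++ * update: seek_mean ~= seek_total / seek_samples, rounded.
++ */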
++
++/*
++ * Disable idle window if the process thinks too long or seeks so much that
++ * it doesn't matter.
++ */
++static void bfq_update_idle_window(struct bfq_data *bfqd,
++				   struct bfq_queue *bfqq,
++				   struct bfq_io_cq *bic)
++{
++	int enable_idle;
++
++	/* Don't idle for async or idle io prio class. */
++	if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
++		return;
++
++	enable_idle = bfq_bfqq_idle_window(bfqq);
++
++	if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
++	    bfqd->bfq_slice_idle == 0 ||
++		(bfqd->hw_tag && BFQQ_SEEKY(bfqq) &&
++			bfqq->wr_coeff == 1))
++		enable_idle = 0;
++	else if (bfq_sample_valid(bic->ttime.ttime_samples)) {
++		if (bic->ttime.ttime_mean > bfqd->bfq_slice_idle &&
++			bfqq->wr_coeff == 1)
++			enable_idle = 0;
++		else
++			enable_idle = 1;
++	}
++	bfq_log_bfqq(bfqd, bfqq, "update_idle_window: enable_idle %d",
++		enable_idle);
++
++	if (enable_idle)
++		bfq_mark_bfqq_idle_window(bfqq);
++	else
++		bfq_clear_bfqq_idle_window(bfqq);
++}
++
++/*
++ * Called when a new fs request (rq) is added to bfqq.  Check if there's
++ * something we should do about it.
++ */
++static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			    struct request *rq)
++{
++	struct bfq_io_cq *bic = RQ_BIC(rq);
++
++	if (rq->cmd_flags & REQ_META)
++		bfqq->meta_pending++;
++
++	bfq_update_io_thinktime(bfqd, bic);
++	bfq_update_io_seektime(bfqd, bfqq, rq);
++	if (!BFQQ_SEEKY(bfqq) && bfq_bfqq_constantly_seeky(bfqq)) {
++		bfq_clear_bfqq_constantly_seeky(bfqq);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			BUG_ON(!bfqd->const_seeky_busy_in_flight_queues);
++			bfqd->const_seeky_busy_in_flight_queues--;
++		}
++	}
++	if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
++	    !BFQQ_SEEKY(bfqq))
++		bfq_update_idle_window(bfqd, bfqq, bic);
++
++	bfq_log_bfqq(bfqd, bfqq,
++		     "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
++		     bfq_bfqq_idle_window(bfqq), BFQQ_SEEKY(bfqq),
++		     (unsigned long long) bfqq->seek_mean);
++
++	bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
++
++	if (bfqq == bfqd->in_service_queue && bfq_bfqq_wait_request(bfqq)) {
++		bool small_req = bfqq->queued[rq_is_sync(rq)] == 1 &&
++				 blk_rq_sectors(rq) < 32;
++		bool budget_timeout = bfq_bfqq_budget_timeout(bfqq);
++
++		/*
++		 * There is just this request queued: if the request
++		 * is small and the queue is not to be expired, then
++		 * just exit.
++		 *
++		 * In this way, if the disk is being idled to wait for
++		 * a new request from the in-service queue, we avoid
++		 * unplugging the device and committing the disk to serve
++		 * just a small request. On the contrary, we wait for
++		 * the block layer to decide when to unplug the device:
++		 * hopefully, new requests will be merged to this one
++		 * quickly, then the device will be unplugged and
++		 * larger requests will be dispatched.
++		 */
++		if (small_req && !budget_timeout)
++			return;
++
++		/*
++		 * A large enough request arrived, or the queue is to
++		 * be expired: in both cases disk idling is to be
++		 * stopped, so clear wait_request flag and reset
++		 * timer.
++		 */
++		bfq_clear_bfqq_wait_request(bfqq);
++		del_timer(&bfqd->idle_slice_timer);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		bfqg_stats_update_idle_time(bfqq_group(bfqq));
++#endif
++
++		/*
++		 * The queue is not empty, because a new request just
++		 * arrived. Hence we can safely expire the queue, in
++		 * case of budget timeout, without risking that the
++		 * timestamps of the queue are not updated correctly.
++		 * See [1] for more details.
++		 */
++		if (budget_timeout)
++			bfq_bfqq_expire(bfqd, bfqq, false,
++					BFQ_BFQQ_BUDGET_TIMEOUT);
++
++		/*
++		 * Let the request rip immediately, or let a new queue be
++		 * selected if bfqq has just been expired.
++		 */
++		__blk_run_queue(bfqd->queue);
++	}
++}
++
++static void bfq_insert_request(struct request_queue *q, struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	assert_spin_locked(bfqd->queue->queue_lock);
++
++	bfq_add_request(rq);
++
++	rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
++	list_add_tail(&rq->queuelist, &bfqq->fifo);
++
++	bfq_rq_enqueued(bfqd, bfqq, rq);
++}
++
++static void bfq_update_hw_tag(struct bfq_data *bfqd)
++{
++	bfqd->max_rq_in_driver = max(bfqd->max_rq_in_driver,
++				     bfqd->rq_in_driver);
++
++	if (bfqd->hw_tag == 1)
++		return;
++
++	/*
++	 * This sample is valid if the number of outstanding requests
++	 * is large enough to allow a queueing behavior.  Note that the
++	 * sum is not exact, as it's not taking into account deactivated
++	 * requests.
++	 */
++	if (bfqd->rq_in_driver + bfqd->queued < BFQ_HW_QUEUE_THRESHOLD)
++		return;
++
++	if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES)
++		return;
++
++	bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD;
++	bfqd->max_rq_in_driver = 0;
++	bfqd->hw_tag_samples = 0;
++}
++
++static void bfq_completed_request(struct request_queue *q, struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_data *bfqd = bfqq->bfqd;
++	bool sync = bfq_bfqq_sync(bfqq);
++
++	bfq_log_bfqq(bfqd, bfqq, "completed one req with %u sects left (%d)",
++		     blk_rq_sectors(rq), sync);
++
++	bfq_update_hw_tag(bfqd);
++
++	BUG_ON(!bfqd->rq_in_driver);
++	BUG_ON(!bfqq->dispatched);
++	bfqd->rq_in_driver--;
++	bfqq->dispatched--;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	bfqg_stats_update_completion(bfqq_group(bfqq),
++				     rq_start_time_ns(rq),
++				     rq_io_start_time_ns(rq), rq->cmd_flags);
++#endif
++
++	if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
++		bfq_weights_tree_remove(bfqd, &bfqq->entity,
++					&bfqd->queue_weights_tree);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			BUG_ON(!bfqd->busy_in_flight_queues);
++			bfqd->busy_in_flight_queues--;
++			if (bfq_bfqq_constantly_seeky(bfqq)) {
++				BUG_ON(!bfqd->
++					const_seeky_busy_in_flight_queues);
++				bfqd->const_seeky_busy_in_flight_queues--;
++			}
++		}
++	}
++
++	if (sync) {
++		bfqd->sync_flight--;
++		RQ_BIC(rq)->ttime.last_end_request = jiffies;
++	}
++
++	/*
++	 * If we are waiting to discover whether the request pattern of the
++	 * task associated with the queue is actually isochronous, and
++	 * both requisites for this condition to hold are satisfied, then
++	 * compute soft_rt_next_start (see the comments to the function
++	 * bfq_bfqq_softrt_next_start()).
++	 */
++	if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 &&
++	    RB_EMPTY_ROOT(&bfqq->sort_list))
++		bfqq->soft_rt_next_start =
++			bfq_bfqq_softrt_next_start(bfqd, bfqq);
++
++	/*
++	 * If this is the in-service queue, check if it needs to be expired,
++	 * or if we want to idle in case it has no pending requests.
++	 */
++	if (bfqd->in_service_queue == bfqq) {
++		if (bfq_bfqq_budget_new(bfqq))
++			bfq_set_budget_timeout(bfqd);
++
++		if (bfq_bfqq_must_idle(bfqq)) {
++			bfq_arm_slice_timer(bfqd);
++			goto out;
++		} else if (bfq_may_expire_for_budg_timeout(bfqq))
++			bfq_bfqq_expire(bfqd, bfqq, false,
++					BFQ_BFQQ_BUDGET_TIMEOUT);
++		else if (RB_EMPTY_ROOT(&bfqq->sort_list) &&
++			 (bfqq->dispatched == 0 ||
++			  !bfq_bfqq_may_idle(bfqq)))
++			bfq_bfqq_expire(bfqd, bfqq, false,
++					BFQ_BFQQ_NO_MORE_REQUESTS);
++	}
++
++	if (!bfqd->rq_in_driver)
++		bfq_schedule_dispatch(bfqd);
++
++out:
++	return;
++}
++
++static int __bfq_may_queue(struct bfq_queue *bfqq)
++{
++	if (bfq_bfqq_wait_request(bfqq) && bfq_bfqq_must_alloc(bfqq)) {
++		bfq_clear_bfqq_must_alloc(bfqq);
++		return ELV_MQUEUE_MUST;
++	}
++
++	return ELV_MQUEUE_MAY;
++}
++
++static int bfq_may_queue(struct request_queue *q, int rw)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct task_struct *tsk = current;
++	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq;
++
++	/*
++	 * Don't force setup of a queue from here, as a call to may_queue
++	 * does not necessarily imply that a request actually will be
++	 * queued. So just lookup a possibly existing queue, or return
++	 * 'may queue' if that fails.
++	 */
++	bic = bfq_bic_lookup(bfqd, tsk->io_context);
++	if (!bic)
++		return ELV_MQUEUE_MAY;
++
++	bfqq = bic_to_bfqq(bic, rw_is_sync(rw));
++	if (bfqq)
++		return __bfq_may_queue(bfqq);
++
++	return ELV_MQUEUE_MAY;
++}
++
++/*
++ * Queue lock held here.
++ */
++static void bfq_put_request(struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	if (bfqq) {
++		const int rw = rq_data_dir(rq);
++
++		BUG_ON(!bfqq->allocated[rw]);
++		bfqq->allocated[rw]--;
++
++		rq->elv.priv[0] = NULL;
++		rq->elv.priv[1] = NULL;
++
++		bfq_log_bfqq(bfqq->bfqd, bfqq, "put_request %p, %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		bfq_put_queue(bfqq);
++	}
++}
++
++/*
++ * Allocate bfq data structures associated with this request.
++ */
++static int bfq_set_request(struct request_queue *q, struct request *rq,
++			   struct bio *bio, gfp_t gfp_mask)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_io_cq *bic = icq_to_bic(rq->elv.icq);
++	const int rw = rq_data_dir(rq);
++	const int is_sync = rq_is_sync(rq);
++	struct bfq_queue *bfqq;
++	unsigned long flags;
++
++	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
++
++	bfq_check_ioprio_change(bic, bio);
++
++	spin_lock_irqsave(q->queue_lock, flags);
++
++	if (!bic)
++		goto queue_fail;
++
++	bfq_bic_update_cgroup(bic, bio);
++
++	bfqq = bic_to_bfqq(bic, is_sync);
++	if (!bfqq || bfqq == &bfqd->oom_bfqq) {
++		bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, gfp_mask);
++		bic_set_bfqq(bic, bfqq, is_sync);
++		if (is_sync) {
++			if (bfqd->large_burst)
++				bfq_mark_bfqq_in_large_burst(bfqq);
++			else
++				bfq_clear_bfqq_in_large_burst(bfqq);
++		}
++	}
++
++	bfqq->allocated[rw]++;
++	atomic_inc(&bfqq->ref);
++	bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq,
++		     atomic_read(&bfqq->ref));
++
++	rq->elv.priv[0] = bic;
++	rq->elv.priv[1] = bfqq;
++
++	spin_unlock_irqrestore(q->queue_lock, flags);
++
++	return 0;
++
++queue_fail:
++	bfq_schedule_dispatch(bfqd);
++	spin_unlock_irqrestore(q->queue_lock, flags);
++
++	return 1;
++}
++
++static void bfq_kick_queue(struct work_struct *work)
++{
++	struct bfq_data *bfqd =
++		container_of(work, struct bfq_data, unplug_work);
++	struct request_queue *q = bfqd->queue;
++
++	spin_lock_irq(q->queue_lock);
++	__blk_run_queue(q);
++	spin_unlock_irq(q->queue_lock);
++}
++
++/*
++ * Handler of the expiration of the timer running if the in-service queue
++ * is idling inside its time slice.
++ */
++static void bfq_idle_slice_timer(unsigned long data)
++{
++	struct bfq_data *bfqd = (struct bfq_data *)data;
++	struct bfq_queue *bfqq;
++	unsigned long flags;
++	enum bfqq_expiration reason;
++
++	spin_lock_irqsave(bfqd->queue->queue_lock, flags);
++
++	bfqq = bfqd->in_service_queue;
++	/*
++	 * Theoretical race here: the in-service queue can be NULL or
++	 * different from the queue that was idling if the timer handler
++	 * spins on the queue_lock and a new request arrives for the
++	 * current queue and there is a full dispatch cycle that changes
++	 * the in-service queue.  This is unlikely to happen, but in the worst
++	 * case we just expire a queue too early.
++	 */
++	if (bfqq) {
++		bfq_log_bfqq(bfqd, bfqq, "slice_timer expired");
++		if (bfq_bfqq_budget_timeout(bfqq))
++			/*
++			 * Also here the queue can be safely expired
++			 * for budget timeout without wasting
++			 * guarantees
++			 */
++			reason = BFQ_BFQQ_BUDGET_TIMEOUT;
++		else if (bfqq->queued[0] == 0 && bfqq->queued[1] == 0)
++			/*
++			 * The queue may not be empty upon timer expiration,
++			 * because we may not disable the timer when the
++			 * first request of the in-service queue arrives
++			 * during disk idling.
++			 */
++			reason = BFQ_BFQQ_TOO_IDLE;
++		else
++			goto schedule_dispatch;
++
++		bfq_bfqq_expire(bfqd, bfqq, true, reason);
++	}
++
++schedule_dispatch:
++	bfq_schedule_dispatch(bfqd);
++
++	spin_unlock_irqrestore(bfqd->queue->queue_lock, flags);
++}
++
++static void bfq_shutdown_timer_wq(struct bfq_data *bfqd)
++{
++	del_timer_sync(&bfqd->idle_slice_timer);
++	cancel_work_sync(&bfqd->unplug_work);
++}
++
++static void __bfq_put_async_bfqq(struct bfq_data *bfqd,
++					struct bfq_queue **bfqq_ptr)
++{
++	struct bfq_group *root_group = bfqd->root_group;
++	struct bfq_queue *bfqq = *bfqq_ptr;
++
++	bfq_log(bfqd, "put_async_bfqq: %p", bfqq);
++	if (bfqq) {
++		bfq_bfqq_move(bfqd, bfqq, &bfqq->entity, root_group);
++		bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		bfq_put_queue(bfqq);
++		*bfqq_ptr = NULL;
++	}
++}
++
++/*
++ * Release all the bfqg references to its async queues.  If we are
++ * deallocating the group these queues may still contain requests, so
++ * we reparent them to the root cgroup (i.e., the only one that will
++ * exist for sure until all the requests on a device are gone).
++ */
++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
++{
++	int i, j;
++
++	for (i = 0; i < 2; i++)
++		for (j = 0; j < IOPRIO_BE_NR; j++)
++			__bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j]);
++
++	__bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
++}
++
++static void bfq_exit_queue(struct elevator_queue *e)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	struct request_queue *q = bfqd->queue;
++	struct bfq_queue *bfqq, *n;
++
++	bfq_shutdown_timer_wq(bfqd);
++
++	spin_lock_irq(q->queue_lock);
++
++	BUG_ON(bfqd->in_service_queue);
++	list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list)
++		bfq_deactivate_bfqq(bfqd, bfqq, 0);
++
++	spin_unlock_irq(q->queue_lock);
++
++	bfq_shutdown_timer_wq(bfqd);
++
++	synchronize_rcu();
++
++	BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	blkcg_deactivate_policy(q, &blkcg_policy_bfq);
++#else
++	kfree(bfqd->root_group);
++#endif
++
++	kfree(bfqd);
++}
++
++static void bfq_init_root_group(struct bfq_group *root_group,
++				struct bfq_data *bfqd)
++{
++	int i;
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	root_group->entity.parent = NULL;
++	root_group->my_entity = NULL;
++	root_group->bfqd = bfqd;
++#endif
++	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
++		root_group->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++}
++
++static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
++{
++	struct bfq_data *bfqd;
++	struct elevator_queue *eq;
++
++	eq = elevator_alloc(q, e);
++	if (!eq)
++		return -ENOMEM;
++
++	bfqd = kzalloc_node(sizeof(*bfqd), GFP_KERNEL, q->node);
++	if (!bfqd) {
++		kobject_put(&eq->kobj);
++		return -ENOMEM;
++	}
++	eq->elevator_data = bfqd;
++
++	/*
++	 * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues.
++	 * Grab a permanent reference to it, so that the normal code flow
++	 * will not attempt to free it.
++	 */
++	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0);
++	atomic_inc(&bfqd->oom_bfqq.ref);
++	bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
++	bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE;
++	bfqd->oom_bfqq.entity.new_weight =
++		bfq_ioprio_to_weight(bfqd->oom_bfqq.new_ioprio);
++	/*
++	 * Trigger weight initialization, according to ioprio, at the
++	 * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio
++	 * class won't be changed any more.
++	 */
++	bfqd->oom_bfqq.entity.prio_changed = 1;
++
++	bfqd->queue = q;
++
++	spin_lock_irq(q->queue_lock);
++	q->elevator = eq;
++	spin_unlock_irq(q->queue_lock);
++
++	bfqd->root_group = bfq_create_group_hierarchy(bfqd, q->node);
++	if (!bfqd->root_group)
++		goto out_free;
++	bfq_init_root_group(bfqd->root_group, bfqd);
++	bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	bfqd->active_numerous_groups = 0;
++#endif
++
++	init_timer(&bfqd->idle_slice_timer);
++	bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
++	bfqd->idle_slice_timer.data = (unsigned long)bfqd;
++
++	bfqd->queue_weights_tree = RB_ROOT;
++	bfqd->group_weights_tree = RB_ROOT;
++
++	INIT_WORK(&bfqd->unplug_work, bfq_kick_queue);
++
++	INIT_LIST_HEAD(&bfqd->active_list);
++	INIT_LIST_HEAD(&bfqd->idle_list);
++	INIT_HLIST_HEAD(&bfqd->burst_list);
++
++	bfqd->hw_tag = -1;
++
++	bfqd->bfq_max_budget = bfq_default_max_budget;
++
++	bfqd->bfq_fifo_expire[0] = bfq_fifo_expire[0];
++	bfqd->bfq_fifo_expire[1] = bfq_fifo_expire[1];
++	bfqd->bfq_back_max = bfq_back_max;
++	bfqd->bfq_back_penalty = bfq_back_penalty;
++	bfqd->bfq_slice_idle = bfq_slice_idle;
++	bfqd->bfq_class_idle_last_service = 0;
++	bfqd->bfq_max_budget_async_rq = bfq_max_budget_async_rq;
++	bfqd->bfq_timeout[BLK_RW_ASYNC] = bfq_timeout_async;
++	bfqd->bfq_timeout[BLK_RW_SYNC] = bfq_timeout_sync;
++
++	bfqd->bfq_requests_within_timer = 120;
++
++	bfqd->bfq_large_burst_thresh = 11;
++	bfqd->bfq_burst_interval = msecs_to_jiffies(500);
++
++	bfqd->low_latency = true;
++
++	bfqd->bfq_wr_coeff = 20;
++	bfqd->bfq_wr_rt_max_time = msecs_to_jiffies(300);
++	bfqd->bfq_wr_max_time = 0;
++	bfqd->bfq_wr_min_idle_time = msecs_to_jiffies(2000);
++	bfqd->bfq_wr_min_inter_arr_async = msecs_to_jiffies(500);
++	bfqd->bfq_wr_max_softrt_rate = 7000; /*
++					      * Approximate rate required
++					      * to play back or record a
++					      * high-definition compressed
++					      * video.
++					      */
++	bfqd->wr_busy_queues = 0;
++	bfqd->busy_in_flight_queues = 0;
++	bfqd->const_seeky_busy_in_flight_queues = 0;
++
++	/*
++	 * Begin by assuming, optimistically, that the device peak rate is
++	 * equal to the highest reference rate.
++	 */
++	bfqd->RT_prod = R_fast[blk_queue_nonrot(bfqd->queue)] *
++			T_fast[blk_queue_nonrot(bfqd->queue)];
++	bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)];
++	bfqd->device_speed = BFQ_BFQD_FAST;
++
++	return 0;
++
++out_free:
++	kfree(bfqd);
++	kobject_put(&eq->kobj);
++	return -ENOMEM;
++}
++
++static void bfq_slab_kill(void)
++{
++	kmem_cache_destroy(bfq_pool);
++}
++
++static int __init bfq_slab_setup(void)
++{
++	bfq_pool = KMEM_CACHE(bfq_queue, 0);
++	if (!bfq_pool)
++		return -ENOMEM;
++	return 0;
++}
++
++static ssize_t bfq_var_show(unsigned int var, char *page)
++{
++	return sprintf(page, "%d\n", var);
++}
++
++static ssize_t bfq_var_store(unsigned long *var, const char *page,
++			     size_t count)
++{
++	unsigned long new_val;
++	int ret = kstrtoul(page, 10, &new_val);
++
++	if (ret == 0)
++		*var = new_val;
++
++	return count;
++}
++
++static ssize_t bfq_wr_max_time_show(struct elevator_queue *e, char *page)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++
++	return sprintf(page, "%d\n", bfqd->bfq_wr_max_time > 0 ?
++		       jiffies_to_msecs(bfqd->bfq_wr_max_time) :
++		       jiffies_to_msecs(bfq_wr_duration(bfqd)));
++}
++
++static ssize_t bfq_weights_show(struct elevator_queue *e, char *page)
++{
++	struct bfq_queue *bfqq;
++	struct bfq_data *bfqd = e->elevator_data;
++	ssize_t num_char = 0;
++
++	num_char += sprintf(page + num_char, "Tot reqs queued %d\n\n",
++			    bfqd->queued);
++
++	spin_lock_irq(bfqd->queue->queue_lock);
++
++	num_char += sprintf(page + num_char, "Active:\n");
++	list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list) {
++		num_char += sprintf(page + num_char,
++				    "pid%d: weight %hu, nr_queued %d %d, ",
++				    bfqq->pid,
++				    bfqq->entity.weight,
++				    bfqq->queued[0],
++				    bfqq->queued[1]);
++		num_char += sprintf(page + num_char,
++				    "dur %d/%u\n",
++				    jiffies_to_msecs(
++					    jiffies -
++					    bfqq->last_wr_start_finish),
++				    jiffies_to_msecs(bfqq->wr_cur_max_time));
++	}
++
++	num_char += sprintf(page + num_char, "Idle:\n");
++	list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) {
++		num_char += sprintf(page + num_char,
++				    "pid%d: weight %hu, dur %d/%u\n",
++				    bfqq->pid,
++				    bfqq->entity.weight,
++				    jiffies_to_msecs(jiffies -
++						     bfqq->last_wr_start_finish),
++				    jiffies_to_msecs(bfqq->wr_cur_max_time));
++	}
++
++	spin_unlock_irq(bfqd->queue->queue_lock);
++
++	return num_char;
++}
++
++#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
++static ssize_t __FUNC(struct elevator_queue *e, char *page)		\
++{									\
++	struct bfq_data *bfqd = e->elevator_data;			\
++	unsigned int __data = __VAR;					\
++	if (__CONV)							\
++		__data = jiffies_to_msecs(__data);			\
++	return bfq_var_show(__data, (page));				\
++}
++SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 1);
++SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 1);
++SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0);
++SHOW_FUNCTION(bfq_back_seek_penalty_show, bfqd->bfq_back_penalty, 0);
++SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 1);
++SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0);
++SHOW_FUNCTION(bfq_max_budget_async_rq_show,
++	      bfqd->bfq_max_budget_async_rq, 0);
++SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout[BLK_RW_SYNC], 1);
++SHOW_FUNCTION(bfq_timeout_async_show, bfqd->bfq_timeout[BLK_RW_ASYNC], 1);
++SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0);
++SHOW_FUNCTION(bfq_wr_coeff_show, bfqd->bfq_wr_coeff, 0);
++SHOW_FUNCTION(bfq_wr_rt_max_time_show, bfqd->bfq_wr_rt_max_time, 1);
++SHOW_FUNCTION(bfq_wr_min_idle_time_show, bfqd->bfq_wr_min_idle_time, 1);
++SHOW_FUNCTION(bfq_wr_min_inter_arr_async_show, bfqd->bfq_wr_min_inter_arr_async,
++	1);
++SHOW_FUNCTION(bfq_wr_max_softrt_rate_show, bfqd->bfq_wr_max_softrt_rate, 0);
++#undef SHOW_FUNCTION
++
++#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
++static ssize_t								\
++__FUNC(struct elevator_queue *e, const char *page, size_t count)	\
++{									\
++	struct bfq_data *bfqd = e->elevator_data;			\
++	unsigned long uninitialized_var(__data);			\
++	int ret = bfq_var_store(&__data, (page), count);		\
++	if (__data < (MIN))						\
++		__data = (MIN);						\
++	else if (__data > (MAX))					\
++		__data = (MAX);						\
++	if (__CONV)							\
++		*(__PTR) = msecs_to_jiffies(__data);			\
++	else								\
++		*(__PTR) = __data;					\
++	return ret;							\
++}
++STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0);
++STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1,
++		INT_MAX, 0);
++STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_max_budget_async_rq_store, &bfqd->bfq_max_budget_async_rq,
++		1, INT_MAX, 0);
++STORE_FUNCTION(bfq_timeout_async_store, &bfqd->bfq_timeout[BLK_RW_ASYNC], 0,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_coeff_store, &bfqd->bfq_wr_coeff, 1, INT_MAX, 0);
++STORE_FUNCTION(bfq_wr_max_time_store, &bfqd->bfq_wr_max_time, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_rt_max_time_store, &bfqd->bfq_wr_rt_max_time, 0, INT_MAX,
++		1);
++STORE_FUNCTION(bfq_wr_min_idle_time_store, &bfqd->bfq_wr_min_idle_time, 0,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_min_inter_arr_async_store,
++		&bfqd->bfq_wr_min_inter_arr_async, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_max_softrt_rate_store, &bfqd->bfq_wr_max_softrt_rate, 0,
++		INT_MAX, 0);
++#undef STORE_FUNCTION
++
++/* do nothing for the moment */
++static ssize_t bfq_weights_store(struct elevator_queue *e,
++				    const char *page, size_t count)
++{
++	return count;
++}
++
++static unsigned long bfq_estimated_max_budget(struct bfq_data *bfqd)
++{
++	u64 timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++
++	if (bfqd->peak_rate_samples >= BFQ_PEAK_RATE_SAMPLES)
++		return bfq_calc_max_budget(bfqd->peak_rate, timeout);
++	else
++		return bfq_default_max_budget;
++}
++
++static ssize_t bfq_max_budget_store(struct elevator_queue *e,
++				    const char *page, size_t count)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	unsigned long uninitialized_var(__data);
++	int ret = bfq_var_store(&__data, (page), count);
++
++	if (__data == 0)
++		bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++	else {
++		if (__data > INT_MAX)
++			__data = INT_MAX;
++		bfqd->bfq_max_budget = __data;
++	}
++
++	bfqd->bfq_user_max_budget = __data;
++
++	return ret;
++}
++
++static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
++				      const char *page, size_t count)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	unsigned long uninitialized_var(__data);
++	int ret = bfq_var_store(&__data, (page), count);
++
++	if (__data < 1)
++		__data = 1;
++	else if (__data > INT_MAX)
++		__data = INT_MAX;
++
++	bfqd->bfq_timeout[BLK_RW_SYNC] = msecs_to_jiffies(__data);
++	if (bfqd->bfq_user_max_budget == 0)
++		bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++
++	return ret;
++}
++
++static ssize_t bfq_low_latency_store(struct elevator_queue *e,
++				     const char *page, size_t count)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	unsigned long uninitialized_var(__data);
++	int ret = bfq_var_store(&__data, (page), count);
++
++	if (__data > 1)
++		__data = 1;
++	if (__data == 0 && bfqd->low_latency != 0)
++		bfq_end_wr(bfqd);
++	bfqd->low_latency = __data;
++
++	return ret;
++}
++
++#define BFQ_ATTR(name) \
++	__ATTR(name, S_IRUGO|S_IWUSR, bfq_##name##_show, bfq_##name##_store)
++
++static struct elv_fs_entry bfq_attrs[] = {
++	BFQ_ATTR(fifo_expire_sync),
++	BFQ_ATTR(fifo_expire_async),
++	BFQ_ATTR(back_seek_max),
++	BFQ_ATTR(back_seek_penalty),
++	BFQ_ATTR(slice_idle),
++	BFQ_ATTR(max_budget),
++	BFQ_ATTR(max_budget_async_rq),
++	BFQ_ATTR(timeout_sync),
++	BFQ_ATTR(timeout_async),
++	BFQ_ATTR(low_latency),
++	BFQ_ATTR(wr_coeff),
++	BFQ_ATTR(wr_max_time),
++	BFQ_ATTR(wr_rt_max_time),
++	BFQ_ATTR(wr_min_idle_time),
++	BFQ_ATTR(wr_min_inter_arr_async),
++	BFQ_ATTR(wr_max_softrt_rate),
++	BFQ_ATTR(weights),
++	__ATTR_NULL
++};
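++
++/*
++ * Illustrative note added in editing, not part of the original BFQ patch:
++ * once bfq is the active elevator for a device, the attributes above are
++ * exposed through sysfs, e.g. (the device name is only an example):
++ *
++ *	echo bfq > /sys/block/sda/queue/scheduler
++ *	cat /sys/block/sda/queue/iosched/low_latency
++ *	echo 0 > /sys/block/sda/queue/iosched/low_latency
++ */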
++
++static struct elevator_type iosched_bfq = {
++	.ops = {
++		.elevator_merge_fn =		bfq_merge,
++		.elevator_merged_fn =		bfq_merged_request,
++		.elevator_merge_req_fn =	bfq_merged_requests,
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		.elevator_bio_merged_fn =	bfq_bio_merged,
++#endif
++		.elevator_allow_merge_fn =	bfq_allow_merge,
++		.elevator_dispatch_fn =		bfq_dispatch_requests,
++		.elevator_add_req_fn =		bfq_insert_request,
++		.elevator_activate_req_fn =	bfq_activate_request,
++		.elevator_deactivate_req_fn =	bfq_deactivate_request,
++		.elevator_completed_req_fn =	bfq_completed_request,
++		.elevator_former_req_fn =	elv_rb_former_request,
++		.elevator_latter_req_fn =	elv_rb_latter_request,
++		.elevator_init_icq_fn =		bfq_init_icq,
++		.elevator_exit_icq_fn =		bfq_exit_icq,
++		.elevator_set_req_fn =		bfq_set_request,
++		.elevator_put_req_fn =		bfq_put_request,
++		.elevator_may_queue_fn =	bfq_may_queue,
++		.elevator_init_fn =		bfq_init_queue,
++		.elevator_exit_fn =		bfq_exit_queue,
++	},
++	.icq_size =		sizeof(struct bfq_io_cq),
++	.icq_align =		__alignof__(struct bfq_io_cq),
++	.elevator_attrs =	bfq_attrs,
++	.elevator_name =	"bfq",
++	.elevator_owner =	THIS_MODULE,
++};
++
++static int __init bfq_init(void)
++{
++	int ret;
++
++	/*
++	 * Can be 0 on HZ < 1000 setups.
++	 */
++	if (bfq_slice_idle == 0)
++		bfq_slice_idle = 1;
++
++	if (bfq_timeout_async == 0)
++		bfq_timeout_async = 1;
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	ret = blkcg_policy_register(&blkcg_policy_bfq);
++	if (ret)
++		return ret;
++#endif
++
++	ret = -ENOMEM;
++	if (bfq_slab_setup())
++		goto err_pol_unreg;
++
++	/*
++	 * Times to load large popular applications for the typical systems
++	 * installed on the reference devices (see the comments before the
++	 * definitions of the two arrays).
++	 */
++	T_slow[0] = msecs_to_jiffies(2600);
++	T_slow[1] = msecs_to_jiffies(1000);
++	T_fast[0] = msecs_to_jiffies(5500);
++	T_fast[1] = msecs_to_jiffies(2000);
++
++	/*
++	 * Thresholds that determine the switch between speed classes (see
++	 * the comments before the definition of the array).
++	 */
++	device_speed_thresh[0] = (R_fast[0] + R_slow[0]) / 2;
++	device_speed_thresh[1] = (R_fast[1] + R_slow[1]) / 2;
++
++	ret = elv_register(&iosched_bfq);
++	if (ret)
++		goto err_pol_unreg;
++
++	pr_info("BFQ I/O-scheduler: v7r11");
++
++	return 0;
++
++err_pol_unreg:
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	blkcg_policy_unregister(&blkcg_policy_bfq);
++#endif
++	return ret;
++}
++
++static void __exit bfq_exit(void)
++{
++	elv_unregister(&iosched_bfq);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	blkcg_policy_unregister(&blkcg_policy_bfq);
++#endif
++	bfq_slab_kill();
++}
++
++module_init(bfq_init);
++module_exit(bfq_exit);
++
++MODULE_AUTHOR("Arianna Avanzini, Fabio Checconi, Paolo Valente");
++MODULE_LICENSE("GPL");
+diff --git a/block/bfq-sched.c b/block/bfq-sched.c
+new file mode 100644
+index 0000000..a5ed694
+--- /dev/null
++++ b/block/bfq-sched.c
+@@ -0,0 +1,1199 @@
++/*
++ * BFQ: Hierarchical B-WF2Q+ scheduler.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++#define for_each_entity(entity)	\
++	for (; entity ; entity = entity->parent)
++
++#define for_each_entity_safe(entity, parent) \
++	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
++
++
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
++						 int extract,
++						 struct bfq_data *bfqd);
++
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
++
++static void bfq_update_budget(struct bfq_entity *next_in_service)
++{
++	struct bfq_entity *bfqg_entity;
++	struct bfq_group *bfqg;
++	struct bfq_sched_data *group_sd;
++
++	BUG_ON(!next_in_service);
++
++	group_sd = next_in_service->sched_data;
++
++	bfqg = container_of(group_sd, struct bfq_group, sched_data);
++	/*
++	 * bfq_group's my_entity field is not NULL only if the group
++	 * is not the root group. We must not touch the root entity
++	 * as it must never become an in-service entity.
++	 */
++	bfqg_entity = bfqg->my_entity;
++	if (bfqg_entity)
++		bfqg_entity->budget = next_in_service->budget;
++}
++
++static int bfq_update_next_in_service(struct bfq_sched_data *sd)
++{
++	struct bfq_entity *next_in_service;
++
++	if (sd->in_service_entity)
++		/* will update/requeue at the end of service */
++		return 0;
++
++	/*
++	 * NOTE: this can be improved in many ways, such as returning
++	 * 1 (and thus propagating the update upwards) only when the
++	 * budget changes, or caching the bfqq that will be scheduled
++	 * next from this subtree.  For now we worry more about
++	 * correctness than about performance...
++	 */
++	next_in_service = bfq_lookup_next_entity(sd, 0, NULL);
++	sd->next_in_service = next_in_service;
++
++	if (next_in_service)
++		bfq_update_budget(next_in_service);
++
++	return 1;
++}
++
++static void bfq_check_next_in_service(struct bfq_sched_data *sd,
++				      struct bfq_entity *entity)
++{
++	BUG_ON(sd->next_in_service != entity);
++}
++#else
++#define for_each_entity(entity)	\
++	for (; entity ; entity = NULL)
++
++#define for_each_entity_safe(entity, parent) \
++	for (parent = NULL; entity ; entity = parent)
++
++static int bfq_update_next_in_service(struct bfq_sched_data *sd)
++{
++	return 0;
++}
++
++static void bfq_check_next_in_service(struct bfq_sched_data *sd,
++				      struct bfq_entity *entity)
++{
++}
++
++static void bfq_update_budget(struct bfq_entity *next_in_service)
++{
++}
++#endif
++
++/*
++ * Shift for timestamp calculations.  This actually limits the maximum
++ * service allowed in one timestamp delta (small shift values increase it),
++ * the maximum total weight that can be used for the queues in the system
++ * (big shift values increase it), and the period of virtual time
++ * wraparounds.
++ */
++#define WFQ_SERVICE_SHIFT	22
++
++/**
++ * bfq_gt - compare two timestamps.
++ * @a: first ts.
++ * @b: second ts.
++ *
++ * Return @a > @b, dealing with wrapping correctly.
++ */
++static int bfq_gt(u64 a, u64 b)
++{
++	return (s64)(a - b) > 0;
++}
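++
++/*
++ * Illustrative note added in editing, not part of the original BFQ patch:
++ * the signed subtraction keeps the comparison correct across wraparound
++ * as long as the two timestamps are less than 2^63 apart, e.g.:
++ *
++ *	bfq_gt(1, ~0ULL);	// (s64)(1 - 0xff..ff) == 2 > 0, returns 1
++ *	bfq_gt(~0ULL, 1);	// (s64)0xff..fe == -2 < 0, returns 0
++ */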
++
++static struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = NULL;
++
++	BUG_ON(!entity);
++
++	if (!entity->my_sched_data)
++		bfqq = container_of(entity, struct bfq_queue, entity);
++
++	return bfqq;
++}
++
++
++/**
++ * bfq_delta - map service into the virtual time domain.
++ * @service: amount of service.
++ * @weight: scale factor (weight of an entity or weight sum).
++ */
++static u64 bfq_delta(unsigned long service, unsigned long weight)
++{
++	u64 d = (u64)service << WFQ_SERVICE_SHIFT;
++
++	do_div(d, weight);
++	return d;
++}
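++
++/*
++ * Illustrative note added in editing, not part of the original BFQ patch:
++ * with WFQ_SERVICE_SHIFT == 22, the same amount of service advances the
++ * timestamps of an entity with twice the weight half as fast, e.g.:
++ *
++ *	bfq_delta(1024, 100);	// (1024ULL << 22) / 100 == 42949672
++ *	bfq_delta(1024, 200);	// (1024ULL << 22) / 200 == 21474836
++ *
++ * which is what yields proportional-share (weighted) fairness.
++ */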
++
++/**
++ * bfq_calc_finish - assign the finish time to an entity.
++ * @entity: the entity to act upon.
++ * @service: the service to be charged to the entity.
++ */
++static void bfq_calc_finish(struct bfq_entity *entity, unsigned long service)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	BUG_ON(entity->weight == 0);
++
++	entity->finish = entity->start +
++		bfq_delta(service, entity->weight);
++
++	if (bfqq) {
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			"calc_finish: serv %lu, w %d",
++			service, entity->weight);
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			"calc_finish: start %llu, finish %llu, delta %llu",
++			entity->start, entity->finish,
++			bfq_delta(service, entity->weight));
++	}
++}
++
++/**
++ * bfq_entity_of - get an entity from a node.
++ * @node: the node field of the entity.
++ *
++ * Convert a node pointer to the corresponding entity.  This is used only
++ * to simplify the logic of some functions and not as the generic
++ * conversion mechanism because, e.g., in the tree walking functions,
++ * the check for a %NULL value would be redundant.
++ */
++static struct bfq_entity *bfq_entity_of(struct rb_node *node)
++{
++	struct bfq_entity *entity = NULL;
++
++	if (node)
++		entity = rb_entry(node, struct bfq_entity, rb_node);
++
++	return entity;
++}
++
++/**
++ * bfq_extract - remove an entity from a tree.
++ * @root: the tree root.
++ * @entity: the entity to remove.
++ */
++static void bfq_extract(struct rb_root *root, struct bfq_entity *entity)
++{
++	BUG_ON(entity->tree != root);
++
++	entity->tree = NULL;
++	rb_erase(&entity->rb_node, root);
++}
++
++/**
++ * bfq_idle_extract - extract an entity from the idle tree.
++ * @st: the service tree of the owning @entity.
++ * @entity: the entity being removed.
++ */
++static void bfq_idle_extract(struct bfq_service_tree *st,
++			     struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct rb_node *next;
++
++	BUG_ON(entity->tree != &st->idle);
++
++	if (entity == st->first_idle) {
++		next = rb_next(&entity->rb_node);
++		st->first_idle = bfq_entity_of(next);
++	}
++
++	if (entity == st->last_idle) {
++		next = rb_prev(&entity->rb_node);
++		st->last_idle = bfq_entity_of(next);
++	}
++
++	bfq_extract(&st->idle, entity);
++
++	if (bfqq)
++		list_del(&bfqq->bfqq_list);
++}
++
++/**
++ * bfq_insert - generic tree insertion.
++ * @root: tree root.
++ * @entity: entity to insert.
++ *
++ * This is used for the idle and the active tree, since they are both
++ * ordered by finish time.
++ */
++static void bfq_insert(struct rb_root *root, struct bfq_entity *entity)
++{
++	struct bfq_entity *entry;
++	struct rb_node **node = &root->rb_node;
++	struct rb_node *parent = NULL;
++
++	BUG_ON(entity->tree);
++
++	while (*node) {
++		parent = *node;
++		entry = rb_entry(parent, struct bfq_entity, rb_node);
++
++		if (bfq_gt(entry->finish, entity->finish))
++			node = &parent->rb_left;
++		else
++			node = &parent->rb_right;
++	}
++
++	rb_link_node(&entity->rb_node, parent, node);
++	rb_insert_color(&entity->rb_node, root);
++
++	entity->tree = root;
++}
++
++/**
++ * bfq_update_min - update the min_start field of an entity.
++ * @entity: the entity to update.
++ * @node: one of its children.
++ *
++ * This function is called when @entity may store an invalid value for
++ * min_start due to updates to the active tree.  The function assumes
++ * that the subtree rooted at @node (which may be its left or its right
++ * child) has a valid min_start value.
++ */
++static void bfq_update_min(struct bfq_entity *entity, struct rb_node *node)
++{
++	struct bfq_entity *child;
++
++	if (node) {
++		child = rb_entry(node, struct bfq_entity, rb_node);
++		if (bfq_gt(entity->min_start, child->min_start))
++			entity->min_start = child->min_start;
++	}
++}
++
++/**
++ * bfq_update_active_node - recalculate min_start.
++ * @node: the node to update.
++ *
++ * @node may have changed position or one of its children may have moved;
++ * this function updates its min_start value.  The left and right subtrees
++ * are assumed to hold a correct min_start value.
++ */
++static void bfq_update_active_node(struct rb_node *node)
++{
++	struct bfq_entity *entity = rb_entry(node, struct bfq_entity, rb_node);
++
++	entity->min_start = entity->start;
++	bfq_update_min(entity, node->rb_right);
++	bfq_update_min(entity, node->rb_left);
++}
++
++/**
++ * bfq_update_active_tree - update min_start for the whole active tree.
++ * @node: the starting node.
++ *
++ * @node must be the deepest modified node after an update.  This function
++ * updates its min_start using the values held by its children, assuming
++ * that they did not change, and then updates all the nodes that may have
++ * changed in the path to the root.  The only nodes that may have changed
++ * are the ones in the path or their siblings.
++ */
++static void bfq_update_active_tree(struct rb_node *node)
++{
++	struct rb_node *parent;
++
++up:
++	bfq_update_active_node(node);
++
++	parent = rb_parent(node);
++	if (!parent)
++		return;
++
++	if (node == parent->rb_left && parent->rb_right)
++		bfq_update_active_node(parent->rb_right);
++	else if (parent->rb_left)
++		bfq_update_active_node(parent->rb_left);
++
++	node = parent;
++	goto up;
++}
++
++static void bfq_weights_tree_add(struct bfq_data *bfqd,
++				 struct bfq_entity *entity,
++				 struct rb_root *root);
++
++static void bfq_weights_tree_remove(struct bfq_data *bfqd,
++				    struct bfq_entity *entity,
++				    struct rb_root *root);
++
++
++/**
++ * bfq_active_insert - insert an entity in the active tree of its
++ *                     group/device.
++ * @st: the service tree of the entity.
++ * @entity: the entity being inserted.
++ *
++ * The active tree is ordered by finish time, but an extra key is kept
++ * for each node, containing the minimum value for the start times of
++ * its children (and the node itself), so it's possible to search for
++ * the eligible node with the lowest finish time in logarithmic time.
++ */
++static void bfq_active_insert(struct bfq_service_tree *st,
++			      struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct rb_node *node = &entity->rb_node;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	struct bfq_sched_data *sd = NULL;
++	struct bfq_group *bfqg = NULL;
++	struct bfq_data *bfqd = NULL;
++#endif
++
++	bfq_insert(&st->active, entity);
++
++	if (node->rb_left)
++		node = node->rb_left;
++	else if (node->rb_right)
++		node = node->rb_right;
++
++	bfq_update_active_tree(node);
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	sd = entity->sched_data;
++	bfqg = container_of(sd, struct bfq_group, sched_data);
++	BUG_ON(!bfqg);
++	bfqd = (struct bfq_data *)bfqg->bfqd;
++#endif
++	if (bfqq)
++		list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	else { /* bfq_group */
++		BUG_ON(!bfqd);
++		bfq_weights_tree_add(bfqd, entity, &bfqd->group_weights_tree);
++	}
++	if (bfqg != bfqd->root_group) {
++		BUG_ON(!bfqg);
++		BUG_ON(!bfqd);
++		bfqg->active_entities++;
++		if (bfqg->active_entities == 2)
++			bfqd->active_numerous_groups++;
++	}
++#endif
++}
++
++/**
++ * bfq_ioprio_to_weight - calc a weight from an ioprio.
++ * @ioprio: the ioprio value to convert.
++ */
++static unsigned short bfq_ioprio_to_weight(int ioprio)
++{
++	BUG_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
++	return IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - ioprio;
++}
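++
++/*
++ * Illustrative note added in editing, not part of the original BFQ patch:
++ * assuming, for illustration, IOPRIO_BE_NR == 8 and
++ * BFQ_WEIGHT_CONVERSION_COEFF == 10, the mapping is linear and inverted:
++ * ioprio 0 (highest priority) -> weight 80, ..., ioprio 7 -> weight 73,
++ * so a numerically lower ioprio gets a larger share of the device.
++ */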
++
++/**
++ * bfq_weight_to_ioprio - calc an ioprio from a weight.
++ * @weight: the weight value to convert.
++ *
++ * To preserve the old ioprio-only user interface as much as possible,
++ * 0 is used as an escape ioprio value for weights (numerically) equal to
++ * or larger than IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF.
++ */
++static unsigned short bfq_weight_to_ioprio(int weight)
++{
++	BUG_ON(weight < BFQ_MIN_WEIGHT || weight > BFQ_MAX_WEIGHT);
++	return IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - weight < 0 ?
++		0 : IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - weight;
++}
++
++static void bfq_get_entity(struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	if (bfqq) {
++		atomic_inc(&bfqq->ref);
++		bfq_log_bfqq(bfqq->bfqd, bfqq, "get_entity: %p %d",
++			     bfqq, atomic_read(&bfqq->ref));
++	}
++}
++
++/**
++ * bfq_find_deepest - find the deepest node that an extraction can modify.
++ * @node: the node being removed.
++ *
++ * Do the first step of an extraction in an rb tree, looking for the
++ * node that will replace @node, and returning the deepest node that
++ * the following modifications to the tree can touch.  If @node is the
++ * last node in the tree return %NULL.
++ */
++static struct rb_node *bfq_find_deepest(struct rb_node *node)
++{
++	struct rb_node *deepest;
++
++	if (!node->rb_right && !node->rb_left)
++		deepest = rb_parent(node);
++	else if (!node->rb_right)
++		deepest = node->rb_left;
++	else if (!node->rb_left)
++		deepest = node->rb_right;
++	else {
++		deepest = rb_next(node);
++		if (deepest->rb_right)
++			deepest = deepest->rb_right;
++		else if (rb_parent(deepest) != node)
++			deepest = rb_parent(deepest);
++	}
++
++	return deepest;
++}
++
++/**
++ * bfq_active_extract - remove an entity from the active tree.
++ * @st: the service_tree containing the tree.
++ * @entity: the entity being removed.
++ */
++static void bfq_active_extract(struct bfq_service_tree *st,
++			       struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct rb_node *node;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	struct bfq_sched_data *sd = NULL;
++	struct bfq_group *bfqg = NULL;
++	struct bfq_data *bfqd = NULL;
++#endif
++
++	node = bfq_find_deepest(&entity->rb_node);
++	bfq_extract(&st->active, entity);
++
++	if (node)
++		bfq_update_active_tree(node);
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	sd = entity->sched_data;
++	bfqg = container_of(sd, struct bfq_group, sched_data);
++	BUG_ON(!bfqg);
++	bfqd = (struct bfq_data *)bfqg->bfqd;
++#endif
++	if (bfqq)
++		list_del(&bfqq->bfqq_list);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	else { /* bfq_group */
++		BUG_ON(!bfqd);
++		bfq_weights_tree_remove(bfqd, entity,
++					&bfqd->group_weights_tree);
++	}
++	if (bfqg != bfqd->root_group) {
++		BUG_ON(!bfqg);
++		BUG_ON(!bfqd);
++		BUG_ON(!bfqg->active_entities);
++		bfqg->active_entities--;
++		if (bfqg->active_entities == 1) {
++			BUG_ON(!bfqd->active_numerous_groups);
++			bfqd->active_numerous_groups--;
++		}
++	}
++#endif
++}
++
++/**
++ * bfq_idle_insert - insert an entity into the idle tree.
++ * @st: the service tree containing the tree.
++ * @entity: the entity to insert.
++ */
++static void bfq_idle_insert(struct bfq_service_tree *st,
++			    struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct bfq_entity *first_idle = st->first_idle;
++	struct bfq_entity *last_idle = st->last_idle;
++
++	if (!first_idle || bfq_gt(first_idle->finish, entity->finish))
++		st->first_idle = entity;
++	if (!last_idle || bfq_gt(entity->finish, last_idle->finish))
++		st->last_idle = entity;
++
++	bfq_insert(&st->idle, entity);
++
++	if (bfqq)
++		list_add(&bfqq->bfqq_list, &bfqq->bfqd->idle_list);
++}
++
++/**
++ * bfq_forget_entity - remove an entity from the wfq trees.
++ * @st: the service tree.
++ * @entity: the entity being removed.
++ *
++ * Update the device status and forget everything about @entity, putting
++ * the device reference to it, if it is a queue.  Entities belonging to
++ * groups are not refcounted.
++ */
++static void bfq_forget_entity(struct bfq_service_tree *st,
++			      struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct bfq_sched_data *sd;
++
++	BUG_ON(!entity->on_st);
++
++	entity->on_st = 0;
++	st->wsum -= entity->weight;
++	if (bfqq) {
++		sd = entity->sched_data;
++		bfq_log_bfqq(bfqq->bfqd, bfqq, "forget_entity: %p %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		bfq_put_queue(bfqq);
++	}
++}
++
++/**
++ * bfq_put_idle_entity - release the idle tree ref of an entity.
++ * @st: service tree for the entity.
++ * @entity: the entity being released.
++ */
++static void bfq_put_idle_entity(struct bfq_service_tree *st,
++				struct bfq_entity *entity)
++{
++	bfq_idle_extract(st, entity);
++	bfq_forget_entity(st, entity);
++}
++
++/**
++ * bfq_forget_idle - update the idle tree if necessary.
++ * @st: the service tree to act upon.
++ *
++ * To preserve the global O(log N) complexity we only remove one entry here;
++ * as the idle tree will not grow indefinitely this can be done safely.
++ */
++static void bfq_forget_idle(struct bfq_service_tree *st)
++{
++	struct bfq_entity *first_idle = st->first_idle;
++	struct bfq_entity *last_idle = st->last_idle;
++
++	if (RB_EMPTY_ROOT(&st->active) && last_idle &&
++	    !bfq_gt(last_idle->finish, st->vtime)) {
++		/*
++		 * Forget the whole idle tree, increasing the vtime past
++		 * the last finish time of idle entities.
++		 */
++		st->vtime = last_idle->finish;
++	}
++
++	if (first_idle && !bfq_gt(first_idle->finish, st->vtime))
++		bfq_put_idle_entity(st, first_idle);
++}
++
++static struct bfq_service_tree *
++__bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
++			 struct bfq_entity *entity)
++{
++	struct bfq_service_tree *new_st = old_st;
++
++	if (entity->prio_changed) {
++		struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++		unsigned short prev_weight, new_weight;
++		struct bfq_data *bfqd = NULL;
++		struct rb_root *root;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		struct bfq_sched_data *sd;
++		struct bfq_group *bfqg;
++#endif
++
++		if (bfqq)
++			bfqd = bfqq->bfqd;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		else {
++			sd = entity->my_sched_data;
++			bfqg = container_of(sd, struct bfq_group, sched_data);
++			BUG_ON(!bfqg);
++			bfqd = (struct bfq_data *)bfqg->bfqd;
++			BUG_ON(!bfqd);
++		}
++#endif
++
++		BUG_ON(old_st->wsum < entity->weight);
++		old_st->wsum -= entity->weight;
++
++		if (entity->new_weight != entity->orig_weight) {
++			if (entity->new_weight < BFQ_MIN_WEIGHT ||
++			    entity->new_weight > BFQ_MAX_WEIGHT) {
++				pr_crit("update_weight_prio: new_weight %d\n",
++					entity->new_weight);
++				BUG();
++			}
++			entity->orig_weight = entity->new_weight;
++			if (bfqq)
++				bfqq->ioprio =
++				  bfq_weight_to_ioprio(entity->orig_weight);
++		}
++
++		if (bfqq)
++			bfqq->ioprio_class = bfqq->new_ioprio_class;
++		entity->prio_changed = 0;
++
++		/*
++		 * NOTE: here we may be changing the weight too early,
++		 * this will cause unfairness.  The correct approach
++		 * would have required additional complexity to defer
++		 * weight changes to the proper time instants (i.e.,
++		 * when entity->finish <= old_st->vtime).
++		 */
++		new_st = bfq_entity_service_tree(entity);
++
++		prev_weight = entity->weight;
++		new_weight = entity->orig_weight *
++			     (bfqq ? bfqq->wr_coeff : 1);
++		/*
++		 * If the weight of the entity changes, remove the entity
++		 * from its old weight counter (if there is a counter
++		 * associated with the entity), and add it to the counter
++		 * associated with its new weight.
++		 */
++		if (prev_weight != new_weight) {
++			root = bfqq ? &bfqd->queue_weights_tree :
++				      &bfqd->group_weights_tree;
++			bfq_weights_tree_remove(bfqd, entity, root);
++		}
++		entity->weight = new_weight;
++		/*
++		 * Add the entity to its weights tree only if it is
++		 * not associated with a weight-raised queue.
++		 */
++		if (prev_weight != new_weight &&
++		    (bfqq ? bfqq->wr_coeff == 1 : 1))
++			/* If we get here, root has been initialized. */
++			bfq_weights_tree_add(bfqd, entity, root);
++
++		new_st->wsum += entity->weight;
++
++		if (new_st != old_st)
++			entity->start = new_st->vtime;
++	}
++
++	return new_st;
++}
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg);
++#endif
++
++/**
++ * bfq_bfqq_served - update the scheduler status after selection for
++ *                   service.
++ * @bfqq: the queue being served.
++ * @served: bytes to transfer.
++ *
++ * NOTE: this can be optimized, as the timestamps of upper level entities
++ * are synchronized every time a new bfqq is selected for service.  For now,
++ * we keep it to better check consistency.
++ */
++static void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++	struct bfq_service_tree *st;
++
++	for_each_entity(entity) {
++		st = bfq_entity_service_tree(entity);
++
++		entity->service += served;
++		BUG_ON(entity->service > entity->budget);
++		BUG_ON(st->wsum == 0);
++
++		st->vtime += bfq_delta(served, st->wsum);
++		bfq_forget_idle(st);
++	}
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	bfqg_stats_set_start_empty_time(bfqq_group(bfqq));
++#endif
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %d secs", served);
++}
++
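A rough worked example of the accounting done in bfq_bfqq_served() above (the
numbers are illustrative and not taken from the patch): if a service tree
currently has wsum = 300, e.g. three active entities of weight 100 each, and
one queue is served 4096 sectors, the tree's virtual time advances by
bfq_delta(4096, 300), that is, by the received service scaled down by the
aggregate weight (roughly 4096/300, about 13.7 weight-scaled units, assuming
bfq_delta() performs that division in fixed point). Entities whose start time
does not exceed the new vtime are the eligible ones, as tested in
bfq_first_active_entity() below.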
++/**
++ * bfq_bfqq_charge_full_budget - set the service to the entity budget.
++ * @bfqq: the queue that needs a service update.
++ *
++ * When it's not possible to be fair in the service domain, because
++ * a queue is not consuming its budget fast enough (the meaning of
++ * fast depends on the timeout parameter), we charge it a full
++ * budget.  In this way we should obtain a sort of time-domain
++ * fairness among all the seeky/slow queues.
++ */
++static void bfq_bfqq_charge_full_budget(struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "charge_full_budget");
++
++	bfq_bfqq_served(bfqq, entity->budget - entity->service);
++}
++
++/**
++ * __bfq_activate_entity - activate an entity.
++ * @entity: the entity being activated.
++ *
++ * Called whenever an entity is activated, i.e., it is not active and one
++ * of its children receives a new request, or has to be reactivated due to
++ * budget exhaustion.  It uses the current budget of the entity (and the
++ * service received if @entity is active) of the queue to calculate its
++ * timestamps.
++ */
++static void __bfq_activate_entity(struct bfq_entity *entity)
++{
++	struct bfq_sched_data *sd = entity->sched_data;
++	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++
++	if (entity == sd->in_service_entity) {
++		BUG_ON(entity->tree);
++		/*
++		 * If we are requeueing the current entity we have
++		 * to take care of not charging to it service it has
++		 * not received.
++		 */
++		bfq_calc_finish(entity, entity->service);
++		entity->start = entity->finish;
++		sd->in_service_entity = NULL;
++	} else if (entity->tree == &st->active) {
++		/*
++		 * Requeueing an entity due to a change of some
++		 * next_in_service entity below it.  We reuse the
++		 * old start time.
++		 */
++		bfq_active_extract(st, entity);
++	} else if (entity->tree == &st->idle) {
++		/*
++		 * Must be on the idle tree, bfq_idle_extract() will
++		 * check for that.
++		 */
++		bfq_idle_extract(st, entity);
++		entity->start = bfq_gt(st->vtime, entity->finish) ?
++				       st->vtime : entity->finish;
++	} else {
++		/*
++		 * The finish time of the entity may be invalid, and
++		 * it is in the past for sure, otherwise the queue
++		 * would have been on the idle tree.
++		 */
++		entity->start = st->vtime;
++		st->wsum += entity->weight;
++		bfq_get_entity(entity);
++
++		BUG_ON(entity->on_st);
++		entity->on_st = 1;
++	}
++
++	st = __bfq_entity_update_weight_prio(st, entity);
++	bfq_calc_finish(entity, entity->budget);
++	bfq_active_insert(st, entity);
++}
++
++/**
++ * bfq_activate_entity - activate an entity and its ancestors if necessary.
++ * @entity: the entity to activate.
++ *
++ * Activate @entity and all the entities on the path from it to the root.
++ */
++static void bfq_activate_entity(struct bfq_entity *entity)
++{
++	struct bfq_sched_data *sd;
++
++	for_each_entity(entity) {
++		__bfq_activate_entity(entity);
++
++		sd = entity->sched_data;
++		if (!bfq_update_next_in_service(sd))
++			/*
++			 * No need to propagate the activation to the
++			 * upper entities, as they will be updated when
++			 * the in-service entity is rescheduled.
++			 */
++			break;
++	}
++}
++
++/**
++ * __bfq_deactivate_entity - deactivate an entity from its service tree.
++ * @entity: the entity to deactivate.
++ * @requeue: if false, the entity will not be put into the idle tree.
++ *
++ * Deactivate an entity, independently from its previous state.  If the
++ * entity was not on a service tree just return, otherwise if it is on
++ * any scheduler tree, extract it from that tree, and if necessary
++ * and if the caller did not specify @requeue, put it on the idle tree.
++ *
++ * Return %1 if the caller should update the entity hierarchy, i.e.,
++ * if the entity was in service or if it was the next_in_service for
++ * its sched_data; return %0 otherwise.
++ */
++static int __bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++{
++	struct bfq_sched_data *sd = entity->sched_data;
++	struct bfq_service_tree *st;
++	int was_in_service;
++	int ret = 0;
++
++	if (sd == NULL || !entity->on_st) /* never activated, or inactive */
++		return 0;
++
++	st = bfq_entity_service_tree(entity);
++	was_in_service = entity == sd->in_service_entity;
++
++	BUG_ON(was_in_service && entity->tree);
++
++	if (was_in_service) {
++		bfq_calc_finish(entity, entity->service);
++		sd->in_service_entity = NULL;
++	} else if (entity->tree == &st->active)
++		bfq_active_extract(st, entity);
++	else if (entity->tree == &st->idle)
++		bfq_idle_extract(st, entity);
++	else if (entity->tree)
++		BUG();
++
++	if (was_in_service || sd->next_in_service == entity)
++		ret = bfq_update_next_in_service(sd);
++
++	if (!requeue || !bfq_gt(entity->finish, st->vtime))
++		bfq_forget_entity(st, entity);
++	else
++		bfq_idle_insert(st, entity);
++
++	BUG_ON(sd->in_service_entity == entity);
++	BUG_ON(sd->next_in_service == entity);
++
++	return ret;
++}
++
++/**
++ * bfq_deactivate_entity - deactivate an entity.
++ * @entity: the entity to deactivate.
++ * @requeue: true if the entity can be put on the idle tree
++ */
++static void bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++{
++	struct bfq_sched_data *sd;
++	struct bfq_entity *parent;
++
++	for_each_entity_safe(entity, parent) {
++		sd = entity->sched_data;
++
++		if (!__bfq_deactivate_entity(entity, requeue))
++			/*
++			 * The parent entity is still backlogged, and
++			 * we don't need to update it as it is still
++			 * in service.
++			 */
++			break;
++
++		if (sd->next_in_service)
++			/*
++			 * The parent entity is still backlogged and
++			 * the budgets on the path towards the root
++			 * need to be updated.
++			 */
++			goto update;
++
++		/*
++		 * If we reach there the parent is no more backlogged and
++		 * we want to propagate the dequeue upwards.
++		 */
++		requeue = 1;
++	}
++
++	return;
++
++update:
++	entity = parent;
++	for_each_entity(entity) {
++		__bfq_activate_entity(entity);
++
++		sd = entity->sched_data;
++		if (!bfq_update_next_in_service(sd))
++			break;
++	}
++}
++
++/**
++ * bfq_update_vtime - update vtime if necessary.
++ * @st: the service tree to act upon.
++ *
++ * If necessary update the service tree vtime to have at least one
++ * eligible entity, skipping to its start time.  Assumes that the
++ * active tree of the device is not empty.
++ *
++ * NOTE: this hierarchical implementation updates vtimes quite often,
++ * we may end up with reactivated processes getting timestamps after a
++ * vtime skip done because we needed a ->first_active entity on some
++ * intermediate node.
++ */
++static void bfq_update_vtime(struct bfq_service_tree *st)
++{
++	struct bfq_entity *entry;
++	struct rb_node *node = st->active.rb_node;
++
++	entry = rb_entry(node, struct bfq_entity, rb_node);
++	if (bfq_gt(entry->min_start, st->vtime)) {
++		st->vtime = entry->min_start;
++		bfq_forget_idle(st);
++	}
++}
++
++/**
++ * bfq_first_active_entity - find the eligible entity with
++ *                           the smallest finish time
++ * @st: the service tree to select from.
++ *
++ * This function searches the first schedulable entity, starting from the
++ * root of the tree and going on the left every time on this side there is
++ * a subtree with at least one eligible (start <= vtime) entity. The path on
++ * the right is followed only if a) the left subtree contains no eligible
++ * entities and b) no eligible entity has been found yet.
++ */
++static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st)
++{
++	struct bfq_entity *entry, *first = NULL;
++	struct rb_node *node = st->active.rb_node;
++
++	while (node) {
++		entry = rb_entry(node, struct bfq_entity, rb_node);
++left:
++		if (!bfq_gt(entry->start, st->vtime))
++			first = entry;
++
++		BUG_ON(bfq_gt(entry->min_start, st->vtime));
++
++		if (node->rb_left) {
++			entry = rb_entry(node->rb_left,
++					 struct bfq_entity, rb_node);
++			if (!bfq_gt(entry->min_start, st->vtime)) {
++				node = node->rb_left;
++				goto left;
++			}
++		}
++		if (first)
++			break;
++		node = node->rb_right;
++	}
++
++	BUG_ON(!first && !RB_EMPTY_ROOT(&st->active));
++	return first;
++}
++
++/**
++ * __bfq_lookup_next_entity - return the first eligible entity in @st.
++ * @st: the service tree.
++ *
++ * Update the virtual time in @st and return the first eligible entity
++ * it contains.
++ */
++static struct bfq_entity *__bfq_lookup_next_entity(struct bfq_service_tree *st,
++						   bool force)
++{
++	struct bfq_entity *entity, *new_next_in_service = NULL;
++
++	if (RB_EMPTY_ROOT(&st->active))
++		return NULL;
++
++	bfq_update_vtime(st);
++	entity = bfq_first_active_entity(st);
++	BUG_ON(bfq_gt(entity->start, st->vtime));
++
++	/*
++	 * If the chosen entity does not match with the sched_data's
++	 * next_in_service and we are forcedly serving the IDLE priority
++	 * class tree, bubble up budget update.
++	 */
++	if (unlikely(force && entity != entity->sched_data->next_in_service)) {
++		new_next_in_service = entity;
++		for_each_entity(new_next_in_service)
++			bfq_update_budget(new_next_in_service);
++	}
++
++	return entity;
++}
++
++/**
++ * bfq_lookup_next_entity - return the first eligible entity in @sd.
++ * @sd: the sched_data.
++ * @extract: if true the returned entity will be also extracted from @sd.
++ *
++ * NOTE: since we cache the next_in_service entity at each level of the
++ * hierarchy, the complexity of the lookup can be decreased with
++ * absolutely no effort just returning the cached next_in_service value;
++ * we prefer to do full lookups to test the consistency of the data
++ * structures.
++ */
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
++						 int extract,
++						 struct bfq_data *bfqd)
++{
++	struct bfq_service_tree *st = sd->service_tree;
++	struct bfq_entity *entity;
++	int i = 0;
++
++	BUG_ON(sd->in_service_entity);
++
++	if (bfqd &&
++	    jiffies - bfqd->bfq_class_idle_last_service > BFQ_CL_IDLE_TIMEOUT) {
++		entity = __bfq_lookup_next_entity(st + BFQ_IOPRIO_CLASSES - 1,
++						  true);
++		if (entity) {
++			i = BFQ_IOPRIO_CLASSES - 1;
++			bfqd->bfq_class_idle_last_service = jiffies;
++			sd->next_in_service = entity;
++		}
++	}
++	for (; i < BFQ_IOPRIO_CLASSES; i++) {
++		entity = __bfq_lookup_next_entity(st + i, false);
++		if (entity) {
++			if (extract) {
++				bfq_check_next_in_service(sd, entity);
++				bfq_active_extract(st + i, entity);
++				sd->in_service_entity = entity;
++				sd->next_in_service = NULL;
++			}
++			break;
++		}
++	}
++
++	return entity;
++}
++
++/*
++ * Get next queue for service.
++ */
++static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
++{
++	struct bfq_entity *entity = NULL;
++	struct bfq_sched_data *sd;
++	struct bfq_queue *bfqq;
++
++	BUG_ON(bfqd->in_service_queue);
++
++	if (bfqd->busy_queues == 0)
++		return NULL;
++
++	sd = &bfqd->root_group->sched_data;
++	for (; sd ; sd = entity->my_sched_data) {
++		entity = bfq_lookup_next_entity(sd, 1, bfqd);
++		BUG_ON(!entity);
++		entity->service = 0;
++	}
++
++	bfqq = bfq_entity_to_bfqq(entity);
++	BUG_ON(!bfqq);
++
++	return bfqq;
++}
++
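As a concrete reading of the descent loop in bfq_get_next_queue() above (a
walkthrough, not new behaviour): with CONFIG_BFQ_GROUP_IOSCHED the first
iteration looks up and extracts the next entity among the root group's
children, which may be a bfq_group entity; the loop then follows that entity's
my_sched_data and repeats one level down, until the entity chosen at some
level is a bfq_queue. Since a queue's my_sched_data is NULL (queues are
leaves, see the bfq_entity description in bfq.h below), the loop terminates
and the last entity found is converted back to its bfq_queue with
bfq_entity_to_bfqq().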
++static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
++{
++	if (bfqd->in_service_bic) {
++		put_io_context(bfqd->in_service_bic->icq.ioc);
++		bfqd->in_service_bic = NULL;
++	}
++
++	bfqd->in_service_queue = NULL;
++	del_timer(&bfqd->idle_slice_timer);
++}
++
++static void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++				int requeue)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	if (bfqq == bfqd->in_service_queue)
++		__bfq_bfqd_reset_in_service(bfqd);
++
++	bfq_deactivate_entity(entity, requeue);
++}
++
++static void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	bfq_activate_entity(entity);
++}
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static void bfqg_stats_update_dequeue(struct bfq_group *bfqg);
++#endif
++
++/*
++ * Called when the bfqq no longer has requests pending, remove it from
++ * the service tree.
++ */
++static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			      int requeue)
++{
++	BUG_ON(!bfq_bfqq_busy(bfqq));
++	BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++
++	bfq_log_bfqq(bfqd, bfqq, "del from busy");
++
++	bfq_clear_bfqq_busy(bfqq);
++
++	BUG_ON(bfqd->busy_queues == 0);
++	bfqd->busy_queues--;
++
++	if (!bfqq->dispatched) {
++		bfq_weights_tree_remove(bfqd, &bfqq->entity,
++					&bfqd->queue_weights_tree);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			BUG_ON(!bfqd->busy_in_flight_queues);
++			bfqd->busy_in_flight_queues--;
++			if (bfq_bfqq_constantly_seeky(bfqq)) {
++				BUG_ON(!bfqd->
++					const_seeky_busy_in_flight_queues);
++				bfqd->const_seeky_busy_in_flight_queues--;
++			}
++		}
++	}
++	if (bfqq->wr_coeff > 1)
++		bfqd->wr_busy_queues--;
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	bfqg_stats_update_dequeue(bfqq_group(bfqq));
++#endif
++
++	bfq_deactivate_bfqq(bfqd, bfqq, requeue);
++}
++
++/*
++ * Called when an inactive queue receives a new request.
++ */
++static void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	BUG_ON(bfq_bfqq_busy(bfqq));
++	BUG_ON(bfqq == bfqd->in_service_queue);
++
++	bfq_log_bfqq(bfqd, bfqq, "add to busy");
++
++	bfq_activate_bfqq(bfqd, bfqq);
++
++	bfq_mark_bfqq_busy(bfqq);
++	bfqd->busy_queues++;
++
++	if (!bfqq->dispatched) {
++		if (bfqq->wr_coeff == 1)
++			bfq_weights_tree_add(bfqd, &bfqq->entity,
++					     &bfqd->queue_weights_tree);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			bfqd->busy_in_flight_queues++;
++			if (bfq_bfqq_constantly_seeky(bfqq))
++				bfqd->const_seeky_busy_in_flight_queues++;
++		}
++	}
++	if (bfqq->wr_coeff > 1)
++		bfqd->wr_busy_queues++;
++}
+diff --git a/block/bfq.h b/block/bfq.h
+new file mode 100644
+index 0000000..2bf54ae
+--- /dev/null
++++ b/block/bfq.h
+@@ -0,0 +1,801 @@
++/*
++ * BFQ-v7r11 for 4.5.0: data structures and common functions prototypes.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++#ifndef _BFQ_H
++#define _BFQ_H
++
++#include <linux/blktrace_api.h>
++#include <linux/hrtimer.h>
++#include <linux/ioprio.h>
++#include <linux/rbtree.h>
++#include <linux/blk-cgroup.h>
++
++#define BFQ_IOPRIO_CLASSES	3
++#define BFQ_CL_IDLE_TIMEOUT	(HZ/5)
++
++#define BFQ_MIN_WEIGHT			1
++#define BFQ_MAX_WEIGHT			1000
++#define BFQ_WEIGHT_CONVERSION_COEFF	10
++
++#define BFQ_DEFAULT_QUEUE_IOPRIO	4
++
++#define BFQ_DEFAULT_GRP_WEIGHT	10
++#define BFQ_DEFAULT_GRP_IOPRIO	0
++#define BFQ_DEFAULT_GRP_CLASS	IOPRIO_CLASS_BE
++
++struct bfq_entity;
++
++/**
++ * struct bfq_service_tree - per ioprio_class service tree.
++ * @active: tree for active entities (i.e., those backlogged).
++ * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
++ * @first_idle: idle entity with minimum F_i.
++ * @last_idle: idle entity with maximum F_i.
++ * @vtime: scheduler virtual time.
++ * @wsum: scheduler weight sum; active and idle entities contribute to it.
++ *
++ * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
++ * ioprio_class has its own independent scheduler, and so its own
++ * bfq_service_tree.  All the fields are protected by the queue lock
++ * of the containing bfqd.
++ */
++struct bfq_service_tree {
++	struct rb_root active;
++	struct rb_root idle;
++
++	struct bfq_entity *first_idle;
++	struct bfq_entity *last_idle;
++
++	u64 vtime;
++	unsigned long wsum;
++};
++
++/**
++ * struct bfq_sched_data - multi-class scheduler.
++ * @in_service_entity: entity in service.
++ * @next_in_service: head-of-the-line entity in the scheduler.
++ * @service_tree: array of service trees, one per ioprio_class.
++ *
++ * bfq_sched_data is the basic scheduler queue.  It supports three
++ * ioprio_classes, and can be used either as a toplevel queue or as
++ * an intermediate queue on a hierarchical setup.
++ * @next_in_service points to the active entity of the sched_data
++ * service trees that will be scheduled next.
++ *
++ * The supported ioprio_classes are the same as in CFQ, in descending
++ * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
++ * Requests from higher priority queues are served before all the
++ * requests from lower priority queues; among requests of the same
++ * queue requests are served according to B-WF2Q+.
++ * All the fields are protected by the queue lock of the containing bfqd.
++ */
++struct bfq_sched_data {
++	struct bfq_entity *in_service_entity;
++	struct bfq_entity *next_in_service;
++	struct bfq_service_tree service_tree[BFQ_IOPRIO_CLASSES];
++};
++
++/**
++ * struct bfq_weight_counter - counter of the number of all active entities
++ *                             with a given weight.
++ * @weight: weight of the entities that this counter refers to.
++ * @num_active: number of active entities with this weight.
++ * @weights_node: weights tree member (see bfq_data's @queue_weights_tree
++ *                and @group_weights_tree).
++ */
++struct bfq_weight_counter {
++	short int weight;
++	unsigned int num_active;
++	struct rb_node weights_node;
++};
++
++/**
++ * struct bfq_entity - schedulable entity.
++ * @rb_node: service_tree member.
++ * @weight_counter: pointer to the weight counter associated with this entity.
++ * @on_st: flag, true if the entity is on a tree (either the active or
++ *         the idle one of its service_tree).
++ * @finish: B-WF2Q+ finish timestamp (aka F_i).
++ * @start: B-WF2Q+ start timestamp (aka S_i).
++ * @tree: tree the entity is enqueued into; %NULL if not on a tree.
++ * @min_start: minimum start time of the (active) subtree rooted at
++ *             this entity; used for O(log N) lookups into active trees.
++ * @service: service received during the last round of service.
++ * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
++ * @weight: weight of the queue
++ * @parent: parent entity, for hierarchical scheduling.
++ * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
++ *                 associated scheduler queue, %NULL on leaf nodes.
++ * @sched_data: the scheduler queue this entity belongs to.
++ * @ioprio: the ioprio in use.
++ * @new_weight: when a weight change is requested, the new weight value.
++ * @orig_weight: original weight, used to implement weight boosting
++ * @prio_changed: flag, true when the user requested a weight, ioprio or
++ *		  ioprio_class change.
++ *
++ * A bfq_entity is used to represent either a bfq_queue (leaf node in the
++ * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
++ * entity belongs to the sched_data of the parent group in the cgroup
++ * hierarchy.  Non-leaf entities have also their own sched_data, stored
++ * in @my_sched_data.
++ *
++ * Each entity stores independently its priority values; this would
++ * allow different weights on different devices, but this
++ * functionality is not exported to userspace by now.  Priorities and
++ * weights are updated lazily, first storing the new values into the
++ * new_* fields, then setting the @prio_changed flag.  As soon as
++ * there is a transition in the entity state that allows the priority
++ * update to take place the effective and the requested priority
++ * values are synchronized.
++ *
++ * Unless cgroups are used, the weight value is calculated from the
++ * ioprio to export the same interface as CFQ.  When dealing with
++ * ``well-behaved'' queues (i.e., queues that do not spend too much
++ * time to consume their budget and have true sequential behavior, and
++ * when there are no external factors breaking anticipation) the
++ * relative weights at each level of the cgroups hierarchy should be
++ * guaranteed.  All the fields are protected by the queue lock of the
++ * containing bfqd.
++ */
++struct bfq_entity {
++	struct rb_node rb_node;
++	struct bfq_weight_counter *weight_counter;
++
++	int on_st;
++
++	u64 finish;
++	u64 start;
++
++	struct rb_root *tree;
++
++	u64 min_start;
++
++	int service, budget;
++	unsigned short weight, new_weight;
++	unsigned short orig_weight;
++
++	struct bfq_entity *parent;
++
++	struct bfq_sched_data *my_sched_data;
++	struct bfq_sched_data *sched_data;
++
++	int prio_changed;
++};
++
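To make the finish-timestamp formula documented above concrete (made-up
numbers, purely illustrative): with F_i = S_i + budget/weight, an entity with
start S = 1000, budget 8192 and weight 100 gets F of about 1000 + 82 = 1082,
while an entity with the same start and budget but weight 10 gets about
1000 + 819 = 1819. The lighter entity's finish time lies roughly ten times
further ahead in virtual time, so it is picked for service correspondingly
less often, which is the weighted fairness B-WF2Q+ is designed to provide.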
++struct bfq_group;
++
++/**
++ * struct bfq_queue - leaf schedulable entity.
++ * @ref: reference counter.
++ * @bfqd: parent bfq_data.
++ * @new_ioprio: when an ioprio change is requested, the new ioprio value.
++ * @ioprio_class: the ioprio_class in use.
++ * @new_ioprio_class: when an ioprio_class change is requested, the new
++ *                    ioprio_class value.
++ * @new_bfqq: shared bfq_queue if queue is cooperating with
++ *           one or more other queues.
++ * @sort_list: sorted list of pending requests.
++ * @next_rq: if fifo isn't expired, next request to serve.
++ * @queued: nr of requests queued in @sort_list.
++ * @allocated: currently allocated requests.
++ * @meta_pending: pending metadata requests.
++ * @fifo: fifo list of requests in sort_list.
++ * @entity: entity representing this queue in the scheduler.
++ * @max_budget: maximum budget allowed from the feedback mechanism.
++ * @budget_timeout: budget expiration (in jiffies).
++ * @dispatched: number of requests on the dispatch list or inside driver.
++ * @flags: status flags.
++ * @bfqq_list: node for active/idle bfqq list inside our bfqd.
++ * @burst_list_node: node for the device's burst list.
++ * @seek_samples: number of seeks sampled
++ * @seek_total: sum of the distances of the seeks sampled
++ * @seek_mean: mean seek distance
++ * @last_request_pos: position of the last request enqueued
++ * @requests_within_timer: number of consecutive pairs of request completion
++ *                         and arrival, such that the queue becomes idle
++ *                         after the completion, but the next request arrives
++ *                         within an idle time slice; used only if the queue's
++ *                         IO_bound has been cleared.
++ * @pid: pid of the process owning the queue, used for logging purposes.
++ * @last_wr_start_finish: start time of the current weight-raising period if
++ *                        the @bfq-queue is being weight-raised, otherwise
++ *                        finish time of the last weight-raising period
++ * @wr_cur_max_time: current max raising time for this queue
++ * @soft_rt_next_start: minimum time instant such that, only if a new
++ *                      request is enqueued after this time instant in an
++ *                      idle @bfq_queue with no outstanding requests, then
++ *                      the task associated with the queue is deemed as
++ *                      soft real-time (see the comments to the function
++ *                      bfq_bfqq_softrt_next_start())
++ * @last_idle_bklogged: time of the last transition of the @bfq_queue from
++ *                      idle to backlogged
++ * @service_from_backlogged: cumulative service received from the @bfq_queue
++ *                           since the last transition from idle to
++ *                           backlogged
++ * @bic: pointer to the bfq_io_cq owning the bfq_queue, set to %NULL if the
++ *	 queue is shared
++ *
++ * A bfq_queue is a leaf request queue; it can be associated with an
++ * io_context or more, if it  is  async or shared  between  cooperating
++ * processes. @cgroup holds a reference to the cgroup, to be sure that it
++ * does not disappear while a bfqq still references it (mostly to avoid
++ * races between request issuing and task migration followed by cgroup
++ * destruction).
++ * All the fields are protected by the queue lock of the containing bfqd.
++ */
++struct bfq_queue {
++	atomic_t ref;
++	struct bfq_data *bfqd;
++
++	unsigned short ioprio, new_ioprio;
++	unsigned short ioprio_class, new_ioprio_class;
++
++	/* fields for cooperating queues handling */
++	struct bfq_queue *new_bfqq;
++	struct rb_node pos_node;
++	struct rb_root *pos_root;
++
++	struct rb_root sort_list;
++	struct request *next_rq;
++	int queued[2];
++	int allocated[2];
++	int meta_pending;
++	struct list_head fifo;
++
++	struct bfq_entity entity;
++
++	int max_budget;
++	unsigned long budget_timeout;
++
++	int dispatched;
++
++	unsigned int flags;
++
++	struct list_head bfqq_list;
++
++	struct hlist_node burst_list_node;
++
++	unsigned int seek_samples;
++	u64 seek_total;
++	sector_t seek_mean;
++	sector_t last_request_pos;
++
++	unsigned int requests_within_timer;
++
++	pid_t pid;
++	struct bfq_io_cq *bic;
++
++	/* weight-raising fields */
++	unsigned long wr_cur_max_time;
++	unsigned long soft_rt_next_start;
++	unsigned long last_wr_start_finish;
++	unsigned int wr_coeff;
++	unsigned long last_idle_bklogged;
++	unsigned long service_from_backlogged;
++};
++
++/**
++ * struct bfq_ttime - per process thinktime stats.
++ * @ttime_total: total process thinktime
++ * @ttime_samples: number of thinktime samples
++ * @ttime_mean: average process thinktime
++ */
++struct bfq_ttime {
++	unsigned long last_end_request;
++
++	unsigned long ttime_total;
++	unsigned long ttime_samples;
++	unsigned long ttime_mean;
++};
++
++/**
++ * struct bfq_io_cq - per (request_queue, io_context) structure.
++ * @icq: associated io_cq structure
++ * @bfqq: array of two process queues, the sync and the async
++ * @ttime: associated @bfq_ttime struct
++ * @ioprio: per (request_queue, blkcg) ioprio.
++ * @blkcg_id: id of the blkcg the related io_cq belongs to.
++ */
++struct bfq_io_cq {
++	struct io_cq icq; /* must be the first member */
++	struct bfq_queue *bfqq[2];
++	struct bfq_ttime ttime;
++	int ioprio;
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	uint64_t blkcg_id; /* the current blkcg ID */
++#endif
++};
++
++enum bfq_device_speed {
++	BFQ_BFQD_FAST,
++	BFQ_BFQD_SLOW,
++};
++
++/**
++ * struct bfq_data - per device data structure.
++ * @queue: request queue for the managed device.
++ * @root_group: root bfq_group for the device.
++ * @active_numerous_groups: number of bfq_groups containing more than one
++ *                          active @bfq_entity.
++ * @queue_weights_tree: rbtree of weight counters of @bfq_queues, sorted by
++ *                      weight. Used to keep track of whether all @bfq_queues
++ *                     have the same weight. The tree contains one counter
++ *                     for each distinct weight associated to some active
++ *                     and not weight-raised @bfq_queue (see the comments to
++ *                      the functions bfq_weights_tree_[add|remove] for
++ *                     further details).
++ * @group_weights_tree: rbtree of non-queue @bfq_entity weight counters, sorted
++ *                      by weight. Used to keep track of whether all
++ *                     @bfq_groups have the same weight. The tree contains
++ *                     one counter for each distinct weight associated to
++ *                     some active @bfq_group (see the comments to the
++ *                     functions bfq_weights_tree_[add|remove] for further
++ *                     details).
++ * @busy_queues: number of bfq_queues containing requests (including the
++ *		 queue in service, even if it is idling).
++ * @busy_in_flight_queues: number of @bfq_queues containing pending or
++ *                         in-flight requests, plus the @bfq_queue in
++ *                         service, even if idle but waiting for the
++ *                         possible arrival of its next sync request. This
++ *                         field is updated only if the device is rotational,
++ *                         but used only if the device is also NCQ-capable.
++ *                         The reason why the field is updated also for non-
++ *                         NCQ-capable rotational devices is related to the
++ *                         fact that the value of @hw_tag may be set also
++ *                         later than when busy_in_flight_queues may need to
++ *                         be incremented for the first time(s). Taking also
++ *                         this possibility into account, to avoid unbalanced
++ *                         increments/decrements, would imply more overhead
++ *                         than just updating busy_in_flight_queues
++ *                         regardless of the value of @hw_tag.
++ * @const_seeky_busy_in_flight_queues: number of constantly-seeky @bfq_queues
++ *                                     (that is, seeky queues that expired
++ *                                     for budget timeout at least once)
++ *                                     containing pending or in-flight
++ *                                     requests, including the in-service
++ *                                     @bfq_queue if constantly seeky. This
++ *                                     field is updated only if the device
++ *                                     is rotational, but used only if the
++ *                                     device is also NCQ-capable (see the
++ *                                     comments to @busy_in_flight_queues).
++ * @wr_busy_queues: number of weight-raised busy @bfq_queues.
++ * @queued: number of queued requests.
++ * @rq_in_driver: number of requests dispatched and waiting for completion.
++ * @sync_flight: number of sync requests in the driver.
++ * @max_rq_in_driver: max number of reqs in driver in the last
++ *                    @hw_tag_samples completed requests.
++ * @hw_tag_samples: nr of samples used to calculate hw_tag.
++ * @hw_tag: flag set to one if the driver is showing a queueing behavior.
++ * @budgets_assigned: number of budgets assigned.
++ * @idle_slice_timer: timer set when idling for the next sequential request
++ *                    from the queue in service.
++ * @unplug_work: delayed work to restart dispatching on the request queue.
++ * @in_service_queue: bfq_queue in service.
++ * @in_service_bic: bfq_io_cq (bic) associated with the @in_service_queue.
++ * @last_position: on-disk position of the last served request.
++ * @last_budget_start: beginning of the last budget.
++ * @last_idling_start: beginning of the last idle slice.
++ * @peak_rate: peak transfer rate observed for a budget.
++ * @peak_rate_samples: number of samples used to calculate @peak_rate.
++ * @bfq_max_budget: maximum budget allotted to a bfq_queue before
++ *                  rescheduling.
++ * @active_list: list of all the bfq_queues active on the device.
++ * @idle_list: list of all the bfq_queues idle on the device.
++ * @bfq_fifo_expire: timeout for async/sync requests; when it expires
++ *                   requests are served in fifo order.
++ * @bfq_back_penalty: weight of backward seeks wrt forward ones.
++ * @bfq_back_max: maximum allowed backward seek.
++ * @bfq_slice_idle: maximum idling time.
++ * @bfq_user_max_budget: user-configured max budget value
++ *                       (0 for auto-tuning).
++ * @bfq_max_budget_async_rq: maximum budget (in nr of requests) allotted to
++ *                           async queues.
++ * @bfq_timeout: timeout for bfq_queues to consume their budget; used to
++ *               prevent seeky queues from imposing long latencies on well
++ *               behaved ones (this also implies that seeky queues cannot
++ *               receive guarantees in the service domain; after a timeout
++ *               they are charged for the whole allocated budget, to try
++ *               to preserve a behavior reasonably fair among them, but
++ *               without service-domain guarantees).
++ * @bfq_coop_thresh: number of queue merges after which a @bfq_queue is
++ *                   no more granted any weight-raising.
++ * @bfq_failed_cooperations: number of consecutive failed cooperation
++ *                           chances after which weight-raising is restored
++ *                           to a queue subject to more than bfq_coop_thresh
++ *                           queue merges.
++ * @bfq_requests_within_timer: number of consecutive requests that must be
++ *                             issued within the idle time slice to set
++ *                             again idling to a queue which was marked as
++ *                             non-I/O-bound (see the definition of the
++ *                             IO_bound flag for further details).
++ * @last_ins_in_burst: last time at which a queue entered the current
++ *                     burst of queues being activated shortly after
++ *                     each other; for more details about this and the
++ *                     following parameters related to a burst of
++ *                     activations, see the comments to the function
++ *                     @bfq_handle_burst.
++ * @bfq_burst_interval: reference time interval used to decide whether a
++ *                      queue has been activated shortly after
++ *                      @last_ins_in_burst.
++ * @burst_size: number of queues in the current burst of queue activations.
++ * @bfq_large_burst_thresh: maximum burst size above which the current
++ *			    queue-activation burst is deemed as 'large'.
++ * @large_burst: true if a large queue-activation burst is in progress.
++ * @burst_list: head of the burst list (as for the above fields, more details
++ *		in the comments to the function bfq_handle_burst).
++ * @low_latency: if set to true, low-latency heuristics are enabled.
++ * @bfq_wr_coeff: maximum factor by which the weight of a weight-raised
++ *                queue is multiplied.
++ * @bfq_wr_max_time: maximum duration of a weight-raising period (jiffies).
++ * @bfq_wr_rt_max_time: maximum duration for soft real-time processes.
++ * @bfq_wr_min_idle_time: minimum idle period after which weight-raising
++ *			  may be reactivated for a queue (in jiffies).
++ * @bfq_wr_min_inter_arr_async: minimum period between request arrivals
++ *				after which weight-raising may be
++ *				reactivated for an already busy queue
++ *				(in jiffies).
++ * @bfq_wr_max_softrt_rate: max service-rate for a soft real-time queue,
++ *			    sectors per seconds.
++ * @RT_prod: cached value of the product R*T used for computing the maximum
++ *	     duration of the weight raising automatically.
++ * @device_speed: device-speed class for the low-latency heuristic.
++ * @oom_bfqq: fallback dummy bfqq for extreme OOM conditions.
++ *
++ * All the fields are protected by the @queue lock.
++ */
++struct bfq_data {
++	struct request_queue *queue;
++
++	struct bfq_group *root_group;
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	int active_numerous_groups;
++#endif
++
++	struct rb_root queue_weights_tree;
++	struct rb_root group_weights_tree;
++
++	int busy_queues;
++	int busy_in_flight_queues;
++	int const_seeky_busy_in_flight_queues;
++	int wr_busy_queues;
++	int queued;
++	int rq_in_driver;
++	int sync_flight;
++
++	int max_rq_in_driver;
++	int hw_tag_samples;
++	int hw_tag;
++
++	int budgets_assigned;
++
++	struct timer_list idle_slice_timer;
++	struct work_struct unplug_work;
++
++	struct bfq_queue *in_service_queue;
++	struct bfq_io_cq *in_service_bic;
++
++	sector_t last_position;
++
++	ktime_t last_budget_start;
++	ktime_t last_idling_start;
++	int peak_rate_samples;
++	u64 peak_rate;
++	int bfq_max_budget;
++
++	struct list_head active_list;
++	struct list_head idle_list;
++
++	unsigned int bfq_fifo_expire[2];
++	unsigned int bfq_back_penalty;
++	unsigned int bfq_back_max;
++	unsigned int bfq_slice_idle;
++	u64 bfq_class_idle_last_service;
++
++	int bfq_user_max_budget;
++	int bfq_max_budget_async_rq;
++	unsigned int bfq_timeout[2];
++
++	unsigned int bfq_coop_thresh;
++	unsigned int bfq_failed_cooperations;
++	unsigned int bfq_requests_within_timer;
++
++	unsigned long last_ins_in_burst;
++	unsigned long bfq_burst_interval;
++	int burst_size;
++	unsigned long bfq_large_burst_thresh;
++	bool large_burst;
++	struct hlist_head burst_list;
++
++	bool low_latency;
++
++	/* parameters of the low_latency heuristics */
++	unsigned int bfq_wr_coeff;
++	unsigned int bfq_wr_max_time;
++	unsigned int bfq_wr_rt_max_time;
++	unsigned int bfq_wr_min_idle_time;
++	unsigned long bfq_wr_min_inter_arr_async;
++	unsigned int bfq_wr_max_softrt_rate;
++	u64 RT_prod;
++	enum bfq_device_speed device_speed;
++
++	struct bfq_queue oom_bfqq;
++};
++
++enum bfqq_state_flags {
++	BFQ_BFQQ_FLAG_busy = 0,		/* has requests or is in service */
++	BFQ_BFQQ_FLAG_wait_request,	/* waiting for a request */
++	BFQ_BFQQ_FLAG_must_alloc,	/* must be allowed rq alloc */
++	BFQ_BFQQ_FLAG_fifo_expire,	/* FIFO checked in this slice */
++	BFQ_BFQQ_FLAG_idle_window,	/* slice idling enabled */
++	BFQ_BFQQ_FLAG_sync,		/* synchronous queue */
++	BFQ_BFQQ_FLAG_budget_new,	/* no completion with this budget */
++	BFQ_BFQQ_FLAG_IO_bound,		/*
++					 * bfqq has timed-out at least once
++					 * having consumed at most 2/10 of
++					 * its budget
++					 */
++	BFQ_BFQQ_FLAG_in_large_burst,	/*
++					 * bfqq activated in a large burst,
++					 * see comments to bfq_handle_burst.
++					 */
++	BFQ_BFQQ_FLAG_constantly_seeky,	/*
++					 * bfqq has proved to be slow and
++					 * seeky until budget timeout
++					 */
++	BFQ_BFQQ_FLAG_softrt_update,	/*
++					 * may need softrt-next-start
++					 * update
++					 */
++};
++
++#define BFQ_BFQQ_FNS(name)						\
++static void bfq_mark_bfqq_##name(struct bfq_queue *bfqq)		\
++{									\
++	(bfqq)->flags |= (1 << BFQ_BFQQ_FLAG_##name);			\
++}									\
++static void bfq_clear_bfqq_##name(struct bfq_queue *bfqq)		\
++{									\
++	(bfqq)->flags &= ~(1 << BFQ_BFQQ_FLAG_##name);			\
++}									\
++static int bfq_bfqq_##name(const struct bfq_queue *bfqq)		\
++{									\
++	return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_##name)) != 0;	\
++}
++
++BFQ_BFQQ_FNS(busy);
++BFQ_BFQQ_FNS(wait_request);
++BFQ_BFQQ_FNS(must_alloc);
++BFQ_BFQQ_FNS(fifo_expire);
++BFQ_BFQQ_FNS(idle_window);
++BFQ_BFQQ_FNS(sync);
++BFQ_BFQQ_FNS(budget_new);
++BFQ_BFQQ_FNS(IO_bound);
++BFQ_BFQQ_FNS(in_large_burst);
++BFQ_BFQQ_FNS(constantly_seeky);
++BFQ_BFQQ_FNS(softrt_update);
++#undef BFQ_BFQQ_FNS
++
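For readers skimming the patch, each BFQ_BFQQ_FNS(name) invocation above
expands, per the macro definition, into three small helpers that set, clear
and test the corresponding bit in bfqq->flags. For instance,
BFQ_BFQQ_FNS(busy) generates the following (expansion shown only for
illustration):

	static void bfq_mark_bfqq_busy(struct bfq_queue *bfqq)
	{
		(bfqq)->flags |= (1 << BFQ_BFQQ_FLAG_busy);
	}

plus the matching bfq_clear_bfqq_busy() and bfq_bfqq_busy(), which are the
helpers used by bfq_add_bfqq_busy() and bfq_del_bfqq_busy() in bfq-sched.c
above.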
++/* Logging facilities. */
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \
++	blk_add_trace_msg((bfqd)->queue, "bfq%d " fmt, (bfqq)->pid, ##args)
++
++#define bfq_log(bfqd, fmt, args...) \
++	blk_add_trace_msg((bfqd)->queue, "bfq " fmt, ##args)
++
++/* Expiration reasons. */
++enum bfqq_expiration {
++	BFQ_BFQQ_TOO_IDLE = 0,		/*
++					 * queue has been idling for
++					 * too long
++					 */
++	BFQ_BFQQ_BUDGET_TIMEOUT,	/* budget took too long to be used */
++	BFQ_BFQQ_BUDGET_EXHAUSTED,	/* budget consumed */
++	BFQ_BFQQ_NO_MORE_REQUESTS,	/* the queue has no more requests */
++};
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++
++struct bfqg_stats {
++	/* total bytes transferred */
++	struct blkg_rwstat		service_bytes;
++	/* total IOs serviced, post merge */
++	struct blkg_rwstat		serviced;
++	/* number of ios merged */
++	struct blkg_rwstat		merged;
++	/* total time spent on device in ns, may not be accurate w/ queueing */
++	struct blkg_rwstat		service_time;
++	/* total time spent waiting in scheduler queue in ns */
++	struct blkg_rwstat		wait_time;
++	/* number of IOs queued up */
++	struct blkg_rwstat		queued;
++	/* total sectors transferred */
++	struct blkg_stat		sectors;
++	/* total disk time and nr sectors dispatched by this group */
++	struct blkg_stat		time;
++	/* time not charged to this cgroup */
++	struct blkg_stat		unaccounted_time;
++	/* sum of number of ios queued across all samples */
++	struct blkg_stat		avg_queue_size_sum;
++	/* count of samples taken for average */
++	struct blkg_stat		avg_queue_size_samples;
++	/* how many times this group has been removed from service tree */
++	struct blkg_stat		dequeue;
++	/* total time spent waiting for it to be assigned a timeslice. */
++	struct blkg_stat		group_wait_time;
++	/* time spent idling for this blkcg_gq */
++	struct blkg_stat		idle_time;
++	/* total time with empty current active q with other requests queued */
++	struct blkg_stat		empty_time;
++	/* fields after this shouldn't be cleared on stat reset */
++	uint64_t			start_group_wait_time;
++	uint64_t			start_idle_time;
++	uint64_t			start_empty_time;
++	uint16_t			flags;
++};
++
++/*
++ * struct bfq_group_data - per-blkcg storage for the blkio subsystem.
++ *
++ * @ps: @blkcg_policy_storage that this structure inherits
++ * @weight: weight of the bfq_group
++ */
++struct bfq_group_data {
++	/* must be the first member */
++	struct blkcg_policy_data pd;
++
++	unsigned short weight;
++};
++
++/**
++ * struct bfq_group - per (device, cgroup) data structure.
++ * @entity: schedulable entity to insert into the parent group sched_data.
++ * @sched_data: own sched_data, to contain child entities (they may be
++ *              both bfq_queues and bfq_groups).
++ * @bfqd: the bfq_data for the device this group acts upon.
++ * @async_bfqq: array of async queues for all the tasks belonging to
++ *              the group, one queue per ioprio value per ioprio_class,
++ *              except for the idle class that has only one queue.
++ * @async_idle_bfqq: async queue for the idle class (ioprio is ignored).
++ * @my_entity: pointer to @entity, %NULL for the toplevel group; used
++ *             to avoid too many special cases during group creation/
++ *             migration.
++ * @active_entities: number of active entities belonging to the group;
++ *                   unused for the root group. Used to know whether there
++ *                   are groups with more than one active @bfq_entity
++ *                   (see the comments to the function
++ *                   bfq_bfqq_must_not_expire()).
++ *
++ * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup
++ * there is a set of bfq_groups, each one collecting the lower-level
++ * entities belonging to the group that are acting on the same device.
++ *
++ * Locking works as follows:
++ *    o @bfqd is protected by the queue lock, RCU is used to access it
++ *      from the readers.
++ *    o All the other fields are protected by the @bfqd queue lock.
++ */
++struct bfq_group {
++	/* must be the first member */
++	struct blkg_policy_data pd;
++
++	struct bfq_entity entity;
++	struct bfq_sched_data sched_data;
++
++	void *bfqd;
++
++	struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
++	struct bfq_queue *async_idle_bfqq;
++
++	struct bfq_entity *my_entity;
++
++	int active_entities;
++
++	struct bfqg_stats stats;
++	struct bfqg_stats dead_stats;	/* stats pushed from dead children */
++};
++
++#else
++struct bfq_group {
++	struct bfq_sched_data sched_data;
++
++	struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
++	struct bfq_queue *async_idle_bfqq;
++};
++#endif
++
++static struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity);
++
++static struct bfq_service_tree *
++bfq_entity_service_tree(struct bfq_entity *entity)
++{
++	struct bfq_sched_data *sched_data = entity->sched_data;
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	unsigned int idx = bfqq ? bfqq->ioprio_class - 1 :
++				  BFQ_DEFAULT_GRP_CLASS;
++
++	BUG_ON(idx >= BFQ_IOPRIO_CLASSES);
++	BUG_ON(sched_data == NULL);
++
++	return sched_data->service_tree + idx;
++}
++
++static struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync)
++{
++	return bic->bfqq[is_sync];
++}
++
++static void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq,
++			 bool is_sync)
++{
++	bic->bfqq[is_sync] = bfqq;
++}
++
++static struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic)
++{
++	return bic->icq.q->elevator->elevator_data;
++}
++
++/**
++ * bfq_get_bfqd_locked - get a lock to a bfqd using a RCU protected pointer.
++ * @ptr: a pointer to a bfqd.
++ * @flags: storage for the flags to be saved.
++ *
++ * This function allows bfqg->bfqd to be protected by the
++ * queue lock of the bfqd they reference; the pointer is dereferenced
++ * under RCU, so the storage for bfqd is assured to be safe as long
++ * as the RCU read side critical section does not end.  After the
++ * bfqd->queue->queue_lock is taken the pointer is rechecked, to be
++ * sure that no other writer accessed it.  If we raced with a writer,
++ * the function returns NULL, with the queue unlocked, otherwise it
++ * returns the dereferenced pointer, with the queue locked.
++ */
++static struct bfq_data *bfq_get_bfqd_locked(void **ptr, unsigned long *flags)
++{
++	struct bfq_data *bfqd;
++
++	rcu_read_lock();
++	bfqd = rcu_dereference(*(struct bfq_data **)ptr);
++
++	if (bfqd != NULL) {
++		spin_lock_irqsave(bfqd->queue->queue_lock, *flags);
++		if (ptr == NULL)
++			printk(KERN_CRIT "get_bfqd_locked pointer NULL\n");
++		else if (*ptr == bfqd)
++			goto out;
++		spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
++	}
++
++	bfqd = NULL;
++out:
++	rcu_read_unlock();
++	return bfqd;
++}
++
++static void bfq_put_bfqd_unlock(struct bfq_data *bfqd, unsigned long *flags)
++{
++	spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
++}
++
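A minimal sketch of how a caller is expected to pair the two helpers above
(hypothetical code, not part of the patch; bfqg here stands for some
struct bfq_group whose bfqd pointer is being chased):

	unsigned long flags;
	struct bfq_data *bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);

	if (bfqd) {
		/* bfqd->queue->queue_lock is held and bfqd cannot go away here */
		/* ... read or update per-device state ... */
		bfq_put_bfqd_unlock(bfqd, &flags);
	}
	/* a NULL return means a writer changed the pointer; no lock is held */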
++static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio);
++static void bfq_put_queue(struct bfq_queue *bfqq);
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
++				       struct bio *bio, int is_sync,
++				       struct bfq_io_cq *bic, gfp_t gfp_mask);
++static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
++				    struct bfq_group *bfqg);
++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq);
++
++#endif /* _BFQ_H */
+-- 
+2.7.4 (Apple Git-66)
+

diff --git a/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r11-for-4.8.patch b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r11-for-4.8.patch
new file mode 100644
index 0000000..2a53175
--- /dev/null
+++ b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r11-for-4.8.patch
@@ -0,0 +1,1101 @@
+From 409e62551360d2802992b0175062237352793a2a Mon Sep 17 00:00:00 2001
+From: Mauro Andreolini <mauro.andreolini@unimore.it>
+Date: Sun, 6 Sep 2015 16:09:05 +0200
+Subject: [PATCH 3/4] block, bfq: add Early Queue Merge (EQM) to BFQ-v7r11, to
+ port to 4.8.0
+
+A set of processes may happen  to  perform interleaved reads, i.e., requests
+whose union would give rise to a  sequential read  pattern.  There are two
+typical  cases: in the first  case,   processes  read  fixed-size chunks of
+data at a fixed distance from each other, while in the second case processes
+may read variable-size chunks at  variable distances. The latter case occurs
+for  example with  QEMU, which  splits the  I/O generated  by the  guest into
+multiple chunks,  and lets these chunks  be served by a  pool of cooperating
+processes,  iteratively  assigning  the  next  chunk of  I/O  to  the first
+available  process. CFQ  uses actual  queue merging  for the  first type of
+processes, whereas it  uses preemption to get a sequential  read pattern out
+of the read requests  performed by the second type of  processes. In the end
+it uses  two different  mechanisms to  achieve the  same goal: boosting the
+throughput with interleaved I/O.
+
+This patch introduces  Early Queue Merge (EQM), a unified mechanism to get a
+sequential  read pattern  with both  types of  processes. The  main idea is
+checking newly arrived requests against the next request of the active queue
+both in case of actual request insert and in case of request merge. By doing
+so, both the types of processes can be handled by just merging their queues.
+EQM is  then simpler and  more compact than the  pair of mechanisms used in
+CFQ.
+
+Finally, EQM  also preserves the  typical low-latency properties of BFQ, by
+properly restoring the weight-raising state of a queue when it gets back to
+a non-merged state.
+
+Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini@google.com>
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+---
+ block/bfq-cgroup.c  |   5 +
+ block/bfq-iosched.c | 685 +++++++++++++++++++++++++++++++++++++++++++++++++++-
+ block/bfq.h         |  66 +++++
+ 3 files changed, 743 insertions(+), 13 deletions(-)
+
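To picture the scenario the changelog describes (an illustrative example, not
text from the patch): two cooperating processes A and B serve one logically
sequential read, with A issuing requests for sectors 0-127 and 256-383 while
B issues 128-255 and 384-511. Taken per process, each request stream looks
seeky; taken together, the union is perfectly sequential. With EQM, when a
request from B arrives close to the next request of A's in-service queue (a
check performed both on request insert and on request merge), the two
bfq_queues are merged and served as a single sequential reader, giving the
throughput boost the changelog refers to.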
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 8b08a57..0367996 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -440,6 +440,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
+ 				   */
+ 	bfqg->bfqd = bfqd;
+ 	bfqg->active_entities = 0;
++	bfqg->rq_pos_tree = RB_ROOT;
+ }
+ 
+ static void bfq_pd_free(struct blkg_policy_data *pd)
+@@ -533,6 +534,9 @@ static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
+ 	return bfqg;
+ }
+ 
++static void bfq_pos_tree_add_move(struct bfq_data *bfqd,
++				  struct bfq_queue *bfqq);
++
+ /**
+  * bfq_bfqq_move - migrate @bfqq to @bfqg.
+  * @bfqd: queue descriptor.
+@@ -580,6 +584,7 @@ static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	bfqg_get(bfqg);
+ 
+ 	if (busy) {
++		bfq_pos_tree_add_move(bfqd, bfqq);
+ 		if (resume)
+ 			bfq_activate_bfqq(bfqd, bfqq);
+ 	}
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 85e2169..cf3e9b1 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -295,6 +295,72 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
+ 	}
+ }
+ 
++static struct bfq_queue *
++bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root,
++		     sector_t sector, struct rb_node **ret_parent,
++		     struct rb_node ***rb_link)
++{
++	struct rb_node **p, *parent;
++	struct bfq_queue *bfqq = NULL;
++
++	parent = NULL;
++	p = &root->rb_node;
++	while (*p) {
++		struct rb_node **n;
++
++		parent = *p;
++		bfqq = rb_entry(parent, struct bfq_queue, pos_node);
++
++		/*
++		 * Sort strictly based on sector. Smallest to the left,
++		 * largest to the right.
++		 */
++		if (sector > blk_rq_pos(bfqq->next_rq))
++			n = &(*p)->rb_right;
++		else if (sector < blk_rq_pos(bfqq->next_rq))
++			n = &(*p)->rb_left;
++		else
++			break;
++		p = n;
++		bfqq = NULL;
++	}
++
++	*ret_parent = parent;
++	if (rb_link)
++		*rb_link = p;
++
++	bfq_log(bfqd, "rq_pos_tree_lookup %llu: returning %d",
++		(unsigned long long) sector,
++		bfqq ? bfqq->pid : 0);
++
++	return bfqq;
++}
++
++static void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	struct rb_node **p, *parent;
++	struct bfq_queue *__bfqq;
++
++	if (bfqq->pos_root) {
++		rb_erase(&bfqq->pos_node, bfqq->pos_root);
++		bfqq->pos_root = NULL;
++	}
++
++	if (bfq_class_idle(bfqq))
++		return;
++	if (!bfqq->next_rq)
++		return;
++
++	bfqq->pos_root = &bfq_bfqq_to_bfqg(bfqq)->rq_pos_tree;
++	__bfqq = bfq_rq_pos_tree_lookup(bfqd, bfqq->pos_root,
++			blk_rq_pos(bfqq->next_rq), &parent, &p);
++	if (!__bfqq) {
++		rb_link_node(&bfqq->pos_node, parent, p);
++		rb_insert_color(&bfqq->pos_node, bfqq->pos_root);
++	} else
++		bfqq->pos_root = NULL;
++}
++
+ /*
+  * Tell whether there are active queues or groups with differentiated weights.
+  */
+@@ -527,6 +593,57 @@ static unsigned int bfq_wr_duration(struct bfq_data *bfqd)
+ 	return dur;
+ }
+ 
++static unsigned int bfq_bfqq_cooperations(struct bfq_queue *bfqq)
++{
++	return bfqq->bic ? bfqq->bic->cooperations : 0;
++}
++
++static void
++bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++{
++	if (bic->saved_idle_window)
++		bfq_mark_bfqq_idle_window(bfqq);
++	else
++		bfq_clear_bfqq_idle_window(bfqq);
++	if (bic->saved_IO_bound)
++		bfq_mark_bfqq_IO_bound(bfqq);
++	else
++		bfq_clear_bfqq_IO_bound(bfqq);
++	/* Assuming that the flag in_large_burst is already correctly set */
++	if (bic->wr_time_left && bfqq->bfqd->low_latency &&
++	    !bfq_bfqq_in_large_burst(bfqq) &&
++	    bic->cooperations < bfqq->bfqd->bfq_coop_thresh) {
++		/*
++		 * Start a weight raising period with the duration given by
++		 * the raising_time_left snapshot.
++		 */
++		if (bfq_bfqq_busy(bfqq))
++			bfqq->bfqd->wr_busy_queues++;
++		bfqq->wr_coeff = bfqq->bfqd->bfq_wr_coeff;
++		bfqq->wr_cur_max_time = bic->wr_time_left;
++		bfqq->last_wr_start_finish = jiffies;
++		bfqq->entity.prio_changed = 1;
++	}
++	/*
++	 * Clear wr_time_left to prevent bfq_bfqq_save_state() from
++	 * getting confused about the queue's need of a weight-raising
++	 * period.
++	 */
++	bic->wr_time_left = 0;
++}
++
++static int bfqq_process_refs(struct bfq_queue *bfqq)
++{
++	int process_refs, io_refs;
++
++	lockdep_assert_held(bfqq->bfqd->queue->queue_lock);
++
++	io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
++	process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
++	BUG_ON(process_refs < 0);
++	return process_refs;
++}
++
+ /* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
+ static void bfq_reset_burst_list(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+@@ -763,8 +880,14 @@ static void bfq_add_request(struct request *rq)
+ 	BUG_ON(!next_rq);
+ 	bfqq->next_rq = next_rq;
+ 
++	/*
++	 * Adjust priority tree position, if next_rq changes.
++	 */
++	if (prev != bfqq->next_rq)
++		bfq_pos_tree_add_move(bfqd, bfqq);
++
+ 	if (!bfq_bfqq_busy(bfqq)) {
+-		bool soft_rt, in_burst,
++		bool soft_rt, coop_or_in_burst,
+ 		     idle_for_long_time = time_is_before_jiffies(
+ 						bfqq->budget_timeout +
+ 						bfqd->bfq_wr_min_idle_time);
+@@ -792,11 +915,12 @@ static void bfq_add_request(struct request *rq)
+ 				bfqd->last_ins_in_burst = jiffies;
+ 		}
+ 
+-		in_burst = bfq_bfqq_in_large_burst(bfqq);
++		coop_or_in_burst = bfq_bfqq_in_large_burst(bfqq) ||
++			bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh;
+ 		soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
+-			!in_burst &&
++			!coop_or_in_burst &&
+ 			time_is_before_jiffies(bfqq->soft_rt_next_start);
+-		interactive = !in_burst && idle_for_long_time;
++		interactive = !coop_or_in_burst && idle_for_long_time;
+ 		entity->budget = max_t(unsigned long, bfqq->max_budget,
+ 				       bfq_serv_to_charge(next_rq, bfqq));
+ 
+@@ -815,6 +939,9 @@ static void bfq_add_request(struct request *rq)
+ 		if (!bfqd->low_latency)
+ 			goto add_bfqq_busy;
+ 
++		if (bfq_bfqq_just_split(bfqq))
++			goto set_prio_changed;
++
+ 		/*
+ 		 * If the queue:
+ 		 * - is not being boosted,
+@@ -839,7 +966,7 @@ static void bfq_add_request(struct request *rq)
+ 		} else if (old_wr_coeff > 1) {
+ 			if (interactive)
+ 				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+-			else if (in_burst ||
++			else if (coop_or_in_burst ||
+ 				 (bfqq->wr_cur_max_time ==
+ 				  bfqd->bfq_wr_rt_max_time &&
+ 				  !soft_rt)) {
+@@ -904,6 +1031,7 @@ static void bfq_add_request(struct request *rq)
+ 					bfqd->bfq_wr_rt_max_time;
+ 			}
+ 		}
++set_prio_changed:
+ 		if (old_wr_coeff != bfqq->wr_coeff)
+ 			entity->prio_changed = 1;
+ add_bfqq_busy:
+@@ -1046,6 +1174,15 @@ static void bfq_merged_request(struct request_queue *q, struct request *req,
+ 					 bfqd->last_position);
+ 		BUG_ON(!next_rq);
+ 		bfqq->next_rq = next_rq;
++		/*
++		 * If next_rq changes, update both the queue's budget to
++		 * fit the new request and the queue's position in its
++		 * rq_pos_tree.
++		 */
++		if (prev != bfqq->next_rq) {
++			bfq_updated_next_req(bfqd, bfqq);
++			bfq_pos_tree_add_move(bfqd, bfqq);
++		}
+ 	}
+ }
+ 
+@@ -1128,11 +1265,346 @@ static void bfq_end_wr(struct bfq_data *bfqd)
+ 	spin_unlock_irq(bfqd->queue->queue_lock);
+ }
+ 
++static sector_t bfq_io_struct_pos(void *io_struct, bool request)
++{
++	if (request)
++		return blk_rq_pos(io_struct);
++	else
++		return ((struct bio *)io_struct)->bi_iter.bi_sector;
++}
++
++static int bfq_rq_close_to_sector(void *io_struct, bool request,
++				  sector_t sector)
++{
++	return abs(bfq_io_struct_pos(io_struct, request) - sector) <=
++	       BFQQ_SEEK_THR;
++}
++
++static struct bfq_queue *bfqq_find_close(struct bfq_data *bfqd,
++					 struct bfq_queue *bfqq,
++					 sector_t sector)
++{
++	struct rb_root *root = &bfq_bfqq_to_bfqg(bfqq)->rq_pos_tree;
++	struct rb_node *parent, *node;
++	struct bfq_queue *__bfqq;
++
++	if (RB_EMPTY_ROOT(root))
++		return NULL;
++
++	/*
++	 * First, if we find a request starting at the end of the last
++	 * request, choose it.
++	 */
++	__bfqq = bfq_rq_pos_tree_lookup(bfqd, root, sector, &parent, NULL);
++	if (__bfqq)
++		return __bfqq;
++
++	/*
++	 * If the exact sector wasn't found, the parent of the NULL leaf
++	 * will contain the closest sector (rq_pos_tree sorted by
++	 * next_request position).
++	 */
++	__bfqq = rb_entry(parent, struct bfq_queue, pos_node);
++	if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
++		return __bfqq;
++
++	if (blk_rq_pos(__bfqq->next_rq) < sector)
++		node = rb_next(&__bfqq->pos_node);
++	else
++		node = rb_prev(&__bfqq->pos_node);
++	if (!node)
++		return NULL;
++
++	__bfqq = rb_entry(node, struct bfq_queue, pos_node);
++	if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
++		return __bfqq;
++
++	return NULL;
++}
++
++static struct bfq_queue *bfq_find_close_cooperator(struct bfq_data *bfqd,
++						   struct bfq_queue *cur_bfqq,
++						   sector_t sector)
++{
++	struct bfq_queue *bfqq;
++
++	/*
++	 * We shall notice if some of the queues are cooperating,
++	 * e.g., working closely on the same area of the device. In
++	 * that case, we can group them together and: 1) don't waste
++	 * time idling, and 2) serve the union of their requests in
++	 * the best possible order for throughput.
++	 */
++	bfqq = bfqq_find_close(bfqd, cur_bfqq, sector);
++	if (!bfqq || bfqq == cur_bfqq)
++		return NULL;
++
++	return bfqq;
++}
++
++static struct bfq_queue *
++bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++	int process_refs, new_process_refs;
++	struct bfq_queue *__bfqq;
++
++	/*
++	 * If there are no process references on the new_bfqq, then it is
++	 * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
++	 * may have dropped their last reference (not just their last process
++	 * reference).
++	 */
++	if (!bfqq_process_refs(new_bfqq))
++		return NULL;
++
++	/* Avoid a circular list and skip interim queue merges. */
++	while ((__bfqq = new_bfqq->new_bfqq)) {
++		if (__bfqq == bfqq)
++			return NULL;
++		new_bfqq = __bfqq;
++	}
++
++	process_refs = bfqq_process_refs(bfqq);
++	new_process_refs = bfqq_process_refs(new_bfqq);
++	/*
++	 * If the process for the bfqq has gone away, there is no
++	 * sense in merging the queues.
++	 */
++	if (process_refs == 0 || new_process_refs == 0)
++		return NULL;
++
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
++		new_bfqq->pid);
++
++	/*
++	 * Merging is just a redirection: the requests of the process
++	 * owning one of the two queues are redirected to the other queue.
++	 * The latter queue, in its turn, is set as shared if this is the
++	 * first time that the requests of some process are redirected to
++	 * it.
++	 *
++	 * We redirect bfqq to new_bfqq and not the opposite, because we
++	 * are in the context of the process owning bfqq, hence we have
++	 * the io_cq of this process. So we can immediately configure this
++	 * io_cq to redirect the requests of the process to new_bfqq.
++	 *
++	 * NOTE, even if new_bfqq coincides with the in-service queue, the
++	 * io_cq of new_bfqq is not available, because, if the in-service
++	 * queue is shared, bfqd->in_service_bic may not point to the
++	 * io_cq of the in-service queue.
++	 * Redirecting the requests of the process owning bfqq to the
++	 * currently in-service queue is in any case the best option, as
++	 * we feed the in-service queue with new requests close to the
++	 * last request served and, by doing so, hopefully increase the
++	 * throughput.
++	 */
++	bfqq->new_bfqq = new_bfqq;
++	atomic_add(process_refs, &new_bfqq->ref);
++	return new_bfqq;
++}
++
++static bool bfq_may_be_close_cooperator(struct bfq_queue *bfqq,
++					struct bfq_queue *new_bfqq)
++{
++	if (bfq_class_idle(bfqq) || bfq_class_idle(new_bfqq) ||
++	    (bfqq->ioprio_class != new_bfqq->ioprio_class))
++		return false;
++
++	/*
++	 * If either of the queues has already been detected as seeky,
++	 * then merging it with the other queue is unlikely to lead to
++	 * sequential I/O.
++	 */
++	if (BFQQ_SEEKY(bfqq) || BFQQ_SEEKY(new_bfqq))
++		return false;
++
++	/*
++	 * Interleaved I/O is known to be done by (some) applications
++	 * only for reads, so it does not make sense to merge async
++	 * queues.
++	 */
++	if (!bfq_bfqq_sync(bfqq) || !bfq_bfqq_sync(new_bfqq))
++		return false;
++
++	return true;
++}
++
++/*
++ * Attempt to schedule a merge of bfqq with the currently in-service queue
++ * or with a close queue among the scheduled queues.
++ * Return NULL if no merge was scheduled, a pointer to the shared bfq_queue
++ * structure otherwise.
++ *
++ * The OOM queue is not allowed to participate in cooperation: in fact, since
++ * the requests temporarily redirected to the OOM queue could be redirected
++ * again to dedicated queues at any time, the state needed to correctly
++ * handle merging with the OOM queue would be quite complex and expensive
++ * to maintain. Besides, in such a critical condition as being out of memory,
++ * the benefits of queue merging may be of little relevance, or even negligible.
++ */
++static struct bfq_queue *
++bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++		     void *io_struct, bool request)
++{
++	struct bfq_queue *in_service_bfqq, *new_bfqq;
++
++	if (bfqq->new_bfqq)
++		return bfqq->new_bfqq;
++	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
++		return NULL;
++	/* If device has only one backlogged bfq_queue, don't search. */
++	if (bfqd->busy_queues == 1)
++		return NULL;
++
++	in_service_bfqq = bfqd->in_service_queue;
++
++	if (!in_service_bfqq || in_service_bfqq == bfqq ||
++	    !bfqd->in_service_bic ||
++	    unlikely(in_service_bfqq == &bfqd->oom_bfqq))
++		goto check_scheduled;
++
++	if (bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) &&
++	    bfqq->entity.parent == in_service_bfqq->entity.parent &&
++	    bfq_may_be_close_cooperator(bfqq, in_service_bfqq)) {
++		new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq);
++		if (new_bfqq)
++			return new_bfqq;
++	}
++	/*
++	 * Check whether there is a cooperator among currently scheduled
++	 * queues. The only thing we need is that the bio/request is not
++	 * NULL, as we need it to establish whether a cooperator exists.
++	 */
++check_scheduled:
++	new_bfqq = bfq_find_close_cooperator(bfqd, bfqq,
++			bfq_io_struct_pos(io_struct, request));
++
++	BUG_ON(new_bfqq && bfqq->entity.parent != new_bfqq->entity.parent);
++
++	if (new_bfqq && likely(new_bfqq != &bfqd->oom_bfqq) &&
++	    bfq_may_be_close_cooperator(bfqq, new_bfqq))
++		return bfq_setup_merge(bfqq, new_bfqq);
++
++	return NULL;
++}
++
++static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
++{
++	/*
++	 * If !bfqq->bic, the queue is already shared or its requests
++	 * have already been redirected to a shared queue; both idle window
++	 * and weight raising state have already been saved. Do nothing.
++	 */
++	if (!bfqq->bic)
++		return;
++	if (bfqq->bic->wr_time_left)
++		/*
++		 * This is the queue of a just-started process, and would
++		 * deserve weight raising: we set wr_time_left to the full
++		 * weight-raising duration to trigger weight-raising when
++		 * and if the queue is split and the first request of the
++		 * queue is enqueued.
++		 */
++		bfqq->bic->wr_time_left = bfq_wr_duration(bfqq->bfqd);
++	else if (bfqq->wr_coeff > 1) {
++		unsigned long wr_duration =
++			jiffies - bfqq->last_wr_start_finish;
++		/*
++		 * It may happen that a queue's weight raising period lasts
++		 * longer than its wr_cur_max_time, as weight raising is
++		 * handled only when a request is enqueued or dispatched (it
++		 * does not use any timer). If the weight raising period is
++		 * about to end, don't save it.
++		 */
++		if (bfqq->wr_cur_max_time <= wr_duration)
++			bfqq->bic->wr_time_left = 0;
++		else
++			bfqq->bic->wr_time_left =
++				bfqq->wr_cur_max_time - wr_duration;
++		/*
++		 * The bfq_queue is becoming shared or the requests of the
++		 * process owning the queue are being redirected to a shared
++		 * queue. Stop the weight raising period of the queue, as in
++		 * both cases it should not be owned by an interactive or
++		 * soft real-time application.
++		 */
++		bfq_bfqq_end_wr(bfqq);
++	} else
++		bfqq->bic->wr_time_left = 0;
++	bfqq->bic->saved_idle_window = bfq_bfqq_idle_window(bfqq);
++	bfqq->bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
++	bfqq->bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
++	bfqq->bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
++	bfqq->bic->cooperations++;
++	bfqq->bic->failed_cooperations = 0;
++}
++
++static void bfq_get_bic_reference(struct bfq_queue *bfqq)
++{
++	/*
++	 * If bfqq->bic has a non-NULL value, the bic to which it belongs
++	 * is about to begin using a shared bfq_queue.
++	 */
++	if (bfqq->bic)
++		atomic_long_inc(&bfqq->bic->icq.ioc->refcount);
++}
++
++static void
++bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
++		struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++	bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
++		     (unsigned long) new_bfqq->pid);
++	/* Save weight raising and idle window of the merged queues */
++	bfq_bfqq_save_state(bfqq);
++	bfq_bfqq_save_state(new_bfqq);
++	if (bfq_bfqq_IO_bound(bfqq))
++		bfq_mark_bfqq_IO_bound(new_bfqq);
++	bfq_clear_bfqq_IO_bound(bfqq);
++	/*
++	 * Grab a reference to the bic, to prevent it from being destroyed
++	 * before being possibly touched by a bfq_split_bfqq().
++	 */
++	bfq_get_bic_reference(bfqq);
++	bfq_get_bic_reference(new_bfqq);
++	/*
++	 * Merge queues (that is, let bic redirect its requests to new_bfqq)
++	 */
++	bic_set_bfqq(bic, new_bfqq, 1);
++	bfq_mark_bfqq_coop(new_bfqq);
++	/*
++	 * new_bfqq now belongs to at least two bics (it is a shared queue):
++	 * set new_bfqq->bic to NULL. bfqq either:
++	 * - does not belong to any bic any more, and hence bfqq->bic must
++	 *   be set to NULL, or
++	 * - is a queue whose owning bics have already been redirected to a
++	 *   different queue, hence the queue is destined to not belong to
++	 *   any bic soon and bfqq->bic is already NULL (therefore the next
++	 *   assignment causes no harm).
++	 */
++	new_bfqq->bic = NULL;
++	bfqq->bic = NULL;
++	bfq_put_queue(bfqq);
++}
++
++static void bfq_bfqq_increase_failed_cooperations(struct bfq_queue *bfqq)
++{
++	struct bfq_io_cq *bic = bfqq->bic;
++	struct bfq_data *bfqd = bfqq->bfqd;
++
++	if (bic && bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh) {
++		bic->failed_cooperations++;
++		if (bic->failed_cooperations >= bfqd->bfq_failed_cooperations)
++			bic->cooperations = 0;
++	}
++}
++
+ static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+ 			   struct bio *bio)
+ {
+ 	struct bfq_data *bfqd = q->elevator->elevator_data;
+ 	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq, *new_bfqq;
+ 
+ 	/*
+ 	 * Disallow merge of a sync bio into an async request.
+@@ -1149,7 +1621,26 @@ static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+ 	if (!bic)
+ 		return 0;
+ 
+-	return bic_to_bfqq(bic, bfq_bio_sync(bio)) == RQ_BFQQ(rq);
++	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++	/*
++	 * We take advantage of this function to perform an early merge
++	 * of the queues of possible cooperating processes.
++	 */
++	if (bfqq) {
++		new_bfqq = bfq_setup_cooperator(bfqd, bfqq, bio, false);
++		if (new_bfqq) {
++			bfq_merge_bfqqs(bfqd, bic, bfqq, new_bfqq);
++			/*
++			 * If we get here, the bio will be queued in the
++			 * shared queue, i.e., new_bfqq, so use new_bfqq
++			 * to decide whether bio and rq can be merged.
++			 */
++			bfqq = new_bfqq;
++		} else
++			bfq_bfqq_increase_failed_cooperations(bfqq);
++	}
++
++	return bfqq == RQ_BFQQ(rq);
+ }
+ 
+ static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
+@@ -1350,6 +1841,15 @@ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 
+ 	__bfq_bfqd_reset_in_service(bfqd);
+ 
++	/*
++	 * If this bfqq is shared between multiple processes, check
++	 * to make sure that those processes are still issuing I/Os
++	 * within the mean seek distance. If not, it may be time to
++	 * break the queues apart again.
++	 */
++	if (bfq_bfqq_coop(bfqq) && BFQQ_SEEKY(bfqq))
++		bfq_mark_bfqq_split_coop(bfqq);
++
+ 	if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
+ 		/*
+ 		 * Overloading budget_timeout field to store the time
+@@ -1358,8 +1858,13 @@ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 		 */
+ 		bfqq->budget_timeout = jiffies;
+ 		bfq_del_bfqq_busy(bfqd, bfqq, 1);
+-	} else
++	} else {
+ 		bfq_activate_bfqq(bfqd, bfqq);
++		/*
++		 * Resort priority tree of potential close cooperators.
++		 */
++		bfq_pos_tree_add_move(bfqd, bfqq);
++	}
+ }
+ 
+ /**
+@@ -2246,10 +2751,12 @@ static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 		/*
+ 		 * If the queue was activated in a burst, or
+ 		 * too much time has elapsed from the beginning
+-		 * of this weight-raising period, then end weight
+-		 * raising.
++		 * of this weight-raising period, or the queue has
++		 * exceeded the acceptable number of cooperations,
++		 * then end weight raising.
+ 		 */
+ 		if (bfq_bfqq_in_large_burst(bfqq) ||
++		    bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh ||
+ 		    time_is_before_jiffies(bfqq->last_wr_start_finish +
+ 					   bfqq->wr_cur_max_time)) {
+ 			bfqq->last_wr_start_finish = jiffies;
+@@ -2478,6 +2985,25 @@ static void bfq_put_queue(struct bfq_queue *bfqq)
+ #endif
+ }
+ 
++static void bfq_put_cooperator(struct bfq_queue *bfqq)
++{
++	struct bfq_queue *__bfqq, *next;
++
++	/*
++	 * If this queue was scheduled to merge with another queue, be
++	 * sure to drop the reference taken on that queue (and others in
++	 * the merge chain). See bfq_setup_merge and bfq_merge_bfqqs.
++	 */
++	__bfqq = bfqq->new_bfqq;
++	while (__bfqq) {
++		if (__bfqq == bfqq)
++			break;
++		next = __bfqq->new_bfqq;
++		bfq_put_queue(__bfqq);
++		__bfqq = next;
++	}
++}
++
+ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+ 	if (bfqq == bfqd->in_service_queue) {
+@@ -2488,6 +3014,8 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 	bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq,
+ 		     atomic_read(&bfqq->ref));
+ 
++	bfq_put_cooperator(bfqq);
++
+ 	bfq_put_queue(bfqq);
+ }
+ 
+@@ -2496,6 +3024,25 @@ static void bfq_init_icq(struct io_cq *icq)
+ 	struct bfq_io_cq *bic = icq_to_bic(icq);
+ 
+ 	bic->ttime.last_end_request = jiffies;
++	/*
++	 * A newly created bic indicates that the process has just
++	 * started doing I/O, and is probably mapping into memory its
++	 * executable and libraries: it definitely needs weight raising.
++	 * There is however the possibility that the process performs,
++	 * for a while, I/O close to some other process. EQM intercepts
++	 * this behavior and may merge the queue corresponding to the
++	 * process  with some other queue, BEFORE the weight of the queue
++	 * is raised. Merged queues are not weight-raised (they are assumed
++	 * to belong to processes that benefit only from high throughput).
++	 * If the merge is basically the consequence of an accident, then
++	 * the queue will be split soon and will get back its old weight.
++	 * It is then important to write down somewhere that this queue
++	 * does need weight raising, even if it did not make it to get its
++	 * weight raised before being merged. To this purpose, we overload
++	 * the field raising_time_left and assign 1 to it, to mark the queue
++	 * as needing weight raising.
++	 */
++	bic->wr_time_left = 1;
+ }
+ 
+ static void bfq_exit_icq(struct io_cq *icq)
+@@ -2509,6 +3056,13 @@ static void bfq_exit_icq(struct io_cq *icq)
+ 	}
+ 
+ 	if (bic->bfqq[BLK_RW_SYNC]) {
++		/*
++		 * If the bic is using a shared queue, put the reference
++		 * taken on the io_context when the bic started using a
++		 * shared bfq_queue.
++		 */
++		if (bfq_bfqq_coop(bic->bfqq[BLK_RW_SYNC]))
++			put_io_context(icq->ioc);
+ 		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
+ 		bic->bfqq[BLK_RW_SYNC] = NULL;
+ 	}
+@@ -2814,6 +3368,10 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
+ 	if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
+ 		return;
+ 
++	/* Idle window just restored, statistics are meaningless. */
++	if (bfq_bfqq_just_split(bfqq))
++		return;
++
+ 	enable_idle = bfq_bfqq_idle_window(bfqq);
+ 
+ 	if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
+@@ -2861,6 +3419,7 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
+ 	    !BFQQ_SEEKY(bfqq))
+ 		bfq_update_idle_window(bfqd, bfqq, bic);
++	bfq_clear_bfqq_just_split(bfqq);
+ 
+ 	bfq_log_bfqq(bfqd, bfqq,
+ 		     "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
+@@ -2925,12 +3484,47 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ static void bfq_insert_request(struct request_queue *q, struct request *rq)
+ {
+ 	struct bfq_data *bfqd = q->elevator->elevator_data;
+-	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_queue *bfqq = RQ_BFQQ(rq), *new_bfqq;
+ 
+ 	assert_spin_locked(bfqd->queue->queue_lock);
+ 
++	/*
++	 * An unplug may trigger a requeue of a request from the device
++	 * driver: make sure we are in process context while trying to
++	 * merge two bfq_queues.
++	 */
++	if (!in_interrupt()) {
++		new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true);
++		if (new_bfqq) {
++			if (bic_to_bfqq(RQ_BIC(rq), 1) != bfqq)
++				new_bfqq = bic_to_bfqq(RQ_BIC(rq), 1);
++			/*
++			 * Release the request's reference to the old bfqq
++			 * and make sure one is taken to the shared queue.
++			 */
++			new_bfqq->allocated[rq_data_dir(rq)]++;
++			bfqq->allocated[rq_data_dir(rq)]--;
++			atomic_inc(&new_bfqq->ref);
++			bfq_put_queue(bfqq);
++			if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
++				bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
++						bfqq, new_bfqq);
++			rq->elv.priv[1] = new_bfqq;
++			bfqq = new_bfqq;
++		} else
++			bfq_bfqq_increase_failed_cooperations(bfqq);
++	}
++
+ 	bfq_add_request(rq);
+ 
++	/*
++	 * Here a newly-created bfq_queue has already started a weight-raising
++	 * period: clear raising_time_left to prevent bfq_bfqq_save_state()
++	 * from assigning it a full weight-raising period. See the detailed
++	 * comments about this field in bfq_init_icq().
++	 */
++	if (bfqq->bic)
++		bfqq->bic->wr_time_left = 0;
+ 	rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
+ 	list_add_tail(&rq->queuelist, &bfqq->fifo);
+ 
+@@ -3099,6 +3693,32 @@ static void bfq_put_request(struct request *rq)
+ }
+ 
+ /*
++ * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
++ * was the last process referring to said bfqq.
++ */
++static struct bfq_queue *
++bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
++{
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
++
++	put_io_context(bic->icq.ioc);
++
++	if (bfqq_process_refs(bfqq) == 1) {
++		bfqq->pid = current->pid;
++		bfq_clear_bfqq_coop(bfqq);
++		bfq_clear_bfqq_split_coop(bfqq);
++		return bfqq;
++	}
++
++	bic_set_bfqq(bic, NULL, 1);
++
++	bfq_put_cooperator(bfqq);
++
++	bfq_put_queue(bfqq);
++	return NULL;
++}
++
++/*
+  * Allocate bfq data structures associated with this request.
+  */
+ static int bfq_set_request(struct request_queue *q, struct request *rq,
+@@ -3110,6 +3730,7 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ 	const int is_sync = rq_is_sync(rq);
+ 	struct bfq_queue *bfqq;
+ 	unsigned long flags;
++	bool split = false;
+ 
+ 	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
+ 
+@@ -3122,15 +3743,30 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ 
+ 	bfq_bic_update_cgroup(bic, bio);
+ 
++new_queue:
+ 	bfqq = bic_to_bfqq(bic, is_sync);
+ 	if (!bfqq || bfqq == &bfqd->oom_bfqq) {
+ 		bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, gfp_mask);
+ 		bic_set_bfqq(bic, bfqq, is_sync);
+-		if (is_sync) {
+-			if (bfqd->large_burst)
++		if (split && is_sync) {
++			if ((bic->was_in_burst_list && bfqd->large_burst) ||
++			    bic->saved_in_large_burst)
+ 				bfq_mark_bfqq_in_large_burst(bfqq);
+-			else
++			else {
+ 				bfq_clear_bfqq_in_large_burst(bfqq);
++				if (bic->was_in_burst_list)
++					hlist_add_head(&bfqq->burst_list_node,
++						       &bfqd->burst_list);
++			}
++		}
++	} else {
++		/* If the queue was seeky for too long, break it apart. */
++		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
++			bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
++			bfqq = bfq_split_bfqq(bic, bfqq);
++			split = true;
++			if (!bfqq)
++				goto new_queue;
+ 		}
+ 	}
+ 
+@@ -3142,6 +3778,26 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ 	rq->elv.priv[0] = bic;
+ 	rq->elv.priv[1] = bfqq;
+ 
++	/*
++	 * If a bfq_queue has only one process reference, it is owned
++	 * by only one bfq_io_cq: we can set the bic field of the
++	 * bfq_queue to the address of that structure. Also, if the
++	 * queue has just been split, mark a flag so that the
++	 * information is available to the other scheduler hooks.
++	 */
++	if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) {
++		bfqq->bic = bic;
++		if (split) {
++			bfq_mark_bfqq_just_split(bfqq);
++			/*
++			 * If the queue has just been split from a shared
++			 * queue, restore the idle window and the possible
++			 * weight raising period.
++			 */
++			bfq_bfqq_resume_state(bfqq, bic);
++		}
++	}
++
+ 	spin_unlock_irqrestore(q->queue_lock, flags);
+ 
+ 	return 0;
+@@ -3295,6 +3951,7 @@ static void bfq_init_root_group(struct bfq_group *root_group,
+ 	root_group->my_entity = NULL;
+ 	root_group->bfqd = bfqd;
+ #endif
++	root_group->rq_pos_tree = RB_ROOT;
+ 	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
+ 		root_group->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
+ }
+@@ -3375,6 +4032,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ 	bfqd->bfq_timeout[BLK_RW_ASYNC] = bfq_timeout_async;
+ 	bfqd->bfq_timeout[BLK_RW_SYNC] = bfq_timeout_sync;
+ 
++	bfqd->bfq_coop_thresh = 2;
++	bfqd->bfq_failed_cooperations = 7000;
+ 	bfqd->bfq_requests_within_timer = 120;
+ 
+ 	bfqd->bfq_large_burst_thresh = 11;
+diff --git a/block/bfq.h b/block/bfq.h
+index 2bf54ae..fcce855 100644
+--- a/block/bfq.h
++++ b/block/bfq.h
+@@ -183,6 +183,8 @@ struct bfq_group;
+  *                    ioprio_class value.
+  * @new_bfqq: shared bfq_queue if queue is cooperating with
+  *           one or more other queues.
++ * @pos_node: request-position tree member (see bfq_group's @rq_pos_tree).
++ * @pos_root: request-position tree root (see bfq_group's @rq_pos_tree).
+  * @sort_list: sorted list of pending requests.
+  * @next_rq: if fifo isn't expired, next request to serve.
+  * @queued: nr of requests queued in @sort_list.
+@@ -304,6 +306,26 @@ struct bfq_ttime {
+  * @ttime: associated @bfq_ttime struct
+  * @ioprio: per (request_queue, blkcg) ioprio.
+  * @blkcg_id: id of the blkcg the related io_cq belongs to.
++ * @wr_time_left: snapshot of the time left before weight raising ends
++ *                for the sync queue associated to this process; this
++ *		  snapshot is taken to remember this value while the weight
++ *		  raising is suspended because the queue is merged with a
++ *		  shared queue, and is used to set @raising_cur_max_time
++ *		  when the queue is split from the shared queue and its
++ *		  weight is raised again
++ * @saved_idle_window: same purpose as the previous field for the idle
++ *                     window
++ * @saved_IO_bound: same purpose as the previous two fields for the I/O
++ *                  bound classification of a queue
++ * @saved_in_large_burst: same purpose as the previous fields for the
++ *                        value of the field keeping the queue's belonging
++ *                        to a large burst
++ * @was_in_burst_list: true if the queue belonged to a burst list
++ *                     before its merge with another cooperating queue
++ * @cooperations: counter of consecutive successful queue merges undergone
++ *                by any of the process' @bfq_queues
++ * @failed_cooperations: counter of consecutive failed queue merges of any
++ *                       of the process' @bfq_queues
+  */
+ struct bfq_io_cq {
+ 	struct io_cq icq; /* must be the first member */
+@@ -314,6 +336,16 @@ struct bfq_io_cq {
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 	uint64_t blkcg_id; /* the current blkcg ID */
+ #endif
++
++	unsigned int wr_time_left;
++	bool saved_idle_window;
++	bool saved_IO_bound;
++
++	bool saved_in_large_burst;
++	bool was_in_burst_list;
++
++	unsigned int cooperations;
++	unsigned int failed_cooperations;
+ };
+ 
+ enum bfq_device_speed {
+@@ -557,6 +589,9 @@ enum bfqq_state_flags {
+ 					 * may need softrt-next-start
+ 					 * update
+ 					 */
++	BFQ_BFQQ_FLAG_coop,		/* bfqq is shared */
++	BFQ_BFQQ_FLAG_split_coop,	/* shared bfqq will be split */
++	BFQ_BFQQ_FLAG_just_split,	/* queue has just been split */
+ };
+ 
+ #define BFQ_BFQQ_FNS(name)						\
+@@ -583,6 +618,9 @@ BFQ_BFQQ_FNS(budget_new);
+ BFQ_BFQQ_FNS(IO_bound);
+ BFQ_BFQQ_FNS(in_large_burst);
+ BFQ_BFQQ_FNS(constantly_seeky);
++BFQ_BFQQ_FNS(coop);
++BFQ_BFQQ_FNS(split_coop);
++BFQ_BFQQ_FNS(just_split);
+ BFQ_BFQQ_FNS(softrt_update);
+ #undef BFQ_BFQQ_FNS
+ 
+@@ -675,6 +713,9 @@ struct bfq_group_data {
+  *                   are groups with more than one active @bfq_entity
+  *                   (see the comments to the function
+  *                   bfq_bfqq_must_not_expire()).
++ * @rq_pos_tree: rbtree sorted by next_request position, used when
++ *               determining if two or more queues have interleaving
++ *               requests (see bfq_find_close_cooperator()).
+  *
+  * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup
+  * there is a set of bfq_groups, each one collecting the lower-level
+@@ -701,6 +742,8 @@ struct bfq_group {
+ 
+ 	int active_entities;
+ 
++	struct rb_root rq_pos_tree;
++
+ 	struct bfqg_stats stats;
+ 	struct bfqg_stats dead_stats;	/* stats pushed from dead children */
+ };
+@@ -711,6 +754,8 @@ struct bfq_group {
+ 
+ 	struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
+ 	struct bfq_queue *async_idle_bfqq;
++
++	struct rb_root rq_pos_tree;
+ };
+ #endif
+ 
+@@ -787,6 +832,27 @@ static void bfq_put_bfqd_unlock(struct bfq_data *bfqd, unsigned long *flags)
+ 	spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
+ }
+ 
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++
++static struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq)
++{
++	struct bfq_entity *group_entity = bfqq->entity.parent;
++
++	if (!group_entity)
++		group_entity = &bfqq->bfqd->root_group->entity;
++
++	return container_of(group_entity, struct bfq_group, entity);
++}
++
++#else
++
++static struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq)
++{
++	return bfqq->bfqd->root_group;
++}
++
++#endif
++
+ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio);
+ static void bfq_put_queue(struct bfq_queue *bfqq);
+ static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
+-- 
+2.7.4 (Apple Git-66)
+
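The queue-merging (EQM) logic added above hinges on one test: a queue is a
candidate cooperator when the sector of its next request lies within
BFQQ_SEEK_THR of a reference position (the incoming request's sector, or the
last served position), which is what bfqq_find_close() looks up in the
per-group rq_pos_tree. Below is a minimal, stand-alone C sketch of that
detection idea only; it is an illustration, every name in it (struct queue,
SEEK_THR, find_close_cooperator) is invented for the example, and a linear
scan stands in for the rbtree lookup used by the real code.

#include <stdio.h>
#include <stdbool.h>

#define SEEK_THR 8ULL	/* max sector distance for "close" queues (example value) */

struct queue {
	int pid;				/* owning process */
	unsigned long long next_sector;		/* sector of the queue's next request */
};

/* Two positions are "close" when they differ by at most SEEK_THR sectors. */
static bool close_to(unsigned long long a, unsigned long long b)
{
	unsigned long long d = a > b ? a - b : b - a;

	return d <= SEEK_THR;
}

/*
 * Stand-in for the rq_pos_tree lookup: return a queue other than @cur whose
 * next request is close to @sector (the reference position), or NULL.
 */
static struct queue *find_close_cooperator(struct queue *qs, int n,
					   struct queue *cur,
					   unsigned long long sector)
{
	for (int i = 0; i < n; i++)
		if (&qs[i] != cur && close_to(qs[i].next_sector, sector))
			return &qs[i];
	return NULL;
}

int main(void)
{
	struct queue qs[] = {
		{ .pid = 100, .next_sector = 1000 },
		{ .pid = 101, .next_sector = 1004 },	/* interleaved with pid 100 */
		{ .pid = 102, .next_sector = 9000 },	/* far away, not a cooperator */
	};
	struct queue *coop = find_close_cooperator(qs, 3, &qs[0], qs[0].next_sector);

	if (coop)
		printf("queue of pid %d could be merged with queue of pid %d\n",
		       qs[0].pid, coop->pid);
	return 0;
}

In the patch itself the merge then proceeds via bfq_setup_merge() and
bfq_merge_bfqqs(), which redirect the requests of one process to the shared
queue and save the idle-window and weight-raising state so it can be restored
if the queue is later split again.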

diff --git a/5004_blkck-bfq-turn-BFQ-v7r11-for-4.8.0-into-BFQ-v8r3-for-4.patch1 b/5004_blkck-bfq-turn-BFQ-v7r11-for-4.8.0-into-BFQ-v8r3-for-4.patch1
new file mode 100644
index 0000000..62cdd1a
--- /dev/null
+++ b/5004_blkck-bfq-turn-BFQ-v7r11-for-4.8.0-into-BFQ-v8r3-for-4.patch1
@@ -0,0 +1,7392 @@
+From ec8981e245dfe24bc6a80207e832ca9be18fd39d Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@linaro.org>
+Date: Tue, 17 May 2016 08:28:04 +0200
+Subject: [PATCH 4/4] Turn BFQ-v7r11 into BFQ-v8r4 for 4.8.0
+
+Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
+---
+ block/Kconfig.iosched |    2 +-
+ block/bfq-cgroup.c    |  495 ++++----
+ block/bfq-iosched.c   | 3230 +++++++++++++++++++++++++++++++------------------
+ block/bfq-sched.c     |  480 ++++++--
+ block/bfq.h           |  747 ++++++------
+ 5 files changed, 3073 insertions(+), 1881 deletions(-)
+
+diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
+index f78cd1a..6d92579 100644
+--- a/block/Kconfig.iosched
++++ b/block/Kconfig.iosched
+@@ -53,7 +53,7 @@ config IOSCHED_BFQ
+ 
+ config BFQ_GROUP_IOSCHED
+ 	bool "BFQ hierarchical scheduling support"
+-	depends on CGROUPS && IOSCHED_BFQ=y
++	depends on IOSCHED_BFQ && BLK_CGROUP
+ 	default n
+ 	---help---
+ 	  Enable hierarchical scheduling in BFQ, using the blkio controller.
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 0367996..b50ae8e 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -7,7 +7,9 @@
+  * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+  *		      Paolo Valente <paolo.valente@unimore.it>
+  *
+- * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ * Copyright (C) 2015 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2016 Paolo Valente <paolo.valente@linaro.org>
+  *
+  * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
+  * file.
+@@ -163,8 +165,6 @@ static struct bfq_group *blkg_to_bfqg(struct blkcg_gq *blkg)
+ {
+ 	struct blkg_policy_data *pd = blkg_to_pd(blkg, &blkcg_policy_bfq);
+ 
+-	BUG_ON(!pd);
+-
+ 	return pd_to_bfqg(pd);
+ }
+ 
+@@ -208,59 +208,49 @@ static void bfqg_put(struct bfq_group *bfqg)
+ 
+ static void bfqg_stats_update_io_add(struct bfq_group *bfqg,
+ 				     struct bfq_queue *bfqq,
+-				     int rw)
++				     int op, int op_flags)
+ {
+-	blkg_rwstat_add(&bfqg->stats.queued, rw, 1);
++	blkg_rwstat_add(&bfqg->stats.queued, op, op_flags, 1);
+ 	bfqg_stats_end_empty_time(&bfqg->stats);
+ 	if (!(bfqq == ((struct bfq_data *)bfqg->bfqd)->in_service_queue))
+ 		bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq));
+ }
+ 
+-static void bfqg_stats_update_io_remove(struct bfq_group *bfqg, int rw)
+-{
+-	blkg_rwstat_add(&bfqg->stats.queued, rw, -1);
+-}
+-
+-static void bfqg_stats_update_io_merged(struct bfq_group *bfqg, int rw)
++static void bfqg_stats_update_io_remove(struct bfq_group *bfqg, int op,
++					int op_flags)
+ {
+-	blkg_rwstat_add(&bfqg->stats.merged, rw, 1);
++	blkg_rwstat_add(&bfqg->stats.queued, op, op_flags, -1);
+ }
+ 
+-static void bfqg_stats_update_dispatch(struct bfq_group *bfqg,
+-					      uint64_t bytes, int rw)
++static void bfqg_stats_update_io_merged(struct bfq_group *bfqg,  int op,
++					int op_flags)
+ {
+-	blkg_stat_add(&bfqg->stats.sectors, bytes >> 9);
+-	blkg_rwstat_add(&bfqg->stats.serviced, rw, 1);
+-	blkg_rwstat_add(&bfqg->stats.service_bytes, rw, bytes);
++	blkg_rwstat_add(&bfqg->stats.merged, op, op_flags, 1);
+ }
+ 
+ static void bfqg_stats_update_completion(struct bfq_group *bfqg,
+-			uint64_t start_time, uint64_t io_start_time, int rw)
++			uint64_t start_time, uint64_t io_start_time, int op,
++			int op_flags)
+ {
+ 	struct bfqg_stats *stats = &bfqg->stats;
+ 	unsigned long long now = sched_clock();
+ 
+ 	if (time_after64(now, io_start_time))
+-		blkg_rwstat_add(&stats->service_time, rw, now - io_start_time);
++		blkg_rwstat_add(&stats->service_time, op, op_flags,
++				now - io_start_time);
+ 	if (time_after64(io_start_time, start_time))
+-		blkg_rwstat_add(&stats->wait_time, rw,
++		blkg_rwstat_add(&stats->wait_time, op, op_flags,
+ 				io_start_time - start_time);
+ }
+ 
+ /* @stats = 0 */
+ static void bfqg_stats_reset(struct bfqg_stats *stats)
+ {
+-	if (!stats)
+-		return;
+-
+ 	/* queued stats shouldn't be cleared */
+-	blkg_rwstat_reset(&stats->service_bytes);
+-	blkg_rwstat_reset(&stats->serviced);
+ 	blkg_rwstat_reset(&stats->merged);
+ 	blkg_rwstat_reset(&stats->service_time);
+ 	blkg_rwstat_reset(&stats->wait_time);
+ 	blkg_stat_reset(&stats->time);
+-	blkg_stat_reset(&stats->unaccounted_time);
+ 	blkg_stat_reset(&stats->avg_queue_size_sum);
+ 	blkg_stat_reset(&stats->avg_queue_size_samples);
+ 	blkg_stat_reset(&stats->dequeue);
+@@ -270,19 +260,16 @@ static void bfqg_stats_reset(struct bfqg_stats *stats)
+ }
+ 
+ /* @to += @from */
+-static void bfqg_stats_merge(struct bfqg_stats *to, struct bfqg_stats *from)
++static void bfqg_stats_add_aux(struct bfqg_stats *to, struct bfqg_stats *from)
+ {
+ 	if (!to || !from)
+ 		return;
+ 
+ 	/* queued stats shouldn't be cleared */
+-	blkg_rwstat_add_aux(&to->service_bytes, &from->service_bytes);
+-	blkg_rwstat_add_aux(&to->serviced, &from->serviced);
+ 	blkg_rwstat_add_aux(&to->merged, &from->merged);
+ 	blkg_rwstat_add_aux(&to->service_time, &from->service_time);
+ 	blkg_rwstat_add_aux(&to->wait_time, &from->wait_time);
+ 	blkg_stat_add_aux(&from->time, &from->time);
+-	blkg_stat_add_aux(&to->unaccounted_time, &from->unaccounted_time);
+ 	blkg_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum);
+ 	blkg_stat_add_aux(&to->avg_queue_size_samples,
+ 			  &from->avg_queue_size_samples);
+@@ -311,10 +298,8 @@ static void bfqg_stats_xfer_dead(struct bfq_group *bfqg)
+ 	if (unlikely(!parent))
+ 		return;
+ 
+-	bfqg_stats_merge(&parent->dead_stats, &bfqg->stats);
+-	bfqg_stats_merge(&parent->dead_stats, &bfqg->dead_stats);
++	bfqg_stats_add_aux(&parent->stats, &bfqg->stats);
+ 	bfqg_stats_reset(&bfqg->stats);
+-	bfqg_stats_reset(&bfqg->dead_stats);
+ }
+ 
+ static void bfq_init_entity(struct bfq_entity *entity,
+@@ -335,15 +320,11 @@ static void bfq_init_entity(struct bfq_entity *entity,
+ 
+ static void bfqg_stats_exit(struct bfqg_stats *stats)
+ {
+-	blkg_rwstat_exit(&stats->service_bytes);
+-	blkg_rwstat_exit(&stats->serviced);
+ 	blkg_rwstat_exit(&stats->merged);
+ 	blkg_rwstat_exit(&stats->service_time);
+ 	blkg_rwstat_exit(&stats->wait_time);
+ 	blkg_rwstat_exit(&stats->queued);
+-	blkg_stat_exit(&stats->sectors);
+ 	blkg_stat_exit(&stats->time);
+-	blkg_stat_exit(&stats->unaccounted_time);
+ 	blkg_stat_exit(&stats->avg_queue_size_sum);
+ 	blkg_stat_exit(&stats->avg_queue_size_samples);
+ 	blkg_stat_exit(&stats->dequeue);
+@@ -354,15 +335,11 @@ static void bfqg_stats_exit(struct bfqg_stats *stats)
+ 
+ static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
+ {
+-	if (blkg_rwstat_init(&stats->service_bytes, gfp) ||
+-	    blkg_rwstat_init(&stats->serviced, gfp) ||
+-	    blkg_rwstat_init(&stats->merged, gfp) ||
++	if (blkg_rwstat_init(&stats->merged, gfp) ||
+ 	    blkg_rwstat_init(&stats->service_time, gfp) ||
+ 	    blkg_rwstat_init(&stats->wait_time, gfp) ||
+ 	    blkg_rwstat_init(&stats->queued, gfp) ||
+-	    blkg_stat_init(&stats->sectors, gfp) ||
+ 	    blkg_stat_init(&stats->time, gfp) ||
+-	    blkg_stat_init(&stats->unaccounted_time, gfp) ||
+ 	    blkg_stat_init(&stats->avg_queue_size_sum, gfp) ||
+ 	    blkg_stat_init(&stats->avg_queue_size_samples, gfp) ||
+ 	    blkg_stat_init(&stats->dequeue, gfp) ||
+@@ -386,11 +363,27 @@ static struct bfq_group_data *blkcg_to_bfqgd(struct blkcg *blkcg)
+ 	return cpd_to_bfqgd(blkcg_to_cpd(blkcg, &blkcg_policy_bfq));
+ }
+ 
++static struct blkcg_policy_data *bfq_cpd_alloc(gfp_t gfp)
++{
++	struct bfq_group_data *bgd;
++
++	bgd = kzalloc(sizeof(*bgd), GFP_KERNEL);
++	if (!bgd)
++		return NULL;
++	return &bgd->pd;
++}
++
+ static void bfq_cpd_init(struct blkcg_policy_data *cpd)
+ {
+ 	struct bfq_group_data *d = cpd_to_bfqgd(cpd);
+ 
+-	d->weight = BFQ_DEFAULT_GRP_WEIGHT;
++	d->weight = cgroup_subsys_on_dfl(io_cgrp_subsys) ?
++		CGROUP_WEIGHT_DFL : BFQ_WEIGHT_LEGACY_DFL;
++}
++
++static void bfq_cpd_free(struct blkcg_policy_data *cpd)
++{
++	kfree(cpd_to_bfqgd(cpd));
+ }
+ 
+ static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
+@@ -401,8 +394,7 @@ static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
+ 	if (!bfqg)
+ 		return NULL;
+ 
+-	if (bfqg_stats_init(&bfqg->stats, gfp) ||
+-	    bfqg_stats_init(&bfqg->dead_stats, gfp)) {
++	if (bfqg_stats_init(&bfqg->stats, gfp)) {
+ 		kfree(bfqg);
+ 		return NULL;
+ 	}
+@@ -410,27 +402,20 @@ static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
+ 	return &bfqg->pd;
+ }
+ 
+-static void bfq_group_set_parent(struct bfq_group *bfqg,
+-					struct bfq_group *parent)
++static void bfq_pd_init(struct blkg_policy_data *pd)
+ {
++	struct blkcg_gq *blkg;
++	struct bfq_group *bfqg;
++	struct bfq_data *bfqd;
+ 	struct bfq_entity *entity;
++	struct bfq_group_data *d;
+ 
+-	BUG_ON(!parent);
+-	BUG_ON(!bfqg);
+-	BUG_ON(bfqg == parent);
+-
++	blkg = pd_to_blkg(pd);
++	BUG_ON(!blkg);
++	bfqg = blkg_to_bfqg(blkg);
++	bfqd = blkg->q->elevator->elevator_data;
+ 	entity = &bfqg->entity;
+-	entity->parent = parent->my_entity;
+-	entity->sched_data = &parent->sched_data;
+-}
+-
+-static void bfq_pd_init(struct blkg_policy_data *pd)
+-{
+-	struct blkcg_gq *blkg = pd_to_blkg(pd);
+-	struct bfq_group *bfqg = blkg_to_bfqg(blkg);
+-	struct bfq_data *bfqd = blkg->q->elevator->elevator_data;
+-	struct bfq_entity *entity = &bfqg->entity;
+-	struct bfq_group_data *d = blkcg_to_bfqgd(blkg->blkcg);
++	d = blkcg_to_bfqgd(blkg->blkcg);
+ 
+ 	entity->orig_weight = entity->weight = entity->new_weight = d->weight;
+ 	entity->my_sched_data = &bfqg->sched_data;
+@@ -448,70 +433,53 @@ static void bfq_pd_free(struct blkg_policy_data *pd)
+ 	struct bfq_group *bfqg = pd_to_bfqg(pd);
+ 
+ 	bfqg_stats_exit(&bfqg->stats);
+-	bfqg_stats_exit(&bfqg->dead_stats);
+-
+ 	return kfree(bfqg);
+ }
+ 
+-/* offset delta from bfqg->stats to bfqg->dead_stats */
+-static const int dead_stats_off_delta = offsetof(struct bfq_group, dead_stats) -
+-					offsetof(struct bfq_group, stats);
+-
+-/* to be used by recursive prfill, sums live and dead stats recursively */
+-static u64 bfqg_stat_pd_recursive_sum(struct blkg_policy_data *pd, int off)
++static void bfq_pd_reset_stats(struct blkg_policy_data *pd)
+ {
+-	u64 sum = 0;
++	struct bfq_group *bfqg = pd_to_bfqg(pd);
+ 
+-	sum += blkg_stat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq, off);
+-	sum += blkg_stat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq,
+-				       off + dead_stats_off_delta);
+-	return sum;
++	bfqg_stats_reset(&bfqg->stats);
+ }
+ 
+-/* to be used by recursive prfill, sums live and dead rwstats recursively */
+-static struct blkg_rwstat
+-bfqg_rwstat_pd_recursive_sum(struct blkg_policy_data *pd, int off)
++static void bfq_group_set_parent(struct bfq_group *bfqg,
++					struct bfq_group *parent)
+ {
+-	struct blkg_rwstat a, b;
++	struct bfq_entity *entity;
+ 
+-	a = blkg_rwstat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq, off);
+-	b = blkg_rwstat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq,
+-				      off + dead_stats_off_delta);
+-	blkg_rwstat_add_aux(&a, &b);
+-	return a;
++	BUG_ON(!parent);
++	BUG_ON(!bfqg);
++	BUG_ON(bfqg == parent);
++
++	entity = &bfqg->entity;
++	entity->parent = parent->my_entity;
++	entity->sched_data = &parent->sched_data;
+ }
+ 
+-static void bfq_pd_reset_stats(struct blkg_policy_data *pd)
++static struct bfq_group *bfq_lookup_bfqg(struct bfq_data *bfqd,
++					 struct blkcg *blkcg)
+ {
+-	struct bfq_group *bfqg = pd_to_bfqg(pd);
++	struct blkcg_gq *blkg;
+ 
+-	bfqg_stats_reset(&bfqg->stats);
+-	bfqg_stats_reset(&bfqg->dead_stats);
++	blkg = blkg_lookup(blkcg, bfqd->queue);
++	if (likely(blkg))
++		return blkg_to_bfqg(blkg);
++	return NULL;
+ }
+ 
+-static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
+-					      struct blkcg *blkcg)
++static struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
++					    struct blkcg *blkcg)
+ {
+-	struct request_queue *q = bfqd->queue;
+-	struct bfq_group *bfqg = NULL, *parent;
+-	struct bfq_entity *entity = NULL;
++	struct bfq_group *bfqg, *parent;
++	struct bfq_entity *entity;
+ 
+ 	assert_spin_locked(bfqd->queue->queue_lock);
+ 
+-	/* avoid lookup for the common case where there's no blkcg */
+-	if (blkcg == &blkcg_root) {
+-		bfqg = bfqd->root_group;
+-	} else {
+-		struct blkcg_gq *blkg;
+-
+-		blkg = blkg_lookup_create(blkcg, q);
+-		if (!IS_ERR(blkg))
+-			bfqg = blkg_to_bfqg(blkg);
+-		else /* fallback to root_group */
+-			bfqg = bfqd->root_group;
+-	}
++	bfqg = bfq_lookup_bfqg(bfqd, blkcg);
+ 
+-	BUG_ON(!bfqg);
++	if (unlikely(!bfqg))
++		return NULL;
+ 
+ 	/*
+ 	 * Update chain of bfq_groups as we might be handling a leaf group
+@@ -537,11 +505,15 @@ static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
+ static void bfq_pos_tree_add_move(struct bfq_data *bfqd,
+ 				  struct bfq_queue *bfqq);
+ 
++static void bfq_bfqq_expire(struct bfq_data *bfqd,
++			    struct bfq_queue *bfqq,
++			    bool compensate,
++			    enum bfqq_expiration reason);
++
+ /**
+  * bfq_bfqq_move - migrate @bfqq to @bfqg.
+  * @bfqd: queue descriptor.
+  * @bfqq: the queue to move.
+- * @entity: @bfqq's entity.
+  * @bfqg: the group to move to.
+  *
+  * Move @bfqq to @bfqg, deactivating it from its old group and reactivating
+@@ -552,26 +524,40 @@ static void bfq_pos_tree_add_move(struct bfq_data *bfqd,
+  * rcu_read_lock()).
+  */
+ static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+-			  struct bfq_entity *entity, struct bfq_group *bfqg)
++			  struct bfq_group *bfqg)
+ {
+-	int busy, resume;
+-
+-	busy = bfq_bfqq_busy(bfqq);
+-	resume = !RB_EMPTY_ROOT(&bfqq->sort_list);
++	struct bfq_entity *entity = &bfqq->entity;
+ 
+-	BUG_ON(resume && !entity->on_st);
+-	BUG_ON(busy && !resume && entity->on_st &&
++	BUG_ON(!bfq_bfqq_busy(bfqq) && !RB_EMPTY_ROOT(&bfqq->sort_list));
++	BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list) && !entity->on_st);
++	BUG_ON(bfq_bfqq_busy(bfqq) && RB_EMPTY_ROOT(&bfqq->sort_list)
++	       && entity->on_st &&
+ 	       bfqq != bfqd->in_service_queue);
++	BUG_ON(!bfq_bfqq_busy(bfqq) && bfqq == bfqd->in_service_queue);
+ 
+-	if (busy) {
+-		BUG_ON(atomic_read(&bfqq->ref) < 2);
++	/* If bfqq is empty, then bfq_bfqq_expire also invokes
++	 * bfq_del_bfqq_busy, thereby removing bfqq and its entity
++	 * from data structures related to current group. Otherwise we
++	 * need to remove bfqq explicitly with bfq_deactivate_bfqq, as
++	 * we do below.
++	 */
++	if (bfqq == bfqd->in_service_queue)
++		bfq_bfqq_expire(bfqd, bfqd->in_service_queue,
++				false, BFQ_BFQQ_PREEMPTED);
++
++	BUG_ON(entity->on_st && !bfq_bfqq_busy(bfqq)
++	    && &bfq_entity_service_tree(entity)->idle !=
++	       entity->tree);
++
++	BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_busy(bfqq));
+ 
+-		if (!resume)
+-			bfq_del_bfqq_busy(bfqd, bfqq, 0);
+-		else
+-			bfq_deactivate_bfqq(bfqd, bfqq, 0);
+-	} else if (entity->on_st)
++	if (bfq_bfqq_busy(bfqq))
++		bfq_deactivate_bfqq(bfqd, bfqq, 0);
++	else if (entity->on_st) {
++		BUG_ON(&bfq_entity_service_tree(entity)->idle !=
++		       entity->tree);
+ 		bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
++	}
+ 	bfqg_put(bfqq_group(bfqq));
+ 
+ 	/*
+@@ -583,14 +569,17 @@ static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	entity->sched_data = &bfqg->sched_data;
+ 	bfqg_get(bfqg);
+ 
+-	if (busy) {
++	BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_busy(bfqq));
++	if (bfq_bfqq_busy(bfqq)) {
+ 		bfq_pos_tree_add_move(bfqd, bfqq);
+-		if (resume)
+-			bfq_activate_bfqq(bfqd, bfqq);
++		bfq_activate_bfqq(bfqd, bfqq);
+ 	}
+ 
+ 	if (!bfqd->in_service_queue && !bfqd->rq_in_driver)
+ 		bfq_schedule_dispatch(bfqd);
++	BUG_ON(entity->on_st && !bfq_bfqq_busy(bfqq)
++	       && &bfq_entity_service_tree(entity)->idle !=
++	       entity->tree);
+ }
+ 
+ /**
+@@ -617,7 +606,11 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ 
+ 	lockdep_assert_held(bfqd->queue->queue_lock);
+ 
+-	bfqg = bfq_find_alloc_group(bfqd, blkcg);
++	bfqg = bfq_find_set_group(bfqd, blkcg);
++
++	if (unlikely(!bfqg))
++		bfqg = bfqd->root_group;
++
+ 	if (async_bfqq) {
+ 		entity = &async_bfqq->entity;
+ 
+@@ -625,7 +618,8 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ 			bic_set_bfqq(bic, NULL, 0);
+ 			bfq_log_bfqq(bfqd, async_bfqq,
+ 				     "bic_change_group: %p %d",
+-				     async_bfqq, atomic_read(&async_bfqq->ref));
++				     async_bfqq,
++				     async_bfqq->ref);
+ 			bfq_put_queue(async_bfqq);
+ 		}
+ 	}
+@@ -633,7 +627,7 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ 	if (sync_bfqq) {
+ 		entity = &sync_bfqq->entity;
+ 		if (entity->sched_data != &bfqg->sched_data)
+-			bfq_bfqq_move(bfqd, sync_bfqq, entity, bfqg);
++			bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
+ 	}
+ 
+ 	return bfqg;
+@@ -642,25 +636,23 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ static void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
+ {
+ 	struct bfq_data *bfqd = bic_to_bfqd(bic);
+-	struct blkcg *blkcg;
+ 	struct bfq_group *bfqg = NULL;
+-	uint64_t id;
++	uint64_t serial_nr;
+ 
+ 	rcu_read_lock();
+-	blkcg = bio_blkcg(bio);
+-	id = blkcg->css.serial_nr;
+-	rcu_read_unlock();
++	serial_nr = bio_blkcg(bio)->css.serial_nr;
+ 
+ 	/*
+ 	 * Check whether blkcg has changed.  The condition may trigger
+ 	 * spuriously on a newly created cic but there's no harm.
+ 	 */
+-	if (unlikely(!bfqd) || likely(bic->blkcg_id == id))
+-		return;
++	if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr))
++		goto out;
+ 
+-	bfqg = __bfq_bic_change_cgroup(bfqd, bic, blkcg);
+-	BUG_ON(!bfqg);
+-	bic->blkcg_id = id;
++	bfqg = __bfq_bic_change_cgroup(bfqd, bic, bio_blkcg(bio));
++	bic->blkcg_serial_nr = serial_nr;
++out:
++	rcu_read_unlock();
+ }
+ 
+ /**
+@@ -686,7 +678,7 @@ static void bfq_reparent_leaf_entity(struct bfq_data *bfqd,
+ 	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
+ 
+ 	BUG_ON(!bfqq);
+-	bfq_bfqq_move(bfqd, bfqq, entity, bfqd->root_group);
++	bfq_bfqq_move(bfqd, bfqq, bfqd->root_group);
+ }
+ 
+ /**
+@@ -717,11 +709,12 @@ static void bfq_reparent_active_entities(struct bfq_data *bfqd,
+ }
+ 
+ /**
+- * bfq_destroy_group - destroy @bfqg.
+- * @bfqg: the group being destroyed.
++ * bfq_pd_offline - deactivate the entity associated with @pd,
++ *		    and reparent its children entities.
++ * @pd: descriptor of the policy going offline.
+  *
+- * Destroy @bfqg, making sure that it is not referenced from its parent.
+- * blkio already grabs the queue_lock for us, so no need to use RCU-based magic
++ * blkio already grabs the queue_lock for us, so no need to use
++ * RCU-based magic
+  */
+ static void bfq_pd_offline(struct blkg_policy_data *pd)
+ {
+@@ -780,6 +773,12 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
+ 	bfq_put_async_queues(bfqd, bfqg);
+ 	BUG_ON(entity->tree);
+ 
++	/*
++	 * @blkg is going offline and will be ignored by
++	 * blkg_[rw]stat_recursive_sum().  Transfer stats to the parent so
++	 * that they don't get lost.  If IOs complete after this point, the
++	 * stats for them will be lost.  Oh well...
++	 */
+ 	bfqg_stats_xfer_dead(bfqg);
+ }
+ 
+@@ -789,46 +788,35 @@ static void bfq_end_wr_async(struct bfq_data *bfqd)
+ 
+ 	list_for_each_entry(blkg, &bfqd->queue->blkg_list, q_node) {
+ 		struct bfq_group *bfqg = blkg_to_bfqg(blkg);
++		BUG_ON(!bfqg);
+ 
+ 		bfq_end_wr_async_queues(bfqd, bfqg);
+ 	}
+ 	bfq_end_wr_async_queues(bfqd, bfqd->root_group);
+ }
+ 
+-static u64 bfqio_cgroup_weight_read(struct cgroup_subsys_state *css,
+-				       struct cftype *cftype)
+-{
+-	struct blkcg *blkcg = css_to_blkcg(css);
+-	struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
+-	int ret = -EINVAL;
+-
+-	spin_lock_irq(&blkcg->lock);
+-	ret = bfqgd->weight;
+-	spin_unlock_irq(&blkcg->lock);
+-
+-	return ret;
+-}
+-
+-static int bfqio_cgroup_weight_read_dfl(struct seq_file *sf, void *v)
++static int bfq_io_show_weight(struct seq_file *sf, void *v)
+ {
+ 	struct blkcg *blkcg = css_to_blkcg(seq_css(sf));
+ 	struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
++	unsigned int val = 0;
+ 
+-	spin_lock_irq(&blkcg->lock);
+-	seq_printf(sf, "%u\n", bfqgd->weight);
+-	spin_unlock_irq(&blkcg->lock);
++	if (bfqgd)
++		val = bfqgd->weight;
++
++	seq_printf(sf, "%u\n", val);
+ 
+ 	return 0;
+ }
+ 
+-static int bfqio_cgroup_weight_write(struct cgroup_subsys_state *css,
+-					struct cftype *cftype,
+-					u64 val)
++static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css,
++				    struct cftype *cftype,
++				    u64 val)
+ {
+ 	struct blkcg *blkcg = css_to_blkcg(css);
+ 	struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
+ 	struct blkcg_gq *blkg;
+-	int ret = -EINVAL;
++	int ret = -ERANGE;
+ 
+ 	if (val < BFQ_MIN_WEIGHT || val > BFQ_MAX_WEIGHT)
+ 		return ret;
+@@ -873,13 +861,18 @@ static int bfqio_cgroup_weight_write(struct cgroup_subsys_state *css,
+ 	return ret;
+ }
+ 
+-static ssize_t bfqio_cgroup_weight_write_dfl(struct kernfs_open_file *of,
+-					     char *buf, size_t nbytes,
+-					     loff_t off)
++static ssize_t bfq_io_set_weight(struct kernfs_open_file *of,
++				 char *buf, size_t nbytes,
++				 loff_t off)
+ {
++	u64 weight;
+ 	/* First unsigned long found in the file is used */
+-	return bfqio_cgroup_weight_write(of_css(of), NULL,
+-					 simple_strtoull(strim(buf), NULL, 0));
++	int ret = kstrtoull(strim(buf), 0, &weight);
++
++	if (ret)
++		return ret;
++
++	return bfq_io_set_weight_legacy(of_css(of), NULL, weight);
+ }
+ 
+ static int bfqg_print_stat(struct seq_file *sf, void *v)
+@@ -899,16 +892,17 @@ static int bfqg_print_rwstat(struct seq_file *sf, void *v)
+ static u64 bfqg_prfill_stat_recursive(struct seq_file *sf,
+ 				      struct blkg_policy_data *pd, int off)
+ {
+-	u64 sum = bfqg_stat_pd_recursive_sum(pd, off);
+-
++	u64 sum = blkg_stat_recursive_sum(pd_to_blkg(pd),
++					  &blkcg_policy_bfq, off);
+ 	return __blkg_prfill_u64(sf, pd, sum);
+ }
+ 
+ static u64 bfqg_prfill_rwstat_recursive(struct seq_file *sf,
+ 					struct blkg_policy_data *pd, int off)
+ {
+-	struct blkg_rwstat sum = bfqg_rwstat_pd_recursive_sum(pd, off);
+-
++	struct blkg_rwstat sum = blkg_rwstat_recursive_sum(pd_to_blkg(pd),
++							   &blkcg_policy_bfq,
++							   off);
+ 	return __blkg_prfill_rwstat(sf, pd, &sum);
+ }
+ 
+@@ -928,6 +922,41 @@ static int bfqg_print_rwstat_recursive(struct seq_file *sf, void *v)
+ 	return 0;
+ }
+ 
++static u64 bfqg_prfill_sectors(struct seq_file *sf, struct blkg_policy_data *pd,
++			       int off)
++{
++	u64 sum = blkg_rwstat_total(&pd->blkg->stat_bytes);
++
++	return __blkg_prfill_u64(sf, pd, sum >> 9);
++}
++
++static int bfqg_print_stat_sectors(struct seq_file *sf, void *v)
++{
++	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)),
++			  bfqg_prfill_sectors, &blkcg_policy_bfq, 0, false);
++	return 0;
++}
++
++static u64 bfqg_prfill_sectors_recursive(struct seq_file *sf,
++					 struct blkg_policy_data *pd, int off)
++{
++	struct blkg_rwstat tmp = blkg_rwstat_recursive_sum(pd->blkg, NULL,
++					offsetof(struct blkcg_gq, stat_bytes));
++	u64 sum = atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_READ]) +
++		atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_WRITE]);
++
++	return __blkg_prfill_u64(sf, pd, sum >> 9);
++}
++
++static int bfqg_print_stat_sectors_recursive(struct seq_file *sf, void *v)
++{
++	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)),
++			  bfqg_prfill_sectors_recursive, &blkcg_policy_bfq, 0,
++			  false);
++	return 0;
++}
++
++
+ static u64 bfqg_prfill_avg_queue_size(struct seq_file *sf,
+ 				      struct blkg_policy_data *pd, int off)
+ {
+@@ -964,38 +993,15 @@ bfq_create_group_hierarchy(struct bfq_data *bfqd, int node)
+ 	return blkg_to_bfqg(bfqd->queue->root_blkg);
+ }
+ 
+-static struct blkcg_policy_data *bfq_cpd_alloc(gfp_t gfp)
+-{
+-	struct bfq_group_data *bgd;
+-
+-	bgd = kzalloc(sizeof(*bgd), GFP_KERNEL);
+-	if (!bgd)
+-		return NULL;
+-	return &bgd->pd;
+-}
+-
+-static void bfq_cpd_free(struct blkcg_policy_data *cpd)
+-{
+-	kfree(cpd_to_bfqgd(cpd));
+-}
+-
+-static struct cftype bfqio_files_dfl[] = {
++static struct cftype bfq_blkcg_legacy_files[] = {
+ 	{
+-		.name = "weight",
++		.name = "bfq.weight",
+ 		.flags = CFTYPE_NOT_ON_ROOT,
+-		.seq_show = bfqio_cgroup_weight_read_dfl,
+-		.write = bfqio_cgroup_weight_write_dfl,
++		.seq_show = bfq_io_show_weight,
++		.write_u64 = bfq_io_set_weight_legacy,
+ 	},
+-	{} /* terminate */
+-};
+ 
+-static struct cftype bfqio_files[] = {
+-	{
+-		.name = "bfq.weight",
+-		.read_u64 = bfqio_cgroup_weight_read,
+-		.write_u64 = bfqio_cgroup_weight_write,
+-	},
+-	/* statistics, cover only the tasks in the bfqg */
++	/* statistics, covers only the tasks in the bfqg */
+ 	{
+ 		.name = "bfq.time",
+ 		.private = offsetof(struct bfq_group, stats.time),
+@@ -1003,18 +1009,17 @@ static struct cftype bfqio_files[] = {
+ 	},
+ 	{
+ 		.name = "bfq.sectors",
+-		.private = offsetof(struct bfq_group, stats.sectors),
+-		.seq_show = bfqg_print_stat,
++		.seq_show = bfqg_print_stat_sectors,
+ 	},
+ 	{
+ 		.name = "bfq.io_service_bytes",
+-		.private = offsetof(struct bfq_group, stats.service_bytes),
+-		.seq_show = bfqg_print_rwstat,
++		.private = (unsigned long)&blkcg_policy_bfq,
++		.seq_show = blkg_print_stat_bytes,
+ 	},
+ 	{
+ 		.name = "bfq.io_serviced",
+-		.private = offsetof(struct bfq_group, stats.serviced),
+-		.seq_show = bfqg_print_rwstat,
++		.private = (unsigned long)&blkcg_policy_bfq,
++		.seq_show = blkg_print_stat_ios,
+ 	},
+ 	{
+ 		.name = "bfq.io_service_time",
+@@ -1045,18 +1050,17 @@ static struct cftype bfqio_files[] = {
+ 	},
+ 	{
+ 		.name = "bfq.sectors_recursive",
+-		.private = offsetof(struct bfq_group, stats.sectors),
+-		.seq_show = bfqg_print_stat_recursive,
++		.seq_show = bfqg_print_stat_sectors_recursive,
+ 	},
+ 	{
+ 		.name = "bfq.io_service_bytes_recursive",
+-		.private = offsetof(struct bfq_group, stats.service_bytes),
+-		.seq_show = bfqg_print_rwstat_recursive,
++		.private = (unsigned long)&blkcg_policy_bfq,
++		.seq_show = blkg_print_stat_bytes_recursive,
+ 	},
+ 	{
+ 		.name = "bfq.io_serviced_recursive",
+-		.private = offsetof(struct bfq_group, stats.serviced),
+-		.seq_show = bfqg_print_rwstat_recursive,
++		.private = (unsigned long)&blkcg_policy_bfq,
++		.seq_show = blkg_print_stat_ios_recursive,
+ 	},
+ 	{
+ 		.name = "bfq.io_service_time_recursive",
+@@ -1102,31 +1106,39 @@ static struct cftype bfqio_files[] = {
+ 		.private = offsetof(struct bfq_group, stats.dequeue),
+ 		.seq_show = bfqg_print_stat,
+ 	},
+-	{
+-		.name = "bfq.unaccounted_time",
+-		.private = offsetof(struct bfq_group, stats.unaccounted_time),
+-		.seq_show = bfqg_print_stat,
+-	},
+ 	{ }	/* terminate */
+ };
+ 
+-static struct blkcg_policy blkcg_policy_bfq = {
+-	.dfl_cftypes            = bfqio_files_dfl,
+-	.legacy_cftypes		= bfqio_files,
+-
+-	.pd_alloc_fn		= bfq_pd_alloc,
+-	.pd_init_fn		= bfq_pd_init,
+-	.pd_offline_fn		= bfq_pd_offline,
+-	.pd_free_fn		= bfq_pd_free,
+-	.pd_reset_stats_fn	= bfq_pd_reset_stats,
+-
+-	.cpd_alloc_fn		= bfq_cpd_alloc,
+-	.cpd_init_fn		= bfq_cpd_init,
+-	.cpd_bind_fn		= bfq_cpd_init,
+-	.cpd_free_fn		= bfq_cpd_free,
++static struct cftype bfq_blkg_files[] = {
++	{
++		.name = "bfq.weight",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = bfq_io_show_weight,
++		.write = bfq_io_set_weight,
++	},
++	{} /* terminate */
+ };
+ 
+-#else
++#else /* CONFIG_BFQ_GROUP_IOSCHED */
++
++static inline void bfqg_stats_update_io_add(struct bfq_group *bfqg,
++			struct bfq_queue *bfqq, int op, int op_flags) { }
++static inline void
++bfqg_stats_update_io_remove(struct bfq_group *bfqg, int op, int op_flags) { }
++static inline void
++bfqg_stats_update_io_merged(struct bfq_group *bfqg, int op, int op_flags) { }
++static inline void bfqg_stats_update_completion(struct bfq_group *bfqg,
++			uint64_t start_time, uint64_t io_start_time, int op,
++			int op_flags) { }
++static inline void
++bfqg_stats_set_start_group_wait_time(struct bfq_group *bfqg,
++				     struct bfq_group *curr_bfqg) { }
++static inline void bfqg_stats_end_empty_time(struct bfqg_stats *stats) { }
++static inline void bfqg_stats_update_dequeue(struct bfq_group *bfqg) { }
++static inline void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg) { }
++static inline void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { }
++static inline void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg) { }
++static inline void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg) { }
+ 
+ static void bfq_init_entity(struct bfq_entity *entity,
+ 			    struct bfq_group *bfqg)
+@@ -1150,27 +1162,20 @@ bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
+ 	return bfqd->root_group;
+ }
+ 
+-static void bfq_bfqq_move(struct bfq_data *bfqd,
+-			  struct bfq_queue *bfqq,
+-			  struct bfq_entity *entity,
+-			  struct bfq_group *bfqg)
+-{
+-}
+-
+ static void bfq_end_wr_async(struct bfq_data *bfqd)
+ {
+ 	bfq_end_wr_async_queues(bfqd, bfqd->root_group);
+ }
+ 
+-static void bfq_disconnect_groups(struct bfq_data *bfqd)
++static struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
++					    struct blkcg *blkcg)
+ {
+-	bfq_put_async_queues(bfqd, bfqd->root_group);
++	return bfqd->root_group;
+ }
+ 
+-static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
+-					      struct blkcg *blkcg)
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq)
+ {
+-	return bfqd->root_group;
++	return bfqq->bfqd->root_group;
+ }
+ 
+ static struct bfq_group *
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index cf3e9b1..eef6ff4 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -7,25 +7,28 @@
+  * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+  *		      Paolo Valente <paolo.valente@unimore.it>
+  *
+- * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ * Copyright (C) 2015 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2016 Paolo Valente <paolo.valente@linaro.org>
+  *
+  * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
+  * file.
+  *
+- * BFQ is a proportional-share storage-I/O scheduling algorithm based on
+- * the slice-by-slice service scheme of CFQ. But BFQ assigns budgets,
+- * measured in number of sectors, to processes instead of time slices. The
+- * device is not granted to the in-service process for a given time slice,
+- * but until it has exhausted its assigned budget. This change from the time
+- * to the service domain allows BFQ to distribute the device throughput
+- * among processes as desired, without any distortion due to ZBR, workload
+- * fluctuations or other factors. BFQ uses an ad hoc internal scheduler,
+- * called B-WF2Q+, to schedule processes according to their budgets. More
+- * precisely, BFQ schedules queues associated to processes. Thanks to the
+- * accurate policy of B-WF2Q+, BFQ can afford to assign high budgets to
+- * I/O-bound processes issuing sequential requests (to boost the
+- * throughput), and yet guarantee a low latency to interactive and soft
+- * real-time applications.
++ * BFQ is a proportional-share storage-I/O scheduling algorithm based
++ * on the slice-by-slice service scheme of CFQ. But BFQ assigns
++ * budgets, measured in number of sectors, to processes instead of
++ * time slices. The device is not granted to the in-service process
++ * for a given time slice, but until it has exhausted its assigned
++ * budget. This change from the time to the service domain enables BFQ
++ * to distribute the device throughput among processes as desired,
++ * without any distortion due to throughput fluctuations, or to device
++ * internal queueing. BFQ uses an ad hoc internal scheduler, called
++ * B-WF2Q+, to schedule processes according to their budgets. More
++ * precisely, BFQ schedules queues associated with processes. Thanks to
++ * the accurate policy of B-WF2Q+, BFQ can afford to assign high
++ * budgets to I/O-bound processes issuing sequential requests (to
++ * boost the throughput), and yet guarantee a low latency to
++ * interactive and soft real-time applications.
+  *
+  * BFQ is described in [1], where also a reference to the initial, more
+  * theoretical paper on BFQ can be found. The interested reader can find
+@@ -70,8 +73,8 @@
+ #include "bfq.h"
+ #include "blk.h"
+ 
+-/* Expiration time of sync (0) and async (1) requests, in jiffies. */
+-static const int bfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
++/* Expiration time of sync (0) and async (1) requests, in ns. */
++static const u64 bfq_fifo_expire[2] = { NSEC_PER_SEC / 4, NSEC_PER_SEC / 8 };
+ 
+ /* Maximum backwards seek, in KiB. */
+ static const int bfq_back_max = 16 * 1024;
+@@ -79,15 +82,14 @@ static const int bfq_back_max = 16 * 1024;
+ /* Penalty of a backwards seek, in number of sectors. */
+ static const int bfq_back_penalty = 2;
+ 
+-/* Idling period duration, in jiffies. */
+-static int bfq_slice_idle = HZ / 125;
++/* Idling period duration, in ns. */
++static u32 bfq_slice_idle = NSEC_PER_SEC / 125;
+ 
+ /* Minimum number of assigned budgets for which stats are safe to compute. */
+ static const int bfq_stats_min_budgets = 194;
+ 
+ /* Default maximum budget values, in sectors and number of requests. */
+ static const int bfq_default_max_budget = 16 * 1024;
+-static const int bfq_max_budget_async_rq = 4;
+ 
+ /*
+  * Async to sync throughput distribution is controlled as follows:
+@@ -97,23 +99,27 @@ static const int bfq_max_budget_async_rq = 4;
+ static const int bfq_async_charge_factor = 10;
+ 
+ /* Default timeout values, in jiffies, approximating CFQ defaults. */
+-static const int bfq_timeout_sync = HZ / 8;
+-static int bfq_timeout_async = HZ / 25;
++static const int bfq_timeout = HZ / 8;
+ 
+ struct kmem_cache *bfq_pool;
+ 
+-/* Below this threshold (in ms), we consider thinktime immediate. */
+-#define BFQ_MIN_TT		2
++/* Below this threshold (in ns), we consider thinktime immediate. */
++#define BFQ_MIN_TT		(2 * NSEC_PER_MSEC)
+ 
+ /* hw_tag detection: parallel requests threshold and min samples needed. */
+ #define BFQ_HW_QUEUE_THRESHOLD	4
+ #define BFQ_HW_QUEUE_SAMPLES	32
+ 
+-#define BFQQ_SEEK_THR	 (sector_t)(8 * 1024)
+-#define BFQQ_SEEKY(bfqq) ((bfqq)->seek_mean > BFQQ_SEEK_THR)
++#define BFQQ_SEEK_THR		(sector_t)(8 * 100)
++#define BFQQ_CLOSE_THR		(sector_t)(8 * 1024)
++#define BFQQ_SEEKY(bfqq)	(hweight32(bfqq->seek_history) > 32/8)
+ 
+-/* Min samples used for peak rate estimation (for autotuning). */
+-#define BFQ_PEAK_RATE_SAMPLES	32
++/* Min number of samples required to perform peak-rate update */
++#define BFQ_RATE_MIN_SAMPLES	32
++/* Min observation time interval required to perform a peak-rate update (ns) */
++#define BFQ_RATE_MIN_INTERVAL	300*NSEC_PER_MSEC
++/* Target observation time interval for a peak-rate update (ns) */
++#define BFQ_RATE_REF_INTERVAL	NSEC_PER_SEC
+ 
+ /* Shift used for peak rate fixed precision calculations. */
+ #define BFQ_RATE_SHIFT		16
+@@ -141,16 +147,24 @@ struct kmem_cache *bfq_pool;
+  * The device's speed class is dynamically (re)detected in
+  * bfq_update_peak_rate() every time the estimated peak rate is updated.
+  *
+- * In the following definitions, R_slow[0]/R_fast[0] and T_slow[0]/T_fast[0]
+- * are the reference values for a slow/fast rotational device, whereas
+- * R_slow[1]/R_fast[1] and T_slow[1]/T_fast[1] are the reference values for
+- * a slow/fast non-rotational device. Finally, device_speed_thresh are the
+- * thresholds used to switch between speed classes.
++ * In the following definitions, R_slow[0]/R_fast[0] and
++ * T_slow[0]/T_fast[0] are the reference values for a slow/fast
++ * rotational device, whereas R_slow[1]/R_fast[1] and
++ * T_slow[1]/T_fast[1] are the reference values for a slow/fast
++ * non-rotational device. Finally, device_speed_thresh are the
++ * thresholds used to switch between speed classes. The reference
++ * rates are not the actual peak rates of the devices used as a
++ * reference, but slightly lower values. The reason for using these
++ * slightly lower values is that the peak-rate estimator tends to
++ * yield slightly lower values than the actual peak rate (it can yield
++ * the actual peak rate only if there is only one process doing I/O,
++ * and the process does sequential I/O).
++ *
+  * Both the reference peak rates and the thresholds are measured in
+  * sectors/usec, left-shifted by BFQ_RATE_SHIFT.
+  */
+-static int R_slow[2] = {1536, 10752};
+-static int R_fast[2] = {17415, 34791};
++static int R_slow[2] = {1000, 10700};
++static int R_fast[2] = {14000, 33000};
+ /*
+  * To improve readability, a conversion function is used to initialize the
+  * following arrays, which entails that they can be initialized only in a
+@@ -183,10 +197,7 @@ static void bfq_schedule_dispatch(struct bfq_data *bfqd);
+  */
+ static int bfq_bio_sync(struct bio *bio)
+ {
+-	if (bio_data_dir(bio) == READ || (bio->bi_rw & REQ_SYNC))
+-		return 1;
+-
+-	return 0;
++	return bio_data_dir(bio) == READ || (bio->bi_opf & REQ_SYNC);
+ }
+ 
+ /*
+@@ -409,11 +420,7 @@ static bool bfq_differentiated_weights(struct bfq_data *bfqd)
+  */
+ static bool bfq_symmetric_scenario(struct bfq_data *bfqd)
+ {
+-	return
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+-		!bfqd->active_numerous_groups &&
+-#endif
+-		!bfq_differentiated_weights(bfqd);
++	return !bfq_differentiated_weights(bfqd);
+ }
+ 
+ /*
+@@ -533,9 +540,19 @@ static struct request *bfq_find_next_rq(struct bfq_data *bfqd,
+ static unsigned long bfq_serv_to_charge(struct request *rq,
+ 					struct bfq_queue *bfqq)
+ {
+-	return blk_rq_sectors(rq) *
+-		(1 + ((!bfq_bfqq_sync(bfqq)) * (bfqq->wr_coeff == 1) *
+-		bfq_async_charge_factor));
++	if (bfq_bfqq_sync(bfqq) || bfqq->wr_coeff > 1)
++		return blk_rq_sectors(rq);
++
++	/*
++	 * If there are no weight-raised queues, then amplify service
++	 * by just the async charge factor; otherwise amplify service
++	 * by twice the async charge factor, to further reduce latency
++	 * for weight-raised queues.
++	 */
++	if (bfqq->bfqd->wr_busy_queues == 0)
++		return blk_rq_sectors(rq) * bfq_async_charge_factor;
++
++	return blk_rq_sectors(rq) * 2 * bfq_async_charge_factor;
+ }
+ 
+ /**
+@@ -590,12 +607,23 @@ static unsigned int bfq_wr_duration(struct bfq_data *bfqd)
+ 	dur = bfqd->RT_prod;
+ 	do_div(dur, bfqd->peak_rate);
+ 
+-	return dur;
+-}
++	/*
++	 * Limit duration between 3 and 13 seconds. Tests show that
++	 * higher values than 13 seconds often yield the opposite of
++	 * the desired result, i.e., worsen responsiveness by letting
++	 * non-interactive and non-soft-real-time applications
++	 * preserve weight raising for a too long time interval.
++	 *
++	 * On the other end, lower values than 3 seconds make it
++	 * difficult for most interactive tasks to complete their jobs
++	 * before weight-raising finishes.
++	 */
++	if (dur > msecs_to_jiffies(13000))
++		dur = msecs_to_jiffies(13000);
++	else if (dur < msecs_to_jiffies(3000))
++		dur = msecs_to_jiffies(3000);
+ 
+-static unsigned int bfq_bfqq_cooperations(struct bfq_queue *bfqq)
+-{
+-	return bfqq->bic ? bfqq->bic->cooperations : 0;
++	return dur;
+ }
+ 
+ static void
+@@ -605,31 +633,28 @@ bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
+ 		bfq_mark_bfqq_idle_window(bfqq);
+ 	else
+ 		bfq_clear_bfqq_idle_window(bfqq);
++
+ 	if (bic->saved_IO_bound)
+ 		bfq_mark_bfqq_IO_bound(bfqq);
+ 	else
+ 		bfq_clear_bfqq_IO_bound(bfqq);
+-	/* Assuming that the flag in_large_burst is already correctly set */
+-	if (bic->wr_time_left && bfqq->bfqd->low_latency &&
+-	    !bfq_bfqq_in_large_burst(bfqq) &&
+-	    bic->cooperations < bfqq->bfqd->bfq_coop_thresh) {
+-		/*
+-		 * Start a weight raising period with the duration given by
+-		 * the raising_time_left snapshot.
+-		 */
+-		if (bfq_bfqq_busy(bfqq))
+-			bfqq->bfqd->wr_busy_queues++;
+-		bfqq->wr_coeff = bfqq->bfqd->bfq_wr_coeff;
+-		bfqq->wr_cur_max_time = bic->wr_time_left;
+-		bfqq->last_wr_start_finish = jiffies;
+-		bfqq->entity.prio_changed = 1;
++
++	bfqq->wr_coeff = bic->saved_wr_coeff;
++	bfqq->wr_start_at_switch_to_srt = bic->saved_wr_start_at_switch_to_srt;
++	BUG_ON(time_is_after_jiffies(bfqq->wr_start_at_switch_to_srt));
++	bfqq->last_wr_start_finish = bic->saved_last_wr_start_finish;
++	BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish));
++
++	if (bfqq->wr_coeff > 1 && (bfq_bfqq_in_large_burst(bfqq) ||
++	    time_is_before_jiffies(bfqq->last_wr_start_finish +
++				   bfqq->wr_cur_max_time))) {
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++		    "resume state: switching off wr");
++
++		bfqq->wr_coeff = 1;
+ 	}
+-	/*
+-	 * Clear wr_time_left to prevent bfq_bfqq_save_state() from
+-	 * getting confused about the queue's need of a weight-raising
+-	 * period.
+-	 */
+-	bic->wr_time_left = 0;
++	/* make sure weight will be updated, however we got here */
++	bfqq->entity.prio_changed = 1;
+ }
+ 
+ static int bfqq_process_refs(struct bfq_queue *bfqq)
+@@ -639,7 +664,7 @@ static int bfqq_process_refs(struct bfq_queue *bfqq)
+ 	lockdep_assert_held(bfqq->bfqd->queue->queue_lock);
+ 
+ 	io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
+-	process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
++	process_refs = bfqq->ref - io_refs - bfqq->entity.on_st;
+ 	BUG_ON(process_refs < 0);
+ 	return process_refs;
+ }
+@@ -654,6 +679,7 @@ static void bfq_reset_burst_list(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 		hlist_del_init(&item->burst_list_node);
+ 	hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
+ 	bfqd->burst_size = 1;
++	bfqd->burst_parent_entity = bfqq->entity.parent;
+ }
+ 
+ /* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */
+@@ -662,6 +688,10 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 	/* Increment burst size to take into account also bfqq */
+ 	bfqd->burst_size++;
+ 
++	bfq_log_bfqq(bfqd, bfqq, "add_to_burst %d", bfqd->burst_size);
++
++	BUG_ON(bfqd->burst_size > bfqd->bfq_large_burst_thresh);
++
+ 	if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) {
+ 		struct bfq_queue *pos, *bfqq_item;
+ 		struct hlist_node *n;
+@@ -671,15 +701,19 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 		 * other to consider this burst as large.
+ 		 */
+ 		bfqd->large_burst = true;
++		bfq_log_bfqq(bfqd, bfqq, "add_to_burst: large burst started");
+ 
+ 		/*
+ 		 * We can now mark all queues in the burst list as
+ 		 * belonging to a large burst.
+ 		 */
+ 		hlist_for_each_entry(bfqq_item, &bfqd->burst_list,
+-				     burst_list_node)
++				     burst_list_node) {
+ 			bfq_mark_bfqq_in_large_burst(bfqq_item);
++			bfq_log_bfqq(bfqd, bfqq_item, "marked in large burst");
++		}
+ 		bfq_mark_bfqq_in_large_burst(bfqq);
++		bfq_log_bfqq(bfqd, bfqq, "marked in large burst");
+ 
+ 		/*
+ 		 * From now on, and until the current burst finishes, any
+@@ -691,67 +725,79 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 		hlist_for_each_entry_safe(pos, n, &bfqd->burst_list,
+ 					  burst_list_node)
+ 			hlist_del_init(&pos->burst_list_node);
+-	} else /* burst not yet large: add bfqq to the burst list */
++	} else /*
++		* Burst not yet large: add bfqq to the burst list. Do
++		* not increment the ref counter for bfqq, because bfqq
++		* is removed from the burst list before freeing bfqq
++		* in put_queue.
++		*/
+ 		hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
+ }
+ 
+ /*
+- * If many queues happen to become active shortly after each other, then,
+- * to help the processes associated to these queues get their job done as
+- * soon as possible, it is usually better to not grant either weight-raising
+- * or device idling to these queues. In this comment we describe, firstly,
+- * the reasons why this fact holds, and, secondly, the next function, which
+- * implements the main steps needed to properly mark these queues so that
+- * they can then be treated in a different way.
++ * If many queues belonging to the same group happen to be created
++ * shortly after each other, then the processes associated with these
++ * queues have typically a common goal. In particular, bursts of queue
++ * creations are usually caused by services or applications that spawn
++ * many parallel threads/processes. Examples are systemd during boot,
++ * or git grep. To help these processes get their job done as soon as
++ * possible, it is usually better to not grant either weight-raising
++ * or device idling to their queues.
+  *
+- * As for the terminology, we say that a queue becomes active, i.e.,
+- * switches from idle to backlogged, either when it is created (as a
+- * consequence of the arrival of an I/O request), or, if already existing,
+- * when a new request for the queue arrives while the queue is idle.
+- * Bursts of activations, i.e., activations of different queues occurring
+- * shortly after each other, are typically caused by services or applications
+- * that spawn or reactivate many parallel threads/processes. Examples are
+- * systemd during boot or git grep.
++ * In this comment we describe, firstly, the reasons why this fact
++ * holds, and, secondly, the next function, which implements the main
++ * steps needed to properly mark these queues so that they can then be
++ * treated in a different way.
+  *
+- * These services or applications benefit mostly from a high throughput:
+- * the quicker the requests of the activated queues are cumulatively served,
+- * the sooner the target job of these queues gets completed. As a consequence,
+- * weight-raising any of these queues, which also implies idling the device
+- * for it, is almost always counterproductive: in most cases it just lowers
+- * throughput.
++ * The above services or applications benefit mostly from a high
++ * throughput: the quicker the requests of the activated queues are
++ * cumulatively served, the sooner the target job of these queues gets
++ * completed. As a consequence, weight-raising any of these queues,
++ * which also implies idling the device for it, is almost always
++ * counterproductive. In most cases it just lowers throughput.
+  *
+- * On the other hand, a burst of activations may be also caused by the start
+- * of an application that does not consist in a lot of parallel I/O-bound
+- * threads. In fact, with a complex application, the burst may be just a
+- * consequence of the fact that several processes need to be executed to
+- * start-up the application. To start an application as quickly as possible,
+- * the best thing to do is to privilege the I/O related to the application
+- * with respect to all other I/O. Therefore, the best strategy to start as
+- * quickly as possible an application that causes a burst of activations is
+- * to weight-raise all the queues activated during the burst. This is the
++ * On the other hand, a burst of queue creations may be caused also by
++ * the start of an application that does not consist of a lot of
++ * parallel I/O-bound threads. In fact, with a complex application,
++ * several short processes may need to be executed to start-up the
++ * application. In this respect, to start an application as quickly as
++ * possible, the best thing to do is in any case to privilege the I/O
++ * related to the application with respect to all other
++ * I/O. Therefore, the best strategy to start as quickly as possible
++ * an application that causes a burst of queue creations is to
++ * weight-raise all the queues created during the burst. This is the
+  * exact opposite of the best strategy for the other type of bursts.
+  *
+- * In the end, to take the best action for each of the two cases, the two
+- * types of bursts need to be distinguished. Fortunately, this seems
+- * relatively easy to do, by looking at the sizes of the bursts. In
+- * particular, we found a threshold such that bursts with a larger size
+- * than that threshold are apparently caused only by services or commands
+- * such as systemd or git grep. For brevity, hereafter we call just 'large'
+- * these bursts. BFQ *does not* weight-raise queues whose activations occur
+- * in a large burst. In addition, for each of these queues BFQ performs or
+- * does not perform idling depending on which choice boosts the throughput
+- * most. The exact choice depends on the device and request pattern at
++ * In the end, to take the best action for each of the two cases, the
++ * two types of bursts need to be distinguished. Fortunately, this
++ * seems relatively easy, by looking at the sizes of the bursts. In
++ * particular, we found a threshold such that only bursts with a
++ * larger size than that threshold are apparently caused by
++ * services or commands such as systemd or git grep. For brevity,
++ * hereafter we call just 'large' these bursts. BFQ *does not*
++ * weight-raise queues whose creation occurs in a large burst. In
++ * addition, for each of these queues BFQ performs or does not perform
++ * idling depending on which choice boosts the throughput more. The
++ * exact choice depends on the device and request pattern at
+  * hand.
+  *
+- * Turning back to the next function, it implements all the steps needed
+- * to detect the occurrence of a large burst and to properly mark all the
+- * queues belonging to it (so that they can then be treated in a different
+- * way). This goal is achieved by maintaining a special "burst list" that
+- * holds, temporarily, the queues that belong to the burst in progress. The
+- * list is then used to mark these queues as belonging to a large burst if
+- * the burst does become large. The main steps are the following.
++ * Unfortunately, false positives may occur while an interactive task
++ * is starting (e.g., an application is being started). The
++ * consequence is that the queues associated with the task do not
++ * enjoy weight raising as expected. Fortunately these false positives
++ * are very rare. They typically occur if some service happens to
++ * start doing I/O exactly when the interactive task starts.
+  *
+- * . when the very first queue is activated, the queue is inserted into the
++ * Turning back to the next function, it implements all the steps
++ * needed to detect the occurrence of a large burst and to properly
++ * mark all the queues belonging to it (so that they can then be
++ * treated in a different way). This goal is achieved by maintaining a
++ * "burst list" that holds, temporarily, the queues that belong to the
++ * burst in progress. The list is then used to mark these queues as
++ * belonging to a large burst if the burst does become large. The main
++ * steps are the following.
++ *
++ * . when the very first queue is created, the queue is inserted into the
+  *   list (as it could be the first queue in a possible burst)
+  *
+  * . if the current burst has not yet become large, and a queue Q that does
+@@ -772,13 +818,13 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+  *
+  *     . the device enters a large-burst mode
+  *
+- * . if a queue Q that does not belong to the burst is activated while
++ * . if a queue Q that does not belong to the burst is created while
+  *   the device is in large-burst mode and shortly after the last time
+  *   at which a queue either entered the burst list or was marked as
+  *   belonging to the current large burst, then Q is immediately marked
+  *   as belonging to a large burst.
+  *
+- * . if a queue Q that does not belong to the burst is activated a while
++ * . if a queue Q that does not belong to the burst is created a while
+  *   later, i.e., not shortly after, than the last time at which a queue
+  *   either entered the burst list or was marked as belonging to the
+  *   current large burst, then the current burst is deemed as finished and:
+@@ -791,52 +837,44 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+  *          in a possible new burst (then the burst list contains just Q
+  *          after this step).
+  */
+-static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+-			     bool idle_for_long_time)
++static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+ 	/*
+-	 * If bfqq happened to be activated in a burst, but has been idle
+-	 * for at least as long as an interactive queue, then we assume
+-	 * that, in the overall I/O initiated in the burst, the I/O
+-	 * associated to bfqq is finished. So bfqq does not need to be
+-	 * treated as a queue belonging to a burst anymore. Accordingly,
+-	 * we reset bfqq's in_large_burst flag if set, and remove bfqq
+-	 * from the burst list if it's there. We do not decrement instead
+-	 * burst_size, because the fact that bfqq does not need to belong
+-	 * to the burst list any more does not invalidate the fact that
+-	 * bfqq may have been activated during the current burst.
+-	 */
+-	if (idle_for_long_time) {
+-		hlist_del_init(&bfqq->burst_list_node);
+-		bfq_clear_bfqq_in_large_burst(bfqq);
+-	}
+-
+-	/*
+ 	 * If bfqq is already in the burst list or is part of a large
+-	 * burst, then there is nothing else to do.
++	 * burst, or finally has just been split, then there is
++	 * nothing else to do.
+ 	 */
+ 	if (!hlist_unhashed(&bfqq->burst_list_node) ||
+-	    bfq_bfqq_in_large_burst(bfqq))
++	    bfq_bfqq_in_large_burst(bfqq) ||
++	    time_is_after_eq_jiffies(bfqq->split_time +
++				     msecs_to_jiffies(10)))
+ 		return;
+ 
+ 	/*
+-	 * If bfqq's activation happens late enough, then the current
+-	 * burst is finished, and related data structures must be reset.
++	 * If bfqq's creation happens late enough, or bfqq belongs to
++	 * a different group than the burst group, then the current
++	 * burst is finished, and related data structures must be
++	 * reset.
+ 	 *
+-	 * In this respect, consider the special case where bfqq is the very
+-	 * first queue being activated. In this case, last_ins_in_burst is
+-	 * not yet significant when we get here. But it is easy to verify
+-	 * that, whether or not the following condition is true, bfqq will
+-	 * end up being inserted into the burst list. In particular the
+-	 * list will happen to contain only bfqq. And this is exactly what
+-	 * has to happen, as bfqq may be the first queue in a possible
++	 * In this respect, consider the special case where bfqq is
++	 * the very first queue created after BFQ is selected for this
++	 * device. In this case, last_ins_in_burst and
++	 * burst_parent_entity are not yet significant when we get
++	 * here. But it is easy to verify that, whether or not the
++	 * following condition is true, bfqq will end up being
++	 * inserted into the burst list. In particular the list will
++	 * happen to contain only bfqq. And this is exactly what has
++	 * to happen, as bfqq may be the first queue of the first
+ 	 * burst.
+ 	 */
+ 	if (time_is_before_jiffies(bfqd->last_ins_in_burst +
+-	    bfqd->bfq_burst_interval)) {
++	    bfqd->bfq_burst_interval) ||
++	    bfqq->entity.parent != bfqd->burst_parent_entity) {
+ 		bfqd->large_burst = false;
+ 		bfq_reset_burst_list(bfqd, bfqq);
+-		return;
++		bfq_log_bfqq(bfqd, bfqq,
++			"handle_burst: late activation or different group");
++		goto end;
+ 	}
+ 
+ 	/*
+@@ -845,8 +883,9 @@ static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	 * bfqq as belonging to this large burst immediately.
+ 	 */
+ 	if (bfqd->large_burst) {
++		bfq_log_bfqq(bfqd, bfqq, "handle_burst: marked in burst");
+ 		bfq_mark_bfqq_in_large_burst(bfqq);
+-		return;
++		goto end;
+ 	}
+ 
+ 	/*
+@@ -855,25 +894,491 @@ static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	 * queue. Then we add bfqq to the burst.
+ 	 */
+ 	bfq_add_to_burst(bfqd, bfqq);
++end:
++	/*
++	 * At this point, bfqq either has been added to the current
++	 * burst or has caused the current burst to terminate and a
++	 * possible new burst to start. In particular, in the second
++	 * case, bfqq has become the first queue in the possible new
++	 * burst.  In both cases last_ins_in_burst needs to be moved
++	 * forward.
++	 */
++	bfqd->last_ins_in_burst = jiffies;
++
++}
++
++static int bfq_bfqq_budget_left(struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	return entity->budget - entity->service;
++}
++
++/*
++ * If enough samples have been computed, return the current max budget
++ * stored in bfqd, which is dynamically updated according to the
++ * estimated disk peak rate; otherwise return the default max budget
++ */
++static int bfq_max_budget(struct bfq_data *bfqd)
++{
++	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
++		return bfq_default_max_budget;
++	else
++		return bfqd->bfq_max_budget;
++}
++
++/*
++ * Return min budget, which is a fraction of the current or default
++ * max budget (trying with 1/32)
++ */
++static int bfq_min_budget(struct bfq_data *bfqd)
++{
++	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
++		return bfq_default_max_budget / 32;
++	else
++		return bfqd->bfq_max_budget / 32;
++}
++
++static void bfq_bfqq_expire(struct bfq_data *bfqd,
++			    struct bfq_queue *bfqq,
++			    bool compensate,
++			    enum bfqq_expiration reason);
++
++/*
++ * The next function, invoked after the input queue bfqq switches from
++ * idle to busy, updates the budget of bfqq. The function also tells
++ * whether the in-service queue should be expired, by returning
++ * true. The purpose of expiring the in-service queue is to give bfqq
++ * the chance to possibly preempt the in-service queue, and the reason
++ * for preempting the in-service queue is to achieve one of the two
++ * goals below.
++ *
++ * 1. Guarantee to bfqq its reserved bandwidth even if bfqq has
++ * expired because it has remained idle. In particular, bfqq may have
++ * expired for one of the following two reasons:
++ *
++ * - BFQ_BFQQ_NO_MORE_REQUEST bfqq did not enjoy any device idling and
++ *   did not make it to issue a new request before its last request
++ *   was served;
++ *
++ * - BFQ_BFQQ_TOO_IDLE bfqq did enjoy device idling, but did not issue
++ *   a new request before the expiration of the idling-time.
++ *
++ * Even if bfqq has expired for one of the above reasons, the process
++ * associated with the queue may be however issuing requests greedily,
++ * and thus be sensitive to the bandwidth it receives (bfqq may have
++ * remained idle for other reasons: CPU high load, bfqq not enjoying
++ * idling, I/O throttling somewhere in the path from the process to
++ * the I/O scheduler, ...). But if, after every expiration for one of
++ * the above two reasons, bfqq has to wait for the service of at least
++ * one full budget of another queue before being served again, then
++ * bfqq is likely to get a much lower bandwidth or resource time than
++ * its reserved ones. To address this issue, two countermeasures need
++ * to be taken.
++ *
++ * First, the budget and the timestamps of bfqq need to be updated in
++ * a special way on bfqq reactivation: they need to be updated as if
++ * bfqq did not remain idle and did not expire. In fact, if they are
++ * computed as if bfqq expired and remained idle until reactivation,
++ * then the process associated with bfqq is treated as if, instead of
++ * being greedy, it stopped issuing requests when bfqq remained idle,
++ * and restarts issuing requests only on this reactivation. In other
++ * words, the scheduler does not help the process recover the "service
++ * hole" between bfqq expiration and reactivation. As a consequence,
++ * the process receives a lower bandwidth than its reserved one. In
++ * contrast, to recover this hole, the budget must be updated as if
++ * bfqq was not expired at all before this reactivation, i.e., it must
++ * be set to the value of the remaining budget when bfqq was
++ * expired. Along the same line, timestamps need to be assigned the
++ * value they had the last time bfqq was selected for service, i.e.,
++ * before last expiration. Thus timestamps need to be back-shifted
++ * with respect to their normal computation (see [1] for more details
++ * on this tricky aspect).
++ *
++ * Secondly, to allow the process to recover the hole, the in-service
++ * queue must be expired too, to give bfqq the chance to preempt it
++ * immediately. In fact, if bfqq has to wait for a full budget of the
++ * in-service queue to be completed, then it may become impossible to
++ * let the process recover the hole, even if the back-shifted
++ * timestamps of bfqq are lower than those of the in-service queue. If
++ * this happens for most or all of the holes, then the process may not
++ * receive its reserved bandwidth. In this respect, it is worth noting
++ * that, being the service of outstanding requests unpreemptible, a
++ * little fraction of the holes may however be unrecoverable, thereby
++ * causing a little loss of bandwidth.
++ *
++ * The last important point is detecting whether bfqq does need this
++ * bandwidth recovery. In this respect, the next function deems the
++ * process associated with bfqq greedy, and thus allows it to recover
++ * the hole, if: 1) the process is waiting for the arrival of a new
++ * request (which implies that bfqq expired for one of the above two
++ * reasons), and 2) such a request has arrived soon. The first
++ * condition is controlled through the flag non_blocking_wait_rq,
++ * while the second through the flag arrived_in_time. If both
++ * conditions hold, then the function computes the budget in the
++ * above-described special way, and signals that the in-service queue
++ * should be expired. Timestamp back-shifting is done later in
++ * __bfq_activate_entity.
++ *
++ * 2. Reduce latency. Even if timestamps are not backshifted to let
++ * the process associated with bfqq recover a service hole, bfqq may
++ * however happen to have, after being (re)activated, a lower finish
++ * timestamp than the in-service queue.  That is, the next budget of
++ * bfqq may have to be completed before the one of the in-service
++ * queue. If this is the case, then preempting the in-service queue
++ * allows this goal to be achieved, apart from the unpreemptible,
++ * outstanding requests mentioned above.
++ *
++ * Unfortunately, regardless of which of the above two goals one wants
++ * to achieve, service trees need first to be updated to know whether
++ * the in-service queue must be preempted. To have service trees
++ * correctly updated, the in-service queue must be expired and
++ * rescheduled, and bfqq must be scheduled too. This is one of the
++ * most costly operations (in future versions, the scheduling
++ * mechanism may be re-designed in such a way to make it possible to
++ * know whether preemption is needed without needing to update service
++ * trees). In addition, queue preemptions almost always cause random
++ * I/O, and thus loss of throughput. Because of these facts, the next
++ * function adopts the following simple scheme to avoid both costly
++ * operations and too frequent preemptions: it requests the expiration
++ * of the in-service queue (unconditionally) only for queues that need
++ * to recover a hole, or that either are weight-raised or deserve to
++ * be weight-raised.
++ */
++static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd,
++						struct bfq_queue *bfqq,
++						bool arrived_in_time,
++						bool wr_or_deserves_wr)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	if (bfq_bfqq_non_blocking_wait_rq(bfqq) && arrived_in_time) {
++		/*
++		 * We do not clear the flag non_blocking_wait_rq here, as
++		 * the latter is used in bfq_activate_bfqq to signal
++		 * that timestamps need to be back-shifted (and is
++		 * cleared right after).
++		 */
++
++		/*
++		 * In next assignment we rely on that either
++		 * entity->service or entity->budget are not updated
++		 * on expiration if bfqq is empty (see
++		 * __bfq_bfqq_recalc_budget). Thus both quantities
++		 * remain unchanged after such an expiration, and the
++		 * following statement therefore assigns to
++		 * entity->budget the remaining budget on such an
++		 * expiration. For clarity, entity->service is not
++		 * updated on expiration in any case, and, in normal
++		 * operation, is reset only when bfqq is selected for
++		 * service (see bfq_get_next_queue).
++		 */
++		BUG_ON(bfqq->max_budget < 0);
++		entity->budget = min_t(unsigned long,
++				       bfq_bfqq_budget_left(bfqq),
++				       bfqq->max_budget);
++
++		BUG_ON(entity->budget < 0);
++		return true;
++	}
++
++	BUG_ON(bfqq->max_budget < 0);
++	entity->budget = max_t(unsigned long, bfqq->max_budget,
++			       bfq_serv_to_charge(bfqq->next_rq, bfqq));
++	BUG_ON(entity->budget < 0);
++
++	bfq_clear_bfqq_non_blocking_wait_rq(bfqq);
++	return wr_or_deserves_wr;
++}
++
++static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd,
++					     struct bfq_queue *bfqq,
++					     unsigned int old_wr_coeff,
++					     bool wr_or_deserves_wr,
++					     bool interactive,
++					     bool in_burst,
++					     bool soft_rt)
++{
++	if (old_wr_coeff == 1 && wr_or_deserves_wr) {
++		/* start a weight-raising period */
++		if (interactive) {
++			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++			bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++		} else {
++			bfqq->wr_start_at_switch_to_srt = jiffies;
++			bfqq->wr_coeff = bfqd->bfq_wr_coeff *
++				BFQ_SOFTRT_WEIGHT_FACTOR;
++			bfqq->wr_cur_max_time =
++				bfqd->bfq_wr_rt_max_time;
++		}
++		/*
++		 * If needed, further reduce budget to make sure it is
++		 * close to bfqq's backlog, so as to reduce the
++		 * scheduling-error component due to a too large
++		 * budget. Do not care about throughput consequences,
++		 * but only about latency. Finally, do not assign a
++		 * too small budget either, to avoid increasing
++		 * latency by causing too frequent expirations.
++		 */
++		bfqq->entity.budget = min_t(unsigned long,
++					    bfqq->entity.budget,
++					    2 * bfq_min_budget(bfqd));
++
++		bfq_log_bfqq(bfqd, bfqq,
++			     "wrais starting at %lu, rais_max_time %u",
++			     jiffies,
++			     jiffies_to_msecs(bfqq->wr_cur_max_time));
++	} else if (old_wr_coeff > 1) {
++		if (interactive) { /* update wr coeff and duration */
++			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++			bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++		} else if (in_burst) {
++			bfqq->wr_coeff = 1;
++			bfq_log_bfqq(bfqd, bfqq,
++				     "wrais ending at %lu, rais_max_time %u",
++				     jiffies,
++				     jiffies_to_msecs(bfqq->
++						      wr_cur_max_time));
++		} else if (soft_rt) {
++			/*
++			 * The application is now or still meeting the
++			 * requirements for being deemed soft rt.  We
++			 * can then correctly and safely (re)charge
++			 * the weight-raising duration for the
++			 * application with the weight-raising
++			 * duration for soft rt applications.
++			 *
++			 * In particular, doing this recharge now, i.e.,
++			 * before the weight-raising period for the
++			 * application finishes, reduces the probability
++			 * of the following negative scenario:
++			 * 1) the weight of a soft rt application is
++			 *    raised at startup (as for any newly
++			 *    created application),
++			 * 2) since the application is not interactive,
++			 *    at a certain time weight-raising is
++			 *    stopped for the application,
++			 * 3) at that time the application happens to
++			 *    still have pending requests, and hence
++			 *    is destined to not have a chance to be
++			 *    deemed soft rt before these requests are
++			 *    completed (see the comments to the
++			 *    function bfq_bfqq_softrt_next_start()
++			 *    for details on soft rt detection),
++			 * 4) these pending requests experience a high
++			 *    latency because the application is not
++			 *    weight-raised while they are pending.
++			 */
++			if (bfqq->wr_cur_max_time !=
++				bfqd->bfq_wr_rt_max_time) {
++				bfqq->wr_start_at_switch_to_srt =
++					bfqq->last_wr_start_finish;
++                BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish));
++
++				bfqq->wr_cur_max_time =
++					bfqd->bfq_wr_rt_max_time;
++				bfqq->wr_coeff = bfqd->bfq_wr_coeff *
++					BFQ_SOFTRT_WEIGHT_FACTOR;
++				bfq_log_bfqq(bfqd, bfqq,
++					     "switching to soft_rt wr");
++			} else
++				bfq_log_bfqq(bfqd, bfqq,
++					"moving forward soft_rt wr duration");
++			bfqq->last_wr_start_finish = jiffies;
++		}
++	}
++}
++
++static bool bfq_bfqq_idle_for_long_time(struct bfq_data *bfqd,
++					struct bfq_queue *bfqq)
++{
++	return bfqq->dispatched == 0 &&
++		time_is_before_jiffies(
++			bfqq->budget_timeout +
++			bfqd->bfq_wr_min_idle_time);
++}
++
++static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
++					     struct bfq_queue *bfqq,
++					     int old_wr_coeff,
++					     struct request *rq,
++					     bool *interactive)
++{
++	bool soft_rt, in_burst,	wr_or_deserves_wr,
++		bfqq_wants_to_preempt,
++		idle_for_long_time = bfq_bfqq_idle_for_long_time(bfqd, bfqq),
++		/*
++		 * See the comments on
++		 * bfq_bfqq_update_budg_for_activation for
++		 * details on the usage of the next variable.
++		 */
++		arrived_in_time =  ktime_get_ns() <=
++			RQ_BIC(rq)->ttime.last_end_request +
++			bfqd->bfq_slice_idle * 3;
++
++	bfq_log_bfqq(bfqd, bfqq,
++		     "bfq_add_request non-busy: "
++		     "jiffies %lu, in_time %d, idle_long %d busyw %d "
++		     "wr_coeff %u",
++		     jiffies, arrived_in_time,
++		     idle_for_long_time,
++		     bfq_bfqq_non_blocking_wait_rq(bfqq),
++		     old_wr_coeff);
++
++	BUG_ON(bfqq->entity.budget < bfqq->entity.service);
++
++	BUG_ON(bfqq == bfqd->in_service_queue);
++	bfqg_stats_update_io_add(bfqq_group(RQ_BFQQ(rq)), bfqq,
++				 req_op(rq), rq->cmd_flags);
++
++	/*
++	 * bfqq deserves to be weight-raised if:
++	 * - it is sync,
++	 * - it does not belong to a large burst,
++	 * - it has been idle for enough time or is soft real-time,
++	 * - is linked to a bfq_io_cq (it is not shared in any sense)
++	 */
++	in_burst = bfq_bfqq_in_large_burst(bfqq);
++	soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
++		!in_burst &&
++		time_is_before_jiffies(bfqq->soft_rt_next_start);
++	*interactive =
++		!in_burst &&
++		idle_for_long_time;
++	wr_or_deserves_wr = bfqd->low_latency &&
++		(bfqq->wr_coeff > 1 ||
++		 (bfq_bfqq_sync(bfqq) &&
++		  bfqq->bic && (*interactive || soft_rt)));
++
++	bfq_log_bfqq(bfqd, bfqq,
++		     "bfq_add_request: "
++		     "in_burst %d, "
++		     "soft_rt %d (next %lu), inter %d, bic %p",
++		     bfq_bfqq_in_large_burst(bfqq), soft_rt,
++		     bfqq->soft_rt_next_start,
++		     *interactive,
++		     bfqq->bic);
++
++	/*
++	 * Using the last flag, update budget and check whether bfqq
++	 * may want to preempt the in-service queue.
++	 */
++	bfqq_wants_to_preempt =
++		bfq_bfqq_update_budg_for_activation(bfqd, bfqq,
++						    arrived_in_time,
++						    wr_or_deserves_wr);
++
++	/*
++	 * If bfqq happened to be activated in a burst, but has been
++	 * idle for much more than an interactive queue, then we
++	 * assume that, in the overall I/O initiated in the burst, the
++	 * I/O associated with bfqq is finished. So bfqq does not need
++	 * to be treated as a queue belonging to a burst
++	 * anymore. Accordingly, we reset bfqq's in_large_burst flag
++	 * if set, and remove bfqq from the burst list if it's
++	 * there. We do not decrement burst_size, because the fact
++	 * that bfqq does not need to belong to the burst list any
++	 * more does not invalidate the fact that bfqq was created in
++	 * a burst.
++	 */
++	if (likely(!bfq_bfqq_just_created(bfqq)) &&
++	    idle_for_long_time &&
++	    time_is_before_jiffies(
++		    bfqq->budget_timeout +
++		    msecs_to_jiffies(10000))) {
++		hlist_del_init(&bfqq->burst_list_node);
++		bfq_clear_bfqq_in_large_burst(bfqq);
++	}
++
++	bfq_clear_bfqq_just_created(bfqq);
++
++	if (!bfq_bfqq_IO_bound(bfqq)) {
++		if (arrived_in_time) {
++			bfqq->requests_within_timer++;
++			if (bfqq->requests_within_timer >=
++			    bfqd->bfq_requests_within_timer)
++				bfq_mark_bfqq_IO_bound(bfqq);
++		} else
++			bfqq->requests_within_timer = 0;
++		bfq_log_bfqq(bfqd, bfqq, "requests in time %d",
++			     bfqq->requests_within_timer);
++	}
++
++	if (bfqd->low_latency) {
++		if (unlikely(time_is_after_jiffies(bfqq->split_time)))
++			/* wraparound */
++			bfqq->split_time =
++				jiffies - bfqd->bfq_wr_min_idle_time - 1;
++
++		if (time_is_before_jiffies(bfqq->split_time +
++					   bfqd->bfq_wr_min_idle_time)) {
++			bfq_update_bfqq_wr_on_rq_arrival(bfqd, bfqq,
++							 old_wr_coeff,
++							 wr_or_deserves_wr,
++							 *interactive,
++							 in_burst,
++							 soft_rt);
++
++			if (old_wr_coeff != bfqq->wr_coeff)
++				bfqq->entity.prio_changed = 1;
++		}
++	}
++
++	bfqq->last_idle_bklogged = jiffies;
++	bfqq->service_from_backlogged = 0;
++	bfq_clear_bfqq_softrt_update(bfqq);
++
++	bfq_add_bfqq_busy(bfqd, bfqq);
++
++	/*
++	 * Expire in-service queue only if preemption may be needed
++	 * for guarantees. In this respect, the function
++	 * next_queue_may_preempt just checks a simple, necessary
++	 * condition, and not a sufficient condition based on
++	 * timestamps. In fact, for the latter condition to be
++	 * evaluated, timestamps would need first to be updated, and
++	 * this operation is quite costly (see the comments on the
++	 * function bfq_bfqq_update_budg_for_activation).
++	 */
++	if (bfqd->in_service_queue && bfqq_wants_to_preempt &&
++	    bfqd->in_service_queue->wr_coeff < bfqq->wr_coeff &&
++	    next_queue_may_preempt(bfqd)) {
++		struct bfq_queue *in_serv =
++			bfqd->in_service_queue;
++		BUG_ON(in_serv == bfqq);
++
++		bfq_bfqq_expire(bfqd, bfqd->in_service_queue,
++				false, BFQ_BFQQ_PREEMPTED);
++		BUG_ON(in_serv->entity.budget < 0);
++	}
+ }
+ 
+ static void bfq_add_request(struct request *rq)
+ {
+ 	struct bfq_queue *bfqq = RQ_BFQQ(rq);
+-	struct bfq_entity *entity = &bfqq->entity;
+ 	struct bfq_data *bfqd = bfqq->bfqd;
+ 	struct request *next_rq, *prev;
+-	unsigned long old_wr_coeff = bfqq->wr_coeff;
++	unsigned int old_wr_coeff = bfqq->wr_coeff;
+ 	bool interactive = false;
+ 
+-	bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq));
++	bfq_log_bfqq(bfqd, bfqq, "add_request: size %u %s",
++		     blk_rq_sectors(rq), rq_is_sync(rq) ? "S" : "A");
++
++	if (bfqq->wr_coeff > 1) /* queue is being weight-raised */
++		bfq_log_bfqq(bfqd, bfqq,
++			"raising period dur %u/%u msec, old coeff %u, w %d(%d)",
++			jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
++			jiffies_to_msecs(bfqq->wr_cur_max_time),
++			bfqq->wr_coeff,
++			bfqq->entity.weight, bfqq->entity.orig_weight);
++
+ 	bfqq->queued[rq_is_sync(rq)]++;
+ 	bfqd->queued++;
+ 
+ 	elv_rb_add(&bfqq->sort_list, rq);
+ 
+ 	/*
+-	 * Check if this request is a better next-serve candidate.
++	 * Check if this request is a better next-to-serve candidate.
+ 	 */
+ 	prev = bfqq->next_rq;
+ 	next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position);
+@@ -886,160 +1391,10 @@ static void bfq_add_request(struct request *rq)
+ 	if (prev != bfqq->next_rq)
+ 		bfq_pos_tree_add_move(bfqd, bfqq);
+ 
+-	if (!bfq_bfqq_busy(bfqq)) {
+-		bool soft_rt, coop_or_in_burst,
+-		     idle_for_long_time = time_is_before_jiffies(
+-						bfqq->budget_timeout +
+-						bfqd->bfq_wr_min_idle_time);
+-
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+-		bfqg_stats_update_io_add(bfqq_group(RQ_BFQQ(rq)), bfqq,
+-					 rq->cmd_flags);
+-#endif
+-		if (bfq_bfqq_sync(bfqq)) {
+-			bool already_in_burst =
+-			   !hlist_unhashed(&bfqq->burst_list_node) ||
+-			   bfq_bfqq_in_large_burst(bfqq);
+-			bfq_handle_burst(bfqd, bfqq, idle_for_long_time);
+-			/*
+-			 * If bfqq was not already in the current burst,
+-			 * then, at this point, bfqq either has been
+-			 * added to the current burst or has caused the
+-			 * current burst to terminate. In particular, in
+-			 * the second case, bfqq has become the first
+-			 * queue in a possible new burst.
+-			 * In both cases last_ins_in_burst needs to be
+-			 * moved forward.
+-			 */
+-			if (!already_in_burst)
+-				bfqd->last_ins_in_burst = jiffies;
+-		}
+-
+-		coop_or_in_burst = bfq_bfqq_in_large_burst(bfqq) ||
+-			bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh;
+-		soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
+-			!coop_or_in_burst &&
+-			time_is_before_jiffies(bfqq->soft_rt_next_start);
+-		interactive = !coop_or_in_burst && idle_for_long_time;
+-		entity->budget = max_t(unsigned long, bfqq->max_budget,
+-				       bfq_serv_to_charge(next_rq, bfqq));
+-
+-		if (!bfq_bfqq_IO_bound(bfqq)) {
+-			if (time_before(jiffies,
+-					RQ_BIC(rq)->ttime.last_end_request +
+-					bfqd->bfq_slice_idle)) {
+-				bfqq->requests_within_timer++;
+-				if (bfqq->requests_within_timer >=
+-				    bfqd->bfq_requests_within_timer)
+-					bfq_mark_bfqq_IO_bound(bfqq);
+-			} else
+-				bfqq->requests_within_timer = 0;
+-		}
+-
+-		if (!bfqd->low_latency)
+-			goto add_bfqq_busy;
+-
+-		if (bfq_bfqq_just_split(bfqq))
+-			goto set_prio_changed;
+-
+-		/*
+-		 * If the queue:
+-		 * - is not being boosted,
+-		 * - has been idle for enough time,
+-		 * - is not a sync queue or is linked to a bfq_io_cq (it is
+-		 *   shared "for its nature" or it is not shared and its
+-		 *   requests have not been redirected to a shared queue)
+-		 * start a weight-raising period.
+-		 */
+-		if (old_wr_coeff == 1 && (interactive || soft_rt) &&
+-		    (!bfq_bfqq_sync(bfqq) || bfqq->bic)) {
+-			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
+-			if (interactive)
+-				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+-			else
+-				bfqq->wr_cur_max_time =
+-					bfqd->bfq_wr_rt_max_time;
+-			bfq_log_bfqq(bfqd, bfqq,
+-				     "wrais starting at %lu, rais_max_time %u",
+-				     jiffies,
+-				     jiffies_to_msecs(bfqq->wr_cur_max_time));
+-		} else if (old_wr_coeff > 1) {
+-			if (interactive)
+-				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+-			else if (coop_or_in_burst ||
+-				 (bfqq->wr_cur_max_time ==
+-				  bfqd->bfq_wr_rt_max_time &&
+-				  !soft_rt)) {
+-				bfqq->wr_coeff = 1;
+-				bfq_log_bfqq(bfqd, bfqq,
+-					"wrais ending at %lu, rais_max_time %u",
+-					jiffies,
+-					jiffies_to_msecs(bfqq->
+-						wr_cur_max_time));
+-			} else if (time_before(
+-					bfqq->last_wr_start_finish +
+-					bfqq->wr_cur_max_time,
+-					jiffies +
+-					bfqd->bfq_wr_rt_max_time) &&
+-				   soft_rt) {
+-				/*
+-				 *
+-				 * The remaining weight-raising time is lower
+-				 * than bfqd->bfq_wr_rt_max_time, which means
+-				 * that the application is enjoying weight
+-				 * raising either because deemed soft-rt in
+-				 * the near past, or because deemed interactive
+-				 * a long ago.
+-				 * In both cases, resetting now the current
+-				 * remaining weight-raising time for the
+-				 * application to the weight-raising duration
+-				 * for soft rt applications would not cause any
+-				 * latency increase for the application (as the
+-				 * new duration would be higher than the
+-				 * remaining time).
+-				 *
+-				 * In addition, the application is now meeting
+-				 * the requirements for being deemed soft rt.
+-				 * In the end we can correctly and safely
+-				 * (re)charge the weight-raising duration for
+-				 * the application with the weight-raising
+-				 * duration for soft rt applications.
+-				 *
+-				 * In particular, doing this recharge now, i.e.,
+-				 * before the weight-raising period for the
+-				 * application finishes, reduces the probability
+-				 * of the following negative scenario:
+-				 * 1) the weight of a soft rt application is
+-				 *    raised at startup (as for any newly
+-				 *    created application),
+-				 * 2) since the application is not interactive,
+-				 *    at a certain time weight-raising is
+-				 *    stopped for the application,
+-				 * 3) at that time the application happens to
+-				 *    still have pending requests, and hence
+-				 *    is destined to not have a chance to be
+-				 *    deemed soft rt before these requests are
+-				 *    completed (see the comments to the
+-				 *    function bfq_bfqq_softrt_next_start()
+-				 *    for details on soft rt detection),
+-				 * 4) these pending requests experience a high
+-				 *    latency because the application is not
+-				 *    weight-raised while they are pending.
+-				 */
+-				bfqq->last_wr_start_finish = jiffies;
+-				bfqq->wr_cur_max_time =
+-					bfqd->bfq_wr_rt_max_time;
+-			}
+-		}
+-set_prio_changed:
+-		if (old_wr_coeff != bfqq->wr_coeff)
+-			entity->prio_changed = 1;
+-add_bfqq_busy:
+-		bfqq->last_idle_bklogged = jiffies;
+-		bfqq->service_from_backlogged = 0;
+-		bfq_clear_bfqq_softrt_update(bfqq);
+-		bfq_add_bfqq_busy(bfqd, bfqq);
+-	} else {
++	if (!bfq_bfqq_busy(bfqq)) /* switching to busy ... */
++		bfq_bfqq_handle_idle_busy_switch(bfqd, bfqq, old_wr_coeff,
++						 rq, &interactive);
++	else {
+ 		if (bfqd->low_latency && old_wr_coeff == 1 && !rq_is_sync(rq) &&
+ 		    time_is_before_jiffies(
+ 				bfqq->last_wr_start_finish +
+@@ -1048,16 +1403,43 @@ add_bfqq_busy:
+ 			bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+ 
+ 			bfqd->wr_busy_queues++;
+-			entity->prio_changed = 1;
++			bfqq->entity.prio_changed = 1;
+ 			bfq_log_bfqq(bfqd, bfqq,
+-			    "non-idle wrais starting at %lu, rais_max_time %u",
+-			    jiffies,
+-			    jiffies_to_msecs(bfqq->wr_cur_max_time));
++				     "non-idle wrais starting, "
++				     "wr_max_time %u wr_busy %d",
++				     jiffies_to_msecs(bfqq->wr_cur_max_time),
++				     bfqd->wr_busy_queues);
+ 		}
+ 		if (prev != bfqq->next_rq)
+ 			bfq_updated_next_req(bfqd, bfqq);
+ 	}
+ 
++	/*
++	 * Assign jiffies to last_wr_start_finish in the following
++	 * cases:
++	 *
++	 * . if bfqq is not going to be weight-raised, because, for
++	 *   non weight-raised queues, last_wr_start_finish stores the
++	 *   arrival time of the last request; as of now, this piece
++	 *   of information is used only for deciding whether to
++	 *   weight-raise async queues
++	 *
++	 * . if bfqq is not weight-raised, because, if bfqq is now
++	 *   switching to weight-raised, then last_wr_start_finish
++	 *   stores the time when weight-raising starts
++	 *
++	 * . if bfqq is interactive, because, regardless of whether
++	 *   bfqq is currently weight-raised, the weight-raising
++	 *   period must start or restart (this case is considered
++	 *   separately because it is not detected by the above
++	 *   conditions, if bfqq is already weight-raised)
++	 *
++	 * last_wr_start_finish has to be updated also if bfqq is soft
++	 * real-time, because the weight-raising period is constantly
++	 * restarted on idle-to-busy transitions for these queues, but
++	 * this is already done in bfq_bfqq_handle_idle_busy_switch if
++	 * needed.
++	 */
+ 	if (bfqd->low_latency &&
+ 		(old_wr_coeff == 1 || bfqq->wr_coeff == 1 || interactive))
+ 		bfqq->last_wr_start_finish = jiffies;
+@@ -1081,14 +1463,24 @@ static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd,
+ 	return NULL;
+ }
+ 
++static sector_t get_sdist(sector_t last_pos, struct request *rq)
++{
++	sector_t sdist = 0;
++
++	if (last_pos) {
++		if (last_pos < blk_rq_pos(rq))
++			sdist = blk_rq_pos(rq) - last_pos;
++		else
++			sdist = last_pos - blk_rq_pos(rq);
++	}
++
++	return sdist;
++}
++
+ static void bfq_activate_request(struct request_queue *q, struct request *rq)
+ {
+ 	struct bfq_data *bfqd = q->elevator->elevator_data;
+-
+ 	bfqd->rq_in_driver++;
+-	bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
+-	bfq_log(bfqd, "activate_request: new bfqd->last_position %llu",
+-		(unsigned long long) bfqd->last_position);
+ }
+ 
+ static void bfq_deactivate_request(struct request_queue *q, struct request *rq)
+@@ -1105,6 +1497,9 @@ static void bfq_remove_request(struct request *rq)
+ 	struct bfq_data *bfqd = bfqq->bfqd;
+ 	const int sync = rq_is_sync(rq);
+ 
++	BUG_ON(bfqq->entity.service > bfqq->entity.budget &&
++	       bfqq == bfqd->in_service_queue);
++
+ 	if (bfqq->next_rq == rq) {
+ 		bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq);
+ 		bfq_updated_next_req(bfqd, bfqq);
+@@ -1118,8 +1513,25 @@ static void bfq_remove_request(struct request *rq)
+ 	elv_rb_del(&bfqq->sort_list, rq);
+ 
+ 	if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
+-		if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue)
++		BUG_ON(bfqq->entity.budget < 0);
++
++		if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue) {
+ 			bfq_del_bfqq_busy(bfqd, bfqq, 1);
++
++			/* bfqq emptied. In normal operation, when
++			 * bfqq is empty, bfqq->entity.service and
++			 * bfqq->entity.budget must contain,
++			 * respectively, the service received and the
++			 * budget used last time bfqq emptied. These
++			 * facts do not hold in this case, as at least
++			 * this last removal occurred while bfqq is
++			 * not in service. To avoid inconsistencies,
++			 * reset both bfqq->entity.service and
++			 * bfqq->entity.budget.
++			 */
++			bfqq->entity.budget = bfqq->entity.service = 0;
++		}
++
+ 		/*
+ 		 * Remove queue from request-position tree as it is empty.
+ 		 */
+@@ -1133,9 +1545,8 @@ static void bfq_remove_request(struct request *rq)
+ 		BUG_ON(bfqq->meta_pending == 0);
+ 		bfqq->meta_pending--;
+ 	}
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+-	bfqg_stats_update_io_remove(bfqq_group(bfqq), rq->cmd_flags);
+-#endif
++	bfqg_stats_update_io_remove(bfqq_group(bfqq), req_op(rq),
++				    rq->cmd_flags);
+ }
+ 
+ static int bfq_merge(struct request_queue *q, struct request **req,
+@@ -1145,7 +1556,7 @@ static int bfq_merge(struct request_queue *q, struct request **req,
+ 	struct request *__rq;
+ 
+ 	__rq = bfq_find_rq_fmerge(bfqd, bio);
+-	if (__rq && elv_rq_merge_ok(__rq, bio)) {
++	if (__rq && elv_bio_merge_ok(__rq, bio)) {
+ 		*req = __rq;
+ 		return ELEVATOR_FRONT_MERGE;
+ 	}
+@@ -1190,7 +1601,8 @@ static void bfq_merged_request(struct request_queue *q, struct request *req,
+ static void bfq_bio_merged(struct request_queue *q, struct request *req,
+ 			   struct bio *bio)
+ {
+-	bfqg_stats_update_io_merged(bfqq_group(RQ_BFQQ(req)), bio->bi_rw);
++	bfqg_stats_update_io_merged(bfqq_group(RQ_BFQQ(req)), bio_op(bio),
++				    bio->bi_opf);
+ }
+ #endif
+ 
+@@ -1210,7 +1622,7 @@ static void bfq_merged_requests(struct request_queue *q, struct request *rq,
+ 	 */
+ 	if (bfqq == next_bfqq &&
+ 	    !list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
+-	    time_before(next->fifo_time, rq->fifo_time)) {
++	    next->fifo_time < rq->fifo_time) {
+ 		list_del_init(&rq->queuelist);
+ 		list_replace_init(&next->queuelist, &rq->queuelist);
+ 		rq->fifo_time = next->fifo_time;
+@@ -1220,21 +1632,31 @@ static void bfq_merged_requests(struct request_queue *q, struct request *rq,
+ 		bfqq->next_rq = rq;
+ 
+ 	bfq_remove_request(next);
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+-	bfqg_stats_update_io_merged(bfqq_group(bfqq), next->cmd_flags);
+-#endif
++	bfqg_stats_update_io_merged(bfqq_group(bfqq), req_op(next),
++				    next->cmd_flags);
+ }
+ 
+ /* Must be called with bfqq != NULL */
+ static void bfq_bfqq_end_wr(struct bfq_queue *bfqq)
+ {
+ 	BUG_ON(!bfqq);
++
+ 	if (bfq_bfqq_busy(bfqq))
+ 		bfqq->bfqd->wr_busy_queues--;
+ 	bfqq->wr_coeff = 1;
+ 	bfqq->wr_cur_max_time = 0;
+-	/* Trigger a weight change on the next activation of the queue */
++	bfqq->last_wr_start_finish = jiffies;
++	/*
++	 * Trigger a weight change on the next invocation of
++	 * __bfq_entity_update_weight_prio.
++	 */
+ 	bfqq->entity.prio_changed = 1;
++	bfq_log_bfqq(bfqq->bfqd, bfqq,
++		     "end_wr: wrais ending at %lu, rais_max_time %u",
++		     bfqq->last_wr_start_finish,
++		     jiffies_to_msecs(bfqq->wr_cur_max_time));
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "end_wr: wr_busy %d",
++		     bfqq->bfqd->wr_busy_queues);
+ }
+ 
+ static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
+@@ -1277,7 +1699,7 @@ static int bfq_rq_close_to_sector(void *io_struct, bool request,
+ 				  sector_t sector)
+ {
+ 	return abs(bfq_io_struct_pos(io_struct, request) - sector) <=
+-	       BFQQ_SEEK_THR;
++	       BFQQ_CLOSE_THR;
+ }
+ 
+ static struct bfq_queue *bfqq_find_close(struct bfq_data *bfqd,
+@@ -1399,7 +1821,7 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ 	 * throughput.
+ 	 */
+ 	bfqq->new_bfqq = new_bfqq;
+-	atomic_add(process_refs, &new_bfqq->ref);
++	new_bfqq->ref += process_refs;
+ 	return new_bfqq;
+ }
+ 
+@@ -1430,9 +1852,23 @@ static bool bfq_may_be_close_cooperator(struct bfq_queue *bfqq,
+ }
+ 
+ /*
+- * Attempt to schedule a merge of bfqq with the currently in-service queue
+- * or with a close queue among the scheduled queues.
+- * Return NULL if no merge was scheduled, a pointer to the shared bfq_queue
++ * If this function returns true, then bfqq cannot be merged. The idea
++ * is that true cooperation happens very early after processes start
++ * to do I/O. Usually, late cooperations are just accidental false
++ * positives. In case bfqq is weight-raised, such false positives
++ * would evidently degrade latency guarantees for bfqq.
++ */
++bool wr_from_too_long(struct bfq_queue *bfqq)
++{
++	return bfqq->wr_coeff > 1 &&
++		time_is_before_jiffies(bfqq->last_wr_start_finish +
++				       msecs_to_jiffies(100));
++}
++
++/*
++ * Attempt to schedule a merge of bfqq with the currently in-service
++ * queue or with a close queue among the scheduled queues.  Return
++ * NULL if no merge was scheduled, a pointer to the shared bfq_queue
+  * structure otherwise.
+  *
+  * The OOM queue is not allowed to participate to cooperation: in fact, since
+@@ -1441,6 +1877,18 @@ static bool bfq_may_be_close_cooperator(struct bfq_queue *bfqq,
+  * handle merging with the OOM queue would be quite complex and expensive
+  * to maintain. Besides, in such a critical condition as an out of memory,
+  * the benefits of queue merging may be little relevant, or even negligible.
++ *
++ * Weight-raised queues can be merged only if their weight-raising
++ * period has just started. In fact cooperating processes are usually
++ * started together. Thus, with this filter we avoid false positives
++ * that would jeopardize low-latency guarantees.
++ *
++ * WARNING: queue merging may impair fairness among non-weight raised
++ * queues, for at least two reasons: 1) the original weight of a
++ * merged queue may change during the merged state, 2) even if the
++ * weight remains the same, a merged queue may be bloated with many more
++ * requests than the ones produced by its originally-associated
++ * process.
+  */
+ static struct bfq_queue *
+ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+@@ -1450,16 +1898,32 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 
+ 	if (bfqq->new_bfqq)
+ 		return bfqq->new_bfqq;
+-	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
++
++	if (io_struct && wr_from_too_long(bfqq) &&
++	    likely(bfqq != &bfqd->oom_bfqq))
++		bfq_log_bfqq(bfqd, bfqq,
++			     "would have looked for coop, but bfq%d wr",
++			bfqq->pid);
++
++	if (!io_struct ||
++	    wr_from_too_long(bfqq) ||
++	    unlikely(bfqq == &bfqd->oom_bfqq))
+ 		return NULL;
+-	/* If device has only one backlogged bfq_queue, don't search. */
++
++	/* If there is only one backlogged queue, don't search. */
+ 	if (bfqd->busy_queues == 1)
+ 		return NULL;
+ 
+ 	in_service_bfqq = bfqd->in_service_queue;
+ 
++	if (in_service_bfqq && in_service_bfqq != bfqq &&
++	    bfqd->in_service_bic && wr_from_too_long(in_service_bfqq)
++	    && likely(in_service_bfqq != &bfqd->oom_bfqq))
++		bfq_log_bfqq(bfqd, bfqq,
++		"would have tried merge with in-service-queue, but wr");
++
+ 	if (!in_service_bfqq || in_service_bfqq == bfqq ||
+-	    !bfqd->in_service_bic ||
++	    !bfqd->in_service_bic || wr_from_too_long(in_service_bfqq) ||
+ 	    unlikely(in_service_bfqq == &bfqd->oom_bfqq))
+ 		goto check_scheduled;
+ 
+@@ -1481,7 +1945,15 @@ check_scheduled:
+ 
+ 	BUG_ON(new_bfqq && bfqq->entity.parent != new_bfqq->entity.parent);
+ 
+-	if (new_bfqq && likely(new_bfqq != &bfqd->oom_bfqq) &&
++	if (new_bfqq && wr_from_too_long(new_bfqq) &&
++	    likely(new_bfqq != &bfqd->oom_bfqq) &&
++	    bfq_may_be_close_cooperator(bfqq, new_bfqq))
++		bfq_log_bfqq(bfqd, bfqq,
++			     "would have merged with bfq%d, but wr",
++			     new_bfqq->pid);
++
++	if (new_bfqq && !wr_from_too_long(new_bfqq) &&
++	    likely(new_bfqq != &bfqd->oom_bfqq) &&
+ 	    bfq_may_be_close_cooperator(bfqq, new_bfqq))
+ 		return bfq_setup_merge(bfqq, new_bfqq);
+ 
+@@ -1490,53 +1962,24 @@ check_scheduled:
+ 
+ static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
+ {
++	struct bfq_io_cq *bic = bfqq->bic;
++
+ 	/*
+ 	 * If !bfqq->bic, the queue is already shared or its requests
+ 	 * have already been redirected to a shared queue; both idle window
+ 	 * and weight raising state have already been saved. Do nothing.
+ 	 */
+-	if (!bfqq->bic)
++	if (!bic)
+ 		return;
+-	if (bfqq->bic->wr_time_left)
+-		/*
+-		 * This is the queue of a just-started process, and would
+-		 * deserve weight raising: we set wr_time_left to the full
+-		 * weight-raising duration to trigger weight-raising when
+-		 * and if the queue is split and the first request of the
+-		 * queue is enqueued.
+-		 */
+-		bfqq->bic->wr_time_left = bfq_wr_duration(bfqq->bfqd);
+-	else if (bfqq->wr_coeff > 1) {
+-		unsigned long wr_duration =
+-			jiffies - bfqq->last_wr_start_finish;
+-		/*
+-		 * It may happen that a queue's weight raising period lasts
+-		 * longer than its wr_cur_max_time, as weight raising is
+-		 * handled only when a request is enqueued or dispatched (it
+-		 * does not use any timer). If the weight raising period is
+-		 * about to end, don't save it.
+-		 */
+-		if (bfqq->wr_cur_max_time <= wr_duration)
+-			bfqq->bic->wr_time_left = 0;
+-		else
+-			bfqq->bic->wr_time_left =
+-				bfqq->wr_cur_max_time - wr_duration;
+-		/*
+-		 * The bfq_queue is becoming shared or the requests of the
+-		 * process owning the queue are being redirected to a shared
+-		 * queue. Stop the weight raising period of the queue, as in
+-		 * both cases it should not be owned by an interactive or
+-		 * soft real-time application.
+-		 */
+-		bfq_bfqq_end_wr(bfqq);
+-	} else
+-		bfqq->bic->wr_time_left = 0;
+-	bfqq->bic->saved_idle_window = bfq_bfqq_idle_window(bfqq);
+-	bfqq->bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
+-	bfqq->bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
+-	bfqq->bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
+-	bfqq->bic->cooperations++;
+-	bfqq->bic->failed_cooperations = 0;
++
++	bic->saved_idle_window = bfq_bfqq_idle_window(bfqq);
++	bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
++	bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
++	bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
++	bic->saved_wr_coeff = bfqq->wr_coeff;
++	bic->saved_wr_start_at_switch_to_srt = bfqq->wr_start_at_switch_to_srt;
++	bic->saved_last_wr_start_finish = bfqq->last_wr_start_finish;
++	BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish));
+ }
+ 
+ static void bfq_get_bic_reference(struct bfq_queue *bfqq)
+@@ -1561,6 +2004,40 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+ 	if (bfq_bfqq_IO_bound(bfqq))
+ 		bfq_mark_bfqq_IO_bound(new_bfqq);
+ 	bfq_clear_bfqq_IO_bound(bfqq);
++
++	/*
++	 * If bfqq is weight-raised, then let new_bfqq inherit
++	 * weight-raising. To reduce false positives, neglect the case
++	 * where bfqq has just been created, but has not yet made it
++	 * to be weight-raised (which may happen because EQM may merge
++	 * bfqq even before bfq_add_request is executed for the first
++	 * time for bfqq). Handling this case would however be very
++	 * easy, thanks to the flag just_created.
++	 */
++	if (new_bfqq->wr_coeff == 1 && bfqq->wr_coeff > 1) {
++		new_bfqq->wr_coeff = bfqq->wr_coeff;
++		new_bfqq->wr_cur_max_time = bfqq->wr_cur_max_time;
++		new_bfqq->last_wr_start_finish = bfqq->last_wr_start_finish;
++		new_bfqq->wr_start_at_switch_to_srt = bfqq->wr_start_at_switch_to_srt;
++		if (bfq_bfqq_busy(new_bfqq))
++			bfqd->wr_busy_queues++;
++		new_bfqq->entity.prio_changed = 1;
++		bfq_log_bfqq(bfqd, new_bfqq,
++			     "wr start after merge with %d, rais_max_time %u",
++			     bfqq->pid,
++			     jiffies_to_msecs(bfqq->wr_cur_max_time));
++	}
++
++	if (bfqq->wr_coeff > 1) { /* bfqq has given its wr to new_bfqq */
++		bfqq->wr_coeff = 1;
++		bfqq->entity.prio_changed = 1;
++		if (bfq_bfqq_busy(bfqq))
++			bfqd->wr_busy_queues--;
++	}
++
++	bfq_log_bfqq(bfqd, new_bfqq, "merge_bfqqs: wr_busy %d",
++		     bfqd->wr_busy_queues);
++
+ 	/*
+ 	 * Grab a reference to the bic, to prevent it from being destroyed
+ 	 * before being possibly touched by a bfq_split_bfqq().
+@@ -1587,20 +2064,8 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+ 	bfq_put_queue(bfqq);
+ }
+ 
+-static void bfq_bfqq_increase_failed_cooperations(struct bfq_queue *bfqq)
+-{
+-	struct bfq_io_cq *bic = bfqq->bic;
+-	struct bfq_data *bfqd = bfqq->bfqd;
+-
+-	if (bic && bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh) {
+-		bic->failed_cooperations++;
+-		if (bic->failed_cooperations >= bfqd->bfq_failed_cooperations)
+-			bic->cooperations = 0;
+-	}
+-}
+-
+-static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+-			   struct bio *bio)
++static int bfq_allow_bio_merge(struct request_queue *q, struct request *rq,
++			       struct bio *bio)
+ {
+ 	struct bfq_data *bfqd = q->elevator->elevator_data;
+ 	struct bfq_io_cq *bic;
+@@ -1610,7 +2075,7 @@ static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+ 	 * Disallow merge of a sync bio into an async request.
+ 	 */
+ 	if (bfq_bio_sync(bio) && !rq_is_sync(rq))
+-		return 0;
++		return false;
+ 
+ 	/*
+ 	 * Lookup the bfqq that this bio will be queued with. Allow
+@@ -1619,7 +2084,7 @@ static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+ 	 */
+ 	bic = bfq_bic_lookup(bfqd, current->io_context);
+ 	if (!bic)
+-		return 0;
++		return false;
+ 
+ 	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
+ 	/*
+@@ -1636,30 +2101,107 @@ static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+ 			 * to decide whether bio and rq can be merged.
+ 			 */
+ 			bfqq = new_bfqq;
+-		} else
+-			bfq_bfqq_increase_failed_cooperations(bfqq);
++		}
+ 	}
+ 
+ 	return bfqq == RQ_BFQQ(rq);
+ }
+ 
++static int bfq_allow_rq_merge(struct request_queue *q, struct request *rq,
++			      struct request *next)
++{
++	return RQ_BFQQ(rq) == RQ_BFQQ(next);
++}
++
++/*
++ * Set the maximum time for the in-service queue to consume its
++ * budget. This prevents seeky processes from lowering the throughput.
++ * In practice, a time-slice service scheme is used with seeky
++ * processes.
++ */
++static void bfq_set_budget_timeout(struct bfq_data *bfqd,
++				   struct bfq_queue *bfqq)
++{
++	unsigned int timeout_coeff;
++
++	if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
++		timeout_coeff = 1;
++	else
++		timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
++
++	bfqd->last_budget_start = ktime_get();
++
++	bfqq->budget_timeout = jiffies +
++		bfqd->bfq_timeout * timeout_coeff;
++
++	bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u",
++		jiffies_to_msecs(bfqd->bfq_timeout * timeout_coeff));
++}
++
+ static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
+ 				       struct bfq_queue *bfqq)
+ {
+ 	if (bfqq) {
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 		bfqg_stats_update_avg_queue_size(bfqq_group(bfqq));
+-#endif
+ 		bfq_mark_bfqq_must_alloc(bfqq);
+-		bfq_mark_bfqq_budget_new(bfqq);
+ 		bfq_clear_bfqq_fifo_expire(bfqq);
+ 
+ 		bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
+ 
++		BUG_ON(bfqq == bfqd->in_service_queue);
++		BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
++
++		if (time_is_before_jiffies(bfqq->last_wr_start_finish) &&
++		    bfqq->wr_coeff > 1 &&
++		    bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time &&
++		    time_is_before_jiffies(bfqq->budget_timeout)) {
++			/*
++			 * For soft real-time queues, move the start
++			 * of the weight-raising period forward by the
++			 * time the queue has not received any
++			 * service. Otherwise, a relatively long
++			 * service delay is likely to cause the
++			 * weight-raising period of the queue to end,
++			 * because of the short duration of the
++			 * weight-raising period of a soft real-time
++			 * queue.  It is worth noting that this move
++			 * is not so dangerous for the other queues,
++			 * because soft real-time queues are not
++			 * greedy.
++			 *
++			 * To not add a further variable, we use the
++			 * overloaded field budget_timeout to
++			 * determine for how long the queue has not
++			 * received service, i.e., how much time has
++			 * elapsed since the queue expired. However,
++			 * this is a little imprecise, because
++			 * budget_timeout is set to jiffies if bfqq
++			 * not only expires, but also remains with no
++			 * request.
++			 */
++			bfqq->last_wr_start_finish += jiffies -
++				max_t(unsigned long, bfqq->last_wr_start_finish,
++				      bfqq->budget_timeout);
++			if (time_is_after_jiffies(bfqq->last_wr_start_finish)) {
++			       pr_crit(
++			       "BFQ WARNING:last %lu budget %lu jiffies %lu",
++			       bfqq->last_wr_start_finish,
++			       bfqq->budget_timeout,
++			       jiffies);
++			       pr_crit("diff %lu", jiffies -
++				       max_t(unsigned long,
++					     bfqq->last_wr_start_finish,
++					     bfqq->budget_timeout));
++			       bfqq->last_wr_start_finish = jiffies;
++			}
++		}
++
++		bfq_set_budget_timeout(bfqd, bfqq);
+ 		bfq_log_bfqq(bfqd, bfqq,
+ 			     "set_in_service_queue, cur-budget = %d",
+ 			     bfqq->entity.budget);
+-	}
++	} else
++		bfq_log(bfqd, "set_in_service_queue: NULL");
+ 
+ 	bfqd->in_service_queue = bfqq;
+ }
+@@ -1675,36 +2217,11 @@ static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd)
+ 	return bfqq;
+ }
+ 
+-/*
+- * If enough samples have been computed, return the current max budget
+- * stored in bfqd, which is dynamically updated according to the
+- * estimated disk peak rate; otherwise return the default max budget
+- */
+-static int bfq_max_budget(struct bfq_data *bfqd)
+-{
+-	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
+-		return bfq_default_max_budget;
+-	else
+-		return bfqd->bfq_max_budget;
+-}
+-
+-/*
+- * Return min budget, which is a fraction of the current or default
+- * max budget (trying with 1/32)
+- */
+-static int bfq_min_budget(struct bfq_data *bfqd)
+-{
+-	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
+-		return bfq_default_max_budget / 32;
+-	else
+-		return bfqd->bfq_max_budget / 32;
+-}
+-
+ static void bfq_arm_slice_timer(struct bfq_data *bfqd)
+ {
+ 	struct bfq_queue *bfqq = bfqd->in_service_queue;
+ 	struct bfq_io_cq *bic;
+-	unsigned long sl;
++	u32 sl;
+ 
+ 	BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
+ 
+@@ -1728,59 +2245,343 @@ static void bfq_arm_slice_timer(struct bfq_data *bfqd)
+ 	sl = bfqd->bfq_slice_idle;
+ 	/*
+ 	 * Unless the queue is being weight-raised or the scenario is
+-	 * asymmetric, grant only minimum idle time if the queue either
+-	 * has been seeky for long enough or has already proved to be
+-	 * constantly seeky.
++	 * asymmetric, grant only minimum idle time if the queue
++	 * is seeky. A long idling is preserved for a weight-raised
++	 * queue, or, more generally, in an asymmetric scenario,
++	 * because a long idling is needed for guaranteeing to a queue
++	 * its reserved share of the throughput (in particular, it is
++	 * needed if the queue has a higher weight than some other
++	 * queue).
+ 	 */
+-	if (bfq_sample_valid(bfqq->seek_samples) &&
+-	    ((BFQQ_SEEKY(bfqq) && bfqq->entity.service >
+-				  bfq_max_budget(bfqq->bfqd) / 8) ||
+-	      bfq_bfqq_constantly_seeky(bfqq)) && bfqq->wr_coeff == 1 &&
++	if (BFQQ_SEEKY(bfqq) && bfqq->wr_coeff == 1 &&
+ 	    bfq_symmetric_scenario(bfqd))
+-		sl = min(sl, msecs_to_jiffies(BFQ_MIN_TT));
+-	else if (bfqq->wr_coeff > 1)
+-		sl = sl * 3;
++		sl = min_t(u32, sl, BFQ_MIN_TT);
++
+ 	bfqd->last_idling_start = ktime_get();
+-	mod_timer(&bfqd->idle_slice_timer, jiffies + sl);
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	hrtimer_start(&bfqd->idle_slice_timer, ns_to_ktime(sl),
++		      HRTIMER_MODE_REL);
+ 	bfqg_stats_set_start_idle_time(bfqq_group(bfqq));
+-#endif
+-	bfq_log(bfqd, "arm idle: %u/%u ms",
+-		jiffies_to_msecs(sl), jiffies_to_msecs(bfqd->bfq_slice_idle));
++	bfq_log(bfqd, "arm idle: %ld/%ld ms",
++		sl / NSEC_PER_MSEC, bfqd->bfq_slice_idle / NSEC_PER_MSEC);
+ }
+ 
+ /*
+- * Set the maximum time for the in-service queue to consume its
+- * budget. This prevents seeky processes from lowering the disk
+- * throughput (always guaranteed with a time slice scheme as in CFQ).
++ * In autotuning mode, max_budget is dynamically recomputed as the
++ * amount of sectors transferred in timeout at the estimated peak
++ * rate. This enables BFQ to utilize a full timeslice with a full
++ * budget, even if the in-service queue is served at peak rate. And
++ * this maximises throughput with sequential workloads.
+  */
+-static void bfq_set_budget_timeout(struct bfq_data *bfqd)
++static unsigned long bfq_calc_max_budget(struct bfq_data *bfqd)
+ {
+-	struct bfq_queue *bfqq = bfqd->in_service_queue;
+-	unsigned int timeout_coeff;
++	return (u64)bfqd->peak_rate * USEC_PER_MSEC *
++		jiffies_to_msecs(bfqd->bfq_timeout)>>BFQ_RATE_SHIFT;
++}
+ 
+-	if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
+-		timeout_coeff = 1;
++/*
++ * Update parameters related to throughput and responsiveness, as a
++ * function of the estimated peak rate. See comments on
++ * bfq_calc_max_budget(), and on T_slow and T_fast arrays.
++ */
++void update_thr_responsiveness_params(struct bfq_data *bfqd)
++{
++	int dev_type = blk_queue_nonrot(bfqd->queue);
++
++	if (bfqd->bfq_user_max_budget == 0) {
++		bfqd->bfq_max_budget =
++			bfq_calc_max_budget(bfqd);
++		BUG_ON(bfqd->bfq_max_budget < 0);
++		bfq_log(bfqd, "new max_budget = %d",
++			bfqd->bfq_max_budget);
++	}
++
++	if (bfqd->device_speed == BFQ_BFQD_FAST &&
++	    bfqd->peak_rate < device_speed_thresh[dev_type]) {
++		bfqd->device_speed = BFQ_BFQD_SLOW;
++		bfqd->RT_prod = R_slow[dev_type] *
++			T_slow[dev_type];
++	} else if (bfqd->device_speed == BFQ_BFQD_SLOW &&
++		   bfqd->peak_rate > device_speed_thresh[dev_type]) {
++		bfqd->device_speed = BFQ_BFQD_FAST;
++		bfqd->RT_prod = R_fast[dev_type] *
++			T_fast[dev_type];
++	}
++
++	bfq_log(bfqd,
++"dev_type %s dev_speed_class = %s (%llu sects/sec), thresh %llu setcs/sec",
++		dev_type == 0 ? "ROT" : "NONROT",
++		bfqd->device_speed == BFQ_BFQD_FAST ? "FAST" : "SLOW",
++		bfqd->device_speed == BFQ_BFQD_FAST ?
++		(USEC_PER_SEC*(u64)R_fast[dev_type])>>BFQ_RATE_SHIFT :
++		(USEC_PER_SEC*(u64)R_slow[dev_type])>>BFQ_RATE_SHIFT,
++		(USEC_PER_SEC*(u64)device_speed_thresh[dev_type])>>
++		BFQ_RATE_SHIFT);
++}
++
++void bfq_reset_rate_computation(struct bfq_data *bfqd, struct request *rq)
++{
++	if (rq != NULL) { /* new rq dispatch now, reset accordingly */
++		bfqd->last_dispatch = bfqd->first_dispatch = ktime_get_ns();
++		bfqd->peak_rate_samples = 1;
++		bfqd->sequential_samples = 0;
++		bfqd->tot_sectors_dispatched = bfqd->last_rq_max_size =
++			blk_rq_sectors(rq);
++	} else /* no new rq dispatched, just reset the number of samples */
++		bfqd->peak_rate_samples = 0; /* full re-init on next disp. */
++
++	bfq_log(bfqd,
++		"reset_rate_computation at end, sample %u/%u tot_sects %llu",
++		bfqd->peak_rate_samples, bfqd->sequential_samples,
++		bfqd->tot_sectors_dispatched);
++}
++
++void bfq_update_rate_reset(struct bfq_data *bfqd, struct request *rq)
++{
++	u32 rate, weight, divisor;
++
++	/*
++	 * For the convergence property to hold (see comments on
++	 * bfq_update_peak_rate()) and for the assessment to be
++	 * reliable, a minimum number of samples must be present, and
++	 * a minimum amount of time must have elapsed. If not so, do
++	 * not compute new rate. Just reset parameters, to get ready
++	 * for a new evaluation attempt.
++	 */
++	if (bfqd->peak_rate_samples < BFQ_RATE_MIN_SAMPLES ||
++	    bfqd->delta_from_first < BFQ_RATE_MIN_INTERVAL) {
++		bfq_log(bfqd,
++	"update_rate_reset: only resetting, delta_first %lluus samples %d",
++			bfqd->delta_from_first>>10, bfqd->peak_rate_samples);
++		goto reset_computation;
++	}
++
++	/*
++	 * If a new request completion has occurred after last
++	 * dispatch, then, to approximate the rate at which requests
++	 * have been served by the device, it is more precise to
++	 * extend the observation interval to the last completion.
++	 */
++	bfqd->delta_from_first =
++		max_t(u64, bfqd->delta_from_first,
++		      bfqd->last_completion - bfqd->first_dispatch);
++
++	BUG_ON(bfqd->delta_from_first == 0);
++	/*
++	 * Rate computed in sects/usec, and not sects/nsec, for
++	 * precision issues.
++	 */
++	rate = div64_ul(bfqd->tot_sectors_dispatched<<BFQ_RATE_SHIFT,
++			div_u64(bfqd->delta_from_first, NSEC_PER_USEC));
++
++	bfq_log(bfqd,
++"update_rate_reset: tot_sects %llu delta_first %lluus rate %llu sects/s (%d)",
++		bfqd->tot_sectors_dispatched, bfqd->delta_from_first>>10,
++		((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
++		rate > 20<<BFQ_RATE_SHIFT);
++
++	/*
++	 * Peak rate not updated if:
++	 * - the percentage of sequential dispatches is below 3/4 of the
++	 *   total, and rate is below the current estimated peak rate
++	 * - rate is unreasonably high (> 20M sectors/sec)
++	 */
++	if ((bfqd->peak_rate_samples > (3 * bfqd->sequential_samples)>>2 &&
++	     rate <= bfqd->peak_rate) ||
++		rate > 20<<BFQ_RATE_SHIFT) {
++		bfq_log(bfqd,
++		"update_rate_reset: goto reset, samples %u/%u rate/peak %llu/%llu",
++		bfqd->peak_rate_samples, bfqd->sequential_samples,
++		((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
++		((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
++		goto reset_computation;
++	} else {
++		bfq_log(bfqd,
++		"update_rate_reset: do update, samples %u/%u rate/peak %llu/%llu",
++		bfqd->peak_rate_samples, bfqd->sequential_samples,
++		((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
++		((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
++	}
++
++	/*
++	 * We have to update the peak rate, at last! To this purpose,
++	 * we use a low-pass filter. We compute the smoothing constant
++	 * of the filter as a function of the 'weight' of the new
++	 * measured rate.
++	 *
++	 * As can be seen in next formulas, we define this weight as a
++	 * quantity proportional to how sequential the workload is,
++	 * and to how long the observation time interval is.
++	 *
++	 * The weight runs from 0 to 8. The maximum value of the
++	 * weight, 8, yields the minimum value for the smoothing
++	 * constant. At this minimum value for the smoothing constant,
++	 * the measured rate contributes for half of the next value of
++	 * the estimated peak rate.
++	 *
++	 * So, the first step is to compute the weight as a function
++	 * of how sequential the workload is. Note that the weight
++	 * cannot reach 9, because bfqd->sequential_samples cannot
++	 * become equal to bfqd->peak_rate_samples, which, in its
++	 * turn, holds true because bfqd->sequential_samples is not
++	 * incremented for the first sample.
++	 */
++	weight = (9 * bfqd->sequential_samples) / bfqd->peak_rate_samples;
++
++	/*
++	 * Second step: further refine the weight as a function of the
++	 * duration of the observation interval.
++	 */
++	weight = min_t(u32, 8,
++		       div_u64(weight * bfqd->delta_from_first,
++			       BFQ_RATE_REF_INTERVAL));
++
++	/*
++	 * Divisor ranging from 10, for minimum weight, to 2, for
++	 * maximum weight.
++	 */
++	divisor = 10 - weight;
++	BUG_ON(divisor == 0);
++
++	/*
++	 * Finally, update peak rate:
++	 *
++	 * peak_rate = peak_rate * (divisor-1) / divisor  +  rate / divisor
++	 */
++	bfqd->peak_rate *= divisor-1;
++	bfqd->peak_rate /= divisor;
++	rate /= divisor; /* smoothing constant alpha = 1/divisor */
++
++	bfq_log(bfqd,
++		"update_rate_reset: divisor %d tmp_peak_rate %llu tmp_rate %u",
++		divisor,
++		((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT),
++		(u32)((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT));
++
++	BUG_ON(bfqd->peak_rate == 0);
++	BUG_ON(bfqd->peak_rate > 20<<BFQ_RATE_SHIFT);
++
++	bfqd->peak_rate += rate;
++	update_thr_responsiveness_params(bfqd);
++	BUG_ON(bfqd->peak_rate > 20<<BFQ_RATE_SHIFT);
++
++reset_computation:
++	bfq_reset_rate_computation(bfqd, rq);
++}
++
++/*
++ * Update the read/write peak rate (the main quantity used for
++ * auto-tuning, see update_thr_responsiveness_params()).
++ *
++ * It is not trivial to estimate the peak rate (correctly): because of
++ * the presence of sw and hw queues between the scheduler and the
++ * device components that finally serve I/O requests, it is hard to
++ * say exactly when a given dispatched request is served inside the
++ * device, and for how long. As a consequence, it is hard to know
++ * precisely at what rate a given set of requests is actually served
++ * by the device.
++ *
++ * On the opposite end, the dispatch time of any request is trivially
++ * available, and, from this piece of information, the "dispatch rate"
++ * of requests can be immediately computed. So, the idea in the next
++ * function is to use what is known, namely request dispatch times
++ * (plus, when useful, request completion times), to estimate what is
++ * unknown, namely in-device request service rate.
++ *
++ * The main issue is that, because of the above facts, the rate at
++ * which a certain set of requests is dispatched over a certain time
++ * interval can vary greatly with respect to the rate at which the
++ * same requests are then served. But, since the size of any
++ * intermediate queue is limited, and the service scheme is lossless
++ * (no request is silently dropped), the following obvious convergence
++ * property holds: the number of requests dispatched MUST become
++ * closer and closer to the number of requests completed as the
++ * observation interval grows. This is the key property used in
++ * the next function to estimate the peak service rate as a function
++ * of the observed dispatch rate. The function assumes to be invoked
++ * on every request dispatch.
++ */
++void bfq_update_peak_rate(struct bfq_data *bfqd, struct request *rq)
++{
++	u64 now_ns = ktime_get_ns();
++
++	if (bfqd->peak_rate_samples == 0) { /* first dispatch */
++		bfq_log(bfqd,
++		"update_peak_rate: goto reset, samples %d",
++				bfqd->peak_rate_samples);
++		bfq_reset_rate_computation(bfqd, rq);
++		goto update_last_values; /* will add one sample */
++	}
++
++	/*
++	 * Device idle for very long: the observation interval lasting
++	 * up to this dispatch cannot be a valid observation interval
++	 * for computing a new peak rate (similarly to the late-
++	 * completion event in bfq_completed_request()). Go to
++	 * update_rate_and_reset to have the following three steps
++	 * taken:
++	 * - close the observation interval at the last (previous)
++	 *   request dispatch or completion
++	 * - compute rate, if possible, for that observation interval
++	 * - start a new observation interval with this dispatch
++	 */
++	if (now_ns - bfqd->last_dispatch > 100*NSEC_PER_MSEC &&
++	    bfqd->rq_in_driver == 0) {
++		bfq_log(bfqd,
++"update_peak_rate: jumping to updating&resetting delta_last %lluus samples %d",
++			(now_ns - bfqd->last_dispatch)>>10,
++			bfqd->peak_rate_samples);
++		goto update_rate_and_reset;
++	}
++
++	/* Update sampling information */
++	bfqd->peak_rate_samples++;
++
++	if ((bfqd->rq_in_driver > 0 ||
++		now_ns - bfqd->last_completion < BFQ_MIN_TT)
++	     && get_sdist(bfqd->last_position, rq) < BFQQ_SEEK_THR)
++		bfqd->sequential_samples++;
++
++	bfqd->tot_sectors_dispatched += blk_rq_sectors(rq);
++
++	/* Reset max observed rq size every 32 dispatches */
++	if (likely(bfqd->peak_rate_samples % 32))
++		bfqd->last_rq_max_size =
++			max_t(u32, blk_rq_sectors(rq), bfqd->last_rq_max_size);
+ 	else
+-		timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
++		bfqd->last_rq_max_size = blk_rq_sectors(rq);
+ 
+-	bfqd->last_budget_start = ktime_get();
++	bfqd->delta_from_first = now_ns - bfqd->first_dispatch;
+ 
+-	bfq_clear_bfqq_budget_new(bfqq);
+-	bfqq->budget_timeout = jiffies +
+-		bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] * timeout_coeff;
++	bfq_log(bfqd,
++	"update_peak_rate: added samples %u/%u tot_sects %llu delta_first %lluus",
++		bfqd->peak_rate_samples, bfqd->sequential_samples,
++		bfqd->tot_sectors_dispatched,
++		bfqd->delta_from_first>>10);
+ 
+-	bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u",
+-		jiffies_to_msecs(bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] *
+-		timeout_coeff));
++	/* Target observation interval not yet reached, go on sampling */
++	if (bfqd->delta_from_first < BFQ_RATE_REF_INTERVAL)
++		goto update_last_values;
++
++update_rate_and_reset:
++	bfq_update_rate_reset(bfqd, rq);
++update_last_values:
++	bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
++	bfqd->last_dispatch = now_ns;
++
++	bfq_log(bfqd,
++	"update_peak_rate: delta_first %lluus last_pos %llu peak_rate %llu",
++		(now_ns - bfqd->first_dispatch)>>10,
++		(unsigned long long) bfqd->last_position,
++		((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
++	bfq_log(bfqd,
++	"update_peak_rate: samples at end %d", bfqd->peak_rate_samples);
+ }
+ 
+ /*
+- * Move request from internal lists to the request queue dispatch list.
++ * Move request from internal lists to the dispatch list of the request queue
+  */
+ static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
+ {
+-	struct bfq_data *bfqd = q->elevator->elevator_data;
+ 	struct bfq_queue *bfqq = RQ_BFQQ(rq);
+ 
+ 	/*
+@@ -1794,15 +2595,10 @@ static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
+ 	 * incrementing bfqq->dispatched.
+ 	 */
+ 	bfqq->dispatched++;
++	bfq_update_peak_rate(q->elevator->elevator_data, rq);
++
+ 	bfq_remove_request(rq);
+ 	elv_dispatch_sort(q, rq);
+-
+-	if (bfq_bfqq_sync(bfqq))
+-		bfqd->sync_flight++;
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+-	bfqg_stats_update_dispatch(bfqq_group(bfqq), blk_rq_bytes(rq),
+-				   rq->cmd_flags);
+-#endif
+ }
+ 
+ /*
+@@ -1822,19 +2618,12 @@ static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
+ 
+ 	rq = rq_entry_fifo(bfqq->fifo.next);
+ 
+-	if (time_before(jiffies, rq->fifo_time))
++	if (ktime_get_ns() < rq->fifo_time)
+ 		return NULL;
+ 
+ 	return rq;
+ }
+ 
+-static int bfq_bfqq_budget_left(struct bfq_queue *bfqq)
+-{
+-	struct bfq_entity *entity = &bfqq->entity;
+-
+-	return entity->budget - entity->service;
+-}
+-
+ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+ 	BUG_ON(bfqq != bfqd->in_service_queue);
+@@ -1851,12 +2640,15 @@ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 		bfq_mark_bfqq_split_coop(bfqq);
+ 
+ 	if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
+-		/*
+-		 * Overloading budget_timeout field to store the time
+-		 * at which the queue remains with no backlog; used by
+-		 * the weight-raising mechanism.
+-		 */
+-		bfqq->budget_timeout = jiffies;
++		if (bfqq->dispatched == 0)
++			/*
++			 * Overloading budget_timeout field to store
++			 * the time at which the queue remains with no
++			 * backlog and no outstanding request; used by
++			 * the weight-raising mechanism.
++			 */
++			bfqq->budget_timeout = jiffies;
++
+ 		bfq_del_bfqq_busy(bfqd, bfqq, 1);
+ 	} else {
+ 		bfq_activate_bfqq(bfqd, bfqq);
+@@ -1883,10 +2675,19 @@ static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
+ 	struct request *next_rq;
+ 	int budget, min_budget;
+ 
+-	budget = bfqq->max_budget;
++	BUG_ON(bfqq != bfqd->in_service_queue);
++
+ 	min_budget = bfq_min_budget(bfqd);
+ 
+-	BUG_ON(bfqq != bfqd->in_service_queue);
++	if (bfqq->wr_coeff == 1)
++		budget = bfqq->max_budget;
++	else /*
++	      * Use a constant, low budget for weight-raised queues,
++	      * to help achieve a low latency. Keep it slightly higher
++	      * than the minimum possible budget, to cause a little
++	      * bit fewer expirations.
++	      */
++		budget = 2 * min_budget;
+ 
+ 	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %d, budg left %d",
+ 		bfqq->entity.budget, bfq_bfqq_budget_left(bfqq));
+@@ -1895,7 +2696,7 @@ static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
+ 	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d",
+ 		bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->in_service_queue));
+ 
+-	if (bfq_bfqq_sync(bfqq)) {
++	if (bfq_bfqq_sync(bfqq) && bfqq->wr_coeff == 1) {
+ 		switch (reason) {
+ 		/*
+ 		 * Caveat: in all the following cases we trade latency
+@@ -1937,14 +2738,10 @@ static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
+ 			break;
+ 		case BFQ_BFQQ_BUDGET_TIMEOUT:
+ 			/*
+-			 * We double the budget here because: 1) it
+-			 * gives the chance to boost the throughput if
+-			 * this is not a seeky process (which may have
+-			 * bumped into this timeout because of, e.g.,
+-			 * ZBR), 2) together with charge_full_budget
+-			 * it helps give seeky processes higher
+-			 * timestamps, and hence be served less
+-			 * frequently.
++			 * We double the budget here because it gives
++			 * the chance to boost the throughput if this
++			 * is not a seeky process (and has bumped into
++			 * this timeout because of, e.g., ZBR).
+ 			 */
+ 			budget = min(budget * 2, bfqd->bfq_max_budget);
+ 			break;
+@@ -1961,17 +2758,49 @@ static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
+ 			budget = min(budget * 4, bfqd->bfq_max_budget);
+ 			break;
+ 		case BFQ_BFQQ_NO_MORE_REQUESTS:
+-		       /*
+-			* Leave the budget unchanged.
+-			*/
++			/*
++			 * For queues that expire for this reason, it
++			 * is particularly important to keep the
++			 * budget close to the actual service they
++			 * need. Doing so reduces the timestamp
++			 * misalignment problem described in the
++			 * comments in the body of
++			 * __bfq_activate_entity. In fact, suppose
++			 * that a queue systematically expires for
++			 * BFQ_BFQQ_NO_MORE_REQUESTS and presents a
++			 * new request in time to enjoy timestamp
++			 * back-shifting. The larger the budget of the
++			 * queue is with respect to the service the
++			 * queue actually requests in each service
++			 * slot, the more times the queue can be
++			 * reactivated with the same virtual finish
++			 * time. It follows that, even if this finish
++			 * time is pushed to the system virtual time
++			 * to reduce the consequent timestamp
++			 * misalignment, the queue unjustly enjoys for
++			 * many re-activations a lower finish time
++			 * than all newly activated queues.
++			 *
++			 * The service needed by bfqq is measured
++			 * quite precisely by bfqq->entity.service.
++			 * Since bfqq does not enjoy device idling,
++			 * bfqq->entity.service is equal to the number
++			 * of sectors that the process associated with
++			 * bfqq requested to read/write before waiting
++			 * for request completions, or blocking for
++			 * other reasons.
++			 */
++			budget = max_t(int, bfqq->entity.service, min_budget);
++			break;
+ 		default:
+ 			return;
+ 		}
+-	} else
++	} else if (!bfq_bfqq_sync(bfqq))
+ 		/*
+-		 * Async queues get always the maximum possible budget
+-		 * (their ability to dispatch is limited by
+-		 * @bfqd->bfq_max_budget_async_rq).
++		 * Async queues get always the maximum possible
++		 * budget, as for them we do not care about latency
++		 * (in addition, their ability to dispatch is limited
++		 * by the charging factor).
+ 		 */
+ 		budget = bfqd->bfq_max_budget;
+ 
+@@ -1982,160 +2811,120 @@ static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
+ 		bfqq->max_budget = min(bfqq->max_budget, bfqd->bfq_max_budget);
+ 
+ 	/*
+-	 * Make sure that we have enough budget for the next request.
+-	 * Since the finish time of the bfqq must be kept in sync with
+-	 * the budget, be sure to call __bfq_bfqq_expire() after the
++	 * If there is still backlog, then assign a new budget, making
++	 * sure that it is large enough for the next request.  Since
++	 * the finish time of bfqq must be kept in sync with the
++	 * budget, be sure to call __bfq_bfqq_expire() *after* this
+ 	 * update.
++	 *
++	 * If there is no backlog, then no need to update the budget;
++	 * it will be updated on the arrival of a new request.
+ 	 */
+ 	next_rq = bfqq->next_rq;
+-	if (next_rq)
++	if (next_rq) {
++		BUG_ON(reason == BFQ_BFQQ_TOO_IDLE ||
++		       reason == BFQ_BFQQ_NO_MORE_REQUESTS);
+ 		bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget,
+ 					    bfq_serv_to_charge(next_rq, bfqq));
+-	else
+-		bfqq->entity.budget = bfqq->max_budget;
++		BUG_ON(!bfq_bfqq_busy(bfqq));
++		BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
++	}
+ 
+ 	bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %d",
+ 			next_rq ? blk_rq_sectors(next_rq) : 0,
+ 			bfqq->entity.budget);
+ }
+ 
+-static unsigned long bfq_calc_max_budget(u64 peak_rate, u64 timeout)
+-{
+-	unsigned long max_budget;
+-
+-	/*
+-	 * The max_budget calculated when autotuning is equal to the
+-	 * amount of sectors transfered in timeout_sync at the
+-	 * estimated peak rate.
+-	 */
+-	max_budget = (unsigned long)(peak_rate * 1000 *
+-				     timeout >> BFQ_RATE_SHIFT);
+-
+-	return max_budget;
+-}
+-
+ /*
+- * In addition to updating the peak rate, checks whether the process
+- * is "slow", and returns 1 if so. This slow flag is used, in addition
+- * to the budget timeout, to reduce the amount of service provided to
+- * seeky processes, and hence reduce their chances to lower the
+- * throughput. See the code for more details.
++ * Return true if the process associated with bfqq is "slow". The slow
++ * flag is used, in addition to the budget timeout, to reduce the
++ * amount of service provided to seeky processes, and thus reduce
++ * their chances to lower the throughput. More details in the comments
++ * on the function bfq_bfqq_expire().
++ *
++ * An important observation is in order: as discussed in the comments
++ * on the function bfq_update_peak_rate(), with devices with internal
++ * queues, it is hard if ever possible to know when and for how long
++ * an I/O request is processed by the device (apart from the trivial
++ * I/O pattern where a new request is dispatched only after the
++ * previous one has been completed). This makes it hard to evaluate
++ * the real rate at which the I/O requests of each bfq_queue are
++ * served.  In fact, for an I/O scheduler like BFQ, serving a
++ * bfq_queue means just dispatching its requests during its service
++ * slot (i.e., until the budget of the queue is exhausted, or the
++ * queue remains idle, or, finally, a timeout fires). But, during the
++ * service slot of a bfq_queue, around 100 ms at most, the device may
++ * be even still processing requests of bfq_queues served in previous
++ * service slots. On the opposite end, the requests of the in-service
++ * bfq_queue may be completed after the service slot of the queue
++ * finishes.
++ *
++ * Anyway, unless more sophisticated solutions are used
++ * (where possible), the sum of the sizes of the requests dispatched
++ * during the service slot of a bfq_queue is probably the only
++ * approximation available for the service received by the bfq_queue
++ * during its service slot. And this sum is the quantity used in this
++ * function to evaluate the I/O speed of a process.
+  */
+-static bool bfq_update_peak_rate(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+-				 bool compensate, enum bfqq_expiration reason)
++static bool bfq_bfqq_is_slow(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++				 bool compensate, enum bfqq_expiration reason,
++				 unsigned long *delta_ms)
+ {
+-	u64 bw, usecs, expected, timeout;
+-	ktime_t delta;
+-	int update = 0;
++	ktime_t delta_ktime;
++	u32 delta_usecs;
++	bool slow = BFQQ_SEEKY(bfqq); /* if delta too short, use seekyness */
+ 
+-	if (!bfq_bfqq_sync(bfqq) || bfq_bfqq_budget_new(bfqq))
++	if (!bfq_bfqq_sync(bfqq))
+ 		return false;
+ 
+ 	if (compensate)
+-		delta = bfqd->last_idling_start;
+-	else
+-		delta = ktime_get();
+-	delta = ktime_sub(delta, bfqd->last_budget_start);
+-	usecs = ktime_to_us(delta);
+-
+-	/* Don't trust short/unrealistic values. */
+-	if (usecs < 100 || usecs >= LONG_MAX)
+-		return false;
+-
+-	/*
+-	 * Calculate the bandwidth for the last slice.  We use a 64 bit
+-	 * value to store the peak rate, in sectors per usec in fixed
+-	 * point math.  We do so to have enough precision in the estimate
+-	 * and to avoid overflows.
+-	 */
+-	bw = (u64)bfqq->entity.service << BFQ_RATE_SHIFT;
+-	do_div(bw, (unsigned long)usecs);
+-
+-	timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
+-
+-	/*
+-	 * Use only long (> 20ms) intervals to filter out spikes for
+-	 * the peak rate estimation.
+-	 */
+-	if (usecs > 20000) {
+-		if (bw > bfqd->peak_rate ||
+-		   (!BFQQ_SEEKY(bfqq) &&
+-		    reason == BFQ_BFQQ_BUDGET_TIMEOUT)) {
+-			bfq_log(bfqd, "measured bw =%llu", bw);
+-			/*
+-			 * To smooth oscillations use a low-pass filter with
+-			 * alpha=7/8, i.e.,
+-			 * new_rate = (7/8) * old_rate + (1/8) * bw
+-			 */
+-			do_div(bw, 8);
+-			if (bw == 0)
+-				return 0;
+-			bfqd->peak_rate *= 7;
+-			do_div(bfqd->peak_rate, 8);
+-			bfqd->peak_rate += bw;
+-			update = 1;
+-			bfq_log(bfqd, "new peak_rate=%llu", bfqd->peak_rate);
+-		}
+-
+-		update |= bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES - 1;
+-
+-		if (bfqd->peak_rate_samples < BFQ_PEAK_RATE_SAMPLES)
+-			bfqd->peak_rate_samples++;
+-
+-		if (bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES &&
+-		    update) {
+-			int dev_type = blk_queue_nonrot(bfqd->queue);
+-
+-			if (bfqd->bfq_user_max_budget == 0) {
+-				bfqd->bfq_max_budget =
+-					bfq_calc_max_budget(bfqd->peak_rate,
+-							    timeout);
+-				bfq_log(bfqd, "new max_budget=%d",
+-					bfqd->bfq_max_budget);
+-			}
+-			if (bfqd->device_speed == BFQ_BFQD_FAST &&
+-			    bfqd->peak_rate < device_speed_thresh[dev_type]) {
+-				bfqd->device_speed = BFQ_BFQD_SLOW;
+-				bfqd->RT_prod = R_slow[dev_type] *
+-						T_slow[dev_type];
+-			} else if (bfqd->device_speed == BFQ_BFQD_SLOW &&
+-			    bfqd->peak_rate > device_speed_thresh[dev_type]) {
+-				bfqd->device_speed = BFQ_BFQD_FAST;
+-				bfqd->RT_prod = R_fast[dev_type] *
+-						T_fast[dev_type];
+-			}
+-		}
++		delta_ktime = bfqd->last_idling_start;
++	else
++		delta_ktime = ktime_get();
++	delta_ktime = ktime_sub(delta_ktime, bfqd->last_budget_start);
++	delta_usecs = ktime_to_us(delta_ktime);
++
++	/* don't trust short/unrealistic values. */
++	if (delta_usecs < 1000 || delta_usecs >= LONG_MAX) {
++		if (blk_queue_nonrot(bfqd->queue))
++			 /*
++			  * give same worst-case guarantees as idling
++			  * for seeky
++			  */
++			*delta_ms = BFQ_MIN_TT / NSEC_PER_MSEC;
++		else /* charge at least one seek */
++			*delta_ms = bfq_slice_idle / NSEC_PER_MSEC;
++
++		bfq_log(bfqd, "bfq_bfqq_is_slow: unrealistic %u", delta_usecs);
++
++		return slow;
+ 	}
+ 
+-	/*
+-	 * If the process has been served for a too short time
+-	 * interval to let its possible sequential accesses prevail on
+-	 * the initial seek time needed to move the disk head on the
+-	 * first sector it requested, then give the process a chance
+-	 * and for the moment return false.
+-	 */
+-	if (bfqq->entity.budget <= bfq_max_budget(bfqd) / 8)
+-		return false;
++	*delta_ms = delta_usecs / USEC_PER_MSEC;
+ 
+ 	/*
+-	 * A process is considered ``slow'' (i.e., seeky, so that we
+-	 * cannot treat it fairly in the service domain, as it would
+-	 * slow down too much the other processes) if, when a slice
+-	 * ends for whatever reason, it has received service at a
+-	 * rate that would not be high enough to complete the budget
+-	 * before the budget timeout expiration.
++	 * Use only long (> 20ms) intervals to filter out excessive
++	 * spikes in service rate estimation.
+ 	 */
+-	expected = bw * 1000 * timeout >> BFQ_RATE_SHIFT;
++	if (delta_usecs > 20000) {
++		/*
++		 * Caveat for rotational devices: processes doing I/O
++		 * in the slower disk zones tend to be slow(er) even
++		 * if not seeky. In this respect, the estimated peak
++		 * rate is likely to be an average over the disk
++		 * surface. Accordingly, to not be too harsh with
++		 * unlucky processes, a process is deemed slow only if
++		 * its rate has been lower than half of the estimated
++		 * peak rate.
++		 */
++		slow = bfqq->entity.service < bfqd->bfq_max_budget / 2;
++		bfq_log(bfqd, "bfq_bfqq_is_slow: relative rate %d/%d",
++			bfqq->entity.service, bfqd->bfq_max_budget);
++	}
+ 
+-	/*
+-	 * Caveat: processes doing IO in the slower disk zones will
+-	 * tend to be slow(er) even if not seeky. And the estimated
+-	 * peak rate will actually be an average over the disk
+-	 * surface. Hence, to not be too harsh with unlucky processes,
+-	 * we keep a budget/3 margin of safety before declaring a
+-	 * process slow.
+-	 */
+-	return expected > (4 * bfqq->entity.budget) / 3;
++	bfq_log_bfqq(bfqd, bfqq, "bfq_bfqq_is_slow: slow %d", slow);
++
++	return slow;
+ }
+ 
+ /*
+@@ -2193,20 +2982,35 @@ static bool bfq_update_peak_rate(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ static unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd,
+ 						struct bfq_queue *bfqq)
+ {
++	bfq_log_bfqq(bfqd, bfqq,
++"softrt_next_start: service_blkg %lu soft_rate %u sects/sec interval %u",
++		     bfqq->service_from_backlogged,
++		     bfqd->bfq_wr_max_softrt_rate,
++		     jiffies_to_msecs(HZ * bfqq->service_from_backlogged /
++				      bfqd->bfq_wr_max_softrt_rate));
++
+ 	return max(bfqq->last_idle_bklogged +
+ 		   HZ * bfqq->service_from_backlogged /
+ 		   bfqd->bfq_wr_max_softrt_rate,
+-		   jiffies + bfqq->bfqd->bfq_slice_idle + 4);
++		   jiffies + nsecs_to_jiffies(bfqq->bfqd->bfq_slice_idle) + 4);
++}
++
++/*
++ * Return the farthest future time instant according to jiffies
++ * macros.
++ */
++static unsigned long bfq_greatest_from_now(void)
++{
++	return jiffies + MAX_JIFFY_OFFSET;
+ }
+ 
+ /*
+- * Return the largest-possible time instant such that, for as long as possible,
+- * the current time will be lower than this time instant according to the macro
+- * time_is_before_jiffies().
++ * Return the farthest past time instant according to jiffies
++ * macros.
+  */
+-static unsigned long bfq_infinity_from_now(unsigned long now)
++static unsigned long bfq_smallest_from_now(void)
+ {
+-	return now + ULONG_MAX / 2;
++	return jiffies - MAX_JIFFY_OFFSET;
+ }
+ 
+ /**
+@@ -2216,28 +3020,24 @@ static unsigned long bfq_infinity_from_now(unsigned long now)
+  * @compensate: if true, compensate for the time spent idling.
+  * @reason: the reason causing the expiration.
+  *
++ * If the process associated with bfqq does slow I/O (e.g., because it
++ * issues random requests), we charge bfqq with the time it has been
++ * in service instead of the service it has received (see
++ * bfq_bfqq_charge_time for details on how this goal is achieved). As
++ * a consequence, bfqq will typically get higher timestamps upon
++ * reactivation, and hence it will be rescheduled as if it had
++ * received more service than what it has actually received. In the
++ * end, bfqq receives less service in proportion to how slowly its
++ * associated process consumes its budgets (and hence how seriously it
++ * tends to lower the throughput). In addition, this time-charging
++ * strategy guarantees time fairness among slow processes. In
++ * contrast, if the process associated with bfqq is not slow, we
++ * charge bfqq exactly with the service it has received.
+  *
+- * If the process associated to the queue is slow (i.e., seeky), or in
+- * case of budget timeout, or, finally, if it is async, we
+- * artificially charge it an entire budget (independently of the
+- * actual service it received). As a consequence, the queue will get
+- * higher timestamps than the correct ones upon reactivation, and
+- * hence it will be rescheduled as if it had received more service
+- * than what it actually received. In the end, this class of processes
+- * will receive less service in proportion to how slowly they consume
+- * their budgets (and hence how seriously they tend to lower the
+- * throughput).
+- *
+- * In contrast, when a queue expires because it has been idling for
+- * too much or because it exhausted its budget, we do not touch the
+- * amount of service it has received. Hence when the queue will be
+- * reactivated and its timestamps updated, the latter will be in sync
+- * with the actual service received by the queue until expiration.
+- *
+- * Charging a full budget to the first type of queues and the exact
+- * service to the others has the effect of using the WF2Q+ policy to
+- * schedule the former on a timeslice basis, without violating the
+- * service domain guarantees of the latter.
++ * Charging time to the first type of queues and the exact service to
++ * the other has the effect of using the WF2Q+ policy to schedule the
++ * former on a timeslice basis, without violating service domain
++ * guarantees among the latter.
+  */
+ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ 			    struct bfq_queue *bfqq,
+@@ -2245,41 +3045,52 @@ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ 			    enum bfqq_expiration reason)
+ {
+ 	bool slow;
++	unsigned long delta = 0;
++	struct bfq_entity *entity = &bfqq->entity;
+ 
+ 	BUG_ON(bfqq != bfqd->in_service_queue);
+ 
+ 	/*
+-	 * Update disk peak rate for autotuning and check whether the
+-	 * process is slow (see bfq_update_peak_rate).
++	 * Check whether the process is slow (see bfq_bfqq_is_slow).
+ 	 */
+-	slow = bfq_update_peak_rate(bfqd, bfqq, compensate, reason);
++	slow = bfq_bfqq_is_slow(bfqd, bfqq, compensate, reason, &delta);
+ 
+ 	/*
+-	 * As above explained, 'punish' slow (i.e., seeky), timed-out
+-	 * and async queues, to favor sequential sync workloads.
+-	 *
+-	 * Processes doing I/O in the slower disk zones will tend to be
+-	 * slow(er) even if not seeky. Hence, since the estimated peak
+-	 * rate is actually an average over the disk surface, these
+-	 * processes may timeout just for bad luck. To avoid punishing
+-	 * them we do not charge a full budget to a process that
+-	 * succeeded in consuming at least 2/3 of its budget.
++	 * Increase service_from_backlogged before next statement,
++	 * because the possible next invocation of
++	 * bfq_bfqq_charge_time would likely inflate
++	 * entity->service. In contrast, service_from_backlogged must
++	 * contain real service, to enable the soft real-time
++	 * heuristic to correctly compute the bandwidth consumed by
++	 * bfqq.
+ 	 */
+-	if (slow || (reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
+-		     bfq_bfqq_budget_left(bfqq) >=  bfqq->entity.budget / 3))
+-		bfq_bfqq_charge_full_budget(bfqq);
++	bfqq->service_from_backlogged += entity->service;
+ 
+-	bfqq->service_from_backlogged += bfqq->entity.service;
++	/*
++	 * As above explained, charge slow (typically seeky) and
++	 * timed-out queues with the time and not the service
++	 * received, to favor sequential workloads.
++	 *
++	 * Processes doing I/O in the slower disk zones will tend to
++	 * be slow(er) even if not seeky. Therefore, since the
++	 * estimated peak rate is actually an average over the disk
++	 * surface, these processes may timeout just for bad luck. To
++	 * avoid punishing them, do not charge time to processes that
++	 * succeeded in consuming at least 2/3 of their budget. This
++	 * allows BFQ to preserve enough elasticity to still perform
++	 * bandwidth, and not time, distribution with little unlucky
++	 * or quasi-sequential processes.
++	 */
++	if (bfqq->wr_coeff == 1 &&
++	    (slow ||
++	     (reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++	      bfq_bfqq_budget_left(bfqq) >=  entity->budget / 3)))
++		bfq_bfqq_charge_time(bfqd, bfqq, delta);
+ 
+-	if (BFQQ_SEEKY(bfqq) && reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
+-	    !bfq_bfqq_constantly_seeky(bfqq)) {
+-		bfq_mark_bfqq_constantly_seeky(bfqq);
+-		if (!blk_queue_nonrot(bfqd->queue))
+-			bfqd->const_seeky_busy_in_flight_queues++;
+-	}
++	BUG_ON(bfqq->entity.budget < bfqq->entity.service);
+ 
+ 	if (reason == BFQ_BFQQ_TOO_IDLE &&
+-	    bfqq->entity.service <= 2 * bfqq->entity.budget / 10)
++	    entity->service <= 2 * entity->budget / 10)
+ 		bfq_clear_bfqq_IO_bound(bfqq);
+ 
+ 	if (bfqd->low_latency && bfqq->wr_coeff == 1)
+@@ -2288,19 +3099,23 @@ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ 	if (bfqd->low_latency && bfqd->bfq_wr_max_softrt_rate > 0 &&
+ 	    RB_EMPTY_ROOT(&bfqq->sort_list)) {
+ 		/*
+-		 * If we get here, and there are no outstanding requests,
+-		 * then the request pattern is isochronous (see the comments
+-		 * to the function bfq_bfqq_softrt_next_start()). Hence we
+-		 * can compute soft_rt_next_start. If, instead, the queue
+-		 * still has outstanding requests, then we have to wait
+-		 * for the completion of all the outstanding requests to
++		 * If we get here, and there are no outstanding
++		 * requests, then the request pattern is isochronous
++		 * (see the comments on the function
++		 * bfq_bfqq_softrt_next_start()). Thus we can compute
++		 * soft_rt_next_start. If, instead, the queue still
++		 * has outstanding requests, then we have to wait for
++		 * the completion of all the outstanding requests to
+ 		 * discover whether the request pattern is actually
+ 		 * isochronous.
+ 		 */
+-		if (bfqq->dispatched == 0)
++		BUG_ON(bfqd->busy_queues < 1);
++		if (bfqq->dispatched == 0) {
+ 			bfqq->soft_rt_next_start =
+ 				bfq_bfqq_softrt_next_start(bfqd, bfqq);
+-		else {
++			bfq_log_bfqq(bfqd, bfqq, "new soft_rt_next %lu",
++				     bfqq->soft_rt_next_start);
++		} else {
+ 			/*
+ 			 * The application is still waiting for the
+ 			 * completion of one or more requests:
+@@ -2317,7 +3132,7 @@ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ 			 *    happened to be in the past.
+ 			 */
+ 			bfqq->soft_rt_next_start =
+-				bfq_infinity_from_now(jiffies);
++				bfq_greatest_from_now();
+ 			/*
+ 			 * Schedule an update of soft_rt_next_start to when
+ 			 * the task may be discovered to be isochronous.
+@@ -2327,15 +3142,27 @@ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ 	}
+ 
+ 	bfq_log_bfqq(bfqd, bfqq,
+-		"expire (%d, slow %d, num_disp %d, idle_win %d)", reason,
+-		slow, bfqq->dispatched, bfq_bfqq_idle_window(bfqq));
++		"expire (%d, slow %d, num_disp %d, idle_win %d, weight %d)",
++		     reason, slow, bfqq->dispatched,
++		     bfq_bfqq_idle_window(bfqq), entity->weight);
+ 
+ 	/*
+ 	 * Increase, decrease or leave budget unchanged according to
+ 	 * reason.
+ 	 */
++	BUG_ON(bfqq->entity.budget < bfqq->entity.service);
+ 	__bfq_bfqq_recalc_budget(bfqd, bfqq, reason);
++	BUG_ON(bfqq->next_rq == NULL &&
++	       bfqq->entity.budget < bfqq->entity.service);
+ 	__bfq_bfqq_expire(bfqd, bfqq);
++
++	BUG_ON(!bfq_bfqq_busy(bfqq) && reason == BFQ_BFQQ_BUDGET_EXHAUSTED &&
++		!bfq_class_idle(bfqq));
++
++	if (!bfq_bfqq_busy(bfqq) &&
++	    reason != BFQ_BFQQ_BUDGET_TIMEOUT &&
++	    reason != BFQ_BFQQ_BUDGET_EXHAUSTED)
++		bfq_mark_bfqq_non_blocking_wait_rq(bfqq);
+ }
+ 
+ /*
+@@ -2345,20 +3172,17 @@ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+  */
+ static bool bfq_bfqq_budget_timeout(struct bfq_queue *bfqq)
+ {
+-	if (bfq_bfqq_budget_new(bfqq) ||
+-	    time_before(jiffies, bfqq->budget_timeout))
+-		return false;
+-	return true;
++	return time_is_before_eq_jiffies(bfqq->budget_timeout);
+ }
+ 
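The simplified bfq_bfqq_budget_timeout() above relies on the wrap-safe jiffies helpers. A small userspace model of the comparison it performs, with signed subtraction handling counter wrap-around, is:

#include <stdio.h>

/* budget timeout has fired iff the stored deadline is not after "now" */
static int budget_timed_out(unsigned long jiffies, unsigned long budget_timeout)
{
        return (long)(jiffies - budget_timeout) >= 0;
}

int main(void)
{
        printf("%d\n", budget_timed_out(1000, 900));               /* 1: deadline passed */
        printf("%d\n", budget_timed_out(1000, 1100));              /* 0: still in the future */
        printf("%d\n", budget_timed_out(10, (unsigned long)-20));  /* 1: correct across wrap */
        return 0;
}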
+ /*
+- * If we expire a queue that is waiting for the arrival of a new
+- * request, we may prevent the fictitious timestamp back-shifting that
+- * allows the guarantees of the queue to be preserved (see [1] for
+- * this tricky aspect). Hence we return true only if this condition
+- * does not hold, or if the queue is slow enough to deserve only to be
+- * kicked off for preserving a high throughput.
+-*/
++ * If we expire a queue that is actively waiting (i.e., with the
++ * device idled) for the arrival of a new request, then we may incur
++ * the timestamp misalignment problem described in the body of the
++ * function __bfq_activate_entity. Hence we return true only if this
++ * condition does not hold, or if the queue is slow enough to deserve
++ * only to be kicked off for preserving a high throughput.
++ */
+ static bool bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq)
+ {
+ 	bfq_log_bfqq(bfqq->bfqd, bfqq,
+@@ -2400,10 +3224,12 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ {
+ 	struct bfq_data *bfqd = bfqq->bfqd;
+ 	bool idling_boosts_thr, idling_boosts_thr_without_issues,
+-		all_queues_seeky, on_hdd_and_not_all_queues_seeky,
+ 		idling_needed_for_service_guarantees,
+ 		asymmetric_scenario;
+ 
++	if (bfqd->strict_guarantees)
++		return true;
++
+ 	/*
+ 	 * The next variable takes into account the cases where idling
+ 	 * boosts the throughput.
+@@ -2466,74 +3292,27 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ 		bfqd->wr_busy_queues == 0;
+ 
+ 	/*
+-	 * There are then two cases where idling must be performed not
++	 * There is then a case where idling must be performed not
+ 	 * for throughput concerns, but to preserve service
+-	 * guarantees. In the description of these cases, we say, for
+-	 * short, that a queue is sequential/random if the process
+-	 * associated to the queue issues sequential/random requests
+-	 * (in the second case the queue may be tagged as seeky or
+-	 * even constantly_seeky).
+-	 *
+-	 * To introduce the first case, we note that, since
+-	 * bfq_bfqq_idle_window(bfqq) is false if the device is
+-	 * NCQ-capable and bfqq is random (see
+-	 * bfq_update_idle_window()), then, from the above two
+-	 * assignments it follows that
+-	 * idling_boosts_thr_without_issues is false if the device is
+-	 * NCQ-capable and bfqq is random. Therefore, for this case,
+-	 * device idling would never be allowed if we used just
+-	 * idling_boosts_thr_without_issues to decide whether to allow
+-	 * it. And, beneficially, this would imply that throughput
+-	 * would always be boosted also with random I/O on NCQ-capable
+-	 * HDDs.
++	 * guarantees.
+ 	 *
+-	 * But we must be careful on this point, to avoid an unfair
+-	 * treatment for bfqq. In fact, because of the same above
+-	 * assignments, idling_boosts_thr_without_issues is, on the
+-	 * other hand, true if 1) the device is an HDD and bfqq is
+-	 * sequential, and 2) there are no busy weight-raised
+-	 * queues. As a consequence, if we used just
+-	 * idling_boosts_thr_without_issues to decide whether to idle
+-	 * the device, then with an HDD we might easily bump into a
+-	 * scenario where queues that are sequential and I/O-bound
+-	 * would enjoy idling, whereas random queues would not. The
+-	 * latter might then get a low share of the device throughput,
+-	 * simply because the former would get many requests served
+-	 * after being set as in service, while the latter would not.
+-	 *
+-	 * To address this issue, we start by setting to true a
+-	 * sentinel variable, on_hdd_and_not_all_queues_seeky, if the
+-	 * device is rotational and not all queues with pending or
+-	 * in-flight requests are constantly seeky (i.e., there are
+-	 * active sequential queues, and bfqq might then be mistreated
+-	 * if it does not enjoy idling because it is random).
+-	 */
+-	all_queues_seeky = bfq_bfqq_constantly_seeky(bfqq) &&
+-			   bfqd->busy_in_flight_queues ==
+-			   bfqd->const_seeky_busy_in_flight_queues;
+-
+-	on_hdd_and_not_all_queues_seeky =
+-		!blk_queue_nonrot(bfqd->queue) && !all_queues_seeky;
+-
+-	/*
+-	 * To introduce the second case where idling needs to be
+-	 * performed to preserve service guarantees, we can note that
+-	 * allowing the drive to enqueue more than one request at a
+-	 * time, and hence delegating de facto final scheduling
+-	 * decisions to the drive's internal scheduler, causes loss of
+-	 * control on the actual request service order. In particular,
+-	 * the critical situation is when requests from different
+-	 * processes happens to be present, at the same time, in the
+-	 * internal queue(s) of the drive. In such a situation, the
+-	 * drive, by deciding the service order of the
+-	 * internally-queued requests, does determine also the actual
+-	 * throughput distribution among these processes. But the
+-	 * drive typically has no notion or concern about per-process
+-	 * throughput distribution, and makes its decisions only on a
+-	 * per-request basis. Therefore, the service distribution
+-	 * enforced by the drive's internal scheduler is likely to
+-	 * coincide with the desired device-throughput distribution
+-	 * only in a completely symmetric scenario where:
++	 * To introduce this case, we can note that allowing the drive
++	 * to enqueue more than one request at a time, and hence
++	 * delegating de facto final scheduling decisions to the
++	 * drive's internal scheduler, entails loss of control on the
++	 * actual request service order. In particular, the critical
++	 * situation is when requests from different processes happen
++	 * to be present, at the same time, in the internal queue(s)
++	 * of the drive. In such a situation, the drive, by deciding
++	 * the service order of the internally-queued requests, does
++	 * determine also the actual throughput distribution among
++	 * these processes. But the drive typically has no notion or
++	 * concern about per-process throughput distribution, and
++	 * makes its decisions only on a per-request basis. Therefore,
++	 * the service distribution enforced by the drive's internal
++	 * scheduler is likely to coincide with the desired
++	 * device-throughput distribution only in a completely
++	 * symmetric scenario where:
+ 	 * (i)  each of these processes must get the same throughput as
+ 	 *      the others;
+ 	 * (ii) all these processes have the same I/O pattern
+@@ -2555,26 +3334,53 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ 	 * words, only if sub-condition (i) holds, then idling is
+ 	 * allowed, and the device tends to be prevented from queueing
+ 	 * many requests, possibly of several processes. The reason
+-	 * for not controlling also sub-condition (ii) is that, first,
+-	 * in the case of an HDD, the asymmetry in terms of types of
+-	 * I/O patterns is already taken in to account in the above
+-	 * sentinel variable
+-	 * on_hdd_and_not_all_queues_seeky. Secondly, in the case of a
+-	 * flash-based device, we prefer however to privilege
+-	 * throughput (and idling lowers throughput for this type of
+-	 * devices), for the following reasons:
+-	 * 1) differently from HDDs, the service time of random
+-	 *    requests is not orders of magnitudes lower than the service
+-	 *    time of sequential requests; thus, even if processes doing
+-	 *    sequential I/O get a preferential treatment with respect to
+-	 *    others doing random I/O, the consequences are not as
+-	 *    dramatic as with HDDs;
+-	 * 2) if a process doing random I/O does need strong
+-	 *    throughput guarantees, it is hopefully already being
+-	 *    weight-raised, or the user is likely to have assigned it a
+-	 *    higher weight than the other processes (and thus
+-	 *    sub-condition (i) is likely to be false, which triggers
+-	 *    idling).
++	 * for not controlling also sub-condition (ii) is that we
++	 * exploit preemption to preserve guarantees in case of
++	 * symmetric scenarios, even if (ii) does not hold, as
++	 * explained in the next two paragraphs.
++	 *
++	 * Even if a queue, say Q, is expired when it remains idle, Q
++	 * can still preempt the new in-service queue if the next
++	 * request of Q arrives soon (see the comments on
++	 * bfq_bfqq_update_budg_for_activation). If all queues and
++	 * groups have the same weight, this form of preemption,
++	 * combined with the hole-recovery heuristic described in the
++	 * comments on function bfq_bfqq_update_budg_for_activation,
++	 * are enough to preserve a correct bandwidth distribution in
++	 * the mid term, even without idling. In fact, even if not
++	 * idling allows the internal queues of the device to contain
++	 * many requests, and thus to reorder requests, we can rather
++	 * safely assume that the internal scheduler still preserves a
++	 * minimum of mid-term fairness. The motivation for using
++	 * preemption instead of idling is that, by not idling,
++	 * service guarantees are preserved without minimally
++	 * sacrificing throughput. In other words, both a high
++	 * throughput and its desired distribution are obtained.
++	 *
++	 * More precisely, this preemption-based, idleless approach
++	 * provides fairness in terms of IOPS, and not sectors per
++	 * second. This can be seen with a simple example. Suppose
++	 * that there are two queues with the same weight, but that
++	 * the first queue receives requests of 8 sectors, while the
++	 * second queue receives requests of 1024 sectors. In
++	 * addition, suppose that each of the two queues contains at
++	 * most one request at a time, which implies that each queue
++	 * always remains idle after it is served. Finally, after
++	 * remaining idle, each queue receives very quickly a new
++	 * request. It follows that the two queues are served
++	 * alternatively, preempting each other if needed. This
++	 * implies that, although both queues have the same weight,
++	 * the queue with large requests receives a service that is
++	 * 1024/8 times as high as the service received by the other
++	 * queue.
++	 *
++	 * On the other hand, device idling is performed, and thus
++	 * pure sector-domain guarantees are provided, for the
++	 * following queues, which are likely to need stronger
++	 * throughput guarantees: weight-raised queues, and queues
++	 * with a higher weight than other queues. When such queues
++	 * are active, sub-condition (i) is false, which triggers
++	 * device idling.
+ 	 *
+ 	 * According to the above considerations, the next variable is
+ 	 * true (only) if sub-condition (i) holds. To compute the
+@@ -2582,7 +3388,7 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ 	 * the function bfq_symmetric_scenario(), but also check
+ 	 * whether bfqq is being weight-raised, because
+ 	 * bfq_symmetric_scenario() does not take into account also
+-	 * weight-raised queues (see comments to
++	 * weight-raised queues (see comments on
+ 	 * bfq_weights_tree_add()).
+ 	 *
+ 	 * As a side note, it is worth considering that the above
+@@ -2604,17 +3410,16 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ 	 * bfqq. Such a case is when bfqq became active in a burst of
+ 	 * queue activations. Queues that became active during a large
+ 	 * burst benefit only from throughput, as discussed in the
+-	 * comments to bfq_handle_burst. Thus, if bfqq became active
++	 * comments on bfq_handle_burst. Thus, if bfqq became active
+ 	 * in a burst and not idling the device maximizes throughput,
+ 	 * then the device must no be idled, because not idling the
+ 	 * device provides bfqq and all other queues in the burst with
+-	 * maximum benefit. Combining this and the two cases above, we
+-	 * can now establish when idling is actually needed to
+-	 * preserve service guarantees.
++	 * maximum benefit. Combining this and the above case, we can
++	 * now establish when idling is actually needed to preserve
++	 * service guarantees.
+ 	 */
+ 	idling_needed_for_service_guarantees =
+-		(on_hdd_and_not_all_queues_seeky || asymmetric_scenario) &&
+-		!bfq_bfqq_in_large_burst(bfqq);
++		asymmetric_scenario && !bfq_bfqq_in_large_burst(bfqq);
+ 
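The request-size example in the comment above can be checked with a few lines of arithmetic; with strict alternation (IOPS fairness) the queue issuing 1024-sector requests gets 1024/8 = 128 times the sector throughput of the queue issuing 8-sector requests:

#include <stdio.h>

int main(void)
{
        unsigned small = 8, large = 1024, rounds = 1000;
        unsigned long sectors_small = (unsigned long)small * rounds;
        unsigned long sectors_large = (unsigned long)large * rounds;

        printf("service ratio = %lu\n", sectors_large / sectors_small); /* 128 */
        return 0;
}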
+ 	/*
+ 	 * We have now all the components we need to compute the return
+@@ -2624,6 +3429,16 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ 	 * 2) idling either boosts the throughput (without issues), or
+ 	 *    is necessary to preserve service guarantees.
+ 	 */
++	bfq_log_bfqq(bfqd, bfqq, "may_idle: sync %d idling_boosts_thr %d",
++		     bfq_bfqq_sync(bfqq), idling_boosts_thr);
++
++	bfq_log_bfqq(bfqd, bfqq,
++		     "may_idle: wr_busy %d boosts %d IO-bound %d guar %d",
++		     bfqd->wr_busy_queues,
++		     idling_boosts_thr_without_issues,
++		     bfq_bfqq_IO_bound(bfqq),
++		     idling_needed_for_service_guarantees);
++
+ 	return bfq_bfqq_sync(bfqq) &&
+ 		(idling_boosts_thr_without_issues ||
+ 		 idling_needed_for_service_guarantees);
+@@ -2635,7 +3450,7 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+  * 1) the queue must remain in service and cannot be expired, and
+  * 2) the device must be idled to wait for the possible arrival of a new
+  *    request for the queue.
+- * See the comments to the function bfq_bfqq_may_idle for the reasons
++ * See the comments on the function bfq_bfqq_may_idle for the reasons
+  * why performing device idling is the best choice to boost the throughput
+  * and preserve service guarantees when bfq_bfqq_may_idle itself
+  * returns true.
+@@ -2665,7 +3480,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 	bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
+ 
+ 	if (bfq_may_expire_for_budg_timeout(bfqq) &&
+-	    !timer_pending(&bfqd->idle_slice_timer) &&
++	    !hrtimer_active(&bfqd->idle_slice_timer) &&
+ 	    !bfq_bfqq_must_idle(bfqq))
+ 		goto expire;
+ 
+@@ -2685,7 +3500,8 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 			 * not disable disk idling even when a new request
+ 			 * arrives.
+ 			 */
+-			if (timer_pending(&bfqd->idle_slice_timer)) {
++			if (bfq_bfqq_wait_request(bfqq)) {
++				BUG_ON(!hrtimer_active(&bfqd->idle_slice_timer));
+ 				/*
+ 				 * If we get here: 1) at least a new request
+ 				 * has arrived but we have not disabled the
+@@ -2700,10 +3516,8 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 				 * So we disable idling.
+ 				 */
+ 				bfq_clear_bfqq_wait_request(bfqq);
+-				del_timer(&bfqd->idle_slice_timer);
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
++				hrtimer_try_to_cancel(&bfqd->idle_slice_timer);
+ 				bfqg_stats_update_idle_time(bfqq_group(bfqq));
+-#endif
+ 			}
+ 			goto keep_queue;
+ 		}
+@@ -2714,7 +3528,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 	 * for a new request, or has requests waiting for a completion and
+ 	 * may idle after their completion, then keep it anyway.
+ 	 */
+-	if (timer_pending(&bfqd->idle_slice_timer) ||
++	if (hrtimer_active(&bfqd->idle_slice_timer) ||
+ 	    (bfqq->dispatched != 0 && bfq_bfqq_may_idle(bfqq))) {
+ 		bfqq = NULL;
+ 		goto keep_queue;
+@@ -2736,6 +3550,9 @@ static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 	struct bfq_entity *entity = &bfqq->entity;
+ 
+ 	if (bfqq->wr_coeff > 1) { /* queue is being weight-raised */
++		BUG_ON(bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time &&
++		       time_is_after_jiffies(bfqq->last_wr_start_finish));
++
+ 		bfq_log_bfqq(bfqd, bfqq,
+ 			"raising period dur %u/%u msec, old coeff %u, w %d(%d)",
+ 			jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
+@@ -2749,22 +3566,30 @@ static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 			bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
+ 
+ 		/*
+-		 * If the queue was activated in a burst, or
+-		 * too much time has elapsed from the beginning
+-		 * of this weight-raising period, or the queue has
+-		 * exceeded the acceptable number of cooperations,
+-		 * then end weight raising.
++		 * If the queue was activated in a burst, or too much
++		 * time has elapsed from the beginning of this
++		 * weight-raising period, then end weight raising.
+ 		 */
+-		if (bfq_bfqq_in_large_burst(bfqq) ||
+-		    bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh ||
+-		    time_is_before_jiffies(bfqq->last_wr_start_finish +
+-					   bfqq->wr_cur_max_time)) {
+-			bfqq->last_wr_start_finish = jiffies;
+-			bfq_log_bfqq(bfqd, bfqq,
+-				     "wrais ending at %lu, rais_max_time %u",
+-				     bfqq->last_wr_start_finish,
+-				     jiffies_to_msecs(bfqq->wr_cur_max_time));
++		if (bfq_bfqq_in_large_burst(bfqq))
+ 			bfq_bfqq_end_wr(bfqq);
++		else if (time_is_before_jiffies(bfqq->last_wr_start_finish +
++					   bfqq->wr_cur_max_time)) {
++			if (bfqq->wr_cur_max_time != bfqd->bfq_wr_rt_max_time ||
++			time_is_before_jiffies(bfqq->wr_start_at_switch_to_srt +
++					bfq_wr_duration(bfqd)))
++				bfq_bfqq_end_wr(bfqq);
++			else {
++				/* switch back to interactive wr */
++				bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++				bfqq->last_wr_start_finish =
++					bfqq->wr_start_at_switch_to_srt;
++				BUG_ON(time_is_after_jiffies(
++					       bfqq->last_wr_start_finish));
++				bfqq->entity.prio_changed = 1;
++				bfq_log_bfqq(bfqd, bfqq,
++					"back to interactive wr");
++			}
+ 		}
+ 	}
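The weight-raising update above introduces a fallback from soft real-time raising to interactive raising. A compact sketch of that state machine (boolean inputs stand in for the jiffies comparisons in the patch) is:

#include <stdbool.h>
#include <stdio.h>

enum wr_next { END_WR, BACK_TO_INTERACTIVE, KEEP_WR };

static enum wr_next wr_decision(bool in_large_burst, bool cur_period_elapsed,
                                bool cur_is_soft_rt, bool interactive_period_elapsed)
{
        if (in_large_burst)
                return END_WR;
        if (!cur_period_elapsed)
                return KEEP_WR;
        if (!cur_is_soft_rt || interactive_period_elapsed)
                return END_WR;
        return BACK_TO_INTERACTIVE;
}

int main(void)
{
        /* soft-rt period over, interactive period still running */
        printf("%d\n", wr_decision(false, true, true, false)); /* 1: back to interactive */
        /* soft-rt period over and interactive period also over */
        printf("%d\n", wr_decision(false, true, true, true));  /* 0: end weight raising */
        return 0;
}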
+ 	/* Update weight both if it must be raised and if it must be lowered */
+@@ -2815,13 +3640,29 @@ static int bfq_dispatch_request(struct bfq_data *bfqd,
+ 		 */
+ 		if (!bfqd->rq_in_driver)
+ 			bfq_schedule_dispatch(bfqd);
++		BUG_ON(bfqq->entity.budget < bfqq->entity.service);
+ 		goto expire;
+ 	}
+ 
++	BUG_ON(bfqq->entity.budget < bfqq->entity.service);
+ 	/* Finally, insert request into driver dispatch list. */
+ 	bfq_bfqq_served(bfqq, service_to_charge);
++
++	BUG_ON(bfqq->entity.budget < bfqq->entity.service);
++
+ 	bfq_dispatch_insert(bfqd->queue, rq);
+ 
++	/*
++	 * If weight raising has to terminate for bfqq, then next
++	 * function causes an immediate update of bfqq's weight,
++	 * without waiting for next activation. As a consequence, on
++	 * expiration, bfqq will be timestamped as if it has never been
++	 * weight-raised during this service slot, even if it has
++	 * received part or even most of the service as a
++	 * weight-raised queue. This inflates bfqq's timestamps, which
++	 * is beneficial, as bfqq is then more willing to leave the
++	 * device immediately to possible other weight-raised queues.
++	 */
+ 	bfq_update_wr_data(bfqd, bfqq);
+ 
+ 	bfq_log_bfqq(bfqd, bfqq,
+@@ -2837,9 +3678,7 @@ static int bfq_dispatch_request(struct bfq_data *bfqd,
+ 		bfqd->in_service_bic = RQ_BIC(rq);
+ 	}
+ 
+-	if (bfqd->busy_queues > 1 && ((!bfq_bfqq_sync(bfqq) &&
+-	    dispatched >= bfqd->bfq_max_budget_async_rq) ||
+-	    bfq_class_idle(bfqq)))
++	if (bfqd->busy_queues > 1 && bfq_class_idle(bfqq))
+ 		goto expire;
+ 
+ 	return dispatched;
+@@ -2885,8 +3724,8 @@ static int bfq_forced_dispatch(struct bfq_data *bfqd)
+ 		st = bfq_entity_service_tree(&bfqq->entity);
+ 
+ 		dispatched += __bfq_forced_dispatch_bfqq(bfqq);
+-		bfqq->max_budget = bfq_max_budget(bfqd);
+ 
++		bfqq->max_budget = bfq_max_budget(bfqd);
+ 		bfq_forget_idle(st);
+ 	}
+ 
+@@ -2899,37 +3738,37 @@ static int bfq_dispatch_requests(struct request_queue *q, int force)
+ {
+ 	struct bfq_data *bfqd = q->elevator->elevator_data;
+ 	struct bfq_queue *bfqq;
+-	int max_dispatch;
+ 
+ 	bfq_log(bfqd, "dispatch requests: %d busy queues", bfqd->busy_queues);
++
+ 	if (bfqd->busy_queues == 0)
+ 		return 0;
+ 
+ 	if (unlikely(force))
+ 		return bfq_forced_dispatch(bfqd);
+ 
++	/*
++	 * Force device to serve one request at a time if
++	 * strict_guarantees is true. Forcing this service scheme is
++	 * currently the ONLY way to guarantee that the request
++	 * service order enforced by the scheduler is respected by a
++	 * queueing device. Otherwise the device is free even to make
++	 * some unlucky request wait for as long as the device
++	 * wishes.
++	 *
++	 * Of course, serving one request at a time may cause loss of
++	 * throughput.
++	 */
++	if (bfqd->strict_guarantees && bfqd->rq_in_driver > 0)
++		return 0;
++
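A minimal userspace model of the strict_guarantees gate added above (not kernel code, names are illustrative): a new request is dispatched only if the device currently holds no other request, so a queueing device can never reorder BFQ's choices.

#include <stdbool.h>
#include <stdio.h>

struct sched_state {
        bool strict_guarantees;
        int rq_in_driver;       /* requests already handed to the device */
};

static bool may_dispatch(const struct sched_state *s)
{
        return !(s->strict_guarantees && s->rq_in_driver > 0);
}

int main(void)
{
        struct sched_state busy = { true, 1 }, idle = { true, 0 };

        printf("%d %d\n", may_dispatch(&busy), may_dispatch(&idle)); /* 0 1 */
        return 0;
}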
+ 	bfqq = bfq_select_queue(bfqd);
+ 	if (!bfqq)
+ 		return 0;
+ 
+-	if (bfq_class_idle(bfqq))
+-		max_dispatch = 1;
+-
+-	if (!bfq_bfqq_sync(bfqq))
+-		max_dispatch = bfqd->bfq_max_budget_async_rq;
+-
+-	if (!bfq_bfqq_sync(bfqq) && bfqq->dispatched >= max_dispatch) {
+-		if (bfqd->busy_queues > 1)
+-			return 0;
+-		if (bfqq->dispatched >= 4 * max_dispatch)
+-			return 0;
+-	}
+-
+-	if (bfqd->sync_flight != 0 && !bfq_bfqq_sync(bfqq))
+-		return 0;
++	BUG_ON(bfqq->entity.budget < bfqq->entity.service);
+ 
+-	bfq_clear_bfqq_wait_request(bfqq);
+-	BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++	BUG_ON(bfq_bfqq_wait_request(bfqq));
+ 
+ 	if (!bfq_dispatch_request(bfqd, bfqq))
+ 		return 0;
+@@ -2937,6 +3776,8 @@ static int bfq_dispatch_requests(struct request_queue *q, int force)
+ 	bfq_log_bfqq(bfqd, bfqq, "dispatched %s request",
+ 			bfq_bfqq_sync(bfqq) ? "sync" : "async");
+ 
++	BUG_ON(bfqq->next_rq == NULL &&
++	       bfqq->entity.budget < bfqq->entity.service);
+ 	return 1;
+ }
+ 
+@@ -2948,23 +3789,22 @@ static int bfq_dispatch_requests(struct request_queue *q, int force)
+  */
+ static void bfq_put_queue(struct bfq_queue *bfqq)
+ {
+-	struct bfq_data *bfqd = bfqq->bfqd;
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 	struct bfq_group *bfqg = bfqq_group(bfqq);
+ #endif
+ 
+-	BUG_ON(atomic_read(&bfqq->ref) <= 0);
++	BUG_ON(bfqq->ref <= 0);
+ 
+-	bfq_log_bfqq(bfqd, bfqq, "put_queue: %p %d", bfqq,
+-		     atomic_read(&bfqq->ref));
+-	if (!atomic_dec_and_test(&bfqq->ref))
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "put_queue: %p %d", bfqq, bfqq->ref);
++	bfqq->ref--;
++	if (bfqq->ref)
+ 		return;
+ 
+ 	BUG_ON(rb_first(&bfqq->sort_list));
+ 	BUG_ON(bfqq->allocated[READ] + bfqq->allocated[WRITE] != 0);
+ 	BUG_ON(bfqq->entity.tree);
+ 	BUG_ON(bfq_bfqq_busy(bfqq));
+-	BUG_ON(bfqd->in_service_queue == bfqq);
++	BUG_ON(bfqq->bfqd->in_service_queue == bfqq);
+ 
+ 	if (bfq_bfqq_sync(bfqq))
+ 		/*
+@@ -2977,7 +3817,7 @@ static void bfq_put_queue(struct bfq_queue *bfqq)
+ 		 */
+ 		hlist_del_init(&bfqq->burst_list_node);
+ 
+-	bfq_log_bfqq(bfqd, bfqq, "put_queue: %p freed", bfqq);
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "put_queue: %p freed", bfqq);
+ 
+ 	kmem_cache_free(bfq_pool, bfqq);
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+@@ -3011,8 +3851,7 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 		bfq_schedule_dispatch(bfqd);
+ 	}
+ 
+-	bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq,
+-		     atomic_read(&bfqq->ref));
++	bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq, bfqq->ref);
+ 
+ 	bfq_put_cooperator(bfqq);
+ 
+@@ -3021,28 +3860,7 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 
+ static void bfq_init_icq(struct io_cq *icq)
+ {
+-	struct bfq_io_cq *bic = icq_to_bic(icq);
+-
+-	bic->ttime.last_end_request = jiffies;
+-	/*
+-	 * A newly created bic indicates that the process has just
+-	 * started doing I/O, and is probably mapping into memory its
+-	 * executable and libraries: it definitely needs weight raising.
+-	 * There is however the possibility that the process performs,
+-	 * for a while, I/O close to some other process. EQM intercepts
+-	 * this behavior and may merge the queue corresponding to the
+-	 * process  with some other queue, BEFORE the weight of the queue
+-	 * is raised. Merged queues are not weight-raised (they are assumed
+-	 * to belong to processes that benefit only from high throughput).
+-	 * If the merge is basically the consequence of an accident, then
+-	 * the queue will be split soon and will get back its old weight.
+-	 * It is then important to write down somewhere that this queue
+-	 * does need weight raising, even if it did not make it to get its
+-	 * weight raised before being merged. To this purpose, we overload
+-	 * the field raising_time_left and assign 1 to it, to mark the queue
+-	 * as needing weight raising.
+-	 */
+-	bic->wr_time_left = 1;
++	icq_to_bic(icq)->ttime.last_end_request = ktime_get_ns() - (1ULL<<32);
+ }
+ 
+ static void bfq_exit_icq(struct io_cq *icq)
+@@ -3050,21 +3868,21 @@ static void bfq_exit_icq(struct io_cq *icq)
+ 	struct bfq_io_cq *bic = icq_to_bic(icq);
+ 	struct bfq_data *bfqd = bic_to_bfqd(bic);
+ 
+-	if (bic->bfqq[BLK_RW_ASYNC]) {
+-		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_ASYNC]);
+-		bic->bfqq[BLK_RW_ASYNC] = NULL;
++	if (bic_to_bfqq(bic, false)) {
++		bfq_exit_bfqq(bfqd, bic_to_bfqq(bic, false));
++		bic_set_bfqq(bic, NULL, false);
+ 	}
+ 
+-	if (bic->bfqq[BLK_RW_SYNC]) {
++	if (bic_to_bfqq(bic, true)) {
+ 		/*
+ 		 * If the bic is using a shared queue, put the reference
+ 		 * taken on the io_context when the bic started using a
+ 		 * shared bfq_queue.
+ 		 */
+-		if (bfq_bfqq_coop(bic->bfqq[BLK_RW_SYNC]))
++		if (bfq_bfqq_coop(bic_to_bfqq(bic, true)))
+ 			put_io_context(icq->ioc);
+-		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
+-		bic->bfqq[BLK_RW_SYNC] = NULL;
++		bfq_exit_bfqq(bfqd, bic_to_bfqq(bic, true));
++		bic_set_bfqq(bic, NULL, true);
+ 	}
+ }
+ 
+@@ -3072,8 +3890,8 @@ static void bfq_exit_icq(struct io_cq *icq)
+  * Update the entity prio values; note that the new values will not
+  * be used until the next (re)activation.
+  */
+-static void
+-bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++static void bfq_set_next_ioprio_data(struct bfq_queue *bfqq,
++				     struct bfq_io_cq *bic)
+ {
+ 	struct task_struct *tsk = current;
+ 	int ioprio_class;
+@@ -3105,7 +3923,7 @@ bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
+ 		break;
+ 	}
+ 
+-	if (bfqq->new_ioprio < 0 || bfqq->new_ioprio >= IOPRIO_BE_NR) {
++	if (bfqq->new_ioprio >= IOPRIO_BE_NR) {
+ 		pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n",
+ 			bfqq->new_ioprio);
+ 		BUG();
+@@ -3113,45 +3931,40 @@ bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
+ 
+ 	bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio);
+ 	bfqq->entity.prio_changed = 1;
++	bfq_log_bfqq(bfqq->bfqd, bfqq,
++		     "set_next_ioprio_data: bic_class %d prio %d class %d",
++		     ioprio_class, bfqq->new_ioprio, bfqq->new_ioprio_class);
+ }
+ 
+ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
+ {
+-	struct bfq_data *bfqd;
+-	struct bfq_queue *bfqq, *new_bfqq;
++	struct bfq_data *bfqd = bic_to_bfqd(bic);
++	struct bfq_queue *bfqq;
+ 	unsigned long uninitialized_var(flags);
+ 	int ioprio = bic->icq.ioc->ioprio;
+ 
+-	bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data),
+-				   &flags);
+ 	/*
+ 	 * This condition may trigger on a newly created bic, be sure to
+ 	 * drop the lock before returning.
+ 	 */
+ 	if (unlikely(!bfqd) || likely(bic->ioprio == ioprio))
+-		goto out;
++		return;
+ 
+ 	bic->ioprio = ioprio;
+ 
+-	bfqq = bic->bfqq[BLK_RW_ASYNC];
++	bfqq = bic_to_bfqq(bic, false);
+ 	if (bfqq) {
+-		new_bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic,
+-					 GFP_ATOMIC);
+-		if (new_bfqq) {
+-			bic->bfqq[BLK_RW_ASYNC] = new_bfqq;
+-			bfq_log_bfqq(bfqd, bfqq,
+-				     "check_ioprio_change: bfqq %p %d",
+-				     bfqq, atomic_read(&bfqq->ref));
+-			bfq_put_queue(bfqq);
+-		}
++		bfq_put_queue(bfqq);
++		bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic);
++		bic_set_bfqq(bic, bfqq, false);
++		bfq_log_bfqq(bfqd, bfqq,
++			     "check_ioprio_change: bfqq %p %d",
++			     bfqq, bfqq->ref);
+ 	}
+ 
+-	bfqq = bic->bfqq[BLK_RW_SYNC];
++	bfqq = bic_to_bfqq(bic, true);
+ 	if (bfqq)
+ 		bfq_set_next_ioprio_data(bfqq, bic);
+-
+-out:
+-	bfq_put_bfqd_unlock(bfqd, &flags);
+ }
+ 
+ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+@@ -3160,8 +3973,9 @@ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	RB_CLEAR_NODE(&bfqq->entity.rb_node);
+ 	INIT_LIST_HEAD(&bfqq->fifo);
+ 	INIT_HLIST_NODE(&bfqq->burst_list_node);
++	BUG_ON(!hlist_unhashed(&bfqq->burst_list_node));
+ 
+-	atomic_set(&bfqq->ref, 0);
++	bfqq->ref = 0;
+ 	bfqq->bfqd = bfqd;
+ 
+ 	if (bic)
+@@ -3171,6 +3985,7 @@ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 		if (!bfq_class_idle(bfqq))
+ 			bfq_mark_bfqq_idle_window(bfqq);
+ 		bfq_mark_bfqq_sync(bfqq);
++		bfq_mark_bfqq_just_created(bfqq);
+ 	} else
+ 		bfq_clear_bfqq_sync(bfqq);
+ 	bfq_mark_bfqq_IO_bound(bfqq);
+@@ -3180,72 +3995,19 @@ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	bfqq->pid = pid;
+ 
+ 	bfqq->wr_coeff = 1;
+-	bfqq->last_wr_start_finish = 0;
++	bfqq->last_wr_start_finish = jiffies;
++	bfqq->wr_start_at_switch_to_srt = bfq_smallest_from_now();
++	bfqq->budget_timeout = bfq_smallest_from_now();
++	bfqq->split_time = bfq_smallest_from_now();
++
+ 	/*
+ 	 * Set to the value for which bfqq will not be deemed as
+ 	 * soft rt when it becomes backlogged.
+ 	 */
+-	bfqq->soft_rt_next_start = bfq_infinity_from_now(jiffies);
+-}
+-
+-static struct bfq_queue *bfq_find_alloc_queue(struct bfq_data *bfqd,
+-					      struct bio *bio, int is_sync,
+-					      struct bfq_io_cq *bic,
+-					      gfp_t gfp_mask)
+-{
+-	struct bfq_group *bfqg;
+-	struct bfq_queue *bfqq, *new_bfqq = NULL;
+-	struct blkcg *blkcg;
+-
+-retry:
+-	rcu_read_lock();
+-
+-	blkcg = bio_blkcg(bio);
+-	bfqg = bfq_find_alloc_group(bfqd, blkcg);
+-	/* bic always exists here */
+-	bfqq = bic_to_bfqq(bic, is_sync);
+-
+-	/*
+-	 * Always try a new alloc if we fall back to the OOM bfqq
+-	 * originally, since it should just be a temporary situation.
+-	 */
+-	if (!bfqq || bfqq == &bfqd->oom_bfqq) {
+-		bfqq = NULL;
+-		if (new_bfqq) {
+-			bfqq = new_bfqq;
+-			new_bfqq = NULL;
+-		} else if (gfpflags_allow_blocking(gfp_mask)) {
+-			rcu_read_unlock();
+-			spin_unlock_irq(bfqd->queue->queue_lock);
+-			new_bfqq = kmem_cache_alloc_node(bfq_pool,
+-					gfp_mask | __GFP_ZERO,
+-					bfqd->queue->node);
+-			spin_lock_irq(bfqd->queue->queue_lock);
+-			if (new_bfqq)
+-				goto retry;
+-		} else {
+-			bfqq = kmem_cache_alloc_node(bfq_pool,
+-					gfp_mask | __GFP_ZERO,
+-					bfqd->queue->node);
+-		}
+-
+-		if (bfqq) {
+-			bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
+-				      is_sync);
+-			bfq_init_entity(&bfqq->entity, bfqg);
+-			bfq_log_bfqq(bfqd, bfqq, "allocated");
+-		} else {
+-			bfqq = &bfqd->oom_bfqq;
+-			bfq_log_bfqq(bfqd, bfqq, "using oom bfqq");
+-		}
+-	}
+-
+-	if (new_bfqq)
+-		kmem_cache_free(bfq_pool, new_bfqq);
+-
+-	rcu_read_unlock();
++	bfqq->soft_rt_next_start = bfq_greatest_from_now();
+ 
+-	return bfqq;
++	/* first request is almost certainly seeky */
++	bfqq->seek_history = 1;
+ }
+ 
+ static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
+@@ -3268,90 +4030,84 @@ static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
+ }
+ 
+ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
+-				       struct bio *bio, int is_sync,
+-				       struct bfq_io_cq *bic, gfp_t gfp_mask)
++				       struct bio *bio, bool is_sync,
++				       struct bfq_io_cq *bic)
+ {
+ 	const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
+ 	const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
+ 	struct bfq_queue **async_bfqq = NULL;
+-	struct bfq_queue *bfqq = NULL;
++	struct bfq_queue *bfqq;
++	struct bfq_group *bfqg;
+ 
+-	if (!is_sync) {
+-		struct blkcg *blkcg;
+-		struct bfq_group *bfqg;
++	rcu_read_lock();
++
++	bfqg = bfq_find_set_group(bfqd, bio_blkcg(bio));
++	if (!bfqg) {
++		bfqq = &bfqd->oom_bfqq;
++		goto out;
++	}
+ 
+-		rcu_read_lock();
+-		blkcg = bio_blkcg(bio);
+-		rcu_read_unlock();
+-		bfqg = bfq_find_alloc_group(bfqd, blkcg);
++	if (!is_sync) {
+ 		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
+ 						  ioprio);
+ 		bfqq = *async_bfqq;
++		if (bfqq)
++			goto out;
+ 	}
+ 
+-	if (!bfqq)
+-		bfqq = bfq_find_alloc_queue(bfqd, bio, is_sync, bic, gfp_mask);
++	bfqq = kmem_cache_alloc_node(bfq_pool, GFP_NOWAIT | __GFP_ZERO,
++				     bfqd->queue->node);
++
++	if (bfqq) {
++		bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
++			      is_sync);
++		bfq_init_entity(&bfqq->entity, bfqg);
++		bfq_log_bfqq(bfqd, bfqq, "allocated");
++	} else {
++		bfqq = &bfqd->oom_bfqq;
++		bfq_log_bfqq(bfqd, bfqq, "using oom bfqq");
++		goto out;
++	}
+ 
+ 	/*
+ 	 * Pin the queue now that it's allocated, scheduler exit will
+ 	 * prune it.
+ 	 */
+-	if (!is_sync && !(*async_bfqq)) {
+-		atomic_inc(&bfqq->ref);
++	if (async_bfqq) {
++		bfqq->ref++;
+ 		bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d",
+-			     bfqq, atomic_read(&bfqq->ref));
++			     bfqq, bfqq->ref);
+ 		*async_bfqq = bfqq;
+ 	}
+ 
+-	atomic_inc(&bfqq->ref);
+-	bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq,
+-		     atomic_read(&bfqq->ref));
++out:
++	bfqq->ref++;
++	bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq, bfqq->ref);
++	rcu_read_unlock();
+ 	return bfqq;
+ }
+ 
+ static void bfq_update_io_thinktime(struct bfq_data *bfqd,
+ 				    struct bfq_io_cq *bic)
+ {
+-	unsigned long elapsed = jiffies - bic->ttime.last_end_request;
+-	unsigned long ttime = min(elapsed, 2UL * bfqd->bfq_slice_idle);
++	struct bfq_ttime *ttime = &bic->ttime;
++	u64 elapsed = ktime_get_ns() - bic->ttime.last_end_request;
++
++	elapsed = min_t(u64, elapsed, 2 * bfqd->bfq_slice_idle);
+ 
+-	bic->ttime.ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8;
+-	bic->ttime.ttime_total = (7*bic->ttime.ttime_total + 256*ttime) / 8;
+-	bic->ttime.ttime_mean = (bic->ttime.ttime_total + 128) /
+-				bic->ttime.ttime_samples;
++	ttime->ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8;
++	ttime->ttime_total = div_u64(7*ttime->ttime_total + 256*elapsed,  8);
++	ttime->ttime_mean = div64_ul(ttime->ttime_total + 128,
++				     ttime->ttime_samples);
+ }
+ 
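The think-time update above is a fixed-point exponentially weighted average: samples and total decay by 7/8 on every update, new samples enter with weight 256, and the mean is total/samples with rounding. A userspace model (the 8 ms clamp value is an assumption matching the usual bfq_slice_idle default):

#include <stdint.h>
#include <stdio.h>

struct ttime {
        uint64_t samples;
        uint64_t total;
        uint64_t mean;
};

static void ttime_update(struct ttime *t, uint64_t elapsed_ns, uint64_t slice_idle_ns)
{
        if (elapsed_ns > 2 * slice_idle_ns)     /* clamp outliers */
                elapsed_ns = 2 * slice_idle_ns;
        t->samples = (7 * t->samples + 256) / 8;
        t->total   = (7 * t->total + 256 * elapsed_ns) / 8;
        t->mean    = (t->total + 128) / t->samples;
}

int main(void)
{
        struct ttime t = { 0, 0, 0 };
        uint64_t slice_idle = 8000000;  /* 8 ms, assumed default */

        for (int i = 0; i < 5; i++) {
                ttime_update(&t, 1000000, slice_idle);  /* 1 ms think time */
                printf("mean ~ %llu ns\n", (unsigned long long)t.mean);
        }
        return 0;
}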
+-static void bfq_update_io_seektime(struct bfq_data *bfqd,
+-				   struct bfq_queue *bfqq,
+-				   struct request *rq)
++static void
++bfq_update_io_seektime(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++		       struct request *rq)
+ {
+-	sector_t sdist;
+-	u64 total;
+-
+-	if (bfqq->last_request_pos < blk_rq_pos(rq))
+-		sdist = blk_rq_pos(rq) - bfqq->last_request_pos;
+-	else
+-		sdist = bfqq->last_request_pos - blk_rq_pos(rq);
+-
+-	/*
+-	 * Don't allow the seek distance to get too large from the
+-	 * odd fragment, pagein, etc.
+-	 */
+-	if (bfqq->seek_samples == 0) /* first request, not really a seek */
+-		sdist = 0;
+-	else if (bfqq->seek_samples <= 60) /* second & third seek */
+-		sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*1024);
+-	else
+-		sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*64);
+-
+-	bfqq->seek_samples = (7*bfqq->seek_samples + 256) / 8;
+-	bfqq->seek_total = (7*bfqq->seek_total + (u64)256*sdist) / 8;
+-	total = bfqq->seek_total + (bfqq->seek_samples/2);
+-	do_div(total, bfqq->seek_samples);
+-	bfqq->seek_mean = (sector_t)total;
+-
+-	bfq_log_bfqq(bfqd, bfqq, "dist=%llu mean=%llu", (u64)sdist,
+-			(u64)bfqq->seek_mean);
++	bfqq->seek_history <<= 1;
++	bfqq->seek_history |=
++		get_sdist(bfqq->last_request_pos, rq) > BFQQ_SEEK_THR;
+ }
+ 
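The new seek tracking replaces the per-queue seek mean with a 32-bit history register: each enqueued request shifts the register and records whether its distance from the previous request exceeded a threshold. The sketch below is a userspace model; the threshold value and the popcount cutoff (more than 1/4 of the samples) are illustrative, the real BFQQ_SEEK_THR and BFQQ_SEEKY() definitions live in the patched sources.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SEEK_THR        (8 * 1024)      /* sectors; illustrative value */

struct queue_model {
        uint32_t seek_history;
        uint64_t last_pos;
};

static void record_request(struct queue_model *q, uint64_t pos)
{
        uint64_t dist = pos > q->last_pos ? pos - q->last_pos : q->last_pos - pos;

        q->seek_history = (q->seek_history << 1) | (dist > SEEK_THR);
        q->last_pos = pos;
}

static bool seeky(const struct queue_model *q)
{
        return (unsigned)__builtin_popcount(q->seek_history) > 32 / 4;
}

int main(void)
{
        struct queue_model q = { 1, 0 };        /* first request counted as a seek */

        for (uint64_t pos = 0; pos < 64 * 2048; pos += 2048)    /* sequential 1 MiB steps */
                record_request(&q, pos);
        printf("seeky after sequential run: %d\n", seeky(&q));  /* 0 */
        return 0;
}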
+ /*
+@@ -3369,7 +4125,8 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
+ 		return;
+ 
+ 	/* Idle window just restored, statistics are meaningless. */
+-	if (bfq_bfqq_just_split(bfqq))
++	if (time_is_after_eq_jiffies(bfqq->split_time +
++				     bfqd->bfq_wr_min_idle_time))
+ 		return;
+ 
+ 	enable_idle = bfq_bfqq_idle_window(bfqq);
+@@ -3409,22 +4166,13 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 
+ 	bfq_update_io_thinktime(bfqd, bic);
+ 	bfq_update_io_seektime(bfqd, bfqq, rq);
+-	if (!BFQQ_SEEKY(bfqq) && bfq_bfqq_constantly_seeky(bfqq)) {
+-		bfq_clear_bfqq_constantly_seeky(bfqq);
+-		if (!blk_queue_nonrot(bfqd->queue)) {
+-			BUG_ON(!bfqd->const_seeky_busy_in_flight_queues);
+-			bfqd->const_seeky_busy_in_flight_queues--;
+-		}
+-	}
+ 	if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
+ 	    !BFQQ_SEEKY(bfqq))
+ 		bfq_update_idle_window(bfqd, bfqq, bic);
+-	bfq_clear_bfqq_just_split(bfqq);
+ 
+ 	bfq_log_bfqq(bfqd, bfqq,
+-		     "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
+-		     bfq_bfqq_idle_window(bfqq), BFQQ_SEEKY(bfqq),
+-		     (unsigned long long) bfqq->seek_mean);
++		     "rq_enqueued: idle_window=%d (seeky %d)",
++		     bfq_bfqq_idle_window(bfqq), BFQQ_SEEKY(bfqq));
+ 
+ 	bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
+ 
+@@ -3438,14 +4186,15 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 		 * is small and the queue is not to be expired, then
+ 		 * just exit.
+ 		 *
+-		 * In this way, if the disk is being idled to wait for
+-		 * a new request from the in-service queue, we avoid
+-		 * unplugging the device and committing the disk to serve
+-		 * just a small request. On the contrary, we wait for
+-		 * the block layer to decide when to unplug the device:
+-		 * hopefully, new requests will be merged to this one
+-		 * quickly, then the device will be unplugged and
+-		 * larger requests will be dispatched.
++		 * In this way, if the device is being idled to wait
++		 * for a new request from the in-service queue, we
++		 * avoid unplugging the device and committing the
++		 * device to serve just a small request. On the
++		 * contrary, we wait for the block layer to decide
++		 * when to unplug the device: hopefully, new requests
++		 * will be merged to this one quickly, then the device
++		 * will be unplugged and larger requests will be
++		 * dispatched.
+ 		 */
+ 		if (small_req && !budget_timeout)
+ 			return;
+@@ -3457,10 +4206,8 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 		 * timer.
+ 		 */
+ 		bfq_clear_bfqq_wait_request(bfqq);
+-		del_timer(&bfqd->idle_slice_timer);
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		hrtimer_try_to_cancel(&bfqd->idle_slice_timer);
+ 		bfqg_stats_update_idle_time(bfqq_group(bfqq));
+-#endif
+ 
+ 		/*
+ 		 * The queue is not empty, because a new request just
+@@ -3504,28 +4251,21 @@ static void bfq_insert_request(struct request_queue *q, struct request *rq)
+ 			 */
+ 			new_bfqq->allocated[rq_data_dir(rq)]++;
+ 			bfqq->allocated[rq_data_dir(rq)]--;
+-			atomic_inc(&new_bfqq->ref);
++			new_bfqq->ref++;
++			bfq_clear_bfqq_just_created(bfqq);
+ 			bfq_put_queue(bfqq);
+ 			if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
+ 				bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
+ 						bfqq, new_bfqq);
+ 			rq->elv.priv[1] = new_bfqq;
+ 			bfqq = new_bfqq;
+-		} else
+-			bfq_bfqq_increase_failed_cooperations(bfqq);
++		}
+ 	}
+ 
+ 	bfq_add_request(rq);
+ 
+-	/*
+-	 * Here a newly-created bfq_queue has already started a weight-raising
+-	 * period: clear raising_time_left to prevent bfq_bfqq_save_state()
+-	 * from assigning it a full weight-raising period. See the detailed
+-	 * comments about this field in bfq_init_icq().
+-	 */
+-	if (bfqq->bic)
+-		bfqq->bic->wr_time_left = 0;
+-	rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
++	rq->fifo_time = ktime_get_ns() +
++	  jiffies_to_nsecs(bfqd->bfq_fifo_expire[rq_is_sync(rq)]);
+ 	list_add_tail(&rq->queuelist, &bfqq->fifo);
+ 
+ 	bfq_rq_enqueued(bfqd, bfqq, rq);
+@@ -3533,8 +4273,8 @@ static void bfq_insert_request(struct request_queue *q, struct request *rq)
+ 
+ static void bfq_update_hw_tag(struct bfq_data *bfqd)
+ {
+-	bfqd->max_rq_in_driver = max(bfqd->max_rq_in_driver,
+-				     bfqd->rq_in_driver);
++	bfqd->max_rq_in_driver = max_t(int, bfqd->max_rq_in_driver,
++				       bfqd->rq_in_driver);
+ 
+ 	if (bfqd->hw_tag == 1)
+ 		return;
+@@ -3560,48 +4300,85 @@ static void bfq_completed_request(struct request_queue *q, struct request *rq)
+ {
+ 	struct bfq_queue *bfqq = RQ_BFQQ(rq);
+ 	struct bfq_data *bfqd = bfqq->bfqd;
+-	bool sync = bfq_bfqq_sync(bfqq);
++	u64 now_ns;
++	u32 delta_us;
+ 
+-	bfq_log_bfqq(bfqd, bfqq, "completed one req with %u sects left (%d)",
+-		     blk_rq_sectors(rq), sync);
++	bfq_log_bfqq(bfqd, bfqq, "completed one req with %u sects left",
++		     blk_rq_sectors(rq));
+ 
++	assert_spin_locked(bfqd->queue->queue_lock);
+ 	bfq_update_hw_tag(bfqd);
+ 
+ 	BUG_ON(!bfqd->rq_in_driver);
+ 	BUG_ON(!bfqq->dispatched);
+ 	bfqd->rq_in_driver--;
+ 	bfqq->dispatched--;
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 	bfqg_stats_update_completion(bfqq_group(bfqq),
+ 				     rq_start_time_ns(rq),
+-				     rq_io_start_time_ns(rq), rq->cmd_flags);
+-#endif
++				     rq_io_start_time_ns(rq), req_op(rq),
++				     rq->cmd_flags);
+ 
+ 	if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
++		BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++		/*
++		 * Set budget_timeout (which we overload to store the
++		 * time at which the queue remains with no backlog and
++		 * no outstanding request; used by the weight-raising
++		 * mechanism).
++		 */
++		bfqq->budget_timeout = jiffies;
++
+ 		bfq_weights_tree_remove(bfqd, &bfqq->entity,
+ 					&bfqd->queue_weights_tree);
+-		if (!blk_queue_nonrot(bfqd->queue)) {
+-			BUG_ON(!bfqd->busy_in_flight_queues);
+-			bfqd->busy_in_flight_queues--;
+-			if (bfq_bfqq_constantly_seeky(bfqq)) {
+-				BUG_ON(!bfqd->
+-					const_seeky_busy_in_flight_queues);
+-				bfqd->const_seeky_busy_in_flight_queues--;
+-			}
+-		}
+ 	}
+ 
+-	if (sync) {
+-		bfqd->sync_flight--;
+-		RQ_BIC(rq)->ttime.last_end_request = jiffies;
+-	}
++	now_ns = ktime_get_ns();
++
++	RQ_BIC(rq)->ttime.last_end_request = now_ns;
++
++	/*
++	 * Using us instead of ns, to get a reasonable precision in
++	 * computing rate in next check.
++	 */
++	delta_us = div_u64(now_ns - bfqd->last_completion, NSEC_PER_USEC);
++
++	bfq_log(bfqd, "rq_completed: delta %uus/%luus max_size %u rate %llu/%llu",
++		delta_us, BFQ_MIN_TT/NSEC_PER_USEC, bfqd->last_rq_max_size,
++		(USEC_PER_SEC*
++		(u64)((bfqd->last_rq_max_size<<BFQ_RATE_SHIFT)/delta_us))
++			>>BFQ_RATE_SHIFT,
++		(USEC_PER_SEC*(u64)(1UL<<(BFQ_RATE_SHIFT-10)))>>BFQ_RATE_SHIFT);
++
++	/*
++	 * If the request took rather long to complete, and, according
++	 * to the maximum request size recorded, this completion latency
++	 * implies that the request was certainly served at a very low
++	 * rate (less than 1M sectors/sec), then the whole observation
++	 * interval that lasts up to this time instant cannot be a
++	 * valid time interval for computing a new peak rate.  Invoke
++	 * bfq_update_rate_reset to have the following three steps
++	 * taken:
++	 * - close the observation interval at the last (previous)
++	 *   request dispatch or completion
++	 * - compute rate, if possible, for that observation interval
++	 * - reset to zero samples, which will trigger a proper
++	 *   re-initialization of the observation interval on next
++	 *   dispatch
++	 */
++	if (delta_us > BFQ_MIN_TT/NSEC_PER_USEC &&
++	   (bfqd->last_rq_max_size<<BFQ_RATE_SHIFT)/delta_us <
++			1UL<<(BFQ_RATE_SHIFT - 10))
++		bfq_update_rate_reset(bfqd, NULL);
++	bfqd->last_completion = now_ns;
+ 
+ 	/*
+-	 * If we are waiting to discover whether the request pattern of the
+-	 * task associated with the queue is actually isochronous, and
+-	 * both requisites for this condition to hold are satisfied, then
+-	 * compute soft_rt_next_start (see the comments to the function
+-	 * bfq_bfqq_softrt_next_start()).
++	 * If we are waiting to discover whether the request pattern
++	 * of the task associated with the queue is actually
++	 * isochronous, and both requisites for this condition to hold
++	 * are now satisfied, then compute soft_rt_next_start (see the
++	 * comments on the function bfq_bfqq_softrt_next_start()). We
++	 * schedule this delayed check when bfqq expires, if it still
++	 * has in-flight requests.
+ 	 */
+ 	if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 &&
+ 	    RB_EMPTY_ROOT(&bfqq->sort_list))
+@@ -3613,10 +4390,7 @@ static void bfq_completed_request(struct request_queue *q, struct request *rq)
+ 	 * or if we want to idle in case it has no pending requests.
+ 	 */
+ 	if (bfqd->in_service_queue == bfqq) {
+-		if (bfq_bfqq_budget_new(bfqq))
+-			bfq_set_budget_timeout(bfqd);
+-
+-		if (bfq_bfqq_must_idle(bfqq)) {
++		if (bfqq->dispatched == 0 && bfq_bfqq_must_idle(bfqq)) {
+ 			bfq_arm_slice_timer(bfqd);
+ 			goto out;
+ 		} else if (bfq_may_expire_for_budg_timeout(bfqq))
+@@ -3646,7 +4420,7 @@ static int __bfq_may_queue(struct bfq_queue *bfqq)
+ 	return ELV_MQUEUE_MAY;
+ }
+ 
+-static int bfq_may_queue(struct request_queue *q, int rw)
++static int bfq_may_queue(struct request_queue *q, int op, int op_flags)
+ {
+ 	struct bfq_data *bfqd = q->elevator->elevator_data;
+ 	struct task_struct *tsk = current;
+@@ -3663,7 +4437,7 @@ static int bfq_may_queue(struct request_queue *q, int rw)
+ 	if (!bic)
+ 		return ELV_MQUEUE_MAY;
+ 
+-	bfqq = bic_to_bfqq(bic, rw_is_sync(rw));
++	bfqq = bic_to_bfqq(bic, rw_is_sync(op, op_flags));
+ 	if (bfqq)
+ 		return __bfq_may_queue(bfqq);
+ 
+@@ -3687,14 +4461,14 @@ static void bfq_put_request(struct request *rq)
+ 		rq->elv.priv[1] = NULL;
+ 
+ 		bfq_log_bfqq(bfqq->bfqd, bfqq, "put_request %p, %d",
+-			     bfqq, atomic_read(&bfqq->ref));
++			     bfqq, bfqq->ref);
+ 		bfq_put_queue(bfqq);
+ 	}
+ }
+ 
+ /*
+  * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
+- * was the last process referring to said bfqq.
++ * was the last process referring to that bfqq.
+  */
+ static struct bfq_queue *
+ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+@@ -3732,11 +4506,8 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ 	unsigned long flags;
+ 	bool split = false;
+ 
+-	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
+-
+-	bfq_check_ioprio_change(bic, bio);
+-
+ 	spin_lock_irqsave(q->queue_lock, flags);
++	bfq_check_ioprio_change(bic, bio);
+ 
+ 	if (!bic)
+ 		goto queue_fail;
+@@ -3746,23 +4517,47 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ new_queue:
+ 	bfqq = bic_to_bfqq(bic, is_sync);
+ 	if (!bfqq || bfqq == &bfqd->oom_bfqq) {
+-		bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, gfp_mask);
++		if (bfqq)
++			bfq_put_queue(bfqq);
++		bfqq = bfq_get_queue(bfqd, bio, is_sync, bic);
++		BUG_ON(!hlist_unhashed(&bfqq->burst_list_node));
++
+ 		bic_set_bfqq(bic, bfqq, is_sync);
+ 		if (split && is_sync) {
++			bfq_log_bfqq(bfqd, bfqq,
++				     "set_request: was_in_list %d "
++				     "was_in_large_burst %d "
++				     "large burst in progress %d",
++				     bic->was_in_burst_list,
++				     bic->saved_in_large_burst,
++				     bfqd->large_burst);
++
+ 			if ((bic->was_in_burst_list && bfqd->large_burst) ||
+-			    bic->saved_in_large_burst)
++			    bic->saved_in_large_burst) {
++				bfq_log_bfqq(bfqd, bfqq,
++					     "set_request: marking in "
++					     "large burst");
+ 				bfq_mark_bfqq_in_large_burst(bfqq);
+-			else {
++			} else {
++				bfq_log_bfqq(bfqd, bfqq,
++					     "set_request: clearing in "
++					     "large burst");
+ 				bfq_clear_bfqq_in_large_burst(bfqq);
+ 				if (bic->was_in_burst_list)
+ 					hlist_add_head(&bfqq->burst_list_node,
+ 						       &bfqd->burst_list);
+ 			}
++			bfqq->split_time = jiffies;
+ 		}
+ 	} else {
+ 		/* If the queue was seeky for too long, break it apart. */
+ 		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
+ 			bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
++
++			/* Update bic before losing reference to bfqq */
++			if (bfq_bfqq_in_large_burst(bfqq))
++				bic->saved_in_large_burst = true;
++
+ 			bfqq = bfq_split_bfqq(bic, bfqq);
+ 			split = true;
+ 			if (!bfqq)
+@@ -3771,9 +4566,8 @@ new_queue:
+ 	}
+ 
+ 	bfqq->allocated[rw]++;
+-	atomic_inc(&bfqq->ref);
+-	bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq,
+-		     atomic_read(&bfqq->ref));
++	bfqq->ref++;
++	bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq, bfqq->ref);
+ 
+ 	rq->elv.priv[0] = bic;
+ 	rq->elv.priv[1] = bfqq;
+@@ -3788,7 +4582,6 @@ new_queue:
+ 	if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) {
+ 		bfqq->bic = bic;
+ 		if (split) {
+-			bfq_mark_bfqq_just_split(bfqq);
+ 			/*
+ 			 * If the queue has just been split from a shared
+ 			 * queue, restore the idle window and the possible
+@@ -3798,6 +4591,9 @@ new_queue:
+ 		}
+ 	}
+ 
++	if (unlikely(bfq_bfqq_just_created(bfqq)))
++		bfq_handle_burst(bfqd, bfqq);
++
+ 	spin_unlock_irqrestore(q->queue_lock, flags);
+ 
+ 	return 0;
+@@ -3824,9 +4620,10 @@ static void bfq_kick_queue(struct work_struct *work)
+  * Handler of the expiration of the timer running if the in-service queue
+  * is idling inside its time slice.
+  */
+-static void bfq_idle_slice_timer(unsigned long data)
++static enum hrtimer_restart bfq_idle_slice_timer(struct hrtimer *timer)
+ {
+-	struct bfq_data *bfqd = (struct bfq_data *)data;
++	struct bfq_data *bfqd = container_of(timer, struct bfq_data,
++					     idle_slice_timer);
+ 	struct bfq_queue *bfqq;
+ 	unsigned long flags;
+ 	enum bfqq_expiration reason;
+@@ -3844,6 +4641,8 @@ static void bfq_idle_slice_timer(unsigned long data)
+ 	 */
+ 	if (bfqq) {
+ 		bfq_log_bfqq(bfqd, bfqq, "slice_timer expired");
++		bfq_clear_bfqq_wait_request(bfqq);
++
+ 		if (bfq_bfqq_budget_timeout(bfqq))
+ 			/*
+ 			 * Also here the queue can be safely expired
+@@ -3869,14 +4668,16 @@ schedule_dispatch:
+ 	bfq_schedule_dispatch(bfqd);
+ 
+ 	spin_unlock_irqrestore(bfqd->queue->queue_lock, flags);
++	return HRTIMER_NORESTART;
+ }
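The timer handler above switches from a data-pointer callback to the hrtimer convention: the timer is embedded in the per-device data, the handler recovers the owning structure with container_of() and returns HRTIMER_NORESTART. A minimal userspace sketch of that pattern (structure and field names are illustrative):

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

enum restart { NORESTART, RESTART };

struct timer { int armed; };
struct device_data {
        int busy_queues;
        struct timer idle_slice_timer;
};

static enum restart idle_slice_timer_fn(struct timer *t)
{
        struct device_data *d = container_of(t, struct device_data, idle_slice_timer);

        printf("timer fired, busy_queues=%d\n", d->busy_queues);
        return NORESTART;
}

int main(void)
{
        struct device_data d = { 3, { 0 } };

        idle_slice_timer_fn(&d.idle_slice_timer);
        return 0;
}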
+ 
+ static void bfq_shutdown_timer_wq(struct bfq_data *bfqd)
+ {
+-	del_timer_sync(&bfqd->idle_slice_timer);
++	hrtimer_cancel(&bfqd->idle_slice_timer);
+ 	cancel_work_sync(&bfqd->unplug_work);
+ }
+ 
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ static void __bfq_put_async_bfqq(struct bfq_data *bfqd,
+ 					struct bfq_queue **bfqq_ptr)
+ {
+@@ -3885,9 +4686,9 @@ static void __bfq_put_async_bfqq(struct bfq_data *bfqd,
+ 
+ 	bfq_log(bfqd, "put_async_bfqq: %p", bfqq);
+ 	if (bfqq) {
+-		bfq_bfqq_move(bfqd, bfqq, &bfqq->entity, root_group);
++		bfq_bfqq_move(bfqd, bfqq, root_group);
+ 		bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d",
+-			     bfqq, atomic_read(&bfqq->ref));
++			     bfqq, bfqq->ref);
+ 		bfq_put_queue(bfqq);
+ 		*bfqq_ptr = NULL;
+ 	}
+@@ -3909,6 +4710,7 @@ static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
+ 
+ 	__bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
+ }
++#endif
+ 
+ static void bfq_exit_queue(struct elevator_queue *e)
+ {
+@@ -3928,9 +4730,7 @@ static void bfq_exit_queue(struct elevator_queue *e)
+ 
+ 	bfq_shutdown_timer_wq(bfqd);
+ 
+-	synchronize_rcu();
+-
+-	BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++	BUG_ON(hrtimer_active(&bfqd->idle_slice_timer));
+ 
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 	blkcg_deactivate_policy(q, &blkcg_policy_bfq);
+@@ -3978,11 +4778,14 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ 	 * will not attempt to free it.
+ 	 */
+ 	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0);
+-	atomic_inc(&bfqd->oom_bfqq.ref);
++	bfqd->oom_bfqq.ref++;
+ 	bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
+ 	bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE;
+ 	bfqd->oom_bfqq.entity.new_weight =
+ 		bfq_ioprio_to_weight(bfqd->oom_bfqq.new_ioprio);
++
++	/* oom_bfqq does not participate in bursts */
++	bfq_clear_bfqq_just_created(&bfqd->oom_bfqq);
+ 	/*
+ 	 * Trigger weight initialization, according to ioprio, at the
+ 	 * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio
+@@ -4001,13 +4804,10 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ 		goto out_free;
+ 	bfq_init_root_group(bfqd->root_group, bfqd);
+ 	bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group);
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+-	bfqd->active_numerous_groups = 0;
+-#endif
+ 
+-	init_timer(&bfqd->idle_slice_timer);
++	hrtimer_init(&bfqd->idle_slice_timer, CLOCK_MONOTONIC,
++		     HRTIMER_MODE_REL);
+ 	bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
+-	bfqd->idle_slice_timer.data = (unsigned long)bfqd;
+ 
+ 	bfqd->queue_weights_tree = RB_ROOT;
+ 	bfqd->group_weights_tree = RB_ROOT;
+@@ -4028,20 +4828,19 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ 	bfqd->bfq_back_penalty = bfq_back_penalty;
+ 	bfqd->bfq_slice_idle = bfq_slice_idle;
+ 	bfqd->bfq_class_idle_last_service = 0;
+-	bfqd->bfq_max_budget_async_rq = bfq_max_budget_async_rq;
+-	bfqd->bfq_timeout[BLK_RW_ASYNC] = bfq_timeout_async;
+-	bfqd->bfq_timeout[BLK_RW_SYNC] = bfq_timeout_sync;
++	bfqd->bfq_timeout = bfq_timeout;
+ 
+-	bfqd->bfq_coop_thresh = 2;
+-	bfqd->bfq_failed_cooperations = 7000;
+ 	bfqd->bfq_requests_within_timer = 120;
+ 
+-	bfqd->bfq_large_burst_thresh = 11;
+-	bfqd->bfq_burst_interval = msecs_to_jiffies(500);
++	bfqd->bfq_large_burst_thresh = 8;
++	bfqd->bfq_burst_interval = msecs_to_jiffies(180);
+ 
+ 	bfqd->low_latency = true;
+ 
+-	bfqd->bfq_wr_coeff = 20;
++	/*
++	 * Trade-off between responsiveness and fairness.
++	 */
++	bfqd->bfq_wr_coeff = 30;
+ 	bfqd->bfq_wr_rt_max_time = msecs_to_jiffies(300);
+ 	bfqd->bfq_wr_max_time = 0;
+ 	bfqd->bfq_wr_min_idle_time = msecs_to_jiffies(2000);
+@@ -4053,16 +4852,15 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ 					      * video.
+ 					      */
+ 	bfqd->wr_busy_queues = 0;
+-	bfqd->busy_in_flight_queues = 0;
+-	bfqd->const_seeky_busy_in_flight_queues = 0;
+ 
+ 	/*
+-	 * Begin by assuming, optimistically, that the device peak rate is
+-	 * equal to the highest reference rate.
++	 * Begin by assuming, optimistically, that the device is a
++	 * high-speed one, and that its peak rate is equal to 2/3 of
++	 * the highest reference rate.
+ 	 */
+ 	bfqd->RT_prod = R_fast[blk_queue_nonrot(bfqd->queue)] *
+ 			T_fast[blk_queue_nonrot(bfqd->queue)];
+-	bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)];
++	bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)] * 2 / 3;
+ 	bfqd->device_speed = BFQ_BFQD_FAST;
+ 
+ 	return 0;
+@@ -4088,7 +4886,7 @@ static int __init bfq_slab_setup(void)
+ 
+ static ssize_t bfq_var_show(unsigned int var, char *page)
+ {
+-	return sprintf(page, "%d\n", var);
++	return sprintf(page, "%u\n", var);
+ }
+ 
+ static ssize_t bfq_var_store(unsigned long *var, const char *page,
+@@ -4159,21 +4957,21 @@ static ssize_t bfq_weights_show(struct elevator_queue *e, char *page)
+ static ssize_t __FUNC(struct elevator_queue *e, char *page)		\
+ {									\
+ 	struct bfq_data *bfqd = e->elevator_data;			\
+-	unsigned int __data = __VAR;					\
+-	if (__CONV)							\
++	u64 __data = __VAR;						\
++	if (__CONV == 1)						\
+ 		__data = jiffies_to_msecs(__data);			\
++	else if (__CONV == 2)						\
++		__data = div_u64(__data, NSEC_PER_MSEC);		\
+ 	return bfq_var_show(__data, (page));				\
+ }
+-SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 1);
+-SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 1);
++SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 2);
++SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 2);
+ SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0);
+ SHOW_FUNCTION(bfq_back_seek_penalty_show, bfqd->bfq_back_penalty, 0);
+-SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 1);
++SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 2);
+ SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0);
+-SHOW_FUNCTION(bfq_max_budget_async_rq_show,
+-	      bfqd->bfq_max_budget_async_rq, 0);
+-SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout[BLK_RW_SYNC], 1);
+-SHOW_FUNCTION(bfq_timeout_async_show, bfqd->bfq_timeout[BLK_RW_ASYNC], 1);
++SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout, 1);
++SHOW_FUNCTION(bfq_strict_guarantees_show, bfqd->strict_guarantees, 0);
+ SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0);
+ SHOW_FUNCTION(bfq_wr_coeff_show, bfqd->bfq_wr_coeff, 0);
+ SHOW_FUNCTION(bfq_wr_rt_max_time_show, bfqd->bfq_wr_rt_max_time, 1);
+@@ -4183,6 +4981,17 @@ SHOW_FUNCTION(bfq_wr_min_inter_arr_async_show, bfqd->bfq_wr_min_inter_arr_async,
+ SHOW_FUNCTION(bfq_wr_max_softrt_rate_show, bfqd->bfq_wr_max_softrt_rate, 0);
+ #undef SHOW_FUNCTION
+ 
++#define USEC_SHOW_FUNCTION(__FUNC, __VAR)				\
++static ssize_t __FUNC(struct elevator_queue *e, char *page)		\
++{									\
++	struct bfq_data *bfqd = e->elevator_data;			\
++	u64 __data = __VAR;						\
++	__data = div_u64(__data, NSEC_PER_USEC);			\
++	return bfq_var_show(__data, (page));				\
++}
++USEC_SHOW_FUNCTION(bfq_slice_idle_us_show, bfqd->bfq_slice_idle);
++#undef USEC_SHOW_FUNCTION
++
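
The reworked SHOW_FUNCTION above distinguishes three cases for __CONV (0: raw value, 1: jiffies converted to milliseconds, 2: nanoseconds converted to milliseconds), and USEC_SHOW_FUNCTION adds a microsecond view of the same nanosecond-resolution tunables. The standalone sketch below mirrors that arithmetic in plain userspace C; the HZ value and the sample tunable values are assumptions for illustration, not values taken from the patch.

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_MSEC 1000000ULL
#define NSEC_PER_USEC 1000ULL
#define HZ 250	/* assumed tick rate, only relevant to the jiffies case */

int main(void)
{
	uint64_t slice_idle_ns = 8ULL * NSEC_PER_MSEC;	/* sample ns-resolution tunable */
	unsigned long timeout_jiffies = 16;		/* sample jiffies-resolution tunable */

	/* __CONV == 2: nanoseconds shown as milliseconds (slice_idle) */
	printf("slice_idle    = %llu\n", (unsigned long long)(slice_idle_ns / NSEC_PER_MSEC));
	/* USEC_SHOW_FUNCTION: the same value shown as microseconds (slice_idle_us) */
	printf("slice_idle_us = %llu\n", (unsigned long long)(slice_idle_ns / NSEC_PER_USEC));
	/* __CONV == 1: jiffies shown as milliseconds (timeout_sync) */
	printf("timeout_sync  = %lu\n", timeout_jiffies * 1000UL / HZ);
	return 0;
}
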
+ #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
+ static ssize_t								\
+ __FUNC(struct elevator_queue *e, const char *page, size_t count)	\
+@@ -4194,24 +5003,22 @@ __FUNC(struct elevator_queue *e, const char *page, size_t count)	\
+ 		__data = (MIN);						\
+ 	else if (__data > (MAX))					\
+ 		__data = (MAX);						\
+-	if (__CONV)							\
++	if (__CONV == 1)						\
+ 		*(__PTR) = msecs_to_jiffies(__data);			\
++	else if (__CONV == 2)						\
++		*(__PTR) = (u64)__data * NSEC_PER_MSEC;			\
+ 	else								\
+ 		*(__PTR) = __data;					\
+ 	return ret;							\
+ }
+ STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1,
+-		INT_MAX, 1);
++		INT_MAX, 2);
+ STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1,
+-		INT_MAX, 1);
++		INT_MAX, 2);
+ STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0);
+ STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1,
+ 		INT_MAX, 0);
+-STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 1);
+-STORE_FUNCTION(bfq_max_budget_async_rq_store, &bfqd->bfq_max_budget_async_rq,
+-		1, INT_MAX, 0);
+-STORE_FUNCTION(bfq_timeout_async_store, &bfqd->bfq_timeout[BLK_RW_ASYNC], 0,
+-		INT_MAX, 1);
++STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 2);
+ STORE_FUNCTION(bfq_wr_coeff_store, &bfqd->bfq_wr_coeff, 1, INT_MAX, 0);
+ STORE_FUNCTION(bfq_wr_max_time_store, &bfqd->bfq_wr_max_time, 0, INT_MAX, 1);
+ STORE_FUNCTION(bfq_wr_rt_max_time_store, &bfqd->bfq_wr_rt_max_time, 0, INT_MAX,
+@@ -4224,6 +5031,23 @@ STORE_FUNCTION(bfq_wr_max_softrt_rate_store, &bfqd->bfq_wr_max_softrt_rate, 0,
+ 		INT_MAX, 0);
+ #undef STORE_FUNCTION
+ 
++#define USEC_STORE_FUNCTION(__FUNC, __PTR, MIN, MAX)			\
++static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\
++{									\
++	struct bfq_data *bfqd = e->elevator_data;			\
++	unsigned long __data;						\
++	int ret = bfq_var_store(&__data, (page), count);		\
++	if (__data < (MIN))						\
++		__data = (MIN);						\
++	else if (__data > (MAX))					\
++		__data = (MAX);						\
++	*(__PTR) = (u64)__data * NSEC_PER_USEC;				\
++	return ret;							\
++}
++USEC_STORE_FUNCTION(bfq_slice_idle_us_store, &bfqd->bfq_slice_idle, 0,
++		    UINT_MAX);
++#undef USEC_STORE_FUNCTION
++
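
On the store side, STORE_FUNCTION clamps the parsed value to [MIN, MAX] and then either stores it raw (__CONV == 0), converts milliseconds to jiffies (__CONV == 1) or milliseconds to nanoseconds (__CONV == 2), while USEC_STORE_FUNCTION scales microseconds to nanoseconds. A minimal userspace sketch of the clamp-and-scale step, with hypothetical input values:

#include <limits.h>
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_MSEC 1000000ULL
#define NSEC_PER_USEC 1000ULL

/* clamp to [min, max], then scale to nanoseconds, as the store macros do */
static uint64_t store_ns(unsigned long val, unsigned long min, unsigned long max,
			 uint64_t ns_per_unit)
{
	if (val < min)
		val = min;
	else if (val > max)
		val = max;
	return (uint64_t)val * ns_per_unit;
}

int main(void)
{
	/* e.g. "echo 12 > slice_idle" (ms) vs "echo 12000 > slice_idle_us" */
	printf("%llu\n", (unsigned long long)store_ns(12, 0, INT_MAX, NSEC_PER_MSEC));
	printf("%llu\n", (unsigned long long)store_ns(12000, 0, UINT_MAX, NSEC_PER_USEC));
	return 0;
}

Both calls end up storing the same 12000000 ns, which is the point of exposing the tunable twice.
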
+ /* do nothing for the moment */
+ static ssize_t bfq_weights_store(struct elevator_queue *e,
+ 				    const char *page, size_t count)
+@@ -4231,16 +5055,6 @@ static ssize_t bfq_weights_store(struct elevator_queue *e,
+ 	return count;
+ }
+ 
+-static unsigned long bfq_estimated_max_budget(struct bfq_data *bfqd)
+-{
+-	u64 timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
+-
+-	if (bfqd->peak_rate_samples >= BFQ_PEAK_RATE_SAMPLES)
+-		return bfq_calc_max_budget(bfqd->peak_rate, timeout);
+-	else
+-		return bfq_default_max_budget;
+-}
+-
+ static ssize_t bfq_max_budget_store(struct elevator_queue *e,
+ 				    const char *page, size_t count)
+ {
+@@ -4249,7 +5063,7 @@ static ssize_t bfq_max_budget_store(struct elevator_queue *e,
+ 	int ret = bfq_var_store(&__data, (page), count);
+ 
+ 	if (__data == 0)
+-		bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++		bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd);
+ 	else {
+ 		if (__data > INT_MAX)
+ 			__data = INT_MAX;
+@@ -4261,6 +5075,10 @@ static ssize_t bfq_max_budget_store(struct elevator_queue *e,
+ 	return ret;
+ }
+ 
++/*
++ * Leaving this name to preserve compatibility with the cfq
++ * parameter names, but this timeout is used for both sync and async.
++ */
+ static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
+ 				      const char *page, size_t count)
+ {
+@@ -4273,9 +5091,27 @@ static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
+ 	else if (__data > INT_MAX)
+ 		__data = INT_MAX;
+ 
+-	bfqd->bfq_timeout[BLK_RW_SYNC] = msecs_to_jiffies(__data);
++	bfqd->bfq_timeout = msecs_to_jiffies(__data);
+ 	if (bfqd->bfq_user_max_budget == 0)
+-		bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++		bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd);
++
++	return ret;
++}
++
++static ssize_t bfq_strict_guarantees_store(struct elevator_queue *e,
++				     const char *page, size_t count)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	unsigned long uninitialized_var(__data);
++	int ret = bfq_var_store(&__data, (page), count);
++
++	if (__data > 1)
++		__data = 1;
++	if (!bfqd->strict_guarantees && __data == 1
++	    && bfqd->bfq_slice_idle < 8 * NSEC_PER_MSEC)
++		bfqd->bfq_slice_idle = 8 * NSEC_PER_MSEC;
++
++	bfqd->strict_guarantees = __data;
+ 
+ 	return ret;
+ }
+@@ -4305,10 +5141,10 @@ static struct elv_fs_entry bfq_attrs[] = {
+ 	BFQ_ATTR(back_seek_max),
+ 	BFQ_ATTR(back_seek_penalty),
+ 	BFQ_ATTR(slice_idle),
++	BFQ_ATTR(slice_idle_us),
+ 	BFQ_ATTR(max_budget),
+-	BFQ_ATTR(max_budget_async_rq),
+ 	BFQ_ATTR(timeout_sync),
+-	BFQ_ATTR(timeout_async),
++	BFQ_ATTR(strict_guarantees),
+ 	BFQ_ATTR(low_latency),
+ 	BFQ_ATTR(wr_coeff),
+ 	BFQ_ATTR(wr_max_time),
+@@ -4328,7 +5164,8 @@ static struct elevator_type iosched_bfq = {
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 		.elevator_bio_merged_fn =	bfq_bio_merged,
+ #endif
+-		.elevator_allow_merge_fn =	bfq_allow_merge,
++		.elevator_allow_bio_merge_fn =	bfq_allow_bio_merge,
++		.elevator_allow_rq_merge_fn =	bfq_allow_rq_merge,
+ 		.elevator_dispatch_fn =		bfq_dispatch_requests,
+ 		.elevator_add_req_fn =		bfq_insert_request,
+ 		.elevator_activate_req_fn =	bfq_activate_request,
+@@ -4351,18 +5188,28 @@ static struct elevator_type iosched_bfq = {
+ 	.elevator_owner =	THIS_MODULE,
+ };
+ 
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static struct blkcg_policy blkcg_policy_bfq = {
++	.dfl_cftypes		= bfq_blkg_files,
++	.legacy_cftypes		= bfq_blkcg_legacy_files,
++
++	.cpd_alloc_fn		= bfq_cpd_alloc,
++	.cpd_init_fn		= bfq_cpd_init,
++	.cpd_bind_fn	        = bfq_cpd_init,
++	.cpd_free_fn		= bfq_cpd_free,
++
++	.pd_alloc_fn		= bfq_pd_alloc,
++	.pd_init_fn		= bfq_pd_init,
++	.pd_offline_fn		= bfq_pd_offline,
++	.pd_free_fn		= bfq_pd_free,
++	.pd_reset_stats_fn	= bfq_pd_reset_stats,
++};
++#endif
++
+ static int __init bfq_init(void)
+ {
+ 	int ret;
+-
+-	/*
+-	 * Can be 0 on HZ < 1000 setups.
+-	 */
+-	if (bfq_slice_idle == 0)
+-		bfq_slice_idle = 1;
+-
+-	if (bfq_timeout_async == 0)
+-		bfq_timeout_async = 1;
++	char msg[50] = "BFQ I/O-scheduler: v8r4";
+ 
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 	ret = blkcg_policy_register(&blkcg_policy_bfq);
+@@ -4375,27 +5222,46 @@ static int __init bfq_init(void)
+ 		goto err_pol_unreg;
+ 
+ 	/*
+-	 * Times to load large popular applications for the typical systems
+-	 * installed on the reference devices (see the comments before the
+-	 * definitions of the two arrays).
++	 * Times to load large popular applications for the typical
++	 * systems installed on the reference devices (see the
++	 * comments before the definitions of the next two
++	 * arrays). Actually, we use slightly slower values, as the
++	 * estimated peak rate tends to be smaller than the actual
++	 * peak rate.  The reason for this last fact is that estimates
++	 * are computed over much shorter time intervals than the long
++	 * intervals typically used for benchmarking. Why? First, to
++	 * adapt more quickly to variations. Second, because an I/O
++	 * scheduler cannot rely on a peak-rate-evaluation workload to
++	 * be run for a long time.
+ 	 */
+-	T_slow[0] = msecs_to_jiffies(2600);
+-	T_slow[1] = msecs_to_jiffies(1000);
+-	T_fast[0] = msecs_to_jiffies(5500);
+-	T_fast[1] = msecs_to_jiffies(2000);
++	T_slow[0] = msecs_to_jiffies(3500); /* actually 4 sec */
++	T_slow[1] = msecs_to_jiffies(1000); /* actually 1.5 sec */
++	T_fast[0] = msecs_to_jiffies(7000); /* actually 8 sec */
++	T_fast[1] = msecs_to_jiffies(2500); /* actually 3 sec */
+ 
+ 	/*
+-	 * Thresholds that determine the switch between speed classes (see
+-	 * the comments before the definition of the array).
++	 * Thresholds that determine the switch between speed classes
++	 * (see the comments before the definition of the array
++	 * device_speed_thresh). These thresholds are biased towards
++	 * transitions to the fast class. This is safer than the
++	 * opposite bias. In fact, a wrong transition to the slow
++	 * class results in short weight-raising periods, because the
++ * speed of the device then tends to be higher than the
++	 * reference peak rate. On the opposite end, a wrong
++	 * transition to the fast class tends to increase
++	 * weight-raising periods, because of the opposite reason.
+ 	 */
+-	device_speed_thresh[0] = (R_fast[0] + R_slow[0]) / 2;
+-	device_speed_thresh[1] = (R_fast[1] + R_slow[1]) / 2;
++	device_speed_thresh[0] = (4 * R_slow[0]) / 3;
++	device_speed_thresh[1] = (4 * R_slow[1]) / 3;
+ 
+ 	ret = elv_register(&iosched_bfq);
+ 	if (ret)
+ 		goto err_pol_unreg;
+ 
+-	pr_info("BFQ I/O-scheduler: v7r11");
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	strcat(msg, " (with cgroups support)");
++#endif
++	pr_info("%s", msg);
+ 
+ 	return 0;
+ 
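
The initialization above replaces the old midpoint rule for device_speed_thresh with a threshold biased towards the fast class (4/3 of the slow reference rate), and starts the peak-rate estimate at 2/3 of the fast reference rate rather than at the full value. A small sketch of that arithmetic; the sample rates are hypothetical, the real R_slow[] and R_fast[] arrays are defined earlier in the patch:

#include <stdio.h>

int main(void)
{
	/* hypothetical reference rates, indexed like R_*[blk_queue_nonrot(q)] */
	unsigned long r_slow = 1000, r_fast = 8000;

	unsigned long thresh_old = (r_fast + r_slow) / 2;	/* previous midpoint rule */
	unsigned long thresh_new = (4 * r_slow) / 3;		/* biased towards "fast" */
	unsigned long peak_rate_init = r_fast * 2 / 3;		/* optimistic initial estimate */

	printf("old thresh %lu, new thresh %lu, initial peak rate %lu\n",
	       thresh_old, thresh_new, peak_rate_init);
	return 0;
}
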
+diff --git a/block/bfq-sched.c b/block/bfq-sched.c
+index a5ed694..45d63d3 100644
+--- a/block/bfq-sched.c
++++ b/block/bfq-sched.c
+@@ -7,9 +7,13 @@
+  * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+  *		      Paolo Valente <paolo.valente@unimore.it>
+  *
+- * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ * Copyright (C) 2015 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2016 Paolo Valente <paolo.valente@linaro.org>
+  */
+ 
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
++
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ #define for_each_entity(entity)	\
+ 	for (; entity ; entity = entity->parent)
+@@ -22,8 +26,6 @@ static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
+ 						 int extract,
+ 						 struct bfq_data *bfqd);
+ 
+-static struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
+-
+ static void bfq_update_budget(struct bfq_entity *next_in_service)
+ {
+ 	struct bfq_entity *bfqg_entity;
+@@ -48,6 +50,7 @@ static void bfq_update_budget(struct bfq_entity *next_in_service)
+ static int bfq_update_next_in_service(struct bfq_sched_data *sd)
+ {
+ 	struct bfq_entity *next_in_service;
++	struct bfq_queue *bfqq;
+ 
+ 	if (sd->in_service_entity)
+ 		/* will update/requeue at the end of service */
+@@ -65,14 +68,29 @@ static int bfq_update_next_in_service(struct bfq_sched_data *sd)
+ 
+ 	if (next_in_service)
+ 		bfq_update_budget(next_in_service);
++	else
++		goto exit;
++
++	bfqq = bfq_entity_to_bfqq(next_in_service);
++	if (bfqq)
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			     "update_next_in_service: chosen this queue");
++	else {
++		struct bfq_group *bfqg =
++			container_of(next_in_service,
++				     struct bfq_group, entity);
+ 
++		bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++			     "update_next_in_service: chosen this entity");
++	}
++exit:
+ 	return 1;
+ }
+ 
+ static void bfq_check_next_in_service(struct bfq_sched_data *sd,
+ 				      struct bfq_entity *entity)
+ {
+-	BUG_ON(sd->next_in_service != entity);
++	WARN_ON(sd->next_in_service != entity);
+ }
+ #else
+ #define for_each_entity(entity)	\
+@@ -151,20 +169,36 @@ static u64 bfq_delta(unsigned long service, unsigned long weight)
+ static void bfq_calc_finish(struct bfq_entity *entity, unsigned long service)
+ {
+ 	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	unsigned long long start, finish, delta;
+ 
+ 	BUG_ON(entity->weight == 0);
+ 
+ 	entity->finish = entity->start +
+ 		bfq_delta(service, entity->weight);
+ 
++	start = ((entity->start>>10)*1000)>>12;
++	finish = ((entity->finish>>10)*1000)>>12;
++	delta = ((bfq_delta(service, entity->weight)>>10)*1000)>>12;
++
+ 	if (bfqq) {
+ 		bfq_log_bfqq(bfqq->bfqd, bfqq,
+ 			"calc_finish: serv %lu, w %d",
+ 			service, entity->weight);
+ 		bfq_log_bfqq(bfqq->bfqd, bfqq,
+ 			"calc_finish: start %llu, finish %llu, delta %llu",
+-			entity->start, entity->finish,
+-			bfq_delta(service, entity->weight));
++			start, finish, delta);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	} else {
++		struct bfq_group *bfqg =
++			container_of(entity, struct bfq_group, entity);
++
++		bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++			"calc_finish group: serv %lu, w %d",
++			     service, entity->weight);
++		bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++			"calc_finish group: start %llu, finish %llu, delta %llu",
++			start, finish, delta);
++#endif
+ 	}
+ }
+ 
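
bfq_calc_finish() computes finish = start + bfq_delta(service, weight) on fixed-point virtual timestamps, and the new logging lines rescale those timestamps with ((x>>10)*1000)>>12 before printing. That expression is simply x * 1000 / 2^22 up to rounding, which suggests the timestamps carry a 22-bit fractional scaling; the scaling factor is inferred from the arithmetic here, not stated in this hunk. A quick check in plain C:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t finish = 123456789ULL;	/* arbitrary fixed-point timestamp */

	/* the logging expression used in the patch ... */
	uint64_t logged = ((finish >> 10) * 1000) >> 12;
	/* ... agrees with x * 1000 / 2^22 up to rounding of the intermediate shifts */
	uint64_t direct = finish * 1000 / (1ULL << 22);

	printf("logged %llu, direct %llu\n",
	       (unsigned long long)logged, (unsigned long long)direct);
	return 0;
}
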
+@@ -293,10 +327,26 @@ static void bfq_update_min(struct bfq_entity *entity, struct rb_node *node)
+ static void bfq_update_active_node(struct rb_node *node)
+ {
+ 	struct bfq_entity *entity = rb_entry(node, struct bfq_entity, rb_node);
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
+ 
+ 	entity->min_start = entity->start;
+ 	bfq_update_min(entity, node->rb_right);
+ 	bfq_update_min(entity, node->rb_left);
++
++	if (bfqq) {
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			     "update_active_node: new min_start %llu",
++			     ((entity->min_start>>10)*1000)>>12);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	} else {
++		struct bfq_group *bfqg =
++			container_of(entity, struct bfq_group, entity);
++
++		bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++			     "update_active_node: new min_start %llu",
++			     ((entity->min_start>>10)*1000)>>12);
++#endif
++	}
+ }
+ 
+ /**
+@@ -386,8 +436,6 @@ static void bfq_active_insert(struct bfq_service_tree *st,
+ 		BUG_ON(!bfqg);
+ 		BUG_ON(!bfqd);
+ 		bfqg->active_entities++;
+-		if (bfqg->active_entities == 2)
+-			bfqd->active_numerous_groups++;
+ 	}
+ #endif
+ }
+@@ -399,7 +447,7 @@ static void bfq_active_insert(struct bfq_service_tree *st,
+ static unsigned short bfq_ioprio_to_weight(int ioprio)
+ {
+ 	BUG_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
+-	return IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - ioprio;
++	return (IOPRIO_BE_NR - ioprio) * BFQ_WEIGHT_CONVERSION_COEFF;
+ }
+ 
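
The change to bfq_ioprio_to_weight() is small but affects the whole weight scale: the old formula mapped ioprio 0..7 to a narrow 80..73 band, while the new one spreads the same range over 80..10, so ioprio differences now translate into proportionally different service shares. The table below uses IOPRIO_BE_NR = 8 and BFQ_WEIGHT_CONVERSION_COEFF = 10; both values are assumptions taken from the usual definitions, not shown in this hunk.

#include <stdio.h>

#define IOPRIO_BE_NR			8	/* assumed, as in include/linux/ioprio.h */
#define BFQ_WEIGHT_CONVERSION_COEFF	10	/* assumed, as defined in bfq.h */

int main(void)
{
	for (int ioprio = 0; ioprio < IOPRIO_BE_NR; ioprio++) {
		int old_w = IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - ioprio;
		int new_w = (IOPRIO_BE_NR - ioprio) * BFQ_WEIGHT_CONVERSION_COEFF;

		printf("ioprio %d: old weight %2d, new weight %2d\n",
		       ioprio, old_w, new_w);
	}
	return 0;
}
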
+ /**
+@@ -422,9 +470,9 @@ static void bfq_get_entity(struct bfq_entity *entity)
+ 	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
+ 
+ 	if (bfqq) {
+-		atomic_inc(&bfqq->ref);
++		bfqq->ref++;
+ 		bfq_log_bfqq(bfqq->bfqd, bfqq, "get_entity: %p %d",
+-			     bfqq, atomic_read(&bfqq->ref));
++			     bfqq, bfqq->ref);
+ 	}
+ }
+ 
+@@ -499,10 +547,6 @@ static void bfq_active_extract(struct bfq_service_tree *st,
+ 		BUG_ON(!bfqd);
+ 		BUG_ON(!bfqg->active_entities);
+ 		bfqg->active_entities--;
+-		if (bfqg->active_entities == 1) {
+-			BUG_ON(!bfqd->active_numerous_groups);
+-			bfqd->active_numerous_groups--;
+-		}
+ 	}
+ #endif
+ }
+@@ -552,7 +596,7 @@ static void bfq_forget_entity(struct bfq_service_tree *st,
+ 	if (bfqq) {
+ 		sd = entity->sched_data;
+ 		bfq_log_bfqq(bfqq->bfqd, bfqq, "forget_entity: %p %d",
+-			     bfqq, atomic_read(&bfqq->ref));
++			     bfqq, bfqq->ref);
+ 		bfq_put_queue(bfqq);
+ 	}
+ }
+@@ -602,7 +646,7 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
+ 
+ 	if (entity->prio_changed) {
+ 		struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
+-		unsigned short prev_weight, new_weight;
++		unsigned int prev_weight, new_weight;
+ 		struct bfq_data *bfqd = NULL;
+ 		struct rb_root *root;
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+@@ -630,7 +674,10 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
+ 			    entity->new_weight > BFQ_MAX_WEIGHT) {
+ 				pr_crit("update_weight_prio: new_weight %d\n",
+ 					entity->new_weight);
+-				BUG();
++				if (entity->new_weight < BFQ_MIN_WEIGHT)
++					entity->new_weight = BFQ_MIN_WEIGHT;
++				else
++					entity->new_weight = BFQ_MAX_WEIGHT;
+ 			}
+ 			entity->orig_weight = entity->new_weight;
+ 			if (bfqq)
+@@ -661,6 +708,13 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
+ 		 * associated with its new weight.
+ 		 */
+ 		if (prev_weight != new_weight) {
++			if (bfqq)
++				bfq_log_bfqq(bfqq->bfqd, bfqq,
++					     "weight changed %d %d(%d %d)",
++					     prev_weight, new_weight,
++					     entity->orig_weight,
++					     bfqq->wr_coeff);
++
+ 			root = bfqq ? &bfqd->queue_weights_tree :
+ 				      &bfqd->group_weights_tree;
+ 			bfq_weights_tree_remove(bfqd, entity, root);
+@@ -707,7 +761,7 @@ static void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
+ 		st = bfq_entity_service_tree(entity);
+ 
+ 		entity->service += served;
+-		BUG_ON(entity->service > entity->budget);
++
+ 		BUG_ON(st->wsum == 0);
+ 
+ 		st->vtime += bfq_delta(served, st->wsum);
+@@ -716,31 +770,69 @@ static void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 	bfqg_stats_set_start_empty_time(bfqq_group(bfqq));
+ #endif
+-	bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %d secs", served);
++	st = bfq_entity_service_tree(&bfqq->entity);
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %d secs, vtime %llu on %p",
++		     served,  ((st->vtime>>10)*1000)>>12, st);
+ }
+ 
+ /**
+- * bfq_bfqq_charge_full_budget - set the service to the entity budget.
++ * bfq_bfqq_charge_time - charge an amount of service equivalent to the length
++ *			  of the time interval during which bfqq has been in
++ *			  service.
++ * @bfqd: the device
+  * @bfqq: the queue that needs a service update.
++ * @time_ms: the amount of time during which the queue has received service
+  *
+- * When it's not possible to be fair in the service domain, because
+- * a queue is not consuming its budget fast enough (the meaning of
+- * fast depends on the timeout parameter), we charge it a full
+- * budget.  In this way we should obtain a sort of time-domain
+- * fairness among all the seeky/slow queues.
++ * If a queue does not consume its budget fast enough, then providing
++ * the queue with service fairness may impair throughput, more or less
++ * severely. For this reason, queues that consume their budget slowly
++ * are provided with time fairness instead of service fairness. This
++ * goal is achieved through the BFQ scheduling engine, even if such an
++ * engine works in the service, and not in the time domain. The trick
++ * is charging these queues with an inflated amount of service, equal
++ * to the amount of service that they would have received during their
++ * service slot if they had been fast, i.e., if their requests had
++ * been dispatched at a rate equal to the estimated peak rate.
++ *
++ * It is worth noting that time fairness can cause important
++ * distortions in terms of bandwidth distribution, on devices with
++ * internal queueing. The reason is that I/O requests dispatched
++ * during the service slot of a queue may be served after that service
++ * slot is finished, and may have a total processing time loosely
++ * correlated with the duration of the service slot. This is
++ * especially true for short service slots.
+  */
+-static void bfq_bfqq_charge_full_budget(struct bfq_queue *bfqq)
++static void bfq_bfqq_charge_time(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++				 unsigned long time_ms)
+ {
+ 	struct bfq_entity *entity = &bfqq->entity;
++	int tot_serv_to_charge = entity->service;
++	unsigned int timeout_ms = jiffies_to_msecs(bfq_timeout);
++
++	if (time_ms > 0 && time_ms < timeout_ms)
++		tot_serv_to_charge =
++			(bfqd->bfq_max_budget * time_ms) / timeout_ms;
+ 
+-	bfq_log_bfqq(bfqq->bfqd, bfqq, "charge_full_budget");
++	if (tot_serv_to_charge < entity->service)
++		tot_serv_to_charge = entity->service;
+ 
+-	bfq_bfqq_served(bfqq, entity->budget - entity->service);
++	bfq_log_bfqq(bfqq->bfqd, bfqq,
++		     "charge_time: %lu/%u ms, %d/%d/%d sectors",
++		     time_ms, timeout_ms, entity->service,
++		     tot_serv_to_charge, entity->budget);
++
++	/* Increase budget to avoid inconsistencies */
++	if (tot_serv_to_charge > entity->budget)
++		entity->budget = tot_serv_to_charge;
++
++	bfq_bfqq_served(bfqq,
++			max_t(int, 0, tot_serv_to_charge - entity->service));
+ }
+ 
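
bfq_bfqq_charge_time() converts time-domain fairness back into the service domain: a slow queue that held the device for time_ms out of a timeout_ms slot is charged the fraction of the maximum budget it would have consumed at the estimated peak rate, never less than the service it actually received, with the budget inflated if needed. A worked example of the same arithmetic with made-up numbers:

#include <stdio.h>

int main(void)
{
	int max_budget = 16384;		/* device-wide max budget in sectors (hypothetical) */
	unsigned long timeout_ms = 125;	/* bfq_timeout expressed in ms (hypothetical) */
	unsigned long time_ms = 40;	/* time the queue actually spent in service */
	int service = 512;		/* sectors actually transferred */
	int budget = 4096;

	int tot_serv_to_charge = service;

	if (time_ms > 0 && time_ms < timeout_ms)
		tot_serv_to_charge = (max_budget * time_ms) / timeout_ms;
	if (tot_serv_to_charge < service)
		tot_serv_to_charge = service;
	if (tot_serv_to_charge > budget)
		budget = tot_serv_to_charge;	/* increase budget to avoid inconsistencies */

	/* charges 5242 sectors even though only 512 were actually served */
	printf("charge %d sectors (real service %d, budget now %d)\n",
	       tot_serv_to_charge, service, budget);
	return 0;
}
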
+ /**
+  * __bfq_activate_entity - activate an entity.
+  * @entity: the entity being activated.
++ * @non_blocking_wait_rq: true if this entity was waiting for a request
+  *
+  * Called whenever an entity is activated, i.e., it is not active and one
+  * of its children receives a new request, or has to be reactivated due to
+@@ -748,11 +840,16 @@ static void bfq_bfqq_charge_full_budget(struct bfq_queue *bfqq)
+  * service received if @entity is active) of the queue to calculate its
+  * timestamps.
+  */
+-static void __bfq_activate_entity(struct bfq_entity *entity)
++static void __bfq_activate_entity(struct bfq_entity *entity,
++				  bool non_blocking_wait_rq)
+ {
+ 	struct bfq_sched_data *sd = entity->sched_data;
+ 	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	bool backshifted = false;
+ 
++	BUG_ON(!sd);
++	BUG_ON(!st);
+ 	if (entity == sd->in_service_entity) {
+ 		BUG_ON(entity->tree);
+ 		/*
+@@ -770,45 +867,133 @@ static void __bfq_activate_entity(struct bfq_entity *entity)
+ 		 * old start time.
+ 		 */
+ 		bfq_active_extract(st, entity);
+-	} else if (entity->tree == &st->idle) {
+-		/*
+-		 * Must be on the idle tree, bfq_idle_extract() will
+-		 * check for that.
+-		 */
+-		bfq_idle_extract(st, entity);
+-		entity->start = bfq_gt(st->vtime, entity->finish) ?
+-				       st->vtime : entity->finish;
+ 	} else {
+-		/*
+-		 * The finish time of the entity may be invalid, and
+-		 * it is in the past for sure, otherwise the queue
+-		 * would have been on the idle tree.
+-		 */
+-		entity->start = st->vtime;
+-		st->wsum += entity->weight;
+-		bfq_get_entity(entity);
++		unsigned long long min_vstart;
++
++		/* See comments on bfq_bfqq_update_budg_for_activation */
++		if (non_blocking_wait_rq && bfq_gt(st->vtime, entity->finish)) {
++			backshifted = true;
++			min_vstart = entity->finish;
++		} else
++			min_vstart = st->vtime;
++
++		if (entity->tree == &st->idle) {
++			/*
++			 * Must be on the idle tree, bfq_idle_extract() will
++			 * check for that.
++			 */
++			bfq_idle_extract(st, entity);
++			entity->start = bfq_gt(min_vstart, entity->finish) ?
++				min_vstart : entity->finish;
++		} else {
++			/*
++			 * The finish time of the entity may be invalid, and
++			 * it is in the past for sure, otherwise the queue
++			 * would have been on the idle tree.
++			 */
++			entity->start = min_vstart;
++			st->wsum += entity->weight;
++			bfq_get_entity(entity);
+ 
+-		BUG_ON(entity->on_st);
+-		entity->on_st = 1;
++			BUG_ON(entity->on_st);
++			entity->on_st = 1;
++		}
+ 	}
+ 
+ 	st = __bfq_entity_update_weight_prio(st, entity);
+ 	bfq_calc_finish(entity, entity->budget);
++
++	/*
++	 * If some queues enjoy backshifting for a while, then their
++	 * (virtual) finish timestamps may happen to become lower and
++	 * lower than the system virtual time.  In particular, if
++	 * these queues often happen to be idle for short time
++	 * periods, and during such time periods other queues with
++	 * higher timestamps happen to be busy, then the backshifted
++	 * timestamps of the former queues can become much lower than
++	 * the system virtual time. In fact, to serve the queues with
++	 * higher timestamps while the ones with lower timestamps are
++	 * idle, the system virtual time may be pushed-up to much
++	 * higher values than the finish timestamps of the idle
++	 * queues. As a consequence, the finish timestamps of all new
++	 * or newly activated queues may end up being much larger than
++	 * those of lucky queues with backshifted timestamps. The
++	 * latter queues may then monopolize the device for a lot of
++	 * time. This would simply break service guarantees.
++	 *
++	 * To reduce this problem, push up a little bit the
++	 * backshifted timestamps of the queue associated with this
++	 * entity (only a queue can happen to have the backshifted
++	 * flag set): just enough to let the finish timestamp of the
++	 * queue be equal to the current value of the system virtual
++	 * time. This may introduce a little unfairness among queues
++	 * with backshifted timestamps, but it does not break
++	 * worst-case fairness guarantees.
++	 *
++	 * As a special case, if bfqq is weight-raised, push up
++	 * timestamps much less, to keep very low the probability that
++	 * this push up causes the backshifted finish timestamps of
++	 * weight-raised queues to become higher than the backshifted
++	 * finish timestamps of non weight-raised queues.
++	 */
++	if (backshifted && bfq_gt(st->vtime, entity->finish)) {
++		unsigned long delta = st->vtime - entity->finish;
++
++		if (bfqq)
++			delta /= bfqq->wr_coeff;
++
++		entity->start += delta;
++		entity->finish += delta;
++
++		if (bfqq) {
++			bfq_log_bfqq(bfqq->bfqd, bfqq,
++				     "__activate_entity: new queue finish %llu",
++				     ((entity->finish>>10)*1000)>>12);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		} else {
++			struct bfq_group *bfqg =
++				container_of(entity, struct bfq_group, entity);
++
++			bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++				     "__activate_entity: new group finish %llu",
++				     ((entity->finish>>10)*1000)>>12);
++#endif
++		}
++	}
++
+ 	bfq_active_insert(st, entity);
++
++	if (bfqq) {
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			"__activate_entity: queue %seligible in st %p",
++			     entity->start <= st->vtime ? "" : "non ", st);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	} else {
++		struct bfq_group *bfqg =
++			container_of(entity, struct bfq_group, entity);
++
++		bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++			"__activate_entity: group %seligible in st %p",
++			     entity->start <= st->vtime ? "" : "non ", st);
++#endif
++	}
+ }
+ 
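
The long comment above describes the push-up applied to backshifted timestamps: if the queue's finish time has fallen behind the system virtual time, both timestamps are shifted forward by the gap, and for weight-raised queues the shift is divided by wr_coeff so that most of their advantage is preserved. A toy example with arbitrary timestamp values (wr_coeff = 30 matches the default set earlier in this patch):

#include <stdio.h>

int main(void)
{
	unsigned long long vtime = 10000, start = 6000, finish = 7000;
	unsigned int wr_coeff = 30;	/* weight-raised queue */

	if (vtime > finish) {
		unsigned long long delta = vtime - finish;

		delta /= wr_coeff;	/* push up much less if weight-raised */
		start += delta;
		finish += delta;
	}
	/* finish stays well below vtime: 7100 vs 10000 for these numbers */
	printf("start %llu, finish %llu, vtime %llu\n", start, finish, vtime);
	return 0;
}
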
+ /**
+  * bfq_activate_entity - activate an entity and its ancestors if necessary.
+  * @entity: the entity to activate.
++ * @non_blocking_wait_rq: true if this entity was waiting for a request
+  *
+  * Activate @entity and all the entities on the path from it to the root.
+  */
+-static void bfq_activate_entity(struct bfq_entity *entity)
++static void bfq_activate_entity(struct bfq_entity *entity,
++				bool non_blocking_wait_rq)
+ {
+ 	struct bfq_sched_data *sd;
+ 
+ 	for_each_entity(entity) {
+-		__bfq_activate_entity(entity);
++		BUG_ON(!entity);
++		__bfq_activate_entity(entity, non_blocking_wait_rq);
+ 
+ 		sd = entity->sched_data;
+ 		if (!bfq_update_next_in_service(sd))
+@@ -889,23 +1074,24 @@ static void bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
+ 
+ 		if (!__bfq_deactivate_entity(entity, requeue))
+ 			/*
+-			 * The parent entity is still backlogged, and
+-			 * we don't need to update it as it is still
+-			 * in service.
++			 * next_in_service has not been changed, so
++			 * no upwards update is needed
+ 			 */
+ 			break;
+ 
+ 		if (sd->next_in_service)
+ 			/*
+-			 * The parent entity is still backlogged and
+-			 * the budgets on the path towards the root
+-			 * need to be updated.
++			 * The parent entity is still backlogged,
++			 * because next_in_service is not NULL, and
++			 * next_in_service has been updated (see
++			 * comment on the body of the above if):
++			 * upwards update of the schedule is needed.
+ 			 */
+ 			goto update;
+ 
+ 		/*
+-		 * If we reach there the parent is no more backlogged and
+-		 * we want to propagate the dequeue upwards.
++		 * If we get here, then the parent is no more backlogged and
++		 * we want to propagate the deactivation upwards.
+ 		 */
+ 		requeue = 1;
+ 	}
+@@ -915,9 +1101,23 @@ static void bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
+ update:
+ 	entity = parent;
+ 	for_each_entity(entity) {
+-		__bfq_activate_entity(entity);
++		struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++		__bfq_activate_entity(entity, false);
+ 
+ 		sd = entity->sched_data;
++		if (bfqq)
++			bfq_log_bfqq(bfqq->bfqd, bfqq,
++				     "invoking update_next for this queue");
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		else {
++			struct bfq_group *bfqg =
++				container_of(entity,
++					     struct bfq_group, entity);
++
++			bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++				     "invoking update_next for this entity");
++		}
++#endif
+ 		if (!bfq_update_next_in_service(sd))
+ 			break;
+ 	}
+@@ -943,7 +1143,23 @@ static void bfq_update_vtime(struct bfq_service_tree *st)
+ 
+ 	entry = rb_entry(node, struct bfq_entity, rb_node);
+ 	if (bfq_gt(entry->min_start, st->vtime)) {
++		struct bfq_queue *bfqq = bfq_entity_to_bfqq(entry);
+ 		st->vtime = entry->min_start;
++
++		if (bfqq)
++			bfq_log_bfqq(bfqq->bfqd, bfqq,
++				     "update_vtime: new vtime %llu %p",
++				     ((st->vtime>>10)*1000)>>12, st);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		else {
++			struct bfq_group *bfqg =
++				container_of(entry, struct bfq_group, entity);
++
++			bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++				     "update_vtime: new vtime %llu %p",
++				     ((st->vtime>>10)*1000)>>12, st);
++		}
++#endif
+ 		bfq_forget_idle(st);
+ 	}
+ }
+@@ -996,10 +1212,11 @@ left:
+  * Update the virtual time in @st and return the first eligible entity
+  * it contains.
+  */
+-static struct bfq_entity *__bfq_lookup_next_entity(struct bfq_service_tree *st,
+-						   bool force)
++static struct bfq_entity *
++__bfq_lookup_next_entity(struct bfq_service_tree *st, bool force)
+ {
+ 	struct bfq_entity *entity, *new_next_in_service = NULL;
++	struct bfq_queue *bfqq;
+ 
+ 	if (RB_EMPTY_ROOT(&st->active))
+ 		return NULL;
+@@ -1008,6 +1225,24 @@ static struct bfq_entity *__bfq_lookup_next_entity(struct bfq_service_tree *st,
+ 	entity = bfq_first_active_entity(st);
+ 	BUG_ON(bfq_gt(entity->start, st->vtime));
+ 
++	bfqq = bfq_entity_to_bfqq(entity);
++	if (bfqq)
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			     "__lookup_next: start %llu vtime %llu st %p",
++			     ((entity->start>>10)*1000)>>12,
++			     ((st->vtime>>10)*1000)>>12, st);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	else {
++		struct bfq_group *bfqg =
++			container_of(entity, struct bfq_group, entity);
++
++		bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++			     "__lookup_next: start %llu vtime %llu st %p",
++			     ((entity->start>>10)*1000)>>12,
++			     ((st->vtime>>10)*1000)>>12, st);
++	}
++#endif
++
+ 	/*
+ 	 * If the chosen entity does not match with the sched_data's
+ 	 * next_in_service and we are forcedly serving the IDLE priority
+@@ -1043,11 +1278,36 @@ static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
+ 
+ 	BUG_ON(sd->in_service_entity);
+ 
++	/*
++	 * Choose from idle class, if needed to guarantee a minimum
++	 * bandwidth to this class. This should also mitigate
++	 * priority-inversion problems in case a low priority task is
++	 * holding file system resources.
++	 */
+ 	if (bfqd &&
+-	    jiffies - bfqd->bfq_class_idle_last_service > BFQ_CL_IDLE_TIMEOUT) {
++	    jiffies - bfqd->bfq_class_idle_last_service >
++	    BFQ_CL_IDLE_TIMEOUT) {
+ 		entity = __bfq_lookup_next_entity(st + BFQ_IOPRIO_CLASSES - 1,
+ 						  true);
+ 		if (entity) {
++			struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++			if (bfqq)
++				bfq_log_bfqq(bfqd, bfqq,
++					     "idle chosen from st %p %d",
++					     st + BFQ_IOPRIO_CLASSES - 1,
++					BFQ_IOPRIO_CLASSES - 1);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++			else {
++				struct bfq_group *bfqg =
++				container_of(entity, struct bfq_group, entity);
++
++				bfq_log_bfqg(bfqd, bfqg,
++					     "idle chosen from st %p %d",
++					     st + BFQ_IOPRIO_CLASSES - 1,
++					BFQ_IOPRIO_CLASSES - 1);
++			}
++#endif
+ 			i = BFQ_IOPRIO_CLASSES - 1;
+ 			bfqd->bfq_class_idle_last_service = jiffies;
+ 			sd->next_in_service = entity;
+@@ -1056,6 +1316,25 @@ static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
+ 	for (; i < BFQ_IOPRIO_CLASSES; i++) {
+ 		entity = __bfq_lookup_next_entity(st + i, false);
+ 		if (entity) {
++			if (bfqd != NULL) {
++			struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++			if (bfqq)
++				bfq_log_bfqq(bfqd, bfqq,
++					     "chosen from st %p %d",
++					     st + i, i);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++			else {
++				struct bfq_group *bfqg =
++				container_of(entity, struct bfq_group, entity);
++
++				bfq_log_bfqg(bfqd, bfqg,
++					     "chosen from st %p %d",
++					     st + i, i);
++			}
++#endif
++			}
++
+ 			if (extract) {
+ 				bfq_check_next_in_service(sd, entity);
+ 				bfq_active_extract(st + i, entity);
+@@ -1069,6 +1348,13 @@ static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
+ 	return entity;
+ }
+ 
++static bool next_queue_may_preempt(struct bfq_data *bfqd)
++{
++	struct bfq_sched_data *sd = &bfqd->root_group->sched_data;
++
++	return sd->next_in_service != sd->in_service_entity;
++}
++
+ /*
+  * Get next queue for service.
+  */
+@@ -1085,7 +1371,36 @@ static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
+ 
+ 	sd = &bfqd->root_group->sched_data;
+ 	for (; sd ; sd = entity->my_sched_data) {
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		if (entity) {
++			struct bfq_group *bfqg =
++				container_of(entity, struct bfq_group, entity);
++
++			bfq_log_bfqg(bfqd, bfqg,
++				     "get_next_queue: lookup in this group");
++		} else
++			bfq_log_bfqg(bfqd, bfqd->root_group,
++				     "get_next_queue: lookup in root group");
++#endif
++
+ 		entity = bfq_lookup_next_entity(sd, 1, bfqd);
++
++		bfqq = bfq_entity_to_bfqq(entity);
++		if (bfqq)
++			bfq_log_bfqq(bfqd, bfqq,
++			     "get_next_queue: this queue, finish %llu",
++				(((entity->finish>>10)*1000)>>10)>>2);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++		else {
++			struct bfq_group *bfqg =
++				container_of(entity, struct bfq_group, entity);
++
++			bfq_log_bfqg(bfqd, bfqg,
++			     "get_next_queue: this entity, finish %llu",
++				(((entity->finish>>10)*1000)>>10)>>2);
++		}
++#endif
++
+ 		BUG_ON(!entity);
+ 		entity->service = 0;
+ 	}
+@@ -1103,8 +1418,9 @@ static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
+ 		bfqd->in_service_bic = NULL;
+ 	}
+ 
++	bfq_clear_bfqq_wait_request(bfqd->in_service_queue);
++	hrtimer_try_to_cancel(&bfqd->idle_slice_timer);
+ 	bfqd->in_service_queue = NULL;
+-	del_timer(&bfqd->idle_slice_timer);
+ }
+ 
+ static void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+@@ -1112,9 +1428,7 @@ static void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	struct bfq_entity *entity = &bfqq->entity;
+ 
+-	if (bfqq == bfqd->in_service_queue)
+-		__bfq_bfqd_reset_in_service(bfqd);
+-
++	BUG_ON(bfqq == bfqd->in_service_queue);
+ 	bfq_deactivate_entity(entity, requeue);
+ }
+ 
+@@ -1122,12 +1436,11 @@ static void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+ 	struct bfq_entity *entity = &bfqq->entity;
+ 
+-	bfq_activate_entity(entity);
++	bfq_activate_entity(entity, bfq_bfqq_non_blocking_wait_rq(bfqq));
++	bfq_clear_bfqq_non_blocking_wait_rq(bfqq);
+ }
+ 
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ static void bfqg_stats_update_dequeue(struct bfq_group *bfqg);
+-#endif
+ 
+ /*
+  * Called when the bfqq no longer has requests pending, remove it from
+@@ -1138,6 +1451,7 @@ static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	BUG_ON(!bfq_bfqq_busy(bfqq));
+ 	BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++	BUG_ON(bfqq == bfqd->in_service_queue);
+ 
+ 	bfq_log_bfqq(bfqd, bfqq, "del from busy");
+ 
+@@ -1146,27 +1460,20 @@ static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	BUG_ON(bfqd->busy_queues == 0);
+ 	bfqd->busy_queues--;
+ 
+-	if (!bfqq->dispatched) {
++	if (!bfqq->dispatched)
+ 		bfq_weights_tree_remove(bfqd, &bfqq->entity,
+ 					&bfqd->queue_weights_tree);
+-		if (!blk_queue_nonrot(bfqd->queue)) {
+-			BUG_ON(!bfqd->busy_in_flight_queues);
+-			bfqd->busy_in_flight_queues--;
+-			if (bfq_bfqq_constantly_seeky(bfqq)) {
+-				BUG_ON(!bfqd->
+-					const_seeky_busy_in_flight_queues);
+-				bfqd->const_seeky_busy_in_flight_queues--;
+-			}
+-		}
+-	}
++
+ 	if (bfqq->wr_coeff > 1)
+ 		bfqd->wr_busy_queues--;
+ 
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 	bfqg_stats_update_dequeue(bfqq_group(bfqq));
+-#endif
++
++	BUG_ON(bfqq->entity.budget < 0);
+ 
+ 	bfq_deactivate_bfqq(bfqd, bfqq, requeue);
++
++	BUG_ON(bfqq->entity.budget < 0);
+ }
+ 
+ /*
+@@ -1184,16 +1491,11 @@ static void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 	bfq_mark_bfqq_busy(bfqq);
+ 	bfqd->busy_queues++;
+ 
+-	if (!bfqq->dispatched) {
++	if (!bfqq->dispatched)
+ 		if (bfqq->wr_coeff == 1)
+ 			bfq_weights_tree_add(bfqd, &bfqq->entity,
+ 					     &bfqd->queue_weights_tree);
+-		if (!blk_queue_nonrot(bfqd->queue)) {
+-			bfqd->busy_in_flight_queues++;
+-			if (bfq_bfqq_constantly_seeky(bfqq))
+-				bfqd->const_seeky_busy_in_flight_queues++;
+-		}
+-	}
++
+ 	if (bfqq->wr_coeff > 1)
+ 		bfqd->wr_busy_queues++;
+ }
+diff --git a/block/bfq.h b/block/bfq.h
+index fcce855..ea1e7d8 100644
+--- a/block/bfq.h
++++ b/block/bfq.h
+@@ -1,5 +1,5 @@
+ /*
+- * BFQ-v7r11 for 4.5.0: data structures and common functions prototypes.
++ * BFQ-v8r4 for 4.8.0: data structures and common functions prototypes.
+  *
+  * Based on ideas and code from CFQ:
+  * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
+@@ -7,7 +7,9 @@
+  * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+  *		      Paolo Valente <paolo.valente@unimore.it>
+  *
+- * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ * Copyright (C) 2015 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2016 Paolo Valente <paolo.valente@linaro.org>
+  */
+ 
+ #ifndef _BFQ_H
+@@ -28,20 +30,21 @@
+ 
+ #define BFQ_DEFAULT_QUEUE_IOPRIO	4
+ 
+-#define BFQ_DEFAULT_GRP_WEIGHT	10
++#define BFQ_WEIGHT_LEGACY_DFL	100
+ #define BFQ_DEFAULT_GRP_IOPRIO	0
+ #define BFQ_DEFAULT_GRP_CLASS	IOPRIO_CLASS_BE
+ 
++/*
++ * Soft real-time applications are far more latency-sensitive
++ * than interactive ones. Over-raise the weight of the former to
++ * privilege them against the latter.
++ */
++#define BFQ_SOFTRT_WEIGHT_FACTOR	100
++
+ struct bfq_entity;
+ 
+ /**
+  * struct bfq_service_tree - per ioprio_class service tree.
+- * @active: tree for active entities (i.e., those backlogged).
+- * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
+- * @first_idle: idle entity with minimum F_i.
+- * @last_idle: idle entity with maximum F_i.
+- * @vtime: scheduler virtual time.
+- * @wsum: scheduler weight sum; active and idle entities contribute to it.
+  *
+  * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
+  * ioprio_class has its own independent scheduler, and so its own
+@@ -49,27 +52,28 @@ struct bfq_entity;
+  * of the containing bfqd.
+  */
+ struct bfq_service_tree {
++	/* tree for active entities (i.e., those backlogged) */
+ 	struct rb_root active;
++	/* tree for idle entities (i.e., not backlogged, with V <= F_i)*/
+ 	struct rb_root idle;
+ 
+-	struct bfq_entity *first_idle;
+-	struct bfq_entity *last_idle;
++	struct bfq_entity *first_idle;	/* idle entity with minimum F_i */
++	struct bfq_entity *last_idle;	/* idle entity with maximum F_i */
+ 
+-	u64 vtime;
++	u64 vtime; /* scheduler virtual time */
++	/* scheduler weight sum; active and idle entities contribute to it */
+ 	unsigned long wsum;
+ };
+ 
+ /**
+  * struct bfq_sched_data - multi-class scheduler.
+- * @in_service_entity: entity in service.
+- * @next_in_service: head-of-the-line entity in the scheduler.
+- * @service_tree: array of service trees, one per ioprio_class.
+  *
+  * bfq_sched_data is the basic scheduler queue.  It supports three
+- * ioprio_classes, and can be used either as a toplevel queue or as
+- * an intermediate queue on a hierarchical setup.
+- * @next_in_service points to the active entity of the sched_data
+- * service trees that will be scheduled next.
++ * ioprio_classes, and can be used either as a toplevel queue or as an
++ * intermediate queue on a hierarchical setup.  @next_in_service
++ * points to the active entity of the sched_data service trees that
++ * will be scheduled next. It is used to reduce the number of steps
++ * needed for each hierarchical-schedule update.
+  *
+  * The supported ioprio_classes are the same as in CFQ, in descending
+  * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
+@@ -79,48 +83,29 @@ struct bfq_service_tree {
+  * All the fields are protected by the queue lock of the containing bfqd.
+  */
+ struct bfq_sched_data {
+-	struct bfq_entity *in_service_entity;
++	struct bfq_entity *in_service_entity;  /* entity in service */
++	/* head-of-the-line entity in the scheduler (see comments above) */
+ 	struct bfq_entity *next_in_service;
++	/* array of service trees, one per ioprio_class */
+ 	struct bfq_service_tree service_tree[BFQ_IOPRIO_CLASSES];
+ };
+ 
+ /**
+  * struct bfq_weight_counter - counter of the number of all active entities
+  *                             with a given weight.
+- * @weight: weight of the entities that this counter refers to.
+- * @num_active: number of active entities with this weight.
+- * @weights_node: weights tree member (see bfq_data's @queue_weights_tree
+- *                and @group_weights_tree).
+  */
+ struct bfq_weight_counter {
+-	short int weight;
+-	unsigned int num_active;
++	unsigned int weight; /* weight of the entities this counter refers to */
++	unsigned int num_active; /* nr of active entities with this weight */
++	/*
++	 * Weights tree member (see bfq_data's @queue_weights_tree and
++	 * @group_weights_tree)
++	 */
+ 	struct rb_node weights_node;
+ };
+ 
+ /**
+  * struct bfq_entity - schedulable entity.
+- * @rb_node: service_tree member.
+- * @weight_counter: pointer to the weight counter associated with this entity.
+- * @on_st: flag, true if the entity is on a tree (either the active or
+- *         the idle one of its service_tree).
+- * @finish: B-WF2Q+ finish timestamp (aka F_i).
+- * @start: B-WF2Q+ start timestamp (aka S_i).
+- * @tree: tree the entity is enqueued into; %NULL if not on a tree.
+- * @min_start: minimum start time of the (active) subtree rooted at
+- *             this entity; used for O(log N) lookups into active trees.
+- * @service: service received during the last round of service.
+- * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
+- * @weight: weight of the queue
+- * @parent: parent entity, for hierarchical scheduling.
+- * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
+- *                 associated scheduler queue, %NULL on leaf nodes.
+- * @sched_data: the scheduler queue this entity belongs to.
+- * @ioprio: the ioprio in use.
+- * @new_weight: when a weight change is requested, the new weight value.
+- * @orig_weight: original weight, used to implement weight boosting
+- * @prio_changed: flag, true when the user requested a weight, ioprio or
+- *		  ioprio_class change.
+  *
+  * A bfq_entity is used to represent either a bfq_queue (leaf node in the
+  * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
+@@ -147,27 +132,52 @@ struct bfq_weight_counter {
+  * containing bfqd.
+  */
+ struct bfq_entity {
+-	struct rb_node rb_node;
++	struct rb_node rb_node; /* service_tree member */
++	/* pointer to the weight counter associated with this entity */
+ 	struct bfq_weight_counter *weight_counter;
+ 
++	/*
++	 * flag, true if the entity is on a tree (either the active or
++	 * the idle one of its service_tree).
++	 */
+ 	int on_st;
+ 
+-	u64 finish;
+-	u64 start;
++	u64 finish; /* B-WF2Q+ finish timestamp (aka F_i) */
++	u64 start;  /* B-WF2Q+ start timestamp (aka S_i) */
+ 
++	/* tree the entity is enqueued into; %NULL if not on a tree */
+ 	struct rb_root *tree;
+ 
++	/*
++	 * minimum start time of the (active) subtree rooted at this
++	 * entity; used for O(log N) lookups into active trees
++	 */
+ 	u64 min_start;
+ 
+-	int service, budget;
+-	unsigned short weight, new_weight;
+-	unsigned short orig_weight;
++	/* amount of service received during the last service slot */
++	int service;
++
++	/* budget, used also to calculate F_i: F_i = S_i + @budget / @weight */
++	int budget;
++
++	unsigned int weight;	 /* weight of the queue */
++	unsigned int new_weight; /* next weight if a change is in progress */
++
++	/* original weight, used to implement weight boosting */
++	unsigned int orig_weight;
+ 
++	/* parent entity, for hierarchical scheduling */
+ 	struct bfq_entity *parent;
+ 
++	/*
++	 * For non-leaf nodes in the hierarchy, the associated
++	 * scheduler queue, %NULL on leaf nodes.
++	 */
+ 	struct bfq_sched_data *my_sched_data;
++	/* the scheduler queue this entity belongs to */
+ 	struct bfq_sched_data *sched_data;
+ 
++	/* flag, set to request a weight, ioprio or ioprio_class change  */
+ 	int prio_changed;
+ };
+ 
+@@ -175,56 +185,6 @@ struct bfq_group;
+ 
+ /**
+  * struct bfq_queue - leaf schedulable entity.
+- * @ref: reference counter.
+- * @bfqd: parent bfq_data.
+- * @new_ioprio: when an ioprio change is requested, the new ioprio value.
+- * @ioprio_class: the ioprio_class in use.
+- * @new_ioprio_class: when an ioprio_class change is requested, the new
+- *                    ioprio_class value.
+- * @new_bfqq: shared bfq_queue if queue is cooperating with
+- *           one or more other queues.
+- * @pos_node: request-position tree member (see bfq_group's @rq_pos_tree).
+- * @pos_root: request-position tree root (see bfq_group's @rq_pos_tree).
+- * @sort_list: sorted list of pending requests.
+- * @next_rq: if fifo isn't expired, next request to serve.
+- * @queued: nr of requests queued in @sort_list.
+- * @allocated: currently allocated requests.
+- * @meta_pending: pending metadata requests.
+- * @fifo: fifo list of requests in sort_list.
+- * @entity: entity representing this queue in the scheduler.
+- * @max_budget: maximum budget allowed from the feedback mechanism.
+- * @budget_timeout: budget expiration (in jiffies).
+- * @dispatched: number of requests on the dispatch list or inside driver.
+- * @flags: status flags.
+- * @bfqq_list: node for active/idle bfqq list inside our bfqd.
+- * @burst_list_node: node for the device's burst list.
+- * @seek_samples: number of seeks sampled
+- * @seek_total: sum of the distances of the seeks sampled
+- * @seek_mean: mean seek distance
+- * @last_request_pos: position of the last request enqueued
+- * @requests_within_timer: number of consecutive pairs of request completion
+- *                         and arrival, such that the queue becomes idle
+- *                         after the completion, but the next request arrives
+- *                         within an idle time slice; used only if the queue's
+- *                         IO_bound has been cleared.
+- * @pid: pid of the process owning the queue, used for logging purposes.
+- * @last_wr_start_finish: start time of the current weight-raising period if
+- *                        the @bfq-queue is being weight-raised, otherwise
+- *                        finish time of the last weight-raising period
+- * @wr_cur_max_time: current max raising time for this queue
+- * @soft_rt_next_start: minimum time instant such that, only if a new
+- *                      request is enqueued after this time instant in an
+- *                      idle @bfq_queue with no outstanding requests, then
+- *                      the task associated with the queue it is deemed as
+- *                      soft real-time (see the comments to the function
+- *                      bfq_bfqq_softrt_next_start())
+- * @last_idle_bklogged: time of the last transition of the @bfq_queue from
+- *                      idle to backlogged
+- * @service_from_backlogged: cumulative service received from the @bfq_queue
+- *                           since the last transition from idle to
+- *                           backlogged
+- * @bic: pointer to the bfq_io_cq owning the bfq_queue, set to %NULL if the
+- *	 queue is shared
+  *
+  * A bfq_queue is a leaf request queue; it can be associated with an
+  * io_context or more, if it  is  async or shared  between  cooperating
+@@ -235,117 +195,174 @@ struct bfq_group;
+  * All the fields are protected by the queue lock of the containing bfqd.
+  */
+ struct bfq_queue {
+-	atomic_t ref;
++	/* reference counter */
++	int ref;
++	/* parent bfq_data */
+ 	struct bfq_data *bfqd;
+ 
+-	unsigned short ioprio, new_ioprio;
+-	unsigned short ioprio_class, new_ioprio_class;
++	/* current ioprio and ioprio class */
++	unsigned short ioprio, ioprio_class;
++	/* next ioprio and ioprio class if a change is in progress */
++	unsigned short new_ioprio, new_ioprio_class;
+ 
+-	/* fields for cooperating queues handling */
++	/*
++	 * Shared bfq_queue if queue is cooperating with one or more
++	 * other queues.
++	 */
+ 	struct bfq_queue *new_bfqq;
++	/* request-position tree member (see bfq_group's @rq_pos_tree) */
+ 	struct rb_node pos_node;
++	/* request-position tree root (see bfq_group's @rq_pos_tree) */
+ 	struct rb_root *pos_root;
+ 
++	/* sorted list of pending requests */
+ 	struct rb_root sort_list;
++	/* if fifo isn't expired, next request to serve */
+ 	struct request *next_rq;
++	/* number of sync and async requests queued */
+ 	int queued[2];
++	/* number of sync and async requests currently allocated */
+ 	int allocated[2];
++	/* number of pending metadata requests */
+ 	int meta_pending;
++	/* fifo list of requests in sort_list */
+ 	struct list_head fifo;
+ 
++	/* entity representing this queue in the scheduler */
+ 	struct bfq_entity entity;
+ 
++	/* maximum budget allowed from the feedback mechanism */
+ 	int max_budget;
++	/* budget expiration (in jiffies) */
+ 	unsigned long budget_timeout;
+ 
++	/* number of requests on the dispatch list or inside driver */
+ 	int dispatched;
+ 
+-	unsigned int flags;
++	unsigned int flags; /* status flags.*/
+ 
++	/* node for active/idle bfqq list inside parent bfqd */
+ 	struct list_head bfqq_list;
+ 
++	/* bit vector: a 1 for each seeky request in history */
++	u32 seek_history;
++
++	/* node for the device's burst list */
+ 	struct hlist_node burst_list_node;
+ 
+-	unsigned int seek_samples;
+-	u64 seek_total;
+-	sector_t seek_mean;
++	/* position of the last request enqueued */
+ 	sector_t last_request_pos;
+ 
++	/* Number of consecutive pairs of request completion and
++	 * arrival, such that the queue becomes idle after the
++	 * completion, but the next request arrives within an idle
++	 * time slice; used only if the queue's IO_bound flag has been
++	 * cleared.
++	 */
+ 	unsigned int requests_within_timer;
+ 
++	/* pid of the process owning the queue, used for logging purposes */
+ 	pid_t pid;
++
++	/*
++	 * Pointer to the bfq_io_cq owning the bfq_queue, set to %NULL
++	 * if the queue is shared.
++	 */
+ 	struct bfq_io_cq *bic;
+ 
+-	/* weight-raising fields */
++	/* current maximum weight-raising time for this queue */
+ 	unsigned long wr_cur_max_time;
++	/*
++	 * Minimum time instant such that, only if a new request is
++	 * enqueued after this time instant in an idle @bfq_queue with
++	 * no outstanding requests, then the task associated with the
++	 * queue is deemed as soft real-time (see the comments on
++	 * the function bfq_bfqq_softrt_next_start())
++	 */
+ 	unsigned long soft_rt_next_start;
++	/*
++	 * Start time of the current weight-raising period if
++	 * the @bfq-queue is being weight-raised, otherwise
++	 * finish time of the last weight-raising period.
++	 */
+ 	unsigned long last_wr_start_finish;
++	/* factor by which the weight of this queue is multiplied */
+ 	unsigned int wr_coeff;
++	/*
++	 * Time of the last transition of the @bfq_queue from idle to
++	 * backlogged.
++	 */
+ 	unsigned long last_idle_bklogged;
++	/*
++	 * Cumulative service received from the @bfq_queue since the
++	 * last transition from idle to backlogged.
++	 */
+ 	unsigned long service_from_backlogged;
++	/*
++	 * Value of wr start time when switching to soft rt
++	 */
++	unsigned long wr_start_at_switch_to_srt;
++
++	unsigned long split_time; /* time of last split */
+ };
+ 
+ /**
+  * struct bfq_ttime - per process thinktime stats.
+- * @ttime_total: total process thinktime
+- * @ttime_samples: number of thinktime samples
+- * @ttime_mean: average process thinktime
+  */
+ struct bfq_ttime {
+-	unsigned long last_end_request;
++	u64 last_end_request; /* completion time of last request */
++
++	u64 ttime_total; /* total process thinktime */
++	unsigned long ttime_samples; /* number of thinktime samples */
++	u64 ttime_mean; /* average process thinktime */
+ 
+-	unsigned long ttime_total;
+-	unsigned long ttime_samples;
+-	unsigned long ttime_mean;
+ };
+ 
+ /**
+  * struct bfq_io_cq - per (request_queue, io_context) structure.
+- * @icq: associated io_cq structure
+- * @bfqq: array of two process queues, the sync and the async
+- * @ttime: associated @bfq_ttime struct
+- * @ioprio: per (request_queue, blkcg) ioprio.
+- * @blkcg_id: id of the blkcg the related io_cq belongs to.
+- * @wr_time_left: snapshot of the time left before weight raising ends
+- *                for the sync queue associated to this process; this
+- *		  snapshot is taken to remember this value while the weight
+- *		  raising is suspended because the queue is merged with a
+- *		  shared queue, and is used to set @raising_cur_max_time
+- *		  when the queue is split from the shared queue and its
+- *		  weight is raised again
+- * @saved_idle_window: same purpose as the previous field for the idle
+- *                     window
+- * @saved_IO_bound: same purpose as the previous two fields for the I/O
+- *                  bound classification of a queue
+- * @saved_in_large_burst: same purpose as the previous fields for the
+- *                        value of the field keeping the queue's belonging
+- *                        to a large burst
+- * @was_in_burst_list: true if the queue belonged to a burst list
+- *                     before its merge with another cooperating queue
+- * @cooperations: counter of consecutive successful queue merges underwent
+- *                by any of the process' @bfq_queues
+- * @failed_cooperations: counter of consecutive failed queue merges of any
+- *                       of the process' @bfq_queues
+  */
+ struct bfq_io_cq {
++	/* associated io_cq structure */
+ 	struct io_cq icq; /* must be the first member */
++	/* array of two process queues, the sync and the async */
+ 	struct bfq_queue *bfqq[2];
++	/* associated @bfq_ttime struct */
+ 	struct bfq_ttime ttime;
++	/* per (request_queue, blkcg) ioprio */
+ 	int ioprio;
+-
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+-	uint64_t blkcg_id; /* the current blkcg ID */
++	uint64_t blkcg_serial_nr; /* the current blkcg serial */
+ #endif
+ 
+-	unsigned int wr_time_left;
++	/*
++	 * Snapshot of the idle window before merging; taken to
++	 * remember this value while the queue is merged, so as to be
++	 * able to restore it in case of split.
++	 */
+ 	bool saved_idle_window;
++	/*
++	 * Same purpose as the previous two fields for the I/O bound
++	 * classification of a queue.
++	 */
+ 	bool saved_IO_bound;
+ 
++	/*
++	 * Same purpose as the previous fields for the value of the
++	 * field keeping the queue's belonging to a large burst
++	 */
+ 	bool saved_in_large_burst;
++	/*
++	 * True if the queue belonged to a burst list before its merge
++	 * with another cooperating queue.
++	 */
+ 	bool was_in_burst_list;
+ 
+-	unsigned int cooperations;
+-	unsigned int failed_cooperations;
++	/*
++	 * Similar to previous fields: save wr information.
++	 */
++	unsigned long saved_wr_coeff;
++	unsigned long saved_last_wr_start_finish;
++	unsigned long saved_wr_start_at_switch_to_srt;
+ };
+ 
+ enum bfq_device_speed {
+@@ -354,224 +371,234 @@ enum bfq_device_speed {
+ };
+ 
+ /**
+- * struct bfq_data - per device data structure.
+- * @queue: request queue for the managed device.
+- * @root_group: root bfq_group for the device.
+- * @active_numerous_groups: number of bfq_groups containing more than one
+- *                          active @bfq_entity.
+- * @queue_weights_tree: rbtree of weight counters of @bfq_queues, sorted by
+- *                      weight. Used to keep track of whether all @bfq_queues
+- *                     have the same weight. The tree contains one counter
+- *                     for each distinct weight associated to some active
+- *                     and not weight-raised @bfq_queue (see the comments to
+- *                      the functions bfq_weights_tree_[add|remove] for
+- *                     further details).
+- * @group_weights_tree: rbtree of non-queue @bfq_entity weight counters, sorted
+- *                      by weight. Used to keep track of whether all
+- *                     @bfq_groups have the same weight. The tree contains
+- *                     one counter for each distinct weight associated to
+- *                     some active @bfq_group (see the comments to the
+- *                     functions bfq_weights_tree_[add|remove] for further
+- *                     details).
+- * @busy_queues: number of bfq_queues containing requests (including the
+- *		 queue in service, even if it is idling).
+- * @busy_in_flight_queues: number of @bfq_queues containing pending or
+- *                         in-flight requests, plus the @bfq_queue in
+- *                         service, even if idle but waiting for the
+- *                         possible arrival of its next sync request. This
+- *                         field is updated only if the device is rotational,
+- *                         but used only if the device is also NCQ-capable.
+- *                         The reason why the field is updated also for non-
+- *                         NCQ-capable rotational devices is related to the
+- *                         fact that the value of @hw_tag may be set also
+- *                         later than when busy_in_flight_queues may need to
+- *                         be incremented for the first time(s). Taking also
+- *                         this possibility into account, to avoid unbalanced
+- *                         increments/decrements, would imply more overhead
+- *                         than just updating busy_in_flight_queues
+- *                         regardless of the value of @hw_tag.
+- * @const_seeky_busy_in_flight_queues: number of constantly-seeky @bfq_queues
+- *                                     (that is, seeky queues that expired
+- *                                     for budget timeout at least once)
+- *                                     containing pending or in-flight
+- *                                     requests, including the in-service
+- *                                     @bfq_queue if constantly seeky. This
+- *                                     field is updated only if the device
+- *                                     is rotational, but used only if the
+- *                                     device is also NCQ-capable (see the
+- *                                     comments to @busy_in_flight_queues).
+- * @wr_busy_queues: number of weight-raised busy @bfq_queues.
+- * @queued: number of queued requests.
+- * @rq_in_driver: number of requests dispatched and waiting for completion.
+- * @sync_flight: number of sync requests in the driver.
+- * @max_rq_in_driver: max number of reqs in driver in the last
+- *                    @hw_tag_samples completed requests.
+- * @hw_tag_samples: nr of samples used to calculate hw_tag.
+- * @hw_tag: flag set to one if the driver is showing a queueing behavior.
+- * @budgets_assigned: number of budgets assigned.
+- * @idle_slice_timer: timer set when idling for the next sequential request
+- *                    from the queue in service.
+- * @unplug_work: delayed work to restart dispatching on the request queue.
+- * @in_service_queue: bfq_queue in service.
+- * @in_service_bic: bfq_io_cq (bic) associated with the @in_service_queue.
+- * @last_position: on-disk position of the last served request.
+- * @last_budget_start: beginning of the last budget.
+- * @last_idling_start: beginning of the last idle slice.
+- * @peak_rate: peak transfer rate observed for a budget.
+- * @peak_rate_samples: number of samples used to calculate @peak_rate.
+- * @bfq_max_budget: maximum budget allotted to a bfq_queue before
+- *                  rescheduling.
+- * @active_list: list of all the bfq_queues active on the device.
+- * @idle_list: list of all the bfq_queues idle on the device.
+- * @bfq_fifo_expire: timeout for async/sync requests; when it expires
+- *                   requests are served in fifo order.
+- * @bfq_back_penalty: weight of backward seeks wrt forward ones.
+- * @bfq_back_max: maximum allowed backward seek.
+- * @bfq_slice_idle: maximum idling time.
+- * @bfq_user_max_budget: user-configured max budget value
+- *                       (0 for auto-tuning).
+- * @bfq_max_budget_async_rq: maximum budget (in nr of requests) allotted to
+- *                           async queues.
+- * @bfq_timeout: timeout for bfq_queues to consume their budget; used to
+- *               to prevent seeky queues to impose long latencies to well
+- *               behaved ones (this also implies that seeky queues cannot
+- *               receive guarantees in the service domain; after a timeout
+- *               they are charged for the whole allocated budget, to try
+- *               to preserve a behavior reasonably fair among them, but
+- *               without service-domain guarantees).
+- * @bfq_coop_thresh: number of queue merges after which a @bfq_queue is
+- *                   no more granted any weight-raising.
+- * @bfq_failed_cooperations: number of consecutive failed cooperation
+- *                           chances after which weight-raising is restored
+- *                           to a queue subject to more than bfq_coop_thresh
+- *                           queue merges.
+- * @bfq_requests_within_timer: number of consecutive requests that must be
+- *                             issued within the idle time slice to set
+- *                             again idling to a queue which was marked as
+- *                             non-I/O-bound (see the definition of the
+- *                             IO_bound flag for further details).
+- * @last_ins_in_burst: last time at which a queue entered the current
+- *                     burst of queues being activated shortly after
+- *                     each other; for more details about this and the
+- *                     following parameters related to a burst of
+- *                     activations, see the comments to the function
+- *                     @bfq_handle_burst.
+- * @bfq_burst_interval: reference time interval used to decide whether a
+- *                      queue has been activated shortly after
+- *                      @last_ins_in_burst.
+- * @burst_size: number of queues in the current burst of queue activations.
+- * @bfq_large_burst_thresh: maximum burst size above which the current
+- *			    queue-activation burst is deemed as 'large'.
+- * @large_burst: true if a large queue-activation burst is in progress.
+- * @burst_list: head of the burst list (as for the above fields, more details
+- *		in the comments to the function bfq_handle_burst).
+- * @low_latency: if set to true, low-latency heuristics are enabled.
+- * @bfq_wr_coeff: maximum factor by which the weight of a weight-raised
+- *                queue is multiplied.
+- * @bfq_wr_max_time: maximum duration of a weight-raising period (jiffies).
+- * @bfq_wr_rt_max_time: maximum duration for soft real-time processes.
+- * @bfq_wr_min_idle_time: minimum idle period after which weight-raising
+- *			  may be reactivated for a queue (in jiffies).
+- * @bfq_wr_min_inter_arr_async: minimum period between request arrivals
+- *				after which weight-raising may be
+- *				reactivated for an already busy queue
+- *				(in jiffies).
+- * @bfq_wr_max_softrt_rate: max service-rate for a soft real-time queue,
+- *			    sectors per seconds.
+- * @RT_prod: cached value of the product R*T used for computing the maximum
+- *	     duration of the weight raising automatically.
+- * @device_speed: device-speed class for the low-latency heuristic.
+- * @oom_bfqq: fallback dummy bfqq for extreme OOM conditions.
++ * struct bfq_data - per-device data structure.
+  *
+  * All the fields are protected by the @queue lock.
+  */
+ struct bfq_data {
++	/* request queue for the device */
+ 	struct request_queue *queue;
+ 
++	/* root bfq_group for the device */
+ 	struct bfq_group *root_group;
+ 
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+-	int active_numerous_groups;
+-#endif
+-
++	/*
++	 * rbtree of weight counters of @bfq_queues, sorted by
++	 * weight. Used to keep track of whether all @bfq_queues have
++	 * the same weight. The tree contains one counter for each
++	 * distinct weight associated to some active and not
++	 * weight-raised @bfq_queue (see the comments to the functions
++	 * bfq_weights_tree_[add|remove] for further details).
++	 */
+ 	struct rb_root queue_weights_tree;
++	/*
++	 * rbtree of non-queue @bfq_entity weight counters, sorted by
++	 * weight. Used to keep track of whether all @bfq_groups have
++	 * the same weight. The tree contains one counter for each
++	 * distinct weight associated to some active @bfq_group (see
++	 * the comments to the functions bfq_weights_tree_[add|remove]
++	 * for further details).
++	 */
+ 	struct rb_root group_weights_tree;
+ 
++	/*
++	 * Number of bfq_queues containing requests (including the
++	 * queue in service, even if it is idling).
++	 */
+ 	int busy_queues;
+-	int busy_in_flight_queues;
+-	int const_seeky_busy_in_flight_queues;
++	/* number of weight-raised busy @bfq_queues */
+ 	int wr_busy_queues;
++	/* number of queued requests */
+ 	int queued;
++	/* number of requests dispatched and waiting for completion */
+ 	int rq_in_driver;
+-	int sync_flight;
+ 
++	/*
++	 * Maximum number of requests in driver in the last
++	 * @hw_tag_samples completed requests.
++	 */
+ 	int max_rq_in_driver;
++	/* number of samples used to calculate hw_tag */
+ 	int hw_tag_samples;
++	/* flag set to one if the driver is showing a queueing behavior */
+ 	int hw_tag;
+ 
++	/* number of budgets assigned */
+ 	int budgets_assigned;
+ 
+-	struct timer_list idle_slice_timer;
++	/*
++	 * Timer set when idling (waiting) for the next request from
++	 * the queue in service.
++	 */
++	struct hrtimer idle_slice_timer;
++	/* delayed work to restart dispatching on the request queue */
+ 	struct work_struct unplug_work;
+ 
++	/* bfq_queue in service */
+ 	struct bfq_queue *in_service_queue;
++	/* bfq_io_cq (bic) associated with the @in_service_queue */
+ 	struct bfq_io_cq *in_service_bic;
+ 
++	/* on-disk position of the last served request */
+ 	sector_t last_position;
+ 
++	/* time of last request completion (ns) */
++	u64 last_completion;
++
++	/* time of first rq dispatch in current observation interval (ns) */
++	u64 first_dispatch;
++	/* time of last rq dispatch in current observation interval (ns) */
++	u64 last_dispatch;
++
++	/* beginning of the last budget */
+ 	ktime_t last_budget_start;
++	/* beginning of the last idle slice */
+ 	ktime_t last_idling_start;
++
++	/* number of samples in current observation interval */
+ 	int peak_rate_samples;
+-	u64 peak_rate;
++	/* num of samples of seq dispatches in current observation interval */
++	u32 sequential_samples;
++	/* total num of sectors transferred in current observation interval */
++	u64 tot_sectors_dispatched;
++	/* max rq size seen during current observation interval (sectors) */
++	u32 last_rq_max_size;
++	/* time elapsed from first dispatch in current observ. interval (us) */
++	u64 delta_from_first;
++	/* current estimate of device peak rate */
++	u32 peak_rate;
++
++	/* maximum budget allotted to a bfq_queue before rescheduling */
+ 	int bfq_max_budget;
+ 
++	/* list of all the bfq_queues active on the device */
+ 	struct list_head active_list;
++	/* list of all the bfq_queues idle on the device */
+ 	struct list_head idle_list;
+ 
+-	unsigned int bfq_fifo_expire[2];
++	/*
++	 * Timeout for async/sync requests; when it fires, requests
++	 * are served in fifo order.
++	 */
++	u64 bfq_fifo_expire[2];
++	/* weight of backward seeks wrt forward ones */
+ 	unsigned int bfq_back_penalty;
++	/* maximum allowed backward seek */
+ 	unsigned int bfq_back_max;
+-	unsigned int bfq_slice_idle;
++	/* maximum idling time */
++	u32 bfq_slice_idle;
++	/* last time CLASS_IDLE was served */
+ 	u64 bfq_class_idle_last_service;
+ 
++	/* user-configured max budget value (0 for auto-tuning) */
+ 	int bfq_user_max_budget;
+-	int bfq_max_budget_async_rq;
+-	unsigned int bfq_timeout[2];
+-
+-	unsigned int bfq_coop_thresh;
+-	unsigned int bfq_failed_cooperations;
++	/*
++	 * Timeout for bfq_queues to consume their budget; used to
++	 * prevent seeky queues from imposing long latencies to
++	 * sequential or quasi-sequential ones (this also implies that
++	 * seeky queues cannot receive guarantees in the service
++	 * domain; after a timeout they are charged for the time they
++	 * have been in service, to preserve fairness among them, but
++	 * without service-domain guarantees).
++	 */
++	unsigned int bfq_timeout;
++
++	/*
++	 * Number of consecutive requests that must be issued within
++	 * the idle time slice to set again idling to a queue which
++	 * was marked as non-I/O-bound (see the definition of the
++	 * IO_bound flag for further details).
++	 */
+ 	unsigned int bfq_requests_within_timer;
+ 
++	/*
++	 * Force device idling whenever needed to provide accurate
++	 * service guarantees, without caring about throughput
++	 * issues. CAVEAT: this may even increase latencies, in case
++	 * of useless idling for processes that did stop doing I/O.
++	 */
++	bool strict_guarantees;
++
++	/*
++	 * Last time at which a queue entered the current burst of
++	 * queues being activated shortly after each other; for more
++	 * details about this and the following parameters related to
++	 * a burst of activations, see the comments on the function
++	 * bfq_handle_burst.
++	 */
+ 	unsigned long last_ins_in_burst;
++	/*
++	 * Reference time interval used to decide whether a queue has
++	 * been activated shortly after @last_ins_in_burst.
++	 */
+ 	unsigned long bfq_burst_interval;
++	/* number of queues in the current burst of queue activations */
+ 	int burst_size;
++
++	/* common parent entity for the queues in the burst */
++	struct bfq_entity *burst_parent_entity;
++	/* Maximum burst size above which the current queue-activation
++	 * burst is deemed as 'large'.
++	 */
+ 	unsigned long bfq_large_burst_thresh;
++	/* true if a large queue-activation burst is in progress */
+ 	bool large_burst;
++	/*
++	 * Head of the burst list (as for the above fields, more
++	 * details in the comments on the function bfq_handle_burst).
++	 */
+ 	struct hlist_head burst_list;
+ 
++	/* if set to true, low-latency heuristics are enabled */
+ 	bool low_latency;
+-
+-	/* parameters of the low_latency heuristics */
++	/*
++	 * Maximum factor by which the weight of a weight-raised queue
++	 * is multiplied.
++	 */
+ 	unsigned int bfq_wr_coeff;
++	/* maximum duration of a weight-raising period (jiffies) */
+ 	unsigned int bfq_wr_max_time;
++
++	/* Maximum weight-raising duration for soft real-time processes */
+ 	unsigned int bfq_wr_rt_max_time;
++	/*
++	 * Minimum idle period after which weight-raising may be
++	 * reactivated for a queue (in jiffies).
++	 */
+ 	unsigned int bfq_wr_min_idle_time;
++	/*
++	 * Minimum period between request arrivals after which
++	 * weight-raising may be reactivated for an already busy async
++	 * queue (in jiffies).
++	 */
+ 	unsigned long bfq_wr_min_inter_arr_async;
++
++	/* Max service-rate for a soft real-time queue, in sectors/sec */
+ 	unsigned int bfq_wr_max_softrt_rate;
++	/*
++	 * Cached value of the product R*T, used for computing the
++	 * maximum duration of weight raising automatically.
++	 */
+ 	u64 RT_prod;
++	/* device-speed class for the low-latency heuristic */
+ 	enum bfq_device_speed device_speed;
+ 
++	/* fallback dummy bfqq for extreme OOM conditions */
+ 	struct bfq_queue oom_bfqq;
+ };
+ 
+ enum bfqq_state_flags {
+-	BFQ_BFQQ_FLAG_busy = 0,		/* has requests or is in service */
++	BFQ_BFQQ_FLAG_just_created = 0,	/* queue just allocated */
++	BFQ_BFQQ_FLAG_busy,		/* has requests or is in service */
+ 	BFQ_BFQQ_FLAG_wait_request,	/* waiting for a request */
++	BFQ_BFQQ_FLAG_non_blocking_wait_rq, /*
++					     * waiting for a request
++					     * without idling the device
++					     */
+ 	BFQ_BFQQ_FLAG_must_alloc,	/* must be allowed rq alloc */
+ 	BFQ_BFQQ_FLAG_fifo_expire,	/* FIFO checked in this slice */
+ 	BFQ_BFQQ_FLAG_idle_window,	/* slice idling enabled */
+ 	BFQ_BFQQ_FLAG_sync,		/* synchronous queue */
+-	BFQ_BFQQ_FLAG_budget_new,	/* no completion with this budget */
+ 	BFQ_BFQQ_FLAG_IO_bound,		/*
+ 					 * bfqq has timed-out at least once
+ 					 * having consumed at most 2/10 of
+@@ -581,17 +608,12 @@ enum bfqq_state_flags {
+ 					 * bfqq activated in a large burst,
+ 					 * see comments to bfq_handle_burst.
+ 					 */
+-	BFQ_BFQQ_FLAG_constantly_seeky,	/*
+-					 * bfqq has proved to be slow and
+-					 * seeky until budget timeout
+-					 */
+ 	BFQ_BFQQ_FLAG_softrt_update,	/*
+ 					 * may need softrt-next-start
+ 					 * update
+ 					 */
+ 	BFQ_BFQQ_FLAG_coop,		/* bfqq is shared */
+-	BFQ_BFQQ_FLAG_split_coop,	/* shared bfqq will be split */
+-	BFQ_BFQQ_FLAG_just_split,	/* queue has just been split */
++	BFQ_BFQQ_FLAG_split_coop	/* shared bfqq will be split */
+ };
+ 
+ #define BFQ_BFQQ_FNS(name)						\
+@@ -608,25 +630,53 @@ static int bfq_bfqq_##name(const struct bfq_queue *bfqq)		\
+ 	return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_##name)) != 0;	\
+ }
+ 
++BFQ_BFQQ_FNS(just_created);
+ BFQ_BFQQ_FNS(busy);
+ BFQ_BFQQ_FNS(wait_request);
++BFQ_BFQQ_FNS(non_blocking_wait_rq);
+ BFQ_BFQQ_FNS(must_alloc);
+ BFQ_BFQQ_FNS(fifo_expire);
+ BFQ_BFQQ_FNS(idle_window);
+ BFQ_BFQQ_FNS(sync);
+-BFQ_BFQQ_FNS(budget_new);
+ BFQ_BFQQ_FNS(IO_bound);
+ BFQ_BFQQ_FNS(in_large_burst);
+-BFQ_BFQQ_FNS(constantly_seeky);
+ BFQ_BFQQ_FNS(coop);
+ BFQ_BFQQ_FNS(split_coop);
+-BFQ_BFQQ_FNS(just_split);
+ BFQ_BFQQ_FNS(softrt_update);
+ #undef BFQ_BFQQ_FNS
+ 
+ /* Logging facilities. */
+-#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \
+-	blk_add_trace_msg((bfqd)->queue, "bfq%d " fmt, (bfqq)->pid, ##args)
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
++static struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg);
++
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...)	do {			\
++	char __pbuf[128];						\
++									\
++	assert_spin_locked((bfqd)->queue->queue_lock);			\
++	blkg_path(bfqg_to_blkg(bfqq_group(bfqq)), __pbuf, sizeof(__pbuf)); \
++	blk_add_trace_msg((bfqd)->queue, "bfq%d%c %s " fmt, \
++			  (bfqq)->pid,			  \
++			  bfq_bfqq_sync((bfqq)) ? 'S' : 'A',	\
++			  __pbuf, ##args);				\
++} while (0)
++
++#define bfq_log_bfqg(bfqd, bfqg, fmt, args...)	do {			\
++	char __pbuf[128];						\
++									\
++	blkg_path(bfqg_to_blkg(bfqg), __pbuf, sizeof(__pbuf));		\
++	blk_add_trace_msg((bfqd)->queue, "%s " fmt, __pbuf, ##args);	\
++} while (0)
++
++#else /* CONFIG_BFQ_GROUP_IOSCHED */
++
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...)	\
++	blk_add_trace_msg((bfqd)->queue, "bfq%d%c " fmt, (bfqq)->pid,	\
++			bfq_bfqq_sync((bfqq)) ? 'S' : 'A',		\
++				##args)
++#define bfq_log_bfqg(bfqd, bfqg, fmt, args...)		do {} while (0)
++
++#endif /* CONFIG_BFQ_GROUP_IOSCHED */
+ 
+ #define bfq_log(bfqd, fmt, args...) \
+ 	blk_add_trace_msg((bfqd)->queue, "bfq " fmt, ##args)
+@@ -640,15 +690,12 @@ enum bfqq_expiration {
+ 	BFQ_BFQQ_BUDGET_TIMEOUT,	/* budget took too long to be used */
+ 	BFQ_BFQQ_BUDGET_EXHAUSTED,	/* budget consumed */
+ 	BFQ_BFQQ_NO_MORE_REQUESTS,	/* the queue has no more requests */
++	BFQ_BFQQ_PREEMPTED		/* preemption in progress */
+ };
+ 
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 
+ struct bfqg_stats {
+-	/* total bytes transferred */
+-	struct blkg_rwstat		service_bytes;
+-	/* total IOs serviced, post merge */
+-	struct blkg_rwstat		serviced;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 	/* number of ios merged */
+ 	struct blkg_rwstat		merged;
+ 	/* total time spent on device in ns, may not be accurate w/ queueing */
+@@ -657,12 +704,8 @@ struct bfqg_stats {
+ 	struct blkg_rwstat		wait_time;
+ 	/* number of IOs queued up */
+ 	struct blkg_rwstat		queued;
+-	/* total sectors transferred */
+-	struct blkg_stat		sectors;
+ 	/* total disk time and nr sectors dispatched by this group */
+ 	struct blkg_stat		time;
+-	/* time not charged to this cgroup */
+-	struct blkg_stat		unaccounted_time;
+ 	/* sum of number of ios queued across all samples */
+ 	struct blkg_stat		avg_queue_size_sum;
+ 	/* count of samples taken for average */
+@@ -680,8 +723,10 @@ struct bfqg_stats {
+ 	uint64_t			start_idle_time;
+ 	uint64_t			start_empty_time;
+ 	uint16_t			flags;
++#endif
+ };
+ 
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ /*
+  * struct bfq_group_data - per-blkcg storage for the blkio subsystem.
+  *
+@@ -692,7 +737,7 @@ struct bfq_group_data {
+ 	/* must be the first member */
+ 	struct blkcg_policy_data pd;
+ 
+-	unsigned short weight;
++	unsigned int weight;
+ };
+ 
+ /**
+@@ -712,7 +757,7 @@ struct bfq_group_data {
+  *                   unused for the root group. Used to know whether there
+  *                   are groups with more than one active @bfq_entity
+  *                   (see the comments to the function
+- *                   bfq_bfqq_must_not_expire()).
++ *                   bfq_bfqq_may_idle()).
+  * @rq_pos_tree: rbtree sorted by next_request position, used when
+  *               determining if two or more queues have interleaving
+  *               requests (see bfq_find_close_cooperator()).
+@@ -745,7 +790,6 @@ struct bfq_group {
+ 	struct rb_root rq_pos_tree;
+ 
+ 	struct bfqg_stats stats;
+-	struct bfqg_stats dead_stats;	/* stats pushed from dead children */
+ };
+ 
+ #else
+@@ -767,11 +811,25 @@ bfq_entity_service_tree(struct bfq_entity *entity)
+ 	struct bfq_sched_data *sched_data = entity->sched_data;
+ 	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
+ 	unsigned int idx = bfqq ? bfqq->ioprio_class - 1 :
+-				  BFQ_DEFAULT_GRP_CLASS;
++				  BFQ_DEFAULT_GRP_CLASS - 1;
+ 
+ 	BUG_ON(idx >= BFQ_IOPRIO_CLASSES);
+ 	BUG_ON(sched_data == NULL);
+ 
++	if (bfqq)
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			     "entity_service_tree %p %d",
++			     sched_data->service_tree + idx, idx);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++	else {
++		struct bfq_group *bfqg =
++			container_of(entity, struct bfq_group, entity);
++
++		bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++			     "entity_service_tree %p %d",
++			     sched_data->service_tree + idx, idx);
++	}
++#endif
+ 	return sched_data->service_tree + idx;
+ }
+ 
+@@ -791,47 +849,6 @@ static struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic)
+ 	return bic->icq.q->elevator->elevator_data;
+ }
+ 
+-/**
+- * bfq_get_bfqd_locked - get a lock to a bfqd using a RCU protected pointer.
+- * @ptr: a pointer to a bfqd.
+- * @flags: storage for the flags to be saved.
+- *
+- * This function allows bfqg->bfqd to be protected by the
+- * queue lock of the bfqd they reference; the pointer is dereferenced
+- * under RCU, so the storage for bfqd is assured to be safe as long
+- * as the RCU read side critical section does not end.  After the
+- * bfqd->queue->queue_lock is taken the pointer is rechecked, to be
+- * sure that no other writer accessed it.  If we raced with a writer,
+- * the function returns NULL, with the queue unlocked, otherwise it
+- * returns the dereferenced pointer, with the queue locked.
+- */
+-static struct bfq_data *bfq_get_bfqd_locked(void **ptr, unsigned long *flags)
+-{
+-	struct bfq_data *bfqd;
+-
+-	rcu_read_lock();
+-	bfqd = rcu_dereference(*(struct bfq_data **)ptr);
+-
+-	if (bfqd != NULL) {
+-		spin_lock_irqsave(bfqd->queue->queue_lock, *flags);
+-		if (ptr == NULL)
+-			printk(KERN_CRIT "get_bfqd_locked pointer NULL\n");
+-		else if (*ptr == bfqd)
+-			goto out;
+-		spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
+-	}
+-
+-	bfqd = NULL;
+-out:
+-	rcu_read_unlock();
+-	return bfqd;
+-}
+-
+-static void bfq_put_bfqd_unlock(struct bfq_data *bfqd, unsigned long *flags)
+-{
+-	spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
+-}
+-
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ 
+ static struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq)
+@@ -857,11 +874,13 @@ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio);
+ static void bfq_put_queue(struct bfq_queue *bfqq);
+ static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
+ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
+-				       struct bio *bio, int is_sync,
+-				       struct bfq_io_cq *bic, gfp_t gfp_mask);
++				       struct bio *bio, bool is_sync,
++				       struct bfq_io_cq *bic);
+ static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
+ 				    struct bfq_group *bfqg);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
++#endif
+ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq);
+ 
+ #endif /* _BFQ_H */
+-- 
+2.7.4 (Apple Git-66)
+
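
The BFQ_BFQQ_FNS() generator in the hunk above is the pattern behind all of the
bfq_mark_bfqq_*/bfq_clear_bfqq_*/bfq_bfqq_* helpers this patch adds and removes.
A minimal, self-contained sketch of the same idea, using simplified names
(BFQQ_FNS, BFQQ_FLAG_*) that are illustrative rather than the exact kernel
definitions, is:

#include <stdio.h>

/* Simplified stand-in for struct bfq_queue: only the flags word matters here. */
struct bfq_queue {
	unsigned int flags;
};

enum bfqq_state_flags {
	BFQQ_FLAG_busy = 0,	/* has requests or is in service */
	BFQQ_FLAG_sync		/* synchronous queue */
};

/* Generator in the spirit of BFQ_BFQQ_FNS(): one mark/clear/test trio per flag. */
#define BFQQ_FNS(name)						\
static void bfq_mark_bfqq_##name(struct bfq_queue *bfqq)	\
{								\
	bfqq->flags |= 1U << BFQQ_FLAG_##name;			\
}								\
static void bfq_clear_bfqq_##name(struct bfq_queue *bfqq)	\
{								\
	bfqq->flags &= ~(1U << BFQQ_FLAG_##name);		\
}								\
static int bfq_bfqq_##name(const struct bfq_queue *bfqq)	\
{								\
	return (bfqq->flags & (1U << BFQQ_FLAG_##name)) != 0;	\
}

BFQQ_FNS(busy)
BFQQ_FNS(sync)

int main(void)
{
	struct bfq_queue q = { 0 };

	bfq_mark_bfqq_busy(&q);
	bfq_mark_bfqq_sync(&q);
	printf("busy=%d sync=%d\n", bfq_bfqq_busy(&q), bfq_bfqq_sync(&q));
	bfq_clear_bfqq_busy(&q);
	bfq_clear_bfqq_sync(&q);
	printf("busy=%d sync=%d\n", bfq_bfqq_busy(&q), bfq_bfqq_sync(&q));
	return 0;
}

Each invocation expands to a mark/clear/test trio over one bit of the queue's
flags word, which is why the patch can add or drop whole queue states
(just_created, non_blocking_wait_rq, budget_new, ...) by editing a single enum
entry plus one BFQ_BFQQ_FNS() line.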


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-10-28 14:03 Alice Ferrazzi
  0 siblings, 0 replies; 26+ messages in thread
From: Alice Ferrazzi @ 2016-10-28 14:03 UTC (permalink / raw
  To: gentoo-commits

commit:     311c3d27cdc9e022bf07925e0c8ff10c8d610c26
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Oct 28 14:04:27 2016 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Oct 28 14:04:27 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=311c3d27

Linux patch 4.8.5

 0000_README            |    4 +
 1004_linux-4.8.5.patch | 5397 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5401 insertions(+)

diff --git a/0000_README b/0000_README
index c65f1ed..a5b48e4 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-4.8.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.4
 
+Patch:  1004_linux-4.8.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-4.8.5.patch b/1004_linux-4.8.5.patch
new file mode 100644
index 0000000..2074a32
--- /dev/null
+++ b/1004_linux-4.8.5.patch
@@ -0,0 +1,5397 @@
+diff --git a/Documentation/ABI/testing/sysfs-class-cxl b/Documentation/ABI/testing/sysfs-class-cxl
+index 4ba0a2a61926..640f65e79ef1 100644
+--- a/Documentation/ABI/testing/sysfs-class-cxl
++++ b/Documentation/ABI/testing/sysfs-class-cxl
+@@ -220,8 +220,11 @@ What:           /sys/class/cxl/<card>/reset
+ Date:           October 2014
+ Contact:        linuxppc-dev@lists.ozlabs.org
+ Description:    write only
+-                Writing 1 will issue a PERST to card which may cause the card
+-                to reload the FPGA depending on load_image_on_perst.
++                Writing 1 will issue a PERST to card provided there are no
++                contexts active on any one of the card AFUs. This may cause
++                the card to reload the FPGA depending on load_image_on_perst.
++                Writing -1 will do a force PERST irrespective of any active
++                contexts on the card AFUs.
+ Users:		https://github.com/ibm-capi/libcxl
+ 
+ What:		/sys/class/cxl/<card>/perst_reloads_same_image (not in a guest)
+diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
+index a4f4d693e2c1..46726d4899fe 100644
+--- a/Documentation/kernel-parameters.txt
++++ b/Documentation/kernel-parameters.txt
+@@ -1457,7 +1457,14 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
+ 	i8042.nopnp	[HW] Don't use ACPIPnP / PnPBIOS to discover KBD/AUX
+ 			     controllers
+ 	i8042.notimeout	[HW] Ignore timeout condition signalled by controller
+-	i8042.reset	[HW] Reset the controller during init and cleanup
++	i8042.reset	[HW] Reset the controller during init, cleanup and
++			     suspend-to-ram transitions, only during s2r
++			     transitions, or never reset
++			Format: { 1 | Y | y | 0 | N | n }
++			1, Y, y: always reset controller
++			0, N, n: don't ever reset controller
++			Default: only on s2r transitions on x86; most other
++			architectures force reset to be always executed
+ 	i8042.unlock	[HW] Unlock (ignore) the keylock
+ 	i8042.kbdreset  [HW] Reset device connected to KBD port
+ 
+diff --git a/Makefile b/Makefile
+index 82a36ab540a4..daa3a01d2525 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arc/kernel/signal.c b/arch/arc/kernel/signal.c
+index 6cb3736b6b83..d347bbc086fe 100644
+--- a/arch/arc/kernel/signal.c
++++ b/arch/arc/kernel/signal.c
+@@ -107,13 +107,13 @@ static int restore_usr_regs(struct pt_regs *regs, struct rt_sigframe __user *sf)
+ 	struct user_regs_struct uregs;
+ 
+ 	err = __copy_from_user(&set, &sf->uc.uc_sigmask, sizeof(set));
+-	if (!err)
+-		set_current_blocked(&set);
+-
+ 	err |= __copy_from_user(&uregs.scratch,
+ 				&(sf->uc.uc_mcontext.regs.scratch),
+ 				sizeof(sf->uc.uc_mcontext.regs.scratch));
++	if (err)
++		return err;
+ 
++	set_current_blocked(&set);
+ 	regs->bta	= uregs.scratch.bta;
+ 	regs->lp_start	= uregs.scratch.lp_start;
+ 	regs->lp_end	= uregs.scratch.lp_end;
+@@ -138,7 +138,7 @@ static int restore_usr_regs(struct pt_regs *regs, struct rt_sigframe __user *sf)
+ 	regs->r0	= uregs.scratch.r0;
+ 	regs->sp	= uregs.scratch.sp;
+ 
+-	return err;
++	return 0;
+ }
+ 
+ static inline int is_do_ss_needed(unsigned int magic)
+diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
+index 4cdeae3b17c6..948a9a8a9297 100644
+--- a/arch/arm64/include/asm/kvm_emulate.h
++++ b/arch/arm64/include/asm/kvm_emulate.h
+@@ -167,11 +167,6 @@ static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
+ 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
+ }
+ 
+-static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
+-{
+-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
+-}
+-
+ static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
+ {
+ 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
+@@ -192,6 +187,12 @@ static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
+ 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
+ }
+ 
++static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
++{
++	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
++		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
++}
++
+ static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
+ {
+ 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);
+diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
+index e12af6754634..06ff7fd9e81f 100644
+--- a/arch/arm64/include/asm/module.h
++++ b/arch/arm64/include/asm/module.h
+@@ -17,6 +17,7 @@
+ #define __ASM_MODULE_H
+ 
+ #include <asm-generic/module.h>
++#include <asm/memory.h>
+ 
+ #define MODULE_ARCH_VERMAGIC	"aarch64"
+ 
+@@ -32,6 +33,10 @@ u64 module_emit_plt_entry(struct module *mod, const Elf64_Rela *rela,
+ 			  Elf64_Sym *sym);
+ 
+ #ifdef CONFIG_RANDOMIZE_BASE
++#ifdef CONFIG_MODVERSIONS
++#define ARCH_RELOCATES_KCRCTAB
++#define reloc_start 		(kimage_vaddr - KIMAGE_VADDR)
++#endif
+ extern u64 module_alloc_base;
+ #else
+ #define module_alloc_base	((u64)_etext - MODULES_VSIZE)
+diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
+index 2fee2f59288c..5394c8405e66 100644
+--- a/arch/arm64/include/asm/percpu.h
++++ b/arch/arm64/include/asm/percpu.h
+@@ -44,48 +44,44 @@ static inline unsigned long __percpu_##op(void *ptr,			\
+ 									\
+ 	switch (size) {							\
+ 	case 1:								\
+-		do {							\
+-			asm ("//__per_cpu_" #op "_1\n"			\
+-			"ldxrb	  %w[ret], %[ptr]\n"			\
++		asm ("//__per_cpu_" #op "_1\n"				\
++		"1:	ldxrb	  %w[ret], %[ptr]\n"			\
+ 			#asm_op " %w[ret], %w[ret], %w[val]\n"		\
+-			"stxrb	  %w[loop], %w[ret], %[ptr]\n"		\
+-			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+-			  [ptr] "+Q"(*(u8 *)ptr)			\
+-			: [val] "Ir" (val));				\
+-		} while (loop);						\
++		"	stxrb	  %w[loop], %w[ret], %[ptr]\n"		\
++		"	cbnz	  %w[loop], 1b"				\
++		: [loop] "=&r" (loop), [ret] "=&r" (ret),		\
++		  [ptr] "+Q"(*(u8 *)ptr)				\
++		: [val] "Ir" (val));					\
+ 		break;							\
+ 	case 2:								\
+-		do {							\
+-			asm ("//__per_cpu_" #op "_2\n"			\
+-			"ldxrh	  %w[ret], %[ptr]\n"			\
++		asm ("//__per_cpu_" #op "_2\n"				\
++		"1:	ldxrh	  %w[ret], %[ptr]\n"			\
+ 			#asm_op " %w[ret], %w[ret], %w[val]\n"		\
+-			"stxrh	  %w[loop], %w[ret], %[ptr]\n"		\
+-			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+-			  [ptr]  "+Q"(*(u16 *)ptr)			\
+-			: [val] "Ir" (val));				\
+-		} while (loop);						\
++		"	stxrh	  %w[loop], %w[ret], %[ptr]\n"		\
++		"	cbnz	  %w[loop], 1b"				\
++		: [loop] "=&r" (loop), [ret] "=&r" (ret),		\
++		  [ptr]  "+Q"(*(u16 *)ptr)				\
++		: [val] "Ir" (val));					\
+ 		break;							\
+ 	case 4:								\
+-		do {							\
+-			asm ("//__per_cpu_" #op "_4\n"			\
+-			"ldxr	  %w[ret], %[ptr]\n"			\
++		asm ("//__per_cpu_" #op "_4\n"				\
++		"1:	ldxr	  %w[ret], %[ptr]\n"			\
+ 			#asm_op " %w[ret], %w[ret], %w[val]\n"		\
+-			"stxr	  %w[loop], %w[ret], %[ptr]\n"		\
+-			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+-			  [ptr] "+Q"(*(u32 *)ptr)			\
+-			: [val] "Ir" (val));				\
+-		} while (loop);						\
++		"	stxr	  %w[loop], %w[ret], %[ptr]\n"		\
++		"	cbnz	  %w[loop], 1b"				\
++		: [loop] "=&r" (loop), [ret] "=&r" (ret),		\
++		  [ptr] "+Q"(*(u32 *)ptr)				\
++		: [val] "Ir" (val));					\
+ 		break;							\
+ 	case 8:								\
+-		do {							\
+-			asm ("//__per_cpu_" #op "_8\n"			\
+-			"ldxr	  %[ret], %[ptr]\n"			\
++		asm ("//__per_cpu_" #op "_8\n"				\
++		"1:	ldxr	  %[ret], %[ptr]\n"			\
+ 			#asm_op " %[ret], %[ret], %[val]\n"		\
+-			"stxr	  %w[loop], %[ret], %[ptr]\n"		\
+-			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+-			  [ptr] "+Q"(*(u64 *)ptr)			\
+-			: [val] "Ir" (val));				\
+-		} while (loop);						\
++		"	stxr	  %w[loop], %[ret], %[ptr]\n"		\
++		"	cbnz	  %w[loop], 1b"				\
++		: [loop] "=&r" (loop), [ret] "=&r" (ret),		\
++		  [ptr] "+Q"(*(u64 *)ptr)				\
++		: [val] "Ir" (val));					\
+ 		break;							\
+ 	default:							\
+ 		BUILD_BUG();						\
+@@ -150,44 +146,40 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
+ 
+ 	switch (size) {
+ 	case 1:
+-		do {
+-			asm ("//__percpu_xchg_1\n"
+-			"ldxrb %w[ret], %[ptr]\n"
+-			"stxrb %w[loop], %w[val], %[ptr]\n"
+-			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+-			  [ptr] "+Q"(*(u8 *)ptr)
+-			: [val] "r" (val));
+-		} while (loop);
++		asm ("//__percpu_xchg_1\n"
++		"1:	ldxrb	%w[ret], %[ptr]\n"
++		"	stxrb	%w[loop], %w[val], %[ptr]\n"
++		"	cbnz	%w[loop], 1b"
++		: [loop] "=&r"(loop), [ret] "=&r"(ret),
++		  [ptr] "+Q"(*(u8 *)ptr)
++		: [val] "r" (val));
+ 		break;
+ 	case 2:
+-		do {
+-			asm ("//__percpu_xchg_2\n"
+-			"ldxrh %w[ret], %[ptr]\n"
+-			"stxrh %w[loop], %w[val], %[ptr]\n"
+-			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+-			  [ptr] "+Q"(*(u16 *)ptr)
+-			: [val] "r" (val));
+-		} while (loop);
++		asm ("//__percpu_xchg_2\n"
++		"1:	ldxrh	%w[ret], %[ptr]\n"
++		"	stxrh	%w[loop], %w[val], %[ptr]\n"
++		"	cbnz	%w[loop], 1b"
++		: [loop] "=&r"(loop), [ret] "=&r"(ret),
++		  [ptr] "+Q"(*(u16 *)ptr)
++		: [val] "r" (val));
+ 		break;
+ 	case 4:
+-		do {
+-			asm ("//__percpu_xchg_4\n"
+-			"ldxr %w[ret], %[ptr]\n"
+-			"stxr %w[loop], %w[val], %[ptr]\n"
+-			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+-			  [ptr] "+Q"(*(u32 *)ptr)
+-			: [val] "r" (val));
+-		} while (loop);
++		asm ("//__percpu_xchg_4\n"
++		"1:	ldxr	%w[ret], %[ptr]\n"
++		"	stxr	%w[loop], %w[val], %[ptr]\n"
++		"	cbnz	%w[loop], 1b"
++		: [loop] "=&r"(loop), [ret] "=&r"(ret),
++		  [ptr] "+Q"(*(u32 *)ptr)
++		: [val] "r" (val));
+ 		break;
+ 	case 8:
+-		do {
+-			asm ("//__percpu_xchg_8\n"
+-			"ldxr %[ret], %[ptr]\n"
+-			"stxr %w[loop], %[val], %[ptr]\n"
+-			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+-			  [ptr] "+Q"(*(u64 *)ptr)
+-			: [val] "r" (val));
+-		} while (loop);
++		asm ("//__percpu_xchg_8\n"
++		"1:	ldxr	%[ret], %[ptr]\n"
++		"	stxr	%w[loop], %[val], %[ptr]\n"
++		"	cbnz	%w[loop], 1b"
++		: [loop] "=&r"(loop), [ret] "=&r"(ret),
++		  [ptr] "+Q"(*(u64 *)ptr)
++		: [val] "r" (val));
+ 		break;
+ 	default:
+ 		BUILD_BUG();
+diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
+index c47257c91b77..db849839e07b 100644
+--- a/arch/arm64/include/asm/uaccess.h
++++ b/arch/arm64/include/asm/uaccess.h
+@@ -21,6 +21,7 @@
+ /*
+  * User space memory access functions
+  */
++#include <linux/bitops.h>
+ #include <linux/kasan-checks.h>
+ #include <linux/string.h>
+ #include <linux/thread_info.h>
+@@ -102,6 +103,13 @@ static inline void set_fs(mm_segment_t fs)
+ 	flag;								\
+ })
+ 
++/*
++ * When dealing with data aborts or instruction traps we may end up with
++ * a tagged userland pointer. Clear the tag to get a sane pointer to pass
++ * on to access_ok(), for instance.
++ */
++#define untagged_addr(addr)		sign_extend64(addr, 55)
++
+ #define access_ok(type, addr, size)	__range_ok(addr, size)
+ #define user_addr_max			get_fs
+ 
+diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
+index 42ffdb54e162..b0988bb1bf64 100644
+--- a/arch/arm64/kernel/armv8_deprecated.c
++++ b/arch/arm64/kernel/armv8_deprecated.c
+@@ -280,35 +280,43 @@ static void __init register_insn_emulation_sysctl(struct ctl_table *table)
+ /*
+  * Error-checking SWP macros implemented using ldxr{b}/stxr{b}
+  */
+-#define __user_swpX_asm(data, addr, res, temp, B)		\
++
++/* Arbitrary constant to ensure forward-progress of the LL/SC loop */
++#define __SWP_LL_SC_LOOPS	4
++
++#define __user_swpX_asm(data, addr, res, temp, temp2, B)	\
+ 	__asm__ __volatile__(					\
++	"	mov		%w3, %w7\n"			\
+ 	ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+ 		    CONFIG_ARM64_PAN)				\
+-	"0:	ldxr"B"		%w2, [%3]\n"			\
+-	"1:	stxr"B"		%w0, %w1, [%3]\n"		\
++	"0:	ldxr"B"		%w2, [%4]\n"			\
++	"1:	stxr"B"		%w0, %w1, [%4]\n"		\
+ 	"	cbz		%w0, 2f\n"			\
+-	"	mov		%w0, %w4\n"			\
++	"	sub		%w3, %w3, #1\n"			\
++	"	cbnz		%w3, 0b\n"			\
++	"	mov		%w0, %w5\n"			\
+ 	"	b		3f\n"				\
+ 	"2:\n"							\
+ 	"	mov		%w1, %w2\n"			\
+ 	"3:\n"							\
+ 	"	.pushsection	 .fixup,\"ax\"\n"		\
+ 	"	.align		2\n"				\
+-	"4:	mov		%w0, %w5\n"			\
++	"4:	mov		%w0, %w6\n"			\
+ 	"	b		3b\n"				\
+ 	"	.popsection"					\
+ 	_ASM_EXTABLE(0b, 4b)					\
+ 	_ASM_EXTABLE(1b, 4b)					\
+ 	ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+ 		CONFIG_ARM64_PAN)				\
+-	: "=&r" (res), "+r" (data), "=&r" (temp)		\
+-	: "r" (addr), "i" (-EAGAIN), "i" (-EFAULT)		\
++	: "=&r" (res), "+r" (data), "=&r" (temp), "=&r" (temp2)	\
++	: "r" (addr), "i" (-EAGAIN), "i" (-EFAULT),		\
++	  "i" (__SWP_LL_SC_LOOPS)				\
+ 	: "memory")
+ 
+-#define __user_swp_asm(data, addr, res, temp) \
+-	__user_swpX_asm(data, addr, res, temp, "")
+-#define __user_swpb_asm(data, addr, res, temp) \
+-	__user_swpX_asm(data, addr, res, temp, "b")
++#define __user_swp_asm(data, addr, res, temp, temp2) \
++	__user_swpX_asm(data, addr, res, temp, temp2, "")
++#define __user_swpb_asm(data, addr, res, temp, temp2) \
++	__user_swpX_asm(data, addr, res, temp, temp2, "b")
+ 
+ /*
+  * Bit 22 of the instruction encoding distinguishes between
+@@ -328,12 +336,12 @@ static int emulate_swpX(unsigned int address, unsigned int *data,
+ 	}
+ 
+ 	while (1) {
+-		unsigned long temp;
++		unsigned long temp, temp2;
+ 
+ 		if (type == TYPE_SWPB)
+-			__user_swpb_asm(*data, address, res, temp);
++			__user_swpb_asm(*data, address, res, temp, temp2);
+ 		else
+-			__user_swp_asm(*data, address, res, temp);
++			__user_swp_asm(*data, address, res, temp, temp2);
+ 
+ 		if (likely(res != -EAGAIN) || signal_pending(current))
+ 			break;
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index 3e7b050e99dc..4d19508c55a3 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -578,8 +578,9 @@ CPU_LE(	movk	x0, #0x30d0, lsl #16	)	// Clear EE and E0E on LE systems
+ 	b.lt	4f				// Skip if no PMU present
+ 	mrs	x0, pmcr_el0			// Disable debug access traps
+ 	ubfx	x0, x0, #11, #5			// to EL2 and allow access to
+-	msr	mdcr_el2, x0			// all PMU counters from EL1
+ 4:
++	csel	x0, xzr, x0, lt			// all PMU counters from EL1
++	msr	mdcr_el2, x0			// (if they exist)
+ 
+ 	/* Stage-2 translation */
+ 	msr	vttbr_el2, xzr
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index df06750846de..771a01a7fbce 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -434,18 +434,21 @@ void cpu_enable_cache_maint_trap(void *__unused)
+ }
+ 
+ #define __user_cache_maint(insn, address, res)			\
+-	asm volatile (						\
+-		"1:	" insn ", %1\n"				\
+-		"	mov	%w0, #0\n"			\
+-		"2:\n"						\
+-		"	.pushsection .fixup,\"ax\"\n"		\
+-		"	.align	2\n"				\
+-		"3:	mov	%w0, %w2\n"			\
+-		"	b	2b\n"				\
+-		"	.popsection\n"				\
+-		_ASM_EXTABLE(1b, 3b)				\
+-		: "=r" (res)					\
+-		: "r" (address), "i" (-EFAULT) )
++	if (untagged_addr(address) >= user_addr_max())		\
++		res = -EFAULT;					\
++	else							\
++		asm volatile (					\
++			"1:	" insn ", %1\n"			\
++			"	mov	%w0, #0\n"		\
++			"2:\n"					\
++			"	.pushsection .fixup,\"ax\"\n"	\
++			"	.align	2\n"			\
++			"3:	mov	%w0, %w2\n"		\
++			"	b	2b\n"			\
++			"	.popsection\n"			\
++			_ASM_EXTABLE(1b, 3b)			\
++			: "=r" (res)				\
++			: "r" (address), "i" (-EFAULT) )
+ 
+ asmlinkage void __exception do_sysinstr(unsigned int esr, struct pt_regs *regs)
+ {
+diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
+index ce9e5e5f28cf..eaf08d3abbef 100644
+--- a/arch/arm64/kvm/hyp/entry.S
++++ b/arch/arm64/kvm/hyp/entry.S
+@@ -98,6 +98,8 @@ ENTRY(__guest_exit)
+ 	// x4-x29,lr: vcpu regs
+ 	// vcpu x0-x3 on the stack
+ 
++	ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
++
+ 	add	x2, x0, #VCPU_CONTEXT
+ 
+ 	stp	x4, x5,   [x2, #CPU_XREG_OFFSET(4)]
+diff --git a/arch/metag/include/asm/atomic.h b/arch/metag/include/asm/atomic.h
+index 470e365f04ea..8ff0a70865f6 100644
+--- a/arch/metag/include/asm/atomic.h
++++ b/arch/metag/include/asm/atomic.h
+@@ -39,11 +39,10 @@
+ #define atomic_dec(v) atomic_sub(1, (v))
+ 
+ #define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
++#define atomic_dec_if_positive(v)       atomic_sub_if_positive(1, v)
+ 
+ #endif
+ 
+-#define atomic_dec_if_positive(v)       atomic_sub_if_positive(1, v)
+-
+ #include <asm-generic/atomic64.h>
+ 
+ #endif /* __ASM_METAG_ATOMIC_H */
+diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h
+index f6fc6aac5496..b6578611dddb 100644
+--- a/arch/mips/include/asm/ptrace.h
++++ b/arch/mips/include/asm/ptrace.h
+@@ -152,7 +152,7 @@ static inline int is_syscall_success(struct pt_regs *regs)
+ 
+ static inline long regs_return_value(struct pt_regs *regs)
+ {
+-	if (is_syscall_success(regs))
++	if (is_syscall_success(regs) || !user_mode(regs))
+ 		return regs->regs[2];
+ 	else
+ 		return -regs->regs[2];
+diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
+index 3b4538ec0102..de9e8836d248 100644
+--- a/arch/mips/vdso/Makefile
++++ b/arch/mips/vdso/Makefile
+@@ -82,7 +82,7 @@ obj-vdso := $(obj-vdso-y:%.o=$(obj)/%.o)
+ $(obj-vdso): KBUILD_CFLAGS := $(cflags-vdso) $(native-abi)
+ $(obj-vdso): KBUILD_AFLAGS := $(aflags-vdso) $(native-abi)
+ 
+-$(obj)/vdso.lds: KBUILD_CPPFLAGS := $(native-abi)
++$(obj)/vdso.lds: KBUILD_CPPFLAGS := $(ccflags-vdso) $(native-abi)
+ 
+ $(obj)/vdso.so.dbg.raw: $(obj)/vdso.lds $(obj-vdso) FORCE
+ 	$(call if_changed,vdsold)
+diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
+index 291cee28ccb6..c2c43f714684 100644
+--- a/arch/parisc/include/asm/pgtable.h
++++ b/arch/parisc/include/asm/pgtable.h
+@@ -83,10 +83,10 @@ static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+ 	printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, (unsigned long)pgd_val(e))
+ 
+ /* This is the size of the initially mapped kernel memory */
+-#ifdef CONFIG_64BIT
+-#define KERNEL_INITIAL_ORDER	25	/* 1<<25 = 32MB */
++#if defined(CONFIG_64BIT)
++#define KERNEL_INITIAL_ORDER	26	/* 1<<26 = 64MB */
+ #else
+-#define KERNEL_INITIAL_ORDER	24	/* 1<<24 = 16MB */
++#define KERNEL_INITIAL_ORDER	25	/* 1<<25 = 32MB */
+ #endif
+ #define KERNEL_INITIAL_SIZE	(1 << KERNEL_INITIAL_ORDER)
+ 
+diff --git a/arch/parisc/kernel/setup.c b/arch/parisc/kernel/setup.c
+index f7ea626e29c9..81d6f6391944 100644
+--- a/arch/parisc/kernel/setup.c
++++ b/arch/parisc/kernel/setup.c
+@@ -38,6 +38,7 @@
+ #include <linux/export.h>
+ 
+ #include <asm/processor.h>
++#include <asm/sections.h>
+ #include <asm/pdc.h>
+ #include <asm/led.h>
+ #include <asm/machdep.h>	/* for pa7300lc_init() proto */
+@@ -140,6 +141,13 @@ void __init setup_arch(char **cmdline_p)
+ #endif
+ 	printk(KERN_CONT ".\n");
+ 
++	/*
++	 * Check if initial kernel page mappings are sufficient.
++	 * panic early if not, else we may access kernel functions
++	 * and variables which can't be reached.
++	 */
++	if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE)
++		panic("KERNEL_INITIAL_ORDER too small!");
+ 
+ 	pdc_console_init();
+ 
+diff --git a/arch/parisc/kernel/time.c b/arch/parisc/kernel/time.c
+index 4b0b963d52a7..9b63b876a13a 100644
+--- a/arch/parisc/kernel/time.c
++++ b/arch/parisc/kernel/time.c
+@@ -226,12 +226,6 @@ void __init start_cpu_itimer(void)
+ 	unsigned int cpu = smp_processor_id();
+ 	unsigned long next_tick = mfctl(16) + clocktick;
+ 
+-#if defined(CONFIG_HAVE_UNSTABLE_SCHED_CLOCK) && defined(CONFIG_64BIT)
+-	/* With multiple 64bit CPUs online, the cr16's are not syncronized. */
+-	if (cpu != 0)
+-		clear_sched_clock_stable();
+-#endif
+-
+ 	mtctl(next_tick, 16);		/* kick off Interval Timer (CR16) */
+ 
+ 	per_cpu(cpu_data, cpu).it_value = next_tick;
+diff --git a/arch/parisc/kernel/vmlinux.lds.S b/arch/parisc/kernel/vmlinux.lds.S
+index f3ead0b6ce46..75304af9f742 100644
+--- a/arch/parisc/kernel/vmlinux.lds.S
++++ b/arch/parisc/kernel/vmlinux.lds.S
+@@ -89,8 +89,9 @@ SECTIONS
+ 	/* Start of data section */
+ 	_sdata = .;
+ 
+-	RO_DATA_SECTION(8)
+-
++	/* Architecturally we need to keep __gp below 0x1000000 and thus
++	 * in front of RO_DATA_SECTION() which stores lots of tracepoint
++	 * and ftrace symbols. */
+ #ifdef CONFIG_64BIT
+ 	. = ALIGN(16);
+ 	/* Linkage tables */
+@@ -105,6 +106,8 @@ SECTIONS
+ 	}
+ #endif
+ 
++	RO_DATA_SECTION(8)
++
+ 	/* unwind info */
+ 	.PARISC.unwind : {
+ 		__start___unwind = .;
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 927d2ab2ce08..792cb1768c8f 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -637,7 +637,7 @@ config FORCE_MAX_ZONEORDER
+ 	int "Maximum zone order"
+ 	range 8 9 if PPC64 && PPC_64K_PAGES
+ 	default "9" if PPC64 && PPC_64K_PAGES
+-	range 9 13 if PPC64 && !PPC_64K_PAGES
++	range 13 13 if PPC64 && !PPC_64K_PAGES
+ 	default "13" if PPC64 && !PPC_64K_PAGES
+ 	range 9 64 if PPC32 && PPC_16K_PAGES
+ 	default "9" if PPC32 && PPC_16K_PAGES
+diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
+index 5f36e8a70daa..29aa8d1ce273 100644
+--- a/arch/powerpc/kernel/eeh_driver.c
++++ b/arch/powerpc/kernel/eeh_driver.c
+@@ -994,6 +994,14 @@ static void eeh_handle_special_event(void)
+ 				/* Notify all devices to be down */
+ 				eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
+ 				bus = eeh_pe_bus_get(phb_pe);
++				if (!bus) {
++					pr_err("%s: Cannot find PCI bus for "
++					       "PHB#%d-PE#%x\n",
++					       __func__,
++					       pe->phb->global_number,
++					       pe->addr);
++					break;
++				}
+ 				eeh_pe_dev_traverse(pe,
+ 					eeh_report_failure, NULL);
+ 				pci_hp_remove_devices(bus);
+diff --git a/arch/powerpc/kernel/vdso64/datapage.S b/arch/powerpc/kernel/vdso64/datapage.S
+index 184a6ba7f283..abf17feffe40 100644
+--- a/arch/powerpc/kernel/vdso64/datapage.S
++++ b/arch/powerpc/kernel/vdso64/datapage.S
+@@ -59,7 +59,7 @@ V_FUNCTION_BEGIN(__kernel_get_syscall_map)
+ 	bl	V_LOCAL_FUNC(__get_datapage)
+ 	mtlr	r12
+ 	addi	r3,r3,CFG_SYSCALL_MAP64
+-	cmpli	cr0,r4,0
++	cmpldi	cr0,r4,0
+ 	crclr	cr0*4+so
+ 	beqlr
+ 	li	r0,NR_syscalls
+diff --git a/arch/powerpc/kernel/vdso64/gettimeofday.S b/arch/powerpc/kernel/vdso64/gettimeofday.S
+index a76b4af37ef2..382021324883 100644
+--- a/arch/powerpc/kernel/vdso64/gettimeofday.S
++++ b/arch/powerpc/kernel/vdso64/gettimeofday.S
+@@ -145,7 +145,7 @@ V_FUNCTION_BEGIN(__kernel_clock_getres)
+ 	bne	cr0,99f
+ 
+ 	li	r3,0
+-	cmpli	cr0,r4,0
++	cmpldi	cr0,r4,0
+ 	crclr	cr0*4+so
+ 	beqlr
+ 	lis	r5,CLOCK_REALTIME_RES@h
+diff --git a/arch/powerpc/lib/copyuser_64.S b/arch/powerpc/lib/copyuser_64.S
+index f09899e35991..7b22624f332c 100644
+--- a/arch/powerpc/lib/copyuser_64.S
++++ b/arch/powerpc/lib/copyuser_64.S
+@@ -359,6 +359,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
+ 	addi	r3,r3,8
+ 171:
+ 177:
++179:
+ 	addi	r3,r3,8
+ 370:
+ 372:
+@@ -373,7 +374,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
+ 173:
+ 174:
+ 175:
+-179:
+ 181:
+ 184:
+ 186:
+diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c
+index bb0354222b11..362954f98029 100644
+--- a/arch/powerpc/mm/copro_fault.c
++++ b/arch/powerpc/mm/copro_fault.c
+@@ -106,6 +106,8 @@ int copro_calculate_slb(struct mm_struct *mm, u64 ea, struct copro_slb *slb)
+ 	switch (REGION_ID(ea)) {
+ 	case USER_REGION_ID:
+ 		pr_devel("%s: 0x%llx -- USER_REGION_ID\n", __func__, ea);
++		if (mm == NULL)
++			return 1;
+ 		psize = get_slice_psize(mm, ea);
+ 		ssize = user_segment_size(ea);
+ 		vsid = get_vsid(mm->context.id, ea, ssize);
+diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
+index 0821556e16f4..28923b2e2df1 100644
+--- a/arch/powerpc/mm/hash_utils_64.c
++++ b/arch/powerpc/mm/hash_utils_64.c
+@@ -526,7 +526,7 @@ static bool might_have_hea(void)
+ 	 */
+ #ifdef CONFIG_IBMEBUS
+ 	return !cpu_has_feature(CPU_FTR_ARCH_207S) &&
+-		!firmware_has_feature(FW_FEATURE_SPLPAR);
++		firmware_has_feature(FW_FEATURE_SPLPAR);
+ #else
+ 	return false;
+ #endif
+diff --git a/arch/powerpc/platforms/powernv/eeh-powernv.c b/arch/powerpc/platforms/powernv/eeh-powernv.c
+index 86544ea85dc3..ba17fdd87ab0 100644
+--- a/arch/powerpc/platforms/powernv/eeh-powernv.c
++++ b/arch/powerpc/platforms/powernv/eeh-powernv.c
+@@ -1091,6 +1091,11 @@ static int pnv_eeh_reset(struct eeh_pe *pe, int option)
+ 	}
+ 
+ 	bus = eeh_pe_bus_get(pe);
++	if (!bus) {
++		pr_err("%s: Cannot find PCI bus for PHB#%d-PE#%x\n",
++			__func__, pe->phb->global_number, pe->addr);
++		return -EIO;
++	}
+ 	if (pe->type & EEH_PE_VF)
+ 		return pnv_eeh_reset_vf_pe(pe, option);
+ 
+@@ -1306,7 +1311,7 @@ static void pnv_eeh_get_and_dump_hub_diag(struct pci_controller *hose)
+ 		return;
+ 	}
+ 
+-	switch (data->type) {
++	switch (be16_to_cpu(data->type)) {
+ 	case OPAL_P7IOC_DIAG_TYPE_RGC:
+ 		pr_info("P7IOC diag-data for RGC\n\n");
+ 		pnv_eeh_dump_hub_diag_common(data);
+@@ -1538,7 +1543,7 @@ static int pnv_eeh_next_error(struct eeh_pe **pe)
+ 
+ 				/* Try best to clear it */
+ 				opal_pci_eeh_freeze_clear(phb->opal_id,
+-					frozen_pe_no,
++					be64_to_cpu(frozen_pe_no),
+ 					OPAL_EEH_ACTION_CLEAR_FREEZE_ALL);
+ 				ret = EEH_NEXT_ERR_NONE;
+ 			} else if ((*pe)->state & EEH_PE_ISOLATED ||
+diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
+index a21d831c1114..0fe3520058a5 100644
+--- a/arch/powerpc/platforms/powernv/pci.c
++++ b/arch/powerpc/platforms/powernv/pci.c
+@@ -309,8 +309,8 @@ static void pnv_pci_dump_p7ioc_diag_data(struct pci_controller *hose,
+ 			be64_to_cpu(data->dma1ErrorLog1));
+ 
+ 	for (i = 0; i < OPAL_P7IOC_NUM_PEST_REGS; i++) {
+-		if ((data->pestA[i] >> 63) == 0 &&
+-		    (data->pestB[i] >> 63) == 0)
++		if ((be64_to_cpu(data->pestA[i]) >> 63) == 0 &&
++		    (be64_to_cpu(data->pestB[i]) >> 63) == 0)
+ 			continue;
+ 
+ 		pr_info("PE[%3d] A/B: %016llx %016llx\n",
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 86707e67843f..aa35245d8d6d 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -393,7 +393,7 @@ static void __pSeries_lpar_hugepage_invalidate(unsigned long *slot,
+ 					     unsigned long *vpn, int count,
+ 					     int psize, int ssize)
+ {
+-	unsigned long param[8];
++	unsigned long param[PLPAR_HCALL9_BUFSIZE];
+ 	int i = 0, pix = 0, rc;
+ 	unsigned long flags = 0;
+ 	int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+@@ -522,7 +522,7 @@ static void pSeries_lpar_flush_hash_range(unsigned long number, int local)
+ 	unsigned long flags = 0;
+ 	struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+ 	int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+-	unsigned long param[9];
++	unsigned long param[PLPAR_HCALL9_BUFSIZE];
+ 	unsigned long hash, index, shift, hidx, slot;
+ 	real_pte_t pte;
+ 	int psize, ssize;
+diff --git a/arch/powerpc/sysdev/cpm1.c b/arch/powerpc/sysdev/cpm1.c
+index 81d49476c47e..82e8e2b6a3c4 100644
+--- a/arch/powerpc/sysdev/cpm1.c
++++ b/arch/powerpc/sysdev/cpm1.c
+@@ -233,8 +233,6 @@ void __init cpm_reset(void)
+ 	else
+ 		out_be32(&siu_conf->sc_sdcr, 1);
+ 	immr_unmap(siu_conf);
+-
+-	cpm_muram_init();
+ }
+ 
+ static DEFINE_SPINLOCK(cmd_lock);
+diff --git a/arch/powerpc/sysdev/cpm2.c b/arch/powerpc/sysdev/cpm2.c
+index 8dc1e24f3c23..f78ff841652c 100644
+--- a/arch/powerpc/sysdev/cpm2.c
++++ b/arch/powerpc/sysdev/cpm2.c
+@@ -66,10 +66,6 @@ void __init cpm2_reset(void)
+ 	cpm2_immr = ioremap(get_immrbase(), CPM_MAP_SIZE);
+ #endif
+ 
+-	/* Reclaim the DP memory for our use.
+-	 */
+-	cpm_muram_init();
+-
+ 	/* Tell everyone where the comm processor resides.
+ 	 */
+ 	cpmp = &cpm2_immr->im_cpm;
+diff --git a/arch/powerpc/sysdev/cpm_common.c b/arch/powerpc/sysdev/cpm_common.c
+index 947f42007734..51bf749a4f3a 100644
+--- a/arch/powerpc/sysdev/cpm_common.c
++++ b/arch/powerpc/sysdev/cpm_common.c
+@@ -37,6 +37,21 @@
+ #include <linux/of_gpio.h>
+ #endif
+ 
++static int __init cpm_init(void)
++{
++	struct device_node *np;
++
++	np = of_find_compatible_node(NULL, NULL, "fsl,cpm1");
++	if (!np)
++		np = of_find_compatible_node(NULL, NULL, "fsl,cpm2");
++	if (!np)
++		return -ENODEV;
++	cpm_muram_init();
++	of_node_put(np);
++	return 0;
++}
++subsys_initcall(cpm_init);
++
+ #ifdef CONFIG_PPC_EARLY_DEBUG_CPM
+ static u32 __iomem *cpm_udbg_txdesc;
+ static u8 __iomem *cpm_udbg_txbuf;
+diff --git a/arch/powerpc/xmon/spr_access.S b/arch/powerpc/xmon/spr_access.S
+index 84ad74213c83..7d8b0e8ed6d9 100644
+--- a/arch/powerpc/xmon/spr_access.S
++++ b/arch/powerpc/xmon/spr_access.S
+@@ -2,12 +2,12 @@
+ 
+ /* unsigned long xmon_mfspr(sprn, default_value) */
+ _GLOBAL(xmon_mfspr)
+-	ld	r5, .Lmfspr_table@got(r2)
++	PPC_LL	r5, .Lmfspr_table@got(r2)
+ 	b	xmon_mxspr
+ 
+ /* void xmon_mtspr(sprn, new_value) */
+ _GLOBAL(xmon_mtspr)
+-	ld	r5, .Lmtspr_table@got(r2)
++	PPC_LL	r5, .Lmtspr_table@got(r2)
+ 	b	xmon_mxspr
+ 
+ /*
+diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
+index dfd0ca2638fa..9746b780ad5a 100644
+--- a/arch/s390/kvm/intercept.c
++++ b/arch/s390/kvm/intercept.c
+@@ -118,8 +118,13 @@ static int handle_validity(struct kvm_vcpu *vcpu)
+ 
+ 	vcpu->stat.exit_validity++;
+ 	trace_kvm_s390_intercept_validity(vcpu, viwhy);
+-	WARN_ONCE(true, "kvm: unhandled validity intercept 0x%x\n", viwhy);
+-	return -EOPNOTSUPP;
++	KVM_EVENT(3, "validity intercept 0x%x for pid %u (kvm 0x%pK)", viwhy,
++		  current->pid, vcpu->kvm);
++
++	/* do not warn on invalid runtime instrumentation mode */
++	WARN_ONCE(viwhy != 0x44, "kvm: unhandled validity intercept 0x%x\n",
++		  viwhy);
++	return -EINVAL;
+ }
+ 
+ static int handle_instruction(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
+index 8a90f1517837..625eb698c780 100644
+--- a/arch/x86/kernel/e820.c
++++ b/arch/x86/kernel/e820.c
+@@ -348,7 +348,7 @@ int __init sanitize_e820_map(struct e820entry *biosmap, int max_nr_map,
+ 		 * continue building up new bios map based on this
+ 		 * information
+ 		 */
+-		if (current_type != last_type) {
++		if (current_type != last_type || current_type == E820_PRAM) {
+ 			if (last_type != 0)	 {
+ 				new_bios[new_bios_entry].size =
+ 					change_point[chgidx]->addr - last_addr;
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 82b17373b66a..9e152cdab0f3 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -1408,15 +1408,17 @@ __init void prefill_possible_map(void)
+ 
+ 	/* No boot processor was found in mptable or ACPI MADT */
+ 	if (!num_processors) {
+-		int apicid = boot_cpu_physical_apicid;
+-		int cpu = hard_smp_processor_id();
++		if (boot_cpu_has(X86_FEATURE_APIC)) {
++			int apicid = boot_cpu_physical_apicid;
++			int cpu = hard_smp_processor_id();
+ 
+-		pr_warn("Boot CPU (id %d) not listed by BIOS\n", cpu);
++			pr_warn("Boot CPU (id %d) not listed by BIOS\n", cpu);
+ 
+-		/* Make sure boot cpu is enumerated */
+-		if (apic->cpu_present_to_apicid(0) == BAD_APICID &&
+-		    apic->apic_id_valid(apicid))
+-			generic_processor_info(apicid, boot_cpu_apic_version);
++			/* Make sure boot cpu is enumerated */
++			if (apic->cpu_present_to_apicid(0) == BAD_APICID &&
++			    apic->apic_id_valid(apicid))
++				generic_processor_info(apicid, boot_cpu_apic_version);
++		}
+ 
+ 		if (!num_processors)
+ 			num_processors = 1;
+diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
+index c7220ba94aa7..1a22de70f7f7 100644
+--- a/arch/x86/kvm/ioapic.c
++++ b/arch/x86/kvm/ioapic.c
+@@ -594,7 +594,7 @@ static void kvm_ioapic_reset(struct kvm_ioapic *ioapic)
+ 	ioapic->irr = 0;
+ 	ioapic->irr_delivered = 0;
+ 	ioapic->id = 0;
+-	memset(ioapic->irq_eoi, 0x00, IOAPIC_NUM_PINS);
++	memset(ioapic->irq_eoi, 0x00, sizeof(ioapic->irq_eoi));
+ 	rtc_irq_eoi_tracking_reset(ioapic);
+ }
+ 
+diff --git a/arch/x86/platform/uv/bios_uv.c b/arch/x86/platform/uv/bios_uv.c
+index 23f2f3e41c7f..58e152b3bd90 100644
+--- a/arch/x86/platform/uv/bios_uv.c
++++ b/arch/x86/platform/uv/bios_uv.c
+@@ -40,7 +40,15 @@ s64 uv_bios_call(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3, u64 a4, u64 a5)
+ 		 */
+ 		return BIOS_STATUS_UNIMPLEMENTED;
+ 
+-	ret = efi_call_virt_pointer(tab, function, (u64)which, a1, a2, a3, a4, a5);
++	/*
++	 * If EFI_OLD_MEMMAP is set, we need to fall back to using our old EFI
++	 * callback method, which uses efi_call() directly, with the kernel page tables:
++	 */
++	if (unlikely(test_bit(EFI_OLD_MEMMAP, &efi.flags)))
++		ret = efi_call((void *)__va(tab->function), (u64)which, a1, a2, a3, a4, a5);
++	else
++		ret = efi_call_virt_pointer(tab, function, (u64)which, a1, a2, a3, a4, a5);
++
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(uv_bios_call);
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index dd38e5ced4a3..b08ccbb9393a 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1340,10 +1340,8 @@ int blkcg_policy_register(struct blkcg_policy *pol)
+ 			struct blkcg_policy_data *cpd;
+ 
+ 			cpd = pol->cpd_alloc_fn(GFP_KERNEL);
+-			if (!cpd) {
+-				mutex_unlock(&blkcg_pol_mutex);
++			if (!cpd)
+ 				goto err_free_cpds;
+-			}
+ 
+ 			blkcg->cpd[pol->plid] = cpd;
+ 			cpd->blkcg = blkcg;
+diff --git a/drivers/base/platform.c b/drivers/base/platform.c
+index 6482d47deb50..d5572295cad3 100644
+--- a/drivers/base/platform.c
++++ b/drivers/base/platform.c
+@@ -97,7 +97,7 @@ int platform_get_irq(struct platform_device *dev, unsigned int num)
+ 		int ret;
+ 
+ 		ret = of_irq_get(dev->dev.of_node, num);
+-		if (ret >= 0 || ret == -EPROBE_DEFER)
++		if (ret > 0 || ret == -EPROBE_DEFER)
+ 			return ret;
+ 	}
+ 
+@@ -175,7 +175,7 @@ int platform_get_irq_byname(struct platform_device *dev, const char *name)
+ 		int ret;
+ 
+ 		ret = of_irq_get_byname(dev->dev.of_node, name);
+-		if (ret >= 0 || ret == -EPROBE_DEFER)
++		if (ret > 0 || ret == -EPROBE_DEFER)
+ 			return ret;
+ 	}
+ 
+diff --git a/drivers/clk/imx/clk-imx6q.c b/drivers/clk/imx/clk-imx6q.c
+index ba1c1ae72ac2..ce8ea10407e4 100644
+--- a/drivers/clk/imx/clk-imx6q.c
++++ b/drivers/clk/imx/clk-imx6q.c
+@@ -318,11 +318,16 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 		clk[IMX6QDL_CLK_IPG_PER_SEL] = imx_clk_mux("ipg_per_sel", base + 0x1c, 6, 1, ipg_per_sels, ARRAY_SIZE(ipg_per_sels));
+ 		clk[IMX6QDL_CLK_UART_SEL] = imx_clk_mux("uart_sel", base + 0x24, 6, 1, uart_sels, ARRAY_SIZE(uart_sels));
+ 		clk[IMX6QDL_CLK_GPU2D_CORE_SEL] = imx_clk_mux("gpu2d_core_sel", base + 0x18, 16, 2, gpu2d_core_sels_2, ARRAY_SIZE(gpu2d_core_sels_2));
++	} else if (clk_on_imx6dl()) {
++		clk[IMX6QDL_CLK_MLB_SEL] = imx_clk_mux("mlb_sel",   base + 0x18, 16, 2, gpu2d_core_sels,   ARRAY_SIZE(gpu2d_core_sels));
+ 	} else {
+ 		clk[IMX6QDL_CLK_GPU2D_CORE_SEL] = imx_clk_mux("gpu2d_core_sel",   base + 0x18, 16, 2, gpu2d_core_sels,   ARRAY_SIZE(gpu2d_core_sels));
+ 	}
+ 	clk[IMX6QDL_CLK_GPU3D_CORE_SEL]   = imx_clk_mux("gpu3d_core_sel",   base + 0x18, 4,  2, gpu3d_core_sels,   ARRAY_SIZE(gpu3d_core_sels));
+-	clk[IMX6QDL_CLK_GPU3D_SHADER_SEL] = imx_clk_mux("gpu3d_shader_sel", base + 0x18, 8,  2, gpu3d_shader_sels, ARRAY_SIZE(gpu3d_shader_sels));
++	if (clk_on_imx6dl())
++		clk[IMX6QDL_CLK_GPU2D_CORE_SEL] = imx_clk_mux("gpu2d_core_sel", base + 0x18, 8,  2, gpu3d_shader_sels, ARRAY_SIZE(gpu3d_shader_sels));
++	else
++		clk[IMX6QDL_CLK_GPU3D_SHADER_SEL] = imx_clk_mux("gpu3d_shader_sel", base + 0x18, 8,  2, gpu3d_shader_sels, ARRAY_SIZE(gpu3d_shader_sels));
+ 	clk[IMX6QDL_CLK_IPU1_SEL]         = imx_clk_mux("ipu1_sel",         base + 0x3c, 9,  2, ipu_sels,          ARRAY_SIZE(ipu_sels));
+ 	clk[IMX6QDL_CLK_IPU2_SEL]         = imx_clk_mux("ipu2_sel",         base + 0x3c, 14, 2, ipu_sels,          ARRAY_SIZE(ipu_sels));
+ 	clk[IMX6QDL_CLK_LDB_DI0_SEL]      = imx_clk_mux_flags("ldb_di0_sel", base + 0x2c, 9,  3, ldb_di_sels,      ARRAY_SIZE(ldb_di_sels), CLK_SET_RATE_PARENT);
+@@ -400,9 +405,15 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 		clk[IMX6QDL_CLK_LDB_DI0_DIV_3_5] = imx_clk_fixed_factor("ldb_di0_div_3_5", "ldb_di0_sel", 2, 7);
+ 		clk[IMX6QDL_CLK_LDB_DI1_DIV_3_5] = imx_clk_fixed_factor("ldb_di1_div_3_5", "ldb_di1_sel", 2, 7);
+ 	}
+-	clk[IMX6QDL_CLK_GPU2D_CORE_PODF]  = imx_clk_divider("gpu2d_core_podf",  "gpu2d_core_sel",    base + 0x18, 23, 3);
++	if (clk_on_imx6dl())
++		clk[IMX6QDL_CLK_MLB_PODF]  = imx_clk_divider("mlb_podf",  "mlb_sel",    base + 0x18, 23, 3);
++	else
++		clk[IMX6QDL_CLK_GPU2D_CORE_PODF]  = imx_clk_divider("gpu2d_core_podf",  "gpu2d_core_sel",    base + 0x18, 23, 3);
+ 	clk[IMX6QDL_CLK_GPU3D_CORE_PODF]  = imx_clk_divider("gpu3d_core_podf",  "gpu3d_core_sel",    base + 0x18, 26, 3);
+-	clk[IMX6QDL_CLK_GPU3D_SHADER]     = imx_clk_divider("gpu3d_shader",     "gpu3d_shader_sel",  base + 0x18, 29, 3);
++	if (clk_on_imx6dl())
++		clk[IMX6QDL_CLK_GPU2D_CORE_PODF]  = imx_clk_divider("gpu2d_core_podf",     "gpu2d_core_sel",  base + 0x18, 29, 3);
++	else
++		clk[IMX6QDL_CLK_GPU3D_SHADER]     = imx_clk_divider("gpu3d_shader",     "gpu3d_shader_sel",  base + 0x18, 29, 3);
+ 	clk[IMX6QDL_CLK_IPU1_PODF]        = imx_clk_divider("ipu1_podf",        "ipu1_sel",          base + 0x3c, 11, 3);
+ 	clk[IMX6QDL_CLK_IPU2_PODF]        = imx_clk_divider("ipu2_podf",        "ipu2_sel",          base + 0x3c, 16, 3);
+ 	clk[IMX6QDL_CLK_LDB_DI0_PODF]     = imx_clk_divider_flags("ldb_di0_podf", "ldb_di0_div_3_5", base + 0x20, 10, 1, 0);
+@@ -473,14 +484,7 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 	clk[IMX6QDL_CLK_ESAI_MEM]     = imx_clk_gate2_shared("esai_mem", "ahb",             base + 0x6c, 16, &share_count_esai);
+ 	clk[IMX6QDL_CLK_GPT_IPG]      = imx_clk_gate2("gpt_ipg",       "ipg",               base + 0x6c, 20);
+ 	clk[IMX6QDL_CLK_GPT_IPG_PER]  = imx_clk_gate2("gpt_ipg_per",   "ipg_per",           base + 0x6c, 22);
+-	if (clk_on_imx6dl())
+-		/*
+-		 * The multiplexer and divider of imx6q clock gpu3d_shader get
+-		 * redefined/reused as gpu2d_core_sel and gpu2d_core_podf on imx6dl.
+-		 */
+-		clk[IMX6QDL_CLK_GPU2D_CORE] = imx_clk_gate2("gpu2d_core", "gpu3d_shader", base + 0x6c, 24);
+-	else
+-		clk[IMX6QDL_CLK_GPU2D_CORE] = imx_clk_gate2("gpu2d_core", "gpu2d_core_podf", base + 0x6c, 24);
++	clk[IMX6QDL_CLK_GPU2D_CORE] = imx_clk_gate2("gpu2d_core", "gpu2d_core_podf", base + 0x6c, 24);
+ 	clk[IMX6QDL_CLK_GPU3D_CORE]   = imx_clk_gate2("gpu3d_core",    "gpu3d_core_podf",   base + 0x6c, 26);
+ 	clk[IMX6QDL_CLK_HDMI_IAHB]    = imx_clk_gate2("hdmi_iahb",     "ahb",               base + 0x70, 0);
+ 	clk[IMX6QDL_CLK_HDMI_ISFR]    = imx_clk_gate2("hdmi_isfr",     "video_27m",         base + 0x70, 4);
+@@ -511,7 +515,7 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 		 * The multiplexer and divider of the imx6q clock gpu2d get
+ 		 * redefined/reused as mlb_sys_sel and mlb_sys_clk_podf on imx6dl.
+ 		 */
+-		clk[IMX6QDL_CLK_MLB] = imx_clk_gate2("mlb",            "gpu2d_core_podf",   base + 0x74, 18);
++		clk[IMX6QDL_CLK_MLB] = imx_clk_gate2("mlb",            "mlb_podf",   base + 0x74, 18);
+ 	else
+ 		clk[IMX6QDL_CLK_MLB] = imx_clk_gate2("mlb",            "axi",               base + 0x74, 18);
+ 	clk[IMX6QDL_CLK_MMDC_CH0_AXI] = imx_clk_gate2("mmdc_ch0_axi",  "mmdc_ch0_axi_podf", base + 0x74, 20);
+@@ -629,6 +633,24 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 	if (IS_ENABLED(CONFIG_PCI_IMX6))
+ 		clk_set_parent(clk[IMX6QDL_CLK_LVDS1_SEL], clk[IMX6QDL_CLK_SATA_REF_100M]);
+ 
++	/*
++	 * Initialize the GPU clock muxes, so that the maximum specified clock
++	 * rates for the respective SoC are not exceeded.
++	 */
++	if (clk_on_imx6dl()) {
++		clk_set_parent(clk[IMX6QDL_CLK_GPU3D_CORE_SEL],
++			       clk[IMX6QDL_CLK_PLL2_PFD1_594M]);
++		clk_set_parent(clk[IMX6QDL_CLK_GPU2D_CORE_SEL],
++			       clk[IMX6QDL_CLK_PLL2_PFD1_594M]);
++	} else if (clk_on_imx6q()) {
++		clk_set_parent(clk[IMX6QDL_CLK_GPU3D_CORE_SEL],
++			       clk[IMX6QDL_CLK_MMDC_CH0_AXI]);
++		clk_set_parent(clk[IMX6QDL_CLK_GPU3D_SHADER_SEL],
++			       clk[IMX6QDL_CLK_PLL2_PFD1_594M]);
++		clk_set_parent(clk[IMX6QDL_CLK_GPU2D_CORE_SEL],
++			       clk[IMX6QDL_CLK_PLL3_USB_OTG]);
++	}
++
+ 	imx_register_uart_clocks(uart_clks);
+ }
+ CLK_OF_DECLARE(imx6q, "fsl,imx6q-ccm", imx6q_clocks_init);
+diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
+index 2ee40fd360ca..e1aa531a4c34 100644
+--- a/drivers/cpufreq/cpufreq-dt-platdev.c
++++ b/drivers/cpufreq/cpufreq-dt-platdev.c
+@@ -68,6 +68,8 @@ static const struct of_device_id machines[] __initconst = {
+ 
+ 	{ .compatible = "sigma,tango4" },
+ 
++	{ .compatible = "ti,am33xx", },
++	{ .compatible = "ti,dra7", },
+ 	{ .compatible = "ti,omap2", },
+ 	{ .compatible = "ti,omap3", },
+ 	{ .compatible = "ti,omap4", },
+diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
+index 18da4f8051d3..13475890d792 100644
+--- a/drivers/cpufreq/cpufreq_conservative.c
++++ b/drivers/cpufreq/cpufreq_conservative.c
+@@ -17,6 +17,7 @@
+ struct cs_policy_dbs_info {
+ 	struct policy_dbs_info policy_dbs;
+ 	unsigned int down_skip;
++	unsigned int requested_freq;
+ };
+ 
+ static inline struct cs_policy_dbs_info *to_dbs_info(struct policy_dbs_info *policy_dbs)
+@@ -61,6 +62,7 @@ static unsigned int cs_dbs_timer(struct cpufreq_policy *policy)
+ {
+ 	struct policy_dbs_info *policy_dbs = policy->governor_data;
+ 	struct cs_policy_dbs_info *dbs_info = to_dbs_info(policy_dbs);
++	unsigned int requested_freq = dbs_info->requested_freq;
+ 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
+ 	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
+ 	unsigned int load = dbs_update(policy);
+@@ -72,10 +74,16 @@ static unsigned int cs_dbs_timer(struct cpufreq_policy *policy)
+ 	if (cs_tuners->freq_step == 0)
+ 		goto out;
+ 
++	/*
++	 * If requested_freq is out of range, it is likely that the limits
++	 * changed in the meantime, so fall back to current frequency in that
++	 * case.
++	 */
++	if (requested_freq > policy->max || requested_freq < policy->min)
++		requested_freq = policy->cur;
++
+ 	/* Check for frequency increase */
+ 	if (load > dbs_data->up_threshold) {
+-		unsigned int requested_freq = policy->cur;
+-
+ 		dbs_info->down_skip = 0;
+ 
+ 		/* if we are already at full speed then break out early */
+@@ -83,8 +91,11 @@ static unsigned int cs_dbs_timer(struct cpufreq_policy *policy)
+ 			goto out;
+ 
+ 		requested_freq += get_freq_target(cs_tuners, policy);
++		if (requested_freq > policy->max)
++			requested_freq = policy->max;
+ 
+ 		__cpufreq_driver_target(policy, requested_freq, CPUFREQ_RELATION_H);
++		dbs_info->requested_freq = requested_freq;
+ 		goto out;
+ 	}
+ 
+@@ -95,7 +106,7 @@ static unsigned int cs_dbs_timer(struct cpufreq_policy *policy)
+ 
+ 	/* Check for frequency decrease */
+ 	if (load < cs_tuners->down_threshold) {
+-		unsigned int freq_target, requested_freq = policy->cur;
++		unsigned int freq_target;
+ 		/*
+ 		 * if we cannot reduce the frequency anymore, break out early
+ 		 */
+@@ -109,6 +120,7 @@ static unsigned int cs_dbs_timer(struct cpufreq_policy *policy)
+ 			requested_freq = policy->min;
+ 
+ 		__cpufreq_driver_target(policy, requested_freq, CPUFREQ_RELATION_L);
++		dbs_info->requested_freq = requested_freq;
+ 	}
+ 
+  out:
+@@ -287,6 +299,7 @@ static void cs_start(struct cpufreq_policy *policy)
+ 	struct cs_policy_dbs_info *dbs_info = to_dbs_info(policy->governor_data);
+ 
+ 	dbs_info->down_skip = 0;
++	dbs_info->requested_freq = policy->cur;
+ }
+ 
+ static struct dbs_governor cs_governor = {
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index be9eade147f2..b46547e907be 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -556,12 +556,12 @@ static void intel_pstate_hwp_set(const struct cpumask *cpumask)
+ 	int min, hw_min, max, hw_max, cpu, range, adj_range;
+ 	u64 value, cap;
+ 
+-	rdmsrl(MSR_HWP_CAPABILITIES, cap);
+-	hw_min = HWP_LOWEST_PERF(cap);
+-	hw_max = HWP_HIGHEST_PERF(cap);
+-	range = hw_max - hw_min;
+-
+ 	for_each_cpu(cpu, cpumask) {
++		rdmsrl_on_cpu(cpu, MSR_HWP_CAPABILITIES, &cap);
++		hw_min = HWP_LOWEST_PERF(cap);
++		hw_max = HWP_HIGHEST_PERF(cap);
++		range = hw_max - hw_min;
++
+ 		rdmsrl_on_cpu(cpu, MSR_HWP_REQUEST, &value);
+ 		adj_range = limits->min_perf_pct * range / 100;
+ 		min = hw_min + adj_range;
+diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c
+index 425501c39527..793518a30afe 100644
+--- a/drivers/gpio/gpio-mpc8xxx.c
++++ b/drivers/gpio/gpio-mpc8xxx.c
+@@ -239,7 +239,7 @@ static int mpc8xxx_gpio_irq_map(struct irq_domain *h, unsigned int irq,
+ 				irq_hw_number_t hwirq)
+ {
+ 	irq_set_chip_data(irq, h->host_data);
+-	irq_set_chip_and_handler(irq, &mpc8xxx_irq_chip, handle_level_irq);
++	irq_set_chip_and_handler(irq, &mpc8xxx_irq_chip, handle_edge_irq);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index f2b776efab3a..5f88ccd6806b 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -821,7 +821,7 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
+ 		if (ret) {
+ 			pr_err("failed to init MR pool ret= %d\n", ret);
+ 			ib_destroy_qp(qp);
+-			qp = ERR_PTR(ret);
++			return ERR_PTR(ret);
+ 		}
+ 	}
+ 
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 3322ed750172..6b07d4bca764 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -1400,7 +1400,9 @@ static int srp_map_sg_entry(struct srp_map_state *state,
+ 
+ 	while (dma_len) {
+ 		unsigned offset = dma_addr & ~dev->mr_page_mask;
+-		if (state->npages == dev->max_pages_per_mr || offset != 0) {
++
++		if (state->npages == dev->max_pages_per_mr ||
++		    (state->npages > 0 && offset != 0)) {
+ 			ret = srp_map_finish_fmr(state, ch);
+ 			if (ret)
+ 				return ret;
+@@ -1417,12 +1419,12 @@ static int srp_map_sg_entry(struct srp_map_state *state,
+ 	}
+ 
+ 	/*
+-	 * If the last entry of the MR wasn't a full page, then we need to
++	 * If the end of the MR is not on a page boundary then we need to
+ 	 * close it out and start a new one -- we can only merge at page
+ 	 * boundaries.
+ 	 */
+ 	ret = 0;
+-	if (len != dev->mr_page_size)
++	if ((dma_addr & ~dev->mr_page_mask) != 0)
+ 		ret = srp_map_finish_fmr(state, ch);
+ 	return ret;
+ }
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 08e252a42480..ff8c10749e57 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1159,6 +1159,13 @@ static const struct dmi_system_id elantech_dmi_has_middle_button[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS H730"),
+ 		},
+ 	},
++	{
++		/* Fujitsu H760 also has a middle button */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS H760"),
++		},
++	},
+ #endif
+ 	{ }
+ };
+@@ -1503,10 +1510,10 @@ static const struct dmi_system_id elantech_dmi_force_crc_enabled[] = {
+ 		},
+ 	},
+ 	{
+-		/* Fujitsu LIFEBOOK E554  does not work with crc_enabled == 0 */
++		/* Fujitsu H760 does not work with crc_enabled == 0 */
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E554"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS H760"),
+ 		},
+ 	},
+ 	{
+@@ -1517,6 +1524,20 @@ static const struct dmi_system_id elantech_dmi_force_crc_enabled[] = {
+ 		},
+ 	},
+ 	{
++		/* Fujitsu LIFEBOOK E554  does not work with crc_enabled == 0 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E554"),
++		},
++	},
++	{
++		/* Fujitsu LIFEBOOK E556 does not work with crc_enabled == 0 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E556"),
++		},
++	},
++	{
+ 		/* Fujitsu LIFEBOOK U745 does not work with crc_enabled == 0 */
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+diff --git a/drivers/input/serio/i8042-io.h b/drivers/input/serio/i8042-io.h
+index a5eed2ade53d..34da81c006b6 100644
+--- a/drivers/input/serio/i8042-io.h
++++ b/drivers/input/serio/i8042-io.h
+@@ -81,7 +81,7 @@ static inline int i8042_platform_init(void)
+ 		return -EBUSY;
+ #endif
+ 
+-	i8042_reset = 1;
++	i8042_reset = I8042_RESET_ALWAYS;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/input/serio/i8042-ip22io.h b/drivers/input/serio/i8042-ip22io.h
+index ee1ad27d6ed0..08a1c10a1448 100644
+--- a/drivers/input/serio/i8042-ip22io.h
++++ b/drivers/input/serio/i8042-ip22io.h
+@@ -61,7 +61,7 @@ static inline int i8042_platform_init(void)
+ 		return -EBUSY;
+ #endif
+ 
+-	i8042_reset = 1;
++	i8042_reset = I8042_RESET_ALWAYS;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/input/serio/i8042-ppcio.h b/drivers/input/serio/i8042-ppcio.h
+index f708c75d16f1..1aabea43329e 100644
+--- a/drivers/input/serio/i8042-ppcio.h
++++ b/drivers/input/serio/i8042-ppcio.h
+@@ -44,7 +44,7 @@ static inline void i8042_write_command(int val)
+ 
+ static inline int i8042_platform_init(void)
+ {
+-	i8042_reset = 1;
++	i8042_reset = I8042_RESET_ALWAYS;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/input/serio/i8042-sparcio.h b/drivers/input/serio/i8042-sparcio.h
+index afcd1c1a05b2..6231d63860ee 100644
+--- a/drivers/input/serio/i8042-sparcio.h
++++ b/drivers/input/serio/i8042-sparcio.h
+@@ -130,7 +130,7 @@ static int __init i8042_platform_init(void)
+ 		}
+ 	}
+ 
+-	i8042_reset = 1;
++	i8042_reset = I8042_RESET_ALWAYS;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/input/serio/i8042-unicore32io.h b/drivers/input/serio/i8042-unicore32io.h
+index 73f5cc124a36..455747552f85 100644
+--- a/drivers/input/serio/i8042-unicore32io.h
++++ b/drivers/input/serio/i8042-unicore32io.h
+@@ -61,7 +61,7 @@ static inline int i8042_platform_init(void)
+ 	if (!request_mem_region(I8042_REGION_START, I8042_REGION_SIZE, "i8042"))
+ 		return -EBUSY;
+ 
+-	i8042_reset = 1;
++	i8042_reset = I8042_RESET_ALWAYS;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 68f5f4a0f1e7..f4bfb4b2d50a 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -510,6 +510,90 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
+ 	{ }
+ };
+ 
++/*
++ * On some Asus laptops, just running self tests causes problems.
++ */
++static const struct dmi_system_id i8042_dmi_noselftest_table[] = {
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "A455LD"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "K401LB"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "K501LB"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "K501LX"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "R409L"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "V502LX"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X302LA"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X450LCP"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X450LD"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X455LAB"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X455LDB"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X455LF"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Z450LA"),
++		},
++	},
++	{ }
++};
+ static const struct dmi_system_id __initconst i8042_dmi_reset_table[] = {
+ 	{
+ 		/* MSI Wind U-100 */
+@@ -1072,12 +1156,18 @@ static int __init i8042_platform_init(void)
+ 		return retval;
+ 
+ #if defined(__ia64__)
+-        i8042_reset = true;
++        i8042_reset = I8042_RESET_ALWAYS;
+ #endif
+ 
+ #ifdef CONFIG_X86
+-	if (dmi_check_system(i8042_dmi_reset_table))
+-		i8042_reset = true;
++	/* Honor module parameter when value is not default */
++	if (i8042_reset == I8042_RESET_DEFAULT) {
++		if (dmi_check_system(i8042_dmi_reset_table))
++			i8042_reset = I8042_RESET_ALWAYS;
++
++		if (dmi_check_system(i8042_dmi_noselftest_table))
++			i8042_reset = I8042_RESET_NEVER;
++	}
+ 
+ 	if (dmi_check_system(i8042_dmi_noloop_table))
+ 		i8042_noloop = true;
+diff --git a/drivers/input/serio/i8042.c b/drivers/input/serio/i8042.c
+index 405252a884dd..89abfdb539ac 100644
+--- a/drivers/input/serio/i8042.c
++++ b/drivers/input/serio/i8042.c
+@@ -48,9 +48,39 @@ static bool i8042_unlock;
+ module_param_named(unlock, i8042_unlock, bool, 0);
+ MODULE_PARM_DESC(unlock, "Ignore keyboard lock.");
+ 
+-static bool i8042_reset;
+-module_param_named(reset, i8042_reset, bool, 0);
+-MODULE_PARM_DESC(reset, "Reset controller during init and cleanup.");
++enum i8042_controller_reset_mode {
++	I8042_RESET_NEVER,
++	I8042_RESET_ALWAYS,
++	I8042_RESET_ON_S2RAM,
++#define I8042_RESET_DEFAULT	I8042_RESET_ON_S2RAM
++};
++static enum i8042_controller_reset_mode i8042_reset = I8042_RESET_DEFAULT;
++static int i8042_set_reset(const char *val, const struct kernel_param *kp)
++{
++	enum i8042_controller_reset_mode *arg = kp->arg;
++	int error;
++	bool reset;
++
++	if (val) {
++		error = kstrtobool(val, &reset);
++		if (error)
++			return error;
++	} else {
++		reset = true;
++	}
++
++	*arg = reset ? I8042_RESET_ALWAYS : I8042_RESET_NEVER;
++	return 0;
++}
++
++static const struct kernel_param_ops param_ops_reset_param = {
++	.flags = KERNEL_PARAM_OPS_FL_NOARG,
++	.set = i8042_set_reset,
++};
++#define param_check_reset_param(name, p)	\
++	__param_check(name, p, enum i8042_controller_reset_mode)
++module_param_named(reset, i8042_reset, reset_param, 0);
++MODULE_PARM_DESC(reset, "Reset controller on resume, cleanup or both");
+ 
+ static bool i8042_direct;
+ module_param_named(direct, i8042_direct, bool, 0);
+@@ -1019,7 +1049,7 @@ static int i8042_controller_init(void)
+  * Reset the controller and reset CRT to the original value set by BIOS.
+  */
+ 
+-static void i8042_controller_reset(bool force_reset)
++static void i8042_controller_reset(bool s2r_wants_reset)
+ {
+ 	i8042_flush();
+ 
+@@ -1044,8 +1074,10 @@ static void i8042_controller_reset(bool force_reset)
+  * Reset the controller if requested.
+  */
+ 
+-	if (i8042_reset || force_reset)
++	if (i8042_reset == I8042_RESET_ALWAYS ||
++	    (i8042_reset == I8042_RESET_ON_S2RAM && s2r_wants_reset)) {
+ 		i8042_controller_selftest();
++	}
+ 
+ /*
+  * Restore the original control register setting.
+@@ -1110,7 +1142,7 @@ static void i8042_dritek_enable(void)
+  * before suspending.
+  */
+ 
+-static int i8042_controller_resume(bool force_reset)
++static int i8042_controller_resume(bool s2r_wants_reset)
+ {
+ 	int error;
+ 
+@@ -1118,7 +1150,8 @@ static int i8042_controller_resume(bool force_reset)
+ 	if (error)
+ 		return error;
+ 
+-	if (i8042_reset || force_reset) {
++	if (i8042_reset == I8042_RESET_ALWAYS ||
++	    (i8042_reset == I8042_RESET_ON_S2RAM && s2r_wants_reset)) {
+ 		error = i8042_controller_selftest();
+ 		if (error)
+ 			return error;
+@@ -1195,7 +1228,7 @@ static int i8042_pm_resume_noirq(struct device *dev)
+ 
+ static int i8042_pm_resume(struct device *dev)
+ {
+-	bool force_reset;
++	bool want_reset;
+ 	int i;
+ 
+ 	for (i = 0; i < I8042_NUM_PORTS; i++) {
+@@ -1218,9 +1251,9 @@ static int i8042_pm_resume(struct device *dev)
+ 	 * off control to the platform firmware, otherwise we can simply restore
+ 	 * the mode.
+ 	 */
+-	force_reset = pm_resume_via_firmware();
++	want_reset = pm_resume_via_firmware();
+ 
+-	return i8042_controller_resume(force_reset);
++	return i8042_controller_resume(want_reset);
+ }
+ 
+ static int i8042_pm_thaw(struct device *dev)
+@@ -1482,7 +1515,7 @@ static int __init i8042_probe(struct platform_device *dev)
+ 
+ 	i8042_platform_device = dev;
+ 
+-	if (i8042_reset) {
++	if (i8042_reset == I8042_RESET_ALWAYS) {
+ 		error = i8042_controller_selftest();
+ 		if (error)
+ 			return error;
+diff --git a/drivers/irqchip/irq-eznps.c b/drivers/irqchip/irq-eznps.c
+index efbf0e4304b7..ebc2b0b15f67 100644
+--- a/drivers/irqchip/irq-eznps.c
++++ b/drivers/irqchip/irq-eznps.c
+@@ -85,7 +85,7 @@ static void nps400_irq_eoi_global(struct irq_data *irqd)
+ 	nps_ack_gic();
+ }
+ 
+-static void nps400_irq_eoi(struct irq_data *irqd)
++static void nps400_irq_ack(struct irq_data *irqd)
+ {
+ 	unsigned int __maybe_unused irq = irqd_to_hwirq(irqd);
+ 
+@@ -103,7 +103,7 @@ static struct irq_chip nps400_irq_chip_percpu = {
+ 	.name		= "NPS400 IC",
+ 	.irq_mask	= nps400_irq_mask,
+ 	.irq_unmask	= nps400_irq_unmask,
+-	.irq_eoi	= nps400_irq_eoi,
++	.irq_ack	= nps400_irq_ack,
+ };
+ 
+ static int nps400_irq_map(struct irq_domain *d, unsigned int virq,
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index da6c0ba61d4f..708a2604a7b5 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -153,7 +153,7 @@ static void gic_enable_redist(bool enable)
+ 			return;	/* No PM support in this redistributor */
+ 	}
+ 
+-	while (count--) {
++	while (--count) {
+ 		val = readl_relaxed(rbase + GICR_WAKER);
+ 		if (enable ^ (bool)(val & GICR_WAKER_ChildrenAsleep))
+ 			break;
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 874295757caa..6fc8923bd92a 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -113,8 +113,7 @@ struct iv_tcw_private {
+  * and encrypts / decrypts at the same time.
+  */
+ enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID,
+-	     DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD,
+-	     DM_CRYPT_EXIT_THREAD};
++	     DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD };
+ 
+ /*
+  * The fields in here must be read only after initialization.
+@@ -1207,18 +1206,20 @@ continue_locked:
+ 		if (!RB_EMPTY_ROOT(&cc->write_tree))
+ 			goto pop_from_list;
+ 
+-		if (unlikely(test_bit(DM_CRYPT_EXIT_THREAD, &cc->flags))) {
+-			spin_unlock_irq(&cc->write_thread_wait.lock);
+-			break;
+-		}
+-
+-		__set_current_state(TASK_INTERRUPTIBLE);
++		set_current_state(TASK_INTERRUPTIBLE);
+ 		__add_wait_queue(&cc->write_thread_wait, &wait);
+ 
+ 		spin_unlock_irq(&cc->write_thread_wait.lock);
+ 
++		if (unlikely(kthread_should_stop())) {
++			set_task_state(current, TASK_RUNNING);
++			remove_wait_queue(&cc->write_thread_wait, &wait);
++			break;
++		}
++
+ 		schedule();
+ 
++		set_task_state(current, TASK_RUNNING);
+ 		spin_lock_irq(&cc->write_thread_wait.lock);
+ 		__remove_wait_queue(&cc->write_thread_wait, &wait);
+ 		goto continue_locked;
+@@ -1533,13 +1534,8 @@ static void crypt_dtr(struct dm_target *ti)
+ 	if (!cc)
+ 		return;
+ 
+-	if (cc->write_thread) {
+-		spin_lock_irq(&cc->write_thread_wait.lock);
+-		set_bit(DM_CRYPT_EXIT_THREAD, &cc->flags);
+-		wake_up_locked(&cc->write_thread_wait);
+-		spin_unlock_irq(&cc->write_thread_wait.lock);
++	if (cc->write_thread)
+ 		kthread_stop(cc->write_thread);
+-	}
+ 
+ 	if (cc->io_queue)
+ 		destroy_workqueue(cc->io_queue);
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index ac734e5bbe48..15db5e9c572e 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -1521,10 +1521,10 @@ static void activate_path(struct work_struct *work)
+ {
+ 	struct pgpath *pgpath =
+ 		container_of(work, struct pgpath, activate_path.work);
++	struct request_queue *q = bdev_get_queue(pgpath->path.dev->bdev);
+ 
+-	if (pgpath->is_active)
+-		scsi_dh_activate(bdev_get_queue(pgpath->path.dev->bdev),
+-				 pg_init_done, pgpath);
++	if (pgpath->is_active && !blk_queue_dying(q))
++		scsi_dh_activate(q, pg_init_done, pgpath);
+ 	else
+ 		pg_init_done(pgpath, SCSI_DH_DEV_OFFLINED);
+ }
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 1ca7463e8bb2..5da86c8b6545 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -73,15 +73,24 @@ static void dm_old_start_queue(struct request_queue *q)
+ 	spin_unlock_irqrestore(q->queue_lock, flags);
+ }
+ 
++static void dm_mq_start_queue(struct request_queue *q)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(q->queue_lock, flags);
++	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
++	spin_unlock_irqrestore(q->queue_lock, flags);
++
++	blk_mq_start_stopped_hw_queues(q, true);
++	blk_mq_kick_requeue_list(q);
++}
++
+ void dm_start_queue(struct request_queue *q)
+ {
+ 	if (!q->mq_ops)
+ 		dm_old_start_queue(q);
+-	else {
+-		queue_flag_clear_unlocked(QUEUE_FLAG_STOPPED, q);
+-		blk_mq_start_stopped_hw_queues(q, true);
+-		blk_mq_kick_requeue_list(q);
+-	}
++	else
++		dm_mq_start_queue(q);
+ }
+ 
+ static void dm_old_stop_queue(struct request_queue *q)
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index fa9b1cb4438a..0f2928b3136b 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1873,6 +1873,7 @@ EXPORT_SYMBOL_GPL(dm_device_name);
+ 
+ static void __dm_destroy(struct mapped_device *md, bool wait)
+ {
++	struct request_queue *q = dm_get_md_queue(md);
+ 	struct dm_table *map;
+ 	int srcu_idx;
+ 
+@@ -1883,6 +1884,10 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
+ 	set_bit(DMF_FREEING, &md->flags);
+ 	spin_unlock(&_minor_lock);
+ 
++	spin_lock_irq(q->queue_lock);
++	queue_flag_set(QUEUE_FLAG_DYING, q);
++	spin_unlock_irq(q->queue_lock);
++
+ 	if (dm_request_based(md) && md->kworker_task)
+ 		flush_kthread_worker(&md->kworker);
+ 
+@@ -2249,10 +2254,11 @@ static int __dm_resume(struct mapped_device *md, struct dm_table *map)
+ 
+ int dm_resume(struct mapped_device *md)
+ {
+-	int r = -EINVAL;
++	int r;
+ 	struct dm_table *map = NULL;
+ 
+ retry:
++	r = -EINVAL;
+ 	mutex_lock_nested(&md->suspend_lock, SINGLE_DEPTH_NESTING);
+ 
+ 	if (!dm_suspended_md(md))
+@@ -2276,8 +2282,6 @@ retry:
+ 		goto out;
+ 
+ 	clear_bit(DMF_SUSPENDED, &md->flags);
+-
+-	r = 0;
+ out:
+ 	mutex_unlock(&md->suspend_lock);
+ 
+diff --git a/drivers/media/dvb-frontends/mb86a20s.c b/drivers/media/dvb-frontends/mb86a20s.c
+index 41325328a22e..fe79358b035e 100644
+--- a/drivers/media/dvb-frontends/mb86a20s.c
++++ b/drivers/media/dvb-frontends/mb86a20s.c
+@@ -71,25 +71,27 @@ static struct regdata mb86a20s_init1[] = {
+ };
+ 
+ static struct regdata mb86a20s_init2[] = {
+-	{ 0x28, 0x22 }, { 0x29, 0x00 }, { 0x2a, 0x1f }, { 0x2b, 0xf0 },
++	{ 0x50, 0xd1 }, { 0x51, 0x22 },
++	{ 0x39, 0x01 },
++	{ 0x71, 0x00 },
+ 	{ 0x3b, 0x21 },
+-	{ 0x3c, 0x38 },
++	{ 0x3c, 0x3a },
+ 	{ 0x01, 0x0d },
+-	{ 0x04, 0x08 }, { 0x05, 0x03 },
++	{ 0x04, 0x08 }, { 0x05, 0x05 },
+ 	{ 0x04, 0x0e }, { 0x05, 0x00 },
+-	{ 0x04, 0x0f }, { 0x05, 0x37 },
+-	{ 0x04, 0x0b }, { 0x05, 0x78 },
++	{ 0x04, 0x0f }, { 0x05, 0x14 },
++	{ 0x04, 0x0b }, { 0x05, 0x8c },
+ 	{ 0x04, 0x00 }, { 0x05, 0x00 },
+-	{ 0x04, 0x01 }, { 0x05, 0x1e },
+-	{ 0x04, 0x02 }, { 0x05, 0x07 },
+-	{ 0x04, 0x03 }, { 0x05, 0xd0 },
++	{ 0x04, 0x01 }, { 0x05, 0x07 },
++	{ 0x04, 0x02 }, { 0x05, 0x0f },
++	{ 0x04, 0x03 }, { 0x05, 0xa0 },
+ 	{ 0x04, 0x09 }, { 0x05, 0x00 },
+ 	{ 0x04, 0x0a }, { 0x05, 0xff },
+-	{ 0x04, 0x27 }, { 0x05, 0x00 },
++	{ 0x04, 0x27 }, { 0x05, 0x64 },
+ 	{ 0x04, 0x28 }, { 0x05, 0x00 },
+-	{ 0x04, 0x1e }, { 0x05, 0x00 },
+-	{ 0x04, 0x29 }, { 0x05, 0x64 },
+-	{ 0x04, 0x32 }, { 0x05, 0x02 },
++	{ 0x04, 0x1e }, { 0x05, 0xff },
++	{ 0x04, 0x29 }, { 0x05, 0x0a },
++	{ 0x04, 0x32 }, { 0x05, 0x0a },
+ 	{ 0x04, 0x14 }, { 0x05, 0x02 },
+ 	{ 0x04, 0x04 }, { 0x05, 0x00 },
+ 	{ 0x04, 0x05 }, { 0x05, 0x22 },
+@@ -97,8 +99,6 @@ static struct regdata mb86a20s_init2[] = {
+ 	{ 0x04, 0x07 }, { 0x05, 0xd8 },
+ 	{ 0x04, 0x12 }, { 0x05, 0x00 },
+ 	{ 0x04, 0x13 }, { 0x05, 0xff },
+-	{ 0x04, 0x15 }, { 0x05, 0x4e },
+-	{ 0x04, 0x16 }, { 0x05, 0x20 },
+ 
+ 	/*
+ 	 * On this demod, when the bit count reaches the count below,
+@@ -152,42 +152,36 @@ static struct regdata mb86a20s_init2[] = {
+ 	{ 0x50, 0x51 }, { 0x51, 0x04 },		/* MER symbol 4 */
+ 	{ 0x45, 0x04 },				/* CN symbol 4 */
+ 	{ 0x48, 0x04 },				/* CN manual mode */
+-
++	{ 0x50, 0xd5 }, { 0x51, 0x01 },
+ 	{ 0x50, 0xd6 }, { 0x51, 0x1f },
+ 	{ 0x50, 0xd2 }, { 0x51, 0x03 },
+-	{ 0x50, 0xd7 }, { 0x51, 0xbf },
+-	{ 0x28, 0x74 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0xff },
+-	{ 0x28, 0x46 }, { 0x29, 0x00 }, { 0x2a, 0x1a }, { 0x2b, 0x0c },
+-
+-	{ 0x04, 0x40 }, { 0x05, 0x00 },
+-	{ 0x28, 0x00 }, { 0x2b, 0x08 },
+-	{ 0x28, 0x05 }, { 0x2b, 0x00 },
++	{ 0x50, 0xd7 }, { 0x51, 0x3f },
+ 	{ 0x1c, 0x01 },
+-	{ 0x28, 0x06 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x1f },
+-	{ 0x28, 0x07 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x18 },
+-	{ 0x28, 0x08 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x12 },
+-	{ 0x28, 0x09 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x30 },
+-	{ 0x28, 0x0a }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x37 },
+-	{ 0x28, 0x0b }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x02 },
+-	{ 0x28, 0x0c }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x09 },
+-	{ 0x28, 0x0d }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x06 },
+-	{ 0x28, 0x0e }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x7b },
+-	{ 0x28, 0x0f }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x76 },
+-	{ 0x28, 0x10 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x7d },
+-	{ 0x28, 0x11 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x08 },
+-	{ 0x28, 0x12 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x0b },
+-	{ 0x28, 0x13 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x00 },
+-	{ 0x28, 0x14 }, { 0x29, 0x00 }, { 0x2a, 0x01 }, { 0x2b, 0xf2 },
+-	{ 0x28, 0x15 }, { 0x29, 0x00 }, { 0x2a, 0x01 }, { 0x2b, 0xf3 },
+-	{ 0x28, 0x16 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x05 },
+-	{ 0x28, 0x17 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x16 },
+-	{ 0x28, 0x18 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x0f },
+-	{ 0x28, 0x19 }, { 0x29, 0x00 }, { 0x2a, 0x07 }, { 0x2b, 0xef },
+-	{ 0x28, 0x1a }, { 0x29, 0x00 }, { 0x2a, 0x07 }, { 0x2b, 0xd8 },
+-	{ 0x28, 0x1b }, { 0x29, 0x00 }, { 0x2a, 0x07 }, { 0x2b, 0xf1 },
+-	{ 0x28, 0x1c }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x3d },
+-	{ 0x28, 0x1d }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x94 },
+-	{ 0x28, 0x1e }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0xba },
++	{ 0x28, 0x06 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x03 },
++	{ 0x28, 0x07 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x0d },
++	{ 0x28, 0x08 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x02 },
++	{ 0x28, 0x09 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x01 },
++	{ 0x28, 0x0a }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x21 },
++	{ 0x28, 0x0b }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x29 },
++	{ 0x28, 0x0c }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x16 },
++	{ 0x28, 0x0d }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x31 },
++	{ 0x28, 0x0e }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x0e },
++	{ 0x28, 0x0f }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x4e },
++	{ 0x28, 0x10 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x46 },
++	{ 0x28, 0x11 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x0f },
++	{ 0x28, 0x12 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x56 },
++	{ 0x28, 0x13 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x35 },
++	{ 0x28, 0x14 }, { 0x29, 0x00 }, { 0x2a, 0x01 }, { 0x2b, 0xbe },
++	{ 0x28, 0x15 }, { 0x29, 0x00 }, { 0x2a, 0x01 }, { 0x2b, 0x84 },
++	{ 0x28, 0x16 }, { 0x29, 0x00 }, { 0x2a, 0x03 }, { 0x2b, 0xee },
++	{ 0x28, 0x17 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x98 },
++	{ 0x28, 0x18 }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x9f },
++	{ 0x28, 0x19 }, { 0x29, 0x00 }, { 0x2a, 0x07 }, { 0x2b, 0xb2 },
++	{ 0x28, 0x1a }, { 0x29, 0x00 }, { 0x2a, 0x06 }, { 0x2b, 0xc2 },
++	{ 0x28, 0x1b }, { 0x29, 0x00 }, { 0x2a, 0x07 }, { 0x2b, 0x4a },
++	{ 0x28, 0x1c }, { 0x29, 0x00 }, { 0x2a, 0x01 }, { 0x2b, 0xbc },
++	{ 0x28, 0x1d }, { 0x29, 0x00 }, { 0x2a, 0x04 }, { 0x2b, 0xba },
++	{ 0x28, 0x1e }, { 0x29, 0x00 }, { 0x2a, 0x06 }, { 0x2b, 0x14 },
+ 	{ 0x50, 0x1e }, { 0x51, 0x5d },
+ 	{ 0x50, 0x22 }, { 0x51, 0x00 },
+ 	{ 0x50, 0x23 }, { 0x51, 0xc8 },
+@@ -196,9 +190,7 @@ static struct regdata mb86a20s_init2[] = {
+ 	{ 0x50, 0x26 }, { 0x51, 0x00 },
+ 	{ 0x50, 0x27 }, { 0x51, 0xc3 },
+ 	{ 0x50, 0x39 }, { 0x51, 0x02 },
+-	{ 0xec, 0x0f },
+-	{ 0xeb, 0x1f },
+-	{ 0x28, 0x6a }, { 0x29, 0x00 }, { 0x2a, 0x00 }, { 0x2b, 0x00 },
++	{ 0x50, 0xd5 }, { 0x51, 0x01 },
+ 	{ 0xd0, 0x00 },
+ };
+ 
+@@ -318,7 +310,11 @@ static int mb86a20s_read_status(struct dvb_frontend *fe, enum fe_status *status)
+ 	if (val >= 7)
+ 		*status |= FE_HAS_SYNC;
+ 
+-	if (val >= 8)				/* Maybe 9? */
++	/*
++	 * Actually, TS reception starts at state S8, but the TS output
++	 * only reaches its normal state after the transition to S9.
++	 */
++	if (val >= 9)
+ 		*status |= FE_HAS_LOCK;
+ 
+ 	dev_dbg(&state->i2c->dev, "%s: Status = 0x%02x (state = %d)\n",
+@@ -2058,6 +2054,11 @@ static void mb86a20s_release(struct dvb_frontend *fe)
+ 	kfree(state);
+ }
+ 
++static int mb86a20s_get_frontend_algo(struct dvb_frontend *fe)
++{
++        return DVBFE_ALGO_HW;
++}
++
+ static struct dvb_frontend_ops mb86a20s_ops;
+ 
+ struct dvb_frontend *mb86a20s_attach(const struct mb86a20s_config *config,
+@@ -2130,6 +2131,7 @@ static struct dvb_frontend_ops mb86a20s_ops = {
+ 	.read_status = mb86a20s_read_status_and_stats,
+ 	.read_signal_strength = mb86a20s_read_signal_strength_from_cache,
+ 	.tune = mb86a20s_tune,
++	.get_frontend_algo = mb86a20s_get_frontend_algo,
+ };
+ 
+ MODULE_DESCRIPTION("DVB Frontend module for Fujitsu mb86A20s hardware");
+diff --git a/drivers/media/usb/cx231xx/cx231xx-avcore.c b/drivers/media/usb/cx231xx/cx231xx-avcore.c
+index 491913778bcc..2f52d66b4dae 100644
+--- a/drivers/media/usb/cx231xx/cx231xx-avcore.c
++++ b/drivers/media/usb/cx231xx/cx231xx-avcore.c
+@@ -1264,7 +1264,10 @@ int cx231xx_set_agc_analog_digital_mux_select(struct cx231xx *dev,
+ 				   dev->board.agc_analog_digital_select_gpio,
+ 				   analog_or_digital);
+ 
+-	return status;
++	if (status < 0)
++		return status;
++
++	return 0;
+ }
+ 
+ int cx231xx_enable_i2c_port_3(struct cx231xx *dev, bool is_port_3)
+diff --git a/drivers/media/usb/cx231xx/cx231xx-cards.c b/drivers/media/usb/cx231xx/cx231xx-cards.c
+index c63248a18823..72c246bfaa1c 100644
+--- a/drivers/media/usb/cx231xx/cx231xx-cards.c
++++ b/drivers/media/usb/cx231xx/cx231xx-cards.c
+@@ -486,7 +486,7 @@ struct cx231xx_board cx231xx_boards[] = {
+ 		.output_mode = OUT_MODE_VIP11,
+ 		.demod_xfer_mode = 0,
+ 		.ctl_pin_status_mask = 0xFFFFFFC4,
+-		.agc_analog_digital_select_gpio = 0x00,	/* According with PV cxPolaris.inf file */
++		.agc_analog_digital_select_gpio = 0x1c,
+ 		.tuner_sif_gpio = -1,
+ 		.tuner_scl_gpio = -1,
+ 		.tuner_sda_gpio = -1,
+diff --git a/drivers/media/usb/cx231xx/cx231xx-core.c b/drivers/media/usb/cx231xx/cx231xx-core.c
+index 630f4fc5155f..ea9a99e41581 100644
+--- a/drivers/media/usb/cx231xx/cx231xx-core.c
++++ b/drivers/media/usb/cx231xx/cx231xx-core.c
+@@ -712,6 +712,7 @@ int cx231xx_set_mode(struct cx231xx *dev, enum cx231xx_mode set_mode)
+ 			break;
+ 		case CX231XX_BOARD_CNXT_RDE_253S:
+ 		case CX231XX_BOARD_CNXT_RDU_253S:
++		case CX231XX_BOARD_PV_PLAYTV_USB_HYBRID:
+ 			errCode = cx231xx_set_agc_analog_digital_mux_select(dev, 1);
+ 			break;
+ 		case CX231XX_BOARD_HAUPPAUGE_EXETER:
+@@ -738,7 +739,7 @@ int cx231xx_set_mode(struct cx231xx *dev, enum cx231xx_mode set_mode)
+ 		case CX231XX_BOARD_PV_PLAYTV_USB_HYBRID:
+ 		case CX231XX_BOARD_HAUPPAUGE_USB2_FM_PAL:
+ 		case CX231XX_BOARD_HAUPPAUGE_USB2_FM_NTSC:
+-		errCode = cx231xx_set_agc_analog_digital_mux_select(dev, 0);
++			errCode = cx231xx_set_agc_analog_digital_mux_select(dev, 0);
+ 			break;
+ 		default:
+ 			break;
+@@ -1301,15 +1302,29 @@ int cx231xx_dev_init(struct cx231xx *dev)
+ 	dev->i2c_bus[2].i2c_reserve = 0;
+ 
+ 	/* register I2C buses */
+-	cx231xx_i2c_register(&dev->i2c_bus[0]);
+-	cx231xx_i2c_register(&dev->i2c_bus[1]);
+-	cx231xx_i2c_register(&dev->i2c_bus[2]);
++	errCode = cx231xx_i2c_register(&dev->i2c_bus[0]);
++	if (errCode < 0)
++		return errCode;
++	errCode = cx231xx_i2c_register(&dev->i2c_bus[1]);
++	if (errCode < 0)
++		return errCode;
++	errCode = cx231xx_i2c_register(&dev->i2c_bus[2]);
++	if (errCode < 0)
++		return errCode;
+ 
+ 	errCode = cx231xx_i2c_mux_create(dev);
++	if (errCode < 0) {
++		dev_err(dev->dev,
++			"%s: Failed to create I2C mux\n", __func__);
++		return errCode;
++	}
++	errCode = cx231xx_i2c_mux_register(dev, 0);
++	if (errCode < 0)
++		return errCode;
++
++	errCode = cx231xx_i2c_mux_register(dev, 1);
+ 	if (errCode < 0)
+ 		return errCode;
+-	cx231xx_i2c_mux_register(dev, 0);
+-	cx231xx_i2c_mux_register(dev, 1);
+ 
+ 	/* scan the real bus segments in the order of physical port numbers */
+ 	cx231xx_do_i2c_scan(dev, I2C_0);
+diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c
+index d34bc3530385..2e3cf012ef48 100644
+--- a/drivers/memstick/host/rtsx_usb_ms.c
++++ b/drivers/memstick/host/rtsx_usb_ms.c
+@@ -524,6 +524,7 @@ static void rtsx_usb_ms_handle_req(struct work_struct *work)
+ 	int rc;
+ 
+ 	if (!host->req) {
++		pm_runtime_get_sync(ms_dev(host));
+ 		do {
+ 			rc = memstick_next_req(msh, &host->req);
+ 			dev_dbg(ms_dev(host), "next req %d\n", rc);
+@@ -544,6 +545,7 @@ static void rtsx_usb_ms_handle_req(struct work_struct *work)
+ 						host->req->error);
+ 			}
+ 		} while (!rc);
++		pm_runtime_put(ms_dev(host));
+ 	}
+ 
+ }
+@@ -570,6 +572,7 @@ static int rtsx_usb_ms_set_param(struct memstick_host *msh,
+ 	dev_dbg(ms_dev(host), "%s: param = %d, value = %d\n",
+ 			__func__, param, value);
+ 
++	pm_runtime_get_sync(ms_dev(host));
+ 	mutex_lock(&ucr->dev_mutex);
+ 
+ 	err = rtsx_usb_card_exclusive_check(ucr, RTSX_USB_MS_CARD);
+@@ -635,6 +638,7 @@ static int rtsx_usb_ms_set_param(struct memstick_host *msh,
+ 	}
+ out:
+ 	mutex_unlock(&ucr->dev_mutex);
++	pm_runtime_put(ms_dev(host));
+ 
+ 	/* power-on delay */
+ 	if (param == MEMSTICK_POWER && value == MEMSTICK_POWER_ON)
+@@ -681,6 +685,7 @@ static int rtsx_usb_detect_ms_card(void *__host)
+ 	int err;
+ 
+ 	for (;;) {
++		pm_runtime_get_sync(ms_dev(host));
+ 		mutex_lock(&ucr->dev_mutex);
+ 
+ 		/* Check pending MS card changes */
+@@ -703,6 +708,7 @@ static int rtsx_usb_detect_ms_card(void *__host)
+ 		}
+ 
+ poll_again:
++		pm_runtime_put(ms_dev(host));
+ 		if (host->eject)
+ 			break;
+ 
+diff --git a/drivers/misc/cxl/api.c b/drivers/misc/cxl/api.c
+index f3d34b941f85..af23d7dfe752 100644
+--- a/drivers/misc/cxl/api.c
++++ b/drivers/misc/cxl/api.c
+@@ -229,6 +229,14 @@ int cxl_start_context(struct cxl_context *ctx, u64 wed,
+ 	if (ctx->status == STARTED)
+ 		goto out; /* already started */
+ 
++	/*
++	 * Increment the mapped context count for adapter. This also checks
++	 * if adapter_context_lock is taken.
++	 */
++	rc = cxl_adapter_context_get(ctx->afu->adapter);
++	if (rc)
++		goto out;
++
+ 	if (task) {
+ 		ctx->pid = get_task_pid(task, PIDTYPE_PID);
+ 		ctx->glpid = get_task_pid(task->group_leader, PIDTYPE_PID);
+@@ -240,6 +248,7 @@ int cxl_start_context(struct cxl_context *ctx, u64 wed,
+ 
+ 	if ((rc = cxl_ops->attach_process(ctx, kernel, wed, 0))) {
+ 		put_pid(ctx->pid);
++		cxl_adapter_context_put(ctx->afu->adapter);
+ 		cxl_ctx_put();
+ 		goto out;
+ 	}
+diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
+index c466ee2b0c97..5e506c19108a 100644
+--- a/drivers/misc/cxl/context.c
++++ b/drivers/misc/cxl/context.c
+@@ -238,6 +238,9 @@ int __detach_context(struct cxl_context *ctx)
+ 	put_pid(ctx->glpid);
+ 
+ 	cxl_ctx_put();
++
++	/* Decrease the attached context count on the adapter */
++	cxl_adapter_context_put(ctx->afu->adapter);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
+index 344a0ff8f8c7..19aa2aca9683 100644
+--- a/drivers/misc/cxl/cxl.h
++++ b/drivers/misc/cxl/cxl.h
+@@ -615,6 +615,14 @@ struct cxl {
+ 	bool perst_select_user;
+ 	bool perst_same_image;
+ 	bool psl_timebase_synced;
++
++	/*
++	 * number of contexts mapped on to this card. Possible values are:
++	 * >0: Number of contexts mapped and new one can be mapped.
++	 *  0: No active contexts and new ones can be mapped.
++	 * -1: No contexts mapped and new ones cannot be mapped.
++	 */
++	atomic_t contexts_num;
+ };
+ 
+ int cxl_pci_alloc_one_irq(struct cxl *adapter);
+@@ -940,4 +948,20 @@ bool cxl_pci_is_vphb_device(struct pci_dev *dev);
+ 
+ /* decode AFU error bits in the PSL register PSL_SERR_An */
+ void cxl_afu_decode_psl_serr(struct cxl_afu *afu, u64 serr);
++
++/*
++ * Increments the number of attached contexts on an adapter.
++ * In case the adapter_context_lock is taken, return -EBUSY.
++ */
++int cxl_adapter_context_get(struct cxl *adapter);
++
++/* Decrements the number of attached contexts on an adapter */
++void cxl_adapter_context_put(struct cxl *adapter);
++
++/* If no active contexts then prevents contexts from being attached */
++int cxl_adapter_context_lock(struct cxl *adapter);
++
++/* Unlock the contexts-lock if taken. Warn and force unlock otherwise */
++void cxl_adapter_context_unlock(struct cxl *adapter);
++
+ #endif
+diff --git a/drivers/misc/cxl/file.c b/drivers/misc/cxl/file.c
+index 5fb9894b157f..d0b421f49b39 100644
+--- a/drivers/misc/cxl/file.c
++++ b/drivers/misc/cxl/file.c
+@@ -205,11 +205,22 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
+ 	ctx->pid = get_task_pid(current, PIDTYPE_PID);
+ 	ctx->glpid = get_task_pid(current->group_leader, PIDTYPE_PID);
+ 
++	/*
++	 * Increment the mapped context count for adapter. This also checks
++	 * if adapter_context_lock is taken.
++	 */
++	rc = cxl_adapter_context_get(ctx->afu->adapter);
++	if (rc) {
++		afu_release_irqs(ctx, ctx);
++		goto out;
++	}
++
+ 	trace_cxl_attach(ctx, work.work_element_descriptor, work.num_interrupts, amr);
+ 
+ 	if ((rc = cxl_ops->attach_process(ctx, false, work.work_element_descriptor,
+ 							amr))) {
+ 		afu_release_irqs(ctx, ctx);
++		cxl_adapter_context_put(ctx->afu->adapter);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
+index 9aa58a77a24d..3e102cd6ed91 100644
+--- a/drivers/misc/cxl/guest.c
++++ b/drivers/misc/cxl/guest.c
+@@ -1152,6 +1152,9 @@ struct cxl *cxl_guest_init_adapter(struct device_node *np, struct platform_devic
+ 	if ((rc = cxl_sysfs_adapter_add(adapter)))
+ 		goto err_put1;
+ 
++	/* release the context lock as the adapter is configured */
++	cxl_adapter_context_unlock(adapter);
++
+ 	return adapter;
+ 
+ err_put1:
+diff --git a/drivers/misc/cxl/main.c b/drivers/misc/cxl/main.c
+index d9be23b24aa3..62e0dfb5f15b 100644
+--- a/drivers/misc/cxl/main.c
++++ b/drivers/misc/cxl/main.c
+@@ -243,8 +243,10 @@ struct cxl *cxl_alloc_adapter(void)
+ 	if (dev_set_name(&adapter->dev, "card%i", adapter->adapter_num))
+ 		goto err2;
+ 
+-	return adapter;
++	/* start with context lock taken */
++	atomic_set(&adapter->contexts_num, -1);
+ 
++	return adapter;
+ err2:
+ 	cxl_remove_adapter_nr(adapter);
+ err1:
+@@ -286,6 +288,44 @@ int cxl_afu_select_best_mode(struct cxl_afu *afu)
+ 	return 0;
+ }
+ 
++int cxl_adapter_context_get(struct cxl *adapter)
++{
++	int rc;
++
++	rc = atomic_inc_unless_negative(&adapter->contexts_num);
++	return rc >= 0 ? 0 : -EBUSY;
++}
++
++void cxl_adapter_context_put(struct cxl *adapter)
++{
++	atomic_dec_if_positive(&adapter->contexts_num);
++}
++
++int cxl_adapter_context_lock(struct cxl *adapter)
++{
++	int rc;
++	/* no active contexts -> contexts_num == 0 */
++	rc = atomic_cmpxchg(&adapter->contexts_num, 0, -1);
++	return rc ? -EBUSY : 0;
++}
++
++void cxl_adapter_context_unlock(struct cxl *adapter)
++{
++	int val = atomic_cmpxchg(&adapter->contexts_num, -1, 0);
++
++	/*
++	 * contexts lock taken -> contexts_num == -1
++	 * If not true then show a warning and force reset the lock.
++	 * This will happen when context_unlock was requested without
++	 * doing a context_lock.
++	 */
++	if (val != -1) {
++		atomic_set(&adapter->contexts_num, 0);
++		WARN(1, "Adapter context unlocked with %d active contexts",
++		     val);
++	}
++}
++
+ static int __init init_cxl(void)
+ {
+ 	int rc = 0;
+diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
+index 6f0c4ac4b649..8ad4e4f6ff77 100644
+--- a/drivers/misc/cxl/pci.c
++++ b/drivers/misc/cxl/pci.c
+@@ -1484,6 +1484,8 @@ static int cxl_configure_adapter(struct cxl *adapter, struct pci_dev *dev)
+ 	if ((rc = cxl_native_register_psl_err_irq(adapter)))
+ 		goto err;
+ 
++	/* Release the context lock as adapter is configured */
++	cxl_adapter_context_unlock(adapter);
+ 	return 0;
+ 
+ err:
+diff --git a/drivers/misc/cxl/sysfs.c b/drivers/misc/cxl/sysfs.c
+index b043c20f158f..a8b6d6a635e9 100644
+--- a/drivers/misc/cxl/sysfs.c
++++ b/drivers/misc/cxl/sysfs.c
+@@ -75,12 +75,31 @@ static ssize_t reset_adapter_store(struct device *device,
+ 	int val;
+ 
+ 	rc = sscanf(buf, "%i", &val);
+-	if ((rc != 1) || (val != 1))
++	if ((rc != 1) || (val != 1 && val != -1))
+ 		return -EINVAL;
+ 
+-	if ((rc = cxl_ops->adapter_reset(adapter)))
+-		return rc;
+-	return count;
++	/*
++	 * See if we can lock the context mapping that's only allowed
++	 * when there are no contexts attached to the adapter. Once
++	 * taken this will also prevent any context from getting activated.
++	 */
++	if (val == 1) {
++		rc =  cxl_adapter_context_lock(adapter);
++		if (rc)
++			goto out;
++
++		rc = cxl_ops->adapter_reset(adapter);
++		/* In case reset failed release context lock */
++		if (rc)
++			cxl_adapter_context_unlock(adapter);
++
++	} else if (val == -1) {
++		/* Perform a forced adapter reset */
++		rc = cxl_ops->adapter_reset(adapter);
++	}
++
++out:
++	return rc ? rc : count;
+ }
+ 
+ static ssize_t load_image_on_perst_show(struct device *device,
+diff --git a/drivers/misc/mei/amthif.c b/drivers/misc/mei/amthif.c
+index fd9271bc1a11..cd01e342bc78 100644
+--- a/drivers/misc/mei/amthif.c
++++ b/drivers/misc/mei/amthif.c
+@@ -139,7 +139,7 @@ int mei_amthif_read(struct mei_device *dev, struct file *file,
+ 			return -ERESTARTSYS;
+ 
+ 		if (!mei_cl_is_connected(cl)) {
+-			rets = -EBUSY;
++			rets = -ENODEV;
+ 			goto out;
+ 		}
+ 
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index e094df3cf2d5..5b5b2e07e99e 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -142,7 +142,7 @@ ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length)
+ 		mutex_lock(&bus->device_lock);
+ 
+ 		if (!mei_cl_is_connected(cl)) {
+-			rets = -EBUSY;
++			rets = -ENODEV;
+ 			goto out;
+ 		}
+ 	}
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 0dcb854b4bfc..7ad15d678878 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -125,6 +125,9 @@
+ #define MEI_DEV_ID_BXT_M      0x1A9A  /* Broxton M */
+ #define MEI_DEV_ID_APL_I      0x5A9A  /* Apollo Lake I */
+ 
++#define MEI_DEV_ID_KBP        0xA2BA  /* Kaby Point */
++#define MEI_DEV_ID_KBP_2      0xA2BB  /* Kaby Point 2 */
++
+ /*
+  * MEI HW Section
+  */
+diff --git a/drivers/misc/mei/main.c b/drivers/misc/mei/main.c
+index 52635b063873..080208dc5516 100644
+--- a/drivers/misc/mei/main.c
++++ b/drivers/misc/mei/main.c
+@@ -202,7 +202,7 @@ static ssize_t mei_read(struct file *file, char __user *ubuf,
+ 
+ 		mutex_lock(&dev->device_lock);
+ 		if (!mei_cl_is_connected(cl)) {
+-			rets = -EBUSY;
++			rets = -ENODEV;
+ 			goto out;
+ 		}
+ 	}
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 71cea9b296b2..5eb9b75ae9ec 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -91,6 +91,9 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_BXT_M, mei_me_pch8_cfg)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_APL_I, mei_me_pch8_cfg)},
+ 
++	{MEI_PCI_DEVICE(MEI_DEV_ID_KBP, mei_me_pch8_cfg)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_KBP_2, mei_me_pch8_cfg)},
++
+ 	/* required last entry */
+ 	{0, }
+ };
+diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
+index 2206d4477dbb..17891f17f39d 100644
+--- a/drivers/mmc/card/block.c
++++ b/drivers/mmc/card/block.c
+@@ -1778,7 +1778,7 @@ static void mmc_blk_packed_hdr_wrq_prep(struct mmc_queue_req *mqrq,
+ 	struct mmc_blk_data *md = mq->data;
+ 	struct mmc_packed *packed = mqrq->packed;
+ 	bool do_rel_wr, do_data_tag;
+-	u32 *packed_cmd_hdr;
++	__le32 *packed_cmd_hdr;
+ 	u8 hdr_blocks;
+ 	u8 i = 1;
+ 
+@@ -2303,7 +2303,8 @@ again:
+ 	set_capacity(md->disk, size);
+ 
+ 	if (mmc_host_cmd23(card->host)) {
+-		if (mmc_card_mmc(card) ||
++		if ((mmc_card_mmc(card) &&
++		     card->csd.mmca_vsn >= CSD_SPEC_VER_3) ||
+ 		    (mmc_card_sd(card) &&
+ 		     card->scr.cmds & SD_SCR_CMD23_SUPPORT))
+ 			md->flags |= MMC_BLK_CMD23;
+diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
+index fee5e1271465..7f16709a5bd5 100644
+--- a/drivers/mmc/card/queue.h
++++ b/drivers/mmc/card/queue.h
+@@ -31,7 +31,7 @@ enum mmc_packed_type {
+ 
+ struct mmc_packed {
+ 	struct list_head	list;
+-	u32			cmd_hdr[1024];
++	__le32			cmd_hdr[1024];
+ 	unsigned int		blocks;
+ 	u8			nr_entries;
+ 	u8			retries;
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index f2d185cf8a8b..c57eb32dc075 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -1259,6 +1259,16 @@ static int mmc_select_hs400es(struct mmc_card *card)
+ 		goto out_err;
+ 	}
+ 
++	if (card->mmc_avail_type & EXT_CSD_CARD_TYPE_HS400_1_2V)
++		err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120);
++
++	if (err && card->mmc_avail_type & EXT_CSD_CARD_TYPE_HS400_1_8V)
++		err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);
++
++	/* If this fails, try again during the next card power cycle */
++	if (err)
++		goto out_err;
++
+ 	err = mmc_select_bus_width(card);
+ 	if (err < 0)
+ 		goto out_err;
+diff --git a/drivers/mmc/host/rtsx_usb_sdmmc.c b/drivers/mmc/host/rtsx_usb_sdmmc.c
+index 6c71fc9f76c7..da9f71b8deb0 100644
+--- a/drivers/mmc/host/rtsx_usb_sdmmc.c
++++ b/drivers/mmc/host/rtsx_usb_sdmmc.c
+@@ -1138,11 +1138,6 @@ static void sdmmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 	dev_dbg(sdmmc_dev(host), "%s\n", __func__);
+ 	mutex_lock(&ucr->dev_mutex);
+ 
+-	if (rtsx_usb_card_exclusive_check(ucr, RTSX_USB_SD_CARD)) {
+-		mutex_unlock(&ucr->dev_mutex);
+-		return;
+-	}
+-
+ 	sd_set_power_mode(host, ios->power_mode);
+ 	sd_set_bus_width(host, ios->bus_width);
+ 	sd_set_timing(host, ios->timing, &host->ddr_mode);
+@@ -1314,6 +1309,7 @@ static void rtsx_usb_update_led(struct work_struct *work)
+ 		container_of(work, struct rtsx_usb_sdmmc, led_work);
+ 	struct rtsx_ucr *ucr = host->ucr;
+ 
++	pm_runtime_get_sync(sdmmc_dev(host));
+ 	mutex_lock(&ucr->dev_mutex);
+ 
+ 	if (host->led.brightness == LED_OFF)
+@@ -1322,6 +1318,7 @@ static void rtsx_usb_update_led(struct work_struct *work)
+ 		rtsx_usb_turn_on_led(ucr);
+ 
+ 	mutex_unlock(&ucr->dev_mutex);
++	pm_runtime_put(sdmmc_dev(host));
+ }
+ #endif
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index cd65d474afa2..a8a022a7358f 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -687,7 +687,7 @@ static u8 sdhci_calc_timeout(struct sdhci_host *host, struct mmc_command *cmd)
+ 			 * host->clock is in Hz.  target_timeout is in us.
+ 			 * Hence, us = 1000000 * cycles / Hz.  Round up.
+ 			 */
+-			val = 1000000 * data->timeout_clks;
++			val = 1000000ULL * data->timeout_clks;
+ 			if (do_div(val, host->clock))
+ 				target_timeout++;
+ 			target_timeout += val;
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index f4533266d7b2..b419c7cfd014 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -644,7 +644,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ 				int shutdown)
+ {
+ 	int err, scrubbing = 0, torture = 0, protect = 0, erroneous = 0;
+-	int vol_id = -1, lnum = -1;
++	int erase = 0, keep = 0, vol_id = -1, lnum = -1;
+ #ifdef CONFIG_MTD_UBI_FASTMAP
+ 	int anchor = wrk->anchor;
+ #endif
+@@ -780,6 +780,16 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ 			       e1->pnum);
+ 			scrubbing = 1;
+ 			goto out_not_moved;
++		} else if (ubi->fast_attach && err == UBI_IO_BAD_HDR_EBADMSG) {
++			/*
++			 * While a full scan would detect interrupted erasures
++			 * at attach time we can face them here when attached from
++			 * Fastmap.
++			 */
++			dbg_wl("PEB %d has ECC errors, maybe from an interrupted erasure",
++			       e1->pnum);
++			erase = 1;
++			goto out_not_moved;
+ 		}
+ 
+ 		ubi_err(ubi, "error %d while reading VID header from PEB %d",
+@@ -815,6 +825,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ 			 * Target PEB had bit-flips or write error - torture it.
+ 			 */
+ 			torture = 1;
++			keep = 1;
+ 			goto out_not_moved;
+ 		}
+ 
+@@ -901,7 +912,7 @@ out_not_moved:
+ 		ubi->erroneous_peb_count += 1;
+ 	} else if (scrubbing)
+ 		wl_tree_add(e1, &ubi->scrub);
+-	else
++	else if (keep)
+ 		wl_tree_add(e1, &ubi->used);
+ 	if (dst_leb_clean) {
+ 		wl_tree_add(e2, &ubi->free);
+@@ -922,6 +933,12 @@ out_not_moved:
+ 			goto out_ro;
+ 	}
+ 
++	if (erase) {
++		err = do_sync_erase(ubi, e1, vol_id, lnum, 1);
++		if (err)
++			goto out_ro;
++	}
++
+ 	mutex_unlock(&ubi->move_mutex);
+ 	return 0;
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
+index 9fb8d7472d18..da9998ea9271 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.c
++++ b/drivers/net/wireless/ath/ath10k/ce.c
+@@ -433,6 +433,13 @@ void ath10k_ce_rx_update_write_idx(struct ath10k_ce_pipe *pipe, u32 nentries)
+ 	unsigned int nentries_mask = dest_ring->nentries_mask;
+ 	unsigned int write_index = dest_ring->write_index;
+ 	u32 ctrl_addr = pipe->ctrl_addr;
++	u32 cur_write_idx = ath10k_ce_dest_ring_write_index_get(ar, ctrl_addr);
++
++	/* Prevent CE ring stuck issue that will occur when ring is full.
++	 * Make sure that write index is 1 less than read index.
++	 */
++	if ((cur_write_idx + nentries)  == dest_ring->sw_index)
++		nentries -= 1;
+ 
+ 	write_index = CE_RING_IDX_ADD(nentries_mask, write_index, nentries);
+ 	ath10k_ce_dest_ring_write_index_set(ar, ctrl_addr, write_index);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/regd.c b/drivers/net/wireless/realtek/rtlwifi/regd.c
+index 3524441fd516..6ee6bf8e7eaf 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/regd.c
++++ b/drivers/net/wireless/realtek/rtlwifi/regd.c
+@@ -345,9 +345,9 @@ static const struct ieee80211_regdomain *_rtl_regdomain_select(
+ 		return &rtl_regdom_no_midband;
+ 	case COUNTRY_CODE_IC:
+ 		return &rtl_regdom_11;
+-	case COUNTRY_CODE_ETSI:
+ 	case COUNTRY_CODE_TELEC_NETGEAR:
+ 		return &rtl_regdom_60_64;
++	case COUNTRY_CODE_ETSI:
+ 	case COUNTRY_CODE_SPAIN:
+ 	case COUNTRY_CODE_FRANCE:
+ 	case COUNTRY_CODE_ISRAEL:
+@@ -406,6 +406,8 @@ static u8 channel_plan_to_country_code(u8 channelplan)
+ 		return COUNTRY_CODE_WORLD_WIDE_13;
+ 	case 0x22:
+ 		return COUNTRY_CODE_IC;
++	case 0x25:
++		return COUNTRY_CODE_ETSI;
+ 	case 0x32:
+ 		return COUNTRY_CODE_TELEC_NETGEAR;
+ 	case 0x41:
+diff --git a/drivers/pci/host/pci-tegra.c b/drivers/pci/host/pci-tegra.c
+index 6de0757b11e4..84d650d892e7 100644
+--- a/drivers/pci/host/pci-tegra.c
++++ b/drivers/pci/host/pci-tegra.c
+@@ -856,7 +856,7 @@ static int tegra_pcie_phy_disable(struct tegra_pcie *pcie)
+ 	/* override IDDQ */
+ 	value = pads_readl(pcie, PADS_CTL);
+ 	value |= PADS_CTL_IDDQ_1L;
+-	pads_writel(pcie, PADS_CTL, value);
++	pads_writel(pcie, value, PADS_CTL);
+ 
+ 	/* reset PLL */
+ 	value = pads_readl(pcie, soc->pads_pll_ctl);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 44e0ff37480b..4bf1a88d7ba7 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3198,6 +3198,7 @@ static void quirk_no_bus_reset(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0030, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0032, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0033, quirk_no_bus_reset);
+ 
+ static void quirk_no_pm_reset(struct pci_dev *dev)
+ {
+diff --git a/drivers/pinctrl/intel/pinctrl-baytrail.c b/drivers/pinctrl/intel/pinctrl-baytrail.c
+index d22a9fe2e6df..71bbeb9321ba 100644
+--- a/drivers/pinctrl/intel/pinctrl-baytrail.c
++++ b/drivers/pinctrl/intel/pinctrl-baytrail.c
+@@ -1808,6 +1808,8 @@ static int byt_pinctrl_probe(struct platform_device *pdev)
+ 		return PTR_ERR(vg->pctl_dev);
+ 	}
+ 
++	raw_spin_lock_init(&vg->lock);
++
+ 	ret = byt_gpio_probe(vg);
+ 	if (ret) {
+ 		pinctrl_unregister(vg->pctl_dev);
+@@ -1815,7 +1817,6 @@ static int byt_pinctrl_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	platform_set_drvdata(pdev, vg);
+-	raw_spin_lock_init(&vg->lock);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+ 	return 0;
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 257cab129692..2b5b20bf7d99 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -19,6 +19,7 @@
+ #include <linux/pinctrl/pinconf.h>
+ #include <linux/pinctrl/pinconf-generic.h>
+ 
++#include "../core.h"
+ #include "pinctrl-intel.h"
+ 
+ /* Offset from regs */
+@@ -1079,6 +1080,26 @@ int intel_pinctrl_remove(struct platform_device *pdev)
+ EXPORT_SYMBOL_GPL(intel_pinctrl_remove);
+ 
+ #ifdef CONFIG_PM_SLEEP
++static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned pin)
++{
++	const struct pin_desc *pd = pin_desc_get(pctrl->pctldev, pin);
++
++	if (!pd || !intel_pad_usable(pctrl, pin))
++		return false;
++
++	/*
++	 * Only restore the pin if it is actually in use by the kernel (or
++	 * by userspace). It is possible that some pins are used by the
++	 * BIOS during resume and those are not always locked down so leave
++	 * them alone.
++	 */
++	if (pd->mux_owner || pd->gpio_owner ||
++	    gpiochip_line_is_irq(&pctrl->chip, pin))
++		return true;
++
++	return false;
++}
++
+ int intel_pinctrl_suspend(struct device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+@@ -1092,7 +1113,7 @@ int intel_pinctrl_suspend(struct device *dev)
+ 		const struct pinctrl_pin_desc *desc = &pctrl->soc->pins[i];
+ 		u32 val;
+ 
+-		if (!intel_pad_usable(pctrl, desc->number))
++		if (!intel_pinctrl_should_save(pctrl, desc->number))
+ 			continue;
+ 
+ 		val = readl(intel_get_padcfg(pctrl, desc->number, PADCFG0));
+@@ -1153,7 +1174,7 @@ int intel_pinctrl_resume(struct device *dev)
+ 		void __iomem *padcfg;
+ 		u32 val;
+ 
+-		if (!intel_pad_usable(pctrl, desc->number))
++		if (!intel_pinctrl_should_save(pctrl, desc->number))
+ 			continue;
+ 
+ 		padcfg = intel_get_padcfg(pctrl, desc->number, PADCFG0);
+diff --git a/drivers/regulator/tps65910-regulator.c b/drivers/regulator/tps65910-regulator.c
+index fb991ec76423..696116ebdf50 100644
+--- a/drivers/regulator/tps65910-regulator.c
++++ b/drivers/regulator/tps65910-regulator.c
+@@ -1111,6 +1111,12 @@ static int tps65910_probe(struct platform_device *pdev)
+ 		pmic->num_regulators = ARRAY_SIZE(tps65910_regs);
+ 		pmic->ext_sleep_control = tps65910_ext_sleep_control;
+ 		info = tps65910_regs;
++		/* Work around silicon erratum SWCZ010: output programmed
++		 * voltage level can go higher than expected or crash
++		 * Workaround: use no synchronization of DCDC clocks
++		 */
++		tps65910_reg_clear_bits(pmic->mfd, TPS65910_DCDCCTRL,
++					DCDCCTRL_DCDCCKSYNC_MASK);
+ 		break;
+ 	case TPS65911:
+ 		pmic->get_ctrl_reg = &tps65911_get_ctrl_register;
+diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
+index 5d7fbe4e907e..581001989937 100644
+--- a/drivers/s390/scsi/zfcp_dbf.c
++++ b/drivers/s390/scsi/zfcp_dbf.c
+@@ -3,7 +3,7 @@
+  *
+  * Debug traces for zfcp.
+  *
+- * Copyright IBM Corp. 2002, 2013
++ * Copyright IBM Corp. 2002, 2016
+  */
+ 
+ #define KMSG_COMPONENT "zfcp"
+@@ -65,7 +65,7 @@ void zfcp_dbf_pl_write(struct zfcp_dbf *dbf, void *data, u16 length, char *area,
+  * @tag: tag indicating which kind of unsolicited status has been received
+  * @req: request for which a response was received
+  */
+-void zfcp_dbf_hba_fsf_res(char *tag, struct zfcp_fsf_req *req)
++void zfcp_dbf_hba_fsf_res(char *tag, int level, struct zfcp_fsf_req *req)
+ {
+ 	struct zfcp_dbf *dbf = req->adapter->dbf;
+ 	struct fsf_qtcb_prefix *q_pref = &req->qtcb->prefix;
+@@ -85,6 +85,8 @@ void zfcp_dbf_hba_fsf_res(char *tag, struct zfcp_fsf_req *req)
+ 	rec->u.res.req_issued = req->issued;
+ 	rec->u.res.prot_status = q_pref->prot_status;
+ 	rec->u.res.fsf_status = q_head->fsf_status;
++	rec->u.res.port_handle = q_head->port_handle;
++	rec->u.res.lun_handle = q_head->lun_handle;
+ 
+ 	memcpy(rec->u.res.prot_status_qual, &q_pref->prot_status_qual,
+ 	       FSF_PROT_STATUS_QUAL_SIZE);
+@@ -97,7 +99,7 @@ void zfcp_dbf_hba_fsf_res(char *tag, struct zfcp_fsf_req *req)
+ 				  rec->pl_len, "fsf_res", req->req_id);
+ 	}
+ 
+-	debug_event(dbf->hba, 1, rec, sizeof(*rec));
++	debug_event(dbf->hba, level, rec, sizeof(*rec));
+ 	spin_unlock_irqrestore(&dbf->hba_lock, flags);
+ }
+ 
+@@ -241,7 +243,8 @@ static void zfcp_dbf_set_common(struct zfcp_dbf_rec *rec,
+ 	if (sdev) {
+ 		rec->lun_status = atomic_read(&sdev_to_zfcp(sdev)->status);
+ 		rec->lun = zfcp_scsi_dev_lun(sdev);
+-	}
++	} else
++		rec->lun = ZFCP_DBF_INVALID_LUN;
+ }
+ 
+ /**
+@@ -320,13 +323,48 @@ void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
+ 	spin_unlock_irqrestore(&dbf->rec_lock, flags);
+ }
+ 
++/**
++ * zfcp_dbf_rec_run_wka - trace wka port event with info like running recovery
++ * @tag: identifier for event
++ * @wka_port: well known address port
++ * @req_id: request ID to correlate with potential HBA trace record
++ */
++void zfcp_dbf_rec_run_wka(char *tag, struct zfcp_fc_wka_port *wka_port,
++			  u64 req_id)
++{
++	struct zfcp_dbf *dbf = wka_port->adapter->dbf;
++	struct zfcp_dbf_rec *rec = &dbf->rec_buf;
++	unsigned long flags;
++
++	spin_lock_irqsave(&dbf->rec_lock, flags);
++	memset(rec, 0, sizeof(*rec));
++
++	rec->id = ZFCP_DBF_REC_RUN;
++	memcpy(rec->tag, tag, ZFCP_DBF_TAG_LEN);
++	rec->port_status = wka_port->status;
++	rec->d_id = wka_port->d_id;
++	rec->lun = ZFCP_DBF_INVALID_LUN;
++
++	rec->u.run.fsf_req_id = req_id;
++	rec->u.run.rec_status = ~0;
++	rec->u.run.rec_step = ~0;
++	rec->u.run.rec_action = ~0;
++	rec->u.run.rec_count = ~0;
++
++	debug_event(dbf->rec, 1, rec, sizeof(*rec));
++	spin_unlock_irqrestore(&dbf->rec_lock, flags);
++}
++
+ static inline
+-void zfcp_dbf_san(char *tag, struct zfcp_dbf *dbf, void *data, u8 id, u16 len,
+-		  u64 req_id, u32 d_id)
++void zfcp_dbf_san(char *tag, struct zfcp_dbf *dbf,
++		  char *paytag, struct scatterlist *sg, u8 id, u16 len,
++		  u64 req_id, u32 d_id, u16 cap_len)
+ {
+ 	struct zfcp_dbf_san *rec = &dbf->san_buf;
+ 	u16 rec_len;
+ 	unsigned long flags;
++	struct zfcp_dbf_pay *payload = &dbf->pay_buf;
++	u16 pay_sum = 0;
+ 
+ 	spin_lock_irqsave(&dbf->san_lock, flags);
+ 	memset(rec, 0, sizeof(*rec));
+@@ -334,10 +372,41 @@ void zfcp_dbf_san(char *tag, struct zfcp_dbf *dbf, void *data, u8 id, u16 len,
+ 	rec->id = id;
+ 	rec->fsf_req_id = req_id;
+ 	rec->d_id = d_id;
+-	rec_len = min(len, (u16)ZFCP_DBF_SAN_MAX_PAYLOAD);
+-	memcpy(rec->payload, data, rec_len);
+ 	memcpy(rec->tag, tag, ZFCP_DBF_TAG_LEN);
++	rec->pl_len = len; /* full length even if we cap pay below */
++	if (!sg)
++		goto out;
++	rec_len = min_t(unsigned int, sg->length, ZFCP_DBF_SAN_MAX_PAYLOAD);
++	memcpy(rec->payload, sg_virt(sg), rec_len); /* part of 1st sg entry */
++	if (len <= rec_len)
++		goto out; /* skip pay record if full content in rec->payload */
++
++	/* if (len > rec_len):
++	 * dump data up to cap_len ignoring small duplicate in rec->payload
++	 */
++	spin_lock(&dbf->pay_lock);
++	memset(payload, 0, sizeof(*payload));
++	memcpy(payload->area, paytag, ZFCP_DBF_TAG_LEN);
++	payload->fsf_req_id = req_id;
++	payload->counter = 0;
++	for (; sg && pay_sum < cap_len; sg = sg_next(sg)) {
++		u16 pay_len, offset = 0;
++
++		while (offset < sg->length && pay_sum < cap_len) {
++			pay_len = min((u16)ZFCP_DBF_PAY_MAX_REC,
++				      (u16)(sg->length - offset));
++			/* cap_len <= pay_sum < cap_len+ZFCP_DBF_PAY_MAX_REC */
++			memcpy(payload->data, sg_virt(sg) + offset, pay_len);
++			debug_event(dbf->pay, 1, payload,
++				    zfcp_dbf_plen(pay_len));
++			payload->counter++;
++			offset += pay_len;
++			pay_sum += pay_len;
++		}
++	}
++	spin_unlock(&dbf->pay_lock);
+ 
++out:
+ 	debug_event(dbf->san, 1, rec, sizeof(*rec));
+ 	spin_unlock_irqrestore(&dbf->san_lock, flags);
+ }
+@@ -354,9 +423,62 @@ void zfcp_dbf_san_req(char *tag, struct zfcp_fsf_req *fsf, u32 d_id)
+ 	struct zfcp_fsf_ct_els *ct_els = fsf->data;
+ 	u16 length;
+ 
+-	length = (u16)(ct_els->req->length + FC_CT_HDR_LEN);
+-	zfcp_dbf_san(tag, dbf, sg_virt(ct_els->req), ZFCP_DBF_SAN_REQ, length,
+-		     fsf->req_id, d_id);
++	length = (u16)zfcp_qdio_real_bytes(ct_els->req);
++	zfcp_dbf_san(tag, dbf, "san_req", ct_els->req, ZFCP_DBF_SAN_REQ,
++		     length, fsf->req_id, d_id, length);
++}
++
++static u16 zfcp_dbf_san_res_cap_len_if_gpn_ft(char *tag,
++					      struct zfcp_fsf_req *fsf,
++					      u16 len)
++{
++	struct zfcp_fsf_ct_els *ct_els = fsf->data;
++	struct fc_ct_hdr *reqh = sg_virt(ct_els->req);
++	struct fc_ns_gid_ft *reqn = (struct fc_ns_gid_ft *)(reqh + 1);
++	struct scatterlist *resp_entry = ct_els->resp;
++	struct fc_gpn_ft_resp *acc;
++	int max_entries, x, last = 0;
++
++	if (!(memcmp(tag, "fsscth2", 7) == 0
++	      && ct_els->d_id == FC_FID_DIR_SERV
++	      && reqh->ct_rev == FC_CT_REV
++	      && reqh->ct_in_id[0] == 0
++	      && reqh->ct_in_id[1] == 0
++	      && reqh->ct_in_id[2] == 0
++	      && reqh->ct_fs_type == FC_FST_DIR
++	      && reqh->ct_fs_subtype == FC_NS_SUBTYPE
++	      && reqh->ct_options == 0
++	      && reqh->_ct_resvd1 == 0
++	      && reqh->ct_cmd == FC_NS_GPN_FT
++	      /* reqh->ct_mr_size can vary so do not match but read below */
++	      && reqh->_ct_resvd2 == 0
++	      && reqh->ct_reason == 0
++	      && reqh->ct_explan == 0
++	      && reqh->ct_vendor == 0
++	      && reqn->fn_resvd == 0
++	      && reqn->fn_domain_id_scope == 0
++	      && reqn->fn_area_id_scope == 0
++	      && reqn->fn_fc4_type == FC_TYPE_FCP))
++		return len; /* not GPN_FT response so do not cap */
++
++	acc = sg_virt(resp_entry);
++	max_entries = (reqh->ct_mr_size * 4 / sizeof(struct fc_gpn_ft_resp))
++		+ 1 /* zfcp_fc_scan_ports: bytes correct, entries off-by-one
++		     * to account for header as 1st pseudo "entry" */;
++
++	/* the basic CT_IU preamble is the same size as one entry in the GPN_FT
++	 * response, allowing us to skip special handling for it - just skip it
++	 */
++	for (x = 1; x < max_entries && !last; x++) {
++		if (x % (ZFCP_FC_GPN_FT_ENT_PAGE + 1))
++			acc++;
++		else
++			acc = sg_virt(++resp_entry);
++
++		last = acc->fp_flags & FC_NS_FID_LAST;
++	}
++	len = min(len, (u16)(x * sizeof(struct fc_gpn_ft_resp)));
++	return len; /* cap after last entry */
+ }
+ 
+ /**
+@@ -370,9 +492,10 @@ void zfcp_dbf_san_res(char *tag, struct zfcp_fsf_req *fsf)
+ 	struct zfcp_fsf_ct_els *ct_els = fsf->data;
+ 	u16 length;
+ 
+-	length = (u16)(ct_els->resp->length + FC_CT_HDR_LEN);
+-	zfcp_dbf_san(tag, dbf, sg_virt(ct_els->resp), ZFCP_DBF_SAN_RES, length,
+-		     fsf->req_id, 0);
++	length = (u16)zfcp_qdio_real_bytes(ct_els->resp);
++	zfcp_dbf_san(tag, dbf, "san_res", ct_els->resp, ZFCP_DBF_SAN_RES,
++		     length, fsf->req_id, ct_els->d_id,
++		     zfcp_dbf_san_res_cap_len_if_gpn_ft(tag, fsf, length));
+ }
+ 
+ /**
+@@ -386,11 +509,13 @@ void zfcp_dbf_san_in_els(char *tag, struct zfcp_fsf_req *fsf)
+ 	struct fsf_status_read_buffer *srb =
+ 		(struct fsf_status_read_buffer *) fsf->data;
+ 	u16 length;
++	struct scatterlist sg;
+ 
+ 	length = (u16)(srb->length -
+ 			offsetof(struct fsf_status_read_buffer, payload));
+-	zfcp_dbf_san(tag, dbf, srb->payload.data, ZFCP_DBF_SAN_ELS, length,
+-		     fsf->req_id, ntoh24(srb->d_id));
++	sg_init_one(&sg, srb->payload.data, length);
++	zfcp_dbf_san(tag, dbf, "san_els", &sg, ZFCP_DBF_SAN_ELS, length,
++		     fsf->req_id, ntoh24(srb->d_id), length);
+ }
+ 
+ /**
+@@ -399,7 +524,8 @@ void zfcp_dbf_san_in_els(char *tag, struct zfcp_fsf_req *fsf)
+  * @sc: pointer to struct scsi_cmnd
+  * @fsf: pointer to struct zfcp_fsf_req
+  */
+-void zfcp_dbf_scsi(char *tag, struct scsi_cmnd *sc, struct zfcp_fsf_req *fsf)
++void zfcp_dbf_scsi(char *tag, int level, struct scsi_cmnd *sc,
++		   struct zfcp_fsf_req *fsf)
+ {
+ 	struct zfcp_adapter *adapter =
+ 		(struct zfcp_adapter *) sc->device->host->hostdata[0];
+@@ -442,7 +568,7 @@ void zfcp_dbf_scsi(char *tag, struct scsi_cmnd *sc, struct zfcp_fsf_req *fsf)
+ 		}
+ 	}
+ 
+-	debug_event(dbf->scsi, 1, rec, sizeof(*rec));
++	debug_event(dbf->scsi, level, rec, sizeof(*rec));
+ 	spin_unlock_irqrestore(&dbf->scsi_lock, flags);
+ }
+ 
+diff --git a/drivers/s390/scsi/zfcp_dbf.h b/drivers/s390/scsi/zfcp_dbf.h
+index 0be3d48681ae..36d07584271d 100644
+--- a/drivers/s390/scsi/zfcp_dbf.h
++++ b/drivers/s390/scsi/zfcp_dbf.h
+@@ -2,7 +2,7 @@
+  * zfcp device driver
+  * debug feature declarations
+  *
+- * Copyright IBM Corp. 2008, 2010
++ * Copyright IBM Corp. 2008, 2015
+  */
+ 
+ #ifndef ZFCP_DBF_H
+@@ -17,6 +17,11 @@
+ 
+ #define ZFCP_DBF_INVALID_LUN	0xFFFFFFFFFFFFFFFFull
+ 
++enum zfcp_dbf_pseudo_erp_act_type {
++	ZFCP_PSEUDO_ERP_ACTION_RPORT_ADD = 0xff,
++	ZFCP_PSEUDO_ERP_ACTION_RPORT_DEL = 0xfe,
++};
++
+ /**
+  * struct zfcp_dbf_rec_trigger - trace record for triggered recovery action
+  * @ready: number of ready recovery actions
+@@ -110,6 +115,7 @@ struct zfcp_dbf_san {
+ 	u32 d_id;
+ #define ZFCP_DBF_SAN_MAX_PAYLOAD (FC_CT_HDR_LEN + 32)
+ 	char payload[ZFCP_DBF_SAN_MAX_PAYLOAD];
++	u16 pl_len;
+ } __packed;
+ 
+ /**
+@@ -126,6 +132,8 @@ struct zfcp_dbf_hba_res {
+ 	u8  prot_status_qual[FSF_PROT_STATUS_QUAL_SIZE];
+ 	u32 fsf_status;
+ 	u8  fsf_status_qual[FSF_STATUS_QUALIFIER_SIZE];
++	u32 port_handle;
++	u32 lun_handle;
+ } __packed;
+ 
+ /**
+@@ -279,7 +287,7 @@ static inline
+ void zfcp_dbf_hba_fsf_resp(char *tag, int level, struct zfcp_fsf_req *req)
+ {
+ 	if (debug_level_enabled(req->adapter->dbf->hba, level))
+-		zfcp_dbf_hba_fsf_res(tag, req);
++		zfcp_dbf_hba_fsf_res(tag, level, req);
+ }
+ 
+ /**
+@@ -318,7 +326,7 @@ void _zfcp_dbf_scsi(char *tag, int level, struct scsi_cmnd *scmd,
+ 					scmd->device->host->hostdata[0];
+ 
+ 	if (debug_level_enabled(adapter->dbf->scsi, level))
+-		zfcp_dbf_scsi(tag, scmd, req);
++		zfcp_dbf_scsi(tag, level, scmd, req);
+ }
+ 
+ /**
+diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
+index 3fb410977014..a59d678125bd 100644
+--- a/drivers/s390/scsi/zfcp_erp.c
++++ b/drivers/s390/scsi/zfcp_erp.c
+@@ -3,7 +3,7 @@
+  *
+  * Error Recovery Procedures (ERP).
+  *
+- * Copyright IBM Corp. 2002, 2010
++ * Copyright IBM Corp. 2002, 2015
+  */
+ 
+ #define KMSG_COMPONENT "zfcp"
+@@ -1217,8 +1217,14 @@ static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
+ 		break;
+ 
+ 	case ZFCP_ERP_ACTION_REOPEN_PORT:
+-		if (result == ZFCP_ERP_SUCCEEDED)
+-			zfcp_scsi_schedule_rport_register(port);
++		/* This switch case might also happen after a forced reopen
++		 * was successfully done and thus overwritten with a new
++		 * non-forced reopen at `ersfs_2'. In this case, we must not
++		 * do the clean-up of the non-forced version.
++		 */
++		if (act->step != ZFCP_ERP_STEP_UNINITIALIZED)
++			if (result == ZFCP_ERP_SUCCEEDED)
++				zfcp_scsi_schedule_rport_register(port);
+ 		/* fall through */
+ 	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
+ 		put_device(&port->dev);
+diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
+index 5b500652572b..c8fed9fa1cca 100644
+--- a/drivers/s390/scsi/zfcp_ext.h
++++ b/drivers/s390/scsi/zfcp_ext.h
+@@ -3,7 +3,7 @@
+  *
+  * External function declarations.
+  *
+- * Copyright IBM Corp. 2002, 2010
++ * Copyright IBM Corp. 2002, 2015
+  */
+ 
+ #ifndef ZFCP_EXT_H
+@@ -35,8 +35,9 @@ extern void zfcp_dbf_adapter_unregister(struct zfcp_adapter *);
+ extern void zfcp_dbf_rec_trig(char *, struct zfcp_adapter *,
+ 			      struct zfcp_port *, struct scsi_device *, u8, u8);
+ extern void zfcp_dbf_rec_run(char *, struct zfcp_erp_action *);
++extern void zfcp_dbf_rec_run_wka(char *, struct zfcp_fc_wka_port *, u64);
+ extern void zfcp_dbf_hba_fsf_uss(char *, struct zfcp_fsf_req *);
+-extern void zfcp_dbf_hba_fsf_res(char *, struct zfcp_fsf_req *);
++extern void zfcp_dbf_hba_fsf_res(char *, int, struct zfcp_fsf_req *);
+ extern void zfcp_dbf_hba_bit_err(char *, struct zfcp_fsf_req *);
+ extern void zfcp_dbf_hba_berr(struct zfcp_dbf *, struct zfcp_fsf_req *);
+ extern void zfcp_dbf_hba_def_err(struct zfcp_adapter *, u64, u16, void **);
+@@ -44,7 +45,8 @@ extern void zfcp_dbf_hba_basic(char *, struct zfcp_adapter *);
+ extern void zfcp_dbf_san_req(char *, struct zfcp_fsf_req *, u32);
+ extern void zfcp_dbf_san_res(char *, struct zfcp_fsf_req *);
+ extern void zfcp_dbf_san_in_els(char *, struct zfcp_fsf_req *);
+-extern void zfcp_dbf_scsi(char *, struct scsi_cmnd *, struct zfcp_fsf_req *);
++extern void zfcp_dbf_scsi(char *, int, struct scsi_cmnd *,
++			  struct zfcp_fsf_req *);
+ 
+ /* zfcp_erp.c */
+ extern void zfcp_erp_set_adapter_status(struct zfcp_adapter *, u32);
+diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
+index 522a633c866a..75f820ca17b7 100644
+--- a/drivers/s390/scsi/zfcp_fsf.c
++++ b/drivers/s390/scsi/zfcp_fsf.c
+@@ -3,7 +3,7 @@
+  *
+  * Implementation of FSF commands.
+  *
+- * Copyright IBM Corp. 2002, 2013
++ * Copyright IBM Corp. 2002, 2015
+  */
+ 
+ #define KMSG_COMPONENT "zfcp"
+@@ -508,7 +508,10 @@ static int zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *req)
+ 		fc_host_port_type(shost) = FC_PORTTYPE_PTP;
+ 		break;
+ 	case FSF_TOPO_FABRIC:
+-		fc_host_port_type(shost) = FC_PORTTYPE_NPORT;
++		if (bottom->connection_features & FSF_FEATURE_NPIV_MODE)
++			fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
++		else
++			fc_host_port_type(shost) = FC_PORTTYPE_NPORT;
+ 		break;
+ 	case FSF_TOPO_AL:
+ 		fc_host_port_type(shost) = FC_PORTTYPE_NLPORT;
+@@ -613,7 +616,6 @@ static void zfcp_fsf_exchange_port_evaluate(struct zfcp_fsf_req *req)
+ 
+ 	if (adapter->connection_features & FSF_FEATURE_NPIV_MODE) {
+ 		fc_host_permanent_port_name(shost) = bottom->wwpn;
+-		fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
+ 	} else
+ 		fc_host_permanent_port_name(shost) = fc_host_port_name(shost);
+ 	fc_host_maxframe_size(shost) = bottom->maximum_frame_size;
+@@ -982,8 +984,12 @@ static int zfcp_fsf_setup_ct_els_sbals(struct zfcp_fsf_req *req,
+ 	if (zfcp_adapter_multi_buffer_active(adapter)) {
+ 		if (zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, sg_req))
+ 			return -EIO;
++		qtcb->bottom.support.req_buf_length =
++			zfcp_qdio_real_bytes(sg_req);
+ 		if (zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, sg_resp))
+ 			return -EIO;
++		qtcb->bottom.support.resp_buf_length =
++			zfcp_qdio_real_bytes(sg_resp);
+ 
+ 		zfcp_qdio_set_data_div(qdio, &req->qdio_req,
+ 					zfcp_qdio_sbale_count(sg_req));
+@@ -1073,6 +1079,7 @@ int zfcp_fsf_send_ct(struct zfcp_fc_wka_port *wka_port,
+ 
+ 	req->handler = zfcp_fsf_send_ct_handler;
+ 	req->qtcb->header.port_handle = wka_port->handle;
++	ct->d_id = wka_port->d_id;
+ 	req->data = ct;
+ 
+ 	zfcp_dbf_san_req("fssct_1", req, wka_port->d_id);
+@@ -1169,6 +1176,7 @@ int zfcp_fsf_send_els(struct zfcp_adapter *adapter, u32 d_id,
+ 
+ 	hton24(req->qtcb->bottom.support.d_id, d_id);
+ 	req->handler = zfcp_fsf_send_els_handler;
++	els->d_id = d_id;
+ 	req->data = els;
+ 
+ 	zfcp_dbf_san_req("fssels1", req, d_id);
+@@ -1575,7 +1583,7 @@ out:
+ int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port)
+ {
+ 	struct zfcp_qdio *qdio = wka_port->adapter->qdio;
+-	struct zfcp_fsf_req *req;
++	struct zfcp_fsf_req *req = NULL;
+ 	int retval = -EIO;
+ 
+ 	spin_lock_irq(&qdio->req_q_lock);
+@@ -1604,6 +1612,8 @@ int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port)
+ 		zfcp_fsf_req_free(req);
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
++	if (req && !IS_ERR(req))
++		zfcp_dbf_rec_run_wka("fsowp_1", wka_port, req->req_id);
+ 	return retval;
+ }
+ 
+@@ -1628,7 +1638,7 @@ static void zfcp_fsf_close_wka_port_handler(struct zfcp_fsf_req *req)
+ int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port)
+ {
+ 	struct zfcp_qdio *qdio = wka_port->adapter->qdio;
+-	struct zfcp_fsf_req *req;
++	struct zfcp_fsf_req *req = NULL;
+ 	int retval = -EIO;
+ 
+ 	spin_lock_irq(&qdio->req_q_lock);
+@@ -1657,6 +1667,8 @@ int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port)
+ 		zfcp_fsf_req_free(req);
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
++	if (req && !IS_ERR(req))
++		zfcp_dbf_rec_run_wka("fscwp_1", wka_port, req->req_id);
+ 	return retval;
+ }
+ 
+diff --git a/drivers/s390/scsi/zfcp_fsf.h b/drivers/s390/scsi/zfcp_fsf.h
+index 57ae3ae1046d..be1c04b334c5 100644
+--- a/drivers/s390/scsi/zfcp_fsf.h
++++ b/drivers/s390/scsi/zfcp_fsf.h
+@@ -3,7 +3,7 @@
+  *
+  * Interface to the FSF support functions.
+  *
+- * Copyright IBM Corp. 2002, 2010
++ * Copyright IBM Corp. 2002, 2015
+  */
+ 
+ #ifndef FSF_H
+@@ -436,6 +436,7 @@ struct zfcp_blk_drv_data {
+  * @handler_data: data passed to handler function
+  * @port: Optional pointer to port for zfcp internal ELS (only test link ADISC)
+  * @status: used to pass error status to calling function
++ * @d_id: Destination ID of either open WKA port for CT or of D_ID for ELS
+  */
+ struct zfcp_fsf_ct_els {
+ 	struct scatterlist *req;
+@@ -444,6 +445,7 @@ struct zfcp_fsf_ct_els {
+ 	void *handler_data;
+ 	struct zfcp_port *port;
+ 	int status;
++	u32 d_id;
+ };
+ 
+ #endif				/* FSF_H */
+diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
+index b3c6ff49103b..9069f98a1817 100644
+--- a/drivers/s390/scsi/zfcp_scsi.c
++++ b/drivers/s390/scsi/zfcp_scsi.c
+@@ -3,7 +3,7 @@
+  *
+  * Interface to Linux SCSI midlayer.
+  *
+- * Copyright IBM Corp. 2002, 2013
++ * Copyright IBM Corp. 2002, 2015
+  */
+ 
+ #define KMSG_COMPONENT "zfcp"
+@@ -556,6 +556,9 @@ static void zfcp_scsi_rport_register(struct zfcp_port *port)
+ 	ids.port_id = port->d_id;
+ 	ids.roles = FC_RPORT_ROLE_FCP_TARGET;
+ 
++	zfcp_dbf_rec_trig("scpaddy", port->adapter, port, NULL,
++			  ZFCP_PSEUDO_ERP_ACTION_RPORT_ADD,
++			  ZFCP_PSEUDO_ERP_ACTION_RPORT_ADD);
+ 	rport = fc_remote_port_add(port->adapter->scsi_host, 0, &ids);
+ 	if (!rport) {
+ 		dev_err(&port->adapter->ccw_device->dev,
+@@ -577,6 +580,9 @@ static void zfcp_scsi_rport_block(struct zfcp_port *port)
+ 	struct fc_rport *rport = port->rport;
+ 
+ 	if (rport) {
++		zfcp_dbf_rec_trig("scpdely", port->adapter, port, NULL,
++				  ZFCP_PSEUDO_ERP_ACTION_RPORT_DEL,
++				  ZFCP_PSEUDO_ERP_ACTION_RPORT_DEL);
+ 		fc_remote_port_delete(rport);
+ 		port->rport = NULL;
+ 	}
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index e0a78f53d809..bac8cdf9fb23 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -1472,12 +1472,12 @@ retry:
+  out_err:
+ 	kfree(lun_data);
+  out:
+-	scsi_device_put(sdev);
+ 	if (scsi_device_created(sdev))
+ 		/*
+ 		 * the sdev we used didn't appear in the report luns scan
+ 		 */
+ 		__scsi_remove_device(sdev);
++	scsi_device_put(sdev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/soc/fsl/qe/gpio.c b/drivers/soc/fsl/qe/gpio.c
+index 333eb2215a57..0aaf429f31d5 100644
+--- a/drivers/soc/fsl/qe/gpio.c
++++ b/drivers/soc/fsl/qe/gpio.c
+@@ -41,7 +41,8 @@ struct qe_gpio_chip {
+ 
+ static void qe_gpio_save_regs(struct of_mm_gpio_chip *mm_gc)
+ {
+-	struct qe_gpio_chip *qe_gc = gpiochip_get_data(&mm_gc->gc);
++	struct qe_gpio_chip *qe_gc =
++		container_of(mm_gc, struct qe_gpio_chip, mm_gc);
+ 	struct qe_pio_regs __iomem *regs = mm_gc->regs;
+ 
+ 	qe_gc->cpdata = in_be32(&regs->cpdata);
+diff --git a/drivers/soc/fsl/qe/qe_common.c b/drivers/soc/fsl/qe/qe_common.c
+index 41eff805a904..104e68d9b84f 100644
+--- a/drivers/soc/fsl/qe/qe_common.c
++++ b/drivers/soc/fsl/qe/qe_common.c
+@@ -70,6 +70,11 @@ int cpm_muram_init(void)
+ 	}
+ 
+ 	muram_pool = gen_pool_create(0, -1);
++	if (!muram_pool) {
++		pr_err("Cannot allocate memory pool for CPM/QE muram");
++		ret = -ENOMEM;
++		goto out_muram;
++	}
+ 	muram_pbase = of_translate_address(np, zero);
+ 	if (muram_pbase == (phys_addr_t)OF_BAD_ADDR) {
+ 		pr_err("Cannot translate zero through CPM muram node");
+@@ -116,6 +121,9 @@ static unsigned long cpm_muram_alloc_common(unsigned long size,
+ 	struct muram_block *entry;
+ 	unsigned long start;
+ 
++	if (!muram_pool && cpm_muram_init())
++		goto out2;
++
+ 	start = gen_pool_alloc_algo(muram_pool, size, algo, data);
+ 	if (!start)
+ 		goto out2;
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 6094a6beddde..e825d580ccee 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -754,15 +754,7 @@ EXPORT_SYMBOL(target_complete_cmd);
+ 
+ void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length)
+ {
+-	if (scsi_status != SAM_STAT_GOOD) {
+-		return;
+-	}
+-
+-	/*
+-	 * Calculate new residual count based upon length of SCSI data
+-	 * transferred.
+-	 */
+-	if (length < cmd->data_length) {
++	if (scsi_status == SAM_STAT_GOOD && length < cmd->data_length) {
+ 		if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT) {
+ 			cmd->residual_count += cmd->data_length - length;
+ 		} else {
+@@ -771,12 +763,6 @@ void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int len
+ 		}
+ 
+ 		cmd->data_length = length;
+-	} else if (length > cmd->data_length) {
+-		cmd->se_cmd_flags |= SCF_OVERFLOW_BIT;
+-		cmd->residual_count = length - cmd->data_length;
+-	} else {
+-		cmd->se_cmd_flags &= ~(SCF_OVERFLOW_BIT | SCF_UNDERFLOW_BIT);
+-		cmd->residual_count = 0;
+ 	}
+ 
+ 	target_complete_cmd(cmd, scsi_status);
+@@ -1706,6 +1692,7 @@ void transport_generic_request_failure(struct se_cmd *cmd,
+ 	case TCM_LOGICAL_BLOCK_GUARD_CHECK_FAILED:
+ 	case TCM_LOGICAL_BLOCK_APP_TAG_CHECK_FAILED:
+ 	case TCM_LOGICAL_BLOCK_REF_TAG_CHECK_FAILED:
++	case TCM_COPY_TARGET_DEVICE_NOT_REACHABLE:
+ 		break;
+ 	case TCM_OUT_OF_RESOURCES:
+ 		sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+@@ -2547,8 +2534,10 @@ int target_get_sess_cmd(struct se_cmd *se_cmd, bool ack_kref)
+ 	 * fabric acknowledgement that requires two target_put_sess_cmd()
+ 	 * invocations before se_cmd descriptor release.
+ 	 */
+-	if (ack_kref)
++	if (ack_kref) {
+ 		kref_get(&se_cmd->cmd_kref);
++		se_cmd->se_cmd_flags |= SCF_ACK_KREF;
++	}
+ 
+ 	spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
+ 	if (se_sess->sess_tearing_down) {
+@@ -2871,6 +2860,12 @@ static const struct sense_info sense_info_table[] = {
+ 		.ascq = 0x03, /* LOGICAL BLOCK REFERENCE TAG CHECK FAILED */
+ 		.add_sector_info = true,
+ 	},
++	[TCM_COPY_TARGET_DEVICE_NOT_REACHABLE] = {
++		.key = COPY_ABORTED,
++		.asc = 0x0d,
++		.ascq = 0x02, /* COPY TARGET DEVICE NOT REACHABLE */
++
++	},
+ 	[TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE] = {
+ 		/*
+ 		 * Returning ILLEGAL REQUEST would cause immediate IO errors on
+diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
+index 75cd85426ae3..094a1440eacb 100644
+--- a/drivers/target/target_core_xcopy.c
++++ b/drivers/target/target_core_xcopy.c
+@@ -104,7 +104,7 @@ static int target_xcopy_locate_se_dev_e4(struct se_cmd *se_cmd, struct xcopy_op
+ 	}
+ 	mutex_unlock(&g_device_mutex);
+ 
+-	pr_err("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n");
++	pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n");
+ 	return -EINVAL;
+ }
+ 
+@@ -185,7 +185,7 @@ static int target_xcopy_parse_tiddesc_e4(struct se_cmd *se_cmd, struct xcopy_op
+ 
+ static int target_xcopy_parse_target_descriptors(struct se_cmd *se_cmd,
+ 				struct xcopy_op *xop, unsigned char *p,
+-				unsigned short tdll)
++				unsigned short tdll, sense_reason_t *sense_ret)
+ {
+ 	struct se_device *local_dev = se_cmd->se_dev;
+ 	unsigned char *desc = p;
+@@ -193,6 +193,8 @@ static int target_xcopy_parse_target_descriptors(struct se_cmd *se_cmd,
+ 	unsigned short start = 0;
+ 	bool src = true;
+ 
++	*sense_ret = TCM_INVALID_PARAMETER_LIST;
++
+ 	if (offset != 0) {
+ 		pr_err("XCOPY target descriptor list length is not"
+ 			" multiple of %d\n", XCOPY_TARGET_DESC_LEN);
+@@ -243,9 +245,16 @@ static int target_xcopy_parse_target_descriptors(struct se_cmd *se_cmd,
+ 		rc = target_xcopy_locate_se_dev_e4(se_cmd, xop, true);
+ 	else
+ 		rc = target_xcopy_locate_se_dev_e4(se_cmd, xop, false);
+-
+-	if (rc < 0)
++	/*
++	 * If a matching IEEE NAA 0x83 descriptor for the requested device
++	 * is not located on this node, return COPY_ABORTED with ASQ/ASQC
++	 * 0x0d/0x02 - COPY_TARGET_DEVICE_NOT_REACHABLE to request the
++	 * initiator to fall back to normal copy method.
++	 */
++	if (rc < 0) {
++		*sense_ret = TCM_COPY_TARGET_DEVICE_NOT_REACHABLE;
+ 		goto out;
++	}
+ 
+ 	pr_debug("XCOPY TGT desc: Source dev: %p NAA IEEE WWN: 0x%16phN\n",
+ 		 xop->src_dev, &xop->src_tid_wwn[0]);
+@@ -653,6 +662,7 @@ static int target_xcopy_read_source(
+ 	rc = target_xcopy_setup_pt_cmd(xpt_cmd, xop, src_dev, &cdb[0],
+ 				remote_port, true);
+ 	if (rc < 0) {
++		ec_cmd->scsi_status = xpt_cmd->se_cmd.scsi_status;
+ 		transport_generic_free_cmd(se_cmd, 0);
+ 		return rc;
+ 	}
+@@ -664,6 +674,7 @@ static int target_xcopy_read_source(
+ 
+ 	rc = target_xcopy_issue_pt_cmd(xpt_cmd);
+ 	if (rc < 0) {
++		ec_cmd->scsi_status = xpt_cmd->se_cmd.scsi_status;
+ 		transport_generic_free_cmd(se_cmd, 0);
+ 		return rc;
+ 	}
+@@ -714,6 +725,7 @@ static int target_xcopy_write_destination(
+ 				remote_port, false);
+ 	if (rc < 0) {
+ 		struct se_cmd *src_cmd = &xop->src_pt_cmd->se_cmd;
++		ec_cmd->scsi_status = xpt_cmd->se_cmd.scsi_status;
+ 		/*
+ 		 * If the failure happened before the t_mem_list hand-off in
+ 		 * target_xcopy_setup_pt_cmd(), Reset memory + clear flag so that
+@@ -729,6 +741,7 @@ static int target_xcopy_write_destination(
+ 
+ 	rc = target_xcopy_issue_pt_cmd(xpt_cmd);
+ 	if (rc < 0) {
++		ec_cmd->scsi_status = xpt_cmd->se_cmd.scsi_status;
+ 		se_cmd->se_cmd_flags &= ~SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC;
+ 		transport_generic_free_cmd(se_cmd, 0);
+ 		return rc;
+@@ -815,9 +828,14 @@ static void target_xcopy_do_work(struct work_struct *work)
+ out:
+ 	xcopy_pt_undepend_remotedev(xop);
+ 	kfree(xop);
+-
+-	pr_warn("target_xcopy_do_work: Setting X-COPY CHECK_CONDITION -> sending response\n");
+-	ec_cmd->scsi_status = SAM_STAT_CHECK_CONDITION;
++	/*
++	 * Don't override an error scsi status if it has already been set
++	 */
++	if (ec_cmd->scsi_status == SAM_STAT_GOOD) {
++		pr_warn_ratelimited("target_xcopy_do_work: rc: %d, Setting X-COPY"
++			" CHECK_CONDITION -> sending response\n", rc);
++		ec_cmd->scsi_status = SAM_STAT_CHECK_CONDITION;
++	}
+ 	target_complete_cmd(ec_cmd, SAM_STAT_CHECK_CONDITION);
+ }
+ 
+@@ -875,7 +893,7 @@ sense_reason_t target_do_xcopy(struct se_cmd *se_cmd)
+ 		" tdll: %hu sdll: %u inline_dl: %u\n", list_id, list_id_usage,
+ 		tdll, sdll, inline_dl);
+ 
+-	rc = target_xcopy_parse_target_descriptors(se_cmd, xop, &p[16], tdll);
++	rc = target_xcopy_parse_target_descriptors(se_cmd, xop, &p[16], tdll, &ret);
+ 	if (rc <= 0)
+ 		goto out;
+ 
+diff --git a/drivers/target/tcm_fc/tfc_cmd.c b/drivers/target/tcm_fc/tfc_cmd.c
+index 216e18cc9133..9a874a89941d 100644
+--- a/drivers/target/tcm_fc/tfc_cmd.c
++++ b/drivers/target/tcm_fc/tfc_cmd.c
+@@ -572,7 +572,7 @@ static void ft_send_work(struct work_struct *work)
+ 	if (target_submit_cmd(&cmd->se_cmd, cmd->sess->se_sess, fcp->fc_cdb,
+ 			      &cmd->ft_sense_buffer[0], scsilun_to_int(&fcp->fc_lun),
+ 			      ntohl(fcp->fc_dl), task_attr, data_dir,
+-			      TARGET_SCF_ACK_KREF))
++			      TARGET_SCF_ACK_KREF | TARGET_SCF_USE_CPUID))
+ 		goto err;
+ 
+ 	pr_debug("r_ctl %x alloc target_submit_cmd\n", fh->fh_r_ctl);
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index 924bad45c176..37a37c4d04cb 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -50,9 +50,9 @@ static int efifb_setcolreg(unsigned regno, unsigned red, unsigned green,
+ 		return 1;
+ 
+ 	if (regno < 16) {
+-		red   >>= 8;
+-		green >>= 8;
+-		blue  >>= 8;
++		red   >>= 16 - info->var.red.length;
++		green >>= 16 - info->var.green.length;
++		blue  >>= 16 - info->var.blue.length;
+ 		((u32 *)(info->pseudo_palette))[regno] =
+ 			(red   << info->var.red.offset)   |
+ 			(green << info->var.green.offset) |
+diff --git a/drivers/watchdog/mt7621_wdt.c b/drivers/watchdog/mt7621_wdt.c
+index 4a2290f900a8..d5735c12067d 100644
+--- a/drivers/watchdog/mt7621_wdt.c
++++ b/drivers/watchdog/mt7621_wdt.c
+@@ -139,7 +139,6 @@ static int mt7621_wdt_probe(struct platform_device *pdev)
+ 	if (!IS_ERR(mt7621_wdt_reset))
+ 		reset_control_deassert(mt7621_wdt_reset);
+ 
+-	mt7621_wdt_dev.dev = &pdev->dev;
+ 	mt7621_wdt_dev.bootstatus = mt7621_wdt_bootcause();
+ 
+ 	watchdog_init_timeout(&mt7621_wdt_dev, mt7621_wdt_dev.max_timeout,
+diff --git a/drivers/watchdog/rt2880_wdt.c b/drivers/watchdog/rt2880_wdt.c
+index 1967919ae743..14b4fd428fff 100644
+--- a/drivers/watchdog/rt2880_wdt.c
++++ b/drivers/watchdog/rt2880_wdt.c
+@@ -158,7 +158,6 @@ static int rt288x_wdt_probe(struct platform_device *pdev)
+ 
+ 	rt288x_wdt_freq = clk_get_rate(rt288x_wdt_clk) / RALINK_WDT_PRESCALE;
+ 
+-	rt288x_wdt_dev.dev = &pdev->dev;
+ 	rt288x_wdt_dev.bootstatus = rt288x_wdt_bootcause();
+ 	rt288x_wdt_dev.max_timeout = (0xfffful / rt288x_wdt_freq);
+ 	rt288x_wdt_dev.parent = &pdev->dev;
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 0f5375d8e030..eede975e85c0 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1272,7 +1272,8 @@ again:
+ 		statret = __ceph_do_getattr(inode, page,
+ 					    CEPH_STAT_CAP_INLINE_DATA, !!page);
+ 		if (statret < 0) {
+-			 __free_page(page);
++			if (page)
++				__free_page(page);
+ 			if (statret == -ENODATA) {
+ 				BUG_ON(retry_op != READ_INLINE);
+ 				goto again;
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index 6c58e13fed2f..3d03e48a9213 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -152,6 +152,7 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
+ 	list_for_each(tmp1, &cifs_tcp_ses_list) {
+ 		server = list_entry(tmp1, struct TCP_Server_Info,
+ 				    tcp_ses_list);
++		seq_printf(m, "\nNumber of credits: %d", server->credits);
+ 		i++;
+ 		list_for_each(tmp2, &server->smb_ses_list) {
+ 			ses = list_entry(tmp2, struct cifs_ses,
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 14ae4b8e1a3c..8c68d03a6949 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -271,7 +271,7 @@ cifs_alloc_inode(struct super_block *sb)
+ 	cifs_inode->createtime = 0;
+ 	cifs_inode->epoch = 0;
+ #ifdef CONFIG_CIFS_SMB2
+-	get_random_bytes(cifs_inode->lease_key, SMB2_LEASE_KEY_SIZE);
++	generate_random_uuid(cifs_inode->lease_key);
+ #endif
+ 	/*
+ 	 * Can not set i_flags here - they get immediately overwritten to zero
+@@ -1271,7 +1271,6 @@ init_cifs(void)
+ 	GlobalTotalActiveXid = 0;
+ 	GlobalMaxActiveXid = 0;
+ 	spin_lock_init(&cifs_tcp_ses_lock);
+-	spin_lock_init(&cifs_file_list_lock);
+ 	spin_lock_init(&GlobalMid_Lock);
+ 
+ 	get_random_bytes(&cifs_lock_secret, sizeof(cifs_lock_secret));
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 8f1d8c1e72be..65f78b7a9062 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -833,6 +833,7 @@ struct cifs_tcon {
+ 	struct list_head tcon_list;
+ 	int tc_count;
+ 	struct list_head openFileList;
++	spinlock_t open_file_lock; /* protects list above */
+ 	struct cifs_ses *ses;	/* pointer to session associated with */
+ 	char treeName[MAX_TREE_SIZE + 1]; /* UNC name of resource in ASCII */
+ 	char *nativeFileSystem;
+@@ -889,7 +890,7 @@ struct cifs_tcon {
+ #endif /* CONFIG_CIFS_STATS2 */
+ 	__u64    bytes_read;
+ 	__u64    bytes_written;
+-	spinlock_t stat_lock;
++	spinlock_t stat_lock;  /* protects the two fields above */
+ #endif /* CONFIG_CIFS_STATS */
+ 	FILE_SYSTEM_DEVICE_INFO fsDevInfo;
+ 	FILE_SYSTEM_ATTRIBUTE_INFO fsAttrInfo; /* ok if fs name truncated */
+@@ -1040,8 +1041,10 @@ struct cifs_fid_locks {
+ };
+ 
+ struct cifsFileInfo {
++	/* following two lists are protected by tcon->open_file_lock */
+ 	struct list_head tlist;	/* pointer to next fid owned by tcon */
+ 	struct list_head flist;	/* next fid (file instance) for this inode */
++	/* lock list below protected by cifsi->lock_sem */
+ 	struct cifs_fid_locks *llist;	/* brlocks held by this fid */
+ 	kuid_t uid;		/* allows finding which FileInfo structure */
+ 	__u32 pid;		/* process id who opened file */
+@@ -1049,11 +1052,12 @@ struct cifsFileInfo {
+ 	/* BB add lock scope info here if needed */ ;
+ 	/* lock scope id (0 if none) */
+ 	struct dentry *dentry;
+-	unsigned int f_flags;
+ 	struct tcon_link *tlink;
++	unsigned int f_flags;
+ 	bool invalidHandle:1;	/* file closed via session abend */
+ 	bool oplock_break_cancelled:1;
+-	int count;		/* refcount protected by cifs_file_list_lock */
++	int count;
++	spinlock_t file_info_lock; /* protects four flag/count fields above */
+ 	struct mutex fh_mutex; /* prevents reopen race after dead ses*/
+ 	struct cifs_search_info srch_inf;
+ 	struct work_struct oplock_break; /* work for oplock breaks */
+@@ -1120,7 +1124,7 @@ struct cifs_writedata {
+ 
+ /*
+  * Take a reference on the file private data. Must be called with
+- * cifs_file_list_lock held.
++ * cfile->file_info_lock held.
+  */
+ static inline void
+ cifsFileInfo_get_locked(struct cifsFileInfo *cifs_file)
+@@ -1514,8 +1518,10 @@ require use of the stronger protocol */
+  *  GlobalMid_Lock protects:
+  *	list operations on pending_mid_q and oplockQ
+  *      updates to XID counters, multiplex id  and SMB sequence numbers
+- *  cifs_file_list_lock protects:
+- *	list operations on tcp and SMB session lists and tCon lists
++ *  tcp_ses_lock protects:
++ *	list operations on tcp and SMB session lists
++ *  tcon->open_file_lock protects the list of open files hanging off the tcon
++ *  cfile->file_info_lock protects counters and fields in cifs file struct
+  *  f_owner.lock protects certain per file struct operations
+  *  mapping->page_lock protects certain per page operations
+  *
+@@ -1547,18 +1553,12 @@ GLOBAL_EXTERN struct list_head		cifs_tcp_ses_list;
+  * tcp session, and the list of tcon's per smb session. It also protects
+  * the reference counters for the server, smb session, and tcon. Finally,
+  * changes to the tcon->tidStatus should be done while holding this lock.
++ * generally the locks should be taken in order tcp_ses_lock before
++ * tcon->open_file_lock and that before file->file_info_lock since the
++ * structure order is cifs_socket-->cifs_ses-->cifs_tcon-->cifs_file
+  */
+ GLOBAL_EXTERN spinlock_t		cifs_tcp_ses_lock;
+ 
+-/*
+- * This lock protects the cifs_file->llist and cifs_file->flist
+- * list operations, and updates to some flags (cifs_file->invalidHandle)
+- * It will be moved to either use the tcon->stat_lock or equivalent later.
+- * If cifs_tcp_ses_lock and the lock below are both needed to be held, then
+- * the cifs_tcp_ses_lock must be grabbed first and released last.
+- */
+-GLOBAL_EXTERN spinlock_t	cifs_file_list_lock;
+-
+ #ifdef CONFIG_CIFS_DNOTIFY_EXPERIMENTAL /* unused temporarily */
+ /* Outstanding dir notify requests */
+ GLOBAL_EXTERN struct list_head GlobalDnotifyReqList;
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index d47197ea4ab6..78046051bbbc 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -98,13 +98,13 @@ cifs_mark_open_files_invalid(struct cifs_tcon *tcon)
+ 	struct list_head *tmp1;
+ 
+ 	/* list all files open on tree connection and mark them invalid */
+-	spin_lock(&cifs_file_list_lock);
++	spin_lock(&tcon->open_file_lock);
+ 	list_for_each_safe(tmp, tmp1, &tcon->openFileList) {
+ 		open_file = list_entry(tmp, struct cifsFileInfo, tlist);
+ 		open_file->invalidHandle = true;
+ 		open_file->oplock_break_cancelled = true;
+ 	}
+-	spin_unlock(&cifs_file_list_lock);
++	spin_unlock(&tcon->open_file_lock);
+ 	/*
+ 	 * BB Add call to invalidate_inodes(sb) for all superblocks mounted
+ 	 * to this tcon.
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 2e4f4bad8b1e..7b67179521cf 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2163,7 +2163,7 @@ cifs_get_tcp_session(struct smb_vol *volume_info)
+ 	memcpy(&tcp_ses->dstaddr, &volume_info->dstaddr,
+ 		sizeof(tcp_ses->dstaddr));
+ #ifdef CONFIG_CIFS_SMB2
+-	get_random_bytes(tcp_ses->client_guid, SMB2_CLIENT_GUID_SIZE);
++	generate_random_uuid(tcp_ses->client_guid);
+ #endif
+ 	/*
+ 	 * at this point we are the only ones with the pointer
+@@ -3688,14 +3688,16 @@ remote_path_check:
+ 			goto mount_fail_check;
+ 		}
+ 
+-		rc = cifs_are_all_path_components_accessible(server,
++		if (rc != -EREMOTE) {
++			rc = cifs_are_all_path_components_accessible(server,
+ 							     xid, tcon, cifs_sb,
+ 							     full_path);
+-		if (rc != 0) {
+-			cifs_dbg(VFS, "cannot query dirs between root and final path, "
+-				 "enabling CIFS_MOUNT_USE_PREFIX_PATH\n");
+-			cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;
+-			rc = 0;
++			if (rc != 0) {
++				cifs_dbg(VFS, "cannot query dirs between root and final path, "
++					 "enabling CIFS_MOUNT_USE_PREFIX_PATH\n");
++				cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;
++				rc = 0;
++			}
+ 		}
+ 		kfree(full_path);
+ 	}
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 579e41b350a2..605438afe7ef 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -305,6 +305,7 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ 	cfile->tlink = cifs_get_tlink(tlink);
+ 	INIT_WORK(&cfile->oplock_break, cifs_oplock_break);
+ 	mutex_init(&cfile->fh_mutex);
++	spin_lock_init(&cfile->file_info_lock);
+ 
+ 	cifs_sb_active(inode->i_sb);
+ 
+@@ -317,7 +318,7 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ 		oplock = 0;
+ 	}
+ 
+-	spin_lock(&cifs_file_list_lock);
++	spin_lock(&tcon->open_file_lock);
+ 	if (fid->pending_open->oplock != CIFS_OPLOCK_NO_CHANGE && oplock)
+ 		oplock = fid->pending_open->oplock;
+ 	list_del(&fid->pending_open->olist);
+@@ -326,12 +327,13 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ 	server->ops->set_fid(cfile, fid, oplock);
+ 
+ 	list_add(&cfile->tlist, &tcon->openFileList);
++
+ 	/* if readable file instance put first in list*/
+ 	if (file->f_mode & FMODE_READ)
+ 		list_add(&cfile->flist, &cinode->openFileList);
+ 	else
+ 		list_add_tail(&cfile->flist, &cinode->openFileList);
+-	spin_unlock(&cifs_file_list_lock);
++	spin_unlock(&tcon->open_file_lock);
+ 
+ 	if (fid->purge_cache)
+ 		cifs_zap_mapping(inode);
+@@ -343,16 +345,16 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ struct cifsFileInfo *
+ cifsFileInfo_get(struct cifsFileInfo *cifs_file)
+ {
+-	spin_lock(&cifs_file_list_lock);
++	spin_lock(&cifs_file->file_info_lock);
+ 	cifsFileInfo_get_locked(cifs_file);
+-	spin_unlock(&cifs_file_list_lock);
++	spin_unlock(&cifs_file->file_info_lock);
+ 	return cifs_file;
+ }
+ 
+ /*
+  * Release a reference on the file private data. This may involve closing
+  * the filehandle out on the server. Must be called without holding
+- * cifs_file_list_lock.
++ * tcon->open_file_lock and cifs_file->file_info_lock.
+  */
+ void cifsFileInfo_put(struct cifsFileInfo *cifs_file)
+ {
+@@ -367,11 +369,15 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file)
+ 	struct cifs_pending_open open;
+ 	bool oplock_break_cancelled;
+ 
+-	spin_lock(&cifs_file_list_lock);
++	spin_lock(&tcon->open_file_lock);
++
++	spin_lock(&cifs_file->file_info_lock);
+ 	if (--cifs_file->count > 0) {
+-		spin_unlock(&cifs_file_list_lock);
++		spin_unlock(&cifs_file->file_info_lock);
++		spin_unlock(&tcon->open_file_lock);
+ 		return;
+ 	}
++	spin_unlock(&cifs_file->file_info_lock);
+ 
+ 	if (server->ops->get_lease_key)
+ 		server->ops->get_lease_key(inode, &fid);
+@@ -395,7 +401,8 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file)
+ 			set_bit(CIFS_INO_INVALID_MAPPING, &cifsi->flags);
+ 		cifs_set_oplock_level(cifsi, 0);
+ 	}
+-	spin_unlock(&cifs_file_list_lock);
++
++	spin_unlock(&tcon->open_file_lock);
+ 
+ 	oplock_break_cancelled = cancel_work_sync(&cifs_file->oplock_break);
+ 
+@@ -772,10 +779,10 @@ int cifs_closedir(struct inode *inode, struct file *file)
+ 	server = tcon->ses->server;
+ 
+ 	cifs_dbg(FYI, "Freeing private data in close dir\n");
+-	spin_lock(&cifs_file_list_lock);
++	spin_lock(&cfile->file_info_lock);
+ 	if (server->ops->dir_needs_close(cfile)) {
+ 		cfile->invalidHandle = true;
+-		spin_unlock(&cifs_file_list_lock);
++		spin_unlock(&cfile->file_info_lock);
+ 		if (server->ops->close_dir)
+ 			rc = server->ops->close_dir(xid, tcon, &cfile->fid);
+ 		else
+@@ -784,7 +791,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
+ 		/* not much we can do if it fails anyway, ignore rc */
+ 		rc = 0;
+ 	} else
+-		spin_unlock(&cifs_file_list_lock);
++		spin_unlock(&cfile->file_info_lock);
+ 
+ 	buf = cfile->srch_inf.ntwrk_buf_start;
+ 	if (buf) {
+@@ -1728,12 +1735,13 @@ struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
+ {
+ 	struct cifsFileInfo *open_file = NULL;
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(cifs_inode->vfs_inode.i_sb);
++	struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+ 
+ 	/* only filter by fsuid on multiuser mounts */
+ 	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER))
+ 		fsuid_only = false;
+ 
+-	spin_lock(&cifs_file_list_lock);
++	spin_lock(&tcon->open_file_lock);
+ 	/* we could simply get the first_list_entry since write-only entries
+ 	   are always at the end of the list but since the first entry might
+ 	   have a close pending, we go through the whole list */
+@@ -1744,8 +1752,8 @@ struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
+ 			if (!open_file->invalidHandle) {
+ 				/* found a good file */
+ 				/* lock it so it will not be closed on us */
+-				cifsFileInfo_get_locked(open_file);
+-				spin_unlock(&cifs_file_list_lock);
++				cifsFileInfo_get(open_file);
++				spin_unlock(&tcon->open_file_lock);
+ 				return open_file;
+ 			} /* else might as well continue, and look for
+ 			     another, or simply have the caller reopen it
+@@ -1753,7 +1761,7 @@ struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
+ 		} else /* write only file */
+ 			break; /* write only files are last so must be done */
+ 	}
+-	spin_unlock(&cifs_file_list_lock);
++	spin_unlock(&tcon->open_file_lock);
+ 	return NULL;
+ }
+ 
+@@ -1762,6 +1770,7 @@ struct cifsFileInfo *find_writable_file(struct cifsInodeInfo *cifs_inode,
+ {
+ 	struct cifsFileInfo *open_file, *inv_file = NULL;
+ 	struct cifs_sb_info *cifs_sb;
++	struct cifs_tcon *tcon;
+ 	bool any_available = false;
+ 	int rc;
+ 	unsigned int refind = 0;
+@@ -1777,15 +1786,16 @@ struct cifsFileInfo *find_writable_file(struct cifsInodeInfo *cifs_inode,
+ 	}
+ 
+ 	cifs_sb = CIFS_SB(cifs_inode->vfs_inode.i_sb);
++	tcon = cifs_sb_master_tcon(cifs_sb);
+ 
+ 	/* only filter by fsuid on multiuser mounts */
+ 	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER))
+ 		fsuid_only = false;
+ 
+-	spin_lock(&cifs_file_list_lock);
++	spin_lock(&tcon->open_file_lock);
+ refind_writable:
+ 	if (refind > MAX_REOPEN_ATT) {
+-		spin_unlock(&cifs_file_list_lock);
++		spin_unlock(&tcon->open_file_lock);
+ 		return NULL;
+ 	}
+ 	list_for_each_entry(open_file, &cifs_inode->openFileList, flist) {
+@@ -1796,8 +1806,8 @@ refind_writable:
+ 		if (OPEN_FMODE(open_file->f_flags) & FMODE_WRITE) {
+ 			if (!open_file->invalidHandle) {
+ 				/* found a good writable file */
+-				cifsFileInfo_get_locked(open_file);
+-				spin_unlock(&cifs_file_list_lock);
++				cifsFileInfo_get(open_file);
++				spin_unlock(&tcon->open_file_lock);
+ 				return open_file;
+ 			} else {
+ 				if (!inv_file)
+@@ -1813,24 +1823,24 @@ refind_writable:
+ 
+ 	if (inv_file) {
+ 		any_available = false;
+-		cifsFileInfo_get_locked(inv_file);
++		cifsFileInfo_get(inv_file);
+ 	}
+ 
+-	spin_unlock(&cifs_file_list_lock);
++	spin_unlock(&tcon->open_file_lock);
+ 
+ 	if (inv_file) {
+ 		rc = cifs_reopen_file(inv_file, false);
+ 		if (!rc)
+ 			return inv_file;
+ 		else {
+-			spin_lock(&cifs_file_list_lock);
++			spin_lock(&tcon->open_file_lock);
+ 			list_move_tail(&inv_file->flist,
+ 					&cifs_inode->openFileList);
+-			spin_unlock(&cifs_file_list_lock);
++			spin_unlock(&tcon->open_file_lock);
+ 			cifsFileInfo_put(inv_file);
+-			spin_lock(&cifs_file_list_lock);
+ 			++refind;
+ 			inv_file = NULL;
++			spin_lock(&tcon->open_file_lock);
+ 			goto refind_writable;
+ 		}
+ 	}
+@@ -3618,15 +3628,17 @@ static int cifs_readpage(struct file *file, struct page *page)
+ static int is_inode_writable(struct cifsInodeInfo *cifs_inode)
+ {
+ 	struct cifsFileInfo *open_file;
++	struct cifs_tcon *tcon =
++		cifs_sb_master_tcon(CIFS_SB(cifs_inode->vfs_inode.i_sb));
+ 
+-	spin_lock(&cifs_file_list_lock);
++	spin_lock(&tcon->open_file_lock);
+ 	list_for_each_entry(open_file, &cifs_inode->openFileList, flist) {
+ 		if (OPEN_FMODE(open_file->f_flags) & FMODE_WRITE) {
+-			spin_unlock(&cifs_file_list_lock);
++			spin_unlock(&tcon->open_file_lock);
+ 			return 1;
+ 		}
+ 	}
+-	spin_unlock(&cifs_file_list_lock);
++	spin_unlock(&tcon->open_file_lock);
+ 	return 0;
+ }
+ 
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 813fe13c2ae1..c6729156f9a0 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -120,6 +120,7 @@ tconInfoAlloc(void)
+ 		++ret_buf->tc_count;
+ 		INIT_LIST_HEAD(&ret_buf->openFileList);
+ 		INIT_LIST_HEAD(&ret_buf->tcon_list);
++		spin_lock_init(&ret_buf->open_file_lock);
+ #ifdef CONFIG_CIFS_STATS
+ 		spin_lock_init(&ret_buf->stat_lock);
+ #endif
+@@ -465,7 +466,7 @@ is_valid_oplock_break(char *buffer, struct TCP_Server_Info *srv)
+ 				continue;
+ 
+ 			cifs_stats_inc(&tcon->stats.cifs_stats.num_oplock_brks);
+-			spin_lock(&cifs_file_list_lock);
++			spin_lock(&tcon->open_file_lock);
+ 			list_for_each(tmp2, &tcon->openFileList) {
+ 				netfile = list_entry(tmp2, struct cifsFileInfo,
+ 						     tlist);
+@@ -495,11 +496,11 @@ is_valid_oplock_break(char *buffer, struct TCP_Server_Info *srv)
+ 					   &netfile->oplock_break);
+ 				netfile->oplock_break_cancelled = false;
+ 
+-				spin_unlock(&cifs_file_list_lock);
++				spin_unlock(&tcon->open_file_lock);
+ 				spin_unlock(&cifs_tcp_ses_lock);
+ 				return true;
+ 			}
+-			spin_unlock(&cifs_file_list_lock);
++			spin_unlock(&tcon->open_file_lock);
+ 			spin_unlock(&cifs_tcp_ses_lock);
+ 			cifs_dbg(FYI, "No matching file for oplock break\n");
+ 			return true;
+@@ -613,9 +614,9 @@ backup_cred(struct cifs_sb_info *cifs_sb)
+ void
+ cifs_del_pending_open(struct cifs_pending_open *open)
+ {
+-	spin_lock(&cifs_file_list_lock);
++	spin_lock(&tlink_tcon(open->tlink)->open_file_lock);
+ 	list_del(&open->olist);
+-	spin_unlock(&cifs_file_list_lock);
++	spin_unlock(&tlink_tcon(open->tlink)->open_file_lock);
+ }
+ 
+ void
+@@ -635,7 +636,7 @@ void
+ cifs_add_pending_open(struct cifs_fid *fid, struct tcon_link *tlink,
+ 		      struct cifs_pending_open *open)
+ {
+-	spin_lock(&cifs_file_list_lock);
++	spin_lock(&tlink_tcon(tlink)->open_file_lock);
+ 	cifs_add_pending_open_locked(fid, tlink, open);
+-	spin_unlock(&cifs_file_list_lock);
++	spin_unlock(&tlink_tcon(open->tlink)->open_file_lock);
+ }
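
The cifs hunks above replace the single global cifs_file_list_lock with a per-tcon open_file_lock (and a per-file file_info_lock for handle state), so list walks contend only within one tree connection instead of across every mounted share. A minimal userspace sketch of the same pattern follows; it is an illustrative analogue, not CIFS code, and the tcon_demo/open_file names are hypothetical stand-ins.

/* Illustrative userspace analogue: one lock per container instead of a
 * single global lock shared by all containers. */
#include <pthread.h>
#include <stdio.h>

struct open_file { struct open_file *next; int fd; };

struct tcon_demo {                        /* stand-in for struct cifs_tcon  */
	pthread_mutex_t open_file_lock;   /* protects this tcon's list only */
	struct open_file *open_files;
};

static void add_open_file(struct tcon_demo *tcon, struct open_file *f)
{
	pthread_mutex_lock(&tcon->open_file_lock);
	f->next = tcon->open_files;       /* list mutation under the per-tcon lock */
	tcon->open_files = f;
	pthread_mutex_unlock(&tcon->open_file_lock);
}

int main(void)
{
	struct tcon_demo tcon = { .open_file_lock = PTHREAD_MUTEX_INITIALIZER,
				  .open_files = NULL };
	struct open_file f = { .next = NULL, .fd = 3 };

	add_open_file(&tcon, &f);
	printf("first fd on list: %d\n", tcon.open_files->fd);
	return 0;
}
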
+diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c
+index 65cf85dcda09..8f6a2a5863b9 100644
+--- a/fs/cifs/readdir.c
++++ b/fs/cifs/readdir.c
+@@ -597,14 +597,14 @@ find_cifs_entry(const unsigned int xid, struct cifs_tcon *tcon, loff_t pos,
+ 	     is_dir_changed(file)) || (index_to_find < first_entry_in_buffer)) {
+ 		/* close and restart search */
+ 		cifs_dbg(FYI, "search backing up - close and restart search\n");
+-		spin_lock(&cifs_file_list_lock);
++		spin_lock(&cfile->file_info_lock);
+ 		if (server->ops->dir_needs_close(cfile)) {
+ 			cfile->invalidHandle = true;
+-			spin_unlock(&cifs_file_list_lock);
++			spin_unlock(&cfile->file_info_lock);
+ 			if (server->ops->close_dir)
+ 				server->ops->close_dir(xid, tcon, &cfile->fid);
+ 		} else
+-			spin_unlock(&cifs_file_list_lock);
++			spin_unlock(&cfile->file_info_lock);
+ 		if (cfile->srch_inf.ntwrk_buf_start) {
+ 			cifs_dbg(FYI, "freeing SMB ff cache buf on search rewind\n");
+ 			if (cfile->srch_inf.smallBuf)
+diff --git a/fs/cifs/smb2glob.h b/fs/cifs/smb2glob.h
+index 0ffa18094335..238759c146ba 100644
+--- a/fs/cifs/smb2glob.h
++++ b/fs/cifs/smb2glob.h
+@@ -61,4 +61,14 @@
+ /* Maximum buffer size value we can send with 1 credit */
+ #define SMB2_MAX_BUFFER_SIZE 65536
+ 
++/*
++ * Maximum number of credits to keep available.
++ * This value is chosen somewhat arbitrarily. The Windows client
++ * defaults to 128 credits, the Windows server allows clients up to
++ * 512 credits, and the NetApp server does not limit clients at all.
++ * Choose a high enough value such that the client shouldn't limit
++ * performance.
++ */
++#define SMB2_MAX_CREDITS_AVAILABLE 32000
++
+ #endif	/* _SMB2_GLOB_H */
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index 4f0231e685a9..1238cd3552f9 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -266,9 +266,15 @@ smb2_set_file_info(struct inode *inode, const char *full_path,
+ 	struct tcon_link *tlink;
+ 	int rc;
+ 
++	if ((buf->CreationTime == 0) && (buf->LastAccessTime == 0) &&
++	    (buf->LastWriteTime == 0) && (buf->ChangeTime) &&
++	    (buf->Attributes == 0))
++		return 0; /* would be a no op, no sense sending this */
++
+ 	tlink = cifs_sb_tlink(cifs_sb);
+ 	if (IS_ERR(tlink))
+ 		return PTR_ERR(tlink);
++
+ 	rc = smb2_open_op_close(xid, tlink_tcon(tlink), cifs_sb, full_path,
+ 				FILE_WRITE_ATTRIBUTES, FILE_OPEN, 0, buf,
+ 				SMB2_OP_SET_INFO);
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 389fb9f8c84e..3d383489b9cf 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -549,19 +549,19 @@ smb2_is_valid_lease_break(char *buffer)
+ 		list_for_each(tmp1, &server->smb_ses_list) {
+ 			ses = list_entry(tmp1, struct cifs_ses, smb_ses_list);
+ 
+-			spin_lock(&cifs_file_list_lock);
+ 			list_for_each(tmp2, &ses->tcon_list) {
+ 				tcon = list_entry(tmp2, struct cifs_tcon,
+ 						  tcon_list);
++				spin_lock(&tcon->open_file_lock);
+ 				cifs_stats_inc(
+ 				    &tcon->stats.cifs_stats.num_oplock_brks);
+ 				if (smb2_tcon_has_lease(tcon, rsp, lw)) {
+-					spin_unlock(&cifs_file_list_lock);
++					spin_unlock(&tcon->open_file_lock);
+ 					spin_unlock(&cifs_tcp_ses_lock);
+ 					return true;
+ 				}
++				spin_unlock(&tcon->open_file_lock);
+ 			}
+-			spin_unlock(&cifs_file_list_lock);
+ 		}
+ 	}
+ 	spin_unlock(&cifs_tcp_ses_lock);
+@@ -603,7 +603,7 @@ smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
+ 			tcon = list_entry(tmp1, struct cifs_tcon, tcon_list);
+ 
+ 			cifs_stats_inc(&tcon->stats.cifs_stats.num_oplock_brks);
+-			spin_lock(&cifs_file_list_lock);
++			spin_lock(&tcon->open_file_lock);
+ 			list_for_each(tmp2, &tcon->openFileList) {
+ 				cfile = list_entry(tmp2, struct cifsFileInfo,
+ 						     tlist);
+@@ -615,7 +615,7 @@ smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
+ 
+ 				cifs_dbg(FYI, "file id match, oplock break\n");
+ 				cinode = CIFS_I(d_inode(cfile->dentry));
+-
++				spin_lock(&cfile->file_info_lock);
+ 				if (!CIFS_CACHE_WRITE(cinode) &&
+ 				    rsp->OplockLevel == SMB2_OPLOCK_LEVEL_NONE)
+ 					cfile->oplock_break_cancelled = true;
+@@ -637,14 +637,14 @@ smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
+ 					clear_bit(
+ 					   CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
+ 					   &cinode->flags);
+-
++				spin_unlock(&cfile->file_info_lock);
+ 				queue_work(cifsiod_wq, &cfile->oplock_break);
+ 
+-				spin_unlock(&cifs_file_list_lock);
++				spin_unlock(&tcon->open_file_lock);
+ 				spin_unlock(&cifs_tcp_ses_lock);
+ 				return true;
+ 			}
+-			spin_unlock(&cifs_file_list_lock);
++			spin_unlock(&tcon->open_file_lock);
+ 			spin_unlock(&cifs_tcp_ses_lock);
+ 			cifs_dbg(FYI, "No matching file for oplock break\n");
+ 			return true;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index d203c0329626..0e73cefca65e 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -287,7 +287,7 @@ SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon)
+ 		cifs_dbg(FYI, "Link Speed %lld\n",
+ 			le64_to_cpu(out_buf->LinkSpeed));
+ 	}
+-
++	kfree(out_buf);
+ 	return rc;
+ }
+ #endif /* STATS2 */
+@@ -541,6 +541,7 @@ smb2_set_fid(struct cifsFileInfo *cfile, struct cifs_fid *fid, __u32 oplock)
+ 	server->ops->set_oplock_level(cinode, oplock, fid->epoch,
+ 				      &fid->purge_cache);
+ 	cinode->can_cache_brlcks = CIFS_CACHE_WRITE(cinode);
++	memcpy(cfile->fid.create_guid, fid->create_guid, 16);
+ }
+ 
+ static void
+@@ -699,6 +700,7 @@ smb2_clone_range(const unsigned int xid,
+ 
+ cchunk_out:
+ 	kfree(pcchunk);
++	kfree(retbuf);
+ 	return rc;
+ }
+ 
+@@ -823,7 +825,6 @@ smb2_duplicate_extents(const unsigned int xid,
+ {
+ 	int rc;
+ 	unsigned int ret_data_len;
+-	char *retbuf = NULL;
+ 	struct duplicate_extents_to_file dup_ext_buf;
+ 	struct cifs_tcon *tcon = tlink_tcon(trgtfile->tlink);
+ 
+@@ -849,7 +850,7 @@ smb2_duplicate_extents(const unsigned int xid,
+ 			FSCTL_DUPLICATE_EXTENTS_TO_FILE,
+ 			true /* is_fsctl */, (char *)&dup_ext_buf,
+ 			sizeof(struct duplicate_extents_to_file),
+-			(char **)&retbuf,
++			NULL,
+ 			&ret_data_len);
+ 
+ 	if (ret_data_len > 0)
+@@ -872,7 +873,6 @@ smb3_set_integrity(const unsigned int xid, struct cifs_tcon *tcon,
+ 		   struct cifsFileInfo *cfile)
+ {
+ 	struct fsctl_set_integrity_information_req integr_info;
+-	char *retbuf = NULL;
+ 	unsigned int ret_data_len;
+ 
+ 	integr_info.ChecksumAlgorithm = cpu_to_le16(CHECKSUM_TYPE_UNCHANGED);
+@@ -884,7 +884,7 @@ smb3_set_integrity(const unsigned int xid, struct cifs_tcon *tcon,
+ 			FSCTL_SET_INTEGRITY_INFORMATION,
+ 			true /* is_fsctl */, (char *)&integr_info,
+ 			sizeof(struct fsctl_set_integrity_information_req),
+-			(char **)&retbuf,
++			NULL,
+ 			&ret_data_len);
+ 
+ }
+@@ -1041,7 +1041,7 @@ smb2_set_lease_key(struct inode *inode, struct cifs_fid *fid)
+ static void
+ smb2_new_lease_key(struct cifs_fid *fid)
+ {
+-	get_random_bytes(fid->lease_key, SMB2_LEASE_KEY_SIZE);
++	generate_random_uuid(fid->lease_key);
+ }
+ 
+ #define SMB2_SYMLINK_STRUCT_SIZE \
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 29e06db5f187..3eec96ca87d9 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -100,7 +100,21 @@ smb2_hdr_assemble(struct smb2_hdr *hdr, __le16 smb2_cmd /* command */ ,
+ 	hdr->ProtocolId = SMB2_PROTO_NUMBER;
+ 	hdr->StructureSize = cpu_to_le16(64);
+ 	hdr->Command = smb2_cmd;
+-	hdr->CreditRequest = cpu_to_le16(2); /* BB make this dynamic */
++	if (tcon && tcon->ses && tcon->ses->server) {
++		struct TCP_Server_Info *server = tcon->ses->server;
++
++		spin_lock(&server->req_lock);
++		/* Request up to 2 credits but don't go over the limit. */
++		if (server->credits >= SMB2_MAX_CREDITS_AVAILABLE)
++			hdr->CreditRequest = cpu_to_le16(0);
++		else
++			hdr->CreditRequest = cpu_to_le16(
++				min_t(int, SMB2_MAX_CREDITS_AVAILABLE -
++						server->credits, 2));
++		spin_unlock(&server->req_lock);
++	} else {
++		hdr->CreditRequest = cpu_to_le16(2);
++	}
+ 	hdr->ProcessId = cpu_to_le32((__u16)current->tgid);
+ 
+ 	if (!tcon)
+@@ -590,6 +604,7 @@ SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses,
+ 	char *security_blob = NULL;
+ 	unsigned char *ntlmssp_blob = NULL;
+ 	bool use_spnego = false; /* else use raw ntlmssp */
++	u64 previous_session = ses->Suid;
+ 
+ 	cifs_dbg(FYI, "Session Setup\n");
+ 
+@@ -627,6 +642,10 @@ ssetup_ntlmssp_authenticate:
+ 		return rc;
+ 
+ 	req->hdr.SessionId = 0; /* First session, not a reauthenticate */
++
++	/* if reconnect, we need to send previous sess id, otherwise it is 0 */
++	req->PreviousSessionId = previous_session;
++
+ 	req->Flags = 0; /* MBZ */
+ 	/* to enable echos and oplocks */
+ 	req->hdr.CreditRequest = cpu_to_le16(3);
+@@ -1164,7 +1183,7 @@ create_durable_v2_buf(struct cifs_fid *pfid)
+ 
+ 	buf->dcontext.Timeout = 0; /* Should this be configurable by workload */
+ 	buf->dcontext.Flags = cpu_to_le32(SMB2_DHANDLE_FLAG_PERSISTENT);
+-	get_random_bytes(buf->dcontext.CreateGuid, 16);
++	generate_random_uuid(buf->dcontext.CreateGuid);
+ 	memcpy(pfid->create_guid, buf->dcontext.CreateGuid, 16);
+ 
+ 	/* SMB2_CREATE_DURABLE_HANDLE_REQUEST is "DH2Q" */
+@@ -2057,6 +2076,7 @@ smb2_async_readv(struct cifs_readdata *rdata)
+ 	if (rdata->credits) {
+ 		buf->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes,
+ 						SMB2_MAX_BUFFER_SIZE));
++		buf->CreditRequest = buf->CreditCharge;
+ 		spin_lock(&server->req_lock);
+ 		server->credits += rdata->credits -
+ 						le16_to_cpu(buf->CreditCharge);
+@@ -2243,6 +2263,7 @@ smb2_async_writev(struct cifs_writedata *wdata,
+ 	if (wdata->credits) {
+ 		req->hdr.CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->bytes,
+ 						    SMB2_MAX_BUFFER_SIZE));
++		req->hdr.CreditRequest = req->hdr.CreditCharge;
+ 		spin_lock(&server->req_lock);
+ 		server->credits += wdata->credits -
+ 					le16_to_cpu(req->hdr.CreditCharge);
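
The smb2_hdr_assemble hunk earlier in this file's diff requests up to two extra credits per request but clamps the request so the running total never exceeds SMB2_MAX_CREDITS_AVAILABLE (32000, defined in the smb2glob.h hunk above). A small standalone sketch of that clamp, assuming nothing beyond the arithmetic shown in the patch:

/* Illustrative sketch of the credit-request clamp (not the kernel code). */
#include <stdio.h>

#define MAX_CREDITS_AVAILABLE 32000   /* mirrors SMB2_MAX_CREDITS_AVAILABLE */

/* Ask for up to 'want' more credits, but never push the total past the cap. */
static int credits_to_request(int current_credits, int want)
{
	if (current_credits >= MAX_CREDITS_AVAILABLE)
		return 0;
	if (MAX_CREDITS_AVAILABLE - current_credits < want)
		return MAX_CREDITS_AVAILABLE - current_credits;
	return want;
}

int main(void)
{
	printf("%d\n", credits_to_request(100, 2));     /* 2 */
	printf("%d\n", credits_to_request(31999, 2));   /* 1 */
	printf("%d\n", credits_to_request(32000, 2));   /* 0 */
	return 0;
}
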
+diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h
+index ff88d9feb01e..fd3709e8de33 100644
+--- a/fs/cifs/smb2pdu.h
++++ b/fs/cifs/smb2pdu.h
+@@ -276,7 +276,7 @@ struct smb2_sess_setup_req {
+ 	__le32 Channel;
+ 	__le16 SecurityBufferOffset;
+ 	__le16 SecurityBufferLength;
+-	__le64 PreviousSessionId;
++	__u64 PreviousSessionId;
+ 	__u8   Buffer[1];	/* variable length GSS security buffer */
+ } __packed;
+ 
+diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
+index c502c116924c..55d64fba1e87 100644
+--- a/fs/crypto/crypto.c
++++ b/fs/crypto/crypto.c
+@@ -152,7 +152,10 @@ static int do_page_crypto(struct inode *inode,
+ 			struct page *src_page, struct page *dest_page,
+ 			gfp_t gfp_flags)
+ {
+-	u8 xts_tweak[FS_XTS_TWEAK_SIZE];
++	struct {
++		__le64 index;
++		u8 padding[FS_XTS_TWEAK_SIZE - sizeof(__le64)];
++	} xts_tweak;
+ 	struct skcipher_request *req = NULL;
+ 	DECLARE_FS_COMPLETION_RESULT(ecr);
+ 	struct scatterlist dst, src;
+@@ -172,17 +175,15 @@ static int do_page_crypto(struct inode *inode,
+ 		req, CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+ 		fscrypt_complete, &ecr);
+ 
+-	BUILD_BUG_ON(FS_XTS_TWEAK_SIZE < sizeof(index));
+-	memcpy(xts_tweak, &index, sizeof(index));
+-	memset(&xts_tweak[sizeof(index)], 0,
+-			FS_XTS_TWEAK_SIZE - sizeof(index));
++	BUILD_BUG_ON(sizeof(xts_tweak) != FS_XTS_TWEAK_SIZE);
++	xts_tweak.index = cpu_to_le64(index);
++	memset(xts_tweak.padding, 0, sizeof(xts_tweak.padding));
+ 
+ 	sg_init_table(&dst, 1);
+ 	sg_set_page(&dst, dest_page, PAGE_SIZE, 0);
+ 	sg_init_table(&src, 1);
+ 	sg_set_page(&src, src_page, PAGE_SIZE, 0);
+-	skcipher_request_set_crypt(req, &src, &dst, PAGE_SIZE,
+-					xts_tweak);
++	skcipher_request_set_crypt(req, &src, &dst, PAGE_SIZE, &xts_tweak);
+ 	if (rw == FS_DECRYPT)
+ 		res = crypto_skcipher_decrypt(req);
+ 	else
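
The fs/crypto hunk above replaces an open-coded memcpy of the page index into a byte array with a struct whose first member is a little-endian 64-bit index followed by explicit zero padding, build-asserted to be exactly FS_XTS_TWEAK_SIZE bytes. A standalone sketch of the same layout trick, using plain uint64_t and assuming a little-endian host where the kernel would use cpu_to_le64():

/* Illustrative sketch (not the fscrypt code) of a fixed-size, zero-padded tweak. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TWEAK_SIZE 16   /* stand-in for FS_XTS_TWEAK_SIZE */

struct xts_tweak_demo {
	uint64_t index;                               /* __le64 in the kernel */
	uint8_t  padding[TWEAK_SIZE - sizeof(uint64_t)];
};

static void build_tweak(struct xts_tweak_demo *t, uint64_t page_index)
{
	t->index = page_index;                        /* cpu_to_le64() in-kernel */
	memset(t->padding, 0, sizeof(t->padding));
}

int main(void)
{
	struct xts_tweak_demo t;

	_Static_assert(sizeof(struct xts_tweak_demo) == TWEAK_SIZE,
		       "tweak must be exactly TWEAK_SIZE bytes");
	build_tweak(&t, 42);
	printf("tweak index %llu, %zu padding bytes\n",
	       (unsigned long long)t.index, sizeof(t.padding));
	return 0;
}
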
+diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
+index ed115acb5dee..6865663aac69 100644
+--- a/fs/crypto/policy.c
++++ b/fs/crypto/policy.c
+@@ -109,6 +109,8 @@ int fscrypt_process_policy(struct file *filp,
+ 	if (ret)
+ 		return ret;
+ 
++	inode_lock(inode);
++
+ 	if (!inode_has_encryption_context(inode)) {
+ 		if (!S_ISDIR(inode->i_mode))
+ 			ret = -EINVAL;
+@@ -127,6 +129,8 @@ int fscrypt_process_policy(struct file *filp,
+ 		ret = -EINVAL;
+ 	}
+ 
++	inode_unlock(inode);
++
+ 	mnt_drop_write_file(filp);
+ 	return ret;
+ }
+diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
+index 73bcfd41f5f2..42145be5c6b4 100644
+--- a/fs/ext4/sysfs.c
++++ b/fs/ext4/sysfs.c
+@@ -223,14 +223,18 @@ static struct attribute *ext4_attrs[] = {
+ EXT4_ATTR_FEATURE(lazy_itable_init);
+ EXT4_ATTR_FEATURE(batched_discard);
+ EXT4_ATTR_FEATURE(meta_bg_resize);
++#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ EXT4_ATTR_FEATURE(encryption);
++#endif
+ EXT4_ATTR_FEATURE(metadata_csum_seed);
+ 
+ static struct attribute *ext4_feat_attrs[] = {
+ 	ATTR_LIST(lazy_itable_init),
+ 	ATTR_LIST(batched_discard),
+ 	ATTR_LIST(meta_bg_resize),
++#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ 	ATTR_LIST(encryption),
++#endif
+ 	ATTR_LIST(metadata_csum_seed),
+ 	NULL,
+ };
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index ad0c745ebad7..871c8b392099 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -687,6 +687,11 @@ static int isofs_fill_super(struct super_block *s, void *data, int silent)
+ 	pri_bh = NULL;
+ 
+ root_found:
++	/* We don't support read-write mounts */
++	if (!(s->s_flags & MS_RDONLY)) {
++		error = -EACCES;
++		goto out_freebh;
++	}
+ 
+ 	if (joliet_level && (pri == NULL || !opt.rock)) {
+ 		/* This is the case of Joliet with the norock mount flag.
+@@ -1501,9 +1506,6 @@ struct inode *__isofs_iget(struct super_block *sb,
+ static struct dentry *isofs_mount(struct file_system_type *fs_type,
+ 	int flags, const char *dev_name, void *data)
+ {
+-	/* We don't support read-write mounts */
+-	if (!(flags & MS_RDONLY))
+-		return ERR_PTR(-EACCES);
+ 	return mount_bdev(fs_type, flags, dev_name, data, isofs_fill_super);
+ }
+ 
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 3d8246a9faa4..e1652665bd93 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1149,6 +1149,7 @@ int jbd2_journal_get_create_access(handle_t *handle, struct buffer_head *bh)
+ 		JBUFFER_TRACE(jh, "file as BJ_Reserved");
+ 		spin_lock(&journal->j_list_lock);
+ 		__jbd2_journal_file_buffer(jh, transaction, BJ_Reserved);
++		spin_unlock(&journal->j_list_lock);
+ 	} else if (jh->b_transaction == journal->j_committing_transaction) {
+ 		/* first access by this transaction */
+ 		jh->b_modified = 0;
+@@ -1156,8 +1157,8 @@ int jbd2_journal_get_create_access(handle_t *handle, struct buffer_head *bh)
+ 		JBUFFER_TRACE(jh, "set next transaction");
+ 		spin_lock(&journal->j_list_lock);
+ 		jh->b_next_transaction = transaction;
++		spin_unlock(&journal->j_list_lock);
+ 	}
+-	spin_unlock(&journal->j_list_lock);
+ 	jbd_unlock_bh_state(bh);
+ 
+ 	/*
+diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
+index 217847679f0e..2905479f214a 100644
+--- a/fs/nfs/blocklayout/blocklayout.c
++++ b/fs/nfs/blocklayout/blocklayout.c
+@@ -344,9 +344,10 @@ static void bl_write_cleanup(struct work_struct *work)
+ 		u64 start = hdr->args.offset & (loff_t)PAGE_MASK;
+ 		u64 end = (hdr->args.offset + hdr->args.count +
+ 			PAGE_SIZE - 1) & (loff_t)PAGE_MASK;
++		u64 lwb = hdr->args.offset + hdr->args.count;
+ 
+ 		ext_tree_mark_written(bl, start >> SECTOR_SHIFT,
+-					(end - start) >> SECTOR_SHIFT, end);
++					(end - start) >> SECTOR_SHIFT, lwb);
+ 	}
+ 
+ 	pnfs_ld_write_done(hdr);
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 322c2585bc34..b9c65421ed81 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -41,6 +41,17 @@ void nfs_mark_delegation_referenced(struct nfs_delegation *delegation)
+ 	set_bit(NFS_DELEGATION_REFERENCED, &delegation->flags);
+ }
+ 
++static bool
++nfs4_is_valid_delegation(const struct nfs_delegation *delegation,
++		fmode_t flags)
++{
++	if (delegation != NULL && (delegation->type & flags) == flags &&
++	    !test_bit(NFS_DELEGATION_REVOKED, &delegation->flags) &&
++	    !test_bit(NFS_DELEGATION_RETURNING, &delegation->flags))
++		return true;
++	return false;
++}
++
+ static int
+ nfs4_do_check_delegation(struct inode *inode, fmode_t flags, bool mark)
+ {
+@@ -50,8 +61,7 @@ nfs4_do_check_delegation(struct inode *inode, fmode_t flags, bool mark)
+ 	flags &= FMODE_READ|FMODE_WRITE;
+ 	rcu_read_lock();
+ 	delegation = rcu_dereference(NFS_I(inode)->delegation);
+-	if (delegation != NULL && (delegation->type & flags) == flags &&
+-	    !test_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) {
++	if (nfs4_is_valid_delegation(delegation, flags)) {
+ 		if (mark)
+ 			nfs_mark_delegation_referenced(delegation);
+ 		ret = 1;
+@@ -893,7 +903,7 @@ bool nfs4_copy_delegation_stateid(struct inode *inode, fmode_t flags,
+ 	flags &= FMODE_READ|FMODE_WRITE;
+ 	rcu_read_lock();
+ 	delegation = rcu_dereference(nfsi->delegation);
+-	ret = (delegation != NULL && (delegation->type & flags) == flags);
++	ret = nfs4_is_valid_delegation(delegation, flags);
+ 	if (ret) {
+ 		nfs4_stateid_copy(dst, &delegation->stateid);
+ 		nfs_mark_delegation_referenced(delegation);
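
The delegation hunk above factors the "is this delegation usable" test into nfs4_is_valid_delegation(), which additionally rejects delegations with the REVOKED flag set. A standalone sketch of the predicate, with flag values that are illustrative stand-ins rather than the NFS client's real definitions:

/* Illustrative sketch of a shared validity predicate (not NFS code). */
#include <stdbool.h>
#include <stdio.h>

#define DELEG_READ      0x1
#define DELEG_WRITE     0x2
#define DELEG_REVOKED   0x4
#define DELEG_RETURNING 0x8

struct delegation_demo { unsigned int type; unsigned int flags; };

static bool delegation_is_valid(const struct delegation_demo *d, unsigned int want)
{
	return d && (d->type & want) == want &&
	       !(d->flags & DELEG_REVOKED) &&
	       !(d->flags & DELEG_RETURNING);
}

int main(void)
{
	struct delegation_demo d = { .type = DELEG_READ | DELEG_WRITE,
				     .flags = DELEG_REVOKED };

	/* a revoked delegation must not be used even if the mode matches */
	printf("%s\n", delegation_is_valid(&d, DELEG_READ) ? "valid" : "invalid");
	return 0;
}
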
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 177fefb26c18..6bc5a68e39f1 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -435,11 +435,11 @@ int nfs_same_file(struct dentry *dentry, struct nfs_entry *entry)
+ 		return 0;
+ 
+ 	nfsi = NFS_I(inode);
+-	if (entry->fattr->fileid == nfsi->fileid)
+-		return 1;
+-	if (nfs_compare_fh(entry->fh, &nfsi->fh) == 0)
+-		return 1;
+-	return 0;
++	if (entry->fattr->fileid != nfsi->fileid)
++		return 0;
++	if (entry->fh->size && nfs_compare_fh(entry->fh, &nfsi->fh) != 0)
++		return 0;
++	return 1;
+ }
+ 
+ static
+@@ -517,6 +517,8 @@ again:
+ 					&entry->fattr->fsid))
+ 			goto out;
+ 		if (nfs_same_file(dentry, entry)) {
++			if (!entry->fh->size)
++				goto out;
+ 			nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
+ 			status = nfs_refresh_inode(d_inode(dentry), entry->fattr);
+ 			if (!status)
+@@ -529,6 +531,10 @@ again:
+ 			goto again;
+ 		}
+ 	}
++	if (!entry->fh->size) {
++		d_lookup_done(dentry);
++		goto out;
++	}
+ 
+ 	inode = nfs_fhget(dentry->d_sb, entry->fh, entry->fattr, entry->label);
+ 	alias = d_splice_alias(inode, dentry);
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 64b43b4ad9dd..608501971fe0 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -443,6 +443,7 @@ int nfs42_proc_layoutstats_generic(struct nfs_server *server,
+ 	task = rpc_run_task(&task_setup);
+ 	if (IS_ERR(task))
+ 		return PTR_ERR(task);
++	rpc_put_task(task);
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index cada00aa5096..8353f33f0466 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1498,6 +1498,9 @@ restart:
+ 					__func__, status);
+ 			case -ENOENT:
+ 			case -ENOMEM:
++			case -EACCES:
++			case -EROFS:
++			case -EIO:
+ 			case -ESTALE:
+ 				/* Open state on this file cannot be recovered */
+ 				nfs4_state_mark_recovery_failed(state, status);
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 45007acaf364..a2b65fc56dd6 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -366,14 +366,21 @@ static struct notifier_block nfsd_inet6addr_notifier = {
+ };
+ #endif
+ 
++/* Only used under nfsd_mutex, so this atomic may be overkill: */
++static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0);
++
+ static void nfsd_last_thread(struct svc_serv *serv, struct net *net)
+ {
+ 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ 
+-	unregister_inetaddr_notifier(&nfsd_inetaddr_notifier);
++	/* check if the notifier still has clients */
++	if (atomic_dec_return(&nfsd_notifier_refcount) == 0) {
++		unregister_inetaddr_notifier(&nfsd_inetaddr_notifier);
+ #if IS_ENABLED(CONFIG_IPV6)
+-	unregister_inet6addr_notifier(&nfsd_inet6addr_notifier);
++		unregister_inet6addr_notifier(&nfsd_inet6addr_notifier);
+ #endif
++	}
++
+ 	/*
+ 	 * write_ports can create the server without actually starting
+ 	 * any threads--if we get shut down before any threads are
+@@ -488,10 +495,13 @@ int nfsd_create_serv(struct net *net)
+ 	}
+ 
+ 	set_max_drc();
+-	register_inetaddr_notifier(&nfsd_inetaddr_notifier);
++	/* check if the notifier is already set */
++	if (atomic_inc_return(&nfsd_notifier_refcount) == 1) {
++		register_inetaddr_notifier(&nfsd_inetaddr_notifier);
+ #if IS_ENABLED(CONFIG_IPV6)
+-	register_inet6addr_notifier(&nfsd_inet6addr_notifier);
++		register_inet6addr_notifier(&nfsd_inet6addr_notifier);
+ #endif
++	}
+ 	do_gettimeofday(&nn->nfssvc_boot);		/* record boot time */
+ 	return 0;
+ }
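
The nfsd hunks above register the inetaddr/inet6addr notifiers only for the first server instance and unregister them only when the last one goes away, tracked by nfsd_notifier_refcount under nfsd_mutex. A minimal userspace analogue of register-on-first / unregister-on-last, with plain counters standing in for the kernel's atomic_t and notifier calls:

/* Illustrative sketch (not nfsd code) of refcounted shared registration. */
#include <stdio.h>

static int notifier_refcount;
static int registered;

static void get_notifier(void)
{
	if (++notifier_refcount == 1) {   /* first user registers */
		registered = 1;
		puts("registered");
	}
}

static void put_notifier(void)
{
	if (--notifier_refcount == 0) {   /* last user unregisters */
		registered = 0;
		puts("unregistered");
	}
}

int main(void)
{
	get_notifier();   /* first net namespace */
	get_notifier();   /* second namespace: no double registration */
	put_notifier();
	put_notifier();   /* only now is the notifier torn down */
	return registered == 0 ? 0 : 1;
}
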
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 43fdc2765aea..abadbc30e013 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -57,6 +57,7 @@ int ovl_copy_xattr(struct dentry *old, struct dentry *new)
+ 	ssize_t list_size, size, value_size = 0;
+ 	char *buf, *name, *value = NULL;
+ 	int uninitialized_var(error);
++	size_t slen;
+ 
+ 	if (!old->d_inode->i_op->getxattr ||
+ 	    !new->d_inode->i_op->getxattr)
+@@ -79,7 +80,16 @@ int ovl_copy_xattr(struct dentry *old, struct dentry *new)
+ 		goto out;
+ 	}
+ 
+-	for (name = buf; name < (buf + list_size); name += strlen(name) + 1) {
++	for (name = buf; list_size; name += slen) {
++		slen = strnlen(name, list_size) + 1;
++
++		/* underlying fs providing us with an broken xattr list? */
++		if (WARN_ON(slen > list_size)) {
++			error = -EIO;
++			break;
++		}
++		list_size -= slen;
++
+ 		if (ovl_is_private_xattr(name))
+ 			continue;
+ retry:
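
The ovl_copy_xattr hunk above walks the listxattr buffer with strnlen() bounded by the remaining list size, so a lower filesystem that hands back an unterminated final name is caught instead of walking past the buffer. A standalone sketch of that bounded walk:

/* Illustrative sketch (not overlayfs code) of walking NUL-separated names
 * without trusting that the last entry is terminated. */
#include <stdio.h>
#include <string.h>

static int walk_names(const char *buf, size_t list_size)
{
	const char *name;
	size_t slen;

	for (name = buf; list_size; name += slen) {
		slen = strnlen(name, list_size) + 1;
		if (slen > list_size)      /* unterminated final entry */
			return -1;
		list_size -= slen;
		printf("name: %s\n", name);
	}
	return 0;
}

int main(void)
{
	const char list[] = "user.one\0user.two";   /* literal supplies final NUL */

	return walk_names(list, sizeof(list)) ? 1 : 0;
}
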
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 1560fdc09a5f..74e696426aae 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -14,6 +14,7 @@
+ #include <linux/cred.h>
+ #include <linux/posix_acl.h>
+ #include <linux/posix_acl_xattr.h>
++#include <linux/atomic.h>
+ #include "overlayfs.h"
+ 
+ void ovl_cleanup(struct inode *wdir, struct dentry *wdentry)
+@@ -37,8 +38,10 @@ struct dentry *ovl_lookup_temp(struct dentry *workdir, struct dentry *dentry)
+ {
+ 	struct dentry *temp;
+ 	char name[20];
++	static atomic_t temp_id = ATOMIC_INIT(0);
+ 
+-	snprintf(name, sizeof(name), "#%lx", (unsigned long) dentry);
++	/* counter is allowed to wrap, since temp dentries are ephemeral */
++	snprintf(name, sizeof(name), "#%x", atomic_inc_return(&temp_id));
+ 
+ 	temp = lookup_one_len(name, workdir, strlen(name));
+ 	if (!IS_ERR(temp) && temp->d_inode) {
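
The ovl_lookup_temp hunk above names workdir temporaries from a wrapping atomic counter instead of the dentry pointer, so kernel addresses are no longer exposed in the name. A standalone sketch of the same idea using C11 atomics in place of the kernel's atomic_inc_return():

/* Illustrative sketch (not overlayfs code): ephemeral names from a counter. */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned int temp_id;

static void make_temp_name(char *buf, size_t len)
{
	/* wrap-around is harmless because the names are ephemeral */
	snprintf(buf, len, "#%x", atomic_fetch_add(&temp_id, 1) + 1);
}

int main(void)
{
	char name[20];

	make_temp_name(name, sizeof(name));
	printf("%s\n", name);   /* "#1" */
	make_temp_name(name, sizeof(name));
	printf("%s\n", name);   /* "#2" */
	return 0;
}
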
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index 7a034d62cf8c..2340262a7e97 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -377,13 +377,14 @@ static void ramoops_free_przs(struct ramoops_context *cxt)
+ {
+ 	int i;
+ 
+-	cxt->max_dump_cnt = 0;
+ 	if (!cxt->przs)
+ 		return;
+ 
+-	for (i = 0; !IS_ERR_OR_NULL(cxt->przs[i]); i++)
++	for (i = 0; i < cxt->max_dump_cnt; i++)
+ 		persistent_ram_free(cxt->przs[i]);
++
+ 	kfree(cxt->przs);
++	cxt->max_dump_cnt = 0;
+ }
+ 
+ static int ramoops_init_przs(struct device *dev, struct ramoops_context *cxt,
+@@ -408,7 +409,7 @@ static int ramoops_init_przs(struct device *dev, struct ramoops_context *cxt,
+ 			     GFP_KERNEL);
+ 	if (!cxt->przs) {
+ 		dev_err(dev, "failed to initialize a prz array for dumps\n");
+-		goto fail_prz;
++		goto fail_mem;
+ 	}
+ 
+ 	for (i = 0; i < cxt->max_dump_cnt; i++) {
+@@ -419,6 +420,11 @@ static int ramoops_init_przs(struct device *dev, struct ramoops_context *cxt,
+ 			err = PTR_ERR(cxt->przs[i]);
+ 			dev_err(dev, "failed to request mem region (0x%zx@0x%llx): %d\n",
+ 				cxt->record_size, (unsigned long long)*paddr, err);
++
++			while (i > 0) {
++				i--;
++				persistent_ram_free(cxt->przs[i]);
++			}
+ 			goto fail_prz;
+ 		}
+ 		*paddr += cxt->record_size;
+@@ -426,7 +432,9 @@ static int ramoops_init_przs(struct device *dev, struct ramoops_context *cxt,
+ 
+ 	return 0;
+ fail_prz:
+-	ramoops_free_przs(cxt);
++	kfree(cxt->przs);
++fail_mem:
++	cxt->max_dump_cnt = 0;
+ 	return err;
+ }
+ 
+@@ -659,7 +667,6 @@ static int ramoops_remove(struct platform_device *pdev)
+ 	struct ramoops_context *cxt = &oops_cxt;
+ 
+ 	pstore_unregister(&cxt->pstore);
+-	cxt->max_dump_cnt = 0;
+ 
+ 	kfree(cxt->pstore.buf);
+ 	cxt->pstore.bufsize = 0;
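
The ramoops_init_przs hunk above changes the error path so that on a failed zone allocation only the zones already set up are freed, newest first, rather than relying on the full teardown helper. A standalone sketch of that partial-allocation rollback pattern, using malloc/free in place of persistent_ram zones:

/* Illustrative sketch (not pstore code) of unwinding a partially built array. */
#include <stdio.h>
#include <stdlib.h>

#define COUNT 4
#define FAIL_AT 2   /* pretend the third allocation fails */

int main(void)
{
	void *slots[COUNT] = { NULL };
	int i;

	for (i = 0; i < COUNT; i++) {
		slots[i] = (i == FAIL_AT) ? NULL : malloc(64);
		if (!slots[i]) {
			/* free only what was successfully set up, newest first */
			while (i > 0) {
				i--;
				free(slots[i]);
			}
			fprintf(stderr, "allocation failed, rolled back\n");
			return 1;
		}
	}
	for (i = 0; i < COUNT; i++)
		free(slots[i]);
	return 0;
}
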
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 76c3f80efdfa..364d2dffe5a6 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -47,43 +47,10 @@ static inline size_t buffer_start(struct persistent_ram_zone *prz)
+ 	return atomic_read(&prz->buffer->start);
+ }
+ 
+-/* increase and wrap the start pointer, returning the old value */
+-static size_t buffer_start_add_atomic(struct persistent_ram_zone *prz, size_t a)
+-{
+-	int old;
+-	int new;
+-
+-	do {
+-		old = atomic_read(&prz->buffer->start);
+-		new = old + a;
+-		while (unlikely(new >= prz->buffer_size))
+-			new -= prz->buffer_size;
+-	} while (atomic_cmpxchg(&prz->buffer->start, old, new) != old);
+-
+-	return old;
+-}
+-
+-/* increase the size counter until it hits the max size */
+-static void buffer_size_add_atomic(struct persistent_ram_zone *prz, size_t a)
+-{
+-	size_t old;
+-	size_t new;
+-
+-	if (atomic_read(&prz->buffer->size) == prz->buffer_size)
+-		return;
+-
+-	do {
+-		old = atomic_read(&prz->buffer->size);
+-		new = old + a;
+-		if (new > prz->buffer_size)
+-			new = prz->buffer_size;
+-	} while (atomic_cmpxchg(&prz->buffer->size, old, new) != old);
+-}
+-
+ static DEFINE_RAW_SPINLOCK(buffer_lock);
+ 
+ /* increase and wrap the start pointer, returning the old value */
+-static size_t buffer_start_add_locked(struct persistent_ram_zone *prz, size_t a)
++static size_t buffer_start_add(struct persistent_ram_zone *prz, size_t a)
+ {
+ 	int old;
+ 	int new;
+@@ -103,7 +70,7 @@ static size_t buffer_start_add_locked(struct persistent_ram_zone *prz, size_t a)
+ }
+ 
+ /* increase the size counter until it hits the max size */
+-static void buffer_size_add_locked(struct persistent_ram_zone *prz, size_t a)
++static void buffer_size_add(struct persistent_ram_zone *prz, size_t a)
+ {
+ 	size_t old;
+ 	size_t new;
+@@ -124,9 +91,6 @@ exit:
+ 	raw_spin_unlock_irqrestore(&buffer_lock, flags);
+ }
+ 
+-static size_t (*buffer_start_add)(struct persistent_ram_zone *, size_t) = buffer_start_add_atomic;
+-static void (*buffer_size_add)(struct persistent_ram_zone *, size_t) = buffer_size_add_atomic;
+-
+ static void notrace persistent_ram_encode_rs8(struct persistent_ram_zone *prz,
+ 	uint8_t *data, size_t len, uint8_t *ecc)
+ {
+@@ -299,7 +263,7 @@ static void notrace persistent_ram_update(struct persistent_ram_zone *prz,
+ 	const void *s, unsigned int start, unsigned int count)
+ {
+ 	struct persistent_ram_buffer *buffer = prz->buffer;
+-	memcpy(buffer->data + start, s, count);
++	memcpy_toio(buffer->data + start, s, count);
+ 	persistent_ram_update_ecc(prz, start, count);
+ }
+ 
+@@ -322,8 +286,8 @@ void persistent_ram_save_old(struct persistent_ram_zone *prz)
+ 	}
+ 
+ 	prz->old_log_size = size;
+-	memcpy(prz->old_log, &buffer->data[start], size - start);
+-	memcpy(prz->old_log + size - start, &buffer->data[0], start);
++	memcpy_fromio(prz->old_log, &buffer->data[start], size - start);
++	memcpy_fromio(prz->old_log + size - start, &buffer->data[0], start);
+ }
+ 
+ int notrace persistent_ram_write(struct persistent_ram_zone *prz,
+@@ -426,9 +390,6 @@ static void *persistent_ram_iomap(phys_addr_t start, size_t size,
+ 		return NULL;
+ 	}
+ 
+-	buffer_start_add = buffer_start_add_locked;
+-	buffer_size_add = buffer_size_add_locked;
+-
+ 	if (memtype)
+ 		va = ioremap(start, size);
+ 	else
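
The ram_core.c hunks above drop the cmpxchg-based variants and keep only the locked buffer_start_add()/buffer_size_add(), taken under buffer_lock, alongside the switch to memcpy_toio/memcpy_fromio for the mapped buffer. A userspace sketch of the locked start-pointer advance with wrap-around, using a mutex where the kernel uses a raw spinlock with IRQs disabled:

/* Illustrative sketch (not pstore code) of advancing and wrapping a
 * ring-buffer start offset under a lock, returning the old value. */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

static pthread_mutex_t buffer_lock = PTHREAD_MUTEX_INITIALIZER;

struct prz_demo {
	size_t start;
	size_t buffer_size;
};

static size_t buffer_start_add(struct prz_demo *prz, size_t a)
{
	size_t old, new;

	pthread_mutex_lock(&buffer_lock);
	old = prz->start;
	new = old + a;
	while (new >= prz->buffer_size)   /* wrap, even if a exceeds the size */
		new -= prz->buffer_size;
	prz->start = new;
	pthread_mutex_unlock(&buffer_lock);
	return old;
}

int main(void)
{
	struct prz_demo prz = { .start = 900, .buffer_size = 1024 };
	size_t old = buffer_start_add(&prz, 200);

	printf("old=%zu new=%zu\n", old, prz.start);   /* old=900 new=76 */
	return 0;
}
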
+diff --git a/fs/super.c b/fs/super.c
+index c2ff475c1711..47d11e0462d0 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -1379,8 +1379,8 @@ int freeze_super(struct super_block *sb)
+ 		}
+ 	}
+ 	/*
+-	 * This is just for debugging purposes so that fs can warn if it
+-	 * sees write activity when frozen is set to SB_FREEZE_COMPLETE.
++	 * For debugging purposes so that fs can warn if it sees write activity
++	 * when frozen is set to SB_FREEZE_COMPLETE, and for thaw_super().
+ 	 */
+ 	sb->s_writers.frozen = SB_FREEZE_COMPLETE;
+ 	up_write(&sb->s_umount);
+@@ -1399,7 +1399,7 @@ int thaw_super(struct super_block *sb)
+ 	int error;
+ 
+ 	down_write(&sb->s_umount);
+-	if (sb->s_writers.frozen == SB_UNFROZEN) {
++	if (sb->s_writers.frozen != SB_FREEZE_COMPLETE) {
+ 		up_write(&sb->s_umount);
+ 		return -EINVAL;
+ 	}
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 11a004114eba..c9ee6f6efa07 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -172,6 +172,7 @@ out_cancel:
+ 	host_ui->xattr_cnt -= 1;
+ 	host_ui->xattr_size -= CALC_DENT_SIZE(nm->len);
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
++	host_ui->xattr_names -= nm->len;
+ 	mutex_unlock(&host_ui->ui_mutex);
+ out_free:
+ 	make_bad_inode(inode);
+@@ -476,6 +477,7 @@ out_cancel:
+ 	host_ui->xattr_cnt += 1;
+ 	host_ui->xattr_size += CALC_DENT_SIZE(nm->len);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(ui->data_len);
++	host_ui->xattr_names += nm->len;
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	ubifs_release_budget(c, &req);
+ 	make_bad_inode(inode);
+diff --git a/include/dt-bindings/clock/imx6qdl-clock.h b/include/dt-bindings/clock/imx6qdl-clock.h
+index 29050337d9d5..da59fd9cdb5e 100644
+--- a/include/dt-bindings/clock/imx6qdl-clock.h
++++ b/include/dt-bindings/clock/imx6qdl-clock.h
+@@ -269,6 +269,8 @@
+ #define IMX6QDL_CLK_PRG0_APB			256
+ #define IMX6QDL_CLK_PRG1_APB			257
+ #define IMX6QDL_CLK_PRE_AXI			258
+-#define IMX6QDL_CLK_END				259
++#define IMX6QDL_CLK_MLB_SEL			259
++#define IMX6QDL_CLK_MLB_PODF			260
++#define IMX6QDL_CLK_END				261
+ 
+ #endif /* __DT_BINDINGS_CLOCK_IMX6QDL_H */
+diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
+index 631ba33bbe9f..32dc0cbd51ca 100644
+--- a/include/linux/cpufreq.h
++++ b/include/linux/cpufreq.h
+@@ -639,19 +639,19 @@ static inline int cpufreq_table_find_index_al(struct cpufreq_policy *policy,
+ 					      unsigned int target_freq)
+ {
+ 	struct cpufreq_frequency_table *table = policy->freq_table;
++	struct cpufreq_frequency_table *pos, *best = table - 1;
+ 	unsigned int freq;
+-	int i, best = -1;
+ 
+-	for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
+-		freq = table[i].frequency;
++	cpufreq_for_each_valid_entry(pos, table) {
++		freq = pos->frequency;
+ 
+ 		if (freq >= target_freq)
+-			return i;
++			return pos - table;
+ 
+-		best = i;
++		best = pos;
+ 	}
+ 
+-	return best;
++	return best - table;
+ }
+ 
+ /* Find lowest freq at or above target in a table in descending order */
+@@ -659,28 +659,28 @@ static inline int cpufreq_table_find_index_dl(struct cpufreq_policy *policy,
+ 					      unsigned int target_freq)
+ {
+ 	struct cpufreq_frequency_table *table = policy->freq_table;
++	struct cpufreq_frequency_table *pos, *best = table - 1;
+ 	unsigned int freq;
+-	int i, best = -1;
+ 
+-	for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
+-		freq = table[i].frequency;
++	cpufreq_for_each_valid_entry(pos, table) {
++		freq = pos->frequency;
+ 
+ 		if (freq == target_freq)
+-			return i;
++			return pos - table;
+ 
+ 		if (freq > target_freq) {
+-			best = i;
++			best = pos;
+ 			continue;
+ 		}
+ 
+ 		/* No freq found above target_freq */
+-		if (best == -1)
+-			return i;
++		if (best == table - 1)
++			return pos - table;
+ 
+-		return best;
++		return best - table;
+ 	}
+ 
+-	return best;
++	return best - table;
+ }
+ 
+ /* Works only on sorted freq-tables */
+@@ -700,28 +700,28 @@ static inline int cpufreq_table_find_index_ah(struct cpufreq_policy *policy,
+ 					      unsigned int target_freq)
+ {
+ 	struct cpufreq_frequency_table *table = policy->freq_table;
++	struct cpufreq_frequency_table *pos, *best = table - 1;
+ 	unsigned int freq;
+-	int i, best = -1;
+ 
+-	for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
+-		freq = table[i].frequency;
++	cpufreq_for_each_valid_entry(pos, table) {
++		freq = pos->frequency;
+ 
+ 		if (freq == target_freq)
+-			return i;
++			return pos - table;
+ 
+ 		if (freq < target_freq) {
+-			best = i;
++			best = pos;
+ 			continue;
+ 		}
+ 
+ 		/* No freq found below target_freq */
+-		if (best == -1)
+-			return i;
++		if (best == table - 1)
++			return pos - table;
+ 
+-		return best;
++		return best - table;
+ 	}
+ 
+-	return best;
++	return best - table;
+ }
+ 
+ /* Find highest freq at or below target in a table in descending order */
+@@ -729,19 +729,19 @@ static inline int cpufreq_table_find_index_dh(struct cpufreq_policy *policy,
+ 					      unsigned int target_freq)
+ {
+ 	struct cpufreq_frequency_table *table = policy->freq_table;
++	struct cpufreq_frequency_table *pos, *best = table - 1;
+ 	unsigned int freq;
+-	int i, best = -1;
+ 
+-	for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
+-		freq = table[i].frequency;
++	cpufreq_for_each_valid_entry(pos, table) {
++		freq = pos->frequency;
+ 
+ 		if (freq <= target_freq)
+-			return i;
++			return pos - table;
+ 
+-		best = i;
++		best = pos;
+ 	}
+ 
+-	return best;
++	return best - table;
+ }
+ 
+ /* Works only on sorted freq-tables */
+@@ -761,32 +761,32 @@ static inline int cpufreq_table_find_index_ac(struct cpufreq_policy *policy,
+ 					      unsigned int target_freq)
+ {
+ 	struct cpufreq_frequency_table *table = policy->freq_table;
++	struct cpufreq_frequency_table *pos, *best = table - 1;
+ 	unsigned int freq;
+-	int i, best = -1;
+ 
+-	for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
+-		freq = table[i].frequency;
++	cpufreq_for_each_valid_entry(pos, table) {
++		freq = pos->frequency;
+ 
+ 		if (freq == target_freq)
+-			return i;
++			return pos - table;
+ 
+ 		if (freq < target_freq) {
+-			best = i;
++			best = pos;
+ 			continue;
+ 		}
+ 
+ 		/* No freq found below target_freq */
+-		if (best == -1)
+-			return i;
++		if (best == table - 1)
++			return pos - table;
+ 
+ 		/* Choose the closest freq */
+-		if (target_freq - table[best].frequency > freq - target_freq)
+-			return i;
++		if (target_freq - best->frequency > freq - target_freq)
++			return pos - table;
+ 
+-		return best;
++		return best - table;
+ 	}
+ 
+-	return best;
++	return best - table;
+ }
+ 
+ /* Find closest freq to target in a table in descending order */
+@@ -794,32 +794,32 @@ static inline int cpufreq_table_find_index_dc(struct cpufreq_policy *policy,
+ 					      unsigned int target_freq)
+ {
+ 	struct cpufreq_frequency_table *table = policy->freq_table;
++	struct cpufreq_frequency_table *pos, *best = table - 1;
+ 	unsigned int freq;
+-	int i, best = -1;
+ 
+-	for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
+-		freq = table[i].frequency;
++	cpufreq_for_each_valid_entry(pos, table) {
++		freq = pos->frequency;
+ 
+ 		if (freq == target_freq)
+-			return i;
++			return pos - table;
+ 
+ 		if (freq > target_freq) {
+-			best = i;
++			best = pos;
+ 			continue;
+ 		}
+ 
+ 		/* No freq found above target_freq */
+-		if (best == -1)
+-			return i;
++		if (best == table - 1)
++			return pos - table;
+ 
+ 		/* Choose the closest freq */
+-		if (table[best].frequency - target_freq > target_freq - freq)
+-			return i;
++		if (best->frequency - target_freq > target_freq - freq)
++			return pos - table;
+ 
+-		return best;
++		return best - table;
+ 	}
+ 
+-	return best;
++	return best - table;
+ }
+ 
+ /* Works only on sorted freq-tables */
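
The cpufreq.h hunks above rewrite the table-lookup helpers to walk the frequency table with a pos/best pointer pair via cpufreq_for_each_valid_entry(), so entries marked invalid are skipped, and to derive the returned index from pointer arithmetic; the kernel uses table - 1 as its not-found sentinel. A standalone sketch of the ascending "lowest frequency at or above target" case, using NULL as the sentinel to keep the sketch strictly standard C:

/* Illustrative sketch (not the kernel header) of the pointer-based lookup. */
#include <stdio.h>

#define TABLE_END     (~0u)
#define ENTRY_INVALID (~0u - 1)

struct freq_entry { unsigned int frequency; };

/* lowest frequency at or above target; table sorted ascending */
static int find_index_al(const struct freq_entry *table, unsigned int target)
{
	const struct freq_entry *pos, *best = NULL;

	for (pos = table; pos->frequency != TABLE_END; pos++) {
		if (pos->frequency == ENTRY_INVALID)
			continue;                 /* hole in the table, skip it */
		if (pos->frequency >= target)
			return (int)(pos - table);
		best = pos;
	}
	return best ? (int)(best - table) : -1;
}

int main(void)
{
	struct freq_entry t[] = {
		{ 800000 }, { ENTRY_INVALID }, { 1200000 }, { TABLE_END },
	};

	printf("%d\n", find_index_al(t, 1000000));   /* 2: the 1.2 GHz entry */
	printf("%d\n", find_index_al(t, 2000000));   /* 2: highest valid entry */
	return 0;
}
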
+diff --git a/include/linux/devfreq-event.h b/include/linux/devfreq-event.h
+index 0a83a1e648b0..4db00b02ca3f 100644
+--- a/include/linux/devfreq-event.h
++++ b/include/linux/devfreq-event.h
+@@ -148,11 +148,6 @@ static inline int devfreq_event_reset_event(struct devfreq_event_dev *edev)
+ 	return -EINVAL;
+ }
+ 
+-static inline void *devfreq_event_get_drvdata(struct devfreq_event_dev *edev)
+-{
+-	return ERR_PTR(-EINVAL);
+-}
+-
+ static inline struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(
+ 					struct device *dev, int index)
+ {
+diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
+index 99ac022edc60..3a8610ea6ab7 100644
+--- a/include/linux/irqchip/arm-gic-v3.h
++++ b/include/linux/irqchip/arm-gic-v3.h
+@@ -290,7 +290,7 @@
+ #define GITS_BASER_TYPE_SHIFT			(56)
+ #define GITS_BASER_TYPE(r)		(((r) >> GITS_BASER_TYPE_SHIFT) & 7)
+ #define GITS_BASER_ENTRY_SIZE_SHIFT		(48)
+-#define GITS_BASER_ENTRY_SIZE(r)	((((r) >> GITS_BASER_ENTRY_SIZE_SHIFT) & 0xff) + 1)
++#define GITS_BASER_ENTRY_SIZE(r)	((((r) >> GITS_BASER_ENTRY_SIZE_SHIFT) & 0x1f) + 1)
+ #define GITS_BASER_SHAREABILITY_SHIFT	(10)
+ #define GITS_BASER_InnerShareable					\
+ 	GIC_BASER_SHAREABILITY(GITS_BASER, InnerShareable)
+diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
+index fb8e3b6febdf..c2119008990a 100644
+--- a/include/target/target_core_base.h
++++ b/include/target/target_core_base.h
+@@ -177,6 +177,7 @@ enum tcm_sense_reason_table {
+ 	TCM_LOGICAL_BLOCK_GUARD_CHECK_FAILED	= R(0x15),
+ 	TCM_LOGICAL_BLOCK_APP_TAG_CHECK_FAILED	= R(0x16),
+ 	TCM_LOGICAL_BLOCK_REF_TAG_CHECK_FAILED	= R(0x17),
++	TCM_COPY_TARGET_DEVICE_NOT_REACHABLE	= R(0x18),
+ #undef R
+ };
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 039de34f1521..8b3610c871f2 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -456,17 +456,23 @@ static inline int entity_before(struct sched_entity *a,
+ 
+ static void update_min_vruntime(struct cfs_rq *cfs_rq)
+ {
++	struct sched_entity *curr = cfs_rq->curr;
++
+ 	u64 vruntime = cfs_rq->min_vruntime;
+ 
+-	if (cfs_rq->curr)
+-		vruntime = cfs_rq->curr->vruntime;
++	if (curr) {
++		if (curr->on_rq)
++			vruntime = curr->vruntime;
++		else
++			curr = NULL;
++	}
+ 
+ 	if (cfs_rq->rb_leftmost) {
+ 		struct sched_entity *se = rb_entry(cfs_rq->rb_leftmost,
+ 						   struct sched_entity,
+ 						   run_node);
+ 
+-		if (!cfs_rq->curr)
++		if (!curr)
+ 			vruntime = se->vruntime;
+ 		else
+ 			vruntime = min_vruntime(vruntime, se->vruntime);
+@@ -680,7 +686,14 @@ void init_entity_runnable_average(struct sched_entity *se)
+ 	 * will definitely be update (after enqueue).
+ 	 */
+ 	sa->period_contrib = 1023;
+-	sa->load_avg = scale_load_down(se->load.weight);
++	/*
++	 * Tasks are intialized with full load to be seen as heavy tasks until
++	 * they get a chance to stabilize to their real load level.
++	 * Group entities are intialized with zero load to reflect the fact that
++	 * nothing has been attached to the task group yet.
++	 */
++	if (entity_is_task(se))
++		sa->load_avg = scale_load_down(se->load.weight);
+ 	sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
+ 	/*
+ 	 * At this point, util_avg won't be used in select_task_rq_fair anyway
+@@ -3459,9 +3472,10 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+ 	account_entity_dequeue(cfs_rq, se);
+ 
+ 	/*
+-	 * Normalize the entity after updating the min_vruntime because the
+-	 * update can refer to the ->curr item and we need to reflect this
+-	 * movement in our normalized position.
++	 * Normalize after update_curr(); which will also have moved
++	 * min_vruntime if @se is the one holding it back. But before doing
++	 * update_min_vruntime() again, which will discount @se's position and
++	 * can move min_vruntime forward still more.
+ 	 */
+ 	if (!(flags & DEQUEUE_SLEEP))
+ 		se->vruntime -= cfs_rq->min_vruntime;
+@@ -3469,8 +3483,16 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+ 	/* return excess runtime on last dequeue */
+ 	return_cfs_rq_runtime(cfs_rq);
+ 
+-	update_min_vruntime(cfs_rq);
+ 	update_cfs_shares(cfs_rq);
++
++	/*
++	 * Now advance min_vruntime if @se was the entity holding it back,
++	 * except when: DEQUEUE_SAVE && !DEQUEUE_MOVE, in this case we'll be
++	 * put back on, and if we advance min_vruntime, we'll be placed back
++	 * further than we started -- ie. we'll be penalized.
++	 */
++	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) == DEQUEUE_SAVE)
++		update_min_vruntime(cfs_rq);
+ }
+ 
+ /*
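
The update_min_vruntime hunk above only lets the current entity anchor the minimum when it is still on the runqueue; a dequeued curr is treated as absent so the leftmost queued entity decides. A simplified, standalone illustration of just that selection step (not the scheduler code, and omitting the surrounding monotonicity clamp):

/* Illustrative sketch of the candidate-vruntime selection change. */
#include <stdint.h>
#include <stdio.h>

struct ent { uint64_t vruntime; int on_rq; };

static uint64_t candidate_vruntime(uint64_t min_vruntime,
				   const struct ent *curr,
				   const struct ent *leftmost)
{
	uint64_t v = min_vruntime;

	if (curr && !curr->on_rq)
		curr = NULL;              /* an entity being dequeued no longer counts */
	if (curr)
		v = curr->vruntime;
	if (leftmost) {
		if (!curr || leftmost->vruntime < v)
			v = leftmost->vruntime;
	}
	return v;
}

int main(void)
{
	struct ent curr = { .vruntime = 50, .on_rq = 0 };
	struct ent left = { .vruntime = 120, .on_rq = 1 };

	/* curr was dequeued, so the leftmost queued entity decides: prints 120 */
	printf("%llu\n", (unsigned long long)candidate_vruntime(100, &curr, &left));
	return 0;
}
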
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index bf168838a029..e72581da9648 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -473,7 +473,16 @@ static int xs_nospace(struct rpc_task *task)
+ 	spin_unlock_bh(&xprt->transport_lock);
+ 
+ 	/* Race breaker in case memory is freed before above code is called */
+-	sk->sk_write_space(sk);
++	if (ret == -EAGAIN) {
++		struct socket_wq *wq;
++
++		rcu_read_lock();
++		wq = rcu_dereference(sk->sk_wq);
++		set_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags);
++		rcu_read_unlock();
++
++		sk->sk_write_space(sk);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/sound/pci/hda/dell_wmi_helper.c b/sound/pci/hda/dell_wmi_helper.c
+index 9c22f95838ef..19d41da79f93 100644
+--- a/sound/pci/hda/dell_wmi_helper.c
++++ b/sound/pci/hda/dell_wmi_helper.c
+@@ -49,7 +49,7 @@ static void alc_fixup_dell_wmi(struct hda_codec *codec,
+ 		removefunc = true;
+ 		if (dell_led_set_func(DELL_LED_MICMUTE, false) >= 0) {
+ 			dell_led_value = 0;
+-			if (spec->gen.num_adc_nids > 1)
++			if (spec->gen.num_adc_nids > 1 && !spec->gen.dyn_adc_switch)
+ 				codec_dbg(codec, "Skipping micmute LED control due to several ADCs");
+ 			else {
+ 				dell_old_cap_hook = spec->gen.cap_sync_hook;
+diff --git a/sound/pci/hda/thinkpad_helper.c b/sound/pci/hda/thinkpad_helper.c
+index f0955fd7a2e7..6a23302297c9 100644
+--- a/sound/pci/hda/thinkpad_helper.c
++++ b/sound/pci/hda/thinkpad_helper.c
+@@ -62,7 +62,7 @@ static void hda_fixup_thinkpad_acpi(struct hda_codec *codec,
+ 			removefunc = false;
+ 		}
+ 		if (led_set_func(TPACPI_LED_MICMUTE, false) >= 0) {
+-			if (spec->num_adc_nids > 1)
++			if (spec->num_adc_nids > 1 && !spec->dyn_adc_switch)
+ 				codec_dbg(codec,
+ 					  "Skipping micmute LED control due to several ADCs");
+ 			else {
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index 8ff6c6a61291..c9c8dc330116 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -89,6 +89,7 @@ struct intel_pt_decoder {
+ 	bool pge;
+ 	bool have_tma;
+ 	bool have_cyc;
++	bool fixup_last_mtc;
+ 	uint64_t pos;
+ 	uint64_t last_ip;
+ 	uint64_t ip;
+@@ -584,10 +585,31 @@ struct intel_pt_calc_cyc_to_tsc_info {
+ 	uint64_t        tsc_timestamp;
+ 	uint64_t        timestamp;
+ 	bool            have_tma;
++	bool            fixup_last_mtc;
+ 	bool            from_mtc;
+ 	double          cbr_cyc_to_tsc;
+ };
+ 
++/*
++ * MTC provides a 8-bit slice of CTC but the TMA packet only provides the lower
++ * 16 bits of CTC. If mtc_shift > 8 then some of the MTC bits are not in the CTC
++ * provided by the TMA packet. Fix-up the last_mtc calculated from the TMA
++ * packet by copying the missing bits from the current MTC assuming the least
++ * difference between the two, and that the current MTC comes after last_mtc.
++ */
++static void intel_pt_fixup_last_mtc(uint32_t mtc, int mtc_shift,
++				    uint32_t *last_mtc)
++{
++	uint32_t first_missing_bit = 1U << (16 - mtc_shift);
++	uint32_t mask = ~(first_missing_bit - 1);
++
++	*last_mtc |= mtc & mask;
++	if (*last_mtc >= mtc) {
++		*last_mtc -= first_missing_bit;
++		*last_mtc &= 0xff;
++	}
++}
++
+ static int intel_pt_calc_cyc_cb(struct intel_pt_pkt_info *pkt_info)
+ {
+ 	struct intel_pt_decoder *decoder = pkt_info->decoder;
+@@ -617,6 +639,11 @@ static int intel_pt_calc_cyc_cb(struct intel_pt_pkt_info *pkt_info)
+ 			return 0;
+ 
+ 		mtc = pkt_info->packet.payload;
++		if (decoder->mtc_shift > 8 && data->fixup_last_mtc) {
++			data->fixup_last_mtc = false;
++			intel_pt_fixup_last_mtc(mtc, decoder->mtc_shift,
++						&data->last_mtc);
++		}
+ 		if (mtc > data->last_mtc)
+ 			mtc_delta = mtc - data->last_mtc;
+ 		else
+@@ -685,6 +712,7 @@ static int intel_pt_calc_cyc_cb(struct intel_pt_pkt_info *pkt_info)
+ 
+ 		data->ctc_delta = 0;
+ 		data->have_tma = true;
++		data->fixup_last_mtc = true;
+ 
+ 		return 0;
+ 
+@@ -751,6 +779,7 @@ static void intel_pt_calc_cyc_to_tsc(struct intel_pt_decoder *decoder,
+ 		.tsc_timestamp  = decoder->tsc_timestamp,
+ 		.timestamp      = decoder->timestamp,
+ 		.have_tma       = decoder->have_tma,
++		.fixup_last_mtc = decoder->fixup_last_mtc,
+ 		.from_mtc       = from_mtc,
+ 		.cbr_cyc_to_tsc = 0,
+ 	};
+@@ -1241,6 +1270,7 @@ static void intel_pt_calc_tma(struct intel_pt_decoder *decoder)
+ 	}
+ 	decoder->ctc_delta = 0;
+ 	decoder->have_tma = true;
++	decoder->fixup_last_mtc = true;
+ 	intel_pt_log("CTC timestamp " x64_fmt " last MTC %#x  CTC rem %#x\n",
+ 		     decoder->ctc_timestamp, decoder->last_mtc, ctc_rem);
+ }
+@@ -1255,6 +1285,12 @@ static void intel_pt_calc_mtc_timestamp(struct intel_pt_decoder *decoder)
+ 
+ 	mtc = decoder->packet.payload;
+ 
++	if (decoder->mtc_shift > 8 && decoder->fixup_last_mtc) {
++		decoder->fixup_last_mtc = false;
++		intel_pt_fixup_last_mtc(mtc, decoder->mtc_shift,
++					&decoder->last_mtc);
++	}
++
+ 	if (mtc > decoder->last_mtc)
+ 		mtc_delta = mtc - decoder->last_mtc;
+ 	else
+@@ -1323,6 +1359,8 @@ static void intel_pt_calc_cyc_timestamp(struct intel_pt_decoder *decoder)
+ 			     timestamp, decoder->timestamp);
+ 	else
+ 		decoder->timestamp = timestamp;
++
++	decoder->timestamp_insn_cnt = 0;
+ }
+ 
+ /* Walk PSB+ packets when already in sync. */
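
The intel-pt hunks above add intel_pt_fixup_last_mtc(), which, when mtc_shift > 8, copies the MTC bits that the 16-bit TMA CTC value cannot carry from the current MTC into last_mtc, assuming the current MTC comes after it. A standalone version of that bit fix-up with a small worked example:

/* Illustrative standalone copy of the fix-up shown above. */
#include <stdint.h>
#include <stdio.h>

static void fixup_last_mtc(uint32_t mtc, int mtc_shift, uint32_t *last_mtc)
{
	uint32_t first_missing_bit = 1U << (16 - mtc_shift);
	uint32_t mask = ~(first_missing_bit - 1);

	*last_mtc |= mtc & mask;          /* borrow the high bits from the new MTC */
	if (*last_mtc >= mtc) {           /* keep last_mtc strictly before mtc */
		*last_mtc -= first_missing_bit;
		*last_mtc &= 0xff;
	}
}

int main(void)
{
	uint32_t last_mtc = 0x05;         /* only the low bits came from TMA */

	fixup_last_mtc(0x47, 10, &last_mtc);
	printf("last_mtc = %#x\n", last_mtc);   /* 0x45: high bits copied from mtc */
	return 0;
}
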
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 551ff6f640be..b2878d2b2d67 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -241,7 +241,7 @@ static int intel_pt_get_trace(struct intel_pt_buffer *b, void *data)
+ 	}
+ 
+ 	queue = &ptq->pt->queues.queue_array[ptq->queue_nr];
+-
++next:
+ 	buffer = auxtrace_buffer__next(queue, buffer);
+ 	if (!buffer) {
+ 		if (old_buffer)
+@@ -264,9 +264,6 @@ static int intel_pt_get_trace(struct intel_pt_buffer *b, void *data)
+ 	    intel_pt_do_fix_overlap(ptq->pt, old_buffer, buffer))
+ 		return -ENOMEM;
+ 
+-	if (old_buffer)
+-		auxtrace_buffer__drop_data(old_buffer);
+-
+ 	if (buffer->use_data) {
+ 		b->len = buffer->use_size;
+ 		b->buf = buffer->use_data;
+@@ -276,6 +273,16 @@ static int intel_pt_get_trace(struct intel_pt_buffer *b, void *data)
+ 	}
+ 	b->ref_timestamp = buffer->reference;
+ 
++	/*
++	 * If in snapshot mode and the buffer has no usable data, get next
++	 * buffer and again check overlap against old_buffer.
++	 */
++	if (ptq->pt->snapshot_mode && !b->len)
++		goto next;
++
++	if (old_buffer)
++		auxtrace_buffer__drop_data(old_buffer);
++
+ 	if (!old_buffer || ptq->pt->sampling_mode || (ptq->pt->snapshot_mode &&
+ 						      !buffer->consecutive)) {
+ 		b->consecutive = false;
+diff --git a/tools/spi/spidev_test.c b/tools/spi/spidev_test.c
+index 8a73d8185316..f3825b676e38 100644
+--- a/tools/spi/spidev_test.c
++++ b/tools/spi/spidev_test.c
+@@ -284,7 +284,7 @@ static void parse_opts(int argc, char *argv[])
+ 
+ static void transfer_escaped_string(int fd, char *str)
+ {
+-	size_t size = strlen(str + 1);
++	size_t size = strlen(str);
+ 	uint8_t *tx;
+ 	uint8_t *rx;
+ 


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-10-31 12:34 Alice Ferrazzi
  0 siblings, 0 replies; 26+ messages in thread
From: Alice Ferrazzi @ 2016-10-31 12:34 UTC (permalink / raw
  To: gentoo-commits

commit:     27f3971a11c5ea990bee74d6c8fa48eed0fecadb
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 31 08:01:26 2016 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Mon Oct 31 08:01:26 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=27f3971a

Linux patch 4.8.6

 0000_README            |    8 +-
 1005_linux-4.8.6.patch | 5137 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5143 insertions(+), 2 deletions(-)

diff --git a/0000_README b/0000_README
index a5b48e4..cd373fa 100644
--- a/0000_README
+++ b/0000_README
@@ -59,9 +59,13 @@ Patch:  1003_linux-4.8.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.4
 
-Patch:  1003_linux-4.8.4.patch
+Patch:  1004_linux-4.8.5.patch
 From:   http://www.kernel.org
-Desc:   Linux 4.8.4
+Desc:   Linux 4.8.5
+
+Patch:  1005_linux-4.8.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.6
 
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644

diff --git a/1005_linux-4.8.6.patch b/1005_linux-4.8.6.patch
new file mode 100644
index 0000000..e2d7370
--- /dev/null
+++ b/1005_linux-4.8.6.patch
@@ -0,0 +1,5137 @@
+diff --git a/Makefile b/Makefile
+index daa3a01d2525..b249529204cd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arm/boot/dts/arm-realview-eb.dtsi b/arch/arm/boot/dts/arm-realview-eb.dtsi
+index 1c6a040218e3..e2e9599596e2 100644
+--- a/arch/arm/boot/dts/arm-realview-eb.dtsi
++++ b/arch/arm/boot/dts/arm-realview-eb.dtsi
+@@ -51,14 +51,6 @@
+ 		regulator-boot-on;
+         };
+ 
+-	veth: fixedregulator@0 {
+-		compatible = "regulator-fixed";
+-		regulator-name = "veth";
+-		regulator-min-microvolt = <3300000>;
+-		regulator-max-microvolt = <3300000>;
+-		regulator-boot-on;
+-	};
+-
+ 	xtal24mhz: xtal24mhz@24M {
+ 		#clock-cells = <0>;
+ 		compatible = "fixed-clock";
+@@ -134,16 +126,15 @@
+ 		bank-width = <4>;
+ 	};
+ 
+-	/* SMSC 9118 ethernet with PHY and EEPROM */
++	/* SMSC LAN91C111 ethernet with PHY and EEPROM */
+ 	ethernet: ethernet@4e000000 {
+-		compatible = "smsc,lan9118", "smsc,lan9115";
++		compatible = "smsc,lan91c111";
+ 		reg = <0x4e000000 0x10000>;
+-		phy-mode = "mii";
+-		reg-io-width = <4>;
+-		smsc,irq-active-high;
+-		smsc,irq-push-pull;
+-		vdd33a-supply = <&veth>;
+-		vddvario-supply = <&veth>;
++		/*
++		 * This means the adapter can be accessed with 8, 16 or
++		 * 32 bit reads/writes.
++		 */
++		reg-io-width = <7>;
+ 	};
+ 
+ 	usb: usb@4f000000 {
+diff --git a/arch/arm/boot/dts/bcm958625hr.dts b/arch/arm/boot/dts/bcm958625hr.dts
+index 03b8bbeb694f..652418aa2700 100644
+--- a/arch/arm/boot/dts/bcm958625hr.dts
++++ b/arch/arm/boot/dts/bcm958625hr.dts
+@@ -47,7 +47,8 @@
+ 	};
+ 
+ 	memory {
+-		reg = <0x60000000 0x20000000>;
++		device_type = "memory";
++		reg = <0x60000000 0x80000000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi b/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
+index ca86da68220c..854117dc0b77 100644
+--- a/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
++++ b/arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
+@@ -119,7 +119,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&mcspi1_pins>;
+ 
+-	lcd0: display {
++	lcd0: display@1 {
+ 		compatible = "lgphilips,lb035q02";
+ 		label = "lcd35";
+ 
+diff --git a/arch/arm/boot/dts/sun9i-a80.dtsi b/arch/arm/boot/dts/sun9i-a80.dtsi
+index f68b3242b33a..3f528a379288 100644
+--- a/arch/arm/boot/dts/sun9i-a80.dtsi
++++ b/arch/arm/boot/dts/sun9i-a80.dtsi
+@@ -899,8 +899,7 @@
+ 			resets = <&apbs_rst 0>;
+ 			gpio-controller;
+ 			interrupt-controller;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
++			#interrupt-cells = <3>;
+ 			#gpio-cells = <3>;
+ 
+ 			r_ir_pins: r_ir {
+diff --git a/arch/arm/crypto/ghash-ce-glue.c b/arch/arm/crypto/ghash-ce-glue.c
+index 1568cb5cd870..b88364aa149a 100644
+--- a/arch/arm/crypto/ghash-ce-glue.c
++++ b/arch/arm/crypto/ghash-ce-glue.c
+@@ -220,6 +220,27 @@ static int ghash_async_digest(struct ahash_request *req)
+ 	}
+ }
+ 
++static int ghash_async_import(struct ahash_request *req, const void *in)
++{
++	struct ahash_request *cryptd_req = ahash_request_ctx(req);
++	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
++	struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
++	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
++
++	desc->tfm = cryptd_ahash_child(ctx->cryptd_tfm);
++	desc->flags = req->base.flags;
++
++	return crypto_shash_import(desc, in);
++}
++
++static int ghash_async_export(struct ahash_request *req, void *out)
++{
++	struct ahash_request *cryptd_req = ahash_request_ctx(req);
++	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
++
++	return crypto_shash_export(desc, out);
++}
++
+ static int ghash_async_setkey(struct crypto_ahash *tfm, const u8 *key,
+ 			      unsigned int keylen)
+ {
+@@ -268,7 +289,10 @@ static struct ahash_alg ghash_async_alg = {
+ 	.final			= ghash_async_final,
+ 	.setkey			= ghash_async_setkey,
+ 	.digest			= ghash_async_digest,
++	.import			= ghash_async_import,
++	.export			= ghash_async_export,
+ 	.halg.digestsize	= GHASH_DIGEST_SIZE,
++	.halg.statesize		= sizeof(struct ghash_desc_ctx),
+ 	.halg.base		= {
+ 		.cra_name	= "ghash",
+ 		.cra_driver_name = "ghash-ce",
+diff --git a/arch/arm/mach-pxa/corgi_pm.c b/arch/arm/mach-pxa/corgi_pm.c
+index d9206811be9b..c71c483f410e 100644
+--- a/arch/arm/mach-pxa/corgi_pm.c
++++ b/arch/arm/mach-pxa/corgi_pm.c
+@@ -131,16 +131,11 @@ static int corgi_should_wakeup(unsigned int resume_on_alarm)
+ 	return is_resume;
+ }
+ 
+-static unsigned long corgi_charger_wakeup(void)
++static bool corgi_charger_wakeup(void)
+ {
+-	unsigned long ret;
+-
+-	ret = (!gpio_get_value(CORGI_GPIO_AC_IN) << GPIO_bit(CORGI_GPIO_AC_IN))
+-		| (!gpio_get_value(CORGI_GPIO_KEY_INT)
+-		<< GPIO_bit(CORGI_GPIO_KEY_INT))
+-		| (!gpio_get_value(CORGI_GPIO_WAKEUP)
+-		<< GPIO_bit(CORGI_GPIO_WAKEUP));
+-	return ret;
++	return !gpio_get_value(CORGI_GPIO_AC_IN) ||
++		!gpio_get_value(CORGI_GPIO_KEY_INT) ||
++		!gpio_get_value(CORGI_GPIO_WAKEUP);
+ }
+ 
+ unsigned long corgipm_read_devdata(int type)
+diff --git a/arch/arm/mach-pxa/pxa_cplds_irqs.c b/arch/arm/mach-pxa/pxa_cplds_irqs.c
+index 2385052b0ce1..e362f865fcd2 100644
+--- a/arch/arm/mach-pxa/pxa_cplds_irqs.c
++++ b/arch/arm/mach-pxa/pxa_cplds_irqs.c
+@@ -41,30 +41,35 @@ static irqreturn_t cplds_irq_handler(int in_irq, void *d)
+ 	unsigned long pending;
+ 	unsigned int bit;
+ 
+-	pending = readl(fpga->base + FPGA_IRQ_SET_CLR) & fpga->irq_mask;
+-	for_each_set_bit(bit, &pending, CPLDS_NB_IRQ)
+-		generic_handle_irq(irq_find_mapping(fpga->irqdomain, bit));
++	do {
++		pending = readl(fpga->base + FPGA_IRQ_SET_CLR) & fpga->irq_mask;
++		for_each_set_bit(bit, &pending, CPLDS_NB_IRQ) {
++			generic_handle_irq(irq_find_mapping(fpga->irqdomain,
++							    bit));
++		}
++	} while (pending);
+ 
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void cplds_irq_mask_ack(struct irq_data *d)
++static void cplds_irq_mask(struct irq_data *d)
+ {
+ 	struct cplds *fpga = irq_data_get_irq_chip_data(d);
+ 	unsigned int cplds_irq = irqd_to_hwirq(d);
+-	unsigned int set, bit = BIT(cplds_irq);
++	unsigned int bit = BIT(cplds_irq);
+ 
+ 	fpga->irq_mask &= ~bit;
+ 	writel(fpga->irq_mask, fpga->base + FPGA_IRQ_MASK_EN);
+-	set = readl(fpga->base + FPGA_IRQ_SET_CLR);
+-	writel(set & ~bit, fpga->base + FPGA_IRQ_SET_CLR);
+ }
+ 
+ static void cplds_irq_unmask(struct irq_data *d)
+ {
+ 	struct cplds *fpga = irq_data_get_irq_chip_data(d);
+ 	unsigned int cplds_irq = irqd_to_hwirq(d);
+-	unsigned int bit = BIT(cplds_irq);
++	unsigned int set, bit = BIT(cplds_irq);
++
++	set = readl(fpga->base + FPGA_IRQ_SET_CLR);
++	writel(set & ~bit, fpga->base + FPGA_IRQ_SET_CLR);
+ 
+ 	fpga->irq_mask |= bit;
+ 	writel(fpga->irq_mask, fpga->base + FPGA_IRQ_MASK_EN);
+@@ -72,7 +77,8 @@ static void cplds_irq_unmask(struct irq_data *d)
+ 
+ static struct irq_chip cplds_irq_chip = {
+ 	.name		= "pxa_cplds",
+-	.irq_mask_ack	= cplds_irq_mask_ack,
++	.irq_ack	= cplds_irq_mask,
++	.irq_mask	= cplds_irq_mask,
+ 	.irq_unmask	= cplds_irq_unmask,
+ 	.flags		= IRQCHIP_MASK_ON_SUSPEND | IRQCHIP_SKIP_SET_WAKE,
+ };
+diff --git a/arch/arm/mach-pxa/sharpsl_pm.c b/arch/arm/mach-pxa/sharpsl_pm.c
+index b80eab9993c5..249b7bd5fbc4 100644
+--- a/arch/arm/mach-pxa/sharpsl_pm.c
++++ b/arch/arm/mach-pxa/sharpsl_pm.c
+@@ -744,7 +744,7 @@ static int sharpsl_off_charge_battery(void)
+ 		time = RCNR;
+ 		while (1) {
+ 			/* Check if any wakeup event had occurred */
+-			if (sharpsl_pm.machinfo->charger_wakeup() != 0)
++			if (sharpsl_pm.machinfo->charger_wakeup())
+ 				return 0;
+ 			/* Check for timeout */
+ 			if ((RCNR - time) > SHARPSL_WAIT_CO_TIME)
+diff --git a/arch/arm/mach-pxa/sharpsl_pm.h b/arch/arm/mach-pxa/sharpsl_pm.h
+index 905be6755f04..fa75b6df8134 100644
+--- a/arch/arm/mach-pxa/sharpsl_pm.h
++++ b/arch/arm/mach-pxa/sharpsl_pm.h
+@@ -34,7 +34,7 @@ struct sharpsl_charger_machinfo {
+ #define SHARPSL_STATUS_LOCK     5
+ #define SHARPSL_STATUS_CHRGFULL 6
+ #define SHARPSL_STATUS_FATAL    7
+-	unsigned long (*charger_wakeup)(void);
++	bool (*charger_wakeup)(void);
+ 	int (*should_wakeup)(unsigned int resume_on_alarm);
+ 	void (*backlight_limit)(int);
+ 	int (*backlight_get_status) (void);
+diff --git a/arch/arm/mach-pxa/spitz_pm.c b/arch/arm/mach-pxa/spitz_pm.c
+index ea9f9034cb54..4e64a140252e 100644
+--- a/arch/arm/mach-pxa/spitz_pm.c
++++ b/arch/arm/mach-pxa/spitz_pm.c
+@@ -165,13 +165,10 @@ static int spitz_should_wakeup(unsigned int resume_on_alarm)
+ 	return is_resume;
+ }
+ 
+-static unsigned long spitz_charger_wakeup(void)
++static bool spitz_charger_wakeup(void)
+ {
+-	unsigned long ret;
+-	ret = ((!gpio_get_value(SPITZ_GPIO_KEY_INT)
+-		<< GPIO_bit(SPITZ_GPIO_KEY_INT))
+-		| gpio_get_value(SPITZ_GPIO_SYNC));
+-	return ret;
++	return !gpio_get_value(SPITZ_GPIO_KEY_INT) ||
++		gpio_get_value(SPITZ_GPIO_SYNC);
+ }
+ 
+ unsigned long spitzpm_read_devdata(int type)
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 263bf39ced40..9bd84ba06ec4 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -6,6 +6,8 @@
+  */
+ #define _PAGE_BIT_SWAP_TYPE	0
+ 
++#define _PAGE_RO		0
++
+ #define _PAGE_EXEC		0x00001 /* execute permission */
+ #define _PAGE_WRITE		0x00002 /* write access allowed */
+ #define _PAGE_READ		0x00004	/* read access allowed */
+diff --git a/arch/powerpc/kernel/nvram_64.c b/arch/powerpc/kernel/nvram_64.c
+index 64174bf95611..05a0a913ec38 100644
+--- a/arch/powerpc/kernel/nvram_64.c
++++ b/arch/powerpc/kernel/nvram_64.c
+@@ -956,7 +956,7 @@ int __init nvram_remove_partition(const char *name, int sig,
+ 
+ 		/* Make partition a free partition */
+ 		part->header.signature = NVRAM_SIG_FREE;
+-		strncpy(part->header.name, "wwwwwwwwwwww", 12);
++		memset(part->header.name, 'w', 12);
+ 		part->header.checksum = nvram_checksum(&part->header);
+ 		rc = nvram_write_header(part);
+ 		if (rc <= 0) {
+@@ -974,8 +974,8 @@ int __init nvram_remove_partition(const char *name, int sig,
+ 		}
+ 		if (prev) {
+ 			prev->header.length += part->header.length;
+-			prev->header.checksum = nvram_checksum(&part->header);
+-			rc = nvram_write_header(part);
++			prev->header.checksum = nvram_checksum(&prev->header);
++			rc = nvram_write_header(prev);
+ 			if (rc <= 0) {
+ 				printk(KERN_ERR "nvram_remove_partition: nvram_write failed (%d)\n", rc);
+ 				return rc;
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 9ee2623e0f67..ad37aa175f59 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -88,7 +88,13 @@ static void check_if_tm_restore_required(struct task_struct *tsk)
+ 		set_thread_flag(TIF_RESTORE_TM);
+ 	}
+ }
++
++static inline bool msr_tm_active(unsigned long msr)
++{
++	return MSR_TM_ACTIVE(msr);
++}
+ #else
++static inline bool msr_tm_active(unsigned long msr) { return false; }
+ static inline void check_if_tm_restore_required(struct task_struct *tsk) { }
+ #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+ 
+@@ -208,7 +214,7 @@ void enable_kernel_fp(void)
+ EXPORT_SYMBOL(enable_kernel_fp);
+ 
+ static int restore_fp(struct task_struct *tsk) {
+-	if (tsk->thread.load_fp) {
++	if (tsk->thread.load_fp || msr_tm_active(tsk->thread.regs->msr)) {
+ 		load_fp_state(&current->thread.fp_state);
+ 		current->thread.load_fp++;
+ 		return 1;
+@@ -278,7 +284,8 @@ EXPORT_SYMBOL_GPL(flush_altivec_to_thread);
+ 
+ static int restore_altivec(struct task_struct *tsk)
+ {
+-	if (cpu_has_feature(CPU_FTR_ALTIVEC) && tsk->thread.load_vec) {
++	if (cpu_has_feature(CPU_FTR_ALTIVEC) &&
++		(tsk->thread.load_vec || msr_tm_active(tsk->thread.regs->msr))) {
+ 		load_vr_state(&tsk->thread.vr_state);
+ 		tsk->thread.used_vr = 1;
+ 		tsk->thread.load_vec++;
+@@ -438,6 +445,7 @@ void giveup_all(struct task_struct *tsk)
+ 		return;
+ 
+ 	msr_check_and_set(msr_all_available);
++	check_if_tm_restore_required(tsk);
+ 
+ #ifdef CONFIG_PPC_FPU
+ 	if (usermsr & MSR_FP)
+@@ -464,7 +472,8 @@ void restore_math(struct pt_regs *regs)
+ {
+ 	unsigned long msr;
+ 
+-	if (!current->thread.load_fp && !loadvec(current->thread))
++	if (!msr_tm_active(regs->msr) &&
++		!current->thread.load_fp && !loadvec(current->thread))
+ 		return;
+ 
+ 	msr = regs->msr;
+@@ -983,6 +992,13 @@ void restore_tm_state(struct pt_regs *regs)
+ 	msr_diff = current->thread.ckpt_regs.msr & ~regs->msr;
+ 	msr_diff &= MSR_FP | MSR_VEC | MSR_VSX;
+ 
++	/* Ensure that restore_math() will restore */
++	if (msr_diff & MSR_FP)
++		current->thread.load_fp = 1;
++#ifdef CONFIG_ALTIVEC
++	if (cpu_has_feature(CPU_FTR_ALTIVEC) && msr_diff & MSR_VEC)
++		current->thread.load_vec = 1;
++#endif
+ 	restore_math(regs);
+ 
+ 	regs->msr |= msr_diff;
+diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
+index 7372ee13eb1e..a5d3ecdabc44 100644
+--- a/arch/powerpc/mm/hugetlbpage.c
++++ b/arch/powerpc/mm/hugetlbpage.c
+@@ -1019,8 +1019,15 @@ int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
+ 
+ 	pte = READ_ONCE(*ptep);
+ 	mask = _PAGE_PRESENT | _PAGE_READ;
++
++	/*
++	 * On some CPUs like the 8xx, _PAGE_RW hence _PAGE_WRITE is defined
++	 * as 0 and _PAGE_RO has to be set when a page is not writable
++	 */
+ 	if (write)
+ 		mask |= _PAGE_WRITE;
++	else
++		mask |= _PAGE_RO;
+ 
+ 	if ((pte_val(pte) & mask) != mask)
+ 		return 0;
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index de7501edb21c..8b8852bc2f4a 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -317,16 +317,11 @@ static phys_addr_t __init i85x_stolen_base(int num, int slot, int func,
+ static phys_addr_t __init i865_stolen_base(int num, int slot, int func,
+ 					   size_t stolen_size)
+ {
+-	u16 toud;
++	u16 toud = 0;
+ 
+-	/*
+-	 * FIXME is the graphics stolen memory region
+-	 * always at TOUD? Ie. is it always the last
+-	 * one to be allocated by the BIOS?
+-	 */
+ 	toud = read_pci_config_16(0, 0, 0, I865_TOUD);
+ 
+-	return (phys_addr_t)toud << 16;
++	return (phys_addr_t)(toud << 16) + i845_tseg_size();
+ }
+ 
+ static phys_addr_t __init gen3_stolen_base(int num, int slot, int func,
+diff --git a/crypto/gcm.c b/crypto/gcm.c
+index 70a892e87ccb..f624ac98c94e 100644
+--- a/crypto/gcm.c
++++ b/crypto/gcm.c
+@@ -117,7 +117,7 @@ static int crypto_gcm_setkey(struct crypto_aead *aead, const u8 *key,
+ 	struct crypto_skcipher *ctr = ctx->ctr;
+ 	struct {
+ 		be128 hash;
+-		u8 iv[8];
++		u8 iv[16];
+ 
+ 		struct crypto_gcm_setkey_result result;
+ 
+diff --git a/drivers/char/hw_random/omap-rng.c b/drivers/char/hw_random/omap-rng.c
+index 01d4be2c354b..f5c26a5f6875 100644
+--- a/drivers/char/hw_random/omap-rng.c
++++ b/drivers/char/hw_random/omap-rng.c
+@@ -385,7 +385,7 @@ static int omap_rng_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(&pdev->dev);
+ 	ret = pm_runtime_get_sync(&pdev->dev);
+-	if (ret) {
++	if (ret < 0) {
+ 		dev_err(&pdev->dev, "Failed to runtime_get device: %d\n", ret);
+ 		pm_runtime_put_noidle(&pdev->dev);
+ 		goto err_ioremap;
+@@ -443,7 +443,7 @@ static int __maybe_unused omap_rng_resume(struct device *dev)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(dev);
+-	if (ret) {
++	if (ret < 0) {
+ 		dev_err(dev, "Failed to runtime_get device: %d\n", ret);
+ 		pm_runtime_put_noidle(dev);
+ 		return ret;
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 7a7970865c2d..0fc71cbaa440 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -1006,16 +1006,28 @@ static int bcm2835_clock_set_rate(struct clk_hw *hw,
+ 	return 0;
+ }
+ 
++static bool
++bcm2835_clk_is_pllc(struct clk_hw *hw)
++{
++	if (!hw)
++		return false;
++
++	return strncmp(clk_hw_get_name(hw), "pllc", 4) == 0;
++}
++
+ static int bcm2835_clock_determine_rate(struct clk_hw *hw,
+ 					struct clk_rate_request *req)
+ {
+ 	struct bcm2835_clock *clock = bcm2835_clock_from_hw(hw);
+ 	struct clk_hw *parent, *best_parent = NULL;
++	bool current_parent_is_pllc;
+ 	unsigned long rate, best_rate = 0;
+ 	unsigned long prate, best_prate = 0;
+ 	size_t i;
+ 	u32 div;
+ 
++	current_parent_is_pllc = bcm2835_clk_is_pllc(clk_hw_get_parent(hw));
++
+ 	/*
+ 	 * Select parent clock that results in the closest but lower rate
+ 	 */
+@@ -1023,6 +1035,17 @@ static int bcm2835_clock_determine_rate(struct clk_hw *hw,
+ 		parent = clk_hw_get_parent_by_index(hw, i);
+ 		if (!parent)
+ 			continue;
++
++		/*
++		 * Don't choose a PLLC-derived clock as our parent
++		 * unless it had been manually set that way.  PLLC's
++		 * frequency gets adjusted by the firmware due to
++		 * over-temp or under-voltage conditions, without
++		 * prior notification to our clock consumer.
++		 */
++		if (bcm2835_clk_is_pllc(parent) && !current_parent_is_pllc)
++			continue;
++
+ 		prate = clk_hw_get_rate(parent);
+ 		div = bcm2835_clock_choose_div(hw, req->rate, prate, true);
+ 		rate = bcm2835_clock_rate_from_divisor(clock, prate, div);
+diff --git a/drivers/clk/clk-divider.c b/drivers/clk/clk-divider.c
+index a0f55bc1ad3d..96386ffc8483 100644
+--- a/drivers/clk/clk-divider.c
++++ b/drivers/clk/clk-divider.c
+@@ -352,7 +352,7 @@ static long clk_divider_round_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	/* if read only, just return current value */
+ 	if (divider->flags & CLK_DIVIDER_READ_ONLY) {
+-		bestdiv = readl(divider->reg) >> divider->shift;
++		bestdiv = clk_readl(divider->reg) >> divider->shift;
+ 		bestdiv &= div_mask(divider->width);
+ 		bestdiv = _get_div(divider->table, bestdiv, divider->flags,
+ 			divider->width);
+diff --git a/drivers/clk/clk-qoriq.c b/drivers/clk/clk-qoriq.c
+index 58566a17944a..20b105584f82 100644
+--- a/drivers/clk/clk-qoriq.c
++++ b/drivers/clk/clk-qoriq.c
+@@ -766,7 +766,11 @@ static struct clk * __init create_one_cmux(struct clockgen *cg, int idx)
+ 	if (!hwc)
+ 		return NULL;
+ 
+-	hwc->reg = cg->regs + 0x20 * idx;
++	if (cg->info.flags & CG_VER3)
++		hwc->reg = cg->regs + 0x70000 + 0x20 * idx;
++	else
++		hwc->reg = cg->regs + 0x20 * idx;
++
+ 	hwc->info = cg->info.cmux_groups[cg->info.cmux_to_group[idx]];
+ 
+ 	/*
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 820a939fb6bb..2877a4ddeda2 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -1908,10 +1908,6 @@ int clk_set_phase(struct clk *clk, int degrees)
+ 
+ 	clk_prepare_lock();
+ 
+-	/* bail early if nothing to do */
+-	if (degrees == clk->core->phase)
+-		goto out;
+-
+ 	trace_clk_set_phase(clk->core, degrees);
+ 
+ 	if (clk->core->ops->set_phase)
+@@ -1922,7 +1918,6 @@ int clk_set_phase(struct clk *clk, int degrees)
+ 	if (!ret)
+ 		clk->core->phase = degrees;
+ 
+-out:
+ 	clk_prepare_unlock();
+ 
+ 	return ret;
+@@ -3186,7 +3181,7 @@ struct clk *__of_clk_get_from_provider(struct of_phandle_args *clkspec,
+ {
+ 	struct of_clk_provider *provider;
+ 	struct clk *clk = ERR_PTR(-EPROBE_DEFER);
+-	struct clk_hw *hw = ERR_PTR(-EPROBE_DEFER);
++	struct clk_hw *hw;
+ 
+ 	if (!clkspec)
+ 		return ERR_PTR(-EINVAL);
+@@ -3194,12 +3189,13 @@ struct clk *__of_clk_get_from_provider(struct of_phandle_args *clkspec,
+ 	/* Check if we have such a provider in our array */
+ 	mutex_lock(&of_clk_mutex);
+ 	list_for_each_entry(provider, &of_clk_providers, link) {
+-		if (provider->node == clkspec->np)
++		if (provider->node == clkspec->np) {
+ 			hw = __of_clk_get_hw_from_provider(provider, clkspec);
+-		if (!IS_ERR(hw)) {
+ 			clk = __clk_create_clk(hw, dev_id, con_id);
++		}
+ 
+-			if (!IS_ERR(clk) && !__clk_get(clk)) {
++		if (!IS_ERR(clk)) {
++			if (!__clk_get(clk)) {
+ 				__clk_free_clk(clk);
+ 				clk = ERR_PTR(-ENOENT);
+ 			}
+diff --git a/drivers/clk/imx/clk-imx35.c b/drivers/clk/imx/clk-imx35.c
+index b0978d3b83e2..d302ed3b8225 100644
+--- a/drivers/clk/imx/clk-imx35.c
++++ b/drivers/clk/imx/clk-imx35.c
+@@ -115,7 +115,7 @@ static void __init _mx35_clocks_init(void)
+ 	}
+ 
+ 	clk[ckih] = imx_clk_fixed("ckih", 24000000);
+-	clk[ckil] = imx_clk_fixed("ckih", 32768);
++	clk[ckil] = imx_clk_fixed("ckil", 32768);
+ 	clk[mpll] = imx_clk_pllv1(IMX_PLLV1_IMX35, "mpll", "ckih", base + MX35_CCM_MPCTL);
+ 	clk[ppll] = imx_clk_pllv1(IMX_PLLV1_IMX35, "ppll", "ckih", base + MX35_CCM_PPCTL);
+ 
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index 95e3b3e0fa1c..98909b184d44 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -117,6 +117,7 @@ config MSM_MMCC_8974
+ 
+ config MSM_GCC_8996
+ 	tristate "MSM8996 Global Clock Controller"
++	select QCOM_GDSC
+ 	depends on COMMON_CLK_QCOM
+ 	help
+ 	  Support for the global clock controller on msm8996 devices.
+@@ -126,6 +127,7 @@ config MSM_GCC_8996
+ config MSM_MMCC_8996
+ 	tristate "MSM8996 Multimedia Clock Controller"
+ 	select MSM_GCC_8996
++	select QCOM_GDSC
+ 	depends on COMMON_CLK_QCOM
+ 	help
+ 	  Support for the multimedia clock controller on msm8996 devices.
+diff --git a/drivers/clk/qcom/gcc-msm8996.c b/drivers/clk/qcom/gcc-msm8996.c
+index bbf732bbc3fd..9f643cca85d0 100644
+--- a/drivers/clk/qcom/gcc-msm8996.c
++++ b/drivers/clk/qcom/gcc-msm8996.c
+@@ -2592,9 +2592,9 @@ static struct clk_branch gcc_pcie_2_aux_clk = {
+ };
+ 
+ static struct clk_branch gcc_pcie_2_pipe_clk = {
+-	.halt_reg = 0x6e108,
++	.halt_reg = 0x6e018,
+ 	.clkr = {
+-		.enable_reg = 0x6e108,
++		.enable_reg = 0x6e018,
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gcc_pcie_2_pipe_clk",
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index 94f77b0f9ae7..32f645ea77b8 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -650,7 +650,7 @@ int ccp_dmaengine_register(struct ccp_device *ccp)
+ 	dma_desc_cache_name = devm_kasprintf(ccp->dev, GFP_KERNEL,
+ 					     "%s-dmaengine-desc-cache",
+ 					     ccp->name);
+-	if (!dma_cmd_cache_name)
++	if (!dma_desc_cache_name)
+ 		return -ENOMEM;
+ 	ccp->dma_desc_cache = kmem_cache_create(dma_desc_cache_name,
+ 						sizeof(struct ccp_dma_desc),
+diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
+index d64af8625d7e..37dadb2a4feb 100644
+--- a/drivers/crypto/marvell/cesa.c
++++ b/drivers/crypto/marvell/cesa.c
+@@ -166,6 +166,7 @@ static irqreturn_t mv_cesa_int(int irq, void *priv)
+ 			if (!req)
+ 				break;
+ 
++			ctx = crypto_tfm_ctx(req->tfm);
+ 			mv_cesa_complete_req(ctx, req, 0);
+ 		}
+ 	}
+diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
+index 82e0f4e6eb1c..b111e14bac1e 100644
+--- a/drivers/crypto/marvell/hash.c
++++ b/drivers/crypto/marvell/hash.c
+@@ -805,13 +805,14 @@ static int mv_cesa_md5_init(struct ahash_request *req)
+ 	struct mv_cesa_op_ctx tmpl = { };
+ 
+ 	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_MACM_MD5);
++
++	mv_cesa_ahash_init(req, &tmpl, true);
++
+ 	creq->state[0] = MD5_H0;
+ 	creq->state[1] = MD5_H1;
+ 	creq->state[2] = MD5_H2;
+ 	creq->state[3] = MD5_H3;
+ 
+-	mv_cesa_ahash_init(req, &tmpl, true);
+-
+ 	return 0;
+ }
+ 
+@@ -873,14 +874,15 @@ static int mv_cesa_sha1_init(struct ahash_request *req)
+ 	struct mv_cesa_op_ctx tmpl = { };
+ 
+ 	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_MACM_SHA1);
++
++	mv_cesa_ahash_init(req, &tmpl, false);
++
+ 	creq->state[0] = SHA1_H0;
+ 	creq->state[1] = SHA1_H1;
+ 	creq->state[2] = SHA1_H2;
+ 	creq->state[3] = SHA1_H3;
+ 	creq->state[4] = SHA1_H4;
+ 
+-	mv_cesa_ahash_init(req, &tmpl, false);
+-
+ 	return 0;
+ }
+ 
+@@ -942,6 +944,9 @@ static int mv_cesa_sha256_init(struct ahash_request *req)
+ 	struct mv_cesa_op_ctx tmpl = { };
+ 
+ 	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_MACM_SHA256);
++
++	mv_cesa_ahash_init(req, &tmpl, false);
++
+ 	creq->state[0] = SHA256_H0;
+ 	creq->state[1] = SHA256_H1;
+ 	creq->state[2] = SHA256_H2;
+@@ -951,8 +956,6 @@ static int mv_cesa_sha256_init(struct ahash_request *req)
+ 	creq->state[6] = SHA256_H6;
+ 	creq->state[7] = SHA256_H7;
+ 
+-	mv_cesa_ahash_init(req, &tmpl, false);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/dma/ipu/ipu_irq.c b/drivers/dma/ipu/ipu_irq.c
+index 2bf37e68ad0f..dd184b50e5b4 100644
+--- a/drivers/dma/ipu/ipu_irq.c
++++ b/drivers/dma/ipu/ipu_irq.c
+@@ -286,22 +286,21 @@ static void ipu_irq_handler(struct irq_desc *desc)
+ 		raw_spin_unlock(&bank_lock);
+ 		while ((line = ffs(status))) {
+ 			struct ipu_irq_map *map;
+-			unsigned int irq = NO_IRQ;
++			unsigned int irq;
+ 
+ 			line--;
+ 			status &= ~(1UL << line);
+ 
+ 			raw_spin_lock(&bank_lock);
+ 			map = src2map(32 * i + line);
+-			if (map)
+-				irq = map->irq;
+-			raw_spin_unlock(&bank_lock);
+-
+ 			if (!map) {
++				raw_spin_unlock(&bank_lock);
+ 				pr_err("IPU: Interrupt on unmapped source %u bank %d\n",
+ 				       line, i);
+ 				continue;
+ 			}
++			irq = map->irq;
++			raw_spin_unlock(&bank_lock);
+ 			generic_handle_irq(irq);
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+index 17e13621fae9..4e71a680e91b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+@@ -43,6 +43,9 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev, struct amdgpu_ctx *ctx)
+ 		ctx->rings[i].sequence = 1;
+ 		ctx->rings[i].fences = &ctx->fences[amdgpu_sched_jobs * i];
+ 	}
++
++	ctx->reset_counter = atomic_read(&adev->gpu_reset_counter);
++
+ 	/* create context entity for each ring */
+ 	for (i = 0; i < adev->num_rings; i++) {
+ 		struct amdgpu_ring *ring = adev->rings[i];
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c
+index fe36caf1b7d7..14f57d9915e3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c
+@@ -113,24 +113,26 @@ void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
+ 	printk("\n");
+ }
+ 
++
+ u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
+ {
+ 	struct drm_device *dev = adev->ddev;
+ 	struct drm_crtc *crtc;
+ 	struct amdgpu_crtc *amdgpu_crtc;
+-	u32 line_time_us, vblank_lines;
++	u32 vblank_in_pixels;
+ 	u32 vblank_time_us = 0xffffffff; /* if the displays are off, vblank time is max */
+ 
+ 	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
+ 		list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+ 			amdgpu_crtc = to_amdgpu_crtc(crtc);
+ 			if (crtc->enabled && amdgpu_crtc->enabled && amdgpu_crtc->hw_mode.clock) {
+-				line_time_us = (amdgpu_crtc->hw_mode.crtc_htotal * 1000) /
+-					amdgpu_crtc->hw_mode.clock;
+-				vblank_lines = amdgpu_crtc->hw_mode.crtc_vblank_end -
++				vblank_in_pixels =
++					amdgpu_crtc->hw_mode.crtc_htotal *
++					(amdgpu_crtc->hw_mode.crtc_vblank_end -
+ 					amdgpu_crtc->hw_mode.crtc_vdisplay +
+-					(amdgpu_crtc->v_border * 2);
+-				vblank_time_us = vblank_lines * line_time_us;
++					(amdgpu_crtc->v_border * 2));
++
++				vblank_time_us = vblank_in_pixels * 1000 / amdgpu_crtc->hw_mode.clock;
+ 				break;
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index d942654a1de0..e24a8af72d90 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -292,7 +292,7 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 			type = AMD_IP_BLOCK_TYPE_UVD;
+ 			ring_mask = adev->uvd.ring.ready ? 1 : 0;
+ 			ib_start_alignment = AMDGPU_GPU_PAGE_SIZE;
+-			ib_size_alignment = 8;
++			ib_size_alignment = 16;
+ 			break;
+ 		case AMDGPU_HW_IP_VCE:
+ 			type = AMD_IP_BLOCK_TYPE_VCE;
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
+index c1b04e9aab57..172bed946287 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
+@@ -425,16 +425,6 @@ static void dce_v10_0_hpd_init(struct amdgpu_device *adev)
+ 	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ 		struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
+ 
+-		if (connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
+-		    connector->connector_type == DRM_MODE_CONNECTOR_LVDS) {
+-			/* don't try to enable hpd on eDP or LVDS avoid breaking the
+-			 * aux dp channel on imac and help (but not completely fix)
+-			 * https://bugzilla.redhat.com/show_bug.cgi?id=726143
+-			 * also avoid interrupt storms during dpms.
+-			 */
+-			continue;
+-		}
+-
+ 		switch (amdgpu_connector->hpd.hpd) {
+ 		case AMDGPU_HPD_1:
+ 			idx = 0;
+@@ -458,6 +448,19 @@ static void dce_v10_0_hpd_init(struct amdgpu_device *adev)
+ 			continue;
+ 		}
+ 
++		if (connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
++		    connector->connector_type == DRM_MODE_CONNECTOR_LVDS) {
++			/* don't try to enable hpd on eDP or LVDS avoid breaking the
++			 * aux dp channel on imac and help (but not completely fix)
++			 * https://bugzilla.redhat.com/show_bug.cgi?id=726143
++			 * also avoid interrupt storms during dpms.
++			 */
++			tmp = RREG32(mmDC_HPD_INT_CONTROL + hpd_offsets[idx]);
++			tmp = REG_SET_FIELD(tmp, DC_HPD_INT_CONTROL, DC_HPD_INT_EN, 0);
++			WREG32(mmDC_HPD_INT_CONTROL + hpd_offsets[idx], tmp);
++			continue;
++		}
++
+ 		tmp = RREG32(mmDC_HPD_CONTROL + hpd_offsets[idx]);
+ 		tmp = REG_SET_FIELD(tmp, DC_HPD_CONTROL, DC_HPD_EN, 1);
+ 		WREG32(mmDC_HPD_CONTROL + hpd_offsets[idx], tmp);
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+index d4bf133908b1..67c7c05a751c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+@@ -443,16 +443,6 @@ static void dce_v11_0_hpd_init(struct amdgpu_device *adev)
+ 	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ 		struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
+ 
+-		if (connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
+-		    connector->connector_type == DRM_MODE_CONNECTOR_LVDS) {
+-			/* don't try to enable hpd on eDP or LVDS avoid breaking the
+-			 * aux dp channel on imac and help (but not completely fix)
+-			 * https://bugzilla.redhat.com/show_bug.cgi?id=726143
+-			 * also avoid interrupt storms during dpms.
+-			 */
+-			continue;
+-		}
+-
+ 		switch (amdgpu_connector->hpd.hpd) {
+ 		case AMDGPU_HPD_1:
+ 			idx = 0;
+@@ -476,6 +466,19 @@ static void dce_v11_0_hpd_init(struct amdgpu_device *adev)
+ 			continue;
+ 		}
+ 
++		if (connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
++		    connector->connector_type == DRM_MODE_CONNECTOR_LVDS) {
++			/* don't try to enable hpd on eDP or LVDS avoid breaking the
++			 * aux dp channel on imac and help (but not completely fix)
++			 * https://bugzilla.redhat.com/show_bug.cgi?id=726143
++			 * also avoid interrupt storms during dpms.
++			 */
++			tmp = RREG32(mmDC_HPD_INT_CONTROL + hpd_offsets[idx]);
++			tmp = REG_SET_FIELD(tmp, DC_HPD_INT_CONTROL, DC_HPD_INT_EN, 0);
++			WREG32(mmDC_HPD_INT_CONTROL + hpd_offsets[idx], tmp);
++			continue;
++		}
++
+ 		tmp = RREG32(mmDC_HPD_CONTROL + hpd_offsets[idx]);
+ 		tmp = REG_SET_FIELD(tmp, DC_HPD_CONTROL, DC_HPD_EN, 1);
+ 		WREG32(mmDC_HPD_CONTROL + hpd_offsets[idx], tmp);
+@@ -3109,6 +3112,7 @@ static int dce_v11_0_sw_fini(void *handle)
+ 
+ 	dce_v11_0_afmt_fini(adev);
+ 
++	drm_mode_config_cleanup(adev->ddev);
+ 	adev->mode_info.mode_config_initialized = false;
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
+index 4fdfab1e9200..ea07c50369b4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
+@@ -395,15 +395,6 @@ static void dce_v8_0_hpd_init(struct amdgpu_device *adev)
+ 	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ 		struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
+ 
+-		if (connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
+-		    connector->connector_type == DRM_MODE_CONNECTOR_LVDS) {
+-			/* don't try to enable hpd on eDP or LVDS avoid breaking the
+-			 * aux dp channel on imac and help (but not completely fix)
+-			 * https://bugzilla.redhat.com/show_bug.cgi?id=726143
+-			 * also avoid interrupt storms during dpms.
+-			 */
+-			continue;
+-		}
+ 		switch (amdgpu_connector->hpd.hpd) {
+ 		case AMDGPU_HPD_1:
+ 			WREG32(mmDC_HPD1_CONTROL, tmp);
+@@ -426,6 +417,45 @@ static void dce_v8_0_hpd_init(struct amdgpu_device *adev)
+ 		default:
+ 			break;
+ 		}
++
++		if (connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
++		    connector->connector_type == DRM_MODE_CONNECTOR_LVDS) {
++			/* don't try to enable hpd on eDP or LVDS avoid breaking the
++			 * aux dp channel on imac and help (but not completely fix)
++			 * https://bugzilla.redhat.com/show_bug.cgi?id=726143
++			 * also avoid interrupt storms during dpms.
++			 */
++			u32 dc_hpd_int_cntl_reg, dc_hpd_int_cntl;
++
++			switch (amdgpu_connector->hpd.hpd) {
++			case AMDGPU_HPD_1:
++				dc_hpd_int_cntl_reg = mmDC_HPD1_INT_CONTROL;
++				break;
++			case AMDGPU_HPD_2:
++				dc_hpd_int_cntl_reg = mmDC_HPD2_INT_CONTROL;
++				break;
++			case AMDGPU_HPD_3:
++				dc_hpd_int_cntl_reg = mmDC_HPD3_INT_CONTROL;
++				break;
++			case AMDGPU_HPD_4:
++				dc_hpd_int_cntl_reg = mmDC_HPD4_INT_CONTROL;
++				break;
++			case AMDGPU_HPD_5:
++				dc_hpd_int_cntl_reg = mmDC_HPD5_INT_CONTROL;
++				break;
++			case AMDGPU_HPD_6:
++				dc_hpd_int_cntl_reg = mmDC_HPD6_INT_CONTROL;
++				break;
++			default:
++				continue;
++			}
++
++			dc_hpd_int_cntl = RREG32(dc_hpd_int_cntl_reg);
++			dc_hpd_int_cntl &= ~DC_HPD1_INT_CONTROL__DC_HPD1_INT_EN_MASK;
++			WREG32(dc_hpd_int_cntl_reg, dc_hpd_int_cntl);
++			continue;
++		}
++
+ 		dce_v8_0_hpd_set_polarity(adev, amdgpu_connector->hpd.hpd);
+ 		amdgpu_irq_get(adev, &adev->hpd_irq, amdgpu_connector->hpd.hpd);
+ 	}
+diff --git a/drivers/gpu/drm/amd/powerplay/eventmgr/eventactionchains.c b/drivers/gpu/drm/amd/powerplay/eventmgr/eventactionchains.c
+index 635fc4b48184..92b117843875 100644
+--- a/drivers/gpu/drm/amd/powerplay/eventmgr/eventactionchains.c
++++ b/drivers/gpu/drm/amd/powerplay/eventmgr/eventactionchains.c
+@@ -262,6 +262,8 @@ static const pem_event_action * const display_config_change_event[] = {
+ 	unblock_adjust_power_state_tasks,
+ 	set_cpu_power_state,
+ 	notify_hw_power_source_tasks,
++	get_2d_performance_state_tasks,
++	set_performance_state_tasks,
+ 	/* updateDALConfigurationTasks,
+ 	variBrightDisplayConfigurationChangeTasks, */
+ 	adjust_power_state_tasks,
+diff --git a/drivers/gpu/drm/amd/powerplay/eventmgr/psm.c b/drivers/gpu/drm/amd/powerplay/eventmgr/psm.c
+index a46225c0fc01..d6bee727497c 100644
+--- a/drivers/gpu/drm/amd/powerplay/eventmgr/psm.c
++++ b/drivers/gpu/drm/amd/powerplay/eventmgr/psm.c
+@@ -100,11 +100,12 @@ int psm_adjust_power_state_dynamic(struct pp_eventmgr *eventmgr, bool skip)
+ 	if (requested == NULL)
+ 		return 0;
+ 
++	phm_apply_state_adjust_rules(hwmgr, requested, pcurrent);
++
+ 	if (pcurrent == NULL || (0 != phm_check_states_equal(hwmgr, &pcurrent->hardware, &requested->hardware, &equal)))
+ 		equal = false;
+ 
+ 	if (!equal || phm_check_smc_update_required_for_display_configuration(hwmgr)) {
+-		phm_apply_state_adjust_rules(hwmgr, requested, pcurrent);
+ 		phm_set_power_state(hwmgr, &pcurrent->hardware, &requested->hardware);
+ 		hwmgr->current_ps = requested;
+ 	}
+diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
+index 780589b420a4..9c4387d79d11 100644
+--- a/drivers/gpu/drm/drm_prime.c
++++ b/drivers/gpu/drm/drm_prime.c
+@@ -335,14 +335,17 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops =  {
+  * using the PRIME helpers.
+  */
+ struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
+-				     struct drm_gem_object *obj, int flags)
++				     struct drm_gem_object *obj,
++				     int flags)
+ {
+-	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+-
+-	exp_info.ops = &drm_gem_prime_dmabuf_ops;
+-	exp_info.size = obj->size;
+-	exp_info.flags = flags;
+-	exp_info.priv = obj;
++	struct dma_buf_export_info exp_info = {
++		.exp_name = KBUILD_MODNAME, /* white lie for debug */
++		.owner = dev->driver->fops->owner,
++		.ops = &drm_gem_prime_dmabuf_ops,
++		.size = obj->size,
++		.flags = flags,
++		.priv = obj,
++	};
+ 
+ 	if (dev->driver->gem_prime_res_obj)
+ 		exp_info.resv = dev->driver->gem_prime_res_obj(obj);
+diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
+index 7882387f9bff..5fc8ebdf40b2 100644
+--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
++++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
+@@ -330,6 +330,7 @@ static int fsl_dcu_drm_probe(struct platform_device *pdev)
+ 	const char *pix_clk_in_name;
+ 	const struct of_device_id *id;
+ 	int ret;
++	u8 div_ratio_shift = 0;
+ 
+ 	fsl_dev = devm_kzalloc(dev, sizeof(*fsl_dev), GFP_KERNEL);
+ 	if (!fsl_dev)
+@@ -382,11 +383,14 @@ static int fsl_dcu_drm_probe(struct platform_device *pdev)
+ 		pix_clk_in = fsl_dev->clk;
+ 	}
+ 
++	if (of_property_read_bool(dev->of_node, "big-endian"))
++		div_ratio_shift = 24;
++
+ 	pix_clk_in_name = __clk_get_name(pix_clk_in);
+ 	snprintf(pix_clk_name, sizeof(pix_clk_name), "%s_pix", pix_clk_in_name);
+ 	fsl_dev->pix_clk = clk_register_divider(dev, pix_clk_name,
+ 			pix_clk_in_name, 0, base + DCU_DIV_RATIO,
+-			0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL);
++			div_ratio_shift, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL);
+ 	if (IS_ERR(fsl_dev->pix_clk)) {
+ 		dev_err(dev, "failed to register pix clk\n");
+ 		ret = PTR_ERR(fsl_dev->pix_clk);
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index f68c78918d63..84a00105871d 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -631,6 +631,8 @@ struct drm_i915_display_funcs {
+ 				  struct intel_crtc_state *crtc_state);
+ 	void (*crtc_enable)(struct drm_crtc *crtc);
+ 	void (*crtc_disable)(struct drm_crtc *crtc);
++	void (*update_crtcs)(struct drm_atomic_state *state,
++			     unsigned int *crtc_vblank_mask);
+ 	void (*audio_codec_enable)(struct drm_connector *connector,
+ 				   struct intel_encoder *encoder,
+ 				   const struct drm_display_mode *adjusted_mode);
+@@ -1965,11 +1967,11 @@ struct drm_i915_private {
+ 	struct vlv_s0ix_state vlv_s0ix_state;
+ 
+ 	enum {
+-		I915_SKL_SAGV_UNKNOWN = 0,
+-		I915_SKL_SAGV_DISABLED,
+-		I915_SKL_SAGV_ENABLED,
+-		I915_SKL_SAGV_NOT_CONTROLLED
+-	} skl_sagv_status;
++		I915_SAGV_UNKNOWN = 0,
++		I915_SAGV_DISABLED,
++		I915_SAGV_ENABLED,
++		I915_SAGV_NOT_CONTROLLED
++	} sagv_status;
+ 
+ 	struct {
+ 		/*
+@@ -2280,21 +2282,19 @@ struct drm_i915_gem_object {
+ 	/** Record of address bit 17 of each page at last unbind. */
+ 	unsigned long *bit_17;
+ 
+-	union {
+-		/** for phy allocated objects */
+-		struct drm_dma_handle *phys_handle;
+-
+-		struct i915_gem_userptr {
+-			uintptr_t ptr;
+-			unsigned read_only :1;
+-			unsigned workers :4;
++	struct i915_gem_userptr {
++		uintptr_t ptr;
++		unsigned read_only :1;
++		unsigned workers :4;
+ #define I915_GEM_USERPTR_MAX_WORKERS 15
+ 
+-			struct i915_mm_struct *mm;
+-			struct i915_mmu_object *mmu_object;
+-			struct work_struct *work;
+-		} userptr;
+-	};
++		struct i915_mm_struct *mm;
++		struct i915_mmu_object *mmu_object;
++		struct work_struct *work;
++	} userptr;
++
++	/** for phys allocated objects */
++	struct drm_dma_handle *phys_handle;
+ };
+ #define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)
+ 
+diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
+index 66be299a1486..2bb69f3c5b84 100644
+--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
++++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
+@@ -115,17 +115,28 @@ static unsigned long i915_stolen_to_physical(struct drm_device *dev)
+ 
+ 		base = bsm & INTEL_BSM_MASK;
+ 	} else if (IS_I865G(dev)) {
++		u32 tseg_size = 0;
+ 		u16 toud = 0;
++		u8 tmp;
++
++		pci_bus_read_config_byte(dev->pdev->bus, PCI_DEVFN(0, 0),
++					 I845_ESMRAMC, &tmp);
++
++		if (tmp & TSEG_ENABLE) {
++			switch (tmp & I845_TSEG_SIZE_MASK) {
++			case I845_TSEG_SIZE_512K:
++				tseg_size = KB(512);
++				break;
++			case I845_TSEG_SIZE_1M:
++				tseg_size = MB(1);
++				break;
++			}
++		}
+ 
+-		/*
+-		 * FIXME is the graphics stolen memory region
+-		 * always at TOUD? Ie. is it always the last
+-		 * one to be allocated by the BIOS?
+-		 */
+ 		pci_bus_read_config_word(dev->pdev->bus, PCI_DEVFN(0, 0),
+ 					 I865_TOUD, &toud);
+ 
+-		base = toud << 16;
++		base = (toud << 16) + tseg_size;
+ 	} else if (IS_I85X(dev)) {
+ 		u32 tseg_size = 0;
+ 		u32 tom;
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 175595fc3e45..e9a64fba6333 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -2980,6 +2980,7 @@ static void skylake_update_primary_plane(struct drm_plane *plane,
+ 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
+ 	struct drm_framebuffer *fb = plane_state->base.fb;
+ 	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
++	const struct skl_wm_values *wm = &dev_priv->wm.skl_results;
+ 	int pipe = intel_crtc->pipe;
+ 	u32 plane_ctl, stride_div, stride;
+ 	u32 tile_height, plane_offset, plane_size;
+@@ -3031,6 +3032,9 @@ static void skylake_update_primary_plane(struct drm_plane *plane,
+ 	intel_crtc->adjusted_x = x_offset;
+ 	intel_crtc->adjusted_y = y_offset;
+ 
++	if (wm->dirty_pipes & drm_crtc_mask(&intel_crtc->base))
++		skl_write_plane_wm(intel_crtc, wm, 0);
++
+ 	I915_WRITE(PLANE_CTL(pipe, 0), plane_ctl);
+ 	I915_WRITE(PLANE_OFFSET(pipe, 0), plane_offset);
+ 	I915_WRITE(PLANE_SIZE(pipe, 0), plane_size);
+@@ -3061,7 +3065,15 @@ static void skylake_disable_primary_plane(struct drm_plane *primary,
+ {
+ 	struct drm_device *dev = crtc->dev;
+ 	struct drm_i915_private *dev_priv = to_i915(dev);
+-	int pipe = to_intel_crtc(crtc)->pipe;
++	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
++	int pipe = intel_crtc->pipe;
++
++	/*
++	 * We only populate skl_results on watermark updates, and if the
++	 * plane's visibility isn't actually changing neither are its watermarks.
++	 */
++	if (!to_intel_plane_state(crtc->primary->state)->visible)
++		skl_write_plane_wm(intel_crtc, &dev_priv->wm.skl_results, 0);
+ 
+ 	I915_WRITE(PLANE_CTL(pipe, 0), 0);
+ 	I915_WRITE(PLANE_SURF(pipe, 0), 0);
+@@ -8995,6 +9007,24 @@ static void ironlake_compute_dpll(struct intel_crtc *intel_crtc,
+ 	if (intel_crtc_has_dp_encoder(crtc_state))
+ 		dpll |= DPLL_SDVO_HIGH_SPEED;
+ 
++	/*
++	 * The high speed IO clock is only really required for
++	 * SDVO/HDMI/DP, but we also enable it for CRT to make it
++	 * possible to share the DPLL between CRT and HDMI. Enabling
++	 * the clock needlessly does no real harm, except use up a
++	 * bit of power potentially.
++	 *
++	 * We'll limit this to IVB with 3 pipes, since it has only two
++	 * DPLLs and so DPLL sharing is the only way to get three pipes
++	 * driving PCH ports at the same time. On SNB we could do this,
++	 * and potentially avoid enabling the second DPLL, but it's not
++	 * clear if it's a win or loss power wise. No point in doing
++	 * this on ILK at all since it has a fixed DPLL<->pipe mapping.
++	 */
++	if (INTEL_INFO(dev_priv)->num_pipes == 3 &&
++	    intel_crtc_has_type(crtc_state, INTEL_OUTPUT_ANALOG))
++		dpll |= DPLL_SDVO_HIGH_SPEED;
++
+ 	/* compute bitmask from p1 value */
+ 	dpll |= (1 << (crtc_state->dpll.p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
+ 	/* also FPA1 */
+@@ -10306,9 +10336,13 @@ static void i9xx_update_cursor(struct drm_crtc *crtc, u32 base,
+ 	struct drm_device *dev = crtc->dev;
+ 	struct drm_i915_private *dev_priv = to_i915(dev);
+ 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
++	const struct skl_wm_values *wm = &dev_priv->wm.skl_results;
+ 	int pipe = intel_crtc->pipe;
+ 	uint32_t cntl = 0;
+ 
++	if (INTEL_GEN(dev_priv) >= 9 && wm->dirty_pipes & drm_crtc_mask(crtc))
++		skl_write_cursor_wm(intel_crtc, wm);
++
+ 	if (plane_state && plane_state->visible) {
+ 		cntl = MCURSOR_GAMMA_ENABLE;
+ 		switch (plane_state->base.crtc_w) {
+@@ -12956,16 +12990,23 @@ static void verify_wm_state(struct drm_crtc *crtc,
+ 			  hw_entry->start, hw_entry->end);
+ 	}
+ 
+-	/* cursor */
+-	hw_entry = &hw_ddb.plane[pipe][PLANE_CURSOR];
+-	sw_entry = &sw_ddb->plane[pipe][PLANE_CURSOR];
+-
+-	if (!skl_ddb_entry_equal(hw_entry, sw_entry)) {
+-		DRM_ERROR("mismatch in DDB state pipe %c cursor "
+-			  "(expected (%u,%u), found (%u,%u))\n",
+-			  pipe_name(pipe),
+-			  sw_entry->start, sw_entry->end,
+-			  hw_entry->start, hw_entry->end);
++	/*
++	 * cursor
++	 * If the cursor plane isn't active, we may not have updated its ddb
++	 * allocation. In that case since the ddb allocation will be updated
++	 * once the plane becomes visible, we can skip this check
++	 */
++	if (intel_crtc->cursor_addr) {
++		hw_entry = &hw_ddb.plane[pipe][PLANE_CURSOR];
++		sw_entry = &sw_ddb->plane[pipe][PLANE_CURSOR];
++
++		if (!skl_ddb_entry_equal(hw_entry, sw_entry)) {
++			DRM_ERROR("mismatch in DDB state pipe %c cursor "
++				  "(expected (%u,%u), found (%u,%u))\n",
++				  pipe_name(pipe),
++				  sw_entry->start, sw_entry->end,
++				  hw_entry->start, hw_entry->end);
++		}
+ 	}
+ }
+ 
+@@ -13671,6 +13712,111 @@ static bool needs_vblank_wait(struct intel_crtc_state *crtc_state)
+ 	return false;
+ }
+ 
++static void intel_update_crtc(struct drm_crtc *crtc,
++			      struct drm_atomic_state *state,
++			      struct drm_crtc_state *old_crtc_state,
++			      unsigned int *crtc_vblank_mask)
++{
++	struct drm_device *dev = crtc->dev;
++	struct drm_i915_private *dev_priv = to_i915(dev);
++	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
++	struct intel_crtc_state *pipe_config = to_intel_crtc_state(crtc->state);
++	bool modeset = needs_modeset(crtc->state);
++
++	if (modeset) {
++		update_scanline_offset(intel_crtc);
++		dev_priv->display.crtc_enable(crtc);
++	} else {
++		intel_pre_plane_update(to_intel_crtc_state(old_crtc_state));
++	}
++
++	if (drm_atomic_get_existing_plane_state(state, crtc->primary)) {
++		intel_fbc_enable(
++		    intel_crtc, pipe_config,
++		    to_intel_plane_state(crtc->primary->state));
++	}
++
++	drm_atomic_helper_commit_planes_on_crtc(old_crtc_state);
++
++	if (needs_vblank_wait(pipe_config))
++		*crtc_vblank_mask |= drm_crtc_mask(crtc);
++}
++
++static void intel_update_crtcs(struct drm_atomic_state *state,
++			       unsigned int *crtc_vblank_mask)
++{
++	struct drm_crtc *crtc;
++	struct drm_crtc_state *old_crtc_state;
++	int i;
++
++	for_each_crtc_in_state(state, crtc, old_crtc_state, i) {
++		if (!crtc->state->active)
++			continue;
++
++		intel_update_crtc(crtc, state, old_crtc_state,
++				  crtc_vblank_mask);
++	}
++}
++
++static void skl_update_crtcs(struct drm_atomic_state *state,
++			     unsigned int *crtc_vblank_mask)
++{
++	struct drm_device *dev = state->dev;
++	struct drm_i915_private *dev_priv = to_i915(dev);
++	struct intel_atomic_state *intel_state = to_intel_atomic_state(state);
++	struct drm_crtc *crtc;
++	struct drm_crtc_state *old_crtc_state;
++	struct skl_ddb_allocation *new_ddb = &intel_state->wm_results.ddb;
++	struct skl_ddb_allocation *cur_ddb = &dev_priv->wm.skl_hw.ddb;
++	unsigned int updated = 0;
++	bool progress;
++	enum pipe pipe;
++
++	/*
++	 * Whenever the number of active pipes changes, we need to make sure we
++	 * update the pipes in the right order so that their ddb allocations
++	 * never overlap with each other in between CRTC updates. Otherwise we'll
++	 * cause pipe underruns and other bad stuff.
++	 */
++	do {
++		int i;
++		progress = false;
++
++		for_each_crtc_in_state(state, crtc, old_crtc_state, i) {
++			bool vbl_wait = false;
++			unsigned int cmask = drm_crtc_mask(crtc);
++			pipe = to_intel_crtc(crtc)->pipe;
++
++			if (updated & cmask || !crtc->state->active)
++				continue;
++			if (skl_ddb_allocation_overlaps(state, cur_ddb, new_ddb,
++							pipe))
++				continue;
++
++			updated |= cmask;
++
++			/*
++			 * If this is an already active pipe, its DDB changed,
++			 * and this isn't the last pipe that needs updating
++			 * then we need to wait for a vblank to pass for the
++			 * new ddb allocation to take effect.
++			 */
++			if (!skl_ddb_allocation_equals(cur_ddb, new_ddb, pipe) &&
++			    !crtc->state->active_changed &&
++			    intel_state->wm_results.dirty_pipes != updated)
++				vbl_wait = true;
++
++			intel_update_crtc(crtc, state, old_crtc_state,
++					  crtc_vblank_mask);
++
++			if (vbl_wait)
++				intel_wait_for_vblank(dev, pipe);
++
++			progress = true;
++		}
++	} while (progress);
++}
++
+ static void intel_atomic_commit_tail(struct drm_atomic_state *state)
+ {
+ 	struct drm_device *dev = state->dev;
+@@ -13763,23 +13909,15 @@ static void intel_atomic_commit_tail(struct drm_atomic_state *state)
+ 		 * SKL workaround: bspec recommends we disable the SAGV when we
+ 		 * have more then one pipe enabled
+ 		 */
+-		if (IS_SKYLAKE(dev_priv) && !skl_can_enable_sagv(state))
+-			skl_disable_sagv(dev_priv);
++		if (!intel_can_enable_sagv(state))
++			intel_disable_sagv(dev_priv);
+ 
+ 		intel_modeset_verify_disabled(dev);
+ 	}
+ 
+-	/* Now enable the clocks, plane, pipe, and connectors that we set up. */
++	/* Complete the events for pipes that have now been disabled */
+ 	for_each_crtc_in_state(state, crtc, old_crtc_state, i) {
+-		struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ 		bool modeset = needs_modeset(crtc->state);
+-		struct intel_crtc_state *pipe_config =
+-			to_intel_crtc_state(crtc->state);
+-
+-		if (modeset && crtc->state->active) {
+-			update_scanline_offset(to_intel_crtc(crtc));
+-			dev_priv->display.crtc_enable(crtc);
+-		}
+ 
+ 		/* Complete events for now disable pipes here. */
+ 		if (modeset && !crtc->state->active && crtc->state->event) {
+@@ -13789,21 +13927,11 @@ static void intel_atomic_commit_tail(struct drm_atomic_state *state)
+ 
+ 			crtc->state->event = NULL;
+ 		}
+-
+-		if (!modeset)
+-			intel_pre_plane_update(to_intel_crtc_state(old_crtc_state));
+-
+-		if (crtc->state->active &&
+-		    drm_atomic_get_existing_plane_state(state, crtc->primary))
+-			intel_fbc_enable(intel_crtc, pipe_config, to_intel_plane_state(crtc->primary->state));
+-
+-		if (crtc->state->active)
+-			drm_atomic_helper_commit_planes_on_crtc(old_crtc_state);
+-
+-		if (pipe_config->base.active && needs_vblank_wait(pipe_config))
+-			crtc_vblank_mask |= 1 << i;
+ 	}
+ 
++	/* Now enable the clocks, plane, pipe, and connectors that we set up. */
++	dev_priv->display.update_crtcs(state, &crtc_vblank_mask);
++
+ 	/* FIXME: We should call drm_atomic_helper_commit_hw_done() here
+ 	 * already, but still need the state for the delayed optimization. To
+ 	 * fix this:
+@@ -13839,9 +13967,8 @@ static void intel_atomic_commit_tail(struct drm_atomic_state *state)
+ 		intel_modeset_verify_crtc(crtc, old_crtc_state, crtc->state);
+ 	}
+ 
+-	if (IS_SKYLAKE(dev_priv) && intel_state->modeset &&
+-	    skl_can_enable_sagv(state))
+-		skl_enable_sagv(dev_priv);
++	if (intel_state->modeset && intel_can_enable_sagv(state))
++		intel_enable_sagv(dev_priv);
+ 
+ 	drm_atomic_helper_commit_hw_done(state);
+ 
+@@ -14221,10 +14348,12 @@ static void intel_begin_crtc_commit(struct drm_crtc *crtc,
+ 				    struct drm_crtc_state *old_crtc_state)
+ {
+ 	struct drm_device *dev = crtc->dev;
++	struct drm_i915_private *dev_priv = to_i915(dev);
+ 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ 	struct intel_crtc_state *old_intel_state =
+ 		to_intel_crtc_state(old_crtc_state);
+ 	bool modeset = needs_modeset(crtc->state);
++	enum pipe pipe = intel_crtc->pipe;
+ 
+ 	/* Perform vblank evasion around commit operation */
+ 	intel_pipe_update_start(intel_crtc);
+@@ -14239,8 +14368,12 @@ static void intel_begin_crtc_commit(struct drm_crtc *crtc,
+ 
+ 	if (to_intel_crtc_state(crtc->state)->update_pipe)
+ 		intel_update_pipe_config(intel_crtc, old_intel_state);
+-	else if (INTEL_INFO(dev)->gen >= 9)
++	else if (INTEL_GEN(dev_priv) >= 9) {
+ 		skl_detach_scalers(intel_crtc);
++
++		I915_WRITE(PIPE_WM_LINETIME(pipe),
++			   dev_priv->wm.skl_hw.wm_linetime[pipe]);
++	}
+ }
+ 
+ static void intel_finish_crtc_commit(struct drm_crtc *crtc,
+@@ -15347,6 +15480,11 @@ void intel_init_display_hooks(struct drm_i915_private *dev_priv)
+ 			skl_modeset_calc_cdclk;
+ 	}
+ 
++	if (dev_priv->info.gen >= 9)
++		dev_priv->display.update_crtcs = skl_update_crtcs;
++	else
++		dev_priv->display.update_crtcs = intel_update_crtcs;
++
+ 	switch (INTEL_INFO(dev_priv)->gen) {
+ 	case 2:
+ 		dev_priv->display.queue_flip = intel_gen2_queue_flip;
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 21b04c3eda41..1ca155f4d368 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -4148,7 +4148,7 @@ static bool bxt_digital_port_connected(struct drm_i915_private *dev_priv,
+  *
+  * Return %true if @port is connected, %false otherwise.
+  */
+-bool intel_digital_port_connected(struct drm_i915_private *dev_priv,
++static bool intel_digital_port_connected(struct drm_i915_private *dev_priv,
+ 					 struct intel_digital_port *port)
+ {
+ 	if (HAS_PCH_IBX(dev_priv))
+@@ -4207,7 +4207,7 @@ intel_dp_unset_edid(struct intel_dp *intel_dp)
+ 	intel_dp->has_audio = false;
+ }
+ 
+-static void
++static enum drm_connector_status
+ intel_dp_long_pulse(struct intel_connector *intel_connector)
+ {
+ 	struct drm_connector *connector = &intel_connector->base;
+@@ -4232,7 +4232,7 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
+ 	else
+ 		status = connector_status_disconnected;
+ 
+-	if (status != connector_status_connected) {
++	if (status == connector_status_disconnected) {
+ 		intel_dp->compliance_test_active = 0;
+ 		intel_dp->compliance_test_type = 0;
+ 		intel_dp->compliance_test_data = 0;
+@@ -4284,8 +4284,8 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
+ 	intel_dp->aux.i2c_defer_count = 0;
+ 
+ 	intel_dp_set_edid(intel_dp);
+-
+-	status = connector_status_connected;
++	if (is_edp(intel_dp) || intel_connector->detect_edid)
++		status = connector_status_connected;
+ 	intel_dp->detect_done = true;
+ 
+ 	/* Try to read the source of the interrupt */
+@@ -4303,12 +4303,11 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
+ 	}
+ 
+ out:
+-	if ((status != connector_status_connected) &&
+-	    (intel_dp->is_mst == false))
++	if (status != connector_status_connected && !intel_dp->is_mst)
+ 		intel_dp_unset_edid(intel_dp);
+ 
+ 	intel_display_power_put(to_i915(dev), power_domain);
+-	return;
++	return status;
+ }
+ 
+ static enum drm_connector_status
+@@ -4317,7 +4316,7 @@ intel_dp_detect(struct drm_connector *connector, bool force)
+ 	struct intel_dp *intel_dp = intel_attached_dp(connector);
+ 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+ 	struct intel_encoder *intel_encoder = &intel_dig_port->base;
+-	struct intel_connector *intel_connector = to_intel_connector(connector);
++	enum drm_connector_status status = connector->status;
+ 
+ 	DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
+ 		      connector->base.id, connector->name);
+@@ -4332,14 +4331,11 @@ intel_dp_detect(struct drm_connector *connector, bool force)
+ 
+ 	/* If full detect is not performed yet, do a full detect */
+ 	if (!intel_dp->detect_done)
+-		intel_dp_long_pulse(intel_dp->attached_connector);
++		status = intel_dp_long_pulse(intel_dp->attached_connector);
+ 
+ 	intel_dp->detect_done = false;
+ 
+-	if (is_edp(intel_dp) || intel_connector->detect_edid)
+-		return connector_status_connected;
+-	else
+-		return connector_status_disconnected;
++	return status;
+ }
+ 
+ static void
+@@ -4696,36 +4692,34 @@ intel_dp_hpd_pulse(struct intel_digital_port *intel_dig_port, bool long_hpd)
+ 		      port_name(intel_dig_port->port),
+ 		      long_hpd ? "long" : "short");
+ 
++	if (long_hpd) {
++		intel_dp->detect_done = false;
++		return IRQ_NONE;
++	}
++
+ 	power_domain = intel_display_port_aux_power_domain(intel_encoder);
+ 	intel_display_power_get(dev_priv, power_domain);
+ 
+-	if (long_hpd) {
+-		intel_dp_long_pulse(intel_dp->attached_connector);
+-		if (intel_dp->is_mst)
+-			ret = IRQ_HANDLED;
+-		goto put_power;
+-
+-	} else {
+-		if (intel_dp->is_mst) {
+-			if (intel_dp_check_mst_status(intel_dp) == -EINVAL) {
+-				/*
+-				 * If we were in MST mode, and device is not
+-				 * there, get out of MST mode
+-				 */
+-				DRM_DEBUG_KMS("MST device may have disappeared %d vs %d\n",
+-					      intel_dp->is_mst, intel_dp->mst_mgr.mst_state);
+-				intel_dp->is_mst = false;
+-				drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr,
+-								intel_dp->is_mst);
+-				goto put_power;
+-			}
++	if (intel_dp->is_mst) {
++		if (intel_dp_check_mst_status(intel_dp) == -EINVAL) {
++			/*
++			 * If we were in MST mode, and device is not
++			 * there, get out of MST mode
++			 */
++			DRM_DEBUG_KMS("MST device may have disappeared %d vs %d\n",
++				      intel_dp->is_mst, intel_dp->mst_mgr.mst_state);
++			intel_dp->is_mst = false;
++			drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr,
++							intel_dp->is_mst);
++			intel_dp->detect_done = false;
++			goto put_power;
+ 		}
++	}
+ 
+-		if (!intel_dp->is_mst) {
+-			if (!intel_dp_short_pulse(intel_dp)) {
+-				intel_dp_long_pulse(intel_dp->attached_connector);
+-				goto put_power;
+-			}
++	if (!intel_dp->is_mst) {
++		if (!intel_dp_short_pulse(intel_dp)) {
++			intel_dp->detect_done = false;
++			goto put_power;
+ 		}
+ 	}
+ 
+diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
+index ff399b9a5c1f..9a58800cba3b 100644
+--- a/drivers/gpu/drm/i915/intel_drv.h
++++ b/drivers/gpu/drm/i915/intel_drv.h
+@@ -236,6 +236,7 @@ struct intel_panel {
+ 		bool enabled;
+ 		bool combination_mode;	/* gen 2/4 only */
+ 		bool active_low_pwm;
++		bool alternate_pwm_increment;	/* lpt+ */
+ 
+ 		/* PWM chip */
+ 		bool util_pin_active_low;	/* bxt+ */
+@@ -1387,8 +1388,6 @@ void intel_edp_drrs_disable(struct intel_dp *intel_dp);
+ void intel_edp_drrs_invalidate(struct drm_device *dev,
+ 		unsigned frontbuffer_bits);
+ void intel_edp_drrs_flush(struct drm_device *dev, unsigned frontbuffer_bits);
+-bool intel_digital_port_connected(struct drm_i915_private *dev_priv,
+-					 struct intel_digital_port *port);
+ 
+ void
+ intel_dp_program_link_training_pattern(struct intel_dp *intel_dp,
+@@ -1716,9 +1715,21 @@ void ilk_wm_get_hw_state(struct drm_device *dev);
+ void skl_wm_get_hw_state(struct drm_device *dev);
+ void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv,
+ 			  struct skl_ddb_allocation *ddb /* out */);
+-bool skl_can_enable_sagv(struct drm_atomic_state *state);
+-int skl_enable_sagv(struct drm_i915_private *dev_priv);
+-int skl_disable_sagv(struct drm_i915_private *dev_priv);
++bool intel_can_enable_sagv(struct drm_atomic_state *state);
++int intel_enable_sagv(struct drm_i915_private *dev_priv);
++int intel_disable_sagv(struct drm_i915_private *dev_priv);
++bool skl_ddb_allocation_equals(const struct skl_ddb_allocation *old,
++			       const struct skl_ddb_allocation *new,
++			       enum pipe pipe);
++bool skl_ddb_allocation_overlaps(struct drm_atomic_state *state,
++				 const struct skl_ddb_allocation *old,
++				 const struct skl_ddb_allocation *new,
++				 enum pipe pipe);
++void skl_write_cursor_wm(struct intel_crtc *intel_crtc,
++			 const struct skl_wm_values *wm);
++void skl_write_plane_wm(struct intel_crtc *intel_crtc,
++			const struct skl_wm_values *wm,
++			int plane);
+ uint32_t ilk_pipe_pixel_rate(const struct intel_crtc_state *pipe_config);
+ bool ilk_disable_lp_wm(struct drm_device *dev);
+ int sanitize_rc6_option(struct drm_i915_private *dev_priv, int enable_rc6);
+diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
+index 4df9f384910c..c3aa9e670d15 100644
+--- a/drivers/gpu/drm/i915/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/intel_hdmi.c
+@@ -1422,24 +1422,22 @@ intel_hdmi_dp_dual_mode_detect(struct drm_connector *connector, bool has_edid)
+ }
+ 
+ static bool
+-intel_hdmi_set_edid(struct drm_connector *connector, bool force)
++intel_hdmi_set_edid(struct drm_connector *connector)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(connector->dev);
+ 	struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector);
+-	struct edid *edid = NULL;
++	struct edid *edid;
+ 	bool connected = false;
+ 
+-	if (force) {
+-		intel_display_power_get(dev_priv, POWER_DOMAIN_GMBUS);
++	intel_display_power_get(dev_priv, POWER_DOMAIN_GMBUS);
+ 
+-		edid = drm_get_edid(connector,
+-				    intel_gmbus_get_adapter(dev_priv,
+-				    intel_hdmi->ddc_bus));
++	edid = drm_get_edid(connector,
++			    intel_gmbus_get_adapter(dev_priv,
++			    intel_hdmi->ddc_bus));
+ 
+-		intel_hdmi_dp_dual_mode_detect(connector, edid != NULL);
++	intel_hdmi_dp_dual_mode_detect(connector, edid != NULL);
+ 
+-		intel_display_power_put(dev_priv, POWER_DOMAIN_GMBUS);
+-	}
++	intel_display_power_put(dev_priv, POWER_DOMAIN_GMBUS);
+ 
+ 	to_intel_connector(connector)->detect_edid = edid;
+ 	if (edid && edid->input & DRM_EDID_INPUT_DIGITAL) {
+@@ -1465,37 +1463,16 @@ static enum drm_connector_status
+ intel_hdmi_detect(struct drm_connector *connector, bool force)
+ {
+ 	enum drm_connector_status status;
+-	struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector);
+ 	struct drm_i915_private *dev_priv = to_i915(connector->dev);
+-	bool live_status = false;
+-	unsigned int try;
+ 
+ 	DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
+ 		      connector->base.id, connector->name);
+ 
+ 	intel_display_power_get(dev_priv, POWER_DOMAIN_GMBUS);
+ 
+-	for (try = 0; !live_status && try < 9; try++) {
+-		if (try)
+-			msleep(10);
+-		live_status = intel_digital_port_connected(dev_priv,
+-				hdmi_to_dig_port(intel_hdmi));
+-	}
+-
+-	if (!live_status) {
+-		DRM_DEBUG_KMS("HDMI live status down\n");
+-		/*
+-		 * Live status register is not reliable on all intel platforms.
+-		 * So consider live_status only for certain platforms, for
+-		 * others, read EDID to determine presence of sink.
+-		 */
+-		if (INTEL_INFO(dev_priv)->gen < 7 || IS_IVYBRIDGE(dev_priv))
+-			live_status = true;
+-	}
+-
+ 	intel_hdmi_unset_edid(connector);
+ 
+-	if (intel_hdmi_set_edid(connector, live_status)) {
++	if (intel_hdmi_set_edid(connector)) {
+ 		struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector);
+ 
+ 		hdmi_to_dig_port(intel_hdmi)->base.type = INTEL_OUTPUT_HDMI;
+@@ -1521,7 +1498,7 @@ intel_hdmi_force(struct drm_connector *connector)
+ 	if (connector->status != connector_status_connected)
+ 		return;
+ 
+-	intel_hdmi_set_edid(connector, true);
++	intel_hdmi_set_edid(connector);
+ 	hdmi_to_dig_port(intel_hdmi)->base.type = INTEL_OUTPUT_HDMI;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c
+index 96c65d77e886..9a2393a6b277 100644
+--- a/drivers/gpu/drm/i915/intel_panel.c
++++ b/drivers/gpu/drm/i915/intel_panel.c
+@@ -841,7 +841,7 @@ static void lpt_enable_backlight(struct intel_connector *connector)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ 	struct intel_panel *panel = &connector->panel;
+-	u32 pch_ctl1, pch_ctl2;
++	u32 pch_ctl1, pch_ctl2, schicken;
+ 
+ 	pch_ctl1 = I915_READ(BLC_PWM_PCH_CTL1);
+ 	if (pch_ctl1 & BLM_PCH_PWM_ENABLE) {
+@@ -850,6 +850,22 @@ static void lpt_enable_backlight(struct intel_connector *connector)
+ 		I915_WRITE(BLC_PWM_PCH_CTL1, pch_ctl1);
+ 	}
+ 
++	if (HAS_PCH_LPT(dev_priv)) {
++		schicken = I915_READ(SOUTH_CHICKEN2);
++		if (panel->backlight.alternate_pwm_increment)
++			schicken |= LPT_PWM_GRANULARITY;
++		else
++			schicken &= ~LPT_PWM_GRANULARITY;
++		I915_WRITE(SOUTH_CHICKEN2, schicken);
++	} else {
++		schicken = I915_READ(SOUTH_CHICKEN1);
++		if (panel->backlight.alternate_pwm_increment)
++			schicken |= SPT_PWM_GRANULARITY;
++		else
++			schicken &= ~SPT_PWM_GRANULARITY;
++		I915_WRITE(SOUTH_CHICKEN1, schicken);
++	}
++
+ 	pch_ctl2 = panel->backlight.max << 16;
+ 	I915_WRITE(BLC_PWM_PCH_CTL2, pch_ctl2);
+ 
+@@ -1242,10 +1258,10 @@ static u32 bxt_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+  */
+ static u32 spt_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+ {
+-	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
++	struct intel_panel *panel = &connector->panel;
+ 	u32 mul;
+ 
+-	if (I915_READ(SOUTH_CHICKEN1) & SPT_PWM_GRANULARITY)
++	if (panel->backlight.alternate_pwm_increment)
+ 		mul = 128;
+ 	else
+ 		mul = 16;
+@@ -1261,9 +1277,10 @@ static u32 spt_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+ static u32 lpt_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
++	struct intel_panel *panel = &connector->panel;
+ 	u32 mul, clock;
+ 
+-	if (I915_READ(SOUTH_CHICKEN2) & LPT_PWM_GRANULARITY)
++	if (panel->backlight.alternate_pwm_increment)
+ 		mul = 16;
+ 	else
+ 		mul = 128;
+@@ -1414,6 +1431,13 @@ static int lpt_setup_backlight(struct intel_connector *connector, enum pipe unus
+ 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ 	struct intel_panel *panel = &connector->panel;
+ 	u32 pch_ctl1, pch_ctl2, val;
++	bool alt;
++
++	if (HAS_PCH_LPT(dev_priv))
++		alt = I915_READ(SOUTH_CHICKEN2) & LPT_PWM_GRANULARITY;
++	else
++		alt = I915_READ(SOUTH_CHICKEN1) & SPT_PWM_GRANULARITY;
++	panel->backlight.alternate_pwm_increment = alt;
+ 
+ 	pch_ctl1 = I915_READ(BLC_PWM_PCH_CTL1);
+ 	panel->backlight.active_low_pwm = pch_ctl1 & BLM_PCH_POLARITY;
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 2d2481392824..e59a28cb3158 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -2119,32 +2119,34 @@ static void intel_read_wm_latency(struct drm_device *dev, uint16_t wm[8])
+ 				GEN9_MEM_LATENCY_LEVEL_MASK;
+ 
+ 		/*
++		 * If a level n (n > 1) has a 0us latency, all levels m (m >= n)
++		 * need to be disabled. We make sure to sanitize the values out
++		 * of the punit to satisfy this requirement.
++		 */
++		for (level = 1; level <= max_level; level++) {
++			if (wm[level] == 0) {
++				for (i = level + 1; i <= max_level; i++)
++					wm[i] = 0;
++				break;
++			}
++		}
++
++		/*
+ 		 * WaWmMemoryReadLatency:skl
+ 		 *
+ 		 * punit doesn't take into account the read latency so we need
+-		 * to add 2us to the various latency levels we retrieve from
+-		 * the punit.
+-		 *   - W0 is a bit special in that it's the only level that
+-		 *   can't be disabled if we want to have display working, so
+-		 *   we always add 2us there.
+-		 *   - For levels >=1, punit returns 0us latency when they are
+-		 *   disabled, so we respect that and don't add 2us then
+-		 *
+-		 * Additionally, if a level n (n > 1) has a 0us latency, all
+-		 * levels m (m >= n) need to be disabled. We make sure to
+-		 * sanitize the values out of the punit to satisfy this
+-		 * requirement.
++		 * to add 2us to the various latency levels we retrieve from the
++		 * punit when level 0 response data is 0us.
+ 		 */
+-		wm[0] += 2;
+-		for (level = 1; level <= max_level; level++)
+-			if (wm[level] != 0)
++		if (wm[0] == 0) {
++			wm[0] += 2;
++			for (level = 1; level <= max_level; level++) {
++				if (wm[level] == 0)
++					break;
+ 				wm[level] += 2;
+-			else {
+-				for (i = level + 1; i <= max_level; i++)
+-					wm[i] = 0;
+-
+-				break;
+ 			}
++		}
++
+ 	} else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
+ 		uint64_t sskpd = I915_READ64(MCH_SSKPD);
+ 
+@@ -2876,6 +2878,19 @@ skl_wm_plane_id(const struct intel_plane *plane)
+ 	}
+ }
+ 
++static bool
++intel_has_sagv(struct drm_i915_private *dev_priv)
++{
++	if (IS_KABYLAKE(dev_priv))
++		return true;
++
++	if (IS_SKYLAKE(dev_priv) &&
++	    dev_priv->sagv_status != I915_SAGV_NOT_CONTROLLED)
++		return true;
++
++	return false;
++}
++
+ /*
+  * SAGV dynamically adjusts the system agent voltage and clock frequencies
+  * depending on power and performance requirements. The display engine access
+@@ -2888,12 +2903,14 @@ skl_wm_plane_id(const struct intel_plane *plane)
+  *  - We're not using an interlaced display configuration
+  */
+ int
+-skl_enable_sagv(struct drm_i915_private *dev_priv)
++intel_enable_sagv(struct drm_i915_private *dev_priv)
+ {
+ 	int ret;
+ 
+-	if (dev_priv->skl_sagv_status == I915_SKL_SAGV_NOT_CONTROLLED ||
+-	    dev_priv->skl_sagv_status == I915_SKL_SAGV_ENABLED)
++	if (!intel_has_sagv(dev_priv))
++		return 0;
++
++	if (dev_priv->sagv_status == I915_SAGV_ENABLED)
+ 		return 0;
+ 
+ 	DRM_DEBUG_KMS("Enabling the SAGV\n");
+@@ -2909,21 +2926,21 @@ skl_enable_sagv(struct drm_i915_private *dev_priv)
+ 	 * Some skl systems, pre-release machines in particular,
+ 	 * don't actually have an SAGV.
+ 	 */
+-	if (ret == -ENXIO) {
++	if (IS_SKYLAKE(dev_priv) && ret == -ENXIO) {
+ 		DRM_DEBUG_DRIVER("No SAGV found on system, ignoring\n");
+-		dev_priv->skl_sagv_status = I915_SKL_SAGV_NOT_CONTROLLED;
++		dev_priv->sagv_status = I915_SAGV_NOT_CONTROLLED;
+ 		return 0;
+ 	} else if (ret < 0) {
+ 		DRM_ERROR("Failed to enable the SAGV\n");
+ 		return ret;
+ 	}
+ 
+-	dev_priv->skl_sagv_status = I915_SKL_SAGV_ENABLED;
++	dev_priv->sagv_status = I915_SAGV_ENABLED;
+ 	return 0;
+ }
+ 
+ static int
+-skl_do_sagv_disable(struct drm_i915_private *dev_priv)
++intel_do_sagv_disable(struct drm_i915_private *dev_priv)
+ {
+ 	int ret;
+ 	uint32_t temp = GEN9_SAGV_DISABLE;
+@@ -2937,19 +2954,21 @@ skl_do_sagv_disable(struct drm_i915_private *dev_priv)
+ }
+ 
+ int
+-skl_disable_sagv(struct drm_i915_private *dev_priv)
++intel_disable_sagv(struct drm_i915_private *dev_priv)
+ {
+ 	int ret, result;
+ 
+-	if (dev_priv->skl_sagv_status == I915_SKL_SAGV_NOT_CONTROLLED ||
+-	    dev_priv->skl_sagv_status == I915_SKL_SAGV_DISABLED)
++	if (!intel_has_sagv(dev_priv))
++		return 0;
++
++	if (dev_priv->sagv_status == I915_SAGV_DISABLED)
+ 		return 0;
+ 
+ 	DRM_DEBUG_KMS("Disabling the SAGV\n");
+ 	mutex_lock(&dev_priv->rps.hw_lock);
+ 
+ 	/* bspec says to keep retrying for at least 1 ms */
+-	ret = wait_for(result = skl_do_sagv_disable(dev_priv), 1);
++	ret = wait_for(result = intel_do_sagv_disable(dev_priv), 1);
+ 	mutex_unlock(&dev_priv->rps.hw_lock);
+ 
+ 	if (ret == -ETIMEDOUT) {
+@@ -2961,20 +2980,20 @@ skl_disable_sagv(struct drm_i915_private *dev_priv)
+ 	 * Some skl systems, pre-release machines in particular,
+ 	 * don't actually have an SAGV.
+ 	 */
+-	if (result == -ENXIO) {
++	if (IS_SKYLAKE(dev_priv) && result == -ENXIO) {
+ 		DRM_DEBUG_DRIVER("No SAGV found on system, ignoring\n");
+-		dev_priv->skl_sagv_status = I915_SKL_SAGV_NOT_CONTROLLED;
++		dev_priv->sagv_status = I915_SAGV_NOT_CONTROLLED;
+ 		return 0;
+ 	} else if (result < 0) {
+ 		DRM_ERROR("Failed to disable the SAGV\n");
+ 		return result;
+ 	}
+ 
+-	dev_priv->skl_sagv_status = I915_SKL_SAGV_DISABLED;
++	dev_priv->sagv_status = I915_SAGV_DISABLED;
+ 	return 0;
+ }
+ 
+-bool skl_can_enable_sagv(struct drm_atomic_state *state)
++bool intel_can_enable_sagv(struct drm_atomic_state *state)
+ {
+ 	struct drm_device *dev = state->dev;
+ 	struct drm_i915_private *dev_priv = to_i915(dev);
+@@ -2983,6 +3002,9 @@ bool skl_can_enable_sagv(struct drm_atomic_state *state)
+ 	enum pipe pipe;
+ 	int level, plane;
+ 
++	if (!intel_has_sagv(dev_priv))
++		return false;
++
+ 	/*
+ 	 * SKL workaround: bspec recommends we disable the SAGV when we have
+ 	 * more then one pipe enabled
+@@ -3473,29 +3495,14 @@ static uint32_t skl_wm_method1(uint32_t pixel_rate, uint8_t cpp, uint32_t latenc
+ }
+ 
+ static uint32_t skl_wm_method2(uint32_t pixel_rate, uint32_t pipe_htotal,
+-			       uint32_t horiz_pixels, uint8_t cpp,
+-			       uint64_t tiling, uint32_t latency)
++			       uint32_t latency, uint32_t plane_blocks_per_line)
+ {
+ 	uint32_t ret;
+-	uint32_t plane_bytes_per_line, plane_blocks_per_line;
+ 	uint32_t wm_intermediate_val;
+ 
+ 	if (latency == 0)
+ 		return UINT_MAX;
+ 
+-	plane_bytes_per_line = horiz_pixels * cpp;
+-
+-	if (tiling == I915_FORMAT_MOD_Y_TILED ||
+-	    tiling == I915_FORMAT_MOD_Yf_TILED) {
+-		plane_bytes_per_line *= 4;
+-		plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
+-		plane_blocks_per_line /= 4;
+-	} else if (tiling == DRM_FORMAT_MOD_NONE) {
+-		plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512) + 1;
+-	} else {
+-		plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
+-	}
+-
+ 	wm_intermediate_val = latency * pixel_rate;
+ 	ret = DIV_ROUND_UP(wm_intermediate_val, pipe_htotal * 1000) *
+ 				plane_blocks_per_line;
+@@ -3546,6 +3553,7 @@ static int skl_compute_plane_wm(const struct drm_i915_private *dev_priv,
+ 	uint8_t cpp;
+ 	uint32_t width = 0, height = 0;
+ 	uint32_t plane_pixel_rate;
++	uint32_t y_tile_minimum, y_min_scanlines;
+ 
+ 	if (latency == 0 || !cstate->base.active || !intel_pstate->visible) {
+ 		*enabled = false;
+@@ -3561,38 +3569,51 @@ static int skl_compute_plane_wm(const struct drm_i915_private *dev_priv,
+ 	cpp = drm_format_plane_cpp(fb->pixel_format, 0);
+ 	plane_pixel_rate = skl_adjusted_plane_pixel_rate(cstate, intel_pstate);
+ 
++	if (intel_rotation_90_or_270(pstate->rotation)) {
++		int cpp = (fb->pixel_format == DRM_FORMAT_NV12) ?
++			drm_format_plane_cpp(fb->pixel_format, 1) :
++			drm_format_plane_cpp(fb->pixel_format, 0);
++
++		switch (cpp) {
++		case 1:
++			y_min_scanlines = 16;
++			break;
++		case 2:
++			y_min_scanlines = 8;
++			break;
++		default:
++			WARN(1, "Unsupported pixel depth for rotation");
++		case 4:
++			y_min_scanlines = 4;
++			break;
++		}
++	} else {
++		y_min_scanlines = 4;
++	}
++
++	plane_bytes_per_line = width * cpp;
++	if (fb->modifier[0] == I915_FORMAT_MOD_Y_TILED ||
++	    fb->modifier[0] == I915_FORMAT_MOD_Yf_TILED) {
++		plane_blocks_per_line =
++		      DIV_ROUND_UP(plane_bytes_per_line * y_min_scanlines, 512);
++		plane_blocks_per_line /= y_min_scanlines;
++	} else if (fb->modifier[0] == DRM_FORMAT_MOD_NONE) {
++		plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512)
++					+ 1;
++	} else {
++		plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
++	}
++
+ 	method1 = skl_wm_method1(plane_pixel_rate, cpp, latency);
+ 	method2 = skl_wm_method2(plane_pixel_rate,
+ 				 cstate->base.adjusted_mode.crtc_htotal,
+-				 width,
+-				 cpp,
+-				 fb->modifier[0],
+-				 latency);
++				 latency,
++				 plane_blocks_per_line);
+ 
+-	plane_bytes_per_line = width * cpp;
+-	plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
++	y_tile_minimum = plane_blocks_per_line * y_min_scanlines;
+ 
+ 	if (fb->modifier[0] == I915_FORMAT_MOD_Y_TILED ||
+ 	    fb->modifier[0] == I915_FORMAT_MOD_Yf_TILED) {
+-		uint32_t min_scanlines = 4;
+-		uint32_t y_tile_minimum;
+-		if (intel_rotation_90_or_270(pstate->rotation)) {
+-			int cpp = (fb->pixel_format == DRM_FORMAT_NV12) ?
+-				drm_format_plane_cpp(fb->pixel_format, 1) :
+-				drm_format_plane_cpp(fb->pixel_format, 0);
+-
+-			switch (cpp) {
+-			case 1:
+-				min_scanlines = 16;
+-				break;
+-			case 2:
+-				min_scanlines = 8;
+-				break;
+-			case 8:
+-				WARN(1, "Unsupported pixel depth for rotation");
+-			}
+-		}
+-		y_tile_minimum = plane_blocks_per_line * min_scanlines;
+ 		selected_result = max(method2, y_tile_minimum);
+ 	} else {
+ 		if ((ddb_allocation / plane_blocks_per_line) >= 1)
+@@ -3606,10 +3627,12 @@ static int skl_compute_plane_wm(const struct drm_i915_private *dev_priv,
+ 
+ 	if (level >= 1 && level <= 7) {
+ 		if (fb->modifier[0] == I915_FORMAT_MOD_Y_TILED ||
+-		    fb->modifier[0] == I915_FORMAT_MOD_Yf_TILED)
+-			res_lines += 4;
+-		else
++		    fb->modifier[0] == I915_FORMAT_MOD_Yf_TILED) {
++			res_blocks += y_tile_minimum;
++			res_lines += y_min_scanlines;
++		} else {
+ 			res_blocks++;
++		}
+ 	}
+ 
+ 	if (res_blocks >= ddb_allocation || res_lines > 31) {
+@@ -3828,183 +3851,82 @@ static void skl_ddb_entry_write(struct drm_i915_private *dev_priv,
+ 		I915_WRITE(reg, 0);
+ }
+ 
+-static void skl_write_wm_values(struct drm_i915_private *dev_priv,
+-				const struct skl_wm_values *new)
++void skl_write_plane_wm(struct intel_crtc *intel_crtc,
++			const struct skl_wm_values *wm,
++			int plane)
+ {
+-	struct drm_device *dev = &dev_priv->drm;
+-	struct intel_crtc *crtc;
+-
+-	for_each_intel_crtc(dev, crtc) {
+-		int i, level, max_level = ilk_wm_max_level(dev);
+-		enum pipe pipe = crtc->pipe;
+-
+-		if ((new->dirty_pipes & drm_crtc_mask(&crtc->base)) == 0)
+-			continue;
+-		if (!crtc->active)
+-			continue;
+-
+-		I915_WRITE(PIPE_WM_LINETIME(pipe), new->wm_linetime[pipe]);
+-
+-		for (level = 0; level <= max_level; level++) {
+-			for (i = 0; i < intel_num_planes(crtc); i++)
+-				I915_WRITE(PLANE_WM(pipe, i, level),
+-					   new->plane[pipe][i][level]);
+-			I915_WRITE(CUR_WM(pipe, level),
+-				   new->plane[pipe][PLANE_CURSOR][level]);
+-		}
+-		for (i = 0; i < intel_num_planes(crtc); i++)
+-			I915_WRITE(PLANE_WM_TRANS(pipe, i),
+-				   new->plane_trans[pipe][i]);
+-		I915_WRITE(CUR_WM_TRANS(pipe),
+-			   new->plane_trans[pipe][PLANE_CURSOR]);
+-
+-		for (i = 0; i < intel_num_planes(crtc); i++) {
+-			skl_ddb_entry_write(dev_priv,
+-					    PLANE_BUF_CFG(pipe, i),
+-					    &new->ddb.plane[pipe][i]);
+-			skl_ddb_entry_write(dev_priv,
+-					    PLANE_NV12_BUF_CFG(pipe, i),
+-					    &new->ddb.y_plane[pipe][i]);
+-		}
++	struct drm_crtc *crtc = &intel_crtc->base;
++	struct drm_device *dev = crtc->dev;
++	struct drm_i915_private *dev_priv = to_i915(dev);
++	int level, max_level = ilk_wm_max_level(dev);
++	enum pipe pipe = intel_crtc->pipe;
+ 
+-		skl_ddb_entry_write(dev_priv, CUR_BUF_CFG(pipe),
+-				    &new->ddb.plane[pipe][PLANE_CURSOR]);
++	for (level = 0; level <= max_level; level++) {
++		I915_WRITE(PLANE_WM(pipe, plane, level),
++			   wm->plane[pipe][plane][level]);
+ 	}
+-}
++	I915_WRITE(PLANE_WM_TRANS(pipe, plane), wm->plane_trans[pipe][plane]);
+ 
+-/*
+- * When setting up a new DDB allocation arrangement, we need to correctly
+- * sequence the times at which the new allocations for the pipes are taken into
+- * account or we'll have pipes fetching from space previously allocated to
+- * another pipe.
+- *
+- * Roughly the sequence looks like:
+- *  1. re-allocate the pipe(s) with the allocation being reduced and not
+- *     overlapping with a previous light-up pipe (another way to put it is:
+- *     pipes with their new allocation strickly included into their old ones).
+- *  2. re-allocate the other pipes that get their allocation reduced
+- *  3. allocate the pipes having their allocation increased
+- *
+- * Steps 1. and 2. are here to take care of the following case:
+- * - Initially DDB looks like this:
+- *     |   B    |   C    |
+- * - enable pipe A.
+- * - pipe B has a reduced DDB allocation that overlaps with the old pipe C
+- *   allocation
+- *     |  A  |  B  |  C  |
+- *
+- * We need to sequence the re-allocation: C, B, A (and not B, C, A).
+- */
++	skl_ddb_entry_write(dev_priv, PLANE_BUF_CFG(pipe, plane),
++			    &wm->ddb.plane[pipe][plane]);
++	skl_ddb_entry_write(dev_priv, PLANE_NV12_BUF_CFG(pipe, plane),
++			    &wm->ddb.y_plane[pipe][plane]);
++}
+ 
+-static void
+-skl_wm_flush_pipe(struct drm_i915_private *dev_priv, enum pipe pipe, int pass)
++void skl_write_cursor_wm(struct intel_crtc *intel_crtc,
++			 const struct skl_wm_values *wm)
+ {
+-	int plane;
+-
+-	DRM_DEBUG_KMS("flush pipe %c (pass %d)\n", pipe_name(pipe), pass);
++	struct drm_crtc *crtc = &intel_crtc->base;
++	struct drm_device *dev = crtc->dev;
++	struct drm_i915_private *dev_priv = to_i915(dev);
++	int level, max_level = ilk_wm_max_level(dev);
++	enum pipe pipe = intel_crtc->pipe;
+ 
+-	for_each_plane(dev_priv, pipe, plane) {
+-		I915_WRITE(PLANE_SURF(pipe, plane),
+-			   I915_READ(PLANE_SURF(pipe, plane)));
++	for (level = 0; level <= max_level; level++) {
++		I915_WRITE(CUR_WM(pipe, level),
++			   wm->plane[pipe][PLANE_CURSOR][level]);
+ 	}
+-	I915_WRITE(CURBASE(pipe), I915_READ(CURBASE(pipe)));
++	I915_WRITE(CUR_WM_TRANS(pipe), wm->plane_trans[pipe][PLANE_CURSOR]);
++
++	skl_ddb_entry_write(dev_priv, CUR_BUF_CFG(pipe),
++			    &wm->ddb.plane[pipe][PLANE_CURSOR]);
+ }
+ 
+-static bool
+-skl_ddb_allocation_included(const struct skl_ddb_allocation *old,
+-			    const struct skl_ddb_allocation *new,
+-			    enum pipe pipe)
++bool skl_ddb_allocation_equals(const struct skl_ddb_allocation *old,
++			       const struct skl_ddb_allocation *new,
++			       enum pipe pipe)
+ {
+-	uint16_t old_size, new_size;
+-
+-	old_size = skl_ddb_entry_size(&old->pipe[pipe]);
+-	new_size = skl_ddb_entry_size(&new->pipe[pipe]);
+-
+-	return old_size != new_size &&
+-	       new->pipe[pipe].start >= old->pipe[pipe].start &&
+-	       new->pipe[pipe].end <= old->pipe[pipe].end;
++	return new->pipe[pipe].start == old->pipe[pipe].start &&
++	       new->pipe[pipe].end == old->pipe[pipe].end;
+ }
+ 
+-static void skl_flush_wm_values(struct drm_i915_private *dev_priv,
+-				struct skl_wm_values *new_values)
++static inline bool skl_ddb_entries_overlap(const struct skl_ddb_entry *a,
++					   const struct skl_ddb_entry *b)
+ {
+-	struct drm_device *dev = &dev_priv->drm;
+-	struct skl_ddb_allocation *cur_ddb, *new_ddb;
+-	bool reallocated[I915_MAX_PIPES] = {};
+-	struct intel_crtc *crtc;
+-	enum pipe pipe;
+-
+-	new_ddb = &new_values->ddb;
+-	cur_ddb = &dev_priv->wm.skl_hw.ddb;
+-
+-	/*
+-	 * First pass: flush the pipes with the new allocation contained into
+-	 * the old space.
+-	 *
+-	 * We'll wait for the vblank on those pipes to ensure we can safely
+-	 * re-allocate the freed space without this pipe fetching from it.
+-	 */
+-	for_each_intel_crtc(dev, crtc) {
+-		if (!crtc->active)
+-			continue;
+-
+-		pipe = crtc->pipe;
+-
+-		if (!skl_ddb_allocation_included(cur_ddb, new_ddb, pipe))
+-			continue;
+-
+-		skl_wm_flush_pipe(dev_priv, pipe, 1);
+-		intel_wait_for_vblank(dev, pipe);
+-
+-		reallocated[pipe] = true;
+-	}
+-
++	return a->start < b->end && b->start < a->end;
++}
+ 
+-	/*
+-	 * Second pass: flush the pipes that are having their allocation
+-	 * reduced, but overlapping with a previous allocation.
+-	 *
+-	 * Here as well we need to wait for the vblank to make sure the freed
+-	 * space is not used anymore.
+-	 */
+-	for_each_intel_crtc(dev, crtc) {
+-		if (!crtc->active)
+-			continue;
++bool skl_ddb_allocation_overlaps(struct drm_atomic_state *state,
++				 const struct skl_ddb_allocation *old,
++				 const struct skl_ddb_allocation *new,
++				 enum pipe pipe)
++{
++	struct drm_device *dev = state->dev;
++	struct intel_crtc *intel_crtc;
++	enum pipe otherp;
+ 
+-		pipe = crtc->pipe;
++	for_each_intel_crtc(dev, intel_crtc) {
++		otherp = intel_crtc->pipe;
+ 
+-		if (reallocated[pipe])
++		if (otherp == pipe)
+ 			continue;
+ 
+-		if (skl_ddb_entry_size(&new_ddb->pipe[pipe]) <
+-		    skl_ddb_entry_size(&cur_ddb->pipe[pipe])) {
+-			skl_wm_flush_pipe(dev_priv, pipe, 2);
+-			intel_wait_for_vblank(dev, pipe);
+-			reallocated[pipe] = true;
+-		}
++		if (skl_ddb_entries_overlap(&new->pipe[pipe],
++					    &old->pipe[otherp]))
++			return true;
+ 	}
+ 
+-	/*
+-	 * Third pass: flush the pipes that got more space allocated.
+-	 *
+-	 * We don't need to actively wait for the update here, next vblank
+-	 * will just get more DDB space with the correct WM values.
+-	 */
+-	for_each_intel_crtc(dev, crtc) {
+-		if (!crtc->active)
+-			continue;
+-
+-		pipe = crtc->pipe;
+-
+-		/*
+-		 * At this point, only the pipes more space than before are
+-		 * left to re-allocate.
+-		 */
+-		if (reallocated[pipe])
+-			continue;
+-
+-		skl_wm_flush_pipe(dev_priv, pipe, 3);
+-	}
++	return false;
+ }
+ 
+ static int skl_update_pipe_wm(struct drm_crtc_state *cstate,
+@@ -4041,6 +3963,41 @@ pipes_modified(struct drm_atomic_state *state)
+ 	return ret;
+ }
+ 
++int
++skl_ddb_add_affected_planes(struct intel_crtc_state *cstate)
++{
++	struct drm_atomic_state *state = cstate->base.state;
++	struct drm_device *dev = state->dev;
++	struct drm_crtc *crtc = cstate->base.crtc;
++	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
++	struct drm_i915_private *dev_priv = to_i915(dev);
++	struct intel_atomic_state *intel_state = to_intel_atomic_state(state);
++	struct skl_ddb_allocation *new_ddb = &intel_state->wm_results.ddb;
++	struct skl_ddb_allocation *cur_ddb = &dev_priv->wm.skl_hw.ddb;
++	struct drm_plane_state *plane_state;
++	struct drm_plane *plane;
++	enum pipe pipe = intel_crtc->pipe;
++	int id;
++
++	WARN_ON(!drm_atomic_get_existing_crtc_state(state, crtc));
++
++	drm_for_each_plane_mask(plane, dev, crtc->state->plane_mask) {
++		id = skl_wm_plane_id(to_intel_plane(plane));
++
++		if (skl_ddb_entry_equal(&cur_ddb->plane[pipe][id],
++					&new_ddb->plane[pipe][id]) &&
++		    skl_ddb_entry_equal(&cur_ddb->y_plane[pipe][id],
++					&new_ddb->y_plane[pipe][id]))
++			continue;
++
++		plane_state = drm_atomic_get_plane_state(state, plane);
++		if (IS_ERR(plane_state))
++			return PTR_ERR(plane_state);
++	}
++
++	return 0;
++}
++
+ static int
+ skl_compute_ddb(struct drm_atomic_state *state)
+ {
+@@ -4105,6 +4062,10 @@ skl_compute_ddb(struct drm_atomic_state *state)
+ 		if (ret)
+ 			return ret;
+ 
++		ret = skl_ddb_add_affected_planes(cstate);
++		if (ret)
++			return ret;
++
+ 		ret = drm_atomic_add_affected_planes(state, &intel_crtc->base);
+ 		if (ret)
+ 			return ret;
+@@ -4206,7 +4167,7 @@ static void skl_update_wm(struct drm_crtc *crtc)
+ 	struct skl_wm_values *hw_vals = &dev_priv->wm.skl_hw;
+ 	struct intel_crtc_state *cstate = to_intel_crtc_state(crtc->state);
+ 	struct skl_pipe_wm *pipe_wm = &cstate->wm.skl.optimal;
+-	int pipe;
++	enum pipe pipe = intel_crtc->pipe;
+ 
+ 	if ((results->dirty_pipes & drm_crtc_mask(crtc)) == 0)
+ 		return;
+@@ -4215,15 +4176,22 @@ static void skl_update_wm(struct drm_crtc *crtc)
+ 
+ 	mutex_lock(&dev_priv->wm.wm_mutex);
+ 
+-	skl_write_wm_values(dev_priv, results);
+-	skl_flush_wm_values(dev_priv, results);
+-
+ 	/*
+-	 * Store the new configuration (but only for the pipes that have
+-	 * changed; the other values weren't recomputed).
++	 * If this pipe isn't active already, we're going to be enabling it
++	 * very soon. Since it's safe to update a pipe's ddb allocation while
++	 * the pipe's shut off, just do so here. Already active pipes will have
++	 * their watermarks updated once we update their planes.
+ 	 */
+-	for_each_pipe_masked(dev_priv, pipe, results->dirty_pipes)
+-		skl_copy_wm_for_pipe(hw_vals, results, pipe);
++	if (crtc->state->active_changed) {
++		int plane;
++
++		for (plane = 0; plane < intel_num_planes(intel_crtc); plane++)
++			skl_write_plane_wm(intel_crtc, results, plane);
++
++		skl_write_cursor_wm(intel_crtc, results);
++	}
++
++	skl_copy_wm_for_pipe(hw_vals, results, pipe);
+ 
+ 	mutex_unlock(&dev_priv->wm.wm_mutex);
+ }
+diff --git a/drivers/gpu/drm/i915/intel_sprite.c b/drivers/gpu/drm/i915/intel_sprite.c
+index 7c08e4f29032..4178849631ad 100644
+--- a/drivers/gpu/drm/i915/intel_sprite.c
++++ b/drivers/gpu/drm/i915/intel_sprite.c
+@@ -203,6 +203,9 @@ skl_update_plane(struct drm_plane *drm_plane,
+ 	struct intel_plane *intel_plane = to_intel_plane(drm_plane);
+ 	struct drm_framebuffer *fb = plane_state->base.fb;
+ 	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
++	const struct skl_wm_values *wm = &dev_priv->wm.skl_results;
++	struct drm_crtc *crtc = crtc_state->base.crtc;
++	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ 	const int pipe = intel_plane->pipe;
+ 	const int plane = intel_plane->plane + 1;
+ 	u32 plane_ctl, stride_div, stride;
+@@ -238,6 +241,9 @@ skl_update_plane(struct drm_plane *drm_plane,
+ 	crtc_w--;
+ 	crtc_h--;
+ 
++	if (wm->dirty_pipes & drm_crtc_mask(crtc))
++		skl_write_plane_wm(intel_crtc, wm, plane);
++
+ 	if (key->flags) {
+ 		I915_WRITE(PLANE_KEYVAL(pipe, plane), key->min_value);
+ 		I915_WRITE(PLANE_KEYMAX(pipe, plane), key->max_value);
+@@ -308,6 +314,14 @@ skl_disable_plane(struct drm_plane *dplane, struct drm_crtc *crtc)
+ 	const int pipe = intel_plane->pipe;
+ 	const int plane = intel_plane->plane + 1;
+ 
++	/*
++	 * We only populate skl_results on watermark updates, and if the
++	 * plane's visibility isn't actually changing, neither are its watermarks.
++	 */
++	if (!to_intel_plane_state(dplane->state)->visible)
++		skl_write_plane_wm(to_intel_crtc(crtc),
++				   &dev_priv->wm.skl_results, plane);
++
+ 	I915_WRITE(PLANE_CTL(pipe, plane), 0);
+ 
+ 	I915_WRITE(PLANE_SURF(pipe, plane), 0);
+diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
+index ff80a81b1a84..ec28b15f2724 100644
+--- a/drivers/gpu/drm/i915/intel_uncore.c
++++ b/drivers/gpu/drm/i915/intel_uncore.c
+@@ -796,10 +796,9 @@ __unclaimed_reg_debug(struct drm_i915_private *dev_priv,
+ 		      const bool read,
+ 		      const bool before)
+ {
+-	if (WARN(check_for_unclaimed_mmio(dev_priv),
+-		 "Unclaimed register detected %s %s register 0x%x\n",
+-		 before ? "before" : "after",
+-		 read ? "reading" : "writing to",
++	if (WARN(check_for_unclaimed_mmio(dev_priv) && !before,
++		 "Unclaimed %s register 0x%x\n",
++		 read ? "read from" : "write to",
+ 		 i915_mmio_reg_offset(reg)))
+ 		i915.mmio_debug--; /* Only report the first N failures */
+ }
+diff --git a/drivers/gpu/drm/radeon/r600_dpm.c b/drivers/gpu/drm/radeon/r600_dpm.c
+index 6a4b020dd0b4..5a26eb4545aa 100644
+--- a/drivers/gpu/drm/radeon/r600_dpm.c
++++ b/drivers/gpu/drm/radeon/r600_dpm.c
+@@ -156,19 +156,20 @@ u32 r600_dpm_get_vblank_time(struct radeon_device *rdev)
+ 	struct drm_device *dev = rdev->ddev;
+ 	struct drm_crtc *crtc;
+ 	struct radeon_crtc *radeon_crtc;
+-	u32 line_time_us, vblank_lines;
++	u32 vblank_in_pixels;
+ 	u32 vblank_time_us = 0xffffffff; /* if the displays are off, vblank time is max */
+ 
+ 	if (rdev->num_crtc && rdev->mode_info.mode_config_initialized) {
+ 		list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+ 			radeon_crtc = to_radeon_crtc(crtc);
+ 			if (crtc->enabled && radeon_crtc->enabled && radeon_crtc->hw_mode.clock) {
+-				line_time_us = (radeon_crtc->hw_mode.crtc_htotal * 1000) /
+-					radeon_crtc->hw_mode.clock;
+-				vblank_lines = radeon_crtc->hw_mode.crtc_vblank_end -
+-					radeon_crtc->hw_mode.crtc_vdisplay +
+-					(radeon_crtc->v_border * 2);
+-				vblank_time_us = vblank_lines * line_time_us;
++				vblank_in_pixels =
++					radeon_crtc->hw_mode.crtc_htotal *
++					(radeon_crtc->hw_mode.crtc_vblank_end -
++					 radeon_crtc->hw_mode.crtc_vdisplay +
++					 (radeon_crtc->v_border * 2));
++
++				vblank_time_us = vblank_in_pixels * 1000 / radeon_crtc->hw_mode.clock;
+ 				break;
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index a00dd2f74527..554ca7115f98 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -661,8 +661,9 @@ bool radeon_card_posted(struct radeon_device *rdev)
+ {
+ 	uint32_t reg;
+ 
+-	/* for pass through, always force asic_init */
+-	if (radeon_device_is_virtual())
++	/* for pass through, always force asic_init for CI */
++	if (rdev->family >= CHIP_BONAIRE &&
++	    radeon_device_is_virtual())
+ 		return false;
+ 
+ 	/* required for EFI mode on macbook2,1 which uses an r5xx asic */
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index 1f78ec2548ec..89bdf20344ae 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -4112,7 +4112,7 @@ static int si_populate_smc_voltage_tables(struct radeon_device *rdev,
+ 							      &rdev->pm.dpm.dyn_state.phase_shedding_limits_table)) {
+ 				si_populate_smc_voltage_table(rdev, &si_pi->vddc_phase_shed_table, table);
+ 
+-				table->phaseMaskTable.lowMask[SISLANDS_SMC_VOLTAGEMASK_VDDC] =
++				table->phaseMaskTable.lowMask[SISLANDS_SMC_VOLTAGEMASK_VDDC_PHASE_SHEDDING] =
+ 					cpu_to_be32(si_pi->vddc_phase_shed_table.mask_low);
+ 
+ 				si_write_smc_soft_register(rdev, SI_SMC_SOFT_REGISTER_phase_shedding_delay,
+diff --git a/drivers/gpu/drm/radeon/sislands_smc.h b/drivers/gpu/drm/radeon/sislands_smc.h
+index 3c779838d9ab..966e3a556011 100644
+--- a/drivers/gpu/drm/radeon/sislands_smc.h
++++ b/drivers/gpu/drm/radeon/sislands_smc.h
+@@ -194,6 +194,7 @@ typedef struct SISLANDS_SMC_SWSTATE SISLANDS_SMC_SWSTATE;
+ #define SISLANDS_SMC_VOLTAGEMASK_VDDC  0
+ #define SISLANDS_SMC_VOLTAGEMASK_MVDD  1
+ #define SISLANDS_SMC_VOLTAGEMASK_VDDCI 2
++#define SISLANDS_SMC_VOLTAGEMASK_VDDC_PHASE_SHEDDING 3
+ #define SISLANDS_SMC_VOLTAGEMASK_MAX   4
+ 
+ struct SISLANDS_SMC_VOLTAGEMASKTABLE
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
+index 428e24919ef1..f696b752886b 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -122,9 +122,16 @@ to_vc4_dev(struct drm_device *dev)
+ struct vc4_bo {
+ 	struct drm_gem_cma_object base;
+ 
+-	/* seqno of the last job to render to this BO. */
++	/* seqno of the last job to render using this BO. */
+ 	uint64_t seqno;
+ 
++	/* seqno of the last job to use the RCL to write to this BO.
++	 *
++	 * Note that this doesn't include binner overflow memory
++	 * writes.
++	 */
++	uint64_t write_seqno;
++
+ 	/* List entry for the BO's position in either
+ 	 * vc4_exec_info->unref_list or vc4_dev->bo_cache.time_list
+ 	 */
+@@ -216,6 +223,9 @@ struct vc4_exec_info {
+ 	/* Sequence number for this bin/render job. */
+ 	uint64_t seqno;
+ 
++	/* Latest write_seqno of any BO that binning depends on. */
++	uint64_t bin_dep_seqno;
++
+ 	/* Last current addresses the hardware was processing when the
+ 	 * hangcheck timer checked on us.
+ 	 */
+@@ -230,6 +240,13 @@ struct vc4_exec_info {
+ 	struct drm_gem_cma_object **bo;
+ 	uint32_t bo_count;
+ 
++	/* List of BOs that are being written by the RCL.  Other than
++	 * the binner temporary storage, this is all the BOs written
++	 * by the job.
++	 */
++	struct drm_gem_cma_object *rcl_write_bo[4];
++	uint32_t rcl_write_bo_count;
++
+ 	/* Pointers for our position in vc4->job_list */
+ 	struct list_head head;
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_gem.c b/drivers/gpu/drm/vc4/vc4_gem.c
+index b262c5c26f10..ae1609e739ef 100644
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -471,6 +471,11 @@ vc4_update_bo_seqnos(struct vc4_exec_info *exec, uint64_t seqno)
+ 	list_for_each_entry(bo, &exec->unref_list, unref_head) {
+ 		bo->seqno = seqno;
+ 	}
++
++	for (i = 0; i < exec->rcl_write_bo_count; i++) {
++		bo = to_vc4_bo(&exec->rcl_write_bo[i]->base);
++		bo->write_seqno = seqno;
++	}
+ }
+ 
+ /* Queues a struct vc4_exec_info for execution.  If no job is
+@@ -673,6 +678,14 @@ vc4_get_bcl(struct drm_device *dev, struct vc4_exec_info *exec)
+ 		goto fail;
+ 
+ 	ret = vc4_validate_shader_recs(dev, exec);
++	if (ret)
++		goto fail;
++
++	/* Block waiting on any previous rendering into the CS's VBO,
++	 * IB, or textures, so that pixels are actually written by the
++	 * time we try to read them.
++	 */
++	ret = vc4_wait_for_seqno(dev, exec->bin_dep_seqno, ~0ull, true);
+ 
+ fail:
+ 	drm_free_large(temp);
+diff --git a/drivers/gpu/drm/vc4/vc4_render_cl.c b/drivers/gpu/drm/vc4/vc4_render_cl.c
+index 0f12418725e5..08886a309757 100644
+--- a/drivers/gpu/drm/vc4/vc4_render_cl.c
++++ b/drivers/gpu/drm/vc4/vc4_render_cl.c
+@@ -45,6 +45,8 @@ struct vc4_rcl_setup {
+ 
+ 	struct drm_gem_cma_object *rcl;
+ 	u32 next_offset;
++
++	u32 next_write_bo_index;
+ };
+ 
+ static inline void rcl_u8(struct vc4_rcl_setup *setup, u8 val)
+@@ -407,6 +409,8 @@ static int vc4_rcl_msaa_surface_setup(struct vc4_exec_info *exec,
+ 	if (!*obj)
+ 		return -EINVAL;
+ 
++	exec->rcl_write_bo[exec->rcl_write_bo_count++] = *obj;
++
+ 	if (surf->offset & 0xf) {
+ 		DRM_ERROR("MSAA write must be 16b aligned.\n");
+ 		return -EINVAL;
+@@ -417,7 +421,8 @@ static int vc4_rcl_msaa_surface_setup(struct vc4_exec_info *exec,
+ 
+ static int vc4_rcl_surface_setup(struct vc4_exec_info *exec,
+ 				 struct drm_gem_cma_object **obj,
+-				 struct drm_vc4_submit_rcl_surface *surf)
++				 struct drm_vc4_submit_rcl_surface *surf,
++				 bool is_write)
+ {
+ 	uint8_t tiling = VC4_GET_FIELD(surf->bits,
+ 				       VC4_LOADSTORE_TILE_BUFFER_TILING);
+@@ -440,6 +445,9 @@ static int vc4_rcl_surface_setup(struct vc4_exec_info *exec,
+ 	if (!*obj)
+ 		return -EINVAL;
+ 
++	if (is_write)
++		exec->rcl_write_bo[exec->rcl_write_bo_count++] = *obj;
++
+ 	if (surf->flags & VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES) {
+ 		if (surf == &exec->args->zs_write) {
+ 			DRM_ERROR("general zs write may not be a full-res.\n");
+@@ -542,6 +550,8 @@ vc4_rcl_render_config_surface_setup(struct vc4_exec_info *exec,
+ 	if (!*obj)
+ 		return -EINVAL;
+ 
++	exec->rcl_write_bo[exec->rcl_write_bo_count++] = *obj;
++
+ 	if (tiling > VC4_TILING_FORMAT_LT) {
+ 		DRM_ERROR("Bad tiling format\n");
+ 		return -EINVAL;
+@@ -599,15 +609,18 @@ int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = vc4_rcl_surface_setup(exec, &setup.color_read, &args->color_read);
++	ret = vc4_rcl_surface_setup(exec, &setup.color_read, &args->color_read,
++				    false);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = vc4_rcl_surface_setup(exec, &setup.zs_read, &args->zs_read);
++	ret = vc4_rcl_surface_setup(exec, &setup.zs_read, &args->zs_read,
++				    false);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = vc4_rcl_surface_setup(exec, &setup.zs_write, &args->zs_write);
++	ret = vc4_rcl_surface_setup(exec, &setup.zs_write, &args->zs_write,
++				    true);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_validate.c b/drivers/gpu/drm/vc4/vc4_validate.c
+index 9ce1d0adf882..26503e307438 100644
+--- a/drivers/gpu/drm/vc4/vc4_validate.c
++++ b/drivers/gpu/drm/vc4/vc4_validate.c
+@@ -267,6 +267,9 @@ validate_indexed_prim_list(VALIDATE_ARGS)
+ 	if (!ib)
+ 		return -EINVAL;
+ 
++	exec->bin_dep_seqno = max(exec->bin_dep_seqno,
++				  to_vc4_bo(&ib->base)->write_seqno);
++
+ 	if (offset > ib->base.size ||
+ 	    (ib->base.size - offset) / index_size < length) {
+ 		DRM_ERROR("IB access overflow (%d + %d*%d > %zd)\n",
+@@ -555,8 +558,7 @@ static bool
+ reloc_tex(struct vc4_exec_info *exec,
+ 	  void *uniform_data_u,
+ 	  struct vc4_texture_sample_info *sample,
+-	  uint32_t texture_handle_index)
+-
++	  uint32_t texture_handle_index, bool is_cs)
+ {
+ 	struct drm_gem_cma_object *tex;
+ 	uint32_t p0 = *(uint32_t *)(uniform_data_u + sample->p_offset[0]);
+@@ -714,6 +716,11 @@ reloc_tex(struct vc4_exec_info *exec,
+ 
+ 	*validated_p0 = tex->paddr + p0;
+ 
++	if (is_cs) {
++		exec->bin_dep_seqno = max(exec->bin_dep_seqno,
++					  to_vc4_bo(&tex->base)->write_seqno);
++	}
++
+ 	return true;
+  fail:
+ 	DRM_INFO("Texture p0 at %d: 0x%08x\n", sample->p_offset[0], p0);
+@@ -835,7 +842,8 @@ validate_gl_shader_rec(struct drm_device *dev,
+ 			if (!reloc_tex(exec,
+ 				       uniform_data_u,
+ 				       &validated_shader->texture_samples[tex],
+-				       texture_handles_u[tex])) {
++				       texture_handles_u[tex],
++				       i == 2)) {
+ 				return -EINVAL;
+ 			}
+ 		}
+@@ -867,6 +875,9 @@ validate_gl_shader_rec(struct drm_device *dev,
+ 		uint32_t stride = *(uint8_t *)(pkt_u + o + 5);
+ 		uint32_t max_index;
+ 
++		exec->bin_dep_seqno = max(exec->bin_dep_seqno,
++					  to_vc4_bo(&vbo->base)->write_seqno);
++
+ 		if (state->addr & 0x8)
+ 			stride |= (*(uint32_t *)(pkt_u + 100 + i * 4)) & ~0xff;
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index dc5beff2b4aa..8a15c4aa84c1 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -34,6 +34,24 @@
+ 
+ #define VMW_RES_HT_ORDER 12
+ 
++ /**
++ * enum vmw_resource_relocation_type - Relocation type for resources
++ *
++ * @vmw_res_rel_normal: Traditional relocation. The resource id in the
++ * command stream is replaced with the actual id after validation.
++ * @vmw_res_rel_nop: NOP relocation. The command is unconditionally replaced
++ * with a NOP.
++ * @vmw_res_rel_cond_nop: Conditional NOP relocation. If the resource id
++ * after validation is -1, the command is replaced with a NOP. Otherwise no
++ * action.
++ */
++enum vmw_resource_relocation_type {
++	vmw_res_rel_normal,
++	vmw_res_rel_nop,
++	vmw_res_rel_cond_nop,
++	vmw_res_rel_max
++};
++
+ /**
+  * struct vmw_resource_relocation - Relocation info for resources
+  *
+@@ -41,11 +59,13 @@
+  * @res: Non-ref-counted pointer to the resource.
+  * @offset: Offset of 4 byte entries into the command buffer where the
+  * id that needs fixup is located.
++ * @rel_type: Type of relocation.
+  */
+ struct vmw_resource_relocation {
+ 	struct list_head head;
+ 	const struct vmw_resource *res;
+-	unsigned long offset;
++	u32 offset:29;
++	enum vmw_resource_relocation_type rel_type:3;
+ };
+ 
+ /**
+@@ -410,10 +430,13 @@ static int vmw_resource_context_res_add(struct vmw_private *dev_priv,
+  * @res: The resource.
+  * @offset: Offset into the command buffer currently being parsed where the
+  * id that needs fixup is located. Granularity is 4 bytes.
++ * @rel_type: Relocation type.
+  */
+ static int vmw_resource_relocation_add(struct list_head *list,
+ 				       const struct vmw_resource *res,
+-				       unsigned long offset)
++				       unsigned long offset,
++				       enum vmw_resource_relocation_type
++				       rel_type)
+ {
+ 	struct vmw_resource_relocation *rel;
+ 
+@@ -425,6 +448,7 @@ static int vmw_resource_relocation_add(struct list_head *list,
+ 
+ 	rel->res = res;
+ 	rel->offset = offset;
++	rel->rel_type = rel_type;
+ 	list_add_tail(&rel->head, list);
+ 
+ 	return 0;
+@@ -459,11 +483,23 @@ static void vmw_resource_relocations_apply(uint32_t *cb,
+ {
+ 	struct vmw_resource_relocation *rel;
+ 
++	/* Validate the struct vmw_resource_relocation member size */
++	BUILD_BUG_ON(SVGA_CB_MAX_SIZE >= (1 << 29));
++	BUILD_BUG_ON(vmw_res_rel_max >= (1 << 3));
++
+ 	list_for_each_entry(rel, list, head) {
+-		if (likely(rel->res != NULL))
++		switch (rel->rel_type) {
++		case vmw_res_rel_normal:
+ 			cb[rel->offset] = rel->res->id;
+-		else
++			break;
++		case vmw_res_rel_nop:
+ 			cb[rel->offset] = SVGA_3D_CMD_NOP;
++			break;
++		default:
++			if (rel->res->id == -1)
++				cb[rel->offset] = SVGA_3D_CMD_NOP;
++			break;
++		}
+ 	}
+ }
+ 
+@@ -655,7 +691,8 @@ static int vmw_cmd_res_reloc_add(struct vmw_private *dev_priv,
+ 	*p_val = NULL;
+ 	ret = vmw_resource_relocation_add(&sw_context->res_relocations,
+ 					  res,
+-					  id_loc - sw_context->buf_start);
++					  id_loc - sw_context->buf_start,
++					  vmw_res_rel_normal);
+ 	if (unlikely(ret != 0))
+ 		return ret;
+ 
+@@ -721,7 +758,8 @@ vmw_cmd_res_check(struct vmw_private *dev_priv,
+ 
+ 		return vmw_resource_relocation_add
+ 			(&sw_context->res_relocations, res,
+-			 id_loc - sw_context->buf_start);
++			 id_loc - sw_context->buf_start,
++			 vmw_res_rel_normal);
+ 	}
+ 
+ 	ret = vmw_user_resource_lookup_handle(dev_priv,
+@@ -2144,7 +2182,8 @@ static int vmw_cmd_shader_define(struct vmw_private *dev_priv,
+ 
+ 	return vmw_resource_relocation_add(&sw_context->res_relocations,
+ 					   NULL, &cmd->header.id -
+-					   sw_context->buf_start);
++					   sw_context->buf_start,
++					   vmw_res_rel_nop);
+ 
+ 	return 0;
+ }
+@@ -2189,7 +2228,8 @@ static int vmw_cmd_shader_destroy(struct vmw_private *dev_priv,
+ 
+ 	return vmw_resource_relocation_add(&sw_context->res_relocations,
+ 					   NULL, &cmd->header.id -
+-					   sw_context->buf_start);
++					   sw_context->buf_start,
++					   vmw_res_rel_nop);
+ 
+ 	return 0;
+ }
+@@ -2848,8 +2888,7 @@ static int vmw_cmd_dx_cid_check(struct vmw_private *dev_priv,
+  * @header: Pointer to the command header in the command stream.
+  *
+  * Check that the view exists, and if it was not created using this
+- * command batch, make sure it's validated (present in the device) so that
+- * the remove command will not confuse the device.
++ * command batch, conditionally make this command a NOP.
+  */
+ static int vmw_cmd_dx_view_remove(struct vmw_private *dev_priv,
+ 				  struct vmw_sw_context *sw_context,
+@@ -2877,10 +2916,15 @@ static int vmw_cmd_dx_view_remove(struct vmw_private *dev_priv,
+ 		return ret;
+ 
+ 	/*
+-	 * Add view to the validate list iff it was not created using this
+-	 * command batch.
++	 * If the view wasn't created during this command batch, it might
++	 * have been removed due to a context swapout, so add a
++	 * relocation to conditionally make this command a NOP to avoid
++	 * device errors.
+ 	 */
+-	return vmw_view_res_val_add(sw_context, view);
++	return vmw_resource_relocation_add(&sw_context->res_relocations,
++					   view,
++					   &cmd->header.id - sw_context->buf_start,
++					   vmw_res_rel_cond_nop);
+ }
+ 
+ /**
+@@ -3848,14 +3892,14 @@ static void *vmw_execbuf_cmdbuf(struct vmw_private *dev_priv,
+ 	int ret;
+ 
+ 	*header = NULL;
+-	if (!dev_priv->cman || kernel_commands)
+-		return kernel_commands;
+-
+ 	if (command_size > SVGA_CB_MAX_SIZE) {
+ 		DRM_ERROR("Command buffer is too large.\n");
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
++	if (!dev_priv->cman || kernel_commands)
++		return kernel_commands;
++
+ 	/* If possible, add a little space for fencing. */
+ 	cmdbuf_size = command_size + 512;
+ 	cmdbuf_size = min_t(size_t, cmdbuf_size, SVGA_CB_MAX_SIZE);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 4ed9a4fdfea7..e92b09d32605 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -64,6 +64,9 @@
+ #define USB_VENDOR_ID_AKAI		0x2011
+ #define USB_DEVICE_ID_AKAI_MPKMINI2	0x0715
+ 
++#define USB_VENDOR_ID_AKAI_09E8		0x09E8
++#define USB_DEVICE_ID_AKAI_09E8_MIDIMIX	0x0031
++
+ #define USB_VENDOR_ID_ALCOR		0x058f
+ #define USB_DEVICE_ID_ALCOR_USBRS232	0x9720
+ 
+diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
+index b4b8c6abb03e..bb400081efe4 100644
+--- a/drivers/hid/usbhid/hid-quirks.c
++++ b/drivers/hid/usbhid/hid-quirks.c
+@@ -56,6 +56,7 @@ static const struct hid_blacklist {
+ 
+ 	{ USB_VENDOR_ID_AIREN, USB_DEVICE_ID_AIREN_SLIMPLUS, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_AKAI, USB_DEVICE_ID_AKAI_MPKMINI2, HID_QUIRK_NO_INIT_REPORTS },
++	{ USB_VENDOR_ID_AKAI_09E8, USB_DEVICE_ID_AKAI_09E8_MIDIMIX, HID_QUIRK_NO_INIT_REPORTS },
+ 	{ USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_UC100KM, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS124U, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_2PORTKVM, HID_QUIRK_NOGET },
+diff --git a/drivers/hwtracing/coresight/coresight-tmc.c b/drivers/hwtracing/coresight/coresight-tmc.c
+index 9e02ac963cd0..3978cbb6b038 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc.c
++++ b/drivers/hwtracing/coresight/coresight-tmc.c
+@@ -388,9 +388,6 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
+ err_misc_register:
+ 	coresight_unregister(drvdata->csdev);
+ err_devm_kzalloc:
+-	if (drvdata->config_type == TMC_CONFIG_TYPE_ETR)
+-		dma_free_coherent(dev, drvdata->size,
+-				drvdata->vaddr, drvdata->paddr);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iio/dac/ad5755.c b/drivers/iio/dac/ad5755.c
+index 0fde593ec0d9..5f7968232564 100644
+--- a/drivers/iio/dac/ad5755.c
++++ b/drivers/iio/dac/ad5755.c
+@@ -655,7 +655,7 @@ static struct ad5755_platform_data *ad5755_parse_dt(struct device *dev)
+ 
+ 	devnr = 0;
+ 	for_each_child_of_node(np, pp) {
+-		if (devnr > AD5755_NUM_CHANNELS) {
++		if (devnr >= AD5755_NUM_CHANNELS) {
+ 			dev_err(dev,
+ 				"There is to many channels defined in DT\n");
+ 			goto error_out;
+diff --git a/drivers/iio/light/us5182d.c b/drivers/iio/light/us5182d.c
+index 20c40f780964..18cf2e29e4d5 100644
+--- a/drivers/iio/light/us5182d.c
++++ b/drivers/iio/light/us5182d.c
+@@ -894,7 +894,7 @@ static int us5182d_probe(struct i2c_client *client,
+ 		goto out_err;
+ 
+ 	if (data->default_continuous) {
+-		pm_runtime_set_active(&client->dev);
++		ret = pm_runtime_set_active(&client->dev);
+ 		if (ret < 0)
+ 			goto out_err;
+ 	}
+diff --git a/drivers/infiniband/hw/hfi1/qp.c b/drivers/infiniband/hw/hfi1/qp.c
+index 4e4d8317c281..c17c9dd7cde1 100644
+--- a/drivers/infiniband/hw/hfi1/qp.c
++++ b/drivers/infiniband/hw/hfi1/qp.c
+@@ -808,6 +808,13 @@ void *qp_priv_alloc(struct rvt_dev_info *rdi, struct rvt_qp *qp,
+ 		kfree(priv);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
++	iowait_init(
++		&priv->s_iowait,
++		1,
++		_hfi1_do_send,
++		iowait_sleep,
++		iowait_wakeup,
++		iowait_sdma_drained);
+ 	setup_timer(&priv->s_rnr_timer, hfi1_rc_rnr_retry, (unsigned long)qp);
+ 	qp->s_timer.function = hfi1_rc_timeout;
+ 	return priv;
+@@ -873,13 +880,6 @@ void notify_qp_reset(struct rvt_qp *qp)
+ {
+ 	struct hfi1_qp_priv *priv = qp->priv;
+ 
+-	iowait_init(
+-		&priv->s_iowait,
+-		1,
+-		_hfi1_do_send,
+-		iowait_sleep,
+-		iowait_wakeup,
+-		iowait_sdma_drained);
+ 	priv->r_adefered = 0;
+ 	clear_ahg(qp);
+ }
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index e19537cf44ab..bff8707a2f1f 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -1843,6 +1843,7 @@ static struct mlx5_ib_flow_handler *create_leftovers_rule(struct mlx5_ib_dev *de
+ 						 &leftovers_specs[LEFTOVERS_UC].flow_attr,
+ 						 dst);
+ 		if (IS_ERR(handler_ucast)) {
++			mlx5_del_flow_rule(handler->rule);
+ 			kfree(handler);
+ 			handler = handler_ucast;
+ 		} else {
+diff --git a/drivers/infiniband/hw/qib/qib.h b/drivers/infiniband/hw/qib/qib.h
+index bbf0a163aeab..54bb655f5332 100644
+--- a/drivers/infiniband/hw/qib/qib.h
++++ b/drivers/infiniband/hw/qib/qib.h
+@@ -1131,7 +1131,6 @@ extern spinlock_t qib_devs_lock;
+ extern struct qib_devdata *qib_lookup(int unit);
+ extern u32 qib_cpulist_count;
+ extern unsigned long *qib_cpulist;
+-extern u16 qpt_mask;
+ extern unsigned qib_cc_table_size;
+ 
+ int qib_init(struct qib_devdata *, int);
+diff --git a/drivers/infiniband/hw/qib/qib_qp.c b/drivers/infiniband/hw/qib/qib_qp.c
+index f9b8cd2354d1..99d31efe4c2f 100644
+--- a/drivers/infiniband/hw/qib/qib_qp.c
++++ b/drivers/infiniband/hw/qib/qib_qp.c
+@@ -41,14 +41,6 @@
+ 
+ #include "qib.h"
+ 
+-/*
+- * mask field which was present in now deleted qib_qpn_table
+- * is not present in rvt_qpn_table. Defining the same field
+- * as qpt_mask here instead of adding the mask field to
+- * rvt_qpn_table.
+- */
+-u16 qpt_mask;
+-
+ static inline unsigned mk_qpn(struct rvt_qpn_table *qpt,
+ 			      struct rvt_qpn_map *map, unsigned off)
+ {
+@@ -57,7 +49,7 @@ static inline unsigned mk_qpn(struct rvt_qpn_table *qpt,
+ 
+ static inline unsigned find_next_offset(struct rvt_qpn_table *qpt,
+ 					struct rvt_qpn_map *map, unsigned off,
+-					unsigned n)
++					unsigned n, u16 qpt_mask)
+ {
+ 	if (qpt_mask) {
+ 		off++;
+@@ -179,6 +171,7 @@ int qib_alloc_qpn(struct rvt_dev_info *rdi, struct rvt_qpn_table *qpt,
+ 	struct qib_ibdev *verbs_dev = container_of(rdi, struct qib_ibdev, rdi);
+ 	struct qib_devdata *dd = container_of(verbs_dev, struct qib_devdata,
+ 					      verbs_dev);
++	u16 qpt_mask = dd->qpn_mask;
+ 
+ 	if (type == IB_QPT_SMI || type == IB_QPT_GSI) {
+ 		unsigned n;
+@@ -215,7 +208,7 @@ int qib_alloc_qpn(struct rvt_dev_info *rdi, struct rvt_qpn_table *qpt,
+ 				goto bail;
+ 			}
+ 			offset = find_next_offset(qpt, map, offset,
+-				dd->n_krcv_queues);
++				dd->n_krcv_queues, qpt_mask);
+ 			qpn = mk_qpn(qpt, map, offset);
+ 			/*
+ 			 * This test differs from alloc_pidmap().
+diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
+index fd1dfbce5539..b2b845f9f7df 100644
+--- a/drivers/infiniband/hw/qib/qib_verbs.c
++++ b/drivers/infiniband/hw/qib/qib_verbs.c
+@@ -1606,8 +1606,6 @@ int qib_register_ib_device(struct qib_devdata *dd)
+ 	/* Only need to initialize non-zero fields. */
+ 	setup_timer(&dev->mem_timer, mem_timer, (unsigned long)dev);
+ 
+-	qpt_mask = dd->qpn_mask;
+-
+ 	INIT_LIST_HEAD(&dev->piowait);
+ 	INIT_LIST_HEAD(&dev->dmawait);
+ 	INIT_LIST_HEAD(&dev->txwait);
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index 870b4f212fbc..5911c534cc18 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -501,12 +501,9 @@ static void rvt_remove_qp(struct rvt_dev_info *rdi, struct rvt_qp *qp)
+  */
+ static void rvt_reset_qp(struct rvt_dev_info *rdi, struct rvt_qp *qp,
+ 		  enum ib_qp_type type)
+-	__releases(&qp->s_lock)
+-	__releases(&qp->s_hlock)
+-	__releases(&qp->r_lock)
+-	__acquires(&qp->r_lock)
+-	__acquires(&qp->s_hlock)
+-	__acquires(&qp->s_lock)
++	__must_hold(&qp->r_lock)
++	__must_hold(&qp->s_hlock)
++	__must_hold(&qp->s_lock)
+ {
+ 	if (qp->state != IB_QPS_RESET) {
+ 		qp->state = IB_QPS_RESET;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
+index 618f18436618..c65e17fae24e 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
+@@ -1009,7 +1009,6 @@ int i40e_unregister_client(struct i40e_client *client)
+ 	if (!i40e_client_is_registered(client)) {
+ 		pr_info("i40e: Client %s has not been registered\n",
+ 			client->name);
+-		mutex_unlock(&i40e_client_mutex);
+ 		ret = -ENODEV;
+ 		goto out;
+ 	}
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index dad15b6c66dd..c74d16409941 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -7990,45 +7990,34 @@ static int i40e_setup_misc_vector(struct i40e_pf *pf)
+ static int i40e_config_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
+ 			      u8 *lut, u16 lut_size)
+ {
+-	struct i40e_aqc_get_set_rss_key_data rss_key;
+ 	struct i40e_pf *pf = vsi->back;
+ 	struct i40e_hw *hw = &pf->hw;
+-	bool pf_lut = false;
+-	u8 *rss_lut;
+-	int ret, i;
+-
+-	memcpy(&rss_key, seed, sizeof(rss_key));
+-
+-	rss_lut = kzalloc(pf->rss_table_size, GFP_KERNEL);
+-	if (!rss_lut)
+-		return -ENOMEM;
+-
+-	/* Populate the LUT with max no. of queues in round robin fashion */
+-	for (i = 0; i < vsi->rss_table_size; i++)
+-		rss_lut[i] = i % vsi->rss_size;
++	int ret = 0;
+ 
+-	ret = i40e_aq_set_rss_key(hw, vsi->id, &rss_key);
+-	if (ret) {
+-		dev_info(&pf->pdev->dev,
+-			 "Cannot set RSS key, err %s aq_err %s\n",
+-			 i40e_stat_str(&pf->hw, ret),
+-			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
+-		goto config_rss_aq_out;
++	if (seed) {
++		struct i40e_aqc_get_set_rss_key_data *seed_dw =
++			(struct i40e_aqc_get_set_rss_key_data *)seed;
++		ret = i40e_aq_set_rss_key(hw, vsi->id, seed_dw);
++		if (ret) {
++			dev_info(&pf->pdev->dev,
++				 "Cannot set RSS key, err %s aq_err %s\n",
++				 i40e_stat_str(hw, ret),
++				 i40e_aq_str(hw, hw->aq.asq_last_status));
++			return ret;
++		}
+ 	}
++	if (lut) {
++		bool pf_lut = vsi->type == I40E_VSI_MAIN ? true : false;
+ 
+-	if (vsi->type == I40E_VSI_MAIN)
+-		pf_lut = true;
+-
+-	ret = i40e_aq_set_rss_lut(hw, vsi->id, pf_lut, rss_lut,
+-				  vsi->rss_table_size);
+-	if (ret)
+-		dev_info(&pf->pdev->dev,
+-			 "Cannot set RSS lut, err %s aq_err %s\n",
+-			 i40e_stat_str(&pf->hw, ret),
+-			 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
+-
+-config_rss_aq_out:
+-	kfree(rss_lut);
++		ret = i40e_aq_set_rss_lut(hw, vsi->id, pf_lut, lut, lut_size);
++		if (ret) {
++			dev_info(&pf->pdev->dev,
++				 "Cannot set RSS lut, err %s aq_err %s\n",
++				 i40e_stat_str(hw, ret),
++				 i40e_aq_str(hw, hw->aq.asq_last_status));
++			return ret;
++		}
++	}
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index 24c8d65bcf34..09ca63466504 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -2394,6 +2394,8 @@ static void ath10k_htt_txrx_compl_task(unsigned long ptr)
+ 	skb_queue_splice_init(&htt->rx_in_ord_compl_q, &rx_ind_q);
+ 	spin_unlock_irqrestore(&htt->rx_in_ord_compl_q.lock, flags);
+ 
++	ath10k_mac_tx_push_pending(ar);
++
+ 	spin_lock_irqsave(&htt->tx_fetch_ind_q.lock, flags);
+ 	skb_queue_splice_init(&htt->tx_fetch_ind_q, &tx_ind_q);
+ 	spin_unlock_irqrestore(&htt->tx_fetch_ind_q.lock, flags);
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 0bbd0a00edcc..146365b93ff5 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -3777,7 +3777,9 @@ int ath10k_mac_tx_push_txq(struct ieee80211_hw *hw,
+ 	enum ath10k_hw_txrx_mode txmode;
+ 	enum ath10k_mac_tx_path txpath;
+ 	struct sk_buff *skb;
++	struct ieee80211_hdr *hdr;
+ 	size_t skb_len;
++	bool is_mgmt, is_presp;
+ 	int ret;
+ 
+ 	spin_lock_bh(&ar->htt.tx_lock);
+@@ -3801,6 +3803,22 @@ int ath10k_mac_tx_push_txq(struct ieee80211_hw *hw,
+ 	skb_len = skb->len;
+ 	txmode = ath10k_mac_tx_h_get_txmode(ar, vif, sta, skb);
+ 	txpath = ath10k_mac_tx_h_get_txpath(ar, skb, txmode);
++	is_mgmt = (txpath == ATH10K_MAC_TX_HTT_MGMT);
++
++	if (is_mgmt) {
++		hdr = (struct ieee80211_hdr *)skb->data;
++		is_presp = ieee80211_is_probe_resp(hdr->frame_control);
++
++		spin_lock_bh(&ar->htt.tx_lock);
++		ret = ath10k_htt_tx_mgmt_inc_pending(htt, is_mgmt, is_presp);
++
++		if (ret) {
++			ath10k_htt_tx_dec_pending(htt);
++			spin_unlock_bh(&ar->htt.tx_lock);
++			return ret;
++		}
++		spin_unlock_bh(&ar->htt.tx_lock);
++	}
+ 
+ 	ret = ath10k_mac_tx(ar, vif, sta, txmode, txpath, skb);
+ 	if (unlikely(ret)) {
+@@ -3808,6 +3826,8 @@ int ath10k_mac_tx_push_txq(struct ieee80211_hw *hw,
+ 
+ 		spin_lock_bh(&ar->htt.tx_lock);
+ 		ath10k_htt_tx_dec_pending(htt);
++		if (is_mgmt)
++			ath10k_htt_tx_mgmt_dec_pending(htt);
+ 		spin_unlock_bh(&ar->htt.tx_lock);
+ 
+ 		return ret;
+@@ -6538,7 +6558,7 @@ static int ath10k_get_survey(struct ieee80211_hw *hw, int idx,
+ 		goto exit;
+ 	}
+ 
+-	ath10k_mac_update_bss_chan_survey(ar, survey->channel);
++	ath10k_mac_update_bss_chan_survey(ar, &sband->channels[idx]);
+ 
+ 	spin_lock_bh(&ar->data_lock);
+ 	memcpy(survey, ar_survey, sizeof(*survey));
+diff --git a/drivers/net/wireless/ath/ath10k/txrx.c b/drivers/net/wireless/ath/ath10k/txrx.c
+index b29a86a26c13..28ff5cb4ec28 100644
+--- a/drivers/net/wireless/ath/ath10k/txrx.c
++++ b/drivers/net/wireless/ath/ath10k/txrx.c
+@@ -119,8 +119,6 @@ int ath10k_txrx_tx_unref(struct ath10k_htt *htt,
+ 	ieee80211_tx_status(htt->ar->hw, msdu);
+ 	/* we do not own the msdu anymore */
+ 
+-	ath10k_mac_tx_push_pending(ar);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 3ef468893b3f..f67cc198bc0e 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -180,6 +180,7 @@ enum wmi_service {
+ 	WMI_SERVICE_MESH_NON_11S,
+ 	WMI_SERVICE_PEER_STATS,
+ 	WMI_SERVICE_RESTRT_CHNL_SUPPORT,
++	WMI_SERVICE_PERIODIC_CHAN_STAT_SUPPORT,
+ 	WMI_SERVICE_TX_MODE_PUSH_ONLY,
+ 	WMI_SERVICE_TX_MODE_PUSH_PULL,
+ 	WMI_SERVICE_TX_MODE_DYNAMIC,
+@@ -305,6 +306,7 @@ enum wmi_10_4_service {
+ 	WMI_10_4_SERVICE_RESTRT_CHNL_SUPPORT,
+ 	WMI_10_4_SERVICE_PEER_STATS,
+ 	WMI_10_4_SERVICE_MESH_11S,
++	WMI_10_4_SERVICE_PERIODIC_CHAN_STAT_SUPPORT,
+ 	WMI_10_4_SERVICE_TX_MODE_PUSH_ONLY,
+ 	WMI_10_4_SERVICE_TX_MODE_PUSH_PULL,
+ 	WMI_10_4_SERVICE_TX_MODE_DYNAMIC,
+@@ -402,6 +404,7 @@ static inline char *wmi_service_name(int service_id)
+ 	SVCSTR(WMI_SERVICE_MESH_NON_11S);
+ 	SVCSTR(WMI_SERVICE_PEER_STATS);
+ 	SVCSTR(WMI_SERVICE_RESTRT_CHNL_SUPPORT);
++	SVCSTR(WMI_SERVICE_PERIODIC_CHAN_STAT_SUPPORT);
+ 	SVCSTR(WMI_SERVICE_TX_MODE_PUSH_ONLY);
+ 	SVCSTR(WMI_SERVICE_TX_MODE_PUSH_PULL);
+ 	SVCSTR(WMI_SERVICE_TX_MODE_DYNAMIC);
+@@ -652,6 +655,8 @@ static inline void wmi_10_4_svc_map(const __le32 *in, unsigned long *out,
+ 	       WMI_SERVICE_PEER_STATS, len);
+ 	SVCMAP(WMI_10_4_SERVICE_MESH_11S,
+ 	       WMI_SERVICE_MESH_11S, len);
++	SVCMAP(WMI_10_4_SERVICE_PERIODIC_CHAN_STAT_SUPPORT,
++	       WMI_SERVICE_PERIODIC_CHAN_STAT_SUPPORT, len);
+ 	SVCMAP(WMI_10_4_SERVICE_TX_MODE_PUSH_ONLY,
+ 	       WMI_SERVICE_TX_MODE_PUSH_ONLY, len);
+ 	SVCMAP(WMI_10_4_SERVICE_TX_MODE_PUSH_PULL,
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index 43f8f7d45ddb..adba3b003f55 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -564,11 +564,16 @@ static void iwl_set_hw_address_from_csr(struct iwl_trans *trans,
+ 	__le32 mac_addr0 = cpu_to_le32(iwl_read32(trans, CSR_MAC_ADDR0_STRAP));
+ 	__le32 mac_addr1 = cpu_to_le32(iwl_read32(trans, CSR_MAC_ADDR1_STRAP));
+ 
+-	/* If OEM did not fuse address - get it from OTP */
+-	if (!mac_addr0 && !mac_addr1) {
+-		mac_addr0 = cpu_to_le32(iwl_read32(trans, CSR_MAC_ADDR0_OTP));
+-		mac_addr1 = cpu_to_le32(iwl_read32(trans, CSR_MAC_ADDR1_OTP));
+-	}
++	iwl_flip_hw_address(mac_addr0, mac_addr1, data->hw_addr);
++	/*
++	 * If the OEM fused a valid address, use it instead of the one in the
++	 * OTP
++	 */
++	if (is_valid_ether_addr(data->hw_addr))
++		return;
++
++	mac_addr0 = cpu_to_le32(iwl_read32(trans, CSR_MAC_ADDR0_OTP));
++	mac_addr1 = cpu_to_le32(iwl_read32(trans, CSR_MAC_ADDR1_OTP));
+ 
+ 	iwl_flip_hw_address(mac_addr0, mac_addr1, data->hw_addr);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 7e0cdbf8bf74..794c57486e02 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1214,9 +1214,12 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
+ 	}
+ 
+ 	/* TODO: read the budget from BIOS / Platform NVM */
+-	if (iwl_mvm_is_ctdp_supported(mvm) && mvm->cooling_dev.cur_state > 0)
++	if (iwl_mvm_is_ctdp_supported(mvm) && mvm->cooling_dev.cur_state > 0) {
+ 		ret = iwl_mvm_ctdp_command(mvm, CTDP_CMD_OPERATION_START,
+ 					   mvm->cooling_dev.cur_state);
++		if (ret)
++			goto error;
++	}
+ #else
+ 	/* Initialize tx backoffs to the minimal possible */
+ 	iwl_mvm_tt_tx_backoff(mvm, 0);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+index 69c42ce45b8a..d742d27d8de0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+@@ -539,6 +539,11 @@ void iwl_mvm_mac_ctxt_release(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 			iwl_mvm_disable_txq(mvm, IWL_MVM_OFFCHANNEL_QUEUE,
+ 					    IWL_MVM_OFFCHANNEL_QUEUE,
+ 					    IWL_MAX_TID_COUNT, 0);
++		else
++			iwl_mvm_disable_txq(mvm,
++					    IWL_MVM_DQA_P2P_DEVICE_QUEUE,
++					    vif->hw_queue[0], IWL_MAX_TID_COUNT,
++					    0);
+ 
+ 		break;
+ 	case NL80211_IFTYPE_AP:
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index df6c32caa5f0..afb7eb60e454 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -598,9 +598,10 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
+ 
+ 	mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ 
+-	/* not a data packet */
+-	if (!ieee80211_is_data_qos(hdr->frame_control) ||
+-	    is_multicast_ether_addr(hdr->addr1))
++	/* not a data packet or a bar */
++	if (!ieee80211_is_back_req(hdr->frame_control) &&
++	    (!ieee80211_is_data_qos(hdr->frame_control) ||
++	     is_multicast_ether_addr(hdr->addr1)))
+ 		return false;
+ 
+ 	if (unlikely(!ieee80211_is_data_present(hdr->frame_control)))
+@@ -624,6 +625,11 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
+ 
+ 	spin_lock_bh(&buffer->lock);
+ 
++	if (ieee80211_is_back_req(hdr->frame_control)) {
++		iwl_mvm_release_frames(mvm, sta, napi, buffer, nssn);
++		goto drop;
++	}
++
+ 	/*
+ 	 * If there was a significant jump in the nssn - adjust.
+ 	 * If the SN is smaller than the NSSN it might need to first go into
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 3130b9c68a74..e933c12d80aa 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -576,9 +576,7 @@ static int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid,
+ 			ret);
+ 
+ 	/* Make sure the SCD wrptr is correctly set before reconfiguring */
+-	iwl_trans_txq_enable(mvm->trans, queue, iwl_mvm_ac_to_tx_fifo[ac],
+-			     cmd.sta_id, tid, LINK_QUAL_AGG_FRAME_LIMIT_DEF,
+-			     ssn, wdg_timeout);
++	iwl_trans_txq_enable_cfg(mvm->trans, queue, ssn, NULL, wdg_timeout);
+ 
+ 	/* TODO: Work-around SCD bug when moving back by multiples of 0x40 */
+ 
+@@ -1270,9 +1268,31 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
+ 		ret = iwl_mvm_drain_sta(mvm, mvm_sta, false);
+ 
+ 		/* If DQA is supported - the queues can be disabled now */
+-		if (iwl_mvm_is_dqa_supported(mvm))
++		if (iwl_mvm_is_dqa_supported(mvm)) {
++			u8 reserved_txq = mvm_sta->reserved_queue;
++			enum iwl_mvm_queue_status *status;
++
+ 			iwl_mvm_disable_sta_queues(mvm, vif, mvm_sta);
+ 
++			/*
++			 * If no traffic has gone through the reserved TXQ - it
++			 * is still marked as IWL_MVM_QUEUE_RESERVED, and
++			 * should be manually marked as free again
++			 */
++			spin_lock_bh(&mvm->queue_info_lock);
++			status = &mvm->queue_info[reserved_txq].status;
++			if (WARN((*status != IWL_MVM_QUEUE_RESERVED) &&
++				 (*status != IWL_MVM_QUEUE_FREE),
++				 "sta_id %d reserved txq %d status %d",
++				 mvm_sta->sta_id, reserved_txq, *status)) {
++				spin_unlock_bh(&mvm->queue_info_lock);
++				return -EINVAL;
++			}
++
++			*status = IWL_MVM_QUEUE_FREE;
++			spin_unlock_bh(&mvm->queue_info_lock);
++		}
++
+ 		if (vif->type == NL80211_IFTYPE_STATION &&
+ 		    mvmvif->ap_sta_id == mvm_sta->sta_id) {
+ 			/* if associated - we can't remove the AP STA now */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index b3a87a31de30..a0c1e3d07db5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -903,9 +903,13 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 		tid = IWL_MAX_TID_COUNT;
+ 	}
+ 
+-	if (iwl_mvm_is_dqa_supported(mvm))
++	if (iwl_mvm_is_dqa_supported(mvm)) {
+ 		txq_id = mvmsta->tid_data[tid].txq_id;
+ 
++		if (ieee80211_is_mgmt(fc))
++			tx_cmd->tid_tspec = IWL_TID_NON_QOS;
++	}
++
+ 	/* Copy MAC header from skb into command buffer */
+ 	memcpy(tx_cmd->hdr, hdr, hdrlen);
+ 
+diff --git a/drivers/net/wireless/marvell/mwifiex/join.c b/drivers/net/wireless/marvell/mwifiex/join.c
+index 1c7b00630b90..b89596c18b41 100644
+--- a/drivers/net/wireless/marvell/mwifiex/join.c
++++ b/drivers/net/wireless/marvell/mwifiex/join.c
+@@ -669,9 +669,8 @@ int mwifiex_ret_802_11_associate(struct mwifiex_private *priv,
+ 	priv->assoc_rsp_size = min(le16_to_cpu(resp->size) - S_DS_GEN,
+ 				   sizeof(priv->assoc_rsp_buf));
+ 
+-	memcpy(priv->assoc_rsp_buf, &resp->params, priv->assoc_rsp_size);
+-
+ 	assoc_rsp->a_id = cpu_to_le16(aid);
++	memcpy(priv->assoc_rsp_buf, &resp->params, priv->assoc_rsp_size);
+ 
+ 	if (status_code) {
+ 		priv->adapter->dbg.num_cmd_assoc_failure++;
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_event.c b/drivers/net/wireless/marvell/mwifiex/sta_event.c
+index a422f3306d4d..7e394d485f54 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_event.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_event.c
+@@ -708,7 +708,11 @@ int mwifiex_process_sta_event(struct mwifiex_private *priv)
+ 
+ 	case EVENT_EXT_SCAN_REPORT:
+ 		mwifiex_dbg(adapter, EVENT, "event: EXT_SCAN Report\n");
+-		if (adapter->ext_scan && !priv->scan_aborting)
++		/* We intend to skip this event during suspend, but handle
++		 * it in interface disabled case
++		 */
++		if (adapter->ext_scan && (!priv->scan_aborting ||
++					  !netif_running(priv->netdev)))
+ 			ret = mwifiex_handle_event_ext_scan_report(priv,
+ 						adapter->event_skb->data);
+ 
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c b/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
+index 7cf26c6124d1..6005e14213ca 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
+@@ -831,8 +831,10 @@ int rt2x00usb_probe(struct usb_interface *usb_intf,
+ 	rt2x00dev->anchor = devm_kmalloc(&usb_dev->dev,
+ 					sizeof(struct usb_anchor),
+ 					GFP_KERNEL);
+-	if (!rt2x00dev->anchor)
++	if (!rt2x00dev->anchor) {
++		retval = -ENOMEM;
+ 		goto exit_free_reg;
++	}
+ 
+ 	init_usb_anchor(rt2x00dev->anchor);
+ 	return 0;
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 935866fe5ec2..a8b6949a8778 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -217,6 +217,8 @@ long nvdimm_clear_poison(struct device *dev, phys_addr_t phys,
+ 		return rc;
+ 	if (cmd_rc < 0)
+ 		return cmd_rc;
++
++	nvdimm_clear_from_poison_list(nvdimm_bus, phys, len);
+ 	return clear_err.cleared;
+ }
+ EXPORT_SYMBOL_GPL(nvdimm_clear_poison);
+diff --git a/drivers/nvdimm/core.c b/drivers/nvdimm/core.c
+index 4d7bbd2df5c0..7ceba08774b6 100644
+--- a/drivers/nvdimm/core.c
++++ b/drivers/nvdimm/core.c
+@@ -547,11 +547,12 @@ void nvdimm_badblocks_populate(struct nd_region *nd_region,
+ }
+ EXPORT_SYMBOL_GPL(nvdimm_badblocks_populate);
+ 
+-static int add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length)
++static int add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length,
++			gfp_t flags)
+ {
+ 	struct nd_poison *pl;
+ 
+-	pl = kzalloc(sizeof(*pl), GFP_KERNEL);
++	pl = kzalloc(sizeof(*pl), flags);
+ 	if (!pl)
+ 		return -ENOMEM;
+ 
+@@ -567,7 +568,7 @@ static int bus_add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length)
+ 	struct nd_poison *pl;
+ 
+ 	if (list_empty(&nvdimm_bus->poison_list))
+-		return add_poison(nvdimm_bus, addr, length);
++		return add_poison(nvdimm_bus, addr, length, GFP_KERNEL);
+ 
+ 	/*
+ 	 * There is a chance this is a duplicate, check for those first.
+@@ -587,7 +588,7 @@ static int bus_add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length)
+ 	 * as any overlapping ranges will get resolved when the list is consumed
+ 	 * and converted to badblocks
+ 	 */
+-	return add_poison(nvdimm_bus, addr, length);
++	return add_poison(nvdimm_bus, addr, length, GFP_KERNEL);
+ }
+ 
+ int nvdimm_bus_add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length)
+@@ -602,6 +603,70 @@ int nvdimm_bus_add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length)
+ }
+ EXPORT_SYMBOL_GPL(nvdimm_bus_add_poison);
+ 
++void nvdimm_clear_from_poison_list(struct nvdimm_bus *nvdimm_bus,
++		phys_addr_t start, unsigned int len)
++{
++	struct list_head *poison_list = &nvdimm_bus->poison_list;
++	u64 clr_end = start + len - 1;
++	struct nd_poison *pl, *next;
++
++	nvdimm_bus_lock(&nvdimm_bus->dev);
++	WARN_ON_ONCE(list_empty(poison_list));
++
++	/*
++	 * [start, clr_end] is the poison interval being cleared.
++	 * [pl->start, pl_end] is the poison_list entry we're comparing
++	 * the above interval against. The poison list entry may need
++	 * to be modified (update either start or length), deleted, or
++	 * split into two based on the overlap characteristics
++	 */
++
++	list_for_each_entry_safe(pl, next, poison_list, list) {
++		u64 pl_end = pl->start + pl->length - 1;
++
++		/* Skip intervals with no intersection */
++		if (pl_end < start)
++			continue;
++		if (pl->start >  clr_end)
++			continue;
++		/* Delete completely overlapped poison entries */
++		if ((pl->start >= start) && (pl_end <= clr_end)) {
++			list_del(&pl->list);
++			kfree(pl);
++			continue;
++		}
++		/* Adjust start point of partially cleared entries */
++		if ((start <= pl->start) && (clr_end > pl->start)) {
++			pl->length -= clr_end - pl->start + 1;
++			pl->start = clr_end + 1;
++			continue;
++		}
++		/* Adjust pl->length for partial clearing at the tail end */
++		if ((pl->start < start) && (pl_end <= clr_end)) {
++			/* pl->start remains the same */
++			pl->length = start - pl->start;
++			continue;
++		}
++		/*
++		 * If clearing in the middle of an entry, we split it into
++		 * two by modifying the current entry to represent one half of
++		 * the split, and adding a new entry for the second half.
++		 */
++		if ((pl->start < start) && (pl_end > clr_end)) {
++			u64 new_start = clr_end + 1;
++			u64 new_len = pl_end - new_start + 1;
++
++			/* Add new entry covering the right half */
++			add_poison(nvdimm_bus, new_start, new_len, GFP_NOIO);
++			/* Adjust this entry to cover the left half */
++			pl->length = start - pl->start;
++			continue;
++		}
++	}
++	nvdimm_bus_unlock(&nvdimm_bus->dev);
++}
++EXPORT_SYMBOL_GPL(nvdimm_clear_from_poison_list);
++
+ #ifdef CONFIG_BLK_DEV_INTEGRITY
+ int nd_integrity_init(struct gendisk *disk, unsigned long meta_size)
+ {
+diff --git a/drivers/pci/host/pci-aardvark.c b/drivers/pci/host/pci-aardvark.c
+index ef9893fa3176..4f5e567fd7e0 100644
+--- a/drivers/pci/host/pci-aardvark.c
++++ b/drivers/pci/host/pci-aardvark.c
+@@ -848,7 +848,7 @@ static int advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie)
+ 	int err, res_valid = 0;
+ 	struct device *dev = &pcie->pdev->dev;
+ 	struct device_node *np = dev->of_node;
+-	struct resource_entry *win;
++	struct resource_entry *win, *tmp;
+ 	resource_size_t iobase;
+ 
+ 	INIT_LIST_HEAD(&pcie->resources);
+@@ -862,7 +862,7 @@ static int advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie)
+ 	if (err)
+ 		goto out_release_res;
+ 
+-	resource_list_for_each_entry(win, &pcie->resources) {
++	resource_list_for_each_entry_safe(win, tmp, &pcie->resources) {
+ 		struct resource *res = win->res;
+ 
+ 		switch (resource_type(res)) {
+@@ -874,9 +874,11 @@ static int advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie)
+ 					     lower_32_bits(res->start),
+ 					     OB_PCIE_IO);
+ 			err = pci_remap_iospace(res, iobase);
+-			if (err)
++			if (err) {
+ 				dev_warn(dev, "error %d: failed to map resource %pR\n",
+ 					 err, res);
++				resource_list_destroy_entry(win);
++			}
+ 			break;
+ 		case IORESOURCE_MEM:
+ 			advk_pcie_set_ob_win(pcie, 0,
+diff --git a/drivers/pci/host/pci-host-common.c b/drivers/pci/host/pci-host-common.c
+index 9d9d34e959b6..61eb4d46eb50 100644
+--- a/drivers/pci/host/pci-host-common.c
++++ b/drivers/pci/host/pci-host-common.c
+@@ -29,7 +29,7 @@ static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
+ 	int err, res_valid = 0;
+ 	struct device_node *np = dev->of_node;
+ 	resource_size_t iobase;
+-	struct resource_entry *win;
++	struct resource_entry *win, *tmp;
+ 
+ 	err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
+ 	if (err)
+@@ -39,15 +39,17 @@ static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
+ 	if (err)
+ 		return err;
+ 
+-	resource_list_for_each_entry(win, resources) {
++	resource_list_for_each_entry_safe(win, tmp, resources) {
+ 		struct resource *res = win->res;
+ 
+ 		switch (resource_type(res)) {
+ 		case IORESOURCE_IO:
+ 			err = pci_remap_iospace(res, iobase);
+-			if (err)
++			if (err) {
+ 				dev_warn(dev, "error %d: failed to map resource %pR\n",
+ 					 err, res);
++				resource_list_destroy_entry(win);
++			}
+ 			break;
+ 		case IORESOURCE_MEM:
+ 			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
+diff --git a/drivers/pci/host/pci-tegra.c b/drivers/pci/host/pci-tegra.c
+index 84d650d892e7..7ec1e800096a 100644
+--- a/drivers/pci/host/pci-tegra.c
++++ b/drivers/pci/host/pci-tegra.c
+@@ -621,7 +621,11 @@ static int tegra_pcie_setup(int nr, struct pci_sys_data *sys)
+ 	if (err < 0)
+ 		return err;
+ 
+-	pci_add_resource_offset(&sys->resources, &pcie->pio, sys->io_offset);
++	err = pci_remap_iospace(&pcie->pio, pcie->io.start);
++	if (!err)
++		pci_add_resource_offset(&sys->resources, &pcie->pio,
++					sys->io_offset);
++
+ 	pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset);
+ 	pci_add_resource_offset(&sys->resources, &pcie->prefetch,
+ 				sys->mem_offset);
+@@ -631,7 +635,6 @@ static int tegra_pcie_setup(int nr, struct pci_sys_data *sys)
+ 	if (err < 0)
+ 		return err;
+ 
+-	pci_remap_iospace(&pcie->pio, pcie->io.start);
+ 	return 1;
+ }
+ 
+diff --git a/drivers/pci/host/pci-versatile.c b/drivers/pci/host/pci-versatile.c
+index f234405770ab..b7dc07002f13 100644
+--- a/drivers/pci/host/pci-versatile.c
++++ b/drivers/pci/host/pci-versatile.c
+@@ -74,7 +74,7 @@ static int versatile_pci_parse_request_of_pci_ranges(struct device *dev,
+ 	int err, mem = 1, res_valid = 0;
+ 	struct device_node *np = dev->of_node;
+ 	resource_size_t iobase;
+-	struct resource_entry *win;
++	struct resource_entry *win, *tmp;
+ 
+ 	err = of_pci_get_host_bridge_resources(np, 0, 0xff, res, &iobase);
+ 	if (err)
+@@ -84,15 +84,17 @@ static int versatile_pci_parse_request_of_pci_ranges(struct device *dev,
+ 	if (err)
+ 		goto out_release_res;
+ 
+-	resource_list_for_each_entry(win, res) {
++	resource_list_for_each_entry_safe(win, tmp, res) {
+ 		struct resource *res = win->res;
+ 
+ 		switch (resource_type(res)) {
+ 		case IORESOURCE_IO:
+ 			err = pci_remap_iospace(res, iobase);
+-			if (err)
++			if (err) {
+ 				dev_warn(dev, "error %d: failed to map resource %pR\n",
+ 					 err, res);
++				resource_list_destroy_entry(win);
++			}
+ 			break;
+ 		case IORESOURCE_MEM:
+ 			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
+diff --git a/drivers/pci/host/pcie-designware.c b/drivers/pci/host/pcie-designware.c
+index 12afce19890b..2a500f270c01 100644
+--- a/drivers/pci/host/pcie-designware.c
++++ b/drivers/pci/host/pcie-designware.c
+@@ -436,7 +436,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 	struct resource *cfg_res;
+ 	int i, ret;
+ 	LIST_HEAD(res);
+-	struct resource_entry *win;
++	struct resource_entry *win, *tmp;
+ 
+ 	cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
+ 	if (cfg_res) {
+@@ -457,17 +457,20 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 		goto error;
+ 
+ 	/* Get the I/O and memory ranges from DT */
+-	resource_list_for_each_entry(win, &res) {
++	resource_list_for_each_entry_safe(win, tmp, &res) {
+ 		switch (resource_type(win->res)) {
+ 		case IORESOURCE_IO:
+-			pp->io = win->res;
+-			pp->io->name = "I/O";
+-			pp->io_size = resource_size(pp->io);
+-			pp->io_bus_addr = pp->io->start - win->offset;
+-			ret = pci_remap_iospace(pp->io, pp->io_base);
+-			if (ret)
++			ret = pci_remap_iospace(win->res, pp->io_base);
++			if (ret) {
+ 				dev_warn(pp->dev, "error %d: failed to map resource %pR\n",
+-					 ret, pp->io);
++					 ret, win->res);
++				resource_list_destroy_entry(win);
++			} else {
++				pp->io = win->res;
++				pp->io->name = "I/O";
++				pp->io_size = resource_size(pp->io);
++				pp->io_bus_addr = pp->io->start - win->offset;
++			}
+ 			break;
+ 		case IORESOURCE_MEM:
+ 			pp->mem = win->res;
+diff --git a/drivers/pci/host/pcie-rcar.c b/drivers/pci/host/pcie-rcar.c
+index 65db7a221509..5f7fcc971cae 100644
+--- a/drivers/pci/host/pcie-rcar.c
++++ b/drivers/pci/host/pcie-rcar.c
+@@ -945,7 +945,7 @@ static int rcar_pcie_parse_request_of_pci_ranges(struct rcar_pcie *pci)
+ 	struct device *dev = pci->dev;
+ 	struct device_node *np = dev->of_node;
+ 	resource_size_t iobase;
+-	struct resource_entry *win;
++	struct resource_entry *win, *tmp;
+ 
+ 	err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pci->resources, &iobase);
+ 	if (err)
+@@ -955,14 +955,17 @@ static int rcar_pcie_parse_request_of_pci_ranges(struct rcar_pcie *pci)
+ 	if (err)
+ 		goto out_release_res;
+ 
+-	resource_list_for_each_entry(win, &pci->resources) {
++	resource_list_for_each_entry_safe(win, tmp, &pci->resources) {
+ 		struct resource *res = win->res;
+ 
+ 		if (resource_type(res) == IORESOURCE_IO) {
+ 			err = pci_remap_iospace(res, iobase);
+-			if (err)
++			if (err) {
+ 				dev_warn(dev, "error %d: failed to map resource %pR\n",
+ 					 err, res);
++
++				resource_list_destroy_entry(win);
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 51c42d746883..775c88303017 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -156,7 +156,7 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev,
+ 	spin_lock_irqsave(&pctrl->lock, flags);
+ 
+ 	val = readl(pctrl->regs + g->ctl_reg);
+-	val &= mask;
++	val &= ~mask;
+ 	val |= i << g->mux_bit;
+ 	writel(val, pctrl->regs + g->ctl_reg);
+ 
+diff --git a/drivers/power/bq24257_charger.c b/drivers/power/bq24257_charger.c
+index 1fea2c7ef97f..6fc31bdc639b 100644
+--- a/drivers/power/bq24257_charger.c
++++ b/drivers/power/bq24257_charger.c
+@@ -1068,6 +1068,12 @@ static int bq24257_probe(struct i2c_client *client,
+ 		return ret;
+ 	}
+ 
++	ret = bq24257_power_supply_init(bq);
++	if (ret < 0) {
++		dev_err(dev, "Failed to register power supply\n");
++		return ret;
++	}
++
+ 	ret = devm_request_threaded_irq(dev, client->irq, NULL,
+ 					bq24257_irq_handler_thread,
+ 					IRQF_TRIGGER_FALLING |
+@@ -1078,12 +1084,6 @@ static int bq24257_probe(struct i2c_client *client,
+ 		return ret;
+ 	}
+ 
+-	ret = bq24257_power_supply_init(bq);
+-	if (ret < 0) {
+-		dev_err(dev, "Failed to register power supply\n");
+-		return ret;
+-	}
+-
+ 	ret = sysfs_create_group(&bq->charger->dev.kobj, &bq24257_attr_group);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Can't create sysfs entries\n");
+diff --git a/drivers/s390/char/con3270.c b/drivers/s390/char/con3270.c
+index 6b1577c73fe7..285b4006f44b 100644
+--- a/drivers/s390/char/con3270.c
++++ b/drivers/s390/char/con3270.c
+@@ -124,7 +124,12 @@ con3270_create_status(struct con3270 *cp)
+ static void
+ con3270_update_string(struct con3270 *cp, struct string *s, int nr)
+ {
+-	if (s->len >= cp->view.cols - 5)
++	if (s->len < 4) {
++		/* This indicates a bug, but printing a warning would
++		 * cause a deadlock. */
++		return;
++	}
++	if (s->string[s->len - 4] != TO_RA)
+ 		return;
+ 	raw3270_buffer_address(cp->view.dev, s->string + s->len - 3,
+ 			       cp->view.cols * (nr + 1));
+@@ -460,11 +465,11 @@ con3270_cline_end(struct con3270 *cp)
+ 		cp->cline->len + 4 : cp->view.cols;
+ 	s = con3270_alloc_string(cp, size);
+ 	memcpy(s->string, cp->cline->string, cp->cline->len);
+-	if (s->len < cp->view.cols - 5) {
++	if (cp->cline->len < cp->view.cols - 5) {
+ 		s->string[s->len - 4] = TO_RA;
+ 		s->string[s->len - 1] = 0;
+ 	} else {
+-		while (--size > cp->cline->len)
++		while (--size >= cp->cline->len)
+ 			s->string[size] = cp->view.ascebc[' '];
+ 	}
+ 	/* Replace cline with allocated line s and reset cline. */
+diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c
+index 940e725bde1e..11674698b36d 100644
+--- a/drivers/s390/cio/chsc.c
++++ b/drivers/s390/cio/chsc.c
+@@ -95,12 +95,13 @@ struct chsc_ssd_area {
+ int chsc_get_ssd_info(struct subchannel_id schid, struct chsc_ssd_info *ssd)
+ {
+ 	struct chsc_ssd_area *ssd_area;
++	unsigned long flags;
+ 	int ccode;
+ 	int ret;
+ 	int i;
+ 	int mask;
+ 
+-	spin_lock_irq(&chsc_page_lock);
++	spin_lock_irqsave(&chsc_page_lock, flags);
+ 	memset(chsc_page, 0, PAGE_SIZE);
+ 	ssd_area = chsc_page;
+ 	ssd_area->request.length = 0x0010;
+@@ -144,7 +145,7 @@ int chsc_get_ssd_info(struct subchannel_id schid, struct chsc_ssd_info *ssd)
+ 			ssd->fla[i] = ssd_area->fla[i];
+ 	}
+ out:
+-	spin_unlock_irq(&chsc_page_lock);
++	spin_unlock_irqrestore(&chsc_page_lock, flags);
+ 	return ret;
+ }
+ 
+@@ -832,9 +833,10 @@ int __chsc_do_secm(struct channel_subsystem *css, int enable)
+ 		u32 fmt : 4;
+ 		u32 : 16;
+ 	} __attribute__ ((packed)) *secm_area;
++	unsigned long flags;
+ 	int ret, ccode;
+ 
+-	spin_lock_irq(&chsc_page_lock);
++	spin_lock_irqsave(&chsc_page_lock, flags);
+ 	memset(chsc_page, 0, PAGE_SIZE);
+ 	secm_area = chsc_page;
+ 	secm_area->request.length = 0x0050;
+@@ -864,7 +866,7 @@ int __chsc_do_secm(struct channel_subsystem *css, int enable)
+ 		CIO_CRW_EVENT(2, "chsc: secm failed (rc=%04x)\n",
+ 			      secm_area->response.code);
+ out:
+-	spin_unlock_irq(&chsc_page_lock);
++	spin_unlock_irqrestore(&chsc_page_lock, flags);
+ 	return ret;
+ }
+ 
+@@ -992,6 +994,7 @@ chsc_initialize_cmg_chars(struct channel_path *chp, u8 cmcv,
+ 
+ int chsc_get_channel_measurement_chars(struct channel_path *chp)
+ {
++	unsigned long flags;
+ 	int ccode, ret;
+ 
+ 	struct {
+@@ -1021,7 +1024,7 @@ int chsc_get_channel_measurement_chars(struct channel_path *chp)
+ 	if (!css_chsc_characteristics.scmc || !css_chsc_characteristics.secm)
+ 		return -EINVAL;
+ 
+-	spin_lock_irq(&chsc_page_lock);
++	spin_lock_irqsave(&chsc_page_lock, flags);
+ 	memset(chsc_page, 0, PAGE_SIZE);
+ 	scmc_area = chsc_page;
+ 	scmc_area->request.length = 0x0010;
+@@ -1053,7 +1056,7 @@ int chsc_get_channel_measurement_chars(struct channel_path *chp)
+ 	chsc_initialize_cmg_chars(chp, scmc_area->cmcv,
+ 				  (struct cmg_chars *) &scmc_area->data);
+ out:
+-	spin_unlock_irq(&chsc_page_lock);
++	spin_unlock_irqrestore(&chsc_page_lock, flags);
+ 	return ret;
+ }
+ 
+@@ -1134,6 +1137,7 @@ struct css_chsc_char css_chsc_characteristics;
+ int __init
+ chsc_determine_css_characteristics(void)
+ {
++	unsigned long flags;
+ 	int result;
+ 	struct {
+ 		struct chsc_header request;
+@@ -1146,7 +1150,7 @@ chsc_determine_css_characteristics(void)
+ 		u32 chsc_char[508];
+ 	} __attribute__ ((packed)) *scsc_area;
+ 
+-	spin_lock_irq(&chsc_page_lock);
++	spin_lock_irqsave(&chsc_page_lock, flags);
+ 	memset(chsc_page, 0, PAGE_SIZE);
+ 	scsc_area = chsc_page;
+ 	scsc_area->request.length = 0x0010;
+@@ -1168,7 +1172,7 @@ chsc_determine_css_characteristics(void)
+ 		CIO_CRW_EVENT(2, "chsc: scsc failed (rc=%04x)\n",
+ 			      scsc_area->response.code);
+ exit:
+-	spin_unlock_irq(&chsc_page_lock);
++	spin_unlock_irqrestore(&chsc_page_lock, flags);
+ 	return result;
+ }
+ 
+diff --git a/drivers/scsi/cxlflash/main.c b/drivers/scsi/cxlflash/main.c
+index 661bb94e2548..228b99ee0483 100644
+--- a/drivers/scsi/cxlflash/main.c
++++ b/drivers/scsi/cxlflash/main.c
+@@ -823,17 +823,6 @@ static void notify_shutdown(struct cxlflash_cfg *cfg, bool wait)
+ }
+ 
+ /**
+- * cxlflash_shutdown() - shutdown handler
+- * @pdev:	PCI device associated with the host.
+- */
+-static void cxlflash_shutdown(struct pci_dev *pdev)
+-{
+-	struct cxlflash_cfg *cfg = pci_get_drvdata(pdev);
+-
+-	notify_shutdown(cfg, false);
+-}
+-
+-/**
+  * cxlflash_remove() - PCI entry point to tear down host
+  * @pdev:	PCI device associated with the host.
+  *
+@@ -844,6 +833,11 @@ static void cxlflash_remove(struct pci_dev *pdev)
+ 	struct cxlflash_cfg *cfg = pci_get_drvdata(pdev);
+ 	ulong lock_flags;
+ 
++	if (!pci_is_enabled(pdev)) {
++		pr_debug("%s: Device is disabled\n", __func__);
++		return;
++	}
++
+ 	/* If a Task Management Function is active, wait for it to complete
+ 	 * before continuing with remove.
+ 	 */
+@@ -2685,7 +2679,7 @@ static struct pci_driver cxlflash_driver = {
+ 	.id_table = cxlflash_pci_table,
+ 	.probe = cxlflash_probe,
+ 	.remove = cxlflash_remove,
+-	.shutdown = cxlflash_shutdown,
++	.shutdown = cxlflash_remove,
+ 	.err_handler = &cxlflash_err_handler,
+ };
+ 
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index cd91a684c945..4cb79902e7a8 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -4701,7 +4701,7 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
+ 			    le16_to_cpu(mpi_reply->DevHandle));
+ 		mpt3sas_trigger_scsi(ioc, data.skey, data.asc, data.ascq);
+ 
+-		if (!(ioc->logging_level & MPT_DEBUG_REPLY) &&
++		if ((ioc->logging_level & MPT_DEBUG_REPLY) &&
+ 		     ((scmd->sense_buffer[2] == UNIT_ATTENTION) ||
+ 		     (scmd->sense_buffer[2] == MEDIUM_ERROR) ||
+ 		     (scmd->sense_buffer[2] == HARDWARE_ERROR)))
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 9e9dadb52b3d..eec5e3f6e06b 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -760,7 +760,6 @@ static int dspi_remove(struct platform_device *pdev)
+ 	/* Disconnect from the SPI framework */
+ 	clk_disable_unprepare(dspi->clk);
+ 	spi_unregister_master(dspi->master);
+-	spi_master_put(dspi->master);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/staging/android/ion/Kconfig b/drivers/staging/android/ion/Kconfig
+index 19c1572f1525..800245eac390 100644
+--- a/drivers/staging/android/ion/Kconfig
++++ b/drivers/staging/android/ion/Kconfig
+@@ -36,6 +36,7 @@ config ION_TEGRA
+ config ION_HISI
+ 	tristate "Ion for Hisilicon"
+ 	depends on ARCH_HISI && ION
++	select ION_OF
+ 	help
+ 	  Choose this option if you wish to use ion on Hisilicon Platform.
+ 
+diff --git a/drivers/staging/ks7010/ks_hostif.c b/drivers/staging/ks7010/ks_hostif.c
+index a8822fe2bd60..f4cee811cabd 100644
+--- a/drivers/staging/ks7010/ks_hostif.c
++++ b/drivers/staging/ks7010/ks_hostif.c
+@@ -69,16 +69,20 @@ inline u32 get_DWORD(struct ks_wlan_private *priv)
+ 	return data;
+ }
+ 
+-void ks_wlan_hw_wakeup_task(struct work_struct *work)
++static void ks_wlan_hw_wakeup_task(struct work_struct *work)
+ {
+ 	struct ks_wlan_private *priv =
+ 	    container_of(work, struct ks_wlan_private, ks_wlan_wakeup_task);
+ 	int ps_status = atomic_read(&priv->psstatus.status);
++	long time_left;
+ 
+ 	if (ps_status == PS_SNOOZE) {
+ 		ks_wlan_hw_wakeup_request(priv);
+-		if (!wait_for_completion_interruptible_timeout(&priv->psstatus.wakeup_wait, HZ / 50)) {	/* 20ms timeout */
+-			DPRINTK(1, "wake up timeout !!!\n");
++		time_left = wait_for_completion_interruptible_timeout(
++				&priv->psstatus.wakeup_wait,
++				msecs_to_jiffies(20));
++		if (time_left <= 0) {
++			DPRINTK(1, "wake up timeout or interrupted !!!\n");
+ 			schedule_work(&priv->ks_wlan_wakeup_task);
+ 			return;
+ 		}
+@@ -1505,7 +1509,7 @@ void hostif_infrastructure_set_request(struct ks_wlan_private *priv)
+ 	ks_wlan_hw_tx(priv, pp, hif_align_size(sizeof(*pp)), NULL, NULL, NULL);
+ }
+ 
+-void hostif_infrastructure_set2_request(struct ks_wlan_private *priv)
++static void hostif_infrastructure_set2_request(struct ks_wlan_private *priv)
+ {
+ 	struct hostif_infrastructure_set2_request_t *pp;
+ 	uint16_t capability;
+diff --git a/drivers/staging/rtl8188eu/core/rtw_cmd.c b/drivers/staging/rtl8188eu/core/rtw_cmd.c
+index 77485235c615..32d3a9c07aa3 100644
+--- a/drivers/staging/rtl8188eu/core/rtw_cmd.c
++++ b/drivers/staging/rtl8188eu/core/rtw_cmd.c
+@@ -670,13 +670,13 @@ u8 rtw_addbareq_cmd(struct adapter *padapter, u8 tid, u8 *addr)
+ 	u8	res = _SUCCESS;
+ 
+ 
+-	ph2c = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL);
++	ph2c = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC);
+ 	if (!ph2c) {
+ 		res = _FAIL;
+ 		goto exit;
+ 	}
+ 
+-	paddbareq_parm = kzalloc(sizeof(struct addBaReq_parm), GFP_KERNEL);
++	paddbareq_parm = kzalloc(sizeof(struct addBaReq_parm), GFP_ATOMIC);
+ 	if (!paddbareq_parm) {
+ 		kfree(ph2c);
+ 		res = _FAIL;
+diff --git a/drivers/staging/sm750fb/ddk750_mode.c b/drivers/staging/sm750fb/ddk750_mode.c
+index ccb4e067661a..e29d4bd5dcec 100644
+--- a/drivers/staging/sm750fb/ddk750_mode.c
++++ b/drivers/staging/sm750fb/ddk750_mode.c
+@@ -63,7 +63,7 @@ static unsigned long displayControlAdjust_SM750LE(mode_parameter_t *pModeParam,
+ 	dispControl |= (CRT_DISPLAY_CTRL_CRTSELECT | CRT_DISPLAY_CTRL_RGBBIT);
+ 
+ 	/* Set bit 14 of display controller */
+-	dispControl = DISPLAY_CTRL_CLOCK_PHASE;
++	dispControl |= DISPLAY_CTRL_CLOCK_PHASE;
+ 
+ 	POKE32(CRT_DISPLAY_CTRL, dispControl);
+ 
+diff --git a/drivers/uio/uio_dmem_genirq.c b/drivers/uio/uio_dmem_genirq.c
+index 915facbf552e..e1134a4d97f3 100644
+--- a/drivers/uio/uio_dmem_genirq.c
++++ b/drivers/uio/uio_dmem_genirq.c
+@@ -229,7 +229,7 @@ static int uio_dmem_genirq_probe(struct platform_device *pdev)
+ 		++uiomem;
+ 	}
+ 
+-	priv->dmem_region_start = i;
++	priv->dmem_region_start = uiomem - &uioinfo->mem[0];
+ 	priv->num_dmem_regions = pdata->num_dynamic_regions;
+ 
+ 	for (i = 0; i < pdata->num_dynamic_regions; ++i) {
+diff --git a/fs/9p/acl.c b/fs/9p/acl.c
+index 5b6a1743ea17..b3c2cc79c20d 100644
+--- a/fs/9p/acl.c
++++ b/fs/9p/acl.c
+@@ -276,32 +276,26 @@ static int v9fs_xattr_set_acl(const struct xattr_handler *handler,
+ 	switch (handler->flags) {
+ 	case ACL_TYPE_ACCESS:
+ 		if (acl) {
+-			umode_t mode = inode->i_mode;
+-			retval = posix_acl_equiv_mode(acl, &mode);
+-			if (retval < 0)
++			struct iattr iattr;
++
++			retval = posix_acl_update_mode(inode, &iattr.ia_mode, &acl);
++			if (retval)
+ 				goto err_out;
+-			else {
+-				struct iattr iattr;
+-				if (retval == 0) {
+-					/*
+-					 * ACL can be represented
+-					 * by the mode bits. So don't
+-					 * update ACL.
+-					 */
+-					acl = NULL;
+-					value = NULL;
+-					size = 0;
+-				}
+-				/* Updte the mode bits */
+-				iattr.ia_mode = ((mode & S_IALLUGO) |
+-						 (inode->i_mode & ~S_IALLUGO));
+-				iattr.ia_valid = ATTR_MODE;
+-				/* FIXME should we update ctime ?
+-				 * What is the following setxattr update the
+-				 * mode ?
++			if (!acl) {
++				/*
++				 * ACL can be represented
++				 * by the mode bits. So don't
++				 * update ACL.
+ 				 */
+-				v9fs_vfs_setattr_dotl(dentry, &iattr);
++				value = NULL;
++				size = 0;
+ 			}
++			iattr.ia_valid = ATTR_MODE;
++			/* FIXME should we update ctime ?
++			 * What is the following setxattr update the
++			 * mode ?
++			 */
++			v9fs_vfs_setattr_dotl(dentry, &iattr);
+ 		}
+ 		break;
+ 	case ACL_TYPE_DEFAULT:
+diff --git a/fs/btrfs/acl.c b/fs/btrfs/acl.c
+index 53bb7af4e5f0..247b8dfaf6e5 100644
+--- a/fs/btrfs/acl.c
++++ b/fs/btrfs/acl.c
+@@ -79,11 +79,9 @@ static int __btrfs_set_acl(struct btrfs_trans_handle *trans,
+ 	case ACL_TYPE_ACCESS:
+ 		name = XATTR_NAME_POSIX_ACL_ACCESS;
+ 		if (acl) {
+-			ret = posix_acl_equiv_mode(acl, &inode->i_mode);
+-			if (ret < 0)
++			ret = posix_acl_update_mode(inode, &inode->i_mode, &acl);
++			if (ret)
+ 				return ret;
+-			if (ret == 0)
+-				acl = NULL;
+ 		}
+ 		ret = 0;
+ 		break;
+diff --git a/fs/ceph/acl.c b/fs/ceph/acl.c
+index 4f67227f69a5..d0b6b342dff9 100644
+--- a/fs/ceph/acl.c
++++ b/fs/ceph/acl.c
+@@ -95,11 +95,9 @@ int ceph_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+ 	case ACL_TYPE_ACCESS:
+ 		name = XATTR_NAME_POSIX_ACL_ACCESS;
+ 		if (acl) {
+-			ret = posix_acl_equiv_mode(acl, &new_mode);
+-			if (ret < 0)
++			ret = posix_acl_update_mode(inode, &new_mode, &acl);
++			if (ret)
+ 				goto out;
+-			if (ret == 0)
+-				acl = NULL;
+ 		}
+ 		break;
+ 	case ACL_TYPE_DEFAULT:
+diff --git a/fs/ext2/acl.c b/fs/ext2/acl.c
+index 42f1d1814083..e725aa0890e0 100644
+--- a/fs/ext2/acl.c
++++ b/fs/ext2/acl.c
+@@ -190,15 +190,11 @@ ext2_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+ 		case ACL_TYPE_ACCESS:
+ 			name_index = EXT2_XATTR_INDEX_POSIX_ACL_ACCESS;
+ 			if (acl) {
+-				error = posix_acl_equiv_mode(acl, &inode->i_mode);
+-				if (error < 0)
++				error = posix_acl_update_mode(inode, &inode->i_mode, &acl);
++				if (error)
+ 					return error;
+-				else {
+-					inode->i_ctime = CURRENT_TIME_SEC;
+-					mark_inode_dirty(inode);
+-					if (error == 0)
+-						acl = NULL;
+-				}
++				inode->i_ctime = CURRENT_TIME_SEC;
++				mark_inode_dirty(inode);
+ 			}
+ 			break;
+ 
+diff --git a/fs/ext4/acl.c b/fs/ext4/acl.c
+index c6601a476c02..dfa519979038 100644
+--- a/fs/ext4/acl.c
++++ b/fs/ext4/acl.c
+@@ -193,15 +193,11 @@ __ext4_set_acl(handle_t *handle, struct inode *inode, int type,
+ 	case ACL_TYPE_ACCESS:
+ 		name_index = EXT4_XATTR_INDEX_POSIX_ACL_ACCESS;
+ 		if (acl) {
+-			error = posix_acl_equiv_mode(acl, &inode->i_mode);
+-			if (error < 0)
++			error = posix_acl_update_mode(inode, &inode->i_mode, &acl);
++			if (error)
+ 				return error;
+-			else {
+-				inode->i_ctime = ext4_current_time(inode);
+-				ext4_mark_inode_dirty(handle, inode);
+-				if (error == 0)
+-					acl = NULL;
+-			}
++			inode->i_ctime = ext4_current_time(inode);
++			ext4_mark_inode_dirty(handle, inode);
+ 		}
+ 		break;
+ 
+diff --git a/fs/f2fs/acl.c b/fs/f2fs/acl.c
+index 4dcc9e28dc5c..31344247ce89 100644
+--- a/fs/f2fs/acl.c
++++ b/fs/f2fs/acl.c
+@@ -210,12 +210,10 @@ static int __f2fs_set_acl(struct inode *inode, int type,
+ 	case ACL_TYPE_ACCESS:
+ 		name_index = F2FS_XATTR_INDEX_POSIX_ACL_ACCESS;
+ 		if (acl) {
+-			error = posix_acl_equiv_mode(acl, &inode->i_mode);
+-			if (error < 0)
++			error = posix_acl_update_mode(inode, &inode->i_mode, &acl);
++			if (error)
+ 				return error;
+ 			set_acl_inode(inode, inode->i_mode);
+-			if (error == 0)
+-				acl = NULL;
+ 		}
+ 		break;
+ 
+diff --git a/fs/gfs2/acl.c b/fs/gfs2/acl.c
+index 363ba9e9d8d0..2524807ee070 100644
+--- a/fs/gfs2/acl.c
++++ b/fs/gfs2/acl.c
+@@ -92,17 +92,11 @@ int __gfs2_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+ 	if (type == ACL_TYPE_ACCESS) {
+ 		umode_t mode = inode->i_mode;
+ 
+-		error = posix_acl_equiv_mode(acl, &mode);
+-		if (error < 0)
++		error = posix_acl_update_mode(inode, &inode->i_mode, &acl);
++		if (error)
+ 			return error;
+-
+-		if (error == 0)
+-			acl = NULL;
+-
+-		if (mode != inode->i_mode) {
+-			inode->i_mode = mode;
++		if (mode != inode->i_mode)
+ 			mark_inode_dirty(inode);
+-		}
+ 	}
+ 
+ 	if (acl) {
+diff --git a/fs/hfsplus/posix_acl.c b/fs/hfsplus/posix_acl.c
+index ab7ea2506b4d..9b92058a1240 100644
+--- a/fs/hfsplus/posix_acl.c
++++ b/fs/hfsplus/posix_acl.c
+@@ -65,8 +65,8 @@ int hfsplus_set_posix_acl(struct inode *inode, struct posix_acl *acl,
+ 	case ACL_TYPE_ACCESS:
+ 		xattr_name = XATTR_NAME_POSIX_ACL_ACCESS;
+ 		if (acl) {
+-			err = posix_acl_equiv_mode(acl, &inode->i_mode);
+-			if (err < 0)
++			err = posix_acl_update_mode(inode, &inode->i_mode, &acl);
++			if (err)
+ 				return err;
+ 		}
+ 		err = 0;
+diff --git a/fs/jffs2/acl.c b/fs/jffs2/acl.c
+index bc2693d56298..2a0f2a1044c1 100644
+--- a/fs/jffs2/acl.c
++++ b/fs/jffs2/acl.c
+@@ -233,9 +233,10 @@ int jffs2_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+ 	case ACL_TYPE_ACCESS:
+ 		xprefix = JFFS2_XPREFIX_ACL_ACCESS;
+ 		if (acl) {
+-			umode_t mode = inode->i_mode;
+-			rc = posix_acl_equiv_mode(acl, &mode);
+-			if (rc < 0)
++			umode_t mode;
++
++			rc = posix_acl_update_mode(inode, &mode, &acl);
++			if (rc)
+ 				return rc;
+ 			if (inode->i_mode != mode) {
+ 				struct iattr attr;
+@@ -247,8 +248,6 @@ int jffs2_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+ 				if (rc < 0)
+ 					return rc;
+ 			}
+-			if (rc == 0)
+-				acl = NULL;
+ 		}
+ 		break;
+ 	case ACL_TYPE_DEFAULT:
+diff --git a/fs/jfs/acl.c b/fs/jfs/acl.c
+index 21fa92ba2c19..3a1e1554a4e3 100644
+--- a/fs/jfs/acl.c
++++ b/fs/jfs/acl.c
+@@ -78,13 +78,11 @@ static int __jfs_set_acl(tid_t tid, struct inode *inode, int type,
+ 	case ACL_TYPE_ACCESS:
+ 		ea_name = XATTR_NAME_POSIX_ACL_ACCESS;
+ 		if (acl) {
+-			rc = posix_acl_equiv_mode(acl, &inode->i_mode);
+-			if (rc < 0)
++			rc = posix_acl_update_mode(inode, &inode->i_mode, &acl);
++			if (rc)
+ 				return rc;
+ 			inode->i_ctime = CURRENT_TIME;
+ 			mark_inode_dirty(inode);
+-			if (rc == 0)
+-				acl = NULL;
+ 		}
+ 		break;
+ 	case ACL_TYPE_DEFAULT:
+diff --git a/fs/ocfs2/acl.c b/fs/ocfs2/acl.c
+index 2162434728c0..164307b99405 100644
+--- a/fs/ocfs2/acl.c
++++ b/fs/ocfs2/acl.c
+@@ -241,13 +241,11 @@ int ocfs2_set_acl(handle_t *handle,
+ 	case ACL_TYPE_ACCESS:
+ 		name_index = OCFS2_XATTR_INDEX_POSIX_ACL_ACCESS;
+ 		if (acl) {
+-			umode_t mode = inode->i_mode;
+-			ret = posix_acl_equiv_mode(acl, &mode);
+-			if (ret < 0)
+-				return ret;
++			umode_t mode;
+ 
+-			if (ret == 0)
+-				acl = NULL;
++			ret = posix_acl_update_mode(inode, &mode, &acl);
++			if (ret)
++				return ret;
+ 
+ 			ret = ocfs2_acl_set_mode(inode, di_bh,
+ 						 handle, mode);
+diff --git a/fs/orangefs/acl.c b/fs/orangefs/acl.c
+index 28f2195cd798..7a3754488312 100644
+--- a/fs/orangefs/acl.c
++++ b/fs/orangefs/acl.c
+@@ -73,14 +73,11 @@ int orangefs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+ 	case ACL_TYPE_ACCESS:
+ 		name = XATTR_NAME_POSIX_ACL_ACCESS;
+ 		if (acl) {
+-			umode_t mode = inode->i_mode;
+-			/*
+-			 * can we represent this with the traditional file
+-			 * mode permission bits?
+-			 */
+-			error = posix_acl_equiv_mode(acl, &mode);
+-			if (error < 0) {
+-				gossip_err("%s: posix_acl_equiv_mode err: %d\n",
++			umode_t mode;
++
++			error = posix_acl_update_mode(inode, &mode, &acl);
++			if (error) {
++				gossip_err("%s: posix_acl_update_mode err: %d\n",
+ 					   __func__,
+ 					   error);
+ 				return error;
+@@ -90,8 +87,6 @@ int orangefs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+ 				SetModeFlag(orangefs_inode);
+ 			inode->i_mode = mode;
+ 			mark_inode_dirty_sync(inode);
+-			if (error == 0)
+-				acl = NULL;
+ 		}
+ 		break;
+ 	case ACL_TYPE_DEFAULT:
+diff --git a/fs/posix_acl.c b/fs/posix_acl.c
+index 59d47ab0791a..bfc3ec388322 100644
+--- a/fs/posix_acl.c
++++ b/fs/posix_acl.c
+@@ -626,6 +626,37 @@ no_mem:
+ }
+ EXPORT_SYMBOL_GPL(posix_acl_create);
+ 
++/**
++ * posix_acl_update_mode  -  update mode in set_acl
++ *
++ * Update the file mode when setting an ACL: compute the new file permission
++ * bits based on the ACL.  In addition, if the ACL is equivalent to the new
++ * file mode, set *acl to NULL to indicate that no ACL should be set.
++ *
++ * As with chmod, clear the setgid bit if the caller is not in the owning group
++ * or capable of CAP_FSETID (see inode_change_ok).
++ *
++ * Called from set_acl inode operations.
++ */
++int posix_acl_update_mode(struct inode *inode, umode_t *mode_p,
++			  struct posix_acl **acl)
++{
++	umode_t mode = inode->i_mode;
++	int error;
++
++	error = posix_acl_equiv_mode(*acl, &mode);
++	if (error < 0)
++		return error;
++	if (error == 0)
++		*acl = NULL;
++	if (!in_group_p(inode->i_gid) &&
++	    !capable_wrt_inode_uidgid(inode, CAP_FSETID))
++		mode &= ~S_ISGID;
++	*mode_p = mode;
++	return 0;
++}
++EXPORT_SYMBOL(posix_acl_update_mode);
++
+ /*
+  * Fix up the uids and gids in posix acl extended attributes in place.
+  */
+diff --git a/fs/reiserfs/xattr_acl.c b/fs/reiserfs/xattr_acl.c
+index dbed42f755e0..27376681c640 100644
+--- a/fs/reiserfs/xattr_acl.c
++++ b/fs/reiserfs/xattr_acl.c
+@@ -242,13 +242,9 @@ __reiserfs_set_acl(struct reiserfs_transaction_handle *th, struct inode *inode,
+ 	case ACL_TYPE_ACCESS:
+ 		name = XATTR_NAME_POSIX_ACL_ACCESS;
+ 		if (acl) {
+-			error = posix_acl_equiv_mode(acl, &inode->i_mode);
+-			if (error < 0)
++			error = posix_acl_update_mode(inode, &inode->i_mode, &acl);
++			if (error)
+ 				return error;
+-			else {
+-				if (error == 0)
+-					acl = NULL;
+-			}
+ 		}
+ 		break;
+ 	case ACL_TYPE_DEFAULT:
+diff --git a/fs/xfs/xfs_acl.c b/fs/xfs/xfs_acl.c
+index b6e527b8eccb..8a0dec89ca56 100644
+--- a/fs/xfs/xfs_acl.c
++++ b/fs/xfs/xfs_acl.c
+@@ -257,16 +257,11 @@ xfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+ 		return error;
+ 
+ 	if (type == ACL_TYPE_ACCESS) {
+-		umode_t mode = inode->i_mode;
+-		error = posix_acl_equiv_mode(acl, &mode);
+-
+-		if (error <= 0) {
+-			acl = NULL;
+-
+-			if (error < 0)
+-				return error;
+-		}
++		umode_t mode;
+ 
++		error = posix_acl_update_mode(inode, &mode, &acl);
++		if (error)
++			return error;
+ 		error = xfs_set_mode(inode, mode);
+ 		if (error)
+ 			return error;
+diff --git a/include/drm/drmP.h b/include/drm/drmP.h
+index d3778652e462..988903a59007 100644
+--- a/include/drm/drmP.h
++++ b/include/drm/drmP.h
+@@ -938,7 +938,8 @@ static inline int drm_debugfs_remove_files(const struct drm_info_list *files,
+ #endif
+ 
+ extern struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
+-		struct drm_gem_object *obj, int flags);
++					    struct drm_gem_object *obj,
++					    int flags);
+ extern int drm_gem_prime_handle_to_fd(struct drm_device *dev,
+ 		struct drm_file *file_priv, uint32_t handle, uint32_t flags,
+ 		int *prime_fd);
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index c26d4638f665..fe99e6f956e2 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -450,8 +450,8 @@ static inline pgoff_t basepage_index(struct page *page)
+ 	return __basepage_index(page);
+ }
+ 
+-extern void dissolve_free_huge_pages(unsigned long start_pfn,
+-				     unsigned long end_pfn);
++extern int dissolve_free_huge_pages(unsigned long start_pfn,
++				    unsigned long end_pfn);
+ static inline bool hugepage_migration_supported(struct hstate *h)
+ {
+ #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
+@@ -518,7 +518,7 @@ static inline pgoff_t basepage_index(struct page *page)
+ {
+ 	return page->index;
+ }
+-#define dissolve_free_huge_pages(s, e)	do {} while (0)
++#define dissolve_free_huge_pages(s, e)	0
+ #define hugepage_migration_supported(h)	false
+ 
+ static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
+diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
+index b519e137b9b7..bbfce62a0bd7 100644
+--- a/include/linux/libnvdimm.h
++++ b/include/linux/libnvdimm.h
+@@ -129,6 +129,8 @@ static inline struct nd_blk_region_desc *to_blk_region_desc(
+ }
+ 
+ int nvdimm_bus_add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length);
++void nvdimm_clear_from_poison_list(struct nvdimm_bus *nvdimm_bus,
++		phys_addr_t start, unsigned int len);
+ struct nvdimm_bus *nvdimm_bus_register(struct device *parent,
+ 		struct nvdimm_bus_descriptor *nfit_desc);
+ void nvdimm_bus_unregister(struct nvdimm_bus *nvdimm_bus);
+diff --git a/include/linux/posix_acl.h b/include/linux/posix_acl.h
+index d5d3d741f028..bf1046d0397b 100644
+--- a/include/linux/posix_acl.h
++++ b/include/linux/posix_acl.h
+@@ -93,6 +93,7 @@ extern int set_posix_acl(struct inode *, int, struct posix_acl *);
+ extern int posix_acl_chmod(struct inode *, umode_t);
+ extern int posix_acl_create(struct inode *, umode_t *, struct posix_acl **,
+ 		struct posix_acl **);
++extern int posix_acl_update_mode(struct inode *, umode_t *, struct posix_acl **);
+ 
+ extern int simple_set_acl(struct inode *, struct posix_acl *, int);
+ extern int simple_acl_create(struct inode *, struct inode *);
+diff --git a/kernel/irq/generic-chip.c b/kernel/irq/generic-chip.c
+index abd286afbd27..a4775f3451b9 100644
+--- a/kernel/irq/generic-chip.c
++++ b/kernel/irq/generic-chip.c
+@@ -411,8 +411,29 @@ int irq_map_generic_chip(struct irq_domain *d, unsigned int virq,
+ }
+ EXPORT_SYMBOL_GPL(irq_map_generic_chip);
+ 
++static void irq_unmap_generic_chip(struct irq_domain *d, unsigned int virq)
++{
++	struct irq_data *data = irq_domain_get_irq_data(d, virq);
++	struct irq_domain_chip_generic *dgc = d->gc;
++	unsigned int hw_irq = data->hwirq;
++	struct irq_chip_generic *gc;
++	int irq_idx;
++
++	gc = irq_get_domain_generic_chip(d, hw_irq);
++	if (!gc)
++		return;
++
++	irq_idx = hw_irq % dgc->irqs_per_chip;
++
++	clear_bit(irq_idx, &gc->installed);
++	irq_domain_set_info(d, virq, hw_irq, &no_irq_chip, NULL, NULL, NULL,
++			    NULL);
++
++}
++
+ struct irq_domain_ops irq_generic_chip_ops = {
+ 	.map	= irq_map_generic_chip,
++	.unmap  = irq_unmap_generic_chip,
+ 	.xlate	= irq_domain_xlate_onetwocell,
+ };
+ EXPORT_SYMBOL_GPL(irq_generic_chip_ops);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 603bdd01ec2c..770d83eb3f48 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1437,22 +1437,32 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
+ 
+ /*
+  * Dissolve a given free hugepage into free buddy pages. This function does
+- * nothing for in-use (including surplus) hugepages.
++ * nothing for in-use (including surplus) hugepages. Returns -EBUSY if the
++ * number of free hugepages would be reduced below the number of reserved
++ * hugepages.
+  */
+-static void dissolve_free_huge_page(struct page *page)
++static int dissolve_free_huge_page(struct page *page)
+ {
++	int rc = 0;
++
+ 	spin_lock(&hugetlb_lock);
+ 	if (PageHuge(page) && !page_count(page)) {
+ 		struct page *head = compound_head(page);
+ 		struct hstate *h = page_hstate(head);
+ 		int nid = page_to_nid(head);
++		if (h->free_huge_pages - h->resv_huge_pages == 0) {
++			rc = -EBUSY;
++			goto out;
++		}
+ 		list_del(&head->lru);
+ 		h->free_huge_pages--;
+ 		h->free_huge_pages_node[nid]--;
+ 		h->max_huge_pages--;
+ 		update_and_free_page(h, head);
+ 	}
++out:
+ 	spin_unlock(&hugetlb_lock);
++	return rc;
+ }
+ 
+ /*
+@@ -1460,16 +1470,28 @@ static void dissolve_free_huge_page(struct page *page)
+  * make specified memory blocks removable from the system.
+  * Note that this will dissolve a free gigantic hugepage completely, if any
+  * part of it lies within the given range.
++ * Also note that if dissolve_free_huge_page() returns with an error, all
++ * free hugepages that were dissolved before that error are lost.
+  */
+-void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
++int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
+ {
+ 	unsigned long pfn;
++	struct page *page;
++	int rc = 0;
+ 
+ 	if (!hugepages_supported())
+-		return;
++		return rc;
++
++	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
++		page = pfn_to_page(pfn);
++		if (PageHuge(page) && !page_count(page)) {
++			rc = dissolve_free_huge_page(page);
++			if (rc)
++				break;
++		}
++	}
+ 
+-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
+-		dissolve_free_huge_page(pfn_to_page(pfn));
++	return rc;
+ }
+ 
+ /*
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 9d29ba0f7192..962927309b6e 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1945,7 +1945,9 @@ repeat:
+ 	 * dissolve free hugepages in the memory block before doing offlining
+ 	 * actually in order to make hugetlbfs's object counting consistent.
+ 	 */
+-	dissolve_free_huge_pages(start_pfn, end_pfn);
++	ret = dissolve_free_huge_pages(start_pfn, end_pfn);
++	if (ret)
++		goto failed_removal;
+ 	/* check again */
+ 	offlined_pages = check_pages_isolated(start_pfn, end_pfn);
+ 	if (offlined_pages < 0) {
+diff --git a/sound/soc/intel/boards/bxt_da7219_max98357a.c b/sound/soc/intel/boards/bxt_da7219_max98357a.c
+index 3774b117d365..49b65d481949 100644
+--- a/sound/soc/intel/boards/bxt_da7219_max98357a.c
++++ b/sound/soc/intel/boards/bxt_da7219_max98357a.c
+@@ -255,7 +255,7 @@ static struct snd_soc_ops broxton_da7219_ops = {
+ /* broxton digital audio interface glue - connects codec <--> CPU */
+ static struct snd_soc_dai_link broxton_dais[] = {
+ 	/* Front End DAI links */
+-	[BXT_DPCM_AUDIO_PB]
++	[BXT_DPCM_AUDIO_PB] =
+ 	{
+ 		.name = "Bxt Audio Port",
+ 		.stream_name = "Audio",
+@@ -271,7 +271,7 @@ static struct snd_soc_dai_link broxton_dais[] = {
+ 		.dpcm_playback = 1,
+ 		.ops = &broxton_da7219_fe_ops,
+ 	},
+-	[BXT_DPCM_AUDIO_CP]
++	[BXT_DPCM_AUDIO_CP] =
+ 	{
+ 		.name = "Bxt Audio Capture Port",
+ 		.stream_name = "Audio Record",
+@@ -286,7 +286,7 @@ static struct snd_soc_dai_link broxton_dais[] = {
+ 		.dpcm_capture = 1,
+ 		.ops = &broxton_da7219_fe_ops,
+ 	},
+-	[BXT_DPCM_AUDIO_REF_CP]
++	[BXT_DPCM_AUDIO_REF_CP] =
+ 	{
+ 		.name = "Bxt Audio Reference cap",
+ 		.stream_name = "Refcap",
+@@ -300,7 +300,7 @@ static struct snd_soc_dai_link broxton_dais[] = {
+ 		.nonatomic = 1,
+ 		.dynamic = 1,
+ 	},
+-	[BXT_DPCM_AUDIO_HDMI1_PB]
++	[BXT_DPCM_AUDIO_HDMI1_PB] =
+ 	{
+ 		.name = "Bxt HDMI Port1",
+ 		.stream_name = "Hdmi1",
+@@ -313,7 +313,7 @@ static struct snd_soc_dai_link broxton_dais[] = {
+ 		.nonatomic = 1,
+ 		.dynamic = 1,
+ 	},
+-	[BXT_DPCM_AUDIO_HDMI2_PB]
++	[BXT_DPCM_AUDIO_HDMI2_PB] =
+ 	{
+ 		.name = "Bxt HDMI Port2",
+ 		.stream_name = "Hdmi2",
+@@ -326,7 +326,7 @@ static struct snd_soc_dai_link broxton_dais[] = {
+ 		.nonatomic = 1,
+ 		.dynamic = 1,
+ 	},
+-	[BXT_DPCM_AUDIO_HDMI3_PB]
++	[BXT_DPCM_AUDIO_HDMI3_PB] =
+ 	{
+ 		.name = "Bxt HDMI Port3",
+ 		.stream_name = "Hdmi3",
+diff --git a/sound/soc/intel/boards/bxt_rt298.c b/sound/soc/intel/boards/bxt_rt298.c
+index 253d7bfbf511..d610bdca1608 100644
+--- a/sound/soc/intel/boards/bxt_rt298.c
++++ b/sound/soc/intel/boards/bxt_rt298.c
+@@ -271,7 +271,7 @@ static const struct snd_soc_ops broxton_rt286_fe_ops = {
+ /* broxton digital audio interface glue - connects codec <--> CPU */
+ static struct snd_soc_dai_link broxton_rt298_dais[] = {
+ 	/* Front End DAI links */
+-	[BXT_DPCM_AUDIO_PB]
++	[BXT_DPCM_AUDIO_PB] =
+ 	{
+ 		.name = "Bxt Audio Port",
+ 		.stream_name = "Audio",
+@@ -286,7 +286,7 @@ static struct snd_soc_dai_link broxton_rt298_dais[] = {
+ 		.dpcm_playback = 1,
+ 		.ops = &broxton_rt286_fe_ops,
+ 	},
+-	[BXT_DPCM_AUDIO_CP]
++	[BXT_DPCM_AUDIO_CP] =
+ 	{
+ 		.name = "Bxt Audio Capture Port",
+ 		.stream_name = "Audio Record",
+@@ -300,7 +300,7 @@ static struct snd_soc_dai_link broxton_rt298_dais[] = {
+ 		.dpcm_capture = 1,
+ 		.ops = &broxton_rt286_fe_ops,
+ 	},
+-	[BXT_DPCM_AUDIO_REF_CP]
++	[BXT_DPCM_AUDIO_REF_CP] =
+ 	{
+ 		.name = "Bxt Audio Reference cap",
+ 		.stream_name = "refcap",
+@@ -313,7 +313,7 @@ static struct snd_soc_dai_link broxton_rt298_dais[] = {
+ 		.nonatomic = 1,
+ 		.dynamic = 1,
+ 	},
+-	[BXT_DPCM_AUDIO_DMIC_CP]
++	[BXT_DPCM_AUDIO_DMIC_CP] =
+ 	{
+ 		.name = "Bxt Audio DMIC cap",
+ 		.stream_name = "dmiccap",
+@@ -327,7 +327,7 @@ static struct snd_soc_dai_link broxton_rt298_dais[] = {
+ 		.dynamic = 1,
+ 		.ops = &broxton_dmic_ops,
+ 	},
+-	[BXT_DPCM_AUDIO_HDMI1_PB]
++	[BXT_DPCM_AUDIO_HDMI1_PB] =
+ 	{
+ 		.name = "Bxt HDMI Port1",
+ 		.stream_name = "Hdmi1",
+@@ -340,7 +340,7 @@ static struct snd_soc_dai_link broxton_rt298_dais[] = {
+ 		.nonatomic = 1,
+ 		.dynamic = 1,
+ 	},
+-	[BXT_DPCM_AUDIO_HDMI2_PB]
++	[BXT_DPCM_AUDIO_HDMI2_PB] =
+ 	{
+ 		.name = "Bxt HDMI Port2",
+ 		.stream_name = "Hdmi2",
+@@ -353,7 +353,7 @@ static struct snd_soc_dai_link broxton_rt298_dais[] = {
+ 		.nonatomic = 1,
+ 		.dynamic = 1,
+ 	},
+-	[BXT_DPCM_AUDIO_HDMI3_PB]
++	[BXT_DPCM_AUDIO_HDMI3_PB] =
+ 	{
+ 		.name = "Bxt HDMI Port3",
+ 		.stream_name = "Hdmi3",
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index d908ff8f9755..801082fdc3e0 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -823,6 +823,7 @@ static int dapm_create_or_share_kcontrol(struct snd_soc_dapm_widget *w,
+ 			case snd_soc_dapm_switch:
+ 			case snd_soc_dapm_mixer:
+ 			case snd_soc_dapm_pga:
++			case snd_soc_dapm_out_drv:
+ 				wname_in_long_name = true;
+ 				kcname_in_long_name = true;
+ 				break;
+@@ -3049,6 +3050,9 @@ int snd_soc_dapm_get_volsw(struct snd_kcontrol *kcontrol,
+ 	}
+ 	mutex_unlock(&card->dapm_mutex);
+ 
++	if (ret)
++		return ret;
++
+ 	if (invert)
+ 		ucontrol->value.integer.value[0] = max - val;
+ 	else
+@@ -3200,7 +3204,7 @@ int snd_soc_dapm_put_enum_double(struct snd_kcontrol *kcontrol,
+ 	if (e->shift_l != e->shift_r) {
+ 		if (item[1] > e->items)
+ 			return -EINVAL;
+-		val |= snd_soc_enum_item_to_val(e, item[1]) << e->shift_l;
++		val |= snd_soc_enum_item_to_val(e, item[1]) << e->shift_r;
+ 		mask |= e->mask << e->shift_r;
+ 	}
+ 
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index ee7f15aa46fc..34069076bf8e 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -1475,6 +1475,7 @@ widget:
+ 	if (widget == NULL) {
+ 		dev_err(tplg->dev, "ASoC: failed to create widget %s controls\n",
+ 			w->name);
++		ret = -ENOMEM;
+ 		goto hdr_err;
+ 	}
+ 
+diff --git a/tools/perf/perf-sys.h b/tools/perf/perf-sys.h
+index 7ed72a475c57..e4b717e9eb6c 100644
+--- a/tools/perf/perf-sys.h
++++ b/tools/perf/perf-sys.h
+@@ -20,7 +20,6 @@
+ #endif
+ 
+ #ifdef __powerpc__
+-#include "../../arch/powerpc/include/uapi/asm/unistd.h"
+ #define CPUINFO_PROC	{"cpu"}
+ #endif
+ 
+diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
+index 13d414384739..7aee954b307f 100644
+--- a/tools/perf/ui/browsers/hists.c
++++ b/tools/perf/ui/browsers/hists.c
+@@ -1091,7 +1091,6 @@ static int __hpp__slsmg_color_printf(struct perf_hpp *hpp, const char *fmt, ...)
+ 	ret = scnprintf(hpp->buf, hpp->size, fmt, len, percent);
+ 	ui_browser__printf(arg->b, "%s", hpp->buf);
+ 
+-	advance_hpp(hpp, ret);
+ 	return ret;
+ }
+ 
+@@ -2046,6 +2045,7 @@ void hist_browser__init(struct hist_browser *browser,
+ 			struct hists *hists)
+ {
+ 	struct perf_hpp_fmt *fmt;
++	struct perf_hpp_list_node *node;
+ 
+ 	browser->hists			= hists;
+ 	browser->b.refresh		= hist_browser__refresh;
+@@ -2058,6 +2058,11 @@ void hist_browser__init(struct hist_browser *browser,
+ 		perf_hpp__reset_width(fmt, hists);
+ 		++browser->b.columns;
+ 	}
++	/* hierarchy entries have their own hpp list */
++	list_for_each_entry(node, &hists->hpp_formats, list) {
++		perf_hpp_list__for_each_format(&node->hpp, fmt)
++			perf_hpp__reset_width(fmt, hists);
++	}
+ }
+ 
+ struct hist_browser *hist_browser__new(struct hists *hists)
+diff --git a/tools/perf/ui/stdio/hist.c b/tools/perf/ui/stdio/hist.c
+index f04a63112079..d0cae75408ff 100644
+--- a/tools/perf/ui/stdio/hist.c
++++ b/tools/perf/ui/stdio/hist.c
+@@ -628,14 +628,6 @@ hists__fprintf_hierarchy_headers(struct hists *hists,
+ 				 struct perf_hpp *hpp,
+ 				 FILE *fp)
+ {
+-	struct perf_hpp_list_node *fmt_node;
+-	struct perf_hpp_fmt *fmt;
+-
+-	list_for_each_entry(fmt_node, &hists->hpp_formats, list) {
+-		perf_hpp_list__for_each_format(&fmt_node->hpp, fmt)
+-			perf_hpp__reset_width(fmt, hists);
+-	}
+-
+ 	return print_hierarchy_header(hists, hpp, symbol_conf.field_sep, fp);
+ }
+ 
+@@ -714,6 +706,7 @@ size_t hists__fprintf(struct hists *hists, bool show_header, int max_rows,
+ 		      bool use_callchain)
+ {
+ 	struct perf_hpp_fmt *fmt;
++	struct perf_hpp_list_node *node;
+ 	struct rb_node *nd;
+ 	size_t ret = 0;
+ 	const char *sep = symbol_conf.field_sep;
+@@ -726,6 +719,11 @@ size_t hists__fprintf(struct hists *hists, bool show_header, int max_rows,
+ 
+ 	hists__for_each_format(hists, fmt)
+ 		perf_hpp__reset_width(fmt, hists);
++	/* hierarchy entries have their own hpp list */
++	list_for_each_entry(node, &hists->hpp_formats, list) {
++		perf_hpp_list__for_each_format(&node->hpp, fmt)
++			perf_hpp__reset_width(fmt, hists);
++	}
+ 
+ 	if (symbol_conf.col_width_list_str)
+ 		perf_hpp__set_user_width(symbol_conf.col_width_list_str);
+diff --git a/tools/perf/util/data-convert-bt.c b/tools/perf/util/data-convert-bt.c
+index 4f979bb27b6c..7123f4de32cc 100644
+--- a/tools/perf/util/data-convert-bt.c
++++ b/tools/perf/util/data-convert-bt.c
+@@ -437,7 +437,7 @@ add_bpf_output_values(struct bt_ctf_event_class *event_class,
+ 	int ret;
+ 
+ 	if (nr_elements * sizeof(u32) != raw_size)
+-		pr_warning("Incorrect raw_size (%u) in bpf output event, skip %lu bytes\n",
++		pr_warning("Incorrect raw_size (%u) in bpf output event, skip %zu bytes\n",
+ 			   raw_size, nr_elements * sizeof(u32) - raw_size);
+ 
+ 	len_type = bt_ctf_event_class_get_field_by_name(event_class, "raw_len");
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index a811c13a74d6..f77b3167585c 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -1113,9 +1113,8 @@ new_symbol:
+ 	 * For misannotated, zeroed, ASM function sizes.
+ 	 */
+ 	if (nr > 0) {
+-		if (!symbol_conf.allow_aliases)
+-			symbols__fixup_duplicate(&dso->symbols[map->type]);
+ 		symbols__fixup_end(&dso->symbols[map->type]);
++		symbols__fixup_duplicate(&dso->symbols[map->type]);
+ 		if (kmap) {
+ 			/*
+ 			 * We need to fixup this here too because we create new
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 37e8d20ae03e..f29f336ed17b 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -152,6 +152,9 @@ void symbols__fixup_duplicate(struct rb_root *symbols)
+ 	struct rb_node *nd;
+ 	struct symbol *curr, *next;
+ 
++	if (symbol_conf.allow_aliases)
++		return;
++
+ 	nd = rb_first(symbols);
+ 
+ 	while (nd) {
+@@ -1234,8 +1237,8 @@ int __dso__load_kallsyms(struct dso *dso, const char *filename,
+ 	if (kallsyms__delta(map, filename, &delta))
+ 		return -1;
+ 
+-	symbols__fixup_duplicate(&dso->symbols[map->type]);
+ 	symbols__fixup_end(&dso->symbols[map->type]);
++	symbols__fixup_duplicate(&dso->symbols[map->type]);
+ 
+ 	if (dso->kernel == DSO_TYPE_GUEST_KERNEL)
+ 		dso->symtab_type = DSO_BINARY_TYPE__GUEST_KALLSYMS;



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-11-04 17:17 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-11-04 17:17 UTC (permalink / raw
  To: gentoo-commits

commit:     b83b53d35e700f57f880fc71bfe91ff2b06f1560
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov  4 17:17:26 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov  4 17:17:26 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b83b53d3

BFQ v8r4 for kernel version 4.8

 0000_README                                                             | 2 +-
 ...3-for-4.patch1 => 5004_Turn-BFQ-v7r11-into-BFQ-v8r4-for-4.8.0.patch1 | 0
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/0000_README b/0000_README
index cd373fa..ef025ef 100644
--- a/0000_README
+++ b/0000_README
@@ -107,7 +107,7 @@ Patch:  5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r11-for-4.8.patch
 From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
 Desc:   BFQ v7r11 patch 3 for 4.8: Early Queue Merge (EQM)
 
-Patch:  5004_blkck-bfq-turn-BFQ-v7r11-for-4.8.0-into-BFQ-v8r3-for-4.patch1
+Patch:  5004_Turn-BFQ-v7r11-into-BFQ-v8r4-for-4.8.0.patch1
 From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
 Desc:   BFQ v8r3 patch 4 for 4.8: Early Queue Merge (EQM)
 

diff --git a/5004_blkck-bfq-turn-BFQ-v7r11-for-4.8.0-into-BFQ-v8r3-for-4.patch1 b/5004_Turn-BFQ-v7r11-into-BFQ-v8r4-for-4.8.0.patch1
similarity index 100%
rename from 5004_blkck-bfq-turn-BFQ-v7r11-for-4.8.0-into-BFQ-v8r3-for-4.patch1
rename to 5004_Turn-BFQ-v7r11-into-BFQ-v8r4-for-4.8.0.patch1



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-11-10 16:37 Alice Ferrazzi
  0 siblings, 0 replies; 26+ messages in thread
From: Alice Ferrazzi @ 2016-11-10 16:37 UTC (permalink / raw
  To: gentoo-commits

commit:     a75f22819733b4f5c7e9081b4805a4f378a873d7
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 10 16:36:53 2016 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Nov 10 16:36:53 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a75f2281

Linux patch 4.8.7

 0000_README            |    4 +
 1006_linux-4.8.7.patch | 4331 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4335 insertions(+)

diff --git a/0000_README b/0000_README
index ef025ef..9cd8633 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-4.8.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.6
 
+Patch:  1006_linux-4.8.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-4.8.7.patch b/1006_linux-4.8.7.patch
new file mode 100644
index 0000000..652f761
--- /dev/null
+++ b/1006_linux-4.8.7.patch
@@ -0,0 +1,4331 @@
+diff --git a/Documentation/device-mapper/dm-raid.txt b/Documentation/device-mapper/dm-raid.txt
+index e5b6497116f4..c75b64a85859 100644
+--- a/Documentation/device-mapper/dm-raid.txt
++++ b/Documentation/device-mapper/dm-raid.txt
+@@ -309,3 +309,4 @@ Version History
+ 	with a reshape in progress.
+ 1.9.0   Add support for RAID level takeover/reshape/region size
+ 	and set size reduction.
++1.9.1   Fix activation of existing RAID 4/10 mapped devices
+diff --git a/Makefile b/Makefile
+index b249529204cd..4d0f28cb481d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arm/boot/dts/ste-snowball.dts b/arch/arm/boot/dts/ste-snowball.dts
+index b3df1c60d465..386eee6de232 100644
+--- a/arch/arm/boot/dts/ste-snowball.dts
++++ b/arch/arm/boot/dts/ste-snowball.dts
+@@ -239,14 +239,25 @@
+ 			arm,primecell-periphid = <0x10480180>;
+ 			max-frequency = <100000000>;
+ 			bus-width = <4>;
++			cap-sd-highspeed;
+ 			cap-mmc-highspeed;
++			sd-uhs-sdr12;
++			sd-uhs-sdr25;
++			/* All direction control is used */
++			st,sig-dir-cmd;
++			st,sig-dir-dat0;
++			st,sig-dir-dat2;
++			st,sig-dir-dat31;
++			st,sig-pin-fbclk;
++			full-pwr-cycle;
+ 			vmmc-supply = <&ab8500_ldo_aux3_reg>;
+ 			vqmmc-supply = <&vmmci>;
+ 			pinctrl-names = "default", "sleep";
+ 			pinctrl-0 = <&sdi0_default_mode>;
+ 			pinctrl-1 = <&sdi0_sleep_mode>;
+ 
+-			cd-gpios  = <&gpio6 26 GPIO_ACTIVE_LOW>; // 218
++			/* GPIO218 MMC_CD */
++			cd-gpios  = <&gpio6 26 GPIO_ACTIVE_LOW>;
+ 
+ 			status = "okay";
+ 		};
+@@ -549,7 +560,7 @@
+ 					/* VMMCI level-shifter enable */
+ 					snowball_cfg3 {
+ 						pins = "GPIO217_AH12";
+-						ste,config = <&gpio_out_lo>;
++						ste,config = <&gpio_out_hi>;
+ 					};
+ 					/* VMMCI level-shifter voltage select */
+ 					snowball_cfg4 {
+diff --git a/arch/arm/mach-mvebu/Kconfig b/arch/arm/mach-mvebu/Kconfig
+index f9b6bd306cfe..541647f57192 100644
+--- a/arch/arm/mach-mvebu/Kconfig
++++ b/arch/arm/mach-mvebu/Kconfig
+@@ -23,6 +23,7 @@ config MACH_MVEBU_V7
+ 	select CACHE_L2X0
+ 	select ARM_CPU_SUSPEND
+ 	select MACH_MVEBU_ANY
++	select MVEBU_CLK_COREDIV
+ 
+ config MACH_ARMADA_370
+ 	bool "Marvell Armada 370 boards"
+@@ -32,7 +33,6 @@ config MACH_ARMADA_370
+ 	select CPU_PJ4B
+ 	select MACH_MVEBU_V7
+ 	select PINCTRL_ARMADA_370
+-	select MVEBU_CLK_COREDIV
+ 	help
+ 	  Say 'Y' here if you want your kernel to support boards based
+ 	  on the Marvell Armada 370 SoC with device tree.
+@@ -50,7 +50,6 @@ config MACH_ARMADA_375
+ 	select HAVE_SMP
+ 	select MACH_MVEBU_V7
+ 	select PINCTRL_ARMADA_375
+-	select MVEBU_CLK_COREDIV
+ 	help
+ 	  Say 'Y' here if you want your kernel to support boards based
+ 	  on the Marvell Armada 375 SoC with device tree.
+@@ -68,7 +67,6 @@ config MACH_ARMADA_38X
+ 	select HAVE_SMP
+ 	select MACH_MVEBU_V7
+ 	select PINCTRL_ARMADA_38X
+-	select MVEBU_CLK_COREDIV
+ 	help
+ 	  Say 'Y' here if you want your kernel to support boards based
+ 	  on the Marvell Armada 380/385 SoC with device tree.
+diff --git a/arch/arm/mm/abort-lv4t.S b/arch/arm/mm/abort-lv4t.S
+index 6d8e8e3365d1..4cdfab31a0b6 100644
+--- a/arch/arm/mm/abort-lv4t.S
++++ b/arch/arm/mm/abort-lv4t.S
+@@ -7,7 +7,7 @@
+  *	   : r4 = aborted context pc
+  *	   : r5 = aborted context psr
+  *
+- * Returns : r4-r5, r10-r11, r13 preserved
++ * Returns : r4-r5, r9-r11, r13 preserved
+  *
+  * Purpose : obtain information about current aborted instruction.
+  * Note: we read user space.  This means we might cause a data
+@@ -48,7 +48,10 @@ ENTRY(v4t_late_abort)
+ /* c */	b	do_DataAbort			@ ldc	rd, [rn], #m	@ Same as ldr	rd, [rn], #m
+ /* d */	b	do_DataAbort			@ ldc	rd, [rn, #m]
+ /* e */	b	.data_unknown
+-/* f */
++/* f */	b	.data_unknown
++
++.data_unknown_r9:
++	ldr	r9, [sp], #4
+ .data_unknown:	@ Part of jumptable
+ 	mov	r0, r4
+ 	mov	r1, r8
+@@ -57,6 +60,7 @@ ENTRY(v4t_late_abort)
+ .data_arm_ldmstm:
+ 	tst	r8, #1 << 21			@ check writeback bit
+ 	beq	do_DataAbort			@ no writeback -> no fixup
++	str	r9, [sp, #-4]!
+ 	mov	r7, #0x11
+ 	orr	r7, r7, #0x1100
+ 	and	r6, r8, r7
+@@ -75,12 +79,14 @@ ENTRY(v4t_late_abort)
+ 	subne	r7, r7, r6, lsl #2		@ Undo increment
+ 	addeq	r7, r7, r6, lsl #2		@ Undo decrement
+ 	str	r7, [r2, r9, lsr #14]		@ Put register 'Rn'
++	ldr	r9, [sp], #4
+ 	b	do_DataAbort
+ 
+ .data_arm_lateldrhpre:
+ 	tst	r8, #1 << 21			@ Check writeback bit
+ 	beq	do_DataAbort			@ No writeback -> no fixup
+ .data_arm_lateldrhpost:
++	str	r9, [sp, #-4]!
+ 	and	r9, r8, #0x00f			@ get Rm / low nibble of immediate value
+ 	tst	r8, #1 << 22			@ if (immediate offset)
+ 	andne	r6, r8, #0xf00			@ { immediate high nibble
+@@ -93,6 +99,7 @@ ENTRY(v4t_late_abort)
+ 	subne	r7, r7, r6			@ Undo incrmenet
+ 	addeq	r7, r7, r6			@ Undo decrement
+ 	str	r7, [r2, r9, lsr #14]		@ Put register 'Rn'
++	ldr	r9, [sp], #4
+ 	b	do_DataAbort
+ 
+ .data_arm_lateldrpreconst:
+@@ -101,12 +108,14 @@ ENTRY(v4t_late_abort)
+ .data_arm_lateldrpostconst:
+ 	movs	r6, r8, lsl #20			@ Get offset
+ 	beq	do_DataAbort			@ zero -> no fixup
++	str	r9, [sp, #-4]!
+ 	and	r9, r8, #15 << 16		@ Extract 'n' from instruction
+ 	ldr	r7, [r2, r9, lsr #14]		@ Get register 'Rn'
+ 	tst	r8, #1 << 23			@ Check U bit
+ 	subne	r7, r7, r6, lsr #20		@ Undo increment
+ 	addeq	r7, r7, r6, lsr #20		@ Undo decrement
+ 	str	r7, [r2, r9, lsr #14]		@ Put register 'Rn'
++	ldr	r9, [sp], #4
+ 	b	do_DataAbort
+ 
+ .data_arm_lateldrprereg:
+@@ -115,6 +124,7 @@ ENTRY(v4t_late_abort)
+ .data_arm_lateldrpostreg:
+ 	and	r7, r8, #15			@ Extract 'm' from instruction
+ 	ldr	r6, [r2, r7, lsl #2]		@ Get register 'Rm'
++	str	r9, [sp, #-4]!
+ 	mov	r9, r8, lsr #7			@ get shift count
+ 	ands	r9, r9, #31
+ 	and	r7, r8, #0x70			@ get shift type
+@@ -126,33 +136,33 @@ ENTRY(v4t_late_abort)
+ 	b	.data_arm_apply_r6_and_rn
+ 	b	.data_arm_apply_r6_and_rn	@ 1: LSL #0
+ 	nop
+-	b	.data_unknown			@ 2: MUL?
++	b	.data_unknown_r9		@ 2: MUL?
+ 	nop
+-	b	.data_unknown			@ 3: MUL?
++	b	.data_unknown_r9		@ 3: MUL?
+ 	nop
+ 	mov	r6, r6, lsr r9			@ 4: LSR #!0
+ 	b	.data_arm_apply_r6_and_rn
+ 	mov	r6, r6, lsr #32			@ 5: LSR #32
+ 	b	.data_arm_apply_r6_and_rn
+-	b	.data_unknown			@ 6: MUL?
++	b	.data_unknown_r9		@ 6: MUL?
+ 	nop
+-	b	.data_unknown			@ 7: MUL?
++	b	.data_unknown_r9		@ 7: MUL?
+ 	nop
+ 	mov	r6, r6, asr r9			@ 8: ASR #!0
+ 	b	.data_arm_apply_r6_and_rn
+ 	mov	r6, r6, asr #32			@ 9: ASR #32
+ 	b	.data_arm_apply_r6_and_rn
+-	b	.data_unknown			@ A: MUL?
++	b	.data_unknown_r9		@ A: MUL?
+ 	nop
+-	b	.data_unknown			@ B: MUL?
++	b	.data_unknown_r9		@ B: MUL?
+ 	nop
+ 	mov	r6, r6, ror r9			@ C: ROR #!0
+ 	b	.data_arm_apply_r6_and_rn
+ 	mov	r6, r6, rrx			@ D: RRX
+ 	b	.data_arm_apply_r6_and_rn
+-	b	.data_unknown			@ E: MUL?
++	b	.data_unknown_r9		@ E: MUL?
+ 	nop
+-	b	.data_unknown			@ F: MUL?
++	b	.data_unknown_r9		@ F: MUL?
+ 
+ .data_thumb_abort:
+ 	ldrh	r8, [r4]			@ read instruction
+@@ -190,6 +200,7 @@ ENTRY(v4t_late_abort)
+ .data_thumb_pushpop:
+ 	tst	r8, #1 << 10
+ 	beq	.data_unknown
++	str	r9, [sp, #-4]!
+ 	and	r6, r8, #0x55			@ hweight8(r8) + R bit
+ 	and	r9, r8, #0xaa
+ 	add	r6, r6, r9, lsr #1
+@@ -204,9 +215,11 @@ ENTRY(v4t_late_abort)
+ 	addeq	r7, r7, r6, lsl #2		@ increment SP if PUSH
+ 	subne	r7, r7, r6, lsl #2		@ decrement SP if POP
+ 	str	r7, [r2, #13 << 2]
++	ldr	r9, [sp], #4
+ 	b	do_DataAbort
+ 
+ .data_thumb_ldmstm:
++	str	r9, [sp, #-4]!
+ 	and	r6, r8, #0x55			@ hweight8(r8)
+ 	and	r9, r8, #0xaa
+ 	add	r6, r6, r9, lsr #1
+@@ -219,4 +232,5 @@ ENTRY(v4t_late_abort)
+ 	and	r6, r6, #15			@ number of regs to transfer
+ 	sub	r7, r7, r6, lsl #2		@ always decrement
+ 	str	r7, [r2, r9, lsr #6]
++	ldr	r9, [sp], #4
+ 	b	do_DataAbort
+diff --git a/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi b/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi
+index da31bbbbb59e..399271853aad 100644
+--- a/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi
+@@ -131,7 +131,7 @@
+ 				#address-cells = <0x1>;
+ 				#size-cells = <0x0>;
+ 				cell-index = <1>;
+-				clocks = <&cpm_syscon0 0 3>;
++				clocks = <&cpm_syscon0 1 21>;
+ 				status = "disabled";
+ 			};
+ 
+diff --git a/arch/h8300/include/asm/thread_info.h b/arch/h8300/include/asm/thread_info.h
+index b408fe660cf8..3cef06875f5c 100644
+--- a/arch/h8300/include/asm/thread_info.h
++++ b/arch/h8300/include/asm/thread_info.h
+@@ -31,7 +31,6 @@ struct thread_info {
+ 	int		   cpu;			/* cpu we're on */
+ 	int		   preempt_count;	/* 0 => preemptable, <0 => BUG */
+ 	mm_segment_t		addr_limit;
+-	struct restart_block restart_block;
+ };
+ 
+ /*
+@@ -44,9 +43,6 @@ struct thread_info {
+ 	.cpu =		0,			\
+ 	.preempt_count = INIT_PREEMPT_COUNT,	\
+ 	.addr_limit	= KERNEL_DS,		\
+-	.restart_block	= {			\
+-		.fn = do_no_restart_syscall,	\
+-	},					\
+ }
+ 
+ #define init_thread_info	(init_thread_union.thread_info)
+diff --git a/arch/h8300/kernel/signal.c b/arch/h8300/kernel/signal.c
+index ad1f81f574e5..7138303cbbf2 100644
+--- a/arch/h8300/kernel/signal.c
++++ b/arch/h8300/kernel/signal.c
+@@ -79,7 +79,7 @@ restore_sigcontext(struct sigcontext *usc, int *pd0)
+ 	unsigned int er0;
+ 
+ 	/* Always make any pending restarted system calls return -EINTR */
+-	current_thread_info()->restart_block.fn = do_no_restart_syscall;
++	current->restart_block.fn = do_no_restart_syscall;
+ 
+ 	/* restore passed registers */
+ #define COPY(r)  do { err |= get_user(regs->r, &usc->sc_##r); } while (0)
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index b54bcadd8aec..45799ef232df 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -279,7 +279,10 @@ struct kvm_vcpu_arch {
+ 	/* Host KSEG0 address of the EI/DI offset */
+ 	void *kseg0_commpage;
+ 
+-	u32 io_gpr;		/* GPR used as IO source/target */
++	/* Resume PC after MMIO completion */
++	unsigned long io_pc;
++	/* GPR used as IO source/target */
++	u32 io_gpr;
+ 
+ 	struct hrtimer comparecount_timer;
+ 	/* Count timer control KVM register */
+@@ -301,8 +304,6 @@ struct kvm_vcpu_arch {
+ 	/* Bitmask of pending exceptions to be cleared */
+ 	unsigned long pending_exceptions_clr;
+ 
+-	u32 pending_load_cause;
+-
+ 	/* Save/Restore the entryhi register when are are preempted/scheduled back in */
+ 	unsigned long preempt_entryhi;
+ 
+diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
+index ca1cc30c0891..1958910b75c0 100644
+--- a/arch/mips/kernel/relocate.c
++++ b/arch/mips/kernel/relocate.c
+@@ -200,7 +200,7 @@ static inline __init unsigned long get_random_boot(void)
+ 
+ #if defined(CONFIG_USE_OF)
+ 	/* Get any additional entropy passed in device tree */
+-	{
++	if (initial_boot_params) {
+ 		int node, len;
+ 		u64 *prop;
+ 
+diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
+index 43853ec6e160..4d65285ca418 100644
+--- a/arch/mips/kvm/emulate.c
++++ b/arch/mips/kvm/emulate.c
+@@ -791,15 +791,15 @@ enum emulation_result kvm_mips_emul_eret(struct kvm_vcpu *vcpu)
+ 	struct mips_coproc *cop0 = vcpu->arch.cop0;
+ 	enum emulation_result er = EMULATE_DONE;
+ 
+-	if (kvm_read_c0_guest_status(cop0) & ST0_EXL) {
++	if (kvm_read_c0_guest_status(cop0) & ST0_ERL) {
++		kvm_clear_c0_guest_status(cop0, ST0_ERL);
++		vcpu->arch.pc = kvm_read_c0_guest_errorepc(cop0);
++	} else if (kvm_read_c0_guest_status(cop0) & ST0_EXL) {
+ 		kvm_debug("[%#lx] ERET to %#lx\n", vcpu->arch.pc,
+ 			  kvm_read_c0_guest_epc(cop0));
+ 		kvm_clear_c0_guest_status(cop0, ST0_EXL);
+ 		vcpu->arch.pc = kvm_read_c0_guest_epc(cop0);
+ 
+-	} else if (kvm_read_c0_guest_status(cop0) & ST0_ERL) {
+-		kvm_clear_c0_guest_status(cop0, ST0_ERL);
+-		vcpu->arch.pc = kvm_read_c0_guest_errorepc(cop0);
+ 	} else {
+ 		kvm_err("[%#lx] ERET when MIPS_SR_EXL|MIPS_SR_ERL == 0\n",
+ 			vcpu->arch.pc);
+@@ -1522,13 +1522,25 @@ enum emulation_result kvm_mips_emulate_load(union mips_instruction inst,
+ 					    struct kvm_vcpu *vcpu)
+ {
+ 	enum emulation_result er = EMULATE_DO_MMIO;
++	unsigned long curr_pc;
+ 	u32 op, rt;
+ 	u32 bytes;
+ 
+ 	rt = inst.i_format.rt;
+ 	op = inst.i_format.opcode;
+ 
+-	vcpu->arch.pending_load_cause = cause;
++	/*
++	 * Find the resume PC now while we have safe and easy access to the
++	 * prior branch instruction, and save it for
++	 * kvm_mips_complete_mmio_load() to restore later.
++	 */
++	curr_pc = vcpu->arch.pc;
++	er = update_pc(vcpu, cause);
++	if (er == EMULATE_FAIL)
++		return er;
++	vcpu->arch.io_pc = vcpu->arch.pc;
++	vcpu->arch.pc = curr_pc;
++
+ 	vcpu->arch.io_gpr = rt;
+ 
+ 	switch (op) {
+@@ -2488,9 +2500,8 @@ enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
+ 		goto done;
+ 	}
+ 
+-	er = update_pc(vcpu, vcpu->arch.pending_load_cause);
+-	if (er == EMULATE_FAIL)
+-		return er;
++	/* Restore saved resume PC */
++	vcpu->arch.pc = vcpu->arch.io_pc;
+ 
+ 	switch (run->mmio.len) {
+ 	case 4:
+@@ -2512,11 +2523,6 @@ enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
+ 		break;
+ 	}
+ 
+-	if (vcpu->arch.pending_load_cause & CAUSEF_BD)
+-		kvm_debug("[%#lx] Completing %d byte BD Load to gpr %d (0x%08lx) type %d\n",
+-			  vcpu->arch.pc, run->mmio.len, vcpu->arch.io_gpr, *gpr,
+-			  vcpu->mmio_needed);
+-
+ done:
+ 	return er;
+ }
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index d03422e5f188..7ed036c57d00 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -106,8 +106,6 @@ linux_gateway_entry:
+ 	mtsp	%r0,%sr4			/* get kernel space into sr4 */
+ 	mtsp	%r0,%sr5			/* get kernel space into sr5 */
+ 	mtsp	%r0,%sr6			/* get kernel space into sr6 */
+-	mfsp    %sr7,%r1                        /* save user sr7 */
+-	mtsp    %r1,%sr3                        /* and store it in sr3 */
+ 
+ #ifdef CONFIG_64BIT
+ 	/* for now we can *always* set the W bit on entry to the syscall
+@@ -133,6 +131,14 @@ linux_gateway_entry:
+ 	depdi	0, 31, 32, %r21
+ 1:	
+ #endif
++
++	/* We use a rsm/ssm pair to prevent sr3 from being clobbered
++	 * by external interrupts.
++	 */
++	mfsp    %sr7,%r1                        /* save user sr7 */
++	rsm	PSW_SM_I, %r0			/* disable interrupts */
++	mtsp    %r1,%sr3                        /* and store it in sr3 */
++
+ 	mfctl   %cr30,%r1
+ 	xor     %r1,%r30,%r30                   /* ye olde xor trick */
+ 	xor     %r1,%r30,%r1
+@@ -147,6 +153,7 @@ linux_gateway_entry:
+ 	 */
+ 
+ 	mtsp	%r0,%sr7			/* get kernel space into sr7 */
++	ssm	PSW_SM_I, %r0			/* enable interrupts */
+ 	STREGM	%r1,FRAME_SIZE(%r30)		/* save r1 (usp) here for now */
+ 	mfctl	%cr30,%r1			/* get task ptr in %r1 */
+ 	LDREG	TI_TASK(%r1),%r1
+diff --git a/arch/powerpc/include/asm/cpuidle.h b/arch/powerpc/include/asm/cpuidle.h
+index 01b8a13f0224..3919332965af 100644
+--- a/arch/powerpc/include/asm/cpuidle.h
++++ b/arch/powerpc/include/asm/cpuidle.h
+@@ -26,7 +26,7 @@ extern u64 pnv_first_deep_stop_state;
+ 	std	r0,0(r1);					\
+ 	ptesync;						\
+ 	ld	r0,0(r1);					\
+-1:	cmp	cr0,r0,r0;					\
++1:	cmpd	cr0,r0,r0;					\
+ 	bne	1b;						\
+ 	IDLE_INST;						\
+ 	b	.
+diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
+index f6f68f73e858..99e1397b71da 100644
+--- a/arch/powerpc/include/asm/tlb.h
++++ b/arch/powerpc/include/asm/tlb.h
+@@ -52,11 +52,23 @@ static inline int mm_is_core_local(struct mm_struct *mm)
+ 	return cpumask_subset(mm_cpumask(mm),
+ 			      topology_sibling_cpumask(smp_processor_id()));
+ }
++
++static inline int mm_is_thread_local(struct mm_struct *mm)
++{
++	return cpumask_equal(mm_cpumask(mm),
++			      cpumask_of(smp_processor_id()));
++}
++
+ #else
+ static inline int mm_is_core_local(struct mm_struct *mm)
+ {
+ 	return 1;
+ }
++
++static inline int mm_is_thread_local(struct mm_struct *mm)
++{
++	return 1;
++}
+ #endif
+ 
+ #endif /* __KERNEL__ */
+diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
+index bd739fed26e3..72dac0b58061 100644
+--- a/arch/powerpc/kernel/idle_book3s.S
++++ b/arch/powerpc/kernel/idle_book3s.S
+@@ -90,6 +90,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_ARCH_300)
+  * Threads will spin in HMT_LOW until the lock bit is cleared.
+  * r14 - pointer to core_idle_state
+  * r15 - used to load contents of core_idle_state
++ * r9  - used as a temporary variable
+  */
+ 
+ core_idle_lock_held:
+@@ -99,6 +100,8 @@ core_idle_lock_held:
+ 	bne	3b
+ 	HMT_MEDIUM
+ 	lwarx	r15,0,r14
++	andi.	r9,r15,PNV_CORE_IDLE_LOCK_BIT
++	bne	core_idle_lock_held
+ 	blr
+ 
+ /*
+@@ -163,12 +166,6 @@ _GLOBAL(pnv_powersave_common)
+ 	std	r9,_MSR(r1)
+ 	std	r1,PACAR1(r13)
+ 
+-#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+-	/* Tell KVM we're entering idle */
+-	li	r4,KVM_HWTHREAD_IN_IDLE
+-	stb	r4,HSTATE_HWTHREAD_STATE(r13)
+-#endif
+-
+ 	/*
+ 	 * Go to real mode to do the nap, as required by the architecture.
+ 	 * Also, we need to be in real mode before setting hwthread_state,
+@@ -185,6 +182,26 @@ _GLOBAL(pnv_powersave_common)
+ 
+ 	.globl pnv_enter_arch207_idle_mode
+ pnv_enter_arch207_idle_mode:
++#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
++	/* Tell KVM we're entering idle */
++	li	r4,KVM_HWTHREAD_IN_IDLE
++	/******************************************************/
++	/*  N O T E   W E L L    ! ! !    N O T E   W E L L   */
++	/* The following store to HSTATE_HWTHREAD_STATE(r13)  */
++	/* MUST occur in real mode, i.e. with the MMU off,    */
++	/* and the MMU must stay off until we clear this flag */
++	/* and test HSTATE_HWTHREAD_REQ(r13) in the system    */
++	/* reset interrupt vector in exceptions-64s.S.        */
++	/* The reason is that another thread can switch the   */
++	/* MMU to a guest context whenever this flag is set   */
++	/* to KVM_HWTHREAD_IN_IDLE, and if the MMU was on,    */
++	/* that would potentially cause this thread to start  */
++	/* executing instructions from guest memory in        */
++	/* hypervisor mode, leading to a host crash or data   */
++	/* corruption, or worse.                              */
++	/******************************************************/
++	stb	r4,HSTATE_HWTHREAD_STATE(r13)
++#endif
+ 	stb	r3,PACA_THREAD_IDLE_STATE(r13)
+ 	cmpwi	cr3,r3,PNV_THREAD_SLEEP
+ 	bge	cr3,2f
+@@ -250,6 +267,12 @@ enter_winkle:
+  * r3 - requested stop state
+  */
+ power_enter_stop:
++#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
++	/* Tell KVM we're entering idle */
++	li	r4,KVM_HWTHREAD_IN_IDLE
++	/* DO THIS IN REAL MODE!  See comment above. */
++	stb	r4,HSTATE_HWTHREAD_STATE(r13)
++#endif
+ /*
+  * Check if the requested state is a deep idle state.
+  */
+diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
+index 48df05ef5231..d6960681939a 100644
+--- a/arch/powerpc/mm/tlb-radix.c
++++ b/arch/powerpc/mm/tlb-radix.c
+@@ -175,7 +175,7 @@ void radix__flush_tlb_mm(struct mm_struct *mm)
+ 	if (unlikely(pid == MMU_NO_CONTEXT))
+ 		goto no_context;
+ 
+-	if (!mm_is_core_local(mm)) {
++	if (!mm_is_thread_local(mm)) {
+ 		int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+ 
+ 		if (lock_tlbie)
+@@ -201,7 +201,7 @@ void radix__flush_tlb_pwc(struct mmu_gather *tlb, unsigned long addr)
+ 	if (unlikely(pid == MMU_NO_CONTEXT))
+ 		goto no_context;
+ 
+-	if (!mm_is_core_local(mm)) {
++	if (!mm_is_thread_local(mm)) {
+ 		int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+ 
+ 		if (lock_tlbie)
+@@ -226,7 +226,7 @@ void radix__flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmaddr,
+ 	pid = mm ? mm->context.id : 0;
+ 	if (unlikely(pid == MMU_NO_CONTEXT))
+ 		goto bail;
+-	if (!mm_is_core_local(mm)) {
++	if (!mm_is_thread_local(mm)) {
+ 		int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+ 
+ 		if (lock_tlbie)
+@@ -321,7 +321,7 @@ void radix__flush_tlb_range_psize(struct mm_struct *mm, unsigned long start,
+ {
+ 	unsigned long pid;
+ 	unsigned long addr;
+-	int local = mm_is_core_local(mm);
++	int local = mm_is_thread_local(mm);
+ 	unsigned long ap = mmu_get_ap(psize);
+ 	int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+ 	unsigned long page_size = 1UL << mmu_psize_defs[psize].shift;
+diff --git a/arch/s390/kvm/sthyi.c b/arch/s390/kvm/sthyi.c
+index bd98b7d25200..05c98bb853cf 100644
+--- a/arch/s390/kvm/sthyi.c
++++ b/arch/s390/kvm/sthyi.c
+@@ -315,7 +315,7 @@ static void fill_diag(struct sthyi_sctns *sctns)
+ 	if (r < 0)
+ 		goto out;
+ 
+-	diag224_buf = kmalloc(PAGE_SIZE, GFP_KERNEL | GFP_DMA);
++	diag224_buf = (void *)__get_free_page(GFP_KERNEL | GFP_DMA);
+ 	if (!diag224_buf || diag224(diag224_buf))
+ 		goto out;
+ 
+@@ -378,7 +378,7 @@ static void fill_diag(struct sthyi_sctns *sctns)
+ 	sctns->par.infpval1 |= PAR_WGHT_VLD;
+ 
+ out:
+-	kfree(diag224_buf);
++	free_page((unsigned long)diag224_buf);
+ 	vfree(diag204_buf);
+ }
+ 
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 620ab06bcf45..017bda12caae 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -429,7 +429,7 @@ int __init save_microcode_in_initrd_amd(void)
+ 	 * We need the physical address of the container for both bitness since
+ 	 * boot_params.hdr.ramdisk_image is a physical address.
+ 	 */
+-	cont    = __pa(container);
++	cont    = __pa_nodebug(container);
+ 	cont_va = container;
+ #endif
+ 
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 98c9cd6f3b5d..d5219b1c8775 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1222,11 +1222,16 @@ void __init setup_arch(char **cmdline_p)
+ 	if (smp_found_config)
+ 		get_smp_config();
+ 
++	/*
++	 * Systems w/o ACPI and mptables might not have it mapped the local
++	 * APIC yet, but prefill_possible_map() might need to access it.
++	 */
++	init_apic_mappings();
++
+ 	prefill_possible_map();
+ 
+ 	init_cpu_to_node();
+ 
+-	init_apic_mappings();
+ 	io_apic_init_mappings();
+ 
+ 	kvm_guest_init();
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 4e95d3eb2955..cbd7b92585bb 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -5045,7 +5045,7 @@ done_prefixes:
+ 	/* Decode and fetch the destination operand: register or memory. */
+ 	rc = decode_operand(ctxt, &ctxt->dst, (ctxt->d >> DstShift) & OpMask);
+ 
+-	if (ctxt->rip_relative)
++	if (ctxt->rip_relative && likely(ctxt->memopp))
+ 		ctxt->memopp->addr.mem.ea = address_mask(ctxt,
+ 					ctxt->memopp->addr.mem.ea + ctxt->_eip);
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 699f8726539a..46f74d461f3f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7372,10 +7372,12 @@ void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
+ 
+ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
+ {
++	void *wbinvd_dirty_mask = vcpu->arch.wbinvd_dirty_mask;
++
+ 	kvmclock_reset(vcpu);
+ 
+-	free_cpumask_var(vcpu->arch.wbinvd_dirty_mask);
+ 	kvm_x86_ops->vcpu_free(vcpu);
++	free_cpumask_var(wbinvd_dirty_mask);
+ }
+ 
+ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 16288e777ec3..4b1e4eabeba4 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1003,7 +1003,7 @@ static int binder_dec_node(struct binder_node *node, int strong, int internal)
+ 
+ 
+ static struct binder_ref *binder_get_ref(struct binder_proc *proc,
+-					 uint32_t desc)
++					 u32 desc, bool need_strong_ref)
+ {
+ 	struct rb_node *n = proc->refs_by_desc.rb_node;
+ 	struct binder_ref *ref;
+@@ -1011,12 +1011,16 @@ static struct binder_ref *binder_get_ref(struct binder_proc *proc,
+ 	while (n) {
+ 		ref = rb_entry(n, struct binder_ref, rb_node_desc);
+ 
+-		if (desc < ref->desc)
++		if (desc < ref->desc) {
+ 			n = n->rb_left;
+-		else if (desc > ref->desc)
++		} else if (desc > ref->desc) {
+ 			n = n->rb_right;
+-		else
++		} else if (need_strong_ref && !ref->strong) {
++			binder_user_error("tried to use weak ref as strong ref\n");
++			return NULL;
++		} else {
+ 			return ref;
++		}
+ 	}
+ 	return NULL;
+ }
+@@ -1286,7 +1290,10 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 		} break;
+ 		case BINDER_TYPE_HANDLE:
+ 		case BINDER_TYPE_WEAK_HANDLE: {
+-			struct binder_ref *ref = binder_get_ref(proc, fp->handle);
++			struct binder_ref *ref;
++
++			ref = binder_get_ref(proc, fp->handle,
++					     fp->type == BINDER_TYPE_HANDLE);
+ 
+ 			if (ref == NULL) {
+ 				pr_err("transaction release %d bad handle %d\n",
+@@ -1381,7 +1388,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 		if (tr->target.handle) {
+ 			struct binder_ref *ref;
+ 
+-			ref = binder_get_ref(proc, tr->target.handle);
++			ref = binder_get_ref(proc, tr->target.handle, true);
+ 			if (ref == NULL) {
+ 				binder_user_error("%d:%d got transaction to invalid handle\n",
+ 					proc->pid, thread->pid);
+@@ -1578,7 +1585,9 @@ static void binder_transaction(struct binder_proc *proc,
+ 				fp->type = BINDER_TYPE_HANDLE;
+ 			else
+ 				fp->type = BINDER_TYPE_WEAK_HANDLE;
++			fp->binder = 0;
+ 			fp->handle = ref->desc;
++			fp->cookie = 0;
+ 			binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
+ 				       &thread->todo);
+ 
+@@ -1590,7 +1599,10 @@ static void binder_transaction(struct binder_proc *proc,
+ 		} break;
+ 		case BINDER_TYPE_HANDLE:
+ 		case BINDER_TYPE_WEAK_HANDLE: {
+-			struct binder_ref *ref = binder_get_ref(proc, fp->handle);
++			struct binder_ref *ref;
++
++			ref = binder_get_ref(proc, fp->handle,
++					     fp->type == BINDER_TYPE_HANDLE);
+ 
+ 			if (ref == NULL) {
+ 				binder_user_error("%d:%d got transaction with invalid handle, %d\n",
+@@ -1625,7 +1637,9 @@ static void binder_transaction(struct binder_proc *proc,
+ 					return_error = BR_FAILED_REPLY;
+ 					goto err_binder_get_ref_for_node_failed;
+ 				}
++				fp->binder = 0;
+ 				fp->handle = new_ref->desc;
++				fp->cookie = 0;
+ 				binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
+ 				trace_binder_transaction_ref_to_ref(t, ref,
+ 								    new_ref);
+@@ -1679,6 +1693,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 			binder_debug(BINDER_DEBUG_TRANSACTION,
+ 				     "        fd %d -> %d\n", fp->handle, target_fd);
+ 			/* TODO: fput? */
++			fp->binder = 0;
+ 			fp->handle = target_fd;
+ 		} break;
+ 
+@@ -1801,7 +1816,9 @@ static int binder_thread_write(struct binder_proc *proc,
+ 						ref->desc);
+ 				}
+ 			} else
+-				ref = binder_get_ref(proc, target);
++				ref = binder_get_ref(proc, target,
++						     cmd == BC_ACQUIRE ||
++						     cmd == BC_RELEASE);
+ 			if (ref == NULL) {
+ 				binder_user_error("%d:%d refcount change on invalid ref %d\n",
+ 					proc->pid, thread->pid, target);
+@@ -1997,7 +2014,7 @@ static int binder_thread_write(struct binder_proc *proc,
+ 			if (get_user(cookie, (binder_uintptr_t __user *)ptr))
+ 				return -EFAULT;
+ 			ptr += sizeof(binder_uintptr_t);
+-			ref = binder_get_ref(proc, target);
++			ref = binder_get_ref(proc, target, false);
+ 			if (ref == NULL) {
+ 				binder_user_error("%d:%d %s invalid ref %d\n",
+ 					proc->pid, thread->pid,
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index 5da47e26a012..4aae0d27e382 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -1540,19 +1540,29 @@ static void remove_port_data(struct port *port)
+ 	spin_lock_irq(&port->inbuf_lock);
+ 	/* Remove unused data this port might have received. */
+ 	discard_port_data(port);
++	spin_unlock_irq(&port->inbuf_lock);
+ 
+ 	/* Remove buffers we queued up for the Host to send us data in. */
+-	while ((buf = virtqueue_detach_unused_buf(port->in_vq)))
+-		free_buf(buf, true);
+-	spin_unlock_irq(&port->inbuf_lock);
++	do {
++		spin_lock_irq(&port->inbuf_lock);
++		buf = virtqueue_detach_unused_buf(port->in_vq);
++		spin_unlock_irq(&port->inbuf_lock);
++		if (buf)
++			free_buf(buf, true);
++	} while (buf);
+ 
+ 	spin_lock_irq(&port->outvq_lock);
+ 	reclaim_consumed_buffers(port);
++	spin_unlock_irq(&port->outvq_lock);
+ 
+ 	/* Free pending buffers from the out-queue. */
+-	while ((buf = virtqueue_detach_unused_buf(port->out_vq)))
+-		free_buf(buf, true);
+-	spin_unlock_irq(&port->outvq_lock);
++	do {
++		spin_lock_irq(&port->outvq_lock);
++		buf = virtqueue_detach_unused_buf(port->out_vq);
++		spin_unlock_irq(&port->outvq_lock);
++		if (buf)
++			free_buf(buf, true);
++	} while (buf);
+ }
+ 
+ /*
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index b46547e907be..8c347f5c2562 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -1133,10 +1133,8 @@ static void intel_pstate_get_min_max(struct cpudata *cpu, int *min, int *max)
+ 	*min = clamp_t(int, min_perf, cpu->pstate.min_pstate, max_perf);
+ }
+ 
+-static void intel_pstate_set_min_pstate(struct cpudata *cpu)
++static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
+ {
+-	int pstate = cpu->pstate.min_pstate;
+-
+ 	trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu);
+ 	cpu->pstate.current_pstate = pstate;
+ 	/*
+@@ -1148,6 +1146,20 @@ static void intel_pstate_set_min_pstate(struct cpudata *cpu)
+ 		      pstate_funcs.get_val(cpu, pstate));
+ }
+ 
++static void intel_pstate_set_min_pstate(struct cpudata *cpu)
++{
++	intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate);
++}
++
++static void intel_pstate_max_within_limits(struct cpudata *cpu)
++{
++	int min_pstate, max_pstate;
++
++	update_turbo_state();
++	intel_pstate_get_min_max(cpu, &min_pstate, &max_pstate);
++	intel_pstate_set_pstate(cpu, max_pstate);
++}
++
+ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
+ {
+ 	cpu->pstate.min_pstate = pstate_funcs.get_min();
+@@ -1465,7 +1477,7 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
+ 	pr_debug("set_policy cpuinfo.max %u policy->max %u\n",
+ 		 policy->cpuinfo.max_freq, policy->max);
+ 
+-	cpu = all_cpu_data[0];
++	cpu = all_cpu_data[policy->cpu];
+ 	if (cpu->pstate.max_pstate_physical > cpu->pstate.max_pstate &&
+ 	    policy->max < policy->cpuinfo.max_freq &&
+ 	    policy->max > cpu->pstate.max_pstate * cpu->pstate.scaling) {
+@@ -1509,6 +1521,15 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
+ 	limits->max_perf = round_up(limits->max_perf, FRAC_BITS);
+ 
+  out:
++	if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
++		/*
++		 * NOHZ_FULL CPUs need this as the governor callback may not
++		 * be invoked on them.
++		 */
++		intel_pstate_clear_update_util_hook(policy->cpu);
++		intel_pstate_max_within_limits(cpu);
++	}
++
+ 	intel_pstate_set_update_util_hook(policy->cpu);
+ 
+ 	intel_pstate_hwp_set_policy(policy);
+diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
+index 1f01e98c83c7..73ae849f5170 100644
+--- a/drivers/dax/pmem.c
++++ b/drivers/dax/pmem.c
+@@ -44,7 +44,6 @@ static void dax_pmem_percpu_exit(void *data)
+ 
+ 	dev_dbg(dax_pmem->dev, "%s\n", __func__);
+ 	percpu_ref_exit(ref);
+-	wait_for_completion(&dax_pmem->cmp);
+ }
+ 
+ static void dax_pmem_percpu_kill(void *data)
+@@ -54,6 +53,7 @@ static void dax_pmem_percpu_kill(void *data)
+ 
+ 	dev_dbg(dax_pmem->dev, "%s\n", __func__);
+ 	percpu_ref_kill(ref);
++	wait_for_completion(&dax_pmem->cmp);
+ }
+ 
+ static int dax_pmem_probe(struct device *dev)
+diff --git a/drivers/firewire/net.c b/drivers/firewire/net.c
+index 309311b1faae..15475892af0c 100644
+--- a/drivers/firewire/net.c
++++ b/drivers/firewire/net.c
+@@ -73,13 +73,13 @@ struct rfc2734_header {
+ 
+ #define fwnet_get_hdr_lf(h)		(((h)->w0 & 0xc0000000) >> 30)
+ #define fwnet_get_hdr_ether_type(h)	(((h)->w0 & 0x0000ffff))
+-#define fwnet_get_hdr_dg_size(h)	(((h)->w0 & 0x0fff0000) >> 16)
++#define fwnet_get_hdr_dg_size(h)	((((h)->w0 & 0x0fff0000) >> 16) + 1)
+ #define fwnet_get_hdr_fg_off(h)		(((h)->w0 & 0x00000fff))
+ #define fwnet_get_hdr_dgl(h)		(((h)->w1 & 0xffff0000) >> 16)
+ 
+-#define fwnet_set_hdr_lf(lf)		((lf)  << 30)
++#define fwnet_set_hdr_lf(lf)		((lf) << 30)
+ #define fwnet_set_hdr_ether_type(et)	(et)
+-#define fwnet_set_hdr_dg_size(dgs)	((dgs) << 16)
++#define fwnet_set_hdr_dg_size(dgs)	(((dgs) - 1) << 16)
+ #define fwnet_set_hdr_fg_off(fgo)	(fgo)
+ 
+ #define fwnet_set_hdr_dgl(dgl)		((dgl) << 16)
+@@ -578,6 +578,9 @@ static int fwnet_incoming_packet(struct fwnet_device *dev, __be32 *buf, int len,
+ 	int retval;
+ 	u16 ether_type;
+ 
++	if (len <= RFC2374_UNFRAG_HDR_SIZE)
++		return 0;
++
+ 	hdr.w0 = be32_to_cpu(buf[0]);
+ 	lf = fwnet_get_hdr_lf(&hdr);
+ 	if (lf == RFC2374_HDR_UNFRAG) {
+@@ -602,7 +605,12 @@ static int fwnet_incoming_packet(struct fwnet_device *dev, __be32 *buf, int len,
+ 		return fwnet_finish_incoming_packet(net, skb, source_node_id,
+ 						    is_broadcast, ether_type);
+ 	}
++
+ 	/* A datagram fragment has been received, now the fun begins. */
++
++	if (len <= RFC2374_FRAG_HDR_SIZE)
++		return 0;
++
+ 	hdr.w1 = ntohl(buf[1]);
+ 	buf += 2;
+ 	len -= RFC2374_FRAG_HDR_SIZE;
+@@ -614,7 +622,10 @@ static int fwnet_incoming_packet(struct fwnet_device *dev, __be32 *buf, int len,
+ 		fg_off = fwnet_get_hdr_fg_off(&hdr);
+ 	}
+ 	datagram_label = fwnet_get_hdr_dgl(&hdr);
+-	dg_size = fwnet_get_hdr_dg_size(&hdr); /* ??? + 1 */
++	dg_size = fwnet_get_hdr_dg_size(&hdr);
++
++	if (fg_off + len > dg_size)
++		return 0;
+ 
+ 	spin_lock_irqsave(&dev->lock, flags);
+ 
+@@ -722,6 +733,22 @@ static void fwnet_receive_packet(struct fw_card *card, struct fw_request *r,
+ 	fw_send_response(card, r, rcode);
+ }
+ 
++static int gasp_source_id(__be32 *p)
++{
++	return be32_to_cpu(p[0]) >> 16;
++}
++
++static u32 gasp_specifier_id(__be32 *p)
++{
++	return (be32_to_cpu(p[0]) & 0xffff) << 8 |
++	       (be32_to_cpu(p[1]) & 0xff000000) >> 24;
++}
++
++static u32 gasp_version(__be32 *p)
++{
++	return be32_to_cpu(p[1]) & 0xffffff;
++}
++
+ static void fwnet_receive_broadcast(struct fw_iso_context *context,
+ 		u32 cycle, size_t header_length, void *header, void *data)
+ {
+@@ -731,9 +758,6 @@ static void fwnet_receive_broadcast(struct fw_iso_context *context,
+ 	__be32 *buf_ptr;
+ 	int retval;
+ 	u32 length;
+-	u16 source_node_id;
+-	u32 specifier_id;
+-	u32 ver;
+ 	unsigned long offset;
+ 	unsigned long flags;
+ 
+@@ -750,22 +774,17 @@ static void fwnet_receive_broadcast(struct fw_iso_context *context,
+ 
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 
+-	specifier_id =    (be32_to_cpu(buf_ptr[0]) & 0xffff) << 8
+-			| (be32_to_cpu(buf_ptr[1]) & 0xff000000) >> 24;
+-	ver = be32_to_cpu(buf_ptr[1]) & 0xffffff;
+-	source_node_id = be32_to_cpu(buf_ptr[0]) >> 16;
+-
+-	if (specifier_id == IANA_SPECIFIER_ID &&
+-	    (ver == RFC2734_SW_VERSION
++	if (length > IEEE1394_GASP_HDR_SIZE &&
++	    gasp_specifier_id(buf_ptr) == IANA_SPECIFIER_ID &&
++	    (gasp_version(buf_ptr) == RFC2734_SW_VERSION
+ #if IS_ENABLED(CONFIG_IPV6)
+-	     || ver == RFC3146_SW_VERSION
++	     || gasp_version(buf_ptr) == RFC3146_SW_VERSION
+ #endif
+-	    )) {
+-		buf_ptr += 2;
+-		length -= IEEE1394_GASP_HDR_SIZE;
+-		fwnet_incoming_packet(dev, buf_ptr, length, source_node_id,
++	    ))
++		fwnet_incoming_packet(dev, buf_ptr + 2,
++				      length - IEEE1394_GASP_HDR_SIZE,
++				      gasp_source_id(buf_ptr),
+ 				      context->card->generation, true);
+-	}
+ 
+ 	packet.payload_length = dev->rcv_buffer_size;
+ 	packet.interrupt = 1;
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index af514618d7fb..14f2d9835723 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -602,14 +602,17 @@ int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
+ {
+ 	int idx, i;
+ 	unsigned int irq_flags;
++	int ret = -ENOENT;
+ 
+ 	for (i = 0, idx = 0; idx <= index; i++) {
+ 		struct acpi_gpio_info info;
+ 		struct gpio_desc *desc;
+ 
+ 		desc = acpi_get_gpiod_by_index(adev, NULL, i, &info);
+-		if (IS_ERR(desc))
++		if (IS_ERR(desc)) {
++			ret = PTR_ERR(desc);
+ 			break;
++		}
+ 		if (info.gpioint && idx++ == index) {
+ 			int irq = gpiod_to_irq(desc);
+ 
+@@ -628,7 +631,7 @@ int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
+ 		}
+ 
+ 	}
+-	return -ENOENT;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(acpi_dev_gpio_irq_get);
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 53ff25ac66d8..b2dee1024166 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -21,6 +21,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/compat.h>
+ #include <linux/anon_inodes.h>
++#include <linux/file.h>
+ #include <linux/kfifo.h>
+ #include <linux/poll.h>
+ #include <linux/timekeeping.h>
+@@ -331,6 +332,13 @@ struct linehandle_state {
+ 	u32 numdescs;
+ };
+ 
++#define GPIOHANDLE_REQUEST_VALID_FLAGS \
++	(GPIOHANDLE_REQUEST_INPUT | \
++	GPIOHANDLE_REQUEST_OUTPUT | \
++	GPIOHANDLE_REQUEST_ACTIVE_LOW | \
++	GPIOHANDLE_REQUEST_OPEN_DRAIN | \
++	GPIOHANDLE_REQUEST_OPEN_SOURCE)
++
+ static long linehandle_ioctl(struct file *filep, unsigned int cmd,
+ 			     unsigned long arg)
+ {
+@@ -342,6 +350,8 @@ static long linehandle_ioctl(struct file *filep, unsigned int cmd,
+ 	if (cmd == GPIOHANDLE_GET_LINE_VALUES_IOCTL) {
+ 		int val;
+ 
++		memset(&ghd, 0, sizeof(ghd));
++
+ 		/* TODO: check if descriptors are really input */
+ 		for (i = 0; i < lh->numdescs; i++) {
+ 			val = gpiod_get_value_cansleep(lh->descs[i]);
+@@ -412,6 +422,7 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip)
+ {
+ 	struct gpiohandle_request handlereq;
+ 	struct linehandle_state *lh;
++	struct file *file;
+ 	int fd, i, ret;
+ 
+ 	if (copy_from_user(&handlereq, ip, sizeof(handlereq)))
+@@ -442,6 +453,17 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip)
+ 		u32 lflags = handlereq.flags;
+ 		struct gpio_desc *desc;
+ 
++		if (offset >= gdev->ngpio) {
++			ret = -EINVAL;
++			goto out_free_descs;
++		}
++
++		/* Return an error if a unknown flag is set */
++		if (lflags & ~GPIOHANDLE_REQUEST_VALID_FLAGS) {
++			ret = -EINVAL;
++			goto out_free_descs;
++		}
++
+ 		desc = &gdev->descs[offset];
+ 		ret = gpiod_request(desc, lh->label);
+ 		if (ret)
+@@ -477,26 +499,41 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip)
+ 	i--;
+ 	lh->numdescs = handlereq.lines;
+ 
+-	fd = anon_inode_getfd("gpio-linehandle",
+-			      &linehandle_fileops,
+-			      lh,
+-			      O_RDONLY | O_CLOEXEC);
++	fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
+ 	if (fd < 0) {
+ 		ret = fd;
+ 		goto out_free_descs;
+ 	}
+ 
++	file = anon_inode_getfile("gpio-linehandle",
++				  &linehandle_fileops,
++				  lh,
++				  O_RDONLY | O_CLOEXEC);
++	if (IS_ERR(file)) {
++		ret = PTR_ERR(file);
++		goto out_put_unused_fd;
++	}
++
+ 	handlereq.fd = fd;
+ 	if (copy_to_user(ip, &handlereq, sizeof(handlereq))) {
+-		ret = -EFAULT;
+-		goto out_free_descs;
++		/*
++		 * fput() will trigger the release() callback, so do not go onto
++		 * the regular error cleanup path here.
++		 */
++		fput(file);
++		put_unused_fd(fd);
++		return -EFAULT;
+ 	}
+ 
++	fd_install(fd, file);
++
+ 	dev_dbg(&gdev->dev, "registered chardev handle for %d lines\n",
+ 		lh->numdescs);
+ 
+ 	return 0;
+ 
++out_put_unused_fd:
++	put_unused_fd(fd);
+ out_free_descs:
+ 	for (; i >= 0; i--)
+ 		gpiod_free(lh->descs[i]);
+@@ -534,6 +571,10 @@ struct lineevent_state {
+ 	struct mutex read_lock;
+ };
+ 
++#define GPIOEVENT_REQUEST_VALID_FLAGS \
++	(GPIOEVENT_REQUEST_RISING_EDGE | \
++	GPIOEVENT_REQUEST_FALLING_EDGE)
++
+ static unsigned int lineevent_poll(struct file *filep,
+ 				   struct poll_table_struct *wait)
+ {
+@@ -621,6 +662,8 @@ static long lineevent_ioctl(struct file *filep, unsigned int cmd,
+ 	if (cmd == GPIOHANDLE_GET_LINE_VALUES_IOCTL) {
+ 		int val;
+ 
++		memset(&ghd, 0, sizeof(ghd));
++
+ 		val = gpiod_get_value_cansleep(le->desc);
+ 		if (val < 0)
+ 			return val;
+@@ -693,6 +736,7 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
+ 	struct gpioevent_request eventreq;
+ 	struct lineevent_state *le;
+ 	struct gpio_desc *desc;
++	struct file *file;
+ 	u32 offset;
+ 	u32 lflags;
+ 	u32 eflags;
+@@ -724,6 +768,18 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
+ 	lflags = eventreq.handleflags;
+ 	eflags = eventreq.eventflags;
+ 
++	if (offset >= gdev->ngpio) {
++		ret = -EINVAL;
++		goto out_free_label;
++	}
++
++	/* Return an error if a unknown flag is set */
++	if ((lflags & ~GPIOHANDLE_REQUEST_VALID_FLAGS) ||
++	    (eflags & ~GPIOEVENT_REQUEST_VALID_FLAGS)) {
++		ret = -EINVAL;
++		goto out_free_label;
++	}
++
+ 	/* This is just wrong: we don't look for events on output lines */
+ 	if (lflags & GPIOHANDLE_REQUEST_OUTPUT) {
+ 		ret = -EINVAL;
+@@ -775,23 +831,38 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
+ 	if (ret)
+ 		goto out_free_desc;
+ 
+-	fd = anon_inode_getfd("gpio-event",
+-			      &lineevent_fileops,
+-			      le,
+-			      O_RDONLY | O_CLOEXEC);
++	fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
+ 	if (fd < 0) {
+ 		ret = fd;
+ 		goto out_free_irq;
+ 	}
+ 
++	file = anon_inode_getfile("gpio-event",
++				  &lineevent_fileops,
++				  le,
++				  O_RDONLY | O_CLOEXEC);
++	if (IS_ERR(file)) {
++		ret = PTR_ERR(file);
++		goto out_put_unused_fd;
++	}
++
+ 	eventreq.fd = fd;
+ 	if (copy_to_user(ip, &eventreq, sizeof(eventreq))) {
+-		ret = -EFAULT;
+-		goto out_free_irq;
++		/*
++		 * fput() will trigger the release() callback, so do not go onto
++		 * the regular error cleanup path here.
++		 */
++		fput(file);
++		put_unused_fd(fd);
++		return -EFAULT;
+ 	}
+ 
++	fd_install(fd, file);
++
+ 	return 0;
+ 
++out_put_unused_fd:
++	put_unused_fd(fd);
+ out_free_irq:
+ 	free_irq(le->irq, le);
+ out_free_desc:
+@@ -821,6 +892,8 @@ static long gpio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 	if (cmd == GPIO_GET_CHIPINFO_IOCTL) {
+ 		struct gpiochip_info chipinfo;
+ 
++		memset(&chipinfo, 0, sizeof(chipinfo));
++
+ 		strncpy(chipinfo.name, dev_name(&gdev->dev),
+ 			sizeof(chipinfo.name));
+ 		chipinfo.name[sizeof(chipinfo.name)-1] = '\0';
+@@ -837,7 +910,7 @@ static long gpio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 
+ 		if (copy_from_user(&lineinfo, ip, sizeof(lineinfo)))
+ 			return -EFAULT;
+-		if (lineinfo.line_offset > gdev->ngpio)
++		if (lineinfo.line_offset >= gdev->ngpio)
+ 			return -EINVAL;
+ 
+ 		desc = &gdev->descs[lineinfo.line_offset];
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index 2a3ded44cf2a..7c8c185c90ea 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -420,18 +420,21 @@ drm_atomic_replace_property_blob_from_id(struct drm_crtc *crtc,
+ 					 ssize_t expected_size,
+ 					 bool *replaced)
+ {
+-	struct drm_device *dev = crtc->dev;
+ 	struct drm_property_blob *new_blob = NULL;
+ 
+ 	if (blob_id != 0) {
+-		new_blob = drm_property_lookup_blob(dev, blob_id);
++		new_blob = drm_property_lookup_blob(crtc->dev, blob_id);
+ 		if (new_blob == NULL)
+ 			return -EINVAL;
+-		if (expected_size > 0 && expected_size != new_blob->length)
++
++		if (expected_size > 0 && expected_size != new_blob->length) {
++			drm_property_unreference_blob(new_blob);
+ 			return -EINVAL;
++		}
+ 	}
+ 
+ 	drm_atomic_replace_property_blob(blob, new_blob, replaced);
++	drm_property_unreference_blob(new_blob);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 04e457117980..aa644487749c 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -914,6 +914,7 @@ static void drm_dp_destroy_port(struct kref *kref)
+ 		/* no need to clean up vcpi
+ 		 * as if we have no connector we never setup a vcpi */
+ 		drm_dp_port_teardown_pdt(port, port->pdt);
++		port->pdt = DP_PEER_DEVICE_NONE;
+ 	}
+ 	kfree(port);
+ }
+@@ -1159,7 +1160,9 @@ static void drm_dp_add_port(struct drm_dp_mst_branch *mstb,
+ 			drm_dp_put_port(port);
+ 			goto out;
+ 		}
+-		if (port->port_num >= DP_MST_LOGICAL_PORT_0) {
++		if ((port->pdt == DP_PEER_DEVICE_DP_LEGACY_CONV ||
++		     port->pdt == DP_PEER_DEVICE_SST_SINK) &&
++		    port->port_num >= DP_MST_LOGICAL_PORT_0) {
+ 			port->cached_edid = drm_get_edid(port->connector, &port->aux.ddc);
+ 			drm_mode_connector_set_tile_property(port->connector);
+ 		}
+@@ -2919,6 +2922,7 @@ static void drm_dp_destroy_connector_work(struct work_struct *work)
+ 		mgr->cbs->destroy_connector(mgr, port->connector);
+ 
+ 		drm_dp_port_teardown_pdt(port, port->pdt);
++		port->pdt = DP_PEER_DEVICE_NONE;
+ 
+ 		if (!port->input && port->vcpi.vcpi > 0) {
+ 			drm_dp_mst_reset_vcpi_slots(mgr, port);
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 0a06f9120b5a..337c55597ccd 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -129,7 +129,12 @@ int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
+ 	return 0;
+ fail:
+ 	for (i = 0; i < fb_helper->connector_count; i++) {
+-		kfree(fb_helper->connector_info[i]);
++		struct drm_fb_helper_connector *fb_helper_connector =
++			fb_helper->connector_info[i];
++
++		drm_connector_unreference(fb_helper_connector->connector);
++
++		kfree(fb_helper_connector);
+ 		fb_helper->connector_info[i] = NULL;
+ 	}
+ 	fb_helper->connector_count = 0;
+@@ -601,6 +606,24 @@ int drm_fb_helper_blank(int blank, struct fb_info *info)
+ }
+ EXPORT_SYMBOL(drm_fb_helper_blank);
+ 
++static void drm_fb_helper_modeset_release(struct drm_fb_helper *helper,
++					  struct drm_mode_set *modeset)
++{
++	int i;
++
++	for (i = 0; i < modeset->num_connectors; i++) {
++		drm_connector_unreference(modeset->connectors[i]);
++		modeset->connectors[i] = NULL;
++	}
++	modeset->num_connectors = 0;
++
++	drm_mode_destroy(helper->dev, modeset->mode);
++	modeset->mode = NULL;
++
++	/* FIXME should hold a ref? */
++	modeset->fb = NULL;
++}
++
+ static void drm_fb_helper_crtc_free(struct drm_fb_helper *helper)
+ {
+ 	int i;
+@@ -610,10 +633,12 @@ static void drm_fb_helper_crtc_free(struct drm_fb_helper *helper)
+ 		kfree(helper->connector_info[i]);
+ 	}
+ 	kfree(helper->connector_info);
++
+ 	for (i = 0; i < helper->crtc_count; i++) {
+-		kfree(helper->crtc_info[i].mode_set.connectors);
+-		if (helper->crtc_info[i].mode_set.mode)
+-			drm_mode_destroy(helper->dev, helper->crtc_info[i].mode_set.mode);
++		struct drm_mode_set *modeset = &helper->crtc_info[i].mode_set;
++
++		drm_fb_helper_modeset_release(helper, modeset);
++		kfree(modeset->connectors);
+ 	}
+ 	kfree(helper->crtc_info);
+ }
+@@ -632,7 +657,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
+ 	clip->x2 = clip->y2 = 0;
+ 	spin_unlock_irqrestore(&helper->dirty_lock, flags);
+ 
+-	helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
++	/* call dirty callback only when it has been really touched */
++	if (clip_copy.x1 < clip_copy.x2 && clip_copy.y1 < clip_copy.y2)
++		helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
+ }
+ 
+ /**
+@@ -2027,7 +2054,6 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
+ 	struct drm_fb_helper_crtc **crtcs;
+ 	struct drm_display_mode **modes;
+ 	struct drm_fb_offset *offsets;
+-	struct drm_mode_set *modeset;
+ 	bool *enabled;
+ 	int width, height;
+ 	int i;
+@@ -2075,45 +2101,35 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
+ 
+ 	/* need to set the modesets up here for use later */
+ 	/* fill out the connector<->crtc mappings into the modesets */
+-	for (i = 0; i < fb_helper->crtc_count; i++) {
+-		modeset = &fb_helper->crtc_info[i].mode_set;
+-		modeset->num_connectors = 0;
+-		modeset->fb = NULL;
+-	}
++	for (i = 0; i < fb_helper->crtc_count; i++)
++		drm_fb_helper_modeset_release(fb_helper,
++					      &fb_helper->crtc_info[i].mode_set);
+ 
+ 	for (i = 0; i < fb_helper->connector_count; i++) {
+ 		struct drm_display_mode *mode = modes[i];
+ 		struct drm_fb_helper_crtc *fb_crtc = crtcs[i];
+ 		struct drm_fb_offset *offset = &offsets[i];
+-		modeset = &fb_crtc->mode_set;
++		struct drm_mode_set *modeset = &fb_crtc->mode_set;
+ 
+ 		if (mode && fb_crtc) {
++			struct drm_connector *connector =
++				fb_helper->connector_info[i]->connector;
++
+ 			DRM_DEBUG_KMS("desired mode %s set on crtc %d (%d,%d)\n",
+ 				      mode->name, fb_crtc->mode_set.crtc->base.id, offset->x, offset->y);
++
+ 			fb_crtc->desired_mode = mode;
+ 			fb_crtc->x = offset->x;
+ 			fb_crtc->y = offset->y;
+-			if (modeset->mode)
+-				drm_mode_destroy(dev, modeset->mode);
+ 			modeset->mode = drm_mode_duplicate(dev,
+ 							   fb_crtc->desired_mode);
+-			modeset->connectors[modeset->num_connectors++] = fb_helper->connector_info[i]->connector;
++			drm_connector_reference(connector);
++			modeset->connectors[modeset->num_connectors++] = connector;
+ 			modeset->fb = fb_helper->fb;
+ 			modeset->x = offset->x;
+ 			modeset->y = offset->y;
+ 		}
+ 	}
+-
+-	/* Clear out any old modes if there are no more connected outputs. */
+-	for (i = 0; i < fb_helper->crtc_count; i++) {
+-		modeset = &fb_helper->crtc_info[i].mode_set;
+-		if (modeset->num_connectors == 0) {
+-			BUG_ON(modeset->fb);
+-			if (modeset->mode)
+-				drm_mode_destroy(dev, modeset->mode);
+-			modeset->mode = NULL;
+-		}
+-	}
+ out:
+ 	kfree(crtcs);
+ 	kfree(modes);
+diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c
+index c6e69e4cfa83..1f8af87c6294 100644
+--- a/drivers/gpu/drm/i915/intel_bios.c
++++ b/drivers/gpu/drm/i915/intel_bios.c
+@@ -1031,6 +1031,77 @@ static u8 translate_iboost(u8 val)
+ 	return mapping[val];
+ }
+ 
++static void sanitize_ddc_pin(struct drm_i915_private *dev_priv,
++			     enum port port)
++{
++	const struct ddi_vbt_port_info *info =
++		&dev_priv->vbt.ddi_port_info[port];
++	enum port p;
++
++	if (!info->alternate_ddc_pin)
++		return;
++
++	for_each_port_masked(p, (1 << port) - 1) {
++		struct ddi_vbt_port_info *i = &dev_priv->vbt.ddi_port_info[p];
++
++		if (info->alternate_ddc_pin != i->alternate_ddc_pin)
++			continue;
++
++		DRM_DEBUG_KMS("port %c trying to use the same DDC pin (0x%x) as port %c, "
++			      "disabling port %c DVI/HDMI support\n",
++			      port_name(p), i->alternate_ddc_pin,
++			      port_name(port), port_name(p));
++
++		/*
++		 * If we have multiple ports supposedly sharing the
++		 * pin, then dvi/hdmi couldn't exist on the shared
++		 * port. Otherwise they share the same ddc bin and
++		 * system couldn't communicate with them separately.
++		 *
++		 * Due to parsing the ports in alphabetical order,
++		 * a higher port will always clobber a lower one.
++		 */
++		i->supports_dvi = false;
++		i->supports_hdmi = false;
++		i->alternate_ddc_pin = 0;
++	}
++}
++
++static void sanitize_aux_ch(struct drm_i915_private *dev_priv,
++			    enum port port)
++{
++	const struct ddi_vbt_port_info *info =
++		&dev_priv->vbt.ddi_port_info[port];
++	enum port p;
++
++	if (!info->alternate_aux_channel)
++		return;
++
++	for_each_port_masked(p, (1 << port) - 1) {
++		struct ddi_vbt_port_info *i = &dev_priv->vbt.ddi_port_info[p];
++
++		if (info->alternate_aux_channel != i->alternate_aux_channel)
++			continue;
++
++		DRM_DEBUG_KMS("port %c trying to use the same AUX CH (0x%x) as port %c, "
++			      "disabling port %c DP support\n",
++			      port_name(p), i->alternate_aux_channel,
++			      port_name(port), port_name(p));
++
++		/*
++		 * If we have multiple ports supposedlt sharing the
++		 * aux channel, then DP couldn't exist on the shared
++		 * port. Otherwise they share the same aux channel
++		 * and system couldn't communicate with them separately.
++		 *
++		 * Due to parsing the ports in alphabetical order,
++		 * a higher port will always clobber a lower one.
++		 */
++		i->supports_dp = false;
++		i->alternate_aux_channel = 0;
++	}
++}
++
+ static void parse_ddi_port(struct drm_i915_private *dev_priv, enum port port,
+ 			   const struct bdb_header *bdb)
+ {
+@@ -1105,54 +1176,15 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv, enum port port,
+ 		DRM_DEBUG_KMS("Port %c is internal DP\n", port_name(port));
+ 
+ 	if (is_dvi) {
+-		if (port == PORT_E) {
+-			info->alternate_ddc_pin = ddc_pin;
+-			/* if DDIE share ddc pin with other port, then
+-			 * dvi/hdmi couldn't exist on the shared port.
+-			 * Otherwise they share the same ddc bin and system
+-			 * couldn't communicate with them seperately. */
+-			if (ddc_pin == DDC_PIN_B) {
+-				dev_priv->vbt.ddi_port_info[PORT_B].supports_dvi = 0;
+-				dev_priv->vbt.ddi_port_info[PORT_B].supports_hdmi = 0;
+-			} else if (ddc_pin == DDC_PIN_C) {
+-				dev_priv->vbt.ddi_port_info[PORT_C].supports_dvi = 0;
+-				dev_priv->vbt.ddi_port_info[PORT_C].supports_hdmi = 0;
+-			} else if (ddc_pin == DDC_PIN_D) {
+-				dev_priv->vbt.ddi_port_info[PORT_D].supports_dvi = 0;
+-				dev_priv->vbt.ddi_port_info[PORT_D].supports_hdmi = 0;
+-			}
+-		} else if (ddc_pin == DDC_PIN_B && port != PORT_B)
+-			DRM_DEBUG_KMS("Unexpected DDC pin for port B\n");
+-		else if (ddc_pin == DDC_PIN_C && port != PORT_C)
+-			DRM_DEBUG_KMS("Unexpected DDC pin for port C\n");
+-		else if (ddc_pin == DDC_PIN_D && port != PORT_D)
+-			DRM_DEBUG_KMS("Unexpected DDC pin for port D\n");
++		info->alternate_ddc_pin = ddc_pin;
++
++		sanitize_ddc_pin(dev_priv, port);
+ 	}
+ 
+ 	if (is_dp) {
+-		if (port == PORT_E) {
+-			info->alternate_aux_channel = aux_channel;
+-			/* if DDIE share aux channel with other port, then
+-			 * DP couldn't exist on the shared port. Otherwise
+-			 * they share the same aux channel and system
+-			 * couldn't communicate with them seperately. */
+-			if (aux_channel == DP_AUX_A)
+-				dev_priv->vbt.ddi_port_info[PORT_A].supports_dp = 0;
+-			else if (aux_channel == DP_AUX_B)
+-				dev_priv->vbt.ddi_port_info[PORT_B].supports_dp = 0;
+-			else if (aux_channel == DP_AUX_C)
+-				dev_priv->vbt.ddi_port_info[PORT_C].supports_dp = 0;
+-			else if (aux_channel == DP_AUX_D)
+-				dev_priv->vbt.ddi_port_info[PORT_D].supports_dp = 0;
+-		}
+-		else if (aux_channel == DP_AUX_A && port != PORT_A)
+-			DRM_DEBUG_KMS("Unexpected AUX channel for port A\n");
+-		else if (aux_channel == DP_AUX_B && port != PORT_B)
+-			DRM_DEBUG_KMS("Unexpected AUX channel for port B\n");
+-		else if (aux_channel == DP_AUX_C && port != PORT_C)
+-			DRM_DEBUG_KMS("Unexpected AUX channel for port C\n");
+-		else if (aux_channel == DP_AUX_D && port != PORT_D)
+-			DRM_DEBUG_KMS("Unexpected AUX channel for port D\n");
++		info->alternate_aux_channel = aux_channel;
++
++		sanitize_aux_ch(dev_priv, port);
+ 	}
+ 
+ 	if (bdb->version >= 158) {
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index e9a64fba6333..63462f279187 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -13834,7 +13834,7 @@ static void intel_atomic_commit_tail(struct drm_atomic_state *state)
+ 
+ 	for_each_plane_in_state(state, plane, plane_state, i) {
+ 		struct intel_plane_state *intel_plane_state =
+-			to_intel_plane_state(plane_state);
++			to_intel_plane_state(plane->state);
+ 
+ 		if (!intel_plane_state->wait_req)
+ 			continue;
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 1ca155f4d368..3051182cf483 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -1090,6 +1090,44 @@ intel_dp_aux_transfer(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
+ 	return ret;
+ }
+ 
++static enum port intel_aux_port(struct drm_i915_private *dev_priv,
++				enum port port)
++{
++	const struct ddi_vbt_port_info *info =
++		&dev_priv->vbt.ddi_port_info[port];
++	enum port aux_port;
++
++	if (!info->alternate_aux_channel) {
++		DRM_DEBUG_KMS("using AUX %c for port %c (platform default)\n",
++			      port_name(port), port_name(port));
++		return port;
++	}
++
++	switch (info->alternate_aux_channel) {
++	case DP_AUX_A:
++		aux_port = PORT_A;
++		break;
++	case DP_AUX_B:
++		aux_port = PORT_B;
++		break;
++	case DP_AUX_C:
++		aux_port = PORT_C;
++		break;
++	case DP_AUX_D:
++		aux_port = PORT_D;
++		break;
++	default:
++		MISSING_CASE(info->alternate_aux_channel);
++		aux_port = PORT_A;
++		break;
++	}
++
++	DRM_DEBUG_KMS("using AUX %c for port %c (VBT)\n",
++		      port_name(aux_port), port_name(port));
++
++	return aux_port;
++}
++
+ static i915_reg_t g4x_aux_ctl_reg(struct drm_i915_private *dev_priv,
+ 				       enum port port)
+ {
+@@ -1150,36 +1188,9 @@ static i915_reg_t ilk_aux_data_reg(struct drm_i915_private *dev_priv,
+ 	}
+ }
+ 
+-/*
+- * On SKL we don't have Aux for port E so we rely
+- * on VBT to set a proper alternate aux channel.
+- */
+-static enum port skl_porte_aux_port(struct drm_i915_private *dev_priv)
+-{
+-	const struct ddi_vbt_port_info *info =
+-		&dev_priv->vbt.ddi_port_info[PORT_E];
+-
+-	switch (info->alternate_aux_channel) {
+-	case DP_AUX_A:
+-		return PORT_A;
+-	case DP_AUX_B:
+-		return PORT_B;
+-	case DP_AUX_C:
+-		return PORT_C;
+-	case DP_AUX_D:
+-		return PORT_D;
+-	default:
+-		MISSING_CASE(info->alternate_aux_channel);
+-		return PORT_A;
+-	}
+-}
+-
+ static i915_reg_t skl_aux_ctl_reg(struct drm_i915_private *dev_priv,
+ 				       enum port port)
+ {
+-	if (port == PORT_E)
+-		port = skl_porte_aux_port(dev_priv);
+-
+ 	switch (port) {
+ 	case PORT_A:
+ 	case PORT_B:
+@@ -1195,9 +1206,6 @@ static i915_reg_t skl_aux_ctl_reg(struct drm_i915_private *dev_priv,
+ static i915_reg_t skl_aux_data_reg(struct drm_i915_private *dev_priv,
+ 					enum port port, int index)
+ {
+-	if (port == PORT_E)
+-		port = skl_porte_aux_port(dev_priv);
+-
+ 	switch (port) {
+ 	case PORT_A:
+ 	case PORT_B:
+@@ -1235,7 +1243,8 @@ static i915_reg_t intel_aux_data_reg(struct drm_i915_private *dev_priv,
+ static void intel_aux_reg_init(struct intel_dp *intel_dp)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp));
+-	enum port port = dp_to_dig_port(intel_dp)->port;
++	enum port port = intel_aux_port(dev_priv,
++					dp_to_dig_port(intel_dp)->port);
+ 	int i;
+ 
+ 	intel_dp->aux_ch_ctl_reg = intel_aux_ctl_reg(dev_priv, port);
+diff --git a/drivers/gpu/drm/i915/intel_fbc.c b/drivers/gpu/drm/i915/intel_fbc.c
+index 3836a1c79714..ad483376bdfa 100644
+--- a/drivers/gpu/drm/i915/intel_fbc.c
++++ b/drivers/gpu/drm/i915/intel_fbc.c
+@@ -104,8 +104,10 @@ static int intel_fbc_calculate_cfb_size(struct drm_i915_private *dev_priv,
+ 	int lines;
+ 
+ 	intel_fbc_get_plane_source_size(cache, NULL, &lines);
+-	if (INTEL_INFO(dev_priv)->gen >= 7)
++	if (INTEL_GEN(dev_priv) == 7)
+ 		lines = min(lines, 2048);
++	else if (INTEL_GEN(dev_priv) >= 8)
++		lines = min(lines, 2560);
+ 
+ 	/* Hardware needs the full buffer stride, not just the active area. */
+ 	return lines * cache->fb.stride;
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index e59a28cb3158..a69160568254 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -3363,13 +3363,15 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *cstate,
+ 	int num_active;
+ 	int id, i;
+ 
++	/* Clear the partitioning for disabled planes. */
++	memset(ddb->plane[pipe], 0, sizeof(ddb->plane[pipe]));
++	memset(ddb->y_plane[pipe], 0, sizeof(ddb->y_plane[pipe]));
++
+ 	if (WARN_ON(!state))
+ 		return 0;
+ 
+ 	if (!cstate->base.active) {
+ 		ddb->pipe[pipe].start = ddb->pipe[pipe].end = 0;
+-		memset(ddb->plane[pipe], 0, sizeof(ddb->plane[pipe]));
+-		memset(ddb->y_plane[pipe], 0, sizeof(ddb->y_plane[pipe]));
+ 		return 0;
+ 	}
+ 
+@@ -3469,12 +3471,6 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *cstate,
+ 	return 0;
+ }
+ 
+-static uint32_t skl_pipe_pixel_rate(const struct intel_crtc_state *config)
+-{
+-	/* TODO: Take into account the scalers once we support them */
+-	return config->base.adjusted_mode.crtc_clock;
+-}
+-
+ /*
+  * The max latency should be 257 (max the punit can code is 255 and we add 2us
+  * for the read latency) and cpp should always be <= 8, so that
+@@ -3525,7 +3521,7 @@ static uint32_t skl_adjusted_plane_pixel_rate(const struct intel_crtc_state *cst
+ 	 * Adjusted plane pixel rate is just the pipe's adjusted pixel rate
+ 	 * with additional adjustments for plane-specific scaling.
+ 	 */
+-	adjusted_pixel_rate = skl_pipe_pixel_rate(cstate);
++	adjusted_pixel_rate = ilk_pipe_pixel_rate(cstate);
+ 	downscale_amount = skl_plane_downscale_amount(pstate);
+ 
+ 	pixel_rate = adjusted_pixel_rate * downscale_amount >> 16;
+@@ -3737,11 +3733,11 @@ skl_compute_linetime_wm(struct intel_crtc_state *cstate)
+ 	if (!cstate->base.active)
+ 		return 0;
+ 
+-	if (WARN_ON(skl_pipe_pixel_rate(cstate) == 0))
++	if (WARN_ON(ilk_pipe_pixel_rate(cstate) == 0))
+ 		return 0;
+ 
+ 	return DIV_ROUND_UP(8 * cstate->base.adjusted_mode.crtc_htotal * 1000,
+-			    skl_pipe_pixel_rate(cstate));
++			    ilk_pipe_pixel_rate(cstate));
+ }
+ 
+ static void skl_compute_transition_wm(struct intel_crtc_state *cstate,
+@@ -4051,6 +4047,12 @@ skl_compute_ddb(struct drm_atomic_state *state)
+ 		intel_state->wm_results.dirty_pipes = ~0;
+ 	}
+ 
++	/*
++	 * We're not recomputing for the pipes not included in the commit, so
++	 * make sure we start with the current state.
++	 */
++	memcpy(ddb, &dev_priv->wm.skl_hw.ddb, sizeof(*ddb));
++
+ 	for_each_intel_crtc_mask(dev, intel_crtc, realloc_pipes) {
+ 		struct intel_crtc_state *cstate;
+ 
+diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c
+index 29423e757d36..927c51e8abc6 100644
+--- a/drivers/gpu/drm/imx/ipuv3-plane.c
++++ b/drivers/gpu/drm/imx/ipuv3-plane.c
+@@ -108,6 +108,7 @@ static void ipu_plane_atomic_set_base(struct ipu_plane *ipu_plane,
+ {
+ 	struct drm_plane *plane = &ipu_plane->base;
+ 	struct drm_plane_state *state = plane->state;
++	struct drm_crtc_state *crtc_state = state->crtc->state;
+ 	struct drm_framebuffer *fb = state->fb;
+ 	unsigned long eba, ubo, vbo;
+ 	int active;
+@@ -149,7 +150,7 @@ static void ipu_plane_atomic_set_base(struct ipu_plane *ipu_plane,
+ 		break;
+ 	}
+ 
+-	if (old_state->fb) {
++	if (!drm_atomic_crtc_needs_modeset(crtc_state)) {
+ 		active = ipu_idmac_get_current_buffer(ipu_plane->ipu_ch);
+ 		ipu_cpmem_set_buffer(ipu_plane->ipu_ch, !active, eba);
+ 		ipu_idmac_select_buffer(ipu_plane->ipu_ch, !active);
+@@ -359,7 +360,9 @@ static int ipu_plane_atomic_check(struct drm_plane *plane,
+ 		if ((ubo > 0xfffff8) || (vbo > 0xfffff8))
+ 			return -EINVAL;
+ 
+-		if (old_fb) {
++		if (old_fb &&
++		    (old_fb->pixel_format == DRM_FORMAT_YUV420 ||
++		     old_fb->pixel_format == DRM_FORMAT_YVU420)) {
+ 			old_ubo = drm_plane_state_to_ubo(old_state);
+ 			old_vbo = drm_plane_state_to_vbo(old_state);
+ 			if (ubo != old_ubo || vbo != old_vbo)
+diff --git a/drivers/gpu/drm/nouveau/nouveau_acpi.c b/drivers/gpu/drm/nouveau/nouveau_acpi.c
+index dc57b628e074..193573d191e5 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_acpi.c
++++ b/drivers/gpu/drm/nouveau/nouveau_acpi.c
+@@ -240,7 +240,8 @@ static bool nouveau_pr3_present(struct pci_dev *pdev)
+ 	if (!parent_adev)
+ 		return false;
+ 
+-	return acpi_has_method(parent_adev->handle, "_PR3");
++	return parent_adev->power.flags.power_resources &&
++		acpi_has_method(parent_adev->handle, "_PR3");
+ }
+ 
+ static void nouveau_dsm_pci_probe(struct pci_dev *pdev, acpi_handle *dhandle_out,
+diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
+index 4a3d7cab83f7..4b9c2d5ff6a1 100644
+--- a/drivers/gpu/drm/radeon/ni.c
++++ b/drivers/gpu/drm/radeon/ni.c
+@@ -1396,9 +1396,7 @@ static void cayman_pcie_gart_fini(struct radeon_device *rdev)
+ void cayman_cp_int_cntl_setup(struct radeon_device *rdev,
+ 			      int ring, u32 cp_int_cntl)
+ {
+-	u32 srbm_gfx_cntl = RREG32(SRBM_GFX_CNTL) & ~3;
+-
+-	WREG32(SRBM_GFX_CNTL, srbm_gfx_cntl | (ring & 3));
++	WREG32(SRBM_GFX_CNTL, RINGID(ring));
+ 	WREG32(CP_INT_CNTL, cp_int_cntl);
+ }
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_dp_auxch.c b/drivers/gpu/drm/radeon/radeon_dp_auxch.c
+index db64e0062689..3b0c229d7dcd 100644
+--- a/drivers/gpu/drm/radeon/radeon_dp_auxch.c
++++ b/drivers/gpu/drm/radeon/radeon_dp_auxch.c
+@@ -105,7 +105,7 @@ radeon_dp_aux_transfer_native(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg
+ 
+ 	tmp &= AUX_HPD_SEL(0x7);
+ 	tmp |= AUX_HPD_SEL(chan->rec.hpd);
+-	tmp |= AUX_EN | AUX_LS_READ_EN | AUX_HPD_DISCON(0x1);
++	tmp |= AUX_EN | AUX_LS_READ_EN;
+ 
+ 	WREG32(AUX_CONTROL + aux_offset[instance], tmp);
+ 
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index 89bdf20344ae..c49934527a87 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -2999,6 +2999,49 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
+ 	int i;
+ 	struct si_dpm_quirk *p = si_dpm_quirk_list;
+ 
++	/* limit all SI kickers */
++	if (rdev->family == CHIP_PITCAIRN) {
++		if ((rdev->pdev->revision == 0x81) ||
++		    (rdev->pdev->device == 0x6810) ||
++		    (rdev->pdev->device == 0x6811) ||
++		    (rdev->pdev->device == 0x6816) ||
++		    (rdev->pdev->device == 0x6817) ||
++		    (rdev->pdev->device == 0x6806))
++			max_mclk = 120000;
++	} else if (rdev->family == CHIP_VERDE) {
++		if ((rdev->pdev->revision == 0x81) ||
++		    (rdev->pdev->revision == 0x83) ||
++		    (rdev->pdev->revision == 0x87) ||
++		    (rdev->pdev->device == 0x6820) ||
++		    (rdev->pdev->device == 0x6821) ||
++		    (rdev->pdev->device == 0x6822) ||
++		    (rdev->pdev->device == 0x6823) ||
++		    (rdev->pdev->device == 0x682A) ||
++		    (rdev->pdev->device == 0x682B)) {
++			max_sclk = 75000;
++			max_mclk = 80000;
++		}
++	} else if (rdev->family == CHIP_OLAND) {
++		if ((rdev->pdev->revision == 0xC7) ||
++		    (rdev->pdev->revision == 0x80) ||
++		    (rdev->pdev->revision == 0x81) ||
++		    (rdev->pdev->revision == 0x83) ||
++		    (rdev->pdev->device == 0x6604) ||
++		    (rdev->pdev->device == 0x6605)) {
++			max_sclk = 75000;
++			max_mclk = 80000;
++		}
++	} else if (rdev->family == CHIP_HAINAN) {
++		if ((rdev->pdev->revision == 0x81) ||
++		    (rdev->pdev->revision == 0x83) ||
++		    (rdev->pdev->revision == 0xC3) ||
++		    (rdev->pdev->device == 0x6664) ||
++		    (rdev->pdev->device == 0x6665) ||
++		    (rdev->pdev->device == 0x6667)) {
++			max_sclk = 75000;
++			max_mclk = 80000;
++		}
++	}
+ 	/* Apply dpm quirks */
+ 	while (p && p->chip_device != 0) {
+ 		if (rdev->pdev->vendor == p->chip_vendor &&
+@@ -3011,16 +3054,6 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
+ 		}
+ 		++p;
+ 	}
+-	/* limit mclk on all R7 370 parts for stability */
+-	if (rdev->pdev->device == 0x6811 &&
+-	    rdev->pdev->revision == 0x81)
+-		max_mclk = 120000;
+-	/* limit sclk/mclk on Jet parts for stability */
+-	if (rdev->pdev->device == 0x6665 &&
+-	    rdev->pdev->revision == 0xc3) {
+-		max_sclk = 75000;
+-		max_mclk = 80000;
+-	}
+ 
+ 	if (rps->vce_active) {
+ 		rps->evclk = rdev->pm.dpm.vce_states[rdev->pm.dpm.vce_level].evclk;
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index e92b09d32605..9ab703c1042e 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -179,6 +179,7 @@
+ #define USB_DEVICE_ID_ATEN_4PORTKVM	0x2205
+ #define USB_DEVICE_ID_ATEN_4PORTKVMC	0x2208
+ #define USB_DEVICE_ID_ATEN_CS682	0x2213
++#define USB_DEVICE_ID_ATEN_CS692	0x8021
+ 
+ #define USB_VENDOR_ID_ATMEL		0x03eb
+ #define USB_DEVICE_ID_ATMEL_MULTITOUCH	0x211c
+diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
+index bb400081efe4..85fcf60f3bba 100644
+--- a/drivers/hid/usbhid/hid-quirks.c
++++ b/drivers/hid/usbhid/hid-quirks.c
+@@ -63,6 +63,7 @@ static const struct hid_blacklist {
+ 	{ USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVM, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVMC, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS682, HID_QUIRK_NOGET },
++	{ USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS692, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FIGHTERSTICK, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_COMBATSTICK, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FLIGHT_SIM_ECLIPSE_YOKE, HID_QUIRK_NOGET },
+diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
+index d5acaa2d8e61..9dc63725363d 100644
+--- a/drivers/hv/hv_util.c
++++ b/drivers/hv/hv_util.c
+@@ -283,10 +283,14 @@ static void heartbeat_onchannelcallback(void *context)
+ 	u8 *hbeat_txf_buf = util_heartbeat.recv_buffer;
+ 	struct icmsg_negotiate *negop = NULL;
+ 
+-	vmbus_recvpacket(channel, hbeat_txf_buf,
+-			 PAGE_SIZE, &recvlen, &requestid);
++	while (1) {
++
++		vmbus_recvpacket(channel, hbeat_txf_buf,
++				 PAGE_SIZE, &recvlen, &requestid);
++
++		if (!recvlen)
++			break;
+ 
+-	if (recvlen > 0) {
+ 		icmsghdrp = (struct icmsg_hdr *)&hbeat_txf_buf[
+ 				sizeof(struct vmbuspipe_hdr)];
+ 
+diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
+index 5c5b7cada8be..dfae43523d34 100644
+--- a/drivers/i2c/busses/i2c-rk3x.c
++++ b/drivers/i2c/busses/i2c-rk3x.c
+@@ -694,6 +694,8 @@ static int rk3x_i2c_v0_calc_timings(unsigned long clk_rate,
+ 	t_calc->div_low--;
+ 	t_calc->div_high--;
+ 
++	/* Give the tuning value 0, that would not update con register */
++	t_calc->tuning = 0;
+ 	/* Maximum divider supported by hw is 0xffff */
+ 	if (t_calc->div_low > 0xffff) {
+ 		t_calc->div_low = 0xffff;
+diff --git a/drivers/i2c/busses/i2c-xgene-slimpro.c b/drivers/i2c/busses/i2c-xgene-slimpro.c
+index 4233f5695352..3c38029e3fe9 100644
+--- a/drivers/i2c/busses/i2c-xgene-slimpro.c
++++ b/drivers/i2c/busses/i2c-xgene-slimpro.c
+@@ -105,7 +105,7 @@ struct slimpro_i2c_dev {
+ 	struct mbox_chan *mbox_chan;
+ 	struct mbox_client mbox_client;
+ 	struct completion rd_complete;
+-	u8 dma_buffer[I2C_SMBUS_BLOCK_MAX];
++	u8 dma_buffer[I2C_SMBUS_BLOCK_MAX + 1]; /* dma_buffer[0] is used for length */
+ 	u32 *resp_msg;
+ };
+ 
+diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
+index da3a02ef4a31..a9a9f66947e8 100644
+--- a/drivers/i2c/i2c-core.c
++++ b/drivers/i2c/i2c-core.c
+@@ -1592,6 +1592,7 @@ static struct i2c_client *of_i2c_register_device(struct i2c_adapter *adap,
+ static void of_i2c_register_devices(struct i2c_adapter *adap)
+ {
+ 	struct device_node *node;
++	struct i2c_client *client;
+ 
+ 	/* Only register child devices if the adapter has a node pointer set */
+ 	if (!adap->dev.of_node)
+@@ -1602,7 +1603,14 @@ static void of_i2c_register_devices(struct i2c_adapter *adap)
+ 	for_each_available_child_of_node(adap->dev.of_node, node) {
+ 		if (of_node_test_and_set_flag(node, OF_POPULATED))
+ 			continue;
+-		of_i2c_register_device(adap, node);
++
++		client = of_i2c_register_device(adap, node);
++		if (IS_ERR(client)) {
++			dev_warn(&adap->dev,
++				 "Failed to create I2C device for %s\n",
++				 node->full_name);
++			of_node_clear_flag(node, OF_POPULATED);
++		}
+ 	}
+ }
+ 
+@@ -2073,6 +2081,7 @@ int i2c_register_driver(struct module *owner, struct i2c_driver *driver)
+ 	/* add the driver to the list of i2c drivers in the driver core */
+ 	driver->driver.owner = owner;
+ 	driver->driver.bus = &i2c_bus_type;
++	INIT_LIST_HEAD(&driver->clients);
+ 
+ 	/* When registration returns, the driver core
+ 	 * will have called probe() for all matching-but-unbound devices.
+@@ -2083,7 +2092,6 @@ int i2c_register_driver(struct module *owner, struct i2c_driver *driver)
+ 
+ 	pr_debug("driver [%s] registered\n", driver->driver.name);
+ 
+-	INIT_LIST_HEAD(&driver->clients);
+ 	/* Walk the adapters that are already present */
+ 	i2c_for_each_dev(driver, __process_new_driver);
+ 
+@@ -2201,6 +2209,7 @@ static int of_i2c_notify(struct notifier_block *nb, unsigned long action,
+ 		if (IS_ERR(client)) {
+ 			dev_err(&adap->dev, "failed to create client for '%s'\n",
+ 				 rd->dn->full_name);
++			of_node_clear_flag(rd->dn, OF_POPULATED);
+ 			return notifier_from_errno(PTR_ERR(client));
+ 		}
+ 		break;
+diff --git a/drivers/iio/chemical/atlas-ph-sensor.c b/drivers/iio/chemical/atlas-ph-sensor.c
+index 407f141a1eee..a3fbdb761b5f 100644
+--- a/drivers/iio/chemical/atlas-ph-sensor.c
++++ b/drivers/iio/chemical/atlas-ph-sensor.c
+@@ -207,13 +207,14 @@ static int atlas_check_ec_calibration(struct atlas_data *data)
+ 	struct device *dev = &data->client->dev;
+ 	int ret;
+ 	unsigned int val;
++	__be16	rval;
+ 
+-	ret = regmap_bulk_read(data->regmap, ATLAS_REG_EC_PROBE, &val, 2);
++	ret = regmap_bulk_read(data->regmap, ATLAS_REG_EC_PROBE, &rval, 2);
+ 	if (ret)
+ 		return ret;
+ 
+-	dev_info(dev, "probe set to K = %d.%.2d", be16_to_cpu(val) / 100,
+-						 be16_to_cpu(val) % 100);
++	val = be16_to_cpu(rval);
++	dev_info(dev, "probe set to K = %d.%.2d", val / 100, val % 100);
+ 
+ 	ret = regmap_read(data->regmap, ATLAS_REG_EC_CALIB_STATUS, &val);
+ 	if (ret)
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index f4bfb4b2d50a..073246c7d163 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -877,6 +877,13 @@ static const struct dmi_system_id __initconst i8042_dmi_kbdreset_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "P34"),
+ 		},
+ 	},
++	{
++		/* Schenker XMG C504 - Elantech touchpad */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "XMG"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "C504"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 8abde6b8cedc..6d53810963f7 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -266,7 +266,7 @@ static struct raid_type {
+ 	{"raid10_offset", "raid10 offset (striped mirrors)",	    0, 2, 10, ALGORITHM_RAID10_OFFSET},
+ 	{"raid10_near",	  "raid10 near (striped mirrors)",	    0, 2, 10, ALGORITHM_RAID10_NEAR},
+ 	{"raid10",	  "raid10 (striped mirrors)",		    0, 2, 10, ALGORITHM_RAID10_DEFAULT},
+-	{"raid4",	  "raid4 (dedicated last parity disk)",	    1, 2, 4,  ALGORITHM_PARITY_N}, /* raid4 layout = raid5_n */
++	{"raid4",	  "raid4 (dedicated first parity disk)",    1, 2, 5,  ALGORITHM_PARITY_0}, /* raid4 layout = raid5_0 */
+ 	{"raid5_n",	  "raid5 (dedicated last parity disk)",	    1, 2, 5,  ALGORITHM_PARITY_N},
+ 	{"raid5_ls",	  "raid5 (left symmetric)",		    1, 2, 5,  ALGORITHM_LEFT_SYMMETRIC},
+ 	{"raid5_rs",	  "raid5 (right symmetric)",		    1, 2, 5,  ALGORITHM_RIGHT_SYMMETRIC},
+@@ -2087,11 +2087,11 @@ static int super_init_validation(struct raid_set *rs, struct md_rdev *rdev)
+ 		/*
+ 		 * No takeover/reshaping, because we don't have the extended v1.9.0 metadata
+ 		 */
+-		if (le32_to_cpu(sb->level) != mddev->level) {
++		if (le32_to_cpu(sb->level) != mddev->new_level) {
+ 			DMERR("Reshaping/takeover raid sets not yet supported. (raid level/stripes/size change)");
+ 			return -EINVAL;
+ 		}
+-		if (le32_to_cpu(sb->layout) != mddev->layout) {
++		if (le32_to_cpu(sb->layout) != mddev->new_layout) {
+ 			DMERR("Reshaping raid sets not yet supported. (raid layout change)");
+ 			DMERR("	 0x%X vs 0x%X", le32_to_cpu(sb->layout), mddev->layout);
+ 			DMERR("	 Old layout: %s w/ %d copies",
+@@ -2102,7 +2102,7 @@ static int super_init_validation(struct raid_set *rs, struct md_rdev *rdev)
+ 			      raid10_md_layout_to_copies(mddev->layout));
+ 			return -EINVAL;
+ 		}
+-		if (le32_to_cpu(sb->stripe_sectors) != mddev->chunk_sectors) {
++		if (le32_to_cpu(sb->stripe_sectors) != mddev->new_chunk_sectors) {
+ 			DMERR("Reshaping raid sets not yet supported. (stripe sectors change)");
+ 			return -EINVAL;
+ 		}
+@@ -2115,6 +2115,8 @@ static int super_init_validation(struct raid_set *rs, struct md_rdev *rdev)
+ 			return -EINVAL;
+ 		}
+ 
++		DMINFO("Discovered old metadata format; upgrading to extended metadata format");
++
+ 		/* Table line is checked vs. authoritative superblock */
+ 		rs_set_new(rs);
+ 	}
+@@ -2258,7 +2260,8 @@ static int super_validate(struct raid_set *rs, struct md_rdev *rdev)
+ 	if (!mddev->events && super_init_validation(rs, rdev))
+ 		return -EINVAL;
+ 
+-	if (le32_to_cpu(sb->compat_features) != FEATURE_FLAG_SUPPORTS_V190) {
++	if (le32_to_cpu(sb->compat_features) &&
++	    le32_to_cpu(sb->compat_features) != FEATURE_FLAG_SUPPORTS_V190) {
+ 		rs->ti->error = "Unable to assemble array: Unknown flag(s) in compatible feature flags";
+ 		return -EINVAL;
+ 	}
+@@ -3646,7 +3649,7 @@ static void raid_resume(struct dm_target *ti)
+ 
+ static struct target_type raid_target = {
+ 	.name = "raid",
+-	.version = {1, 9, 0},
++	.version = {1, 9, 1},
+ 	.module = THIS_MODULE,
+ 	.ctr = raid_ctr,
+ 	.dtr = raid_dtr,
+diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
+index bdf1606f67bc..7a6254d54baf 100644
+--- a/drivers/md/dm-raid1.c
++++ b/drivers/md/dm-raid1.c
+@@ -1292,6 +1292,7 @@ static int mirror_end_io(struct dm_target *ti, struct bio *bio, int error)
+ 
+ 			dm_bio_restore(bd, bio);
+ 			bio_record->details.bi_bdev = NULL;
++			bio->bi_error = 0;
+ 
+ 			queue_bio(ms, bio, rw);
+ 			return DM_ENDIO_INCOMPLETE;
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 5da86c8b6545..2154596eedf3 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -835,8 +835,11 @@ int dm_old_init_request_queue(struct mapped_device *md)
+ 	init_kthread_worker(&md->kworker);
+ 	md->kworker_task = kthread_run(kthread_worker_fn, &md->kworker,
+ 				       "kdmwork-%s", dm_device_name(md));
+-	if (IS_ERR(md->kworker_task))
+-		return PTR_ERR(md->kworker_task);
++	if (IS_ERR(md->kworker_task)) {
++		int error = PTR_ERR(md->kworker_task);
++		md->kworker_task = NULL;
++		return error;
++	}
+ 
+ 	elv_register_queue(md->queue);
+ 
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 3e407a9cde1f..c4b53b332607 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -695,37 +695,32 @@ int dm_table_add_target(struct dm_table *t, const char *type,
+ 
+ 	tgt->type = dm_get_target_type(type);
+ 	if (!tgt->type) {
+-		DMERR("%s: %s: unknown target type", dm_device_name(t->md),
+-		      type);
++		DMERR("%s: %s: unknown target type", dm_device_name(t->md), type);
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (dm_target_needs_singleton(tgt->type)) {
+ 		if (t->num_targets) {
+-			DMERR("%s: target type %s must appear alone in table",
+-			      dm_device_name(t->md), type);
+-			return -EINVAL;
++			tgt->error = "singleton target type must appear alone in table";
++			goto bad;
+ 		}
+ 		t->singleton = true;
+ 	}
+ 
+ 	if (dm_target_always_writeable(tgt->type) && !(t->mode & FMODE_WRITE)) {
+-		DMERR("%s: target type %s may not be included in read-only tables",
+-		      dm_device_name(t->md), type);
+-		return -EINVAL;
++		tgt->error = "target type may not be included in a read-only table";
++		goto bad;
+ 	}
+ 
+ 	if (t->immutable_target_type) {
+ 		if (t->immutable_target_type != tgt->type) {
+-			DMERR("%s: immutable target type %s cannot be mixed with other target types",
+-			      dm_device_name(t->md), t->immutable_target_type->name);
+-			return -EINVAL;
++			tgt->error = "immutable target type cannot be mixed with other target types";
++			goto bad;
+ 		}
+ 	} else if (dm_target_is_immutable(tgt->type)) {
+ 		if (t->num_targets) {
+-			DMERR("%s: immutable target type %s cannot be mixed with other target types",
+-			      dm_device_name(t->md), tgt->type->name);
+-			return -EINVAL;
++			tgt->error = "immutable target type cannot be mixed with other target types";
++			goto bad;
+ 		}
+ 		t->immutable_target_type = tgt->type;
+ 	}
+@@ -740,7 +735,6 @@ int dm_table_add_target(struct dm_table *t, const char *type,
+ 	 */
+ 	if (!adjoin(t, tgt)) {
+ 		tgt->error = "Gap in table";
+-		r = -EINVAL;
+ 		goto bad;
+ 	}
+ 
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 0f2928b3136b..eeef575fb54b 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1423,8 +1423,6 @@ static void cleanup_mapped_device(struct mapped_device *md)
+ 	if (md->bs)
+ 		bioset_free(md->bs);
+ 
+-	cleanup_srcu_struct(&md->io_barrier);
+-
+ 	if (md->disk) {
+ 		spin_lock(&_minor_lock);
+ 		md->disk->private_data = NULL;
+@@ -1436,6 +1434,8 @@ static void cleanup_mapped_device(struct mapped_device *md)
+ 	if (md->queue)
+ 		blk_cleanup_queue(md->queue);
+ 
++	cleanup_srcu_struct(&md->io_barrier);
++
+ 	if (md->bdev) {
+ 		bdput(md->bdev);
+ 		md->bdev = NULL;
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 915e84d631a2..db0aa6c058e7 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -8120,14 +8120,14 @@ void md_do_sync(struct md_thread *thread)
+ 
+ 	if (!test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
+ 	    !test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
+-	    mddev->curr_resync > 2) {
++	    mddev->curr_resync > 3) {
+ 		mddev->curr_resync_completed = mddev->curr_resync;
+ 		sysfs_notify(&mddev->kobj, NULL, "sync_completed");
+ 	}
+ 	mddev->pers->sync_request(mddev, max_sectors, &skipped);
+ 
+ 	if (!test_bit(MD_RECOVERY_CHECK, &mddev->recovery) &&
+-	    mddev->curr_resync > 2) {
++	    mddev->curr_resync > 3) {
+ 		if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) {
+ 			if (test_bit(MD_RECOVERY_INTR, &mddev->recovery)) {
+ 				if (mddev->curr_resync >= mddev->recovery_cp) {
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 21dc00eb1989..95bf4cd8fcc0 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -407,11 +407,14 @@ static void raid1_end_write_request(struct bio *bio)
+ 	struct bio *to_put = NULL;
+ 	int mirror = find_bio_disk(r1_bio, bio);
+ 	struct md_rdev *rdev = conf->mirrors[mirror].rdev;
++	bool discard_error;
++
++	discard_error = bio->bi_error && bio_op(bio) == REQ_OP_DISCARD;
+ 
+ 	/*
+ 	 * 'one mirror IO has finished' event handler:
+ 	 */
+-	if (bio->bi_error) {
++	if (bio->bi_error && !discard_error) {
+ 		set_bit(WriteErrorSeen,	&rdev->flags);
+ 		if (!test_and_set_bit(WantReplacement, &rdev->flags))
+ 			set_bit(MD_RECOVERY_NEEDED, &
+@@ -448,7 +451,7 @@ static void raid1_end_write_request(struct bio *bio)
+ 
+ 		/* Maybe we can clear some bad blocks. */
+ 		if (is_badblock(rdev, r1_bio->sector, r1_bio->sectors,
+-				&first_bad, &bad_sectors)) {
++				&first_bad, &bad_sectors) && !discard_error) {
+ 			r1_bio->bios[mirror] = IO_MADE_GOOD;
+ 			set_bit(R1BIO_MadeGood, &r1_bio->state);
+ 		}
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index be1a9fca3b2d..39fddda2fef2 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -447,6 +447,9 @@ static void raid10_end_write_request(struct bio *bio)
+ 	struct r10conf *conf = r10_bio->mddev->private;
+ 	int slot, repl;
+ 	struct md_rdev *rdev = NULL;
++	bool discard_error;
++
++	discard_error = bio->bi_error && bio_op(bio) == REQ_OP_DISCARD;
+ 
+ 	dev = find_bio_disk(conf, r10_bio, bio, &slot, &repl);
+ 
+@@ -460,7 +463,7 @@ static void raid10_end_write_request(struct bio *bio)
+ 	/*
+ 	 * this branch is our 'one mirror IO has finished' event handler:
+ 	 */
+-	if (bio->bi_error) {
++	if (bio->bi_error && !discard_error) {
+ 		if (repl)
+ 			/* Never record new bad blocks to replacement,
+ 			 * just fail it.
+@@ -503,7 +506,7 @@ static void raid10_end_write_request(struct bio *bio)
+ 		if (is_badblock(rdev,
+ 				r10_bio->devs[slot].addr,
+ 				r10_bio->sectors,
+-				&first_bad, &bad_sectors)) {
++				&first_bad, &bad_sectors) && !discard_error) {
+ 			bio_put(bio);
+ 			if (repl)
+ 				r10_bio->devs[slot].repl_bio = IO_MADE_GOOD;
+diff --git a/drivers/media/platform/vsp1/vsp1_video.c b/drivers/media/platform/vsp1/vsp1_video.c
+index 9fb4fc26a359..ed9759e8a6fc 100644
+--- a/drivers/media/platform/vsp1/vsp1_video.c
++++ b/drivers/media/platform/vsp1/vsp1_video.c
+@@ -675,6 +675,13 @@ static void vsp1_video_stop_streaming(struct vb2_queue *vq)
+ 	unsigned long flags;
+ 	int ret;
+ 
++	/* Clear the buffers ready flag to make sure the device won't be started
++	 * by a QBUF on the video node on the other side of the pipeline.
++	 */
++	spin_lock_irqsave(&video->irqlock, flags);
++	pipe->buffers_ready &= ~(1 << video->pipe_index);
++	spin_unlock_irqrestore(&video->irqlock, flags);
++
+ 	mutex_lock(&pipe->lock);
+ 	if (--pipe->stream_count == pipe->num_inputs) {
+ 		/* Stop the pipeline. */
+diff --git a/drivers/misc/cxl/api.c b/drivers/misc/cxl/api.c
+index af23d7dfe752..2e5233b60971 100644
+--- a/drivers/misc/cxl/api.c
++++ b/drivers/misc/cxl/api.c
+@@ -247,7 +247,9 @@ int cxl_start_context(struct cxl_context *ctx, u64 wed,
+ 	cxl_ctx_get();
+ 
+ 	if ((rc = cxl_ops->attach_process(ctx, kernel, wed, 0))) {
++		put_pid(ctx->glpid);
+ 		put_pid(ctx->pid);
++		ctx->glpid = ctx->pid = NULL;
+ 		cxl_adapter_context_put(ctx->afu->adapter);
+ 		cxl_ctx_put();
+ 		goto out;
+diff --git a/drivers/misc/cxl/file.c b/drivers/misc/cxl/file.c
+index d0b421f49b39..77080cc5fa0a 100644
+--- a/drivers/misc/cxl/file.c
++++ b/drivers/misc/cxl/file.c
+@@ -194,6 +194,16 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
+ 	ctx->mmio_err_ff = !!(work.flags & CXL_START_WORK_ERR_FF);
+ 
+ 	/*
++	 * Increment the mapped context count for adapter. This also checks
++	 * if adapter_context_lock is taken.
++	 */
++	rc = cxl_adapter_context_get(ctx->afu->adapter);
++	if (rc) {
++		afu_release_irqs(ctx, ctx);
++		goto out;
++	}
++
++	/*
+ 	 * We grab the PID here and not in the file open to allow for the case
+ 	 * where a process (master, some daemon, etc) has opened the chardev on
+ 	 * behalf of another process, so the AFU's mm gets bound to the process
+@@ -205,15 +215,6 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
+ 	ctx->pid = get_task_pid(current, PIDTYPE_PID);
+ 	ctx->glpid = get_task_pid(current->group_leader, PIDTYPE_PID);
+ 
+-	/*
+-	 * Increment the mapped context count for adapter. This also checks
+-	 * if adapter_context_lock is taken.
+-	 */
+-	rc = cxl_adapter_context_get(ctx->afu->adapter);
+-	if (rc) {
+-		afu_release_irqs(ctx, ctx);
+-		goto out;
+-	}
+ 
+ 	trace_cxl_attach(ctx, work.work_element_descriptor, work.num_interrupts, amr);
+ 
+@@ -221,6 +222,9 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
+ 							amr))) {
+ 		afu_release_irqs(ctx, ctx);
+ 		cxl_adapter_context_put(ctx->afu->adapter);
++		put_pid(ctx->glpid);
++		put_pid(ctx->pid);
++		ctx->glpid = ctx->pid = NULL;
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c
+index 222367cc8c81..524660510599 100644
+--- a/drivers/misc/genwqe/card_utils.c
++++ b/drivers/misc/genwqe/card_utils.c
+@@ -352,17 +352,27 @@ int genwqe_alloc_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl,
+ 		if (copy_from_user(sgl->lpage, user_addr + user_size -
+ 				   sgl->lpage_size, sgl->lpage_size)) {
+ 			rc = -EFAULT;
+-			goto err_out1;
++			goto err_out2;
+ 		}
+ 	}
+ 	return 0;
+ 
++ err_out2:
++	__genwqe_free_consistent(cd, PAGE_SIZE, sgl->lpage,
++				 sgl->lpage_dma_addr);
++	sgl->lpage = NULL;
++	sgl->lpage_dma_addr = 0;
+  err_out1:
+ 	__genwqe_free_consistent(cd, PAGE_SIZE, sgl->fpage,
+ 				 sgl->fpage_dma_addr);
++	sgl->fpage = NULL;
++	sgl->fpage_dma_addr = 0;
+  err_out:
+ 	__genwqe_free_consistent(cd, sgl->sgl_size, sgl->sgl,
+ 				 sgl->sgl_dma_addr);
++	sgl->sgl = NULL;
++	sgl->sgl_dma_addr = 0;
++	sgl->sgl_size = 0;
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/drivers/misc/mei/hw-txe.c b/drivers/misc/mei/hw-txe.c
+index 4a6c1b85f11e..2d23cdf8a734 100644
+--- a/drivers/misc/mei/hw-txe.c
++++ b/drivers/misc/mei/hw-txe.c
+@@ -978,11 +978,13 @@ static bool mei_txe_check_and_ack_intrs(struct mei_device *dev, bool do_ack)
+ 	hisr = mei_txe_br_reg_read(hw, HISR_REG);
+ 
+ 	aliveness = mei_txe_aliveness_get(dev);
+-	if (hhisr & IPC_HHIER_SEC && aliveness)
++	if (hhisr & IPC_HHIER_SEC && aliveness) {
+ 		ipc_isr = mei_txe_sec_reg_read_silent(hw,
+ 				SEC_IPC_HOST_INT_STATUS_REG);
+-	else
++	} else {
+ 		ipc_isr = 0;
++		hhisr &= ~IPC_HHIER_SEC;
++	}
+ 
+ 	generated = generated ||
+ 		(hisr & HISR_INT_STS_MSK) ||
+diff --git a/drivers/mmc/host/dw_mmc-pltfm.c b/drivers/mmc/host/dw_mmc-pltfm.c
+index c0bb0c793e84..dbbc4303bdd0 100644
+--- a/drivers/mmc/host/dw_mmc-pltfm.c
++++ b/drivers/mmc/host/dw_mmc-pltfm.c
+@@ -46,12 +46,13 @@ int dw_mci_pltfm_register(struct platform_device *pdev,
+ 	host->pdata = pdev->dev.platform_data;
+ 
+ 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	/* Get registers' physical base address */
+-	host->phy_regs = regs->start;
+ 	host->regs = devm_ioremap_resource(&pdev->dev, regs);
+ 	if (IS_ERR(host->regs))
+ 		return PTR_ERR(host->regs);
+ 
++	/* Get registers' physical base address */
++	host->phy_regs = regs->start;
++
+ 	platform_set_drvdata(pdev, host);
+ 	return dw_mci_probe(host);
+ }
+diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
+index 48eb55f344eb..a01a70a8fd3b 100644
+--- a/drivers/mtd/ubi/fastmap.c
++++ b/drivers/mtd/ubi/fastmap.c
+@@ -515,10 +515,11 @@ static int scan_pool(struct ubi_device *ubi, struct ubi_attach_info *ai,
+ 			unsigned long long ec = be64_to_cpu(ech->ec);
+ 			unmap_peb(ai, pnum);
+ 			dbg_bld("Adding PEB to free: %i", pnum);
++
+ 			if (err == UBI_IO_FF_BITFLIPS)
+-				add_aeb(ai, free, pnum, ec, 1);
+-			else
+-				add_aeb(ai, free, pnum, ec, 0);
++				scrub = 1;
++
++			add_aeb(ai, free, pnum, ec, scrub);
+ 			continue;
+ 		} else if (err == 0 || err == UBI_IO_BITFLIPS) {
+ 			dbg_bld("Found non empty PEB:%i in pool", pnum);
+@@ -750,11 +751,11 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ 			     fmvhdr->vol_type,
+ 			     be32_to_cpu(fmvhdr->last_eb_bytes));
+ 
+-		if (!av)
+-			goto fail_bad;
+-		if (PTR_ERR(av) == -EINVAL) {
+-			ubi_err(ubi, "volume (ID %i) already exists",
+-				fmvhdr->vol_id);
++		if (IS_ERR(av)) {
++			if (PTR_ERR(av) == -EEXIST)
++				ubi_err(ubi, "volume (ID %i) already exists",
++					fmvhdr->vol_id);
++
+ 			goto fail_bad;
+ 		}
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/core.h b/drivers/net/wireless/ath/ath10k/core.h
+index 30ae5bf81611..76ad825a823d 100644
+--- a/drivers/net/wireless/ath/ath10k/core.h
++++ b/drivers/net/wireless/ath/ath10k/core.h
+@@ -445,6 +445,7 @@ struct ath10k_debug {
+ 	u32 pktlog_filter;
+ 	u32 reg_addr;
+ 	u32 nf_cal_period;
++	void *cal_data;
+ 
+ 	struct ath10k_fw_crash_data *fw_crash_data;
+ };
+diff --git a/drivers/net/wireless/ath/ath10k/debug.c b/drivers/net/wireless/ath/ath10k/debug.c
+index 8f0fd41dfd4b..8c6a5dd7e178 100644
+--- a/drivers/net/wireless/ath/ath10k/debug.c
++++ b/drivers/net/wireless/ath/ath10k/debug.c
+@@ -30,6 +30,8 @@
+ /* ms */
+ #define ATH10K_DEBUG_HTT_STATS_INTERVAL 1000
+ 
++#define ATH10K_DEBUG_CAL_DATA_LEN 12064
++
+ #define ATH10K_FW_CRASH_DUMP_VERSION 1
+ 
+ /**
+@@ -1450,56 +1452,51 @@ static const struct file_operations fops_fw_dbglog = {
+ 	.llseek = default_llseek,
+ };
+ 
+-static int ath10k_debug_cal_data_open(struct inode *inode, struct file *file)
++static int ath10k_debug_cal_data_fetch(struct ath10k *ar)
+ {
+-	struct ath10k *ar = inode->i_private;
+-	void *buf;
+ 	u32 hi_addr;
+ 	__le32 addr;
+ 	int ret;
+ 
+-	mutex_lock(&ar->conf_mutex);
+-
+-	if (ar->state != ATH10K_STATE_ON &&
+-	    ar->state != ATH10K_STATE_UTF) {
+-		ret = -ENETDOWN;
+-		goto err;
+-	}
++	lockdep_assert_held(&ar->conf_mutex);
+ 
+-	buf = vmalloc(ar->hw_params.cal_data_len);
+-	if (!buf) {
+-		ret = -ENOMEM;
+-		goto err;
+-	}
++	if (WARN_ON(ar->hw_params.cal_data_len > ATH10K_DEBUG_CAL_DATA_LEN))
++		return -EINVAL;
+ 
+ 	hi_addr = host_interest_item_address(HI_ITEM(hi_board_data));
+ 
+ 	ret = ath10k_hif_diag_read(ar, hi_addr, &addr, sizeof(addr));
+ 	if (ret) {
+-		ath10k_warn(ar, "failed to read hi_board_data address: %d\n", ret);
+-		goto err_vfree;
++		ath10k_warn(ar, "failed to read hi_board_data address: %d\n",
++			    ret);
++		return ret;
+ 	}
+ 
+-	ret = ath10k_hif_diag_read(ar, le32_to_cpu(addr), buf,
++	ret = ath10k_hif_diag_read(ar, le32_to_cpu(addr), ar->debug.cal_data,
+ 				   ar->hw_params.cal_data_len);
+ 	if (ret) {
+ 		ath10k_warn(ar, "failed to read calibration data: %d\n", ret);
+-		goto err_vfree;
++		return ret;
+ 	}
+ 
+-	file->private_data = buf;
++	return 0;
++}
+ 
+-	mutex_unlock(&ar->conf_mutex);
++static int ath10k_debug_cal_data_open(struct inode *inode, struct file *file)
++{
++	struct ath10k *ar = inode->i_private;
+ 
+-	return 0;
++	mutex_lock(&ar->conf_mutex);
+ 
+-err_vfree:
+-	vfree(buf);
++	if (ar->state == ATH10K_STATE_ON ||
++	    ar->state == ATH10K_STATE_UTF) {
++		ath10k_debug_cal_data_fetch(ar);
++	}
+ 
+-err:
++	file->private_data = ar;
+ 	mutex_unlock(&ar->conf_mutex);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static ssize_t ath10k_debug_cal_data_read(struct file *file,
+@@ -1507,18 +1504,16 @@ static ssize_t ath10k_debug_cal_data_read(struct file *file,
+ 					  size_t count, loff_t *ppos)
+ {
+ 	struct ath10k *ar = file->private_data;
+-	void *buf = file->private_data;
+ 
+-	return simple_read_from_buffer(user_buf, count, ppos,
+-				       buf, ar->hw_params.cal_data_len);
+-}
++	mutex_lock(&ar->conf_mutex);
+ 
+-static int ath10k_debug_cal_data_release(struct inode *inode,
+-					 struct file *file)
+-{
+-	vfree(file->private_data);
++	count = simple_read_from_buffer(user_buf, count, ppos,
++					ar->debug.cal_data,
++					ar->hw_params.cal_data_len);
+ 
+-	return 0;
++	mutex_unlock(&ar->conf_mutex);
++
++	return count;
+ }
+ 
+ static ssize_t ath10k_write_ani_enable(struct file *file,
+@@ -1579,7 +1574,6 @@ static const struct file_operations fops_ani_enable = {
+ static const struct file_operations fops_cal_data = {
+ 	.open = ath10k_debug_cal_data_open,
+ 	.read = ath10k_debug_cal_data_read,
+-	.release = ath10k_debug_cal_data_release,
+ 	.owner = THIS_MODULE,
+ 	.llseek = default_llseek,
+ };
+@@ -1931,6 +1925,8 @@ void ath10k_debug_stop(struct ath10k *ar)
+ {
+ 	lockdep_assert_held(&ar->conf_mutex);
+ 
++	ath10k_debug_cal_data_fetch(ar);
++
+ 	/* Must not use _sync to avoid deadlock, we do that in
+ 	 * ath10k_debug_destroy(). The check for htt_stats_mask is to avoid
+ 	 * warning from del_timer(). */
+@@ -2343,6 +2339,10 @@ int ath10k_debug_create(struct ath10k *ar)
+ 	if (!ar->debug.fw_crash_data)
+ 		return -ENOMEM;
+ 
++	ar->debug.cal_data = vzalloc(ATH10K_DEBUG_CAL_DATA_LEN);
++	if (!ar->debug.cal_data)
++		return -ENOMEM;
++
+ 	INIT_LIST_HEAD(&ar->debug.fw_stats.pdevs);
+ 	INIT_LIST_HEAD(&ar->debug.fw_stats.vdevs);
+ 	INIT_LIST_HEAD(&ar->debug.fw_stats.peers);
+@@ -2356,6 +2356,9 @@ void ath10k_debug_destroy(struct ath10k *ar)
+ 	vfree(ar->debug.fw_crash_data);
+ 	ar->debug.fw_crash_data = NULL;
+ 
++	vfree(ar->debug.cal_data);
++	ar->debug.cal_data = NULL;
++
+ 	ath10k_debug_fw_stats_reset(ar);
+ 
+ 	kfree(ar->debug.tpc_stats);
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_calib.c b/drivers/net/wireless/ath/ath9k/ar9003_calib.c
+index b6f064a8d264..7e27a06e5df1 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_calib.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_calib.c
+@@ -33,7 +33,6 @@ struct coeff {
+ 
+ enum ar9003_cal_types {
+ 	IQ_MISMATCH_CAL = BIT(0),
+-	TEMP_COMP_CAL = BIT(1),
+ };
+ 
+ static void ar9003_hw_setup_calibration(struct ath_hw *ah,
+@@ -59,12 +58,6 @@ static void ar9003_hw_setup_calibration(struct ath_hw *ah,
+ 		/* Kick-off cal */
+ 		REG_SET_BIT(ah, AR_PHY_TIMING4, AR_PHY_TIMING4_DO_CAL);
+ 		break;
+-	case TEMP_COMP_CAL:
+-		ath_dbg(common, CALIBRATE,
+-			"starting Temperature Compensation Calibration\n");
+-		REG_SET_BIT(ah, AR_CH0_THERM, AR_CH0_THERM_LOCAL);
+-		REG_SET_BIT(ah, AR_CH0_THERM, AR_CH0_THERM_START);
+-		break;
+ 	default:
+ 		ath_err(common, "Invalid calibration type\n");
+ 		break;
+@@ -93,8 +86,7 @@ static bool ar9003_hw_per_calibration(struct ath_hw *ah,
+ 		/*
+ 		* Accumulate cal measures for active chains
+ 		*/
+-		if (cur_caldata->calCollect)
+-			cur_caldata->calCollect(ah);
++		cur_caldata->calCollect(ah);
+ 		ah->cal_samples++;
+ 
+ 		if (ah->cal_samples >= cur_caldata->calNumSamples) {
+@@ -107,8 +99,7 @@ static bool ar9003_hw_per_calibration(struct ath_hw *ah,
+ 			/*
+ 			* Process accumulated data
+ 			*/
+-			if (cur_caldata->calPostProc)
+-				cur_caldata->calPostProc(ah, numChains);
++			cur_caldata->calPostProc(ah, numChains);
+ 
+ 			/* Calibration has finished. */
+ 			caldata->CalValid |= cur_caldata->calType;
+@@ -323,16 +314,9 @@ static const struct ath9k_percal_data iq_cal_single_sample = {
+ 	ar9003_hw_iqcalibrate
+ };
+ 
+-static const struct ath9k_percal_data temp_cal_single_sample = {
+-	TEMP_COMP_CAL,
+-	MIN_CAL_SAMPLES,
+-	PER_MAX_LOG_COUNT,
+-};
+-
+ static void ar9003_hw_init_cal_settings(struct ath_hw *ah)
+ {
+ 	ah->iq_caldata.calData = &iq_cal_single_sample;
+-	ah->temp_caldata.calData = &temp_cal_single_sample;
+ 
+ 	if (AR_SREV_9300_20_OR_LATER(ah)) {
+ 		ah->enabled_cals |= TX_IQ_CAL;
+@@ -340,7 +324,7 @@ static void ar9003_hw_init_cal_settings(struct ath_hw *ah)
+ 			ah->enabled_cals |= TX_IQ_ON_AGC_CAL;
+ 	}
+ 
+-	ah->supp_cals = IQ_MISMATCH_CAL | TEMP_COMP_CAL;
++	ah->supp_cals = IQ_MISMATCH_CAL;
+ }
+ 
+ #define OFF_UPPER_LT 24
+@@ -1399,9 +1383,6 @@ static void ar9003_hw_init_cal_common(struct ath_hw *ah)
+ 	INIT_CAL(&ah->iq_caldata);
+ 	INSERT_CAL(ah, &ah->iq_caldata);
+ 
+-	INIT_CAL(&ah->temp_caldata);
+-	INSERT_CAL(ah, &ah->temp_caldata);
+-
+ 	/* Initialize current pointer to first element in list */
+ 	ah->cal_list_curr = ah->cal_list;
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/hw.h b/drivers/net/wireless/ath/ath9k/hw.h
+index 2a5d3ad1169c..9cbca1229bac 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.h
++++ b/drivers/net/wireless/ath/ath9k/hw.h
+@@ -830,7 +830,6 @@ struct ath_hw {
+ 	/* Calibration */
+ 	u32 supp_cals;
+ 	struct ath9k_cal_list iq_caldata;
+-	struct ath9k_cal_list temp_caldata;
+ 	struct ath9k_cal_list adcgain_caldata;
+ 	struct ath9k_cal_list adcdc_caldata;
+ 	struct ath9k_cal_list *cal_list;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index 4341d56805f8..a28093235ee0 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -231,7 +231,7 @@ struct rtl8xxxu_rxdesc16 {
+ 	u32 pattern1match:1;
+ 	u32 pattern0match:1;
+ #endif
+-	__le32 tsfl;
++	u32 tsfl;
+ #if 0
+ 	u32 bassn:12;
+ 	u32 bavld:1;
+@@ -361,7 +361,7 @@ struct rtl8xxxu_rxdesc24 {
+ 	u32 ldcp:1;
+ 	u32 splcp:1;
+ #endif
+-	__le32 tsfl;
++	u32 tsfl;
+ };
+ 
+ struct rtl8xxxu_txdesc32 {
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c
+index 9d45afb0e3fd..c831a586766c 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c
+@@ -1498,6 +1498,10 @@ static void rtl8723b_enable_rf(struct rtl8xxxu_priv *priv)
+ 	u32 val32;
+ 	u8 val8;
+ 
++	val32 = rtl8xxxu_read32(priv, REG_RX_WAIT_CCA);
++	val32 |= (BIT(22) | BIT(23));
++	rtl8xxxu_write32(priv, REG_RX_WAIT_CCA, val32);
++
+ 	/*
+ 	 * No indication anywhere as to what 0x0790 does. The 2 antenna
+ 	 * vendor code preserves bits 6-7 here.
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 77048db3b32a..c6b246aa2419 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -5201,7 +5201,12 @@ int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb)
+ 		pkt_offset = roundup(pkt_len + drvinfo_sz + desc_shift +
+ 				     sizeof(struct rtl8xxxu_rxdesc16), 128);
+ 
+-		if (pkt_cnt > 1)
++		/*
++		 * Only clone the skb if there's enough data at the end to
++		 * at least cover the rx descriptor
++		 */
++		if (pkt_cnt > 1 &&
++		    urb_len > (pkt_offset + sizeof(struct rtl8xxxu_rxdesc16)))
+ 			next_skb = skb_clone(skb, GFP_ATOMIC);
+ 
+ 		rx_status = IEEE80211_SKB_RXCB(skb);
+@@ -5219,7 +5224,7 @@ int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb)
+ 			rtl8xxxu_rx_parse_phystats(priv, rx_status, phy_stats,
+ 						   rx_desc->rxmcs);
+ 
+-		rx_status->mactime = le32_to_cpu(rx_desc->tsfl);
++		rx_status->mactime = rx_desc->tsfl;
+ 		rx_status->flag |= RX_FLAG_MACTIME_START;
+ 
+ 		if (!rx_desc->swdec)
+@@ -5289,7 +5294,7 @@ int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb)
+ 		rtl8xxxu_rx_parse_phystats(priv, rx_status, phy_stats,
+ 					   rx_desc->rxmcs);
+ 
+-	rx_status->mactime = le32_to_cpu(rx_desc->tsfl);
++	rx_status->mactime = rx_desc->tsfl;
+ 	rx_status->flag |= RX_FLAG_MACTIME_START;
+ 
+ 	if (!rx_desc->swdec)
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 0dbd29e287db..172ef8245811 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -339,6 +339,8 @@ int pwmchip_remove(struct pwm_chip *chip)
+ 	unsigned int i;
+ 	int ret = 0;
+ 
++	pwmchip_sysfs_unexport_children(chip);
++
+ 	mutex_lock(&pwm_lock);
+ 
+ 	for (i = 0; i < chip->npwm; i++) {
+diff --git a/drivers/pwm/sysfs.c b/drivers/pwm/sysfs.c
+index 18ed725594c3..0296d8178ae2 100644
+--- a/drivers/pwm/sysfs.c
++++ b/drivers/pwm/sysfs.c
+@@ -409,6 +409,24 @@ void pwmchip_sysfs_unexport(struct pwm_chip *chip)
+ 	}
+ }
+ 
++void pwmchip_sysfs_unexport_children(struct pwm_chip *chip)
++{
++	struct device *parent;
++	unsigned int i;
++
++	parent = class_find_device(&pwm_class, NULL, chip,
++				   pwmchip_sysfs_match);
++	if (!parent)
++		return;
++
++	for (i = 0; i < chip->npwm; i++) {
++		struct pwm_device *pwm = &chip->pwms[i];
++
++		if (test_bit(PWMF_EXPORTED, &pwm->flags))
++			pwm_unexport_child(parent, pwm);
++	}
++}
++
+ static int __init pwm_sysfs_init(void)
+ {
+ 	return class_register(&pwm_class);
+diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c
+index 3d53d636b17b..f0cfb0451757 100644
+--- a/drivers/scsi/arcmsr/arcmsr_hba.c
++++ b/drivers/scsi/arcmsr/arcmsr_hba.c
+@@ -2636,18 +2636,9 @@ static int arcmsr_queue_command_lck(struct scsi_cmnd *cmd,
+ 	struct AdapterControlBlock *acb = (struct AdapterControlBlock *) host->hostdata;
+ 	struct CommandControlBlock *ccb;
+ 	int target = cmd->device->id;
+-	int lun = cmd->device->lun;
+-	uint8_t scsicmd = cmd->cmnd[0];
+ 	cmd->scsi_done = done;
+ 	cmd->host_scribble = NULL;
+ 	cmd->result = 0;
+-	if ((scsicmd == SYNCHRONIZE_CACHE) ||(scsicmd == SEND_DIAGNOSTIC)){
+-		if(acb->devstate[target][lun] == ARECA_RAID_GONE) {
+-    			cmd->result = (DID_NO_CONNECT << 16);
+-		}
+-		cmd->scsi_done(cmd);
+-		return 0;
+-	}
+ 	if (target == 16) {
+ 		/* virtual device for iop message transfer */
+ 		arcmsr_handle_virtual_command(acb, cmd);
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 6a219a0844d3..05e892a231a5 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -5134,6 +5134,7 @@ static void __exit scsi_debug_exit(void)
+ 	bus_unregister(&pseudo_lld_bus);
+ 	root_device_unregister(pseudo_primary);
+ 
++	vfree(map_storep);
+ 	vfree(dif_storep);
+ 	vfree(fake_storep);
+ 	kfree(sdebug_q_arr);
+diff --git a/drivers/spi/spi-fsl-espi.c b/drivers/spi/spi-fsl-espi.c
+index 8d85a3c343da..3f3561371410 100644
+--- a/drivers/spi/spi-fsl-espi.c
++++ b/drivers/spi/spi-fsl-espi.c
+@@ -581,7 +581,7 @@ void fsl_espi_cpu_irq(struct mpc8xxx_spi *mspi, u32 events)
+ 
+ 		mspi->len -= rx_nr_bytes;
+ 
+-		if (mspi->rx)
++		if (rx_nr_bytes && mspi->rx)
+ 			mspi->get_rx(rx_data, mspi);
+ 	}
+ 
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 200ca228d885..935f1a511856 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1607,9 +1607,11 @@ static void of_register_spi_devices(struct spi_master *master)
+ 		if (of_node_test_and_set_flag(nc, OF_POPULATED))
+ 			continue;
+ 		spi = of_register_spi_device(master, nc);
+-		if (IS_ERR(spi))
++		if (IS_ERR(spi)) {
+ 			dev_warn(&master->dev, "Failed to create SPI device for %s\n",
+ 				nc->full_name);
++			of_node_clear_flag(nc, OF_POPULATED);
++		}
+ 	}
+ }
+ #else
+@@ -3120,6 +3122,7 @@ static int of_spi_notify(struct notifier_block *nb, unsigned long action,
+ 		if (IS_ERR(spi)) {
+ 			pr_err("%s: failed to create for '%s'\n",
+ 					__func__, rd->dn->full_name);
++			of_node_clear_flag(rd->dn, OF_POPULATED);
+ 			return notifier_from_errno(PTR_ERR(spi));
+ 		}
+ 		break;
+diff --git a/drivers/staging/wilc1000/host_interface.c b/drivers/staging/wilc1000/host_interface.c
+index 78f524fcd214..f4dbcb19d7c5 100644
+--- a/drivers/staging/wilc1000/host_interface.c
++++ b/drivers/staging/wilc1000/host_interface.c
+@@ -3391,7 +3391,6 @@ int wilc_init(struct net_device *dev, struct host_if_drv **hif_drv_handler)
+ 
+ 	clients_count++;
+ 
+-	destroy_workqueue(hif_workqueue);
+ _fail_:
+ 	return result;
+ }
+diff --git a/drivers/thermal/intel_powerclamp.c b/drivers/thermal/intel_powerclamp.c
+index 0e4dc0afcfd2..7a223074df3d 100644
+--- a/drivers/thermal/intel_powerclamp.c
++++ b/drivers/thermal/intel_powerclamp.c
+@@ -669,20 +669,10 @@ static struct thermal_cooling_device_ops powerclamp_cooling_ops = {
+ 	.set_cur_state = powerclamp_set_cur_state,
+ };
+ 
+-static const struct x86_cpu_id intel_powerclamp_ids[] __initconst = {
+-	{ X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, X86_FEATURE_MWAIT },
+-	{ X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, X86_FEATURE_ARAT },
+-	{ X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, X86_FEATURE_NONSTOP_TSC },
+-	{ X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, X86_FEATURE_CONSTANT_TSC},
+-	{}
+-};
+-MODULE_DEVICE_TABLE(x86cpu, intel_powerclamp_ids);
+-
+ static int __init powerclamp_probe(void)
+ {
+-	if (!x86_match_cpu(intel_powerclamp_ids)) {
+-		pr_err("Intel powerclamp does not run on family %d model %d\n",
+-				boot_cpu_data.x86, boot_cpu_data.x86_model);
++	if (!boot_cpu_has(X86_FEATURE_MWAIT)) {
++		pr_err("CPU does not support MWAIT");
+ 		return -ENODEV;
+ 	}
+ 
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 2705ca960e92..fd375f15bbbd 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -870,10 +870,15 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ 	if (new_cols == vc->vc_cols && new_rows == vc->vc_rows)
+ 		return 0;
+ 
++	if (new_screen_size > (4 << 20))
++		return -EINVAL;
+ 	newscreen = kmalloc(new_screen_size, GFP_USER);
+ 	if (!newscreen)
+ 		return -ENOMEM;
+ 
++	if (vc == sel_cons)
++		clear_selection();
++
+ 	old_rows = vc->vc_rows;
+ 	old_row_size = vc->vc_size_row;
+ 
+@@ -1176,7 +1181,7 @@ static void csi_J(struct vc_data *vc, int vpar)
+ 			break;
+ 		case 3: /* erase scroll-back buffer (and whole display) */
+ 			scr_memsetw(vc->vc_screenbuf, vc->vc_video_erase_char,
+-				    vc->vc_screenbuf_size >> 1);
++				    vc->vc_screenbuf_size);
+ 			set_origin(vc);
+ 			if (con_is_visible(vc))
+ 				update_screen(vc);
+diff --git a/drivers/usb/chipidea/host.c b/drivers/usb/chipidea/host.c
+index 053bac9d983c..887be343fcd4 100644
+--- a/drivers/usb/chipidea/host.c
++++ b/drivers/usb/chipidea/host.c
+@@ -185,6 +185,8 @@ static void host_stop(struct ci_hdrc *ci)
+ 
+ 	if (hcd) {
+ 		usb_remove_hcd(hcd);
++		ci->role = CI_ROLE_END;
++		synchronize_irq(ci->irq);
+ 		usb_put_hcd(hcd);
+ 		if (ci->platdata->reg_vbus && !ci_otg_is_fsm_mode(ci) &&
+ 			(ci->platdata->flags & CI_HDRC_TURN_VBUS_EARLY_ON))
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 68544618982e..6443cfba7b55 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3055,7 +3055,7 @@ err3:
+ 	kfree(dwc->setup_buf);
+ 
+ err2:
+-	dma_free_coherent(dwc->dev, sizeof(*dwc->ep0_trb),
++	dma_free_coherent(dwc->dev, sizeof(*dwc->ep0_trb) * 2,
+ 			dwc->ep0_trb, dwc->ep0_trb_addr);
+ 
+ err1:
+@@ -3080,7 +3080,7 @@ void dwc3_gadget_exit(struct dwc3 *dwc)
+ 	kfree(dwc->setup_buf);
+ 	kfree(dwc->zlp_buf);
+ 
+-	dma_free_coherent(dwc->dev, sizeof(*dwc->ep0_trb),
++	dma_free_coherent(dwc->dev, sizeof(*dwc->ep0_trb) * 2,
+ 			dwc->ep0_trb, dwc->ep0_trb_addr);
+ 
+ 	dma_free_coherent(dwc->dev, sizeof(*dwc->ctrl_req),
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index 5f562c1ec795..9b9e71f2c66e 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -587,8 +587,9 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb,
+ 
+ 	/* throttle high/super speed IRQ rate back slightly */
+ 	if (gadget_is_dualspeed(dev->gadget))
+-		req->no_interrupt = (dev->gadget->speed == USB_SPEED_HIGH ||
+-				     dev->gadget->speed == USB_SPEED_SUPER)
++		req->no_interrupt = (((dev->gadget->speed == USB_SPEED_HIGH ||
++				       dev->gadget->speed == USB_SPEED_SUPER)) &&
++					!list_empty(&dev->tx_reqs))
+ 			? ((atomic_read(&dev->tx_qlen) % dev->qmult) != 0)
+ 			: 0;
+ 
+diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c
+index bb1f6c8f0f01..45bc997d0711 100644
+--- a/drivers/usb/gadget/udc/atmel_usba_udc.c
++++ b/drivers/usb/gadget/udc/atmel_usba_udc.c
+@@ -1978,7 +1978,7 @@ static struct usba_ep * atmel_udc_of_init(struct platform_device *pdev,
+ 			dev_err(&pdev->dev, "of_probe: name error(%d)\n", ret);
+ 			goto err;
+ 		}
+-		ep->ep.name = name;
++		ep->ep.name = kasprintf(GFP_KERNEL, "ep%d", ep->index);
+ 
+ 		ep->ep_regs = udc->regs + USBA_EPT_BASE(i);
+ 		ep->dma_regs = udc->regs + USBA_DMA_BASE(i);
+diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c
+index 1700908b84ef..86612ac3fda2 100644
+--- a/drivers/usb/host/ohci-hcd.c
++++ b/drivers/usb/host/ohci-hcd.c
+@@ -72,7 +72,7 @@
+ static const char	hcd_name [] = "ohci_hcd";
+ 
+ #define	STATECHANGE_DELAY	msecs_to_jiffies(300)
+-#define	IO_WATCHDOG_DELAY	msecs_to_jiffies(250)
++#define	IO_WATCHDOG_DELAY	msecs_to_jiffies(275)
+ 
+ #include "ohci.h"
+ #include "pci-quirks.h"
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 730b9fd26685..0ef16900efed 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1166,7 +1166,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				xhci_set_link_state(xhci, port_array, wIndex,
+ 							XDEV_RESUME);
+ 				spin_unlock_irqrestore(&xhci->lock, flags);
+-				msleep(20);
++				msleep(USB_RESUME_TIMEOUT);
+ 				spin_lock_irqsave(&xhci->lock, flags);
+ 				xhci_set_link_state(xhci, port_array, wIndex,
+ 							XDEV_U0);
+@@ -1355,6 +1355,35 @@ int xhci_bus_suspend(struct usb_hcd *hcd)
+ 	return 0;
+ }
+ 
++/*
++ * Workaround for missing Cold Attach Status (CAS) if device re-plugged in S3.
++ * warm reset a USB3 device stuck in polling or compliance mode after resume.
++ * See Intel 100/c230 series PCH specification update Doc #332692-006 Errata #8
++ */
++static bool xhci_port_missing_cas_quirk(int port_index,
++					     __le32 __iomem **port_array)
++{
++	u32 portsc;
++
++	portsc = readl(port_array[port_index]);
++
++	/* if any of these are set we are not stuck */
++	if (portsc & (PORT_CONNECT | PORT_CAS))
++		return false;
++
++	if (((portsc & PORT_PLS_MASK) != XDEV_POLLING) &&
++	    ((portsc & PORT_PLS_MASK) != XDEV_COMP_MODE))
++		return false;
++
++	/* clear wakeup/change bits, and do a warm port reset */
++	portsc &= ~(PORT_RWC_BITS | PORT_CEC | PORT_WAKE_BITS);
++	portsc |= PORT_WR;
++	writel(portsc, port_array[port_index]);
++	/* flush write */
++	readl(port_array[port_index]);
++	return true;
++}
++
+ int xhci_bus_resume(struct usb_hcd *hcd)
+ {
+ 	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+@@ -1392,6 +1421,14 @@ int xhci_bus_resume(struct usb_hcd *hcd)
+ 		u32 temp;
+ 
+ 		temp = readl(port_array[port_index]);
++
++		/* warm reset CAS limited ports stuck in polling/compliance */
++		if ((xhci->quirks & XHCI_MISSING_CAS) &&
++		    (hcd->speed >= HCD_USB3) &&
++		    xhci_port_missing_cas_quirk(port_index, port_array)) {
++			xhci_dbg(xhci, "reset stuck port %d\n", port_index);
++			continue;
++		}
+ 		if (DEV_SUPERSPEED_ANY(temp))
+ 			temp &= ~(PORT_RWC_BITS | PORT_CEC | PORT_WAKE_BITS);
+ 		else
+@@ -1410,7 +1447,7 @@ int xhci_bus_resume(struct usb_hcd *hcd)
+ 
+ 	if (need_usb2_u3_exit) {
+ 		spin_unlock_irqrestore(&xhci->lock, flags);
+-		msleep(20);
++		msleep(USB_RESUME_TIMEOUT);
+ 		spin_lock_irqsave(&xhci->lock, flags);
+ 	}
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index d7b0f97abbad..e96ae80d107e 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -45,11 +45,13 @@
+ 
+ #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI	0x8c31
+ #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI	0x9c31
++#define PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_XHCI	0x9cb1
+ #define PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI		0x22b5
+ #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI		0xa12f
+ #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI	0x9d2f
+ #define PCI_DEVICE_ID_INTEL_BROXTON_M_XHCI		0x0aa8
+ #define PCI_DEVICE_ID_INTEL_BROXTON_B_XHCI		0x1aa8
++#define PCI_DEVICE_ID_INTEL_APL_XHCI			0x5aa8
+ 
+ static const char hcd_name[] = "xhci_hcd";
+ 
+@@ -153,7 +155,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		xhci->quirks |= XHCI_SPURIOUS_REBOOT;
+ 	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+-		pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI) {
++		(pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI ||
++		 pdev->device == PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_XHCI)) {
+ 		xhci->quirks |= XHCI_SPURIOUS_REBOOT;
+ 		xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
+ 	}
+@@ -169,6 +172,11 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) {
+ 		xhci->quirks |= XHCI_SSIC_PORT_UNUSED;
+ 	}
++	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
++	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI))
++		xhci->quirks |= XHCI_MISSING_CAS;
++
+ 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+ 			pdev->device == PCI_DEVICE_ID_EJ168) {
+ 		xhci->quirks |= XHCI_RESET_ON_RESUME;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index b2c1dc5dc0f3..f945380035d0 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -314,6 +314,8 @@ struct xhci_op_regs {
+ #define XDEV_U2		(0x2 << 5)
+ #define XDEV_U3		(0x3 << 5)
+ #define XDEV_INACTIVE	(0x6 << 5)
++#define XDEV_POLLING	(0x7 << 5)
++#define XDEV_COMP_MODE  (0xa << 5)
+ #define XDEV_RESUME	(0xf << 5)
+ /* true: port has power (see HCC_PPC) */
+ #define PORT_POWER	(1 << 9)
+@@ -1653,6 +1655,7 @@ struct xhci_hcd {
+ #define XHCI_MTK_HOST		(1 << 21)
+ #define XHCI_SSIC_PORT_UNUSED	(1 << 22)
+ #define XHCI_NO_64BIT_SUPPORT	(1 << 23)
++#define XHCI_MISSING_CAS	(1 << 24)
+ 	unsigned int		num_active_eps;
+ 	unsigned int		limit_active_eps;
+ 	/* There are two roothubs to keep track of bus suspend info for */
+diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c
+index 0b4cec940386..dae92de92592 100644
+--- a/drivers/usb/musb/omap2430.c
++++ b/drivers/usb/musb/omap2430.c
+@@ -337,6 +337,7 @@ static int omap2430_musb_init(struct musb *musb)
+ 	}
+ 	musb->isr = omap2430_musb_interrupt;
+ 	phy_init(musb->phy);
++	phy_power_on(musb->phy);
+ 
+ 	l = musb_readl(musb->mregs, OTG_INTERFSEL);
+ 
+@@ -373,8 +374,6 @@ static void omap2430_musb_enable(struct musb *musb)
+ 	struct musb_hdrc_platform_data *pdata = dev_get_platdata(dev);
+ 	struct omap_musb_board_data *data = pdata->board_data;
+ 
+-	if (!WARN_ON(!musb->phy))
+-		phy_power_on(musb->phy);
+ 
+ 	omap2430_set_power(musb, true, glue->cable_connected);
+ 
+@@ -413,9 +412,6 @@ static void omap2430_musb_disable(struct musb *musb)
+ 	struct device *dev = musb->controller;
+ 	struct omap2430_glue *glue = dev_get_drvdata(dev->parent);
+ 
+-	if (!WARN_ON(!musb->phy))
+-		phy_power_off(musb->phy);
+-
+ 	if (glue->status != MUSB_UNKNOWN)
+ 		omap_control_usb_set_mode(glue->control_otghs,
+ 			USB_MODE_DISCONNECT);
+@@ -429,6 +425,7 @@ static int omap2430_musb_exit(struct musb *musb)
+ 	struct omap2430_glue *glue = dev_get_drvdata(dev->parent);
+ 
+ 	omap2430_low_level_exit(musb);
++	phy_power_off(musb->phy);
+ 	phy_exit(musb->phy);
+ 	musb->phy = NULL;
+ 	cancel_work_sync(&glue->omap_musb_mailbox_work);
+diff --git a/drivers/usb/renesas_usbhs/rcar3.c b/drivers/usb/renesas_usbhs/rcar3.c
+index 1d70add926f0..d544b331c9f2 100644
+--- a/drivers/usb/renesas_usbhs/rcar3.c
++++ b/drivers/usb/renesas_usbhs/rcar3.c
+@@ -9,6 +9,7 @@
+  *
+  */
+ 
++#include <linux/delay.h>
+ #include <linux/io.h>
+ #include "common.h"
+ #include "rcar3.h"
+@@ -35,10 +36,13 @@ static int usbhs_rcar3_power_ctrl(struct platform_device *pdev,
+ 
+ 	usbhs_write32(priv, UGCTRL2, UGCTRL2_RESERVED_3 | UGCTRL2_USB0SEL_OTG);
+ 
+-	if (enable)
++	if (enable) {
+ 		usbhs_bset(priv, LPSTS, LPSTS_SUSPM, LPSTS_SUSPM);
+-	else
++		/* The controller on R-Car Gen3 needs to wait up to 45 usec */
++		udelay(45);
++	} else {
+ 		usbhs_bset(priv, LPSTS, LPSTS_SUSPM, 0);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 54a4de0efdba..f61477bed3a8 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -1077,7 +1077,9 @@ static int cp210x_tiocmget(struct tty_struct *tty)
+ 	u8 control;
+ 	int result;
+ 
+-	cp210x_read_u8_reg(port, CP210X_GET_MDMSTS, &control);
++	result = cp210x_read_u8_reg(port, CP210X_GET_MDMSTS, &control);
++	if (result)
++		return result;
+ 
+ 	result = ((control & CONTROL_DTR) ? TIOCM_DTR : 0)
+ 		|((control & CONTROL_RTS) ? TIOCM_RTS : 0)
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index b2d767e743fc..0ff7f38d7800 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -986,7 +986,8 @@ static const struct usb_device_id id_table_combined[] = {
+ 	/* ekey Devices */
+ 	{ USB_DEVICE(FTDI_VID, FTDI_EKEY_CONV_USB_PID) },
+ 	/* Infineon Devices */
+-	{ USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) },
++	{ USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_TC1798_PID, 1) },
++	{ USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_TC2X7_PID, 1) },
+ 	/* GE Healthcare devices */
+ 	{ USB_DEVICE(GE_HEALTHCARE_VID, GE_HEALTHCARE_NEMO_TRACKER_PID) },
+ 	/* Active Research (Actisense) devices */
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index f87a938cf005..21011c0a4c64 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -626,8 +626,9 @@
+ /*
+  * Infineon Technologies
+  */
+-#define INFINEON_VID		0x058b
+-#define INFINEON_TRIBOARD_PID	0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */
++#define INFINEON_VID		        0x058b
++#define INFINEON_TRIBOARD_TC1798_PID	0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */
++#define INFINEON_TRIBOARD_TC2X7_PID	0x0043 /* DAS JTAG TriBoard TC2X7 V1.0 */
+ 
+ /*
+  * Acton Research Corp.
+diff --git a/drivers/usb/serial/usb-serial.c b/drivers/usb/serial/usb-serial.c
+index d213cf44a7e4..4a037b4a79cf 100644
+--- a/drivers/usb/serial/usb-serial.c
++++ b/drivers/usb/serial/usb-serial.c
+@@ -1078,7 +1078,8 @@ static int usb_serial_probe(struct usb_interface *interface,
+ 
+ 	serial->disconnected = 0;
+ 
+-	usb_serial_console_init(serial->port[0]->minor);
++	if (num_ports > 0)
++		usb_serial_console_init(serial->port[0]->minor);
+ exit:
+ 	module_put(type->driver.owner);
+ 	return 0;
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dsi.c b/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
+index 9e4800a4e3d1..951dd93f89b2 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
+@@ -5348,7 +5348,7 @@ static int dsi_bind(struct device *dev, struct device *master, void *data)
+ 
+ 	dsi->phy_base = devm_ioremap(&dsidev->dev, res->start,
+ 		resource_size(res));
+-	if (!dsi->proto_base) {
++	if (!dsi->phy_base) {
+ 		DSSERR("can't ioremap DSI PHY\n");
+ 		return -ENOMEM;
+ 	}
+@@ -5368,7 +5368,7 @@ static int dsi_bind(struct device *dev, struct device *master, void *data)
+ 
+ 	dsi->pll_base = devm_ioremap(&dsidev->dev, res->start,
+ 		resource_size(res));
+-	if (!dsi->proto_base) {
++	if (!dsi->pll_base) {
+ 		DSSERR("can't ioremap DSI PLL\n");
+ 		return -ENOMEM;
+ 	}
+diff --git a/drivers/video/fbdev/pxafb.c b/drivers/video/fbdev/pxafb.c
+index 2c0487f4f805..ed41fdb42d13 100644
+--- a/drivers/video/fbdev/pxafb.c
++++ b/drivers/video/fbdev/pxafb.c
+@@ -2125,7 +2125,7 @@ static int of_get_pxafb_display(struct device *dev, struct device_node *disp,
+ 
+ 	timings = of_get_display_timings(disp);
+ 	if (!timings)
+-		goto out;
++		return -EINVAL;
+ 
+ 	ret = -ENOMEM;
+ 	info->modes = kmalloc_array(timings->num_timings,
+diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
+index 8c4e61783441..6d9e5173d5fa 100644
+--- a/drivers/virtio/virtio_pci_legacy.c
++++ b/drivers/virtio/virtio_pci_legacy.c
+@@ -212,10 +212,18 @@ int virtio_pci_legacy_probe(struct virtio_pci_device *vp_dev)
+ 		return -ENODEV;
+ 	}
+ 
+-	rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
+-	if (rc)
+-		rc = dma_set_mask_and_coherent(&pci_dev->dev,
+-						DMA_BIT_MASK(32));
++	rc = dma_set_mask(&pci_dev->dev, DMA_BIT_MASK(64));
++	if (rc) {
++		rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
++	} else {
++		/*
++		 * The virtio ring base address is expressed as a 32-bit PFN,
++		 * with a page size of 1 << VIRTIO_PCI_QUEUE_ADDR_SHIFT.
++		 */
++		dma_set_coherent_mask(&pci_dev->dev,
++				DMA_BIT_MASK(32 + VIRTIO_PCI_QUEUE_ADDR_SHIFT));
++	}
++
+ 	if (rc)
+ 		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+ 
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index ed9c9eeedfe5..6b2cd922d322 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -732,7 +732,8 @@ void virtqueue_disable_cb(struct virtqueue *_vq)
+ 
+ 	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
+ 		vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
+-		vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
++		if (!vq->event)
++			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
+ 	}
+ 
+ }
+@@ -764,7 +765,8 @@ unsigned virtqueue_enable_cb_prepare(struct virtqueue *_vq)
+ 	 * entry. Always do both to keep code simple. */
+ 	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
+ 		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
+-		vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
++		if (!vq->event)
++			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
+ 	}
+ 	vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, last_used_idx = vq->last_used_idx);
+ 	END_USE(vq);
+@@ -832,10 +834,11 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
+ 	 * more to do. */
+ 	/* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to
+ 	 * either clear the flags bit or point the event index at the next
+-	 * entry. Always do both to keep code simple. */
++	 * entry. Always update the event index to keep code simple. */
+ 	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
+ 		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
+-		vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
++		if (!vq->event)
++			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
+ 	}
+ 	/* TODO: tune this threshold */
+ 	bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4;
+@@ -953,7 +956,8 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
+ 	/* No callback?  Tell other side not to bother us. */
+ 	if (!callback) {
+ 		vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
+-		vq->vring.avail->flags = cpu_to_virtio16(vdev, vq->avail_flags_shadow);
++		if (!vq->event)
++			vq->vring.avail->flags = cpu_to_virtio16(vdev, vq->avail_flags_shadow);
+ 	}
+ 
+ 	/* Put everything in free lists. */
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index e6811c42e41e..bc1a004d4264 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -8915,9 +8915,14 @@ again:
+ 	 *    So even we call qgroup_free_data(), it won't decrease reserved
+ 	 *    space.
+ 	 * 2) Not written to disk
+-	 *    This means the reserved space should be freed here.
++	 *    This means the reserved space should be freed here. However,
++	 *    if a truncate invalidates the page (by clearing PageDirty)
++	 *    and the page is accounted for while allocating extent
++	 *    in btrfs_check_data_free_space() we let delayed_ref to
++	 *    free the entire extent.
+ 	 */
+-	btrfs_qgroup_free_data(inode, page_start, PAGE_SIZE);
++	if (PageDirty(page))
++		btrfs_qgroup_free_data(inode, page_start, PAGE_SIZE);
+ 	if (!inode_evicting) {
+ 		clear_extent_bit(tree, page_start, page_end,
+ 				 EXTENT_LOCKED | EXTENT_DIRTY |
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index ef9c55bc7907..90e1198bc63d 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -2713,14 +2713,12 @@ static inline void btrfs_remove_all_log_ctxs(struct btrfs_root *root,
+ 					     int index, int error)
+ {
+ 	struct btrfs_log_ctx *ctx;
++	struct btrfs_log_ctx *safe;
+ 
+-	if (!error) {
+-		INIT_LIST_HEAD(&root->log_ctxs[index]);
+-		return;
+-	}
+-
+-	list_for_each_entry(ctx, &root->log_ctxs[index], list)
++	list_for_each_entry_safe(ctx, safe, &root->log_ctxs[index], list) {
++		list_del_init(&ctx->list);
+ 		ctx->log_ret = error;
++	}
+ 
+ 	INIT_LIST_HEAD(&root->log_ctxs[index]);
+ }
+@@ -2961,13 +2959,9 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ 	mutex_unlock(&root->log_mutex);
+ 
+ out_wake_log_root:
+-	/*
+-	 * We needn't get log_mutex here because we are sure all
+-	 * the other tasks are blocked.
+-	 */
++	mutex_lock(&log_root_tree->log_mutex);
+ 	btrfs_remove_all_log_ctxs(log_root_tree, index2, ret);
+ 
+-	mutex_lock(&log_root_tree->log_mutex);
+ 	log_root_tree->log_transid_committed++;
+ 	atomic_set(&log_root_tree->log_commit[index2], 0);
+ 	mutex_unlock(&log_root_tree->log_mutex);
+@@ -2978,10 +2972,8 @@ out_wake_log_root:
+ 	if (waitqueue_active(&log_root_tree->log_commit_wait[index2]))
+ 		wake_up(&log_root_tree->log_commit_wait[index2]);
+ out:
+-	/* See above. */
+-	btrfs_remove_all_log_ctxs(root, index1, ret);
+-
+ 	mutex_lock(&root->log_mutex);
++	btrfs_remove_all_log_ctxs(root, index1, ret);
+ 	root->log_transid_committed++;
+ 	atomic_set(&root->log_commit[index1], 0);
+ 	mutex_unlock(&root->log_mutex);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index a204d7e109d4..0fe31b4b110d 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1147,9 +1147,7 @@ static void put_ol_stateid_locked(struct nfs4_ol_stateid *stp,
+ 
+ static bool unhash_lock_stateid(struct nfs4_ol_stateid *stp)
+ {
+-	struct nfs4_openowner *oo = openowner(stp->st_openstp->st_stateowner);
+-
+-	lockdep_assert_held(&oo->oo_owner.so_client->cl_lock);
++	lockdep_assert_held(&stp->st_stid.sc_client->cl_lock);
+ 
+ 	list_del_init(&stp->st_locks);
+ 	nfs4_unhash_stid(&stp->st_stid);
+@@ -1158,12 +1156,12 @@ static bool unhash_lock_stateid(struct nfs4_ol_stateid *stp)
+ 
+ static void release_lock_stateid(struct nfs4_ol_stateid *stp)
+ {
+-	struct nfs4_openowner *oo = openowner(stp->st_openstp->st_stateowner);
++	struct nfs4_client *clp = stp->st_stid.sc_client;
+ 	bool unhashed;
+ 
+-	spin_lock(&oo->oo_owner.so_client->cl_lock);
++	spin_lock(&clp->cl_lock);
+ 	unhashed = unhash_lock_stateid(stp);
+-	spin_unlock(&oo->oo_owner.so_client->cl_lock);
++	spin_unlock(&clp->cl_lock);
+ 	if (unhashed)
+ 		nfs4_put_stid(&stp->st_stid);
+ }
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index abadbc30e013..767377e522c6 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -171,6 +171,8 @@ static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
+ 		len -= bytes;
+ 	}
+ 
++	if (!error)
++		error = vfs_fsync(new_file, 0);
+ 	fput(new_file);
+ out_fput:
+ 	fput(old_file);
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index c75625c1efa3..cf2bfeb1b385 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -294,9 +294,6 @@ struct posix_acl *ovl_get_acl(struct inode *inode, int type)
+ 	if (!IS_ENABLED(CONFIG_FS_POSIX_ACL) || !IS_POSIXACL(realinode))
+ 		return NULL;
+ 
+-	if (!realinode->i_op->get_acl)
+-		return NULL;
+-
+ 	old_cred = ovl_override_creds(inode->i_sb);
+ 	acl = get_acl(realinode, type);
+ 	revert_creds(old_cred);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index e2a94a26767b..a78415d77434 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1026,6 +1026,21 @@ ovl_posix_acl_xattr_set(const struct xattr_handler *handler,
+ 
+ 	posix_acl_release(acl);
+ 
++	/*
++	 * Check if sgid bit needs to be cleared (actual setacl operation will
++	 * be done with mounter's capabilities and so that won't do it for us).
++	 */
++	if (unlikely(inode->i_mode & S_ISGID) &&
++	    handler->flags == ACL_TYPE_ACCESS &&
++	    !in_group_p(inode->i_gid) &&
++	    !capable_wrt_inode_uidgid(inode, CAP_FSETID)) {
++		struct iattr iattr = { .ia_valid = ATTR_KILL_SGID };
++
++		err = ovl_setattr(dentry, &iattr);
++		if (err)
++			return err;
++	}
++
+ 	err = ovl_xattr_set(dentry, handler->name, value, size, flags);
+ 	if (!err)
+ 		ovl_copyattr(ovl_inode_real(inode, NULL), inode);
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 4b86d3a738e1..3b27145f985f 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -350,7 +350,7 @@ static unsigned int vfs_dent_type(uint8_t type)
+  */
+ static int ubifs_readdir(struct file *file, struct dir_context *ctx)
+ {
+-	int err;
++	int err = 0;
+ 	struct qstr nm;
+ 	union ubifs_key key;
+ 	struct ubifs_dent_node *dent;
+@@ -452,14 +452,20 @@ out:
+ 	kfree(file->private_data);
+ 	file->private_data = NULL;
+ 
+-	if (err != -ENOENT) {
++	if (err != -ENOENT)
+ 		ubifs_err(c, "cannot find next direntry, error %d", err);
+-		return err;
+-	}
++	else
++		/*
++		 * -ENOENT is a non-fatal error in this context, the TNC uses
++		 * it to indicate that the cursor moved past the current directory
++		 * and readdir() has to stop.
++		 */
++		err = 0;
++
+ 
+ 	/* 2 is a special value indicating that there are no more direntries */
+ 	ctx->pos = 2;
+-	return 0;
++	return err;
+ }
+ 
+ /* Free saved readdir() state when the directory is closed */
+diff --git a/fs/xfs/libxfs/xfs_dquot_buf.c b/fs/xfs/libxfs/xfs_dquot_buf.c
+index 3cc3cf767474..ac9a003dd29a 100644
+--- a/fs/xfs/libxfs/xfs_dquot_buf.c
++++ b/fs/xfs/libxfs/xfs_dquot_buf.c
+@@ -191,8 +191,7 @@ xfs_dquot_buf_verify_crc(
+ 	if (mp->m_quotainfo)
+ 		ndquots = mp->m_quotainfo->qi_dqperchunk;
+ 	else
+-		ndquots = xfs_calc_dquots_per_chunk(
+-					XFS_BB_TO_FSB(mp, bp->b_length));
++		ndquots = xfs_calc_dquots_per_chunk(bp->b_length);
+ 
+ 	for (i = 0; i < ndquots; i++, d++) {
+ 		if (!xfs_verify_cksum((char *)d, sizeof(struct xfs_dqblk),
+diff --git a/include/linux/pwm.h b/include/linux/pwm.h
+index f1bbae014889..2c6c5114c089 100644
+--- a/include/linux/pwm.h
++++ b/include/linux/pwm.h
+@@ -641,6 +641,7 @@ static inline void pwm_remove_table(struct pwm_lookup *table, size_t num)
+ #ifdef CONFIG_PWM_SYSFS
+ void pwmchip_sysfs_export(struct pwm_chip *chip);
+ void pwmchip_sysfs_unexport(struct pwm_chip *chip);
++void pwmchip_sysfs_unexport_children(struct pwm_chip *chip);
+ #else
+ static inline void pwmchip_sysfs_export(struct pwm_chip *chip)
+ {
+@@ -649,6 +650,10 @@ static inline void pwmchip_sysfs_export(struct pwm_chip *chip)
+ static inline void pwmchip_sysfs_unexport(struct pwm_chip *chip)
+ {
+ }
++
++static inline void pwmchip_sysfs_unexport_children(struct pwm_chip *chip)
++{
++}
+ #endif /* CONFIG_PWM_SYSFS */
+ 
+ #endif /* __LINUX_PWM_H */
+diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
+index 185f8ea2702f..407ca0d7a938 100644
+--- a/include/uapi/linux/Kbuild
++++ b/include/uapi/linux/Kbuild
+@@ -396,6 +396,7 @@ header-y += string.h
+ header-y += suspend_ioctls.h
+ header-y += swab.h
+ header-y += synclink.h
++header-y += sync_file.h
+ header-y += sysctl.h
+ header-y += sysinfo.h
+ header-y += target_core_user.h
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 32bf6f75a8fe..96db64bdedbb 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -878,7 +878,7 @@ static inline struct timer_base *get_timer_base(u32 tflags)
+ 
+ #ifdef CONFIG_NO_HZ_COMMON
+ static inline struct timer_base *
+-__get_target_base(struct timer_base *base, unsigned tflags)
++get_target_base(struct timer_base *base, unsigned tflags)
+ {
+ #ifdef CONFIG_SMP
+ 	if ((tflags & TIMER_PINNED) || !base->migration_enabled)
+@@ -891,25 +891,27 @@ __get_target_base(struct timer_base *base, unsigned tflags)
+ 
+ static inline void forward_timer_base(struct timer_base *base)
+ {
++	unsigned long jnow = READ_ONCE(jiffies);
++
+ 	/*
+ 	 * We only forward the base when it's idle and we have a delta between
+ 	 * base clock and jiffies.
+ 	 */
+-	if (!base->is_idle || (long) (jiffies - base->clk) < 2)
++	if (!base->is_idle || (long) (jnow - base->clk) < 2)
+ 		return;
+ 
+ 	/*
+ 	 * If the next expiry value is > jiffies, then we fast forward to
+ 	 * jiffies otherwise we forward to the next expiry value.
+ 	 */
+-	if (time_after(base->next_expiry, jiffies))
+-		base->clk = jiffies;
++	if (time_after(base->next_expiry, jnow))
++		base->clk = jnow;
+ 	else
+ 		base->clk = base->next_expiry;
+ }
+ #else
+ static inline struct timer_base *
+-__get_target_base(struct timer_base *base, unsigned tflags)
++get_target_base(struct timer_base *base, unsigned tflags)
+ {
+ 	return get_timer_this_cpu_base(tflags);
+ }
+@@ -917,14 +919,6 @@ __get_target_base(struct timer_base *base, unsigned tflags)
+ static inline void forward_timer_base(struct timer_base *base) { }
+ #endif
+ 
+-static inline struct timer_base *
+-get_target_base(struct timer_base *base, unsigned tflags)
+-{
+-	struct timer_base *target = __get_target_base(base, tflags);
+-
+-	forward_timer_base(target);
+-	return target;
+-}
+ 
+ /*
+  * We are using hashed locking: Holding per_cpu(timer_bases[x]).lock means
+@@ -943,7 +937,14 @@ static struct timer_base *lock_timer_base(struct timer_list *timer,
+ {
+ 	for (;;) {
+ 		struct timer_base *base;
+-		u32 tf = timer->flags;
++		u32 tf;
++
++		/*
++		 * We need to use READ_ONCE() here, otherwise the compiler
++		 * might re-read @tf between the check for TIMER_MIGRATING
++		 * and spin_lock().
++		 */
++		tf = READ_ONCE(timer->flags);
+ 
+ 		if (!(tf & TIMER_MIGRATING)) {
+ 			base = get_timer_base(tf);
+@@ -964,6 +965,8 @@ __mod_timer(struct timer_list *timer, unsigned long expires, bool pending_only)
+ 	unsigned long clk = 0, flags;
+ 	int ret = 0;
+ 
++	BUG_ON(!timer->function);
++
+ 	/*
+ 	 * This is a common optimization triggered by the networking code - if
+ 	 * the timer is re-modified to have the same timeout or ends up in the
+@@ -972,13 +975,16 @@ __mod_timer(struct timer_list *timer, unsigned long expires, bool pending_only)
+ 	if (timer_pending(timer)) {
+ 		if (timer->expires == expires)
+ 			return 1;
++
+ 		/*
+-		 * Take the current timer_jiffies of base, but without holding
+-		 * the lock!
++		 * We lock timer base and calculate the bucket index right
++		 * here. If the timer ends up in the same bucket, then we
++		 * just update the expiry time and avoid the whole
++		 * dequeue/enqueue dance.
+ 		 */
+-		base = get_timer_base(timer->flags);
+-		clk = base->clk;
++		base = lock_timer_base(timer, &flags);
+ 
++		clk = base->clk;
+ 		idx = calc_wheel_index(expires, clk);
+ 
+ 		/*
+@@ -988,14 +994,14 @@ __mod_timer(struct timer_list *timer, unsigned long expires, bool pending_only)
+ 		 */
+ 		if (idx == timer_get_idx(timer)) {
+ 			timer->expires = expires;
+-			return 1;
++			ret = 1;
++			goto out_unlock;
+ 		}
++	} else {
++		base = lock_timer_base(timer, &flags);
+ 	}
+ 
+ 	timer_stats_timer_set_start_info(timer);
+-	BUG_ON(!timer->function);
+-
+-	base = lock_timer_base(timer, &flags);
+ 
+ 	ret = detach_if_pending(timer, base, false);
+ 	if (!ret && pending_only)
+@@ -1025,12 +1031,16 @@ __mod_timer(struct timer_list *timer, unsigned long expires, bool pending_only)
+ 		}
+ 	}
+ 
++	/* Try to forward a stale timer base clock */
++	forward_timer_base(base);
++
+ 	timer->expires = expires;
+ 	/*
+ 	 * If 'idx' was calculated above and the base time did not advance
+-	 * between calculating 'idx' and taking the lock, only enqueue_timer()
+-	 * and trigger_dyntick_cpu() is required. Otherwise we need to
+-	 * (re)calculate the wheel index via internal_add_timer().
++	 * between calculating 'idx' and possibly switching the base, only
++	 * enqueue_timer() and trigger_dyntick_cpu() is required. Otherwise
++	 * we need to (re)calculate the wheel index via
++	 * internal_add_timer().
+ 	 */
+ 	if (idx != UINT_MAX && clk == base->clk) {
+ 		enqueue_timer(base, timer, idx);
+@@ -1510,12 +1520,16 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
+ 	is_max_delta = (nextevt == base->clk + NEXT_TIMER_MAX_DELTA);
+ 	base->next_expiry = nextevt;
+ 	/*
+-	 * We have a fresh next event. Check whether we can forward the base:
++	 * We have a fresh next event. Check whether we can forward the
++	 * base. We can only do that when @basej is past base->clk
++	 * otherwise we might rewind base->clk.
+ 	 */
+-	if (time_after(nextevt, jiffies))
+-		base->clk = jiffies;
+-	else if (time_after(nextevt, base->clk))
+-		base->clk = nextevt;
++	if (time_after(basej, base->clk)) {
++		if (time_after(nextevt, basej))
++			base->clk = basej;
++		else if (time_after(nextevt, base->clk))
++			base->clk = nextevt;
++	}
+ 
+ 	if (time_before_eq(nextevt, basej)) {
+ 		expires = basem;
+diff --git a/mm/list_lru.c b/mm/list_lru.c
+index 1d05cb9d363d..234676e31edd 100644
+--- a/mm/list_lru.c
++++ b/mm/list_lru.c
+@@ -554,6 +554,8 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware,
+ 	err = memcg_init_list_lru(lru, memcg_aware);
+ 	if (err) {
+ 		kfree(lru->node);
++		/* Do this so a list_lru_destroy() doesn't crash: */
++		lru->node = NULL;
+ 		goto out;
+ 	}
+ 
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 4be518d4e68a..dddead146459 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -1947,6 +1947,15 @@ retry:
+ 		     current->flags & PF_EXITING))
+ 		goto force;
+ 
++	/*
++	 * Prevent unbounded recursion when reclaim operations need to
++	 * allocate memory. This might exceed the limits temporarily,
++	 * but we prefer facilitating memory reclaim and getting back
++	 * under the limit over triggering OOM kills in these cases.
++	 */
++	if (unlikely(current->flags & PF_MEMALLOC))
++		goto force;
++
+ 	if (unlikely(task_in_memcg_oom(current)))
+ 		goto nomem;
+ 
+diff --git a/mm/slab.c b/mm/slab.c
+index b67271024135..525a911985a2 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -964,7 +964,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
+ 	 * guaranteed to be valid until irq is re-enabled, because it will be
+ 	 * freed after synchronize_sched().
+ 	 */
+-	if (force_change)
++	if (old_shared && force_change)
+ 		synchronize_sched();
+ 
+ fail:
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 0fe8b7113868..ba0fad78e5d4 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -3048,7 +3048,9 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
+ 					    sc.gfp_mask,
+ 					    sc.reclaim_idx);
+ 
++	current->flags |= PF_MEMALLOC;
+ 	nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
++	current->flags &= ~PF_MEMALLOC;
+ 
+ 	trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed);
+ 
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 9dce3b157908..59a96034979b 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2253,16 +2253,22 @@ ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx)
+ 	if (!(status->rx_flags & IEEE80211_RX_AMSDU))
+ 		return RX_CONTINUE;
+ 
+-	if (ieee80211_has_a4(hdr->frame_control) &&
+-	    rx->sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
+-	    !rx->sdata->u.vlan.sta)
+-		return RX_DROP_UNUSABLE;
++	if (unlikely(ieee80211_has_a4(hdr->frame_control))) {
++		switch (rx->sdata->vif.type) {
++		case NL80211_IFTYPE_AP_VLAN:
++			if (!rx->sdata->u.vlan.sta)
++				return RX_DROP_UNUSABLE;
++			break;
++		case NL80211_IFTYPE_STATION:
++			if (!rx->sdata->u.mgd.use_4addr)
++				return RX_DROP_UNUSABLE;
++			break;
++		default:
++			return RX_DROP_UNUSABLE;
++		}
++	}
+ 
+-	if (is_multicast_ether_addr(hdr->addr1) &&
+-	    ((rx->sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
+-	      rx->sdata->u.vlan.sta) ||
+-	     (rx->sdata->vif.type == NL80211_IFTYPE_STATION &&
+-	      rx->sdata->u.mgd.use_4addr)))
++	if (is_multicast_ether_addr(hdr->addr1))
+ 		return RX_DROP_UNUSABLE;
+ 
+ 	skb->dev = dev;
+diff --git a/net/netfilter/xt_NFLOG.c b/net/netfilter/xt_NFLOG.c
+index 018eed7e1ff1..8668a5c18dc3 100644
+--- a/net/netfilter/xt_NFLOG.c
++++ b/net/netfilter/xt_NFLOG.c
+@@ -32,6 +32,7 @@ nflog_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ 	li.u.ulog.copy_len   = info->len;
+ 	li.u.ulog.group	     = info->group;
+ 	li.u.ulog.qthreshold = info->threshold;
++	li.u.ulog.flags	     = 0;
+ 
+ 	if (info->flags & XT_NFLOG_F_COPY_LEN)
+ 		li.u.ulog.flags |= NF_LOG_F_COPY_LEN;
+diff --git a/security/keys/Kconfig b/security/keys/Kconfig
+index f826e8739023..d942c7c2bc0a 100644
+--- a/security/keys/Kconfig
++++ b/security/keys/Kconfig
+@@ -41,7 +41,7 @@ config BIG_KEYS
+ 	bool "Large payload keys"
+ 	depends on KEYS
+ 	depends on TMPFS
+-	select CRYPTO
++	depends on (CRYPTO_ANSI_CPRNG = y || CRYPTO_DRBG = y)
+ 	select CRYPTO_AES
+ 	select CRYPTO_ECB
+ 	select CRYPTO_RNG
+diff --git a/security/keys/big_key.c b/security/keys/big_key.c
+index c0b3030b5634..835c1ab30d01 100644
+--- a/security/keys/big_key.c
++++ b/security/keys/big_key.c
+@@ -9,6 +9,7 @@
+  * 2 of the Licence, or (at your option) any later version.
+  */
+ 
++#define pr_fmt(fmt) "big_key: "fmt
+ #include <linux/init.h>
+ #include <linux/seq_file.h>
+ #include <linux/file.h>
+@@ -341,44 +342,48 @@ error:
+  */
+ static int __init big_key_init(void)
+ {
+-	return register_key_type(&key_type_big_key);
+-}
+-
+-/*
+- * Initialize big_key crypto and RNG algorithms
+- */
+-static int __init big_key_crypto_init(void)
+-{
+-	int ret = -EINVAL;
++	struct crypto_skcipher *cipher;
++	struct crypto_rng *rng;
++	int ret;
+ 
+-	/* init RNG */
+-	big_key_rng = crypto_alloc_rng(big_key_rng_name, 0, 0);
+-	if (IS_ERR(big_key_rng)) {
+-		big_key_rng = NULL;
+-		return -EFAULT;
++	rng = crypto_alloc_rng(big_key_rng_name, 0, 0);
++	if (IS_ERR(rng)) {
++		pr_err("Can't alloc rng: %ld\n", PTR_ERR(rng));
++		return PTR_ERR(rng);
+ 	}
+ 
++	big_key_rng = rng;
++
+ 	/* seed RNG */
+-	ret = crypto_rng_reset(big_key_rng, NULL, crypto_rng_seedsize(big_key_rng));
+-	if (ret)
+-		goto error;
++	ret = crypto_rng_reset(rng, NULL, crypto_rng_seedsize(rng));
++	if (ret) {
++		pr_err("Can't reset rng: %d\n", ret);
++		goto error_rng;
++	}
+ 
+ 	/* init block cipher */
+-	big_key_skcipher = crypto_alloc_skcipher(big_key_alg_name,
+-						 0, CRYPTO_ALG_ASYNC);
+-	if (IS_ERR(big_key_skcipher)) {
+-		big_key_skcipher = NULL;
+-		ret = -EFAULT;
+-		goto error;
++	cipher = crypto_alloc_skcipher(big_key_alg_name, 0, CRYPTO_ALG_ASYNC);
++	if (IS_ERR(cipher)) {
++		ret = PTR_ERR(cipher);
++		pr_err("Can't alloc crypto: %d\n", ret);
++		goto error_rng;
++	}
++
++	big_key_skcipher = cipher;
++
++	ret = register_key_type(&key_type_big_key);
++	if (ret < 0) {
++		pr_err("Can't register type: %d\n", ret);
++		goto error_cipher;
+ 	}
+ 
+ 	return 0;
+ 
+-error:
++error_cipher:
++	crypto_free_skcipher(big_key_skcipher);
++error_rng:
+ 	crypto_free_rng(big_key_rng);
+-	big_key_rng = NULL;
+ 	return ret;
+ }
+ 
+-device_initcall(big_key_init);
+-late_initcall(big_key_crypto_init);
++late_initcall(big_key_init);
+diff --git a/security/keys/proc.c b/security/keys/proc.c
+index f0611a6368cd..b9f531c9e4fa 100644
+--- a/security/keys/proc.c
++++ b/security/keys/proc.c
+@@ -181,7 +181,7 @@ static int proc_keys_show(struct seq_file *m, void *v)
+ 	struct timespec now;
+ 	unsigned long timo;
+ 	key_ref_t key_ref, skey_ref;
+-	char xbuf[12];
++	char xbuf[16];
+ 	int rc;
+ 
+ 	struct keyring_search_context ctx = {
+diff --git a/sound/core/seq/seq_timer.c b/sound/core/seq/seq_timer.c
+index dcc102813aef..37d9cfbc29f9 100644
+--- a/sound/core/seq/seq_timer.c
++++ b/sound/core/seq/seq_timer.c
+@@ -448,8 +448,8 @@ snd_seq_real_time_t snd_seq_timer_get_cur_time(struct snd_seq_timer *tmr)
+ 
+ 		ktime_get_ts64(&tm);
+ 		tm = timespec64_sub(tm, tmr->last_update);
+-		cur_time.tv_nsec = tm.tv_nsec;
+-		cur_time.tv_sec = tm.tv_sec;
++		cur_time.tv_nsec += tm.tv_nsec;
++		cur_time.tv_sec += tm.tv_sec;
+ 		snd_seq_sanity_real_time(&cur_time);
+ 	}
+ 	spin_unlock_irqrestore(&tmr->lock, flags);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 160c7f713722..487fcbf9473e 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -340,8 +340,7 @@ enum {
+ 
+ /* quirks for Nvidia */
+ #define AZX_DCAPS_PRESET_NVIDIA \
+-	(AZX_DCAPS_NO_MSI | /*AZX_DCAPS_ALIGN_BUFSIZE |*/ \
+-	 AZX_DCAPS_NO_64BIT | AZX_DCAPS_CORBRP_SELF_CLEAR |\
++	(AZX_DCAPS_NO_MSI | AZX_DCAPS_CORBRP_SELF_CLEAR |\
+ 	 AZX_DCAPS_SNOOP_TYPE(NVIDIA))
+ 
+ #define AZX_DCAPS_PRESET_CTHDA \
+@@ -1699,6 +1698,10 @@ static int azx_first_init(struct azx *chip)
+ 		}
+ 	}
+ 
++	/* NVidia hardware normally only supports up to 40 bits of DMA */
++	if (chip->pci->vendor == PCI_VENDOR_ID_NVIDIA)
++		dma_bits = 40;
++
+ 	/* disable 64bit DMA address on some devices */
+ 	if (chip->driver_caps & AZX_DCAPS_NO_64BIT) {
+ 		dev_dbg(card->dev, "Disabling 64bit DMA\n");
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index bd481ac23faf..26e866f65314 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5809,8 +5809,6 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ #define ALC295_STANDARD_PINS \
+ 	{0x12, 0xb7a60130}, \
+ 	{0x14, 0x90170110}, \
+-	{0x17, 0x21014020}, \
+-	{0x18, 0x21a19030}, \
+ 	{0x21, 0x04211020}
+ 
+ #define ALC298_STANDARD_PINS \
+@@ -5857,11 +5855,19 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x1b, 0x02011020},
+ 		{0x21, 0x0221101f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		{0x14, 0x90170110},
++		{0x1b, 0x01011020},
++		{0x21, 0x0221101f}),
++	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		{0x14, 0x90170130},
+ 		{0x1b, 0x01014020},
+ 		{0x21, 0x0221103f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		{0x14, 0x90170130},
++		{0x1b, 0x01011020},
++		{0x21, 0x0221103f}),
++	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		{0x14, 0x90170130},
+ 		{0x1b, 0x02011020},
+ 		{0x21, 0x0221103f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+@@ -6037,7 +6043,13 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		ALC292_STANDARD_PINS,
+ 		{0x13, 0x90a60140}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
+-		ALC295_STANDARD_PINS),
++		ALC295_STANDARD_PINS,
++		{0x17, 0x21014020},
++		{0x18, 0x21a19030}),
++	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
++		ALC295_STANDARD_PINS,
++		{0x17, 0x21014040},
++		{0x18, 0x21a19050}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0298, 0x1028, "Dell", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC298_STANDARD_PINS,
+ 		{0x17, 0x90170110}),
+@@ -6611,6 +6623,7 @@ enum {
+ 	ALC891_FIXUP_HEADSET_MODE,
+ 	ALC891_FIXUP_DELL_MIC_NO_PRESENCE,
+ 	ALC662_FIXUP_ACER_VERITON,
++	ALC892_FIXUP_ASROCK_MOBO,
+ };
+ 
+ static const struct hda_fixup alc662_fixups[] = {
+@@ -6887,6 +6900,16 @@ static const struct hda_fixup alc662_fixups[] = {
+ 			{ }
+ 		}
+ 	},
++	[ALC892_FIXUP_ASROCK_MOBO] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x15, 0x40f000f0 }, /* disabled */
++			{ 0x16, 0x40f000f0 }, /* disabled */
++			{ 0x18, 0x01014011 }, /* LO */
++			{ 0x1a, 0x01014012 }, /* LO */
++			{ }
++		}
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+@@ -6924,6 +6947,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD),
++	SND_PCI_QUIRK(0x1849, 0x5892, "ASRock B150M", ALC892_FIXUP_ASROCK_MOBO),
+ 	SND_PCI_QUIRK(0x19da, 0xa130, "Zotac Z68", ALC662_FIXUP_ZOTAC_Z68),
+ 	SND_PCI_QUIRK(0x1b0a, 0x01b8, "ACER Veriton", ALC662_FIXUP_ACER_VERITON),
+ 	SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T),
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index c60a776e815d..8a59d4782a0f 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2907,6 +2907,23 @@ AU0828_DEVICE(0x2040, 0x7260, "Hauppauge", "HVR-950Q"),
+ AU0828_DEVICE(0x2040, 0x7213, "Hauppauge", "HVR-950Q"),
+ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ 
++/* Syntek STK1160 */
++{
++	.match_flags = USB_DEVICE_ID_MATCH_DEVICE |
++		       USB_DEVICE_ID_MATCH_INT_CLASS |
++		       USB_DEVICE_ID_MATCH_INT_SUBCLASS,
++	.idVendor = 0x05e1,
++	.idProduct = 0x0408,
++	.bInterfaceClass = USB_CLASS_AUDIO,
++	.bInterfaceSubClass = USB_SUBCLASS_AUDIOCONTROL,
++	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.vendor_name = "Syntek",
++		.product_name = "STK1160",
++		.ifnum = QUIRK_ANY_INTERFACE,
++		.type = QUIRK_AUDIO_ALIGN_TRANSFER
++	}
++},
++
+ /* Digidesign Mbox */
+ {
+ 	/* Thanks to Clemens Ladisch <clemens@ladisch.de> */



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-11-15  7:59 Alice Ferrazzi
  0 siblings, 0 replies; 26+ messages in thread
From: Alice Ferrazzi @ 2016-11-15  7:59 UTC (permalink / raw
  To: gentoo-commits

commit:     215deabf1be5d79b5db37aee287bca795cf0805d
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Tue Nov 15 07:58:39 2016 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Tue Nov 15 07:58:39 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=215deabf

Linux patch 4.8.8

 0000_README            |    4 +
 1007_linux-4.8.8.patch | 1846 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1850 insertions(+)

diff --git a/0000_README b/0000_README
index 9cd8633..236529a 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-4.8.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.7
 
+Patch:  1007_linux-4.8.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-4.8.8.patch b/1007_linux-4.8.8.patch
new file mode 100644
index 0000000..7f46629
--- /dev/null
+++ b/1007_linux-4.8.8.patch
@@ -0,0 +1,1846 @@
+diff --git a/Makefile b/Makefile
+index 4d0f28cb481d..8f18daa2c76a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/powerpc/include/asm/checksum.h b/arch/powerpc/include/asm/checksum.h
+index ee655ed1ff1b..1e8fceb308a5 100644
+--- a/arch/powerpc/include/asm/checksum.h
++++ b/arch/powerpc/include/asm/checksum.h
+@@ -53,10 +53,8 @@ static inline __sum16 csum_fold(__wsum sum)
+ 	return (__force __sum16)(~((__force u32)sum + tmp) >> 16);
+ }
+ 
+-static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
+-                                     unsigned short len,
+-                                     unsigned short proto,
+-                                     __wsum sum)
++static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr, __u32 len,
++					__u8 proto, __wsum sum)
+ {
+ #ifdef __powerpc64__
+ 	unsigned long s = (__force u32)sum;
+@@ -83,10 +81,8 @@ static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
+  * computes the checksum of the TCP/UDP pseudo-header
+  * returns a 16-bit checksum, already complemented
+  */
+-static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
+-					unsigned short len,
+-					unsigned short proto,
+-					__wsum sum)
++static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr, __u32 len,
++					__u8 proto, __wsum sum)
+ {
+ 	return csum_fold(csum_tcpudp_nofold(saddr, daddr, len, proto, sum));
+ }
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
+index 9dbfcc0ab577..5ff64afd69f9 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib.h
++++ b/drivers/infiniband/ulp/ipoib/ipoib.h
+@@ -63,6 +63,8 @@ enum ipoib_flush_level {
+ 
+ enum {
+ 	IPOIB_ENCAP_LEN		  = 4,
++	IPOIB_PSEUDO_LEN	  = 20,
++	IPOIB_HARD_LEN		  = IPOIB_ENCAP_LEN + IPOIB_PSEUDO_LEN,
+ 
+ 	IPOIB_UD_HEAD_SIZE	  = IB_GRH_BYTES + IPOIB_ENCAP_LEN,
+ 	IPOIB_UD_RX_SG		  = 2, /* max buffer needed for 4K mtu */
+@@ -134,15 +136,21 @@ struct ipoib_header {
+ 	u16	reserved;
+ };
+ 
+-struct ipoib_cb {
+-	struct qdisc_skb_cb	qdisc_cb;
+-	u8			hwaddr[INFINIBAND_ALEN];
++struct ipoib_pseudo_header {
++	u8	hwaddr[INFINIBAND_ALEN];
+ };
+ 
+-static inline struct ipoib_cb *ipoib_skb_cb(const struct sk_buff *skb)
++static inline void skb_add_pseudo_hdr(struct sk_buff *skb)
+ {
+-	BUILD_BUG_ON(sizeof(skb->cb) < sizeof(struct ipoib_cb));
+-	return (struct ipoib_cb *)skb->cb;
++	char *data = skb_push(skb, IPOIB_PSEUDO_LEN);
++
++	/*
++	 * only the ipoib header is present now, make room for a dummy
++	 * pseudo header and set skb field accordingly
++	 */
++	memset(data, 0, IPOIB_PSEUDO_LEN);
++	skb_reset_mac_header(skb);
++	skb_pull(skb, IPOIB_HARD_LEN);
+ }
+ 
+ /* Used for all multicast joins (broadcast, IPv4 mcast and IPv6 mcast) */
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index 4ad297d3de89..339a1eecdfe3 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -63,6 +63,8 @@ MODULE_PARM_DESC(cm_data_debug_level,
+ #define IPOIB_CM_RX_DELAY       (3 * 256 * HZ)
+ #define IPOIB_CM_RX_UPDATE_MASK (0x3)
+ 
++#define IPOIB_CM_RX_RESERVE     (ALIGN(IPOIB_HARD_LEN, 16) - IPOIB_ENCAP_LEN)
++
+ static struct ib_qp_attr ipoib_cm_err_attr = {
+ 	.qp_state = IB_QPS_ERR
+ };
+@@ -146,15 +148,15 @@ static struct sk_buff *ipoib_cm_alloc_rx_skb(struct net_device *dev,
+ 	struct sk_buff *skb;
+ 	int i;
+ 
+-	skb = dev_alloc_skb(IPOIB_CM_HEAD_SIZE + 12);
++	skb = dev_alloc_skb(ALIGN(IPOIB_CM_HEAD_SIZE + IPOIB_PSEUDO_LEN, 16));
+ 	if (unlikely(!skb))
+ 		return NULL;
+ 
+ 	/*
+-	 * IPoIB adds a 4 byte header. So we need 12 more bytes to align the
++	 * IPoIB adds a IPOIB_ENCAP_LEN byte header, this will align the
+ 	 * IP header to a multiple of 16.
+ 	 */
+-	skb_reserve(skb, 12);
++	skb_reserve(skb, IPOIB_CM_RX_RESERVE);
+ 
+ 	mapping[0] = ib_dma_map_single(priv->ca, skb->data, IPOIB_CM_HEAD_SIZE,
+ 				       DMA_FROM_DEVICE);
+@@ -624,9 +626,9 @@ void ipoib_cm_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
+ 	if (wc->byte_len < IPOIB_CM_COPYBREAK) {
+ 		int dlen = wc->byte_len;
+ 
+-		small_skb = dev_alloc_skb(dlen + 12);
++		small_skb = dev_alloc_skb(dlen + IPOIB_CM_RX_RESERVE);
+ 		if (small_skb) {
+-			skb_reserve(small_skb, 12);
++			skb_reserve(small_skb, IPOIB_CM_RX_RESERVE);
+ 			ib_dma_sync_single_for_cpu(priv->ca, rx_ring[wr_id].mapping[0],
+ 						   dlen, DMA_FROM_DEVICE);
+ 			skb_copy_from_linear_data(skb, small_skb->data, dlen);
+@@ -663,8 +665,7 @@ void ipoib_cm_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
+ 
+ copied:
+ 	skb->protocol = ((struct ipoib_header *) skb->data)->proto;
+-	skb_reset_mac_header(skb);
+-	skb_pull(skb, IPOIB_ENCAP_LEN);
++	skb_add_pseudo_hdr(skb);
+ 
+ 	++dev->stats.rx_packets;
+ 	dev->stats.rx_bytes += skb->len;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+index be11d5d5b8c1..830fecb6934c 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+@@ -128,16 +128,15 @@ static struct sk_buff *ipoib_alloc_rx_skb(struct net_device *dev, int id)
+ 
+ 	buf_size = IPOIB_UD_BUF_SIZE(priv->max_ib_mtu);
+ 
+-	skb = dev_alloc_skb(buf_size + IPOIB_ENCAP_LEN);
++	skb = dev_alloc_skb(buf_size + IPOIB_HARD_LEN);
+ 	if (unlikely(!skb))
+ 		return NULL;
+ 
+ 	/*
+-	 * IB will leave a 40 byte gap for a GRH and IPoIB adds a 4 byte
+-	 * header.  So we need 4 more bytes to get to 48 and align the
+-	 * IP header to a multiple of 16.
++	 * the IP header will be at IPOIP_HARD_LEN + IB_GRH_BYTES, that is
++	 * 64 bytes aligned
+ 	 */
+-	skb_reserve(skb, 4);
++	skb_reserve(skb, sizeof(struct ipoib_pseudo_header));
+ 
+ 	mapping = priv->rx_ring[id].mapping;
+ 	mapping[0] = ib_dma_map_single(priv->ca, skb->data, buf_size,
+@@ -253,8 +252,7 @@ static void ipoib_ib_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
+ 	skb_pull(skb, IB_GRH_BYTES);
+ 
+ 	skb->protocol = ((struct ipoib_header *) skb->data)->proto;
+-	skb_reset_mac_header(skb);
+-	skb_pull(skb, IPOIB_ENCAP_LEN);
++	skb_add_pseudo_hdr(skb);
+ 
+ 	++dev->stats.rx_packets;
+ 	dev->stats.rx_bytes += skb->len;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index cc1c1b062ea5..823a528ef4eb 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -925,9 +925,12 @@ static void neigh_add_path(struct sk_buff *skb, u8 *daddr,
+ 				ipoib_neigh_free(neigh);
+ 				goto err_drop;
+ 			}
+-			if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE)
++			if (skb_queue_len(&neigh->queue) <
++			    IPOIB_MAX_PATH_REC_QUEUE) {
++				/* put pseudoheader back on for next time */
++				skb_push(skb, IPOIB_PSEUDO_LEN);
+ 				__skb_queue_tail(&neigh->queue, skb);
+-			else {
++			} else {
+ 				ipoib_warn(priv, "queue length limit %d. Packet drop.\n",
+ 					   skb_queue_len(&neigh->queue));
+ 				goto err_drop;
+@@ -964,7 +967,7 @@ err_drop:
+ }
+ 
+ static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
+-			     struct ipoib_cb *cb)
++			     struct ipoib_pseudo_header *phdr)
+ {
+ 	struct ipoib_dev_priv *priv = netdev_priv(dev);
+ 	struct ipoib_path *path;
+@@ -972,16 +975,18 @@ static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	spin_lock_irqsave(&priv->lock, flags);
+ 
+-	path = __path_find(dev, cb->hwaddr + 4);
++	path = __path_find(dev, phdr->hwaddr + 4);
+ 	if (!path || !path->valid) {
+ 		int new_path = 0;
+ 
+ 		if (!path) {
+-			path = path_rec_create(dev, cb->hwaddr + 4);
++			path = path_rec_create(dev, phdr->hwaddr + 4);
+ 			new_path = 1;
+ 		}
+ 		if (path) {
+ 			if (skb_queue_len(&path->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
++				/* put pseudoheader back on for next time */
++				skb_push(skb, IPOIB_PSEUDO_LEN);
+ 				__skb_queue_tail(&path->queue, skb);
+ 			} else {
+ 				++dev->stats.tx_dropped;
+@@ -1009,10 +1014,12 @@ static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
+ 			  be16_to_cpu(path->pathrec.dlid));
+ 
+ 		spin_unlock_irqrestore(&priv->lock, flags);
+-		ipoib_send(dev, skb, path->ah, IPOIB_QPN(cb->hwaddr));
++		ipoib_send(dev, skb, path->ah, IPOIB_QPN(phdr->hwaddr));
+ 		return;
+ 	} else if ((path->query || !path_rec_start(dev, path)) &&
+ 		   skb_queue_len(&path->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
++		/* put pseudoheader back on for next time */
++		skb_push(skb, IPOIB_PSEUDO_LEN);
+ 		__skb_queue_tail(&path->queue, skb);
+ 	} else {
+ 		++dev->stats.tx_dropped;
+@@ -1026,13 +1033,15 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ipoib_dev_priv *priv = netdev_priv(dev);
+ 	struct ipoib_neigh *neigh;
+-	struct ipoib_cb *cb = ipoib_skb_cb(skb);
++	struct ipoib_pseudo_header *phdr;
+ 	struct ipoib_header *header;
+ 	unsigned long flags;
+ 
++	phdr = (struct ipoib_pseudo_header *) skb->data;
++	skb_pull(skb, sizeof(*phdr));
+ 	header = (struct ipoib_header *) skb->data;
+ 
+-	if (unlikely(cb->hwaddr[4] == 0xff)) {
++	if (unlikely(phdr->hwaddr[4] == 0xff)) {
+ 		/* multicast, arrange "if" according to probability */
+ 		if ((header->proto != htons(ETH_P_IP)) &&
+ 		    (header->proto != htons(ETH_P_IPV6)) &&
+@@ -1045,13 +1054,13 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			return NETDEV_TX_OK;
+ 		}
+ 		/* Add in the P_Key for multicast*/
+-		cb->hwaddr[8] = (priv->pkey >> 8) & 0xff;
+-		cb->hwaddr[9] = priv->pkey & 0xff;
++		phdr->hwaddr[8] = (priv->pkey >> 8) & 0xff;
++		phdr->hwaddr[9] = priv->pkey & 0xff;
+ 
+-		neigh = ipoib_neigh_get(dev, cb->hwaddr);
++		neigh = ipoib_neigh_get(dev, phdr->hwaddr);
+ 		if (likely(neigh))
+ 			goto send_using_neigh;
+-		ipoib_mcast_send(dev, cb->hwaddr, skb);
++		ipoib_mcast_send(dev, phdr->hwaddr, skb);
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+@@ -1060,16 +1069,16 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	case htons(ETH_P_IP):
+ 	case htons(ETH_P_IPV6):
+ 	case htons(ETH_P_TIPC):
+-		neigh = ipoib_neigh_get(dev, cb->hwaddr);
++		neigh = ipoib_neigh_get(dev, phdr->hwaddr);
+ 		if (unlikely(!neigh)) {
+-			neigh_add_path(skb, cb->hwaddr, dev);
++			neigh_add_path(skb, phdr->hwaddr, dev);
+ 			return NETDEV_TX_OK;
+ 		}
+ 		break;
+ 	case htons(ETH_P_ARP):
+ 	case htons(ETH_P_RARP):
+ 		/* for unicast ARP and RARP should always perform path find */
+-		unicast_arp_send(skb, dev, cb);
++		unicast_arp_send(skb, dev, phdr);
+ 		return NETDEV_TX_OK;
+ 	default:
+ 		/* ethertype not supported by IPoIB */
+@@ -1086,11 +1095,13 @@ send_using_neigh:
+ 			goto unref;
+ 		}
+ 	} else if (neigh->ah) {
+-		ipoib_send(dev, skb, neigh->ah, IPOIB_QPN(cb->hwaddr));
++		ipoib_send(dev, skb, neigh->ah, IPOIB_QPN(phdr->hwaddr));
+ 		goto unref;
+ 	}
+ 
+ 	if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
++		/* put pseudoheader back on for next time */
++		skb_push(skb, sizeof(*phdr));
+ 		spin_lock_irqsave(&priv->lock, flags);
+ 		__skb_queue_tail(&neigh->queue, skb);
+ 		spin_unlock_irqrestore(&priv->lock, flags);
+@@ -1122,8 +1133,8 @@ static int ipoib_hard_header(struct sk_buff *skb,
+ 			     unsigned short type,
+ 			     const void *daddr, const void *saddr, unsigned len)
+ {
++	struct ipoib_pseudo_header *phdr;
+ 	struct ipoib_header *header;
+-	struct ipoib_cb *cb = ipoib_skb_cb(skb);
+ 
+ 	header = (struct ipoib_header *) skb_push(skb, sizeof *header);
+ 
+@@ -1132,12 +1143,13 @@ static int ipoib_hard_header(struct sk_buff *skb,
+ 
+ 	/*
+ 	 * we don't rely on dst_entry structure,  always stuff the
+-	 * destination address into skb->cb so we can figure out where
++	 * destination address into skb hard header so we can figure out where
+ 	 * to send the packet later.
+ 	 */
+-	memcpy(cb->hwaddr, daddr, INFINIBAND_ALEN);
++	phdr = (struct ipoib_pseudo_header *) skb_push(skb, sizeof(*phdr));
++	memcpy(phdr->hwaddr, daddr, INFINIBAND_ALEN);
+ 
+-	return sizeof *header;
++	return IPOIB_HARD_LEN;
+ }
+ 
+ static void ipoib_set_mcast_list(struct net_device *dev)
+@@ -1759,7 +1771,7 @@ void ipoib_setup(struct net_device *dev)
+ 
+ 	dev->flags		|= IFF_BROADCAST | IFF_MULTICAST;
+ 
+-	dev->hard_header_len	 = IPOIB_ENCAP_LEN;
++	dev->hard_header_len	 = IPOIB_HARD_LEN;
+ 	dev->addr_len		 = INFINIBAND_ALEN;
+ 	dev->type		 = ARPHRD_INFINIBAND;
+ 	dev->tx_queue_len	 = ipoib_sendq_size * 2;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+index d3394b6add24..1909dd252c94 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+@@ -796,9 +796,11 @@ void ipoib_mcast_send(struct net_device *dev, u8 *daddr, struct sk_buff *skb)
+ 			__ipoib_mcast_add(dev, mcast);
+ 			list_add_tail(&mcast->list, &priv->multicast_list);
+ 		}
+-		if (skb_queue_len(&mcast->pkt_queue) < IPOIB_MAX_MCAST_QUEUE)
++		if (skb_queue_len(&mcast->pkt_queue) < IPOIB_MAX_MCAST_QUEUE) {
++			/* put pseudoheader back on for next time */
++			skb_push(skb, sizeof(struct ipoib_pseudo_header));
+ 			skb_queue_tail(&mcast->pkt_queue, skb);
+-		else {
++		} else {
+ 			++dev->stats.tx_dropped;
+ 			dev_kfree_skb_any(skb);
+ 		}
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 692ee248e486..3474de576dde 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -913,13 +913,11 @@ fec_restart(struct net_device *ndev)
+ 	 * enet-mac reset will reset mac address registers too,
+ 	 * so need to reconfigure it.
+ 	 */
+-	if (fep->quirks & FEC_QUIRK_ENET_MAC) {
+-		memcpy(&temp_mac, ndev->dev_addr, ETH_ALEN);
+-		writel((__force u32)cpu_to_be32(temp_mac[0]),
+-		       fep->hwp + FEC_ADDR_LOW);
+-		writel((__force u32)cpu_to_be32(temp_mac[1]),
+-		       fep->hwp + FEC_ADDR_HIGH);
+-	}
++	memcpy(&temp_mac, ndev->dev_addr, ETH_ALEN);
++	writel((__force u32)cpu_to_be32(temp_mac[0]),
++	       fep->hwp + FEC_ADDR_LOW);
++	writel((__force u32)cpu_to_be32(temp_mac[1]),
++	       fep->hwp + FEC_ADDR_HIGH);
+ 
+ 	/* Clear any outstanding interrupt. */
+ 	writel(0xffffffff, fep->hwp + FEC_IEVENT);
+@@ -1432,14 +1430,14 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
+ 		skb_put(skb, pkt_len - 4);
+ 		data = skb->data;
+ 
++		if (!is_copybreak && need_swap)
++			swap_buffer(data, pkt_len);
++
+ #if !defined(CONFIG_M5272)
+ 		if (fep->quirks & FEC_QUIRK_HAS_RACC)
+ 			data = skb_pull_inline(skb, 2);
+ #endif
+ 
+-		if (!is_copybreak && need_swap)
+-			swap_buffer(data, pkt_len);
+-
+ 		/* Extract the enhanced buffer descriptor */
+ 		ebdp = NULL;
+ 		if (fep->bufdesc_ex)
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
+index 132cea655920..e3be7e44ff51 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
+@@ -127,7 +127,15 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq,
+ 		/* For TX we use the same irq per
+ 		ring we assigned for the RX    */
+ 		struct mlx4_en_cq *rx_cq;
+-
++		int xdp_index;
++
++		/* The xdp tx irq must align with the rx ring that forwards to
++		 * it, so reindex these from 0. This should only happen when
++		 * tx_ring_num is not a multiple of rx_ring_num.
++		 */
++		xdp_index = (priv->xdp_ring_num - priv->tx_ring_num) + cq_idx;
++		if (xdp_index >= 0)
++			cq_idx = xdp_index;
+ 		cq_idx = cq_idx % priv->rx_ring_num;
+ 		rx_cq = priv->rx_cq[cq_idx];
+ 		cq->vector = rx_cq->vector;
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 3c20e87bb761..16af1ce99233 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -453,7 +453,7 @@ static struct sk_buff **geneve_gro_receive(struct sock *sk,
+ 
+ 	skb_gro_pull(skb, gh_len);
+ 	skb_gro_postpull_rcsum(skb, gh, gh_len);
+-	pp = ptype->callbacks.gro_receive(head, skb);
++	pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);
+ 	flush = 0;
+ 
+ out_unlock:
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 3ba29fc80d05..c4d9653cae66 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -624,15 +624,18 @@ static struct sk_buff *netvsc_alloc_recv_skb(struct net_device *net,
+ 	       packet->total_data_buflen);
+ 
+ 	skb->protocol = eth_type_trans(skb, net);
+-	if (csum_info) {
+-		/* We only look at the IP checksum here.
+-		 * Should we be dropping the packet if checksum
+-		 * failed? How do we deal with other checksums - TCP/UDP?
+-		 */
+-		if (csum_info->receive.ip_checksum_succeeded)
++
++	/* skb is already created with CHECKSUM_NONE */
++	skb_checksum_none_assert(skb);
++
++	/*
++	 * In Linux, the IP checksum is always checked.
++	 * Do L4 checksum offload if enabled and present.
++	 */
++	if (csum_info && (net->features & NETIF_F_RXCSUM)) {
++		if (csum_info->receive.tcp_checksum_succeeded ||
++		    csum_info->receive.udp_checksum_succeeded)
+ 			skb->ip_summed = CHECKSUM_UNNECESSARY;
+-		else
+-			skb->ip_summed = CHECKSUM_NONE;
+ 	}
+ 
+ 	if (vlan_tci & VLAN_TAG_PRESENT)
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 351e701eb043..b72ddc61eff8 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -397,6 +397,14 @@ static struct macsec_cb *macsec_skb_cb(struct sk_buff *skb)
+ #define DEFAULT_ENCRYPT false
+ #define DEFAULT_ENCODING_SA 0
+ 
++static bool send_sci(const struct macsec_secy *secy)
++{
++	const struct macsec_tx_sc *tx_sc = &secy->tx_sc;
++
++	return tx_sc->send_sci ||
++		(secy->n_rx_sc > 1 && !tx_sc->end_station && !tx_sc->scb);
++}
++
+ static sci_t make_sci(u8 *addr, __be16 port)
+ {
+ 	sci_t sci;
+@@ -437,15 +445,15 @@ static unsigned int macsec_extra_len(bool sci_present)
+ 
+ /* Fill SecTAG according to IEEE 802.1AE-2006 10.5.3 */
+ static void macsec_fill_sectag(struct macsec_eth_header *h,
+-			       const struct macsec_secy *secy, u32 pn)
++			       const struct macsec_secy *secy, u32 pn,
++			       bool sci_present)
+ {
+ 	const struct macsec_tx_sc *tx_sc = &secy->tx_sc;
+ 
+-	memset(&h->tci_an, 0, macsec_sectag_len(tx_sc->send_sci));
++	memset(&h->tci_an, 0, macsec_sectag_len(sci_present));
+ 	h->eth.h_proto = htons(ETH_P_MACSEC);
+ 
+-	if (tx_sc->send_sci ||
+-	    (secy->n_rx_sc > 1 && !tx_sc->end_station && !tx_sc->scb)) {
++	if (sci_present) {
+ 		h->tci_an |= MACSEC_TCI_SC;
+ 		memcpy(&h->secure_channel_id, &secy->sci,
+ 		       sizeof(h->secure_channel_id));
+@@ -650,6 +658,7 @@ static struct sk_buff *macsec_encrypt(struct sk_buff *skb,
+ 	struct macsec_tx_sc *tx_sc;
+ 	struct macsec_tx_sa *tx_sa;
+ 	struct macsec_dev *macsec = macsec_priv(dev);
++	bool sci_present;
+ 	u32 pn;
+ 
+ 	secy = &macsec->secy;
+@@ -687,7 +696,8 @@ static struct sk_buff *macsec_encrypt(struct sk_buff *skb,
+ 
+ 	unprotected_len = skb->len;
+ 	eth = eth_hdr(skb);
+-	hh = (struct macsec_eth_header *)skb_push(skb, macsec_extra_len(tx_sc->send_sci));
++	sci_present = send_sci(secy);
++	hh = (struct macsec_eth_header *)skb_push(skb, macsec_extra_len(sci_present));
+ 	memmove(hh, eth, 2 * ETH_ALEN);
+ 
+ 	pn = tx_sa_update_pn(tx_sa, secy);
+@@ -696,7 +706,7 @@ static struct sk_buff *macsec_encrypt(struct sk_buff *skb,
+ 		kfree_skb(skb);
+ 		return ERR_PTR(-ENOLINK);
+ 	}
+-	macsec_fill_sectag(hh, secy, pn);
++	macsec_fill_sectag(hh, secy, pn, sci_present);
+ 	macsec_set_shortlen(hh, unprotected_len - 2 * ETH_ALEN);
+ 
+ 	skb_put(skb, secy->icv_len);
+@@ -726,10 +736,10 @@ static struct sk_buff *macsec_encrypt(struct sk_buff *skb,
+ 	skb_to_sgvec(skb, sg, 0, skb->len);
+ 
+ 	if (tx_sc->encrypt) {
+-		int len = skb->len - macsec_hdr_len(tx_sc->send_sci) -
++		int len = skb->len - macsec_hdr_len(sci_present) -
+ 			  secy->icv_len;
+ 		aead_request_set_crypt(req, sg, sg, len, iv);
+-		aead_request_set_ad(req, macsec_hdr_len(tx_sc->send_sci));
++		aead_request_set_ad(req, macsec_hdr_len(sci_present));
+ 	} else {
+ 		aead_request_set_crypt(req, sg, sg, 0, iv);
+ 		aead_request_set_ad(req, skb->len - secy->icv_len);
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index c6f66832a1a6..f424b867f73e 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -608,6 +608,21 @@ void phy_start_machine(struct phy_device *phydev)
+ }
+ 
+ /**
++ * phy_trigger_machine - trigger the state machine to run
++ *
++ * @phydev: the phy_device struct
++ *
++ * Description: There has been a change in state which requires that the
++ *   state machine runs.
++ */
++
++static void phy_trigger_machine(struct phy_device *phydev)
++{
++	cancel_delayed_work_sync(&phydev->state_queue);
++	queue_delayed_work(system_power_efficient_wq, &phydev->state_queue, 0);
++}
++
++/**
+  * phy_stop_machine - stop the PHY state machine tracking
+  * @phydev: target phy_device struct
+  *
+@@ -639,6 +654,8 @@ static void phy_error(struct phy_device *phydev)
+ 	mutex_lock(&phydev->lock);
+ 	phydev->state = PHY_HALTED;
+ 	mutex_unlock(&phydev->lock);
++
++	phy_trigger_machine(phydev);
+ }
+ 
+ /**
+@@ -800,8 +817,7 @@ void phy_change(struct work_struct *work)
+ 	}
+ 
+ 	/* reschedule state queue work to run as soon as possible */
+-	cancel_delayed_work_sync(&phydev->state_queue);
+-	queue_delayed_work(system_power_efficient_wq, &phydev->state_queue, 0);
++	phy_trigger_machine(phydev);
+ 	return;
+ 
+ ignore:
+@@ -890,6 +906,8 @@ void phy_start(struct phy_device *phydev)
+ 	/* if phy was suspended, bring the physical link up again */
+ 	if (do_resume)
+ 		phy_resume(phydev);
++
++	phy_trigger_machine(phydev);
+ }
+ EXPORT_SYMBOL(phy_start);
+ 
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 6e65832051d6..5ae664c02528 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -584,7 +584,7 @@ static struct sk_buff **vxlan_gro_receive(struct sock *sk,
+ 		}
+ 	}
+ 
+-	pp = eth_gro_receive(head, skb);
++	pp = call_gro_receive(eth_gro_receive, head, skb);
+ 	flush = 0;
+ 
+ out:
+diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
+index d637c933c8a9..58a97d420572 100644
+--- a/drivers/ptp/ptp_chardev.c
++++ b/drivers/ptp/ptp_chardev.c
+@@ -193,6 +193,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
+ 		if (err)
+ 			break;
+ 
++		memset(&precise_offset, 0, sizeof(precise_offset));
+ 		ts = ktime_to_timespec64(xtstamp.device);
+ 		precise_offset.device.sec = ts.tv_sec;
+ 		precise_offset.device.nsec = ts.tv_nsec;
+diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
+index ca86c885dfaa..3aaea713bf37 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.h
++++ b/drivers/scsi/megaraid/megaraid_sas.h
+@@ -2233,7 +2233,7 @@ struct megasas_instance_template {
+ };
+ 
+ #define MEGASAS_IS_LOGICAL(scp)						\
+-	(scp->device->channel < MEGASAS_MAX_PD_CHANNELS) ? 0 : 1
++	((scp->device->channel < MEGASAS_MAX_PD_CHANNELS) ? 0 : 1)
+ 
+ #define MEGASAS_DEV_INDEX(scp)						\
+ 	(((scp->device->channel % 2) * MEGASAS_MAX_DEV_PER_CHANNEL) +	\
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index c1ed25adb17e..71e489937c6f 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -1713,16 +1713,13 @@ megasas_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+ 		goto out_done;
+ 	}
+ 
+-	switch (scmd->cmnd[0]) {
+-	case SYNCHRONIZE_CACHE:
+-		/*
+-		 * FW takes care of flush cache on its own
+-		 * No need to send it down
+-		 */
++	/*
++	 * FW takes care of flush cache on its own for Virtual Disk.
++	 * No need to send it down for VD. For JBOD send SYNCHRONIZE_CACHE to FW.
++	 */
++	if ((scmd->cmnd[0] == SYNCHRONIZE_CACHE) && MEGASAS_IS_LOGICAL(scmd)) {
+ 		scmd->result = DID_OK << 16;
+ 		goto out_done;
+-	default:
+-		break;
+ 	}
+ 
+ 	return instance->instancet->build_and_issue_cmd(instance, scmd);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 6443cfba7b55..dc3b5962d087 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -789,6 +789,7 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+ 		req->trb = trb;
+ 		req->trb_dma = dwc3_trb_dma_offset(dep, trb);
+ 		req->first_trb_index = dep->trb_enqueue;
++		dep->queued_requests++;
+ 	}
+ 
+ 	dwc3_ep_inc_enq(dep);
+@@ -841,8 +842,6 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+ 
+ 	trb->ctrl |= DWC3_TRB_CTRL_HWO;
+ 
+-	dep->queued_requests++;
+-
+ 	trace_dwc3_prepare_trb(dep, trb);
+ }
+ 
+@@ -1963,7 +1962,9 @@ static int __dwc3_cleanup_done_trbs(struct dwc3 *dwc, struct dwc3_ep *dep,
+ 	unsigned int		s_pkt = 0;
+ 	unsigned int		trb_status;
+ 
+-	dep->queued_requests--;
++	if (req->trb == trb)
++		dep->queued_requests--;
++
+ 	trace_dwc3_complete_trb(dep, trb);
+ 
+ 	/*
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index e8d79d4ebcfe..e942c67ea230 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2154,7 +2154,10 @@ struct napi_gro_cb {
+ 	/* Used to determine if flush_id can be ignored */
+ 	u8	is_atomic:1;
+ 
+-	/* 5 bit hole */
++	/* Number of gro_receive callbacks this packet already went through */
++	u8 recursion_counter:4;
++
++	/* 1 bit hole */
+ 
+ 	/* used to support CHECKSUM_COMPLETE for tunneling protocols */
+ 	__wsum	csum;
+@@ -2165,6 +2168,40 @@ struct napi_gro_cb {
+ 
+ #define NAPI_GRO_CB(skb) ((struct napi_gro_cb *)(skb)->cb)
+ 
++#define GRO_RECURSION_LIMIT 15
++static inline int gro_recursion_inc_test(struct sk_buff *skb)
++{
++	return ++NAPI_GRO_CB(skb)->recursion_counter == GRO_RECURSION_LIMIT;
++}
++
++typedef struct sk_buff **(*gro_receive_t)(struct sk_buff **, struct sk_buff *);
++static inline struct sk_buff **call_gro_receive(gro_receive_t cb,
++						struct sk_buff **head,
++						struct sk_buff *skb)
++{
++	if (unlikely(gro_recursion_inc_test(skb))) {
++		NAPI_GRO_CB(skb)->flush |= 1;
++		return NULL;
++	}
++
++	return cb(head, skb);
++}
++
++typedef struct sk_buff **(*gro_receive_sk_t)(struct sock *, struct sk_buff **,
++					     struct sk_buff *);
++static inline struct sk_buff **call_gro_receive_sk(gro_receive_sk_t cb,
++						   struct sock *sk,
++						   struct sk_buff **head,
++						   struct sk_buff *skb)
++{
++	if (unlikely(gro_recursion_inc_test(skb))) {
++		NAPI_GRO_CB(skb)->flush |= 1;
++		return NULL;
++	}
++
++	return cb(sk, head, skb);
++}
++
+ struct packet_type {
+ 	__be16			type;	/* This is really htons(ether_type). */
+ 	struct net_device	*dev;	/* NULL is wildcarded here	     */
+@@ -3862,7 +3899,7 @@ struct net_device *netdev_all_lower_get_next_rcu(struct net_device *dev,
+ 	     ldev = netdev_all_lower_get_next(dev, &(iter)))
+ 
+ #define netdev_for_each_all_lower_dev_rcu(dev, ldev, iter) \
+-	for (iter = (dev)->all_adj_list.lower.next, \
++	for (iter = &(dev)->all_adj_list.lower, \
+ 	     ldev = netdev_all_lower_get_next_rcu(dev, &(iter)); \
+ 	     ldev; \
+ 	     ldev = netdev_all_lower_get_next_rcu(dev, &(iter)))
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 9742b92dc933..156b0c11b524 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -549,7 +549,7 @@ int ip_options_rcv_srr(struct sk_buff *skb);
+  */
+ 
+ void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb);
+-void ip_cmsg_recv_offset(struct msghdr *msg, struct sk_buff *skb, int offset);
++void ip_cmsg_recv_offset(struct msghdr *msg, struct sk_buff *skb, int tlen, int offset);
+ int ip_cmsg_send(struct sock *sk, struct msghdr *msg,
+ 		 struct ipcm_cookie *ipc, bool allow_ipv6);
+ int ip_setsockopt(struct sock *sk, int level, int optname, char __user *optval,
+@@ -571,7 +571,7 @@ void ip_local_error(struct sock *sk, int err, __be32 daddr, __be16 dport,
+ 
+ static inline void ip_cmsg_recv(struct msghdr *msg, struct sk_buff *skb)
+ {
+-	ip_cmsg_recv_offset(msg, skb, 0);
++	ip_cmsg_recv_offset(msg, skb, 0, 0);
+ }
+ 
+ bool icmp_global_allow(void);
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
+index d97305d0e71f..0a2d2701285d 100644
+--- a/include/net/ip6_route.h
++++ b/include/net/ip6_route.h
+@@ -32,6 +32,7 @@ struct route_info {
+ #define RT6_LOOKUP_F_SRCPREF_TMP	0x00000008
+ #define RT6_LOOKUP_F_SRCPREF_PUBLIC	0x00000010
+ #define RT6_LOOKUP_F_SRCPREF_COA	0x00000020
++#define RT6_LOOKUP_F_IGNORE_LINKSTATE	0x00000040
+ 
+ /* We do not (yet ?) support IPv6 jumbograms (RFC 2675)
+  * Unlike IPv4, hdr->seg_len doesn't include the IPv6 header
+diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h
+index 262f0379d83a..5a78be518101 100644
+--- a/include/uapi/linux/rtnetlink.h
++++ b/include/uapi/linux/rtnetlink.h
+@@ -350,7 +350,7 @@ struct rtnexthop {
+ #define RTNH_F_OFFLOAD		8	/* offloaded route */
+ #define RTNH_F_LINKDOWN		16	/* carrier-down on nexthop */
+ 
+-#define RTNH_COMPARE_MASK	(RTNH_F_DEAD | RTNH_F_LINKDOWN)
++#define RTNH_COMPARE_MASK	(RTNH_F_DEAD | RTNH_F_LINKDOWN | RTNH_F_OFFLOAD)
+ 
+ /* Macros to handle hexthops */
+ 
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index 8de138d3306b..f2531ad66b68 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -664,7 +664,7 @@ static struct sk_buff **vlan_gro_receive(struct sk_buff **head,
+ 
+ 	skb_gro_pull(skb, sizeof(*vhdr));
+ 	skb_gro_postpull_rcsum(skb, vhdr, sizeof(*vhdr));
+-	pp = ptype->callbacks.gro_receive(head, skb);
++	pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);
+ 
+ out_unlock:
+ 	rcu_read_unlock();
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index c5fea9393946..2136e45f5277 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -972,13 +972,12 @@ static void br_multicast_enable(struct bridge_mcast_own_query *query)
+ 		mod_timer(&query->timer, jiffies);
+ }
+ 
+-void br_multicast_enable_port(struct net_bridge_port *port)
++static void __br_multicast_enable_port(struct net_bridge_port *port)
+ {
+ 	struct net_bridge *br = port->br;
+ 
+-	spin_lock(&br->multicast_lock);
+ 	if (br->multicast_disabled || !netif_running(br->dev))
+-		goto out;
++		return;
+ 
+ 	br_multicast_enable(&port->ip4_own_query);
+ #if IS_ENABLED(CONFIG_IPV6)
+@@ -987,8 +986,14 @@ void br_multicast_enable_port(struct net_bridge_port *port)
+ 	if (port->multicast_router == MDB_RTR_TYPE_PERM &&
+ 	    hlist_unhashed(&port->rlist))
+ 		br_multicast_add_router(br, port);
++}
+ 
+-out:
++void br_multicast_enable_port(struct net_bridge_port *port)
++{
++	struct net_bridge *br = port->br;
++
++	spin_lock(&br->multicast_lock);
++	__br_multicast_enable_port(port);
+ 	spin_unlock(&br->multicast_lock);
+ }
+ 
+@@ -1994,8 +1999,9 @@ static void br_multicast_start_querier(struct net_bridge *br,
+ 
+ int br_multicast_toggle(struct net_bridge *br, unsigned long val)
+ {
+-	int err = 0;
+ 	struct net_bridge_mdb_htable *mdb;
++	struct net_bridge_port *port;
++	int err = 0;
+ 
+ 	spin_lock_bh(&br->multicast_lock);
+ 	if (br->multicast_disabled == !val)
+@@ -2023,10 +2029,9 @@ rollback:
+ 			goto rollback;
+ 	}
+ 
+-	br_multicast_start_querier(br, &br->ip4_own_query);
+-#if IS_ENABLED(CONFIG_IPV6)
+-	br_multicast_start_querier(br, &br->ip6_own_query);
+-#endif
++	br_multicast_open(br);
++	list_for_each_entry(port, &br->port_list, list)
++		__br_multicast_enable_port(port);
+ 
+ unlock:
+ 	spin_unlock_bh(&br->multicast_lock);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index ea6312057a71..44b3ba462ba1 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3035,6 +3035,7 @@ struct sk_buff *validate_xmit_skb_list(struct sk_buff *skb, struct net_device *d
+ 	}
+ 	return head;
+ }
++EXPORT_SYMBOL_GPL(validate_xmit_skb_list);
+ 
+ static void qdisc_pkt_len_init(struct sk_buff *skb)
+ {
+@@ -4496,6 +4497,7 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
+ 		NAPI_GRO_CB(skb)->flush = 0;
+ 		NAPI_GRO_CB(skb)->free = 0;
+ 		NAPI_GRO_CB(skb)->encap_mark = 0;
++		NAPI_GRO_CB(skb)->recursion_counter = 0;
+ 		NAPI_GRO_CB(skb)->is_fou = 0;
+ 		NAPI_GRO_CB(skb)->is_atomic = 1;
+ 		NAPI_GRO_CB(skb)->gro_remcsum_start = 0;
+@@ -5500,10 +5502,14 @@ struct net_device *netdev_all_lower_get_next_rcu(struct net_device *dev,
+ {
+ 	struct netdev_adjacent *lower;
+ 
+-	lower = list_first_or_null_rcu(&dev->all_adj_list.lower,
+-				       struct netdev_adjacent, list);
++	lower = list_entry_rcu((*iter)->next, struct netdev_adjacent, list);
++
++	if (&lower->list == &dev->all_adj_list.lower)
++		return NULL;
++
++	*iter = &lower->list;
+ 
+-	return lower ? lower->dev : NULL;
++	return lower->dev;
+ }
+ EXPORT_SYMBOL(netdev_all_lower_get_next_rcu);
+ 
+@@ -5578,6 +5584,7 @@ static inline bool netdev_adjacent_is_neigh_list(struct net_device *dev,
+ 
+ static int __netdev_adjacent_dev_insert(struct net_device *dev,
+ 					struct net_device *adj_dev,
++					u16 ref_nr,
+ 					struct list_head *dev_list,
+ 					void *private, bool master)
+ {
+@@ -5587,7 +5594,7 @@ static int __netdev_adjacent_dev_insert(struct net_device *dev,
+ 	adj = __netdev_find_adj(adj_dev, dev_list);
+ 
+ 	if (adj) {
+-		adj->ref_nr++;
++		adj->ref_nr += ref_nr;
+ 		return 0;
+ 	}
+ 
+@@ -5597,7 +5604,7 @@ static int __netdev_adjacent_dev_insert(struct net_device *dev,
+ 
+ 	adj->dev = adj_dev;
+ 	adj->master = master;
+-	adj->ref_nr = 1;
++	adj->ref_nr = ref_nr;
+ 	adj->private = private;
+ 	dev_hold(adj_dev);
+ 
+@@ -5636,6 +5643,7 @@ free_adj:
+ 
+ static void __netdev_adjacent_dev_remove(struct net_device *dev,
+ 					 struct net_device *adj_dev,
++					 u16 ref_nr,
+ 					 struct list_head *dev_list)
+ {
+ 	struct netdev_adjacent *adj;
+@@ -5648,10 +5656,10 @@ static void __netdev_adjacent_dev_remove(struct net_device *dev,
+ 		BUG();
+ 	}
+ 
+-	if (adj->ref_nr > 1) {
+-		pr_debug("%s to %s ref_nr-- = %d\n", dev->name, adj_dev->name,
+-			 adj->ref_nr-1);
+-		adj->ref_nr--;
++	if (adj->ref_nr > ref_nr) {
++		pr_debug("%s to %s ref_nr-%d = %d\n", dev->name, adj_dev->name,
++			 ref_nr, adj->ref_nr-ref_nr);
++		adj->ref_nr -= ref_nr;
+ 		return;
+ 	}
+ 
+@@ -5670,21 +5678,22 @@ static void __netdev_adjacent_dev_remove(struct net_device *dev,
+ 
+ static int __netdev_adjacent_dev_link_lists(struct net_device *dev,
+ 					    struct net_device *upper_dev,
++					    u16 ref_nr,
+ 					    struct list_head *up_list,
+ 					    struct list_head *down_list,
+ 					    void *private, bool master)
+ {
+ 	int ret;
+ 
+-	ret = __netdev_adjacent_dev_insert(dev, upper_dev, up_list, private,
+-					   master);
++	ret = __netdev_adjacent_dev_insert(dev, upper_dev, ref_nr, up_list,
++					   private, master);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = __netdev_adjacent_dev_insert(upper_dev, dev, down_list, private,
+-					   false);
++	ret = __netdev_adjacent_dev_insert(upper_dev, dev, ref_nr, down_list,
++					   private, false);
+ 	if (ret) {
+-		__netdev_adjacent_dev_remove(dev, upper_dev, up_list);
++		__netdev_adjacent_dev_remove(dev, upper_dev, ref_nr, up_list);
+ 		return ret;
+ 	}
+ 
+@@ -5692,9 +5701,10 @@ static int __netdev_adjacent_dev_link_lists(struct net_device *dev,
+ }
+ 
+ static int __netdev_adjacent_dev_link(struct net_device *dev,
+-				      struct net_device *upper_dev)
++				      struct net_device *upper_dev,
++				      u16 ref_nr)
+ {
+-	return __netdev_adjacent_dev_link_lists(dev, upper_dev,
++	return __netdev_adjacent_dev_link_lists(dev, upper_dev, ref_nr,
+ 						&dev->all_adj_list.upper,
+ 						&upper_dev->all_adj_list.lower,
+ 						NULL, false);
+@@ -5702,17 +5712,19 @@ static int __netdev_adjacent_dev_link(struct net_device *dev,
+ 
+ static void __netdev_adjacent_dev_unlink_lists(struct net_device *dev,
+ 					       struct net_device *upper_dev,
++					       u16 ref_nr,
+ 					       struct list_head *up_list,
+ 					       struct list_head *down_list)
+ {
+-	__netdev_adjacent_dev_remove(dev, upper_dev, up_list);
+-	__netdev_adjacent_dev_remove(upper_dev, dev, down_list);
++	__netdev_adjacent_dev_remove(dev, upper_dev, ref_nr, up_list);
++	__netdev_adjacent_dev_remove(upper_dev, dev, ref_nr, down_list);
+ }
+ 
+ static void __netdev_adjacent_dev_unlink(struct net_device *dev,
+-					 struct net_device *upper_dev)
++					 struct net_device *upper_dev,
++					 u16 ref_nr)
+ {
+-	__netdev_adjacent_dev_unlink_lists(dev, upper_dev,
++	__netdev_adjacent_dev_unlink_lists(dev, upper_dev, ref_nr,
+ 					   &dev->all_adj_list.upper,
+ 					   &upper_dev->all_adj_list.lower);
+ }
+@@ -5721,17 +5733,17 @@ static int __netdev_adjacent_dev_link_neighbour(struct net_device *dev,
+ 						struct net_device *upper_dev,
+ 						void *private, bool master)
+ {
+-	int ret = __netdev_adjacent_dev_link(dev, upper_dev);
++	int ret = __netdev_adjacent_dev_link(dev, upper_dev, 1);
+ 
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = __netdev_adjacent_dev_link_lists(dev, upper_dev,
++	ret = __netdev_adjacent_dev_link_lists(dev, upper_dev, 1,
+ 					       &dev->adj_list.upper,
+ 					       &upper_dev->adj_list.lower,
+ 					       private, master);
+ 	if (ret) {
+-		__netdev_adjacent_dev_unlink(dev, upper_dev);
++		__netdev_adjacent_dev_unlink(dev, upper_dev, 1);
+ 		return ret;
+ 	}
+ 
+@@ -5741,8 +5753,8 @@ static int __netdev_adjacent_dev_link_neighbour(struct net_device *dev,
+ static void __netdev_adjacent_dev_unlink_neighbour(struct net_device *dev,
+ 						   struct net_device *upper_dev)
+ {
+-	__netdev_adjacent_dev_unlink(dev, upper_dev);
+-	__netdev_adjacent_dev_unlink_lists(dev, upper_dev,
++	__netdev_adjacent_dev_unlink(dev, upper_dev, 1);
++	__netdev_adjacent_dev_unlink_lists(dev, upper_dev, 1,
+ 					   &dev->adj_list.upper,
+ 					   &upper_dev->adj_list.lower);
+ }
+@@ -5795,7 +5807,7 @@ static int __netdev_upper_dev_link(struct net_device *dev,
+ 		list_for_each_entry(j, &upper_dev->all_adj_list.upper, list) {
+ 			pr_debug("Interlinking %s with %s, non-neighbour\n",
+ 				 i->dev->name, j->dev->name);
+-			ret = __netdev_adjacent_dev_link(i->dev, j->dev);
++			ret = __netdev_adjacent_dev_link(i->dev, j->dev, i->ref_nr);
+ 			if (ret)
+ 				goto rollback_mesh;
+ 		}
+@@ -5805,7 +5817,7 @@ static int __netdev_upper_dev_link(struct net_device *dev,
+ 	list_for_each_entry(i, &upper_dev->all_adj_list.upper, list) {
+ 		pr_debug("linking %s's upper device %s with %s\n",
+ 			 upper_dev->name, i->dev->name, dev->name);
+-		ret = __netdev_adjacent_dev_link(dev, i->dev);
++		ret = __netdev_adjacent_dev_link(dev, i->dev, i->ref_nr);
+ 		if (ret)
+ 			goto rollback_upper_mesh;
+ 	}
+@@ -5814,7 +5826,7 @@ static int __netdev_upper_dev_link(struct net_device *dev,
+ 	list_for_each_entry(i, &dev->all_adj_list.lower, list) {
+ 		pr_debug("linking %s's lower device %s with %s\n", dev->name,
+ 			 i->dev->name, upper_dev->name);
+-		ret = __netdev_adjacent_dev_link(i->dev, upper_dev);
++		ret = __netdev_adjacent_dev_link(i->dev, upper_dev, i->ref_nr);
+ 		if (ret)
+ 			goto rollback_lower_mesh;
+ 	}
+@@ -5832,7 +5844,7 @@ rollback_lower_mesh:
+ 	list_for_each_entry(i, &dev->all_adj_list.lower, list) {
+ 		if (i == to_i)
+ 			break;
+-		__netdev_adjacent_dev_unlink(i->dev, upper_dev);
++		__netdev_adjacent_dev_unlink(i->dev, upper_dev, i->ref_nr);
+ 	}
+ 
+ 	i = NULL;
+@@ -5842,7 +5854,7 @@ rollback_upper_mesh:
+ 	list_for_each_entry(i, &upper_dev->all_adj_list.upper, list) {
+ 		if (i == to_i)
+ 			break;
+-		__netdev_adjacent_dev_unlink(dev, i->dev);
++		__netdev_adjacent_dev_unlink(dev, i->dev, i->ref_nr);
+ 	}
+ 
+ 	i = j = NULL;
+@@ -5854,7 +5866,7 @@ rollback_mesh:
+ 		list_for_each_entry(j, &upper_dev->all_adj_list.upper, list) {
+ 			if (i == to_i && j == to_j)
+ 				break;
+-			__netdev_adjacent_dev_unlink(i->dev, j->dev);
++			__netdev_adjacent_dev_unlink(i->dev, j->dev, i->ref_nr);
+ 		}
+ 		if (i == to_i)
+ 			break;
+@@ -5934,16 +5946,16 @@ void netdev_upper_dev_unlink(struct net_device *dev,
+ 	 */
+ 	list_for_each_entry(i, &dev->all_adj_list.lower, list)
+ 		list_for_each_entry(j, &upper_dev->all_adj_list.upper, list)
+-			__netdev_adjacent_dev_unlink(i->dev, j->dev);
++			__netdev_adjacent_dev_unlink(i->dev, j->dev, i->ref_nr);
+ 
+ 	/* remove also the devices itself from lower/upper device
+ 	 * list
+ 	 */
+ 	list_for_each_entry(i, &dev->all_adj_list.lower, list)
+-		__netdev_adjacent_dev_unlink(i->dev, upper_dev);
++		__netdev_adjacent_dev_unlink(i->dev, upper_dev, i->ref_nr);
+ 
+ 	list_for_each_entry(i, &upper_dev->all_adj_list.upper, list)
+-		__netdev_adjacent_dev_unlink(dev, i->dev);
++		__netdev_adjacent_dev_unlink(dev, i->dev, i->ref_nr);
+ 
+ 	call_netdevice_notifiers_info(NETDEV_CHANGEUPPER, dev,
+ 				      &changeupper_info.info);
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index bbd118b19aef..306b8f0e03c1 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -216,8 +216,8 @@
+ #define M_QUEUE_XMIT		2	/* Inject packet into qdisc */
+ 
+ /* If lock -- protects updating of if_list */
+-#define   if_lock(t)           spin_lock(&(t->if_lock));
+-#define   if_unlock(t)           spin_unlock(&(t->if_lock));
++#define   if_lock(t)           mutex_lock(&(t->if_lock));
++#define   if_unlock(t)           mutex_unlock(&(t->if_lock));
+ 
+ /* Used to help with determining the pkts on receive */
+ #define PKTGEN_MAGIC 0xbe9be955
+@@ -423,7 +423,7 @@ struct pktgen_net {
+ };
+ 
+ struct pktgen_thread {
+-	spinlock_t if_lock;		/* for list of devices */
++	struct mutex if_lock;		/* for list of devices */
+ 	struct list_head if_list;	/* All device here */
+ 	struct list_head th_list;
+ 	struct task_struct *tsk;
+@@ -2010,11 +2010,13 @@ static void pktgen_change_name(const struct pktgen_net *pn, struct net_device *d
+ {
+ 	struct pktgen_thread *t;
+ 
++	mutex_lock(&pktgen_thread_lock);
++
+ 	list_for_each_entry(t, &pn->pktgen_threads, th_list) {
+ 		struct pktgen_dev *pkt_dev;
+ 
+-		rcu_read_lock();
+-		list_for_each_entry_rcu(pkt_dev, &t->if_list, list) {
++		if_lock(t);
++		list_for_each_entry(pkt_dev, &t->if_list, list) {
+ 			if (pkt_dev->odev != dev)
+ 				continue;
+ 
+@@ -2029,8 +2031,9 @@ static void pktgen_change_name(const struct pktgen_net *pn, struct net_device *d
+ 				       dev->name);
+ 			break;
+ 		}
+-		rcu_read_unlock();
++		if_unlock(t);
+ 	}
++	mutex_unlock(&pktgen_thread_lock);
+ }
+ 
+ static int pktgen_device_event(struct notifier_block *unused,
+@@ -2286,7 +2289,7 @@ out:
+ 
+ static inline void set_pkt_overhead(struct pktgen_dev *pkt_dev)
+ {
+-	pkt_dev->pkt_overhead = LL_RESERVED_SPACE(pkt_dev->odev);
++	pkt_dev->pkt_overhead = 0;
+ 	pkt_dev->pkt_overhead += pkt_dev->nr_labels*sizeof(u32);
+ 	pkt_dev->pkt_overhead += VLAN_TAG_SIZE(pkt_dev);
+ 	pkt_dev->pkt_overhead += SVLAN_TAG_SIZE(pkt_dev);
+@@ -2777,13 +2780,13 @@ static void pktgen_finalize_skb(struct pktgen_dev *pkt_dev, struct sk_buff *skb,
+ }
+ 
+ static struct sk_buff *pktgen_alloc_skb(struct net_device *dev,
+-					struct pktgen_dev *pkt_dev,
+-					unsigned int extralen)
++					struct pktgen_dev *pkt_dev)
+ {
++	unsigned int extralen = LL_RESERVED_SPACE(dev);
+ 	struct sk_buff *skb = NULL;
+-	unsigned int size = pkt_dev->cur_pkt_size + 64 + extralen +
+-			    pkt_dev->pkt_overhead;
++	unsigned int size;
+ 
++	size = pkt_dev->cur_pkt_size + 64 + extralen + pkt_dev->pkt_overhead;
+ 	if (pkt_dev->flags & F_NODE) {
+ 		int node = pkt_dev->node >= 0 ? pkt_dev->node : numa_node_id();
+ 
+@@ -2796,8 +2799,9 @@ static struct sk_buff *pktgen_alloc_skb(struct net_device *dev,
+ 		 skb = __netdev_alloc_skb(dev, size, GFP_NOWAIT);
+ 	}
+ 
++	/* the caller pre-fetches from skb->data and reserves for the mac hdr */
+ 	if (likely(skb))
+-		skb_reserve(skb, LL_RESERVED_SPACE(dev));
++		skb_reserve(skb, extralen - 16);
+ 
+ 	return skb;
+ }
+@@ -2830,16 +2834,14 @@ static struct sk_buff *fill_packet_ipv4(struct net_device *odev,
+ 	mod_cur_headers(pkt_dev);
+ 	queue_map = pkt_dev->cur_queue_map;
+ 
+-	datalen = (odev->hard_header_len + 16) & ~0xf;
+-
+-	skb = pktgen_alloc_skb(odev, pkt_dev, datalen);
++	skb = pktgen_alloc_skb(odev, pkt_dev);
+ 	if (!skb) {
+ 		sprintf(pkt_dev->result, "No memory");
+ 		return NULL;
+ 	}
+ 
+ 	prefetchw(skb->data);
+-	skb_reserve(skb, datalen);
++	skb_reserve(skb, 16);
+ 
+ 	/*  Reserve for ethernet and IP header  */
+ 	eth = (__u8 *) skb_push(skb, 14);
+@@ -2959,7 +2961,7 @@ static struct sk_buff *fill_packet_ipv6(struct net_device *odev,
+ 	mod_cur_headers(pkt_dev);
+ 	queue_map = pkt_dev->cur_queue_map;
+ 
+-	skb = pktgen_alloc_skb(odev, pkt_dev, 16);
++	skb = pktgen_alloc_skb(odev, pkt_dev);
+ 	if (!skb) {
+ 		sprintf(pkt_dev->result, "No memory");
+ 		return NULL;
+@@ -3763,7 +3765,7 @@ static int __net_init pktgen_create_thread(int cpu, struct pktgen_net *pn)
+ 		return -ENOMEM;
+ 	}
+ 
+-	spin_lock_init(&t->if_lock);
++	mutex_init(&t->if_lock);
+ 	t->cpu = cpu;
+ 
+ 	INIT_LIST_HEAD(&t->if_list);
+diff --git a/net/ethernet/eth.c b/net/ethernet/eth.c
+index 66dff5e3d772..02acfff36028 100644
+--- a/net/ethernet/eth.c
++++ b/net/ethernet/eth.c
+@@ -439,7 +439,7 @@ struct sk_buff **eth_gro_receive(struct sk_buff **head,
+ 
+ 	skb_gro_pull(skb, sizeof(*eh));
+ 	skb_gro_postpull_rcsum(skb, eh, sizeof(*eh));
+-	pp = ptype->callbacks.gro_receive(head, skb);
++	pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);
+ 
+ out_unlock:
+ 	rcu_read_unlock();
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 55513e654d79..eebbc0f2baa8 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1388,7 +1388,7 @@ struct sk_buff **inet_gro_receive(struct sk_buff **head, struct sk_buff *skb)
+ 	skb_gro_pull(skb, sizeof(*iph));
+ 	skb_set_transport_header(skb, skb_gro_offset(skb));
+ 
+-	pp = ops->callbacks.gro_receive(head, skb);
++	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+ 
+ out_unlock:
+ 	rcu_read_unlock();
+diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c
+index 321d57f825ce..5351b61ab8d3 100644
+--- a/net/ipv4/fou.c
++++ b/net/ipv4/fou.c
+@@ -249,7 +249,7 @@ static struct sk_buff **fou_gro_receive(struct sock *sk,
+ 	if (!ops || !ops->callbacks.gro_receive)
+ 		goto out_unlock;
+ 
+-	pp = ops->callbacks.gro_receive(head, skb);
++	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+ 
+ out_unlock:
+ 	rcu_read_unlock();
+@@ -441,7 +441,7 @@ next_proto:
+ 	if (WARN_ON_ONCE(!ops || !ops->callbacks.gro_receive))
+ 		goto out_unlock;
+ 
+-	pp = ops->callbacks.gro_receive(head, skb);
++	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+ 	flush = 0;
+ 
+ out_unlock:
+diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
+index ecd1e09dbbf1..6871f59cd0c0 100644
+--- a/net/ipv4/gre_offload.c
++++ b/net/ipv4/gre_offload.c
+@@ -227,7 +227,7 @@ static struct sk_buff **gre_gro_receive(struct sk_buff **head,
+ 	/* Adjusted NAPI_GRO_CB(skb)->csum after skb_gro_pull()*/
+ 	skb_gro_postpull_rcsum(skb, greh, grehlen);
+ 
+-	pp = ptype->callbacks.gro_receive(head, skb);
++	pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);
+ 	flush = 0;
+ 
+ out_unlock:
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index 71a52f4d4cff..11ef96e2147a 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -98,7 +98,7 @@ static void ip_cmsg_recv_retopts(struct msghdr *msg, struct sk_buff *skb)
+ }
+ 
+ static void ip_cmsg_recv_checksum(struct msghdr *msg, struct sk_buff *skb,
+-				  int offset)
++				  int tlen, int offset)
+ {
+ 	__wsum csum = skb->csum;
+ 
+@@ -106,8 +106,9 @@ static void ip_cmsg_recv_checksum(struct msghdr *msg, struct sk_buff *skb,
+ 		return;
+ 
+ 	if (offset != 0)
+-		csum = csum_sub(csum, csum_partial(skb_transport_header(skb),
+-						   offset, 0));
++		csum = csum_sub(csum,
++				csum_partial(skb_transport_header(skb) + tlen,
++					     offset, 0));
+ 
+ 	put_cmsg(msg, SOL_IP, IP_CHECKSUM, sizeof(__wsum), &csum);
+ }
+@@ -153,7 +154,7 @@ static void ip_cmsg_recv_dstaddr(struct msghdr *msg, struct sk_buff *skb)
+ }
+ 
+ void ip_cmsg_recv_offset(struct msghdr *msg, struct sk_buff *skb,
+-			 int offset)
++			 int tlen, int offset)
+ {
+ 	struct inet_sock *inet = inet_sk(skb->sk);
+ 	unsigned int flags = inet->cmsg_flags;
+@@ -216,7 +217,7 @@ void ip_cmsg_recv_offset(struct msghdr *msg, struct sk_buff *skb,
+ 	}
+ 
+ 	if (flags & IP_CMSG_CHECKSUM)
+-		ip_cmsg_recv_checksum(msg, skb, offset);
++		ip_cmsg_recv_checksum(msg, skb, tlen, offset);
+ }
+ EXPORT_SYMBOL(ip_cmsg_recv_offset);
+ 
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 1cb67de106fe..80bc36b25de2 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -96,11 +96,11 @@ static void inet_get_ping_group_range_table(struct ctl_table *table, kgid_t *low
+ 		container_of(table->data, struct net, ipv4.ping_group_range.range);
+ 	unsigned int seq;
+ 	do {
+-		seq = read_seqbegin(&net->ipv4.ip_local_ports.lock);
++		seq = read_seqbegin(&net->ipv4.ping_group_range.lock);
+ 
+ 		*low = data[0];
+ 		*high = data[1];
+-	} while (read_seqretry(&net->ipv4.ip_local_ports.lock, seq));
++	} while (read_seqretry(&net->ipv4.ping_group_range.lock, seq));
+ }
+ 
+ /* Update system visible IP port range */
+@@ -109,10 +109,10 @@ static void set_ping_group_range(struct ctl_table *table, kgid_t low, kgid_t hig
+ 	kgid_t *data = table->data;
+ 	struct net *net =
+ 		container_of(table->data, struct net, ipv4.ping_group_range.range);
+-	write_seqlock(&net->ipv4.ip_local_ports.lock);
++	write_seqlock(&net->ipv4.ping_group_range.lock);
+ 	data[0] = low;
+ 	data[1] = high;
+-	write_sequnlock(&net->ipv4.ip_local_ports.lock);
++	write_sequnlock(&net->ipv4.ping_group_range.lock);
+ }
+ 
+ /* Validate changes from /proc interface. */
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 5fdcb8d108d4..c0d71e7d663e 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1327,7 +1327,7 @@ try_again:
+ 		*addr_len = sizeof(*sin);
+ 	}
+ 	if (inet->cmsg_flags)
+-		ip_cmsg_recv_offset(msg, skb, sizeof(struct udphdr) + off);
++		ip_cmsg_recv_offset(msg, skb, sizeof(struct udphdr), off);
+ 
+ 	err = copied;
+ 	if (flags & MSG_TRUNC)
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 81f253b6ff36..6de9f977356e 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -293,7 +293,7 @@ unflush:
+ 
+ 	skb_gro_pull(skb, sizeof(struct udphdr)); /* pull encapsulating udp header */
+ 	skb_gro_postpull_rcsum(skb, uh, sizeof(struct udphdr));
+-	pp = udp_sk(sk)->gro_receive(sk, head, skb);
++	pp = call_gro_receive_sk(udp_sk(sk)->gro_receive, sk, head, skb);
+ 
+ out_unlock:
+ 	rcu_read_unlock();
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 2f1f5d439788..f5432d65e6bf 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2995,7 +2995,7 @@ static void init_loopback(struct net_device *dev)
+ 				 * lo device down, release this obsolete dst and
+ 				 * reallocate a new router for ifa.
+ 				 */
+-				if (sp_ifa->rt->dst.obsolete > 0) {
++				if (!atomic_read(&sp_ifa->rt->rt6i_ref)) {
+ 					ip6_rt_put(sp_ifa->rt);
+ 					sp_ifa->rt = NULL;
+ 				} else {
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index 22e90e56b5a9..a09418bda1f8 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -243,7 +243,7 @@ static struct sk_buff **ipv6_gro_receive(struct sk_buff **head,
+ 
+ 	skb_gro_postpull_rcsum(skb, iph, nlen);
+ 
+-	pp = ops->callbacks.gro_receive(head, skb);
++	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+ 
+ out_unlock:
+ 	rcu_read_unlock();
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 888543debe4e..41489f39c456 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -155,6 +155,7 @@ ip6_tnl_lookup(struct net *net, const struct in6_addr *remote, const struct in6_
+ 	hash = HASH(&any, local);
+ 	for_each_ip6_tunnel_rcu(ip6n->tnls_r_l[hash]) {
+ 		if (ipv6_addr_equal(local, &t->parms.laddr) &&
++		    ipv6_addr_any(&t->parms.raddr) &&
+ 		    (t->dev->flags & IFF_UP))
+ 			return t;
+ 	}
+@@ -162,6 +163,7 @@ ip6_tnl_lookup(struct net *net, const struct in6_addr *remote, const struct in6_
+ 	hash = HASH(remote, &any);
+ 	for_each_ip6_tunnel_rcu(ip6n->tnls_r_l[hash]) {
+ 		if (ipv6_addr_equal(remote, &t->parms.raddr) &&
++		    ipv6_addr_any(&t->parms.laddr) &&
+ 		    (t->dev->flags & IFF_UP))
+ 			return t;
+ 	}
+@@ -1132,6 +1134,7 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
+ 	if (err)
+ 		return err;
+ 
++	skb->protocol = htons(ETH_P_IPV6);
+ 	skb_push(skb, sizeof(struct ipv6hdr));
+ 	skb_reset_network_header(skb);
+ 	ipv6h = ipv6_hdr(skb);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 269218aacbea..23153ac6c9b9 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -656,7 +656,8 @@ static struct rt6_info *find_match(struct rt6_info *rt, int oif, int strict,
+ 	struct net_device *dev = rt->dst.dev;
+ 
+ 	if (dev && !netif_carrier_ok(dev) &&
+-	    idev->cnf.ignore_routes_with_linkdown)
++	    idev->cnf.ignore_routes_with_linkdown &&
++	    !(strict & RT6_LOOKUP_F_IGNORE_LINKSTATE))
+ 		goto out;
+ 
+ 	if (rt6_check_expired(rt))
+@@ -1050,6 +1051,7 @@ struct rt6_info *ip6_pol_route(struct net *net, struct fib6_table *table,
+ 	int strict = 0;
+ 
+ 	strict |= flags & RT6_LOOKUP_F_IFACE;
++	strict |= flags & RT6_LOOKUP_F_IGNORE_LINKSTATE;
+ 	if (net->ipv6.devconf_all->forwarding == 0)
+ 		strict |= RT6_LOOKUP_F_REACHABLE;
+ 
+@@ -1783,7 +1785,7 @@ static struct rt6_info *ip6_nh_lookup_table(struct net *net,
+ 	};
+ 	struct fib6_table *table;
+ 	struct rt6_info *rt;
+-	int flags = RT6_LOOKUP_F_IFACE;
++	int flags = RT6_LOOKUP_F_IFACE | RT6_LOOKUP_F_IGNORE_LINKSTATE;
+ 
+ 	table = fib6_get_table(net, cfg->fc_table);
+ 	if (!table)
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 94f4f89d73e7..fc67822c42e0 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1193,6 +1193,16 @@ out:
+ 	return NULL;
+ }
+ 
++static void tcp_v6_restore_cb(struct sk_buff *skb)
++{
++	/* We need to move header back to the beginning if xfrm6_policy_check()
++	 * and tcp_v6_fill_cb() are going to be called again.
++	 * ip6_datagram_recv_specific_ctl() also expects IP6CB to be there.
++	 */
++	memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6,
++		sizeof(struct inet6_skb_parm));
++}
++
+ /* The socket must have it's spinlock held when we get
+  * here, unless it is a TCP_LISTEN socket.
+  *
+@@ -1322,6 +1332,7 @@ ipv6_pktoptions:
+ 			np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb));
+ 		if (ipv6_opt_accepted(sk, opt_skb, &TCP_SKB_CB(opt_skb)->header.h6)) {
+ 			skb_set_owner_r(opt_skb, sk);
++			tcp_v6_restore_cb(opt_skb);
+ 			opt_skb = xchg(&np->pktoptions, opt_skb);
+ 		} else {
+ 			__kfree_skb(opt_skb);
+@@ -1355,15 +1366,6 @@ static void tcp_v6_fill_cb(struct sk_buff *skb, const struct ipv6hdr *hdr,
+ 	TCP_SKB_CB(skb)->sacked = 0;
+ }
+ 
+-static void tcp_v6_restore_cb(struct sk_buff *skb)
+-{
+-	/* We need to move header back to the beginning if xfrm6_policy_check()
+-	 * and tcp_v6_fill_cb() are going to be called again.
+-	 */
+-	memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6,
+-		sizeof(struct inet6_skb_parm));
+-}
+-
+ static int tcp_v6_rcv(struct sk_buff *skb)
+ {
+ 	const struct tcphdr *th;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 19ac3a1c308d..c2a8656c22eb 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -427,7 +427,8 @@ try_again:
+ 
+ 	if (is_udp4) {
+ 		if (inet->cmsg_flags)
+-			ip_cmsg_recv(msg, skb);
++			ip_cmsg_recv_offset(msg, skb,
++					    sizeof(struct udphdr), off);
+ 	} else {
+ 		if (np->rxopt.all)
+ 			ip6_datagram_recv_specific_ctl(sk, msg, skb);
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 627f898c05b9..62bea4591054 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1832,7 +1832,7 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 	/* Record the max length of recvmsg() calls for future allocations */
+ 	nlk->max_recvmsg_len = max(nlk->max_recvmsg_len, len);
+ 	nlk->max_recvmsg_len = min_t(size_t, nlk->max_recvmsg_len,
+-				     16384);
++				     SKB_WITH_OVERHEAD(32768));
+ 
+ 	copied = data_skb->len;
+ 	if (len < copied) {
+@@ -2083,8 +2083,9 @@ static int netlink_dump(struct sock *sk)
+ 
+ 	if (alloc_min_size < nlk->max_recvmsg_len) {
+ 		alloc_size = nlk->max_recvmsg_len;
+-		skb = alloc_skb(alloc_size, GFP_KERNEL |
+-					    __GFP_NOWARN | __GFP_NORETRY);
++		skb = alloc_skb(alloc_size,
++				(GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) |
++				__GFP_NOWARN | __GFP_NORETRY);
+ 	}
+ 	if (!skb) {
+ 		alloc_size = alloc_min_size;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 33a4697d5539..d2238b204691 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -250,7 +250,7 @@ static void __fanout_link(struct sock *sk, struct packet_sock *po);
+ static int packet_direct_xmit(struct sk_buff *skb)
+ {
+ 	struct net_device *dev = skb->dev;
+-	netdev_features_t features;
++	struct sk_buff *orig_skb = skb;
+ 	struct netdev_queue *txq;
+ 	int ret = NETDEV_TX_BUSY;
+ 
+@@ -258,9 +258,8 @@ static int packet_direct_xmit(struct sk_buff *skb)
+ 		     !netif_carrier_ok(dev)))
+ 		goto drop;
+ 
+-	features = netif_skb_features(skb);
+-	if (skb_needs_linearize(skb, features) &&
+-	    __skb_linearize(skb))
++	skb = validate_xmit_skb_list(skb, dev);
++	if (skb != orig_skb)
+ 		goto drop;
+ 
+ 	txq = skb_get_tx_queue(dev, skb);
+@@ -280,7 +279,7 @@ static int packet_direct_xmit(struct sk_buff *skb)
+ 	return ret;
+ drop:
+ 	atomic_long_inc(&dev->tx_dropped);
+-	kfree_skb(skb);
++	kfree_skb_list(skb);
+ 	return NET_XMIT_DROP;
+ }
+ 
+@@ -3952,6 +3951,7 @@ static int packet_notifier(struct notifier_block *this,
+ 				}
+ 				if (msg == NETDEV_UNREGISTER) {
+ 					packet_cached_dev_reset(po);
++					fanout_release(sk);
+ 					po->ifindex = -1;
+ 					if (po->prot_hook.dev)
+ 						dev_put(po->prot_hook.dev);
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index d09d0687594b..027ddf412c40 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -341,22 +341,25 @@ int tcf_register_action(struct tc_action_ops *act,
+ 	if (!act->act || !act->dump || !act->init || !act->walk || !act->lookup)
+ 		return -EINVAL;
+ 
++	/* We have to register pernet ops before making the action ops visible,
++	 * otherwise tcf_action_init_1() could get a partially initialized
++	 * netns.
++	 */
++	ret = register_pernet_subsys(ops);
++	if (ret)
++		return ret;
++
+ 	write_lock(&act_mod_lock);
+ 	list_for_each_entry(a, &act_base, head) {
+ 		if (act->type == a->type || (strcmp(act->kind, a->kind) == 0)) {
+ 			write_unlock(&act_mod_lock);
++			unregister_pernet_subsys(ops);
+ 			return -EEXIST;
+ 		}
+ 	}
+ 	list_add_tail(&act->head, &act_base);
+ 	write_unlock(&act_mod_lock);
+ 
+-	ret = register_pernet_subsys(ops);
+-	if (ret) {
+-		tcf_unregister_action(act, ops);
+-		return ret;
+-	}
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL(tcf_register_action);
+@@ -367,8 +370,6 @@ int tcf_unregister_action(struct tc_action_ops *act,
+ 	struct tc_action_ops *a;
+ 	int err = -ENOENT;
+ 
+-	unregister_pernet_subsys(ops);
+-
+ 	write_lock(&act_mod_lock);
+ 	list_for_each_entry(a, &act_base, head) {
+ 		if (a == act) {
+@@ -378,6 +379,8 @@ int tcf_unregister_action(struct tc_action_ops *act,
+ 		}
+ 	}
+ 	write_unlock(&act_mod_lock);
++	if (!err)
++		unregister_pernet_subsys(ops);
+ 	return err;
+ }
+ EXPORT_SYMBOL(tcf_unregister_action);
+diff --git a/net/sched/act_vlan.c b/net/sched/act_vlan.c
+index 691409de3e1a..4ffc6c13a566 100644
+--- a/net/sched/act_vlan.c
++++ b/net/sched/act_vlan.c
+@@ -36,6 +36,12 @@ static int tcf_vlan(struct sk_buff *skb, const struct tc_action *a,
+ 	bstats_update(&v->tcf_bstats, skb);
+ 	action = v->tcf_action;
+ 
++	/* Ensure 'data' points at mac_header prior calling vlan manipulating
++	 * functions.
++	 */
++	if (skb_at_tc_ingress(skb))
++		skb_push_rcsum(skb, skb->mac_len);
++
+ 	switch (v->tcfv_action) {
+ 	case TCA_VLAN_ACT_POP:
+ 		err = skb_vlan_pop(skb);
+@@ -57,6 +63,9 @@ drop:
+ 	action = TC_ACT_SHOT;
+ 	v->tcf_qstats.drops++;
+ unlock:
++	if (skb_at_tc_ingress(skb))
++		skb_pull_rcsum(skb, skb->mac_len);
++
+ 	spin_unlock(&v->tcf_lock);
+ 	return action;
+ }
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index a7c5645373af..74bed5e9bb89 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -344,7 +344,8 @@ replay:
+ 			if (err == 0) {
+ 				struct tcf_proto *next = rtnl_dereference(tp->next);
+ 
+-				tfilter_notify(net, skb, n, tp, fh, RTM_DELTFILTER);
++				tfilter_notify(net, skb, n, tp,
++					       t->tcm_handle, RTM_DELTFILTER);
+ 				if (tcf_destroy(tp, false))
+ 					RCU_INIT_POINTER(*back, next);
+ 			}
+diff --git a/net/sctp/output.c b/net/sctp/output.c
+index 31b7bc35895d..81929907a365 100644
+--- a/net/sctp/output.c
++++ b/net/sctp/output.c
+@@ -417,6 +417,7 @@ int sctp_packet_transmit(struct sctp_packet *packet, gfp_t gfp)
+ 	__u8 has_data = 0;
+ 	int gso = 0;
+ 	int pktcount = 0;
++	int auth_len = 0;
+ 	struct dst_entry *dst;
+ 	unsigned char *auth = NULL;	/* pointer to auth in skb data */
+ 
+@@ -505,7 +506,12 @@ int sctp_packet_transmit(struct sctp_packet *packet, gfp_t gfp)
+ 			list_for_each_entry(chunk, &packet->chunk_list, list) {
+ 				int padded = WORD_ROUND(chunk->skb->len);
+ 
+-				if (pkt_size + padded > tp->pathmtu)
++				if (chunk == packet->auth)
++					auth_len = padded;
++				else if (auth_len + padded + packet->overhead >
++					 tp->pathmtu)
++					goto nomem;
++				else if (pkt_size + padded > tp->pathmtu)
+ 					break;
+ 				pkt_size += padded;
+ 			}
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index d88bb2b0b699..920469e7b0ef 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -3422,6 +3422,12 @@ sctp_disposition_t sctp_sf_ootb(struct net *net,
+ 			return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+ 						  commands);
+ 
++		/* Report violation if chunk len overflows */
++		ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length));
++		if (ch_end > skb_tail_pointer(skb))
++			return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
++						  commands);
++
+ 		/* Now that we know we at least have a chunk header,
+ 		 * do things that are type appropriate.
+ 		 */
+@@ -3453,12 +3459,6 @@ sctp_disposition_t sctp_sf_ootb(struct net *net,
+ 			}
+ 		}
+ 
+-		/* Report violation if chunk len overflows */
+-		ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length));
+-		if (ch_end > skb_tail_pointer(skb))
+-			return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+-						  commands);
+-
+ 		ch = (sctp_chunkhdr_t *) ch_end;
+ 	} while (ch_end < skb_tail_pointer(skb));
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 8ed2d99bde6d..baccbf3c1c60 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -4683,7 +4683,7 @@ static int sctp_getsockopt_disable_fragments(struct sock *sk, int len,
+ static int sctp_getsockopt_events(struct sock *sk, int len, char __user *optval,
+ 				  int __user *optlen)
+ {
+-	if (len <= 0)
++	if (len == 0)
+ 		return -EINVAL;
+ 	if (len > sizeof(struct sctp_event_subscribe))
+ 		len = sizeof(struct sctp_event_subscribe);
+@@ -6426,6 +6426,9 @@ static int sctp_getsockopt(struct sock *sk, int level, int optname,
+ 	if (get_user(len, optlen))
+ 		return -EFAULT;
+ 
++	if (len < 0)
++		return -EINVAL;
++
+ 	lock_sock(sk);
+ 
+ 	switch (optname) {
+diff --git a/net/switchdev/switchdev.c b/net/switchdev/switchdev.c
+index a5fc9dd24aa9..a56c5e6f4498 100644
+--- a/net/switchdev/switchdev.c
++++ b/net/switchdev/switchdev.c
+@@ -774,6 +774,9 @@ int switchdev_port_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
+ 	u32 mask = BR_LEARNING | BR_LEARNING_SYNC | BR_FLOOD;
+ 	int err;
+ 
++	if (!netif_is_bridge_port(dev))
++		return -EOPNOTSUPP;
++
+ 	err = switchdev_port_attr_get(dev, &attr);
+ 	if (err && err != -EOPNOTSUPP)
+ 		return err;
+@@ -929,6 +932,9 @@ int switchdev_port_bridge_setlink(struct net_device *dev,
+ 	struct nlattr *afspec;
+ 	int err = 0;
+ 
++	if (!netif_is_bridge_port(dev))
++		return -EOPNOTSUPP;
++
+ 	protinfo = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg),
+ 				   IFLA_PROTINFO);
+ 	if (protinfo) {
+@@ -962,6 +968,9 @@ int switchdev_port_bridge_dellink(struct net_device *dev,
+ {
+ 	struct nlattr *afspec;
+ 
++	if (!netif_is_bridge_port(dev))
++		return -EOPNOTSUPP;
++
+ 	afspec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg),
+ 				 IFLA_AF_SPEC);
+ 	if (afspec)



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-11-19 11:05 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-11-19 11:05 UTC (permalink / raw)
  To: gentoo-commits

commit:     f13a81bef4970bd4993d84ad318bfe4990d92536
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Nov 19 11:05:07 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Nov 19 11:05:07 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f13a81be

Linux patch 4.8.9

 0000_README            |    4 +
 1008_linux-4.8.9.patch | 3120 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3124 insertions(+)

diff --git a/0000_README b/0000_README
index 236529a..d5af994 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-4.8.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.8
 
+Patch:  1008_linux-4.8.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-4.8.9.patch b/1008_linux-4.8.9.patch
new file mode 100644
index 0000000..6b106d5
--- /dev/null
+++ b/1008_linux-4.8.9.patch
@@ -0,0 +1,3120 @@
+diff --git a/Makefile b/Makefile
+index 8f18daa2c76a..c1519ab85258 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arc/kernel/time.c b/arch/arc/kernel/time.c
+index f927b8dc6edd..c10390d1ddb6 100644
+--- a/arch/arc/kernel/time.c
++++ b/arch/arc/kernel/time.c
+@@ -152,14 +152,17 @@ static cycle_t arc_read_rtc(struct clocksource *cs)
+ 		cycle_t  full;
+ 	} stamp;
+ 
+-
+-	__asm__ __volatile(
+-	"1:						\n"
+-	"	lr		%0, [AUX_RTC_LOW]	\n"
+-	"	lr		%1, [AUX_RTC_HIGH]	\n"
+-	"	lr		%2, [AUX_RTC_CTRL]	\n"
+-	"	bbit0.nt	%2, 31, 1b		\n"
+-	: "=r" (stamp.low), "=r" (stamp.high), "=r" (status));
++	/*
++	 * hardware has an internal state machine which tracks readout of
++	 * low/high and updates the CTRL.status if
++	 *  - interrupt/exception taken between the two reads
++	 *  - high increments after low has been read
++	 */
++	do {
++		stamp.low = read_aux_reg(AUX_RTC_LOW);
++		stamp.high = read_aux_reg(AUX_RTC_HIGH);
++		status = read_aux_reg(AUX_RTC_CTRL);
++	} while (!(status & _BITUL(31)));
+ 
+ 	return stamp.full;
+ }
+diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
+index 20afc65e22dc..9288851d43a0 100644
+--- a/arch/arc/mm/dma.c
++++ b/arch/arc/mm/dma.c
+@@ -105,6 +105,31 @@ static void arc_dma_free(struct device *dev, size_t size, void *vaddr,
+ 	__free_pages(page, get_order(size));
+ }
+ 
++static int arc_dma_mmap(struct device *dev, struct vm_area_struct *vma,
++			void *cpu_addr, dma_addr_t dma_addr, size_t size,
++			unsigned long attrs)
++{
++	unsigned long user_count = vma_pages(vma);
++	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
++	unsigned long pfn = __phys_to_pfn(plat_dma_to_phys(dev, dma_addr));
++	unsigned long off = vma->vm_pgoff;
++	int ret = -ENXIO;
++
++	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
++
++	if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
++		return ret;
++
++	if (off < count && user_count <= (count - off)) {
++		ret = remap_pfn_range(vma, vma->vm_start,
++				      pfn + off,
++				      user_count << PAGE_SHIFT,
++				      vma->vm_page_prot);
++	}
++
++	return ret;
++}
++
+ /*
+  * streaming DMA Mapping API...
+  * CPU accesses page via normal paddr, thus needs to explicitly made
+@@ -193,6 +218,7 @@ static int arc_dma_supported(struct device *dev, u64 dma_mask)
+ struct dma_map_ops arc_dma_ops = {
+ 	.alloc			= arc_dma_alloc,
+ 	.free			= arc_dma_free,
++	.mmap			= arc_dma_mmap,
+ 	.map_page		= arc_dma_map_page,
+ 	.map_sg			= arc_dma_map_sg,
+ 	.sync_single_for_device	= arc_dma_sync_single_for_device,
+diff --git a/arch/s390/hypfs/hypfs_diag.c b/arch/s390/hypfs/hypfs_diag.c
+index 28f03ca60100..794bebb43d23 100644
+--- a/arch/s390/hypfs/hypfs_diag.c
++++ b/arch/s390/hypfs/hypfs_diag.c
+@@ -363,11 +363,11 @@ out:
+ static int diag224_get_name_table(void)
+ {
+ 	/* memory must be below 2GB */
+-	diag224_cpu_names = kmalloc(PAGE_SIZE, GFP_KERNEL | GFP_DMA);
++	diag224_cpu_names = (char *) __get_free_page(GFP_KERNEL | GFP_DMA);
+ 	if (!diag224_cpu_names)
+ 		return -ENOMEM;
+ 	if (diag224(diag224_cpu_names)) {
+-		kfree(diag224_cpu_names);
++		free_page((unsigned long) diag224_cpu_names);
+ 		return -EOPNOTSUPP;
+ 	}
+ 	EBCASC(diag224_cpu_names + 16, (*diag224_cpu_names + 1) * 16);
+@@ -376,7 +376,7 @@ static int diag224_get_name_table(void)
+ 
+ static void diag224_delete_name_table(void)
+ {
+-	kfree(diag224_cpu_names);
++	free_page((unsigned long) diag224_cpu_names);
+ }
+ 
+ static int diag224_idx2name(int index, char *name)
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index 03323175de30..602af692efdc 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -192,7 +192,7 @@ struct task_struct;
+ struct mm_struct;
+ struct seq_file;
+ 
+-typedef int (*dump_trace_func_t)(void *data, unsigned long address);
++typedef int (*dump_trace_func_t)(void *data, unsigned long address, int reliable);
+ void dump_trace(dump_trace_func_t func, void *data,
+ 		struct task_struct *task, unsigned long sp);
+ 
+diff --git a/arch/s390/kernel/dumpstack.c b/arch/s390/kernel/dumpstack.c
+index 6693383bc01b..518f615ad0a2 100644
+--- a/arch/s390/kernel/dumpstack.c
++++ b/arch/s390/kernel/dumpstack.c
+@@ -38,10 +38,10 @@ __dump_trace(dump_trace_func_t func, void *data, unsigned long sp,
+ 		if (sp < low || sp > high - sizeof(*sf))
+ 			return sp;
+ 		sf = (struct stack_frame *) sp;
++		if (func(data, sf->gprs[8], 0))
++			return sp;
+ 		/* Follow the backchain. */
+ 		while (1) {
+-			if (func(data, sf->gprs[8]))
+-				return sp;
+ 			low = sp;
+ 			sp = sf->back_chain;
+ 			if (!sp)
+@@ -49,6 +49,8 @@ __dump_trace(dump_trace_func_t func, void *data, unsigned long sp,
+ 			if (sp <= low || sp > high - sizeof(*sf))
+ 				return sp;
+ 			sf = (struct stack_frame *) sp;
++			if (func(data, sf->gprs[8], 1))
++				return sp;
+ 		}
+ 		/* Zero backchain detected, check for interrupt frame. */
+ 		sp = (unsigned long) (sf + 1);
+@@ -56,7 +58,7 @@ __dump_trace(dump_trace_func_t func, void *data, unsigned long sp,
+ 			return sp;
+ 		regs = (struct pt_regs *) sp;
+ 		if (!user_mode(regs)) {
+-			if (func(data, regs->psw.addr))
++			if (func(data, regs->psw.addr, 1))
+ 				return sp;
+ 		}
+ 		low = sp;
+@@ -90,7 +92,7 @@ struct return_address_data {
+ 	int depth;
+ };
+ 
+-static int __return_address(void *data, unsigned long address)
++static int __return_address(void *data, unsigned long address, int reliable)
+ {
+ 	struct return_address_data *rd = data;
+ 
+@@ -109,9 +111,12 @@ unsigned long return_address(int depth)
+ }
+ EXPORT_SYMBOL_GPL(return_address);
+ 
+-static int show_address(void *data, unsigned long address)
++static int show_address(void *data, unsigned long address, int reliable)
+ {
+-	printk("([<%016lx>] %pSR)\n", address, (void *)address);
++	if (reliable)
++		printk(" [<%016lx>] %pSR \n", address, (void *)address);
++	else
++		printk("([<%016lx>] %pSR)\n", address, (void *)address);
+ 	return 0;
+ }
+ 
+diff --git a/arch/s390/kernel/perf_event.c b/arch/s390/kernel/perf_event.c
+index 17431f63de00..955a7b6fa0a4 100644
+--- a/arch/s390/kernel/perf_event.c
++++ b/arch/s390/kernel/perf_event.c
+@@ -222,7 +222,7 @@ static int __init service_level_perf_register(void)
+ }
+ arch_initcall(service_level_perf_register);
+ 
+-static int __perf_callchain_kernel(void *data, unsigned long address)
++static int __perf_callchain_kernel(void *data, unsigned long address, int reliable)
+ {
+ 	struct perf_callchain_entry_ctx *entry = data;
+ 
+diff --git a/arch/s390/kernel/stacktrace.c b/arch/s390/kernel/stacktrace.c
+index 44f84b23d4e5..355db9db8210 100644
+--- a/arch/s390/kernel/stacktrace.c
++++ b/arch/s390/kernel/stacktrace.c
+@@ -27,12 +27,12 @@ static int __save_address(void *data, unsigned long address, int nosched)
+ 	return 1;
+ }
+ 
+-static int save_address(void *data, unsigned long address)
++static int save_address(void *data, unsigned long address, int reliable)
+ {
+ 	return __save_address(data, address, 0);
+ }
+ 
+-static int save_address_nosched(void *data, unsigned long address)
++static int save_address_nosched(void *data, unsigned long address, int reliable)
+ {
+ 	return __save_address(data, address, 1);
+ }
+diff --git a/arch/s390/oprofile/init.c b/arch/s390/oprofile/init.c
+index 16f4c3960b87..9a4de4599c7b 100644
+--- a/arch/s390/oprofile/init.c
++++ b/arch/s390/oprofile/init.c
+@@ -13,7 +13,7 @@
+ #include <linux/init.h>
+ #include <asm/processor.h>
+ 
+-static int __s390_backtrace(void *data, unsigned long address)
++static int __s390_backtrace(void *data, unsigned long address, int reliable)
+ {
+ 	unsigned int *depth = data;
+ 
+diff --git a/arch/x86/entry/Makefile b/arch/x86/entry/Makefile
+index 77f28ce9c646..9976fcecd17e 100644
+--- a/arch/x86/entry/Makefile
++++ b/arch/x86/entry/Makefile
+@@ -5,8 +5,8 @@
+ OBJECT_FILES_NON_STANDARD_entry_$(BITS).o   := y
+ OBJECT_FILES_NON_STANDARD_entry_64_compat.o := y
+ 
+-CFLAGS_syscall_64.o		+= -Wno-override-init
+-CFLAGS_syscall_32.o		+= -Wno-override-init
++CFLAGS_syscall_64.o		+= $(call cc-option,-Wno-override-init,)
++CFLAGS_syscall_32.o		+= $(call cc-option,-Wno-override-init,)
+ obj-y				:= entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o
+ obj-y				+= common.o
+ 
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index fbd19444403f..d99ca57cbf60 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -453,6 +453,7 @@ static void __init acpi_sci_ioapic_setup(u8 bus_irq, u16 polarity, u16 trigger,
+ 		polarity = acpi_sci_flags & ACPI_MADT_POLARITY_MASK;
+ 
+ 	mp_override_legacy_irq(bus_irq, polarity, trigger, gsi);
++	acpi_penalize_sci_irq(bus_irq, trigger, polarity);
+ 
+ 	/*
+ 	 * stash over-ride to indicate we've been here
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 60746ef904e4..caea575f25f8 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -662,7 +662,7 @@ static int ghes_proc(struct ghes *ghes)
+ 	ghes_do_proc(ghes, ghes->estatus);
+ out:
+ 	ghes_clear_estatus(ghes);
+-	return 0;
++	return rc;
+ }
+ 
+ static void ghes_add_timer(struct ghes *ghes)
+diff --git a/drivers/acpi/pci_link.c b/drivers/acpi/pci_link.c
+index c983bf733ad3..bc3d914dfc3e 100644
+--- a/drivers/acpi/pci_link.c
++++ b/drivers/acpi/pci_link.c
+@@ -87,6 +87,7 @@ struct acpi_pci_link {
+ 
+ static LIST_HEAD(acpi_link_list);
+ static DEFINE_MUTEX(acpi_link_lock);
++static int sci_irq = -1, sci_penalty;
+ 
+ /* --------------------------------------------------------------------------
+                             PCI Link Device Management
+@@ -496,25 +497,13 @@ static int acpi_irq_get_penalty(int irq)
+ {
+ 	int penalty = 0;
+ 
+-	/*
+-	* Penalize IRQ used by ACPI SCI. If ACPI SCI pin attributes conflict
+-	* with PCI IRQ attributes, mark ACPI SCI as ISA_ALWAYS so it won't be
+-	* use for PCI IRQs.
+-	*/
+-	if (irq == acpi_gbl_FADT.sci_interrupt) {
+-		u32 type = irq_get_trigger_type(irq) & IRQ_TYPE_SENSE_MASK;
+-
+-		if (type != IRQ_TYPE_LEVEL_LOW)
+-			penalty += PIRQ_PENALTY_ISA_ALWAYS;
+-		else
+-			penalty += PIRQ_PENALTY_PCI_USING;
+-	}
++	if (irq == sci_irq)
++		penalty += sci_penalty;
+ 
+ 	if (irq < ACPI_MAX_ISA_IRQS)
+ 		return penalty + acpi_isa_irq_penalty[irq];
+ 
+-	penalty += acpi_irq_pci_sharing_penalty(irq);
+-	return penalty;
++	return penalty + acpi_irq_pci_sharing_penalty(irq);
+ }
+ 
+ int __init acpi_irq_penalty_init(void)
+@@ -619,6 +608,10 @@ static int acpi_pci_link_allocate(struct acpi_pci_link *link)
+ 			    acpi_device_bid(link->device));
+ 		return -ENODEV;
+ 	} else {
++		if (link->irq.active < ACPI_MAX_ISA_IRQS)
++			acpi_isa_irq_penalty[link->irq.active] +=
++				PIRQ_PENALTY_PCI_USING;
++
+ 		printk(KERN_WARNING PREFIX "%s [%s] enabled at IRQ %d\n",
+ 		       acpi_device_name(link->device),
+ 		       acpi_device_bid(link->device), link->irq.active);
+@@ -849,7 +842,7 @@ static int __init acpi_irq_penalty_update(char *str, int used)
+ 			continue;
+ 
+ 		if (used)
+-			new_penalty = acpi_irq_get_penalty(irq) +
++			new_penalty = acpi_isa_irq_penalty[irq] +
+ 					PIRQ_PENALTY_ISA_USED;
+ 		else
+ 			new_penalty = 0;
+@@ -871,7 +864,7 @@ static int __init acpi_irq_penalty_update(char *str, int used)
+ void acpi_penalize_isa_irq(int irq, int active)
+ {
+ 	if ((irq >= 0) && (irq < ARRAY_SIZE(acpi_isa_irq_penalty)))
+-		acpi_isa_irq_penalty[irq] = acpi_irq_get_penalty(irq) +
++		acpi_isa_irq_penalty[irq] +=
+ 		  (active ? PIRQ_PENALTY_ISA_USED : PIRQ_PENALTY_PCI_USING);
+ }
+ 
+@@ -881,6 +874,17 @@ bool acpi_isa_irq_available(int irq)
+ 		    acpi_irq_get_penalty(irq) < PIRQ_PENALTY_ISA_ALWAYS);
+ }
+ 
++void acpi_penalize_sci_irq(int irq, int trigger, int polarity)
++{
++	sci_irq = irq;
++
++	if (trigger == ACPI_MADT_TRIGGER_LEVEL &&
++	    polarity == ACPI_MADT_POLARITY_ACTIVE_LOW)
++		sci_penalty = PIRQ_PENALTY_PCI_USING;
++	else
++		sci_penalty = PIRQ_PENALTY_ISA_ALWAYS;
++}
++
+ /*
+  * Over-ride default table to reserve additional IRQs for use by ISA
+  * e.g. acpi_irq_isa=5
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 100be556e613..83482721bc01 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -1871,7 +1871,7 @@ int drbd_send(struct drbd_connection *connection, struct socket *sock,
+ 		drbd_update_congested(connection);
+ 	}
+ 	do {
+-		rv = kernel_sendmsg(sock, &msg, &iov, 1, size);
++		rv = kernel_sendmsg(sock, &msg, &iov, 1, iov.iov_len);
+ 		if (rv == -EAGAIN) {
+ 			if (we_should_drop_the_connection(connection, sock))
+ 				break;
+diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
+index 44311296ec02..0f7d28a98b9a 100644
+--- a/drivers/char/agp/intel-gtt.c
++++ b/drivers/char/agp/intel-gtt.c
+@@ -845,6 +845,8 @@ void intel_gtt_insert_page(dma_addr_t addr,
+ 			   unsigned int flags)
+ {
+ 	intel_private.driver->write_entry(addr, pg, flags);
++	if (intel_private.driver->chipset_flush)
++		intel_private.driver->chipset_flush();
+ }
+ EXPORT_SYMBOL(intel_gtt_insert_page);
+ 
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index 9203f2d130c0..340f96e44642 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -84,14 +84,14 @@ static size_t rng_buffer_size(void)
+ 
+ static void add_early_randomness(struct hwrng *rng)
+ {
+-	unsigned char bytes[16];
+ 	int bytes_read;
++	size_t size = min_t(size_t, 16, rng_buffer_size());
+ 
+ 	mutex_lock(&reading_mutex);
+-	bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 1);
++	bytes_read = rng_get_data(rng, rng_buffer, size, 1);
+ 	mutex_unlock(&reading_mutex);
+ 	if (bytes_read > 0)
+-		add_device_randomness(bytes, bytes_read);
++		add_device_randomness(rng_buffer, bytes_read);
+ }
+ 
+ static inline void cleanup_rng(struct kref *kref)
+diff --git a/drivers/clk/clk-qoriq.c b/drivers/clk/clk-qoriq.c
+index 20b105584f82..80ae2a51452d 100644
+--- a/drivers/clk/clk-qoriq.c
++++ b/drivers/clk/clk-qoriq.c
+@@ -700,6 +700,7 @@ static struct clk * __init create_mux_common(struct clockgen *cg,
+ 					     struct mux_hwclock *hwc,
+ 					     const struct clk_ops *ops,
+ 					     unsigned long min_rate,
++					     unsigned long max_rate,
+ 					     unsigned long pct80_rate,
+ 					     const char *fmt, int idx)
+ {
+@@ -728,6 +729,8 @@ static struct clk * __init create_mux_common(struct clockgen *cg,
+ 			continue;
+ 		if (rate < min_rate)
+ 			continue;
++		if (rate > max_rate)
++			continue;
+ 
+ 		parent_names[j] = div->name;
+ 		hwc->parent_to_clksel[j] = i;
+@@ -759,7 +762,7 @@ static struct clk * __init create_one_cmux(struct clockgen *cg, int idx)
+ 	struct mux_hwclock *hwc;
+ 	const struct clockgen_pll_div *div;
+ 	unsigned long plat_rate, min_rate;
+-	u64 pct80_rate;
++	u64 max_rate, pct80_rate;
+ 	u32 clksel;
+ 
+ 	hwc = kzalloc(sizeof(*hwc), GFP_KERNEL);
+@@ -787,8 +790,8 @@ static struct clk * __init create_one_cmux(struct clockgen *cg, int idx)
+ 		return NULL;
+ 	}
+ 
+-	pct80_rate = clk_get_rate(div->clk);
+-	pct80_rate *= 8;
++	max_rate = clk_get_rate(div->clk);
++	pct80_rate = max_rate * 8;
+ 	do_div(pct80_rate, 10);
+ 
+ 	plat_rate = clk_get_rate(cg->pll[PLATFORM_PLL].div[PLL_DIV1].clk);
+@@ -798,7 +801,7 @@ static struct clk * __init create_one_cmux(struct clockgen *cg, int idx)
+ 	else
+ 		min_rate = plat_rate / 2;
+ 
+-	return create_mux_common(cg, hwc, &cmux_ops, min_rate,
++	return create_mux_common(cg, hwc, &cmux_ops, min_rate, max_rate,
+ 				 pct80_rate, "cg-cmux%d", idx);
+ }
+ 
+@@ -813,7 +816,7 @@ static struct clk * __init create_one_hwaccel(struct clockgen *cg, int idx)
+ 	hwc->reg = cg->regs + 0x20 * idx + 0x10;
+ 	hwc->info = cg->info.hwaccel[idx];
+ 
+-	return create_mux_common(cg, hwc, &hwaccel_ops, 0, 0,
++	return create_mux_common(cg, hwc, &hwaccel_ops, 0, ULONG_MAX, 0,
+ 				 "cg-hwaccel%d", idx);
+ }
+ 
+diff --git a/drivers/clk/samsung/clk-exynos-audss.c b/drivers/clk/samsung/clk-exynos-audss.c
+index bdf8b971f332..0fa91f37879b 100644
+--- a/drivers/clk/samsung/clk-exynos-audss.c
++++ b/drivers/clk/samsung/clk-exynos-audss.c
+@@ -82,6 +82,7 @@ static const struct of_device_id exynos_audss_clk_of_match[] = {
+ 	  .data = (void *)TYPE_EXYNOS5420, },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, exynos_audss_clk_of_match);
+ 
+ static void exynos_audss_clk_teardown(void)
+ {
+diff --git a/drivers/clocksource/timer-sun5i.c b/drivers/clocksource/timer-sun5i.c
+index c184eb84101e..4f87f3e76d83 100644
+--- a/drivers/clocksource/timer-sun5i.c
++++ b/drivers/clocksource/timer-sun5i.c
+@@ -152,6 +152,13 @@ static irqreturn_t sun5i_timer_interrupt(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
++static cycle_t sun5i_clksrc_read(struct clocksource *clksrc)
++{
++	struct sun5i_timer_clksrc *cs = to_sun5i_timer_clksrc(clksrc);
++
++	return ~readl(cs->timer.base + TIMER_CNTVAL_LO_REG(1));
++}
++
+ static int sun5i_rate_cb_clksrc(struct notifier_block *nb,
+ 				unsigned long event, void *data)
+ {
+@@ -210,8 +217,13 @@ static int __init sun5i_setup_clocksource(struct device_node *node,
+ 	writel(TIMER_CTL_ENABLE | TIMER_CTL_RELOAD,
+ 	       base + TIMER_CTL_REG(1));
+ 
+-	ret = clocksource_mmio_init(base + TIMER_CNTVAL_LO_REG(1), node->name,
+-				    rate, 340, 32, clocksource_mmio_readl_down);
++	cs->clksrc.name = node->name;
++	cs->clksrc.rating = 340;
++	cs->clksrc.read = sun5i_clksrc_read;
++	cs->clksrc.mask = CLOCKSOURCE_MASK(32);
++	cs->clksrc.flags = CLOCK_SOURCE_IS_CONTINUOUS;
++
++	ret = clocksource_register_hz(&cs->clksrc, rate);
+ 	if (ret) {
+ 		pr_err("Couldn't register clock source.\n");
+ 		goto err_remove_notifier;
+diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
+index cd5dc27320a2..1ed6132b993c 100644
+--- a/drivers/gpio/gpio-mvebu.c
++++ b/drivers/gpio/gpio-mvebu.c
+@@ -293,10 +293,10 @@ static void mvebu_gpio_irq_ack(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
+-	u32 mask = ~(1 << (d->irq - gc->irq_base));
++	u32 mask = d->mask;
+ 
+ 	irq_gc_lock(gc);
+-	writel_relaxed(mask, mvebu_gpioreg_edge_cause(mvchip));
++	writel_relaxed(~mask, mvebu_gpioreg_edge_cause(mvchip));
+ 	irq_gc_unlock(gc);
+ }
+ 
+@@ -305,7 +305,7 @@ static void mvebu_gpio_edge_irq_mask(struct irq_data *d)
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
+ 	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+-	u32 mask = 1 << (d->irq - gc->irq_base);
++	u32 mask = d->mask;
+ 
+ 	irq_gc_lock(gc);
+ 	ct->mask_cache_priv &= ~mask;
+@@ -319,8 +319,7 @@ static void mvebu_gpio_edge_irq_unmask(struct irq_data *d)
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
+ 	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+-
+-	u32 mask = 1 << (d->irq - gc->irq_base);
++	u32 mask = d->mask;
+ 
+ 	irq_gc_lock(gc);
+ 	ct->mask_cache_priv |= mask;
+@@ -333,8 +332,7 @@ static void mvebu_gpio_level_irq_mask(struct irq_data *d)
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
+ 	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+-
+-	u32 mask = 1 << (d->irq - gc->irq_base);
++	u32 mask = d->mask;
+ 
+ 	irq_gc_lock(gc);
+ 	ct->mask_cache_priv &= ~mask;
+@@ -347,8 +345,7 @@ static void mvebu_gpio_level_irq_unmask(struct irq_data *d)
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
+ 	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+-
+-	u32 mask = 1 << (d->irq - gc->irq_base);
++	u32 mask = d->mask;
+ 
+ 	irq_gc_lock(gc);
+ 	ct->mask_cache_priv |= mask;
+@@ -462,7 +459,7 @@ static void mvebu_gpio_irq_handler(struct irq_desc *desc)
+ 	for (i = 0; i < mvchip->chip.ngpio; i++) {
+ 		int irq;
+ 
+-		irq = mvchip->irqbase + i;
++		irq = irq_find_mapping(mvchip->domain, i);
+ 
+ 		if (!(cause & (1 << i)))
+ 			continue;
+@@ -655,6 +652,7 @@ static int mvebu_gpio_probe(struct platform_device *pdev)
+ 	struct irq_chip_type *ct;
+ 	struct clk *clk;
+ 	unsigned int ngpios;
++	bool have_irqs;
+ 	int soc_variant;
+ 	int i, cpu, id;
+ 	int err;
+@@ -665,6 +663,9 @@ static int mvebu_gpio_probe(struct platform_device *pdev)
+ 	else
+ 		soc_variant = MVEBU_GPIO_SOC_VARIANT_ORION;
+ 
++	/* Some gpio controllers do not provide irq support */
++	have_irqs = of_irq_count(np) != 0;
++
+ 	mvchip = devm_kzalloc(&pdev->dev, sizeof(struct mvebu_gpio_chip),
+ 			      GFP_KERNEL);
+ 	if (!mvchip)
+@@ -697,7 +698,8 @@ static int mvebu_gpio_probe(struct platform_device *pdev)
+ 	mvchip->chip.get = mvebu_gpio_get;
+ 	mvchip->chip.direction_output = mvebu_gpio_direction_output;
+ 	mvchip->chip.set = mvebu_gpio_set;
+-	mvchip->chip.to_irq = mvebu_gpio_to_irq;
++	if (have_irqs)
++		mvchip->chip.to_irq = mvebu_gpio_to_irq;
+ 	mvchip->chip.base = id * MVEBU_MAX_GPIO_PER_BANK;
+ 	mvchip->chip.ngpio = ngpios;
+ 	mvchip->chip.can_sleep = false;
+@@ -758,34 +760,30 @@ static int mvebu_gpio_probe(struct platform_device *pdev)
+ 	devm_gpiochip_add_data(&pdev->dev, &mvchip->chip, mvchip);
+ 
+ 	/* Some gpio controllers do not provide irq support */
+-	if (!of_irq_count(np))
++	if (!have_irqs)
+ 		return 0;
+ 
+-	/* Setup the interrupt handlers. Each chip can have up to 4
+-	 * interrupt handlers, with each handler dealing with 8 GPIO
+-	 * pins. */
+-	for (i = 0; i < 4; i++) {
+-		int irq = platform_get_irq(pdev, i);
+-
+-		if (irq < 0)
+-			continue;
+-		irq_set_chained_handler_and_data(irq, mvebu_gpio_irq_handler,
+-						 mvchip);
+-	}
+-
+-	mvchip->irqbase = irq_alloc_descs(-1, 0, ngpios, -1);
+-	if (mvchip->irqbase < 0) {
+-		dev_err(&pdev->dev, "no irqs\n");
+-		return mvchip->irqbase;
++	mvchip->domain =
++	    irq_domain_add_linear(np, ngpios, &irq_generic_chip_ops, NULL);
++	if (!mvchip->domain) {
++		dev_err(&pdev->dev, "couldn't allocate irq domain %s (DT).\n",
++			mvchip->chip.label);
++		return -ENODEV;
+ 	}
+ 
+-	gc = irq_alloc_generic_chip("mvebu_gpio_irq", 2, mvchip->irqbase,
+-				    mvchip->membase, handle_level_irq);
+-	if (!gc) {
+-		dev_err(&pdev->dev, "Cannot allocate generic irq_chip\n");
+-		return -ENOMEM;
++	err = irq_alloc_domain_generic_chips(
++	    mvchip->domain, ngpios, 2, np->name, handle_level_irq,
++	    IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_LEVEL, 0, 0);
++	if (err) {
++		dev_err(&pdev->dev, "couldn't allocate irq chips %s (DT).\n",
++			mvchip->chip.label);
++		goto err_domain;
+ 	}
+ 
++	/* NOTE: The common accessors cannot be used because of the percpu
++	 * access to the mask registers
++	 */
++	gc = irq_get_domain_generic_chip(mvchip->domain, 0);
+ 	gc->private = mvchip;
+ 	ct = &gc->chip_types[0];
+ 	ct->type = IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW;
+@@ -803,27 +801,23 @@ static int mvebu_gpio_probe(struct platform_device *pdev)
+ 	ct->handler = handle_edge_irq;
+ 	ct->chip.name = mvchip->chip.label;
+ 
+-	irq_setup_generic_chip(gc, IRQ_MSK(ngpios), 0,
+-			       IRQ_NOREQUEST, IRQ_LEVEL | IRQ_NOPROBE);
++	/* Setup the interrupt handlers. Each chip can have up to 4
++	 * interrupt handlers, with each handler dealing with 8 GPIO
++	 * pins.
++	 */
++	for (i = 0; i < 4; i++) {
++		int irq = platform_get_irq(pdev, i);
+ 
+-	/* Setup irq domain on top of the generic chip. */
+-	mvchip->domain = irq_domain_add_simple(np, mvchip->chip.ngpio,
+-					       mvchip->irqbase,
+-					       &irq_domain_simple_ops,
+-					       mvchip);
+-	if (!mvchip->domain) {
+-		dev_err(&pdev->dev, "couldn't allocate irq domain %s (DT).\n",
+-			mvchip->chip.label);
+-		err = -ENODEV;
+-		goto err_generic_chip;
++		if (irq < 0)
++			continue;
++		irq_set_chained_handler_and_data(irq, mvebu_gpio_irq_handler,
++						 mvchip);
+ 	}
+ 
+ 	return 0;
+ 
+-err_generic_chip:
+-	irq_remove_generic_chip(gc, IRQ_MSK(ngpios), IRQ_NOREQUEST,
+-				IRQ_LEVEL | IRQ_NOPROBE);
+-	kfree(gc);
++err_domain:
++	irq_domain_remove(mvchip->domain);
+ 
+ 	return err;
+ }
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index a28feb3edf33..e3fc90130855 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -26,14 +26,18 @@
+ 
+ #include "gpiolib.h"
+ 
+-static int of_gpiochip_match_node(struct gpio_chip *chip, void *data)
++static int of_gpiochip_match_node_and_xlate(struct gpio_chip *chip, void *data)
+ {
+-	return chip->gpiodev->dev.of_node == data;
++	struct of_phandle_args *gpiospec = data;
++
++	return chip->gpiodev->dev.of_node == gpiospec->np &&
++				chip->of_xlate(chip, gpiospec, NULL) >= 0;
+ }
+ 
+-static struct gpio_chip *of_find_gpiochip_by_node(struct device_node *np)
++static struct gpio_chip *of_find_gpiochip_by_xlate(
++					struct of_phandle_args *gpiospec)
+ {
+-	return gpiochip_find(np, of_gpiochip_match_node);
++	return gpiochip_find(gpiospec, of_gpiochip_match_node_and_xlate);
+ }
+ 
+ static struct gpio_desc *of_xlate_and_get_gpiod_flags(struct gpio_chip *chip,
+@@ -79,7 +83,7 @@ struct gpio_desc *of_get_named_gpiod_flags(struct device_node *np,
+ 		return ERR_PTR(ret);
+ 	}
+ 
+-	chip = of_find_gpiochip_by_node(gpiospec.np);
++	chip = of_find_gpiochip_by_xlate(&gpiospec);
+ 	if (!chip) {
+ 		desc = ERR_PTR(-EPROBE_DEFER);
+ 		goto out;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+index 892d60fb225b..2057683f7b59 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+@@ -395,9 +395,12 @@ static int acp_hw_fini(void *handle)
+ {
+ 	int i, ret;
+ 	struct device *dev;
+-
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	/* return early if no ACP */
++	if (!adev->acp.acp_genpd)
++		return 0;
++
+ 	for (i = 0; i < ACP_DEVS ; i++) {
+ 		dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
+ 		ret = pm_genpd_remove_device(&adev->acp.acp_genpd->gpd, dev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 9aa533cf4ad1..414a1600da54 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -605,6 +605,7 @@ static int __init amdgpu_init(void)
+ {
+ 	amdgpu_sync_init();
+ 	amdgpu_fence_slab_init();
++	amd_sched_fence_slab_init();
+ 	if (vgacon_text_force()) {
+ 		DRM_ERROR("VGACON disables amdgpu kernel modesetting.\n");
+ 		return -EINVAL;
+@@ -624,6 +625,7 @@ static void __exit amdgpu_exit(void)
+ 	drm_pci_exit(driver, pdriver);
+ 	amdgpu_unregister_atpx_handler();
+ 	amdgpu_sync_fini();
++	amd_sched_fence_slab_fini();
+ 	amdgpu_fence_slab_fini();
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+index 0b109aebfec6..c82b95b838d0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+@@ -68,6 +68,7 @@ int amdgpu_fence_slab_init(void)
+ 
+ void amdgpu_fence_slab_fini(void)
+ {
++	rcu_barrier();
+ 	kmem_cache_destroy(amdgpu_fence_slab);
+ }
+ /*
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index e24a8af72d90..1ed64aedb2fe 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -99,6 +99,8 @@ int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
+ 
+ 	if ((amdgpu_runtime_pm != 0) &&
+ 	    amdgpu_has_atpx() &&
++	    (amdgpu_is_atpx_hybrid() ||
++	     amdgpu_has_atpx_dgpu_power_cntl()) &&
+ 	    ((flags & AMD_IS_APU) == 0))
+ 		flags |= AMD_IS_PX;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 80120fa4092c..e86ca392a08c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -1654,5 +1654,6 @@ void amdgpu_vm_manager_fini(struct amdgpu_device *adev)
+ 		fence_put(adev->vm_manager.ids[i].first);
+ 		amdgpu_sync_free(&adev->vm_manager.ids[i].active);
+ 		fence_put(id->flushed_updates);
++		fence_put(id->last_flush);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
+index 963a24d46a93..ffe1f85ce300 100644
+--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
++++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
+@@ -34,9 +34,6 @@ static bool amd_sched_entity_is_ready(struct amd_sched_entity *entity);
+ static void amd_sched_wakeup(struct amd_gpu_scheduler *sched);
+ static void amd_sched_process_job(struct fence *f, struct fence_cb *cb);
+ 
+-struct kmem_cache *sched_fence_slab;
+-atomic_t sched_fence_slab_ref = ATOMIC_INIT(0);
+-
+ /* Initialize a given run queue struct */
+ static void amd_sched_rq_init(struct amd_sched_rq *rq)
+ {
+@@ -618,13 +615,6 @@ int amd_sched_init(struct amd_gpu_scheduler *sched,
+ 	INIT_LIST_HEAD(&sched->ring_mirror_list);
+ 	spin_lock_init(&sched->job_list_lock);
+ 	atomic_set(&sched->hw_rq_count, 0);
+-	if (atomic_inc_return(&sched_fence_slab_ref) == 1) {
+-		sched_fence_slab = kmem_cache_create(
+-			"amd_sched_fence", sizeof(struct amd_sched_fence), 0,
+-			SLAB_HWCACHE_ALIGN, NULL);
+-		if (!sched_fence_slab)
+-			return -ENOMEM;
+-	}
+ 
+ 	/* Each scheduler will run on a seperate kernel thread */
+ 	sched->thread = kthread_run(amd_sched_main, sched, sched->name);
+@@ -645,6 +635,4 @@ void amd_sched_fini(struct amd_gpu_scheduler *sched)
+ {
+ 	if (sched->thread)
+ 		kthread_stop(sched->thread);
+-	if (atomic_dec_and_test(&sched_fence_slab_ref))
+-		kmem_cache_destroy(sched_fence_slab);
+ }
+diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
+index 7cbbbfb502ef..51068e6c3d9a 100644
+--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
++++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
+@@ -30,9 +30,6 @@
+ struct amd_gpu_scheduler;
+ struct amd_sched_rq;
+ 
+-extern struct kmem_cache *sched_fence_slab;
+-extern atomic_t sched_fence_slab_ref;
+-
+ /**
+  * A scheduler entity is a wrapper around a job queue or a group
+  * of other entities. Entities take turns emitting jobs from their
+@@ -145,6 +142,9 @@ void amd_sched_entity_fini(struct amd_gpu_scheduler *sched,
+ 			   struct amd_sched_entity *entity);
+ void amd_sched_entity_push_job(struct amd_sched_job *sched_job);
+ 
++int amd_sched_fence_slab_init(void);
++void amd_sched_fence_slab_fini(void);
++
+ struct amd_sched_fence *amd_sched_fence_create(
+ 	struct amd_sched_entity *s_entity, void *owner);
+ void amd_sched_fence_scheduled(struct amd_sched_fence *fence);
+diff --git a/drivers/gpu/drm/amd/scheduler/sched_fence.c b/drivers/gpu/drm/amd/scheduler/sched_fence.c
+index 6b63beaf7574..93ad2e1f8f57 100644
+--- a/drivers/gpu/drm/amd/scheduler/sched_fence.c
++++ b/drivers/gpu/drm/amd/scheduler/sched_fence.c
+@@ -27,6 +27,25 @@
+ #include <drm/drmP.h>
+ #include "gpu_scheduler.h"
+ 
++static struct kmem_cache *sched_fence_slab;
++
++int amd_sched_fence_slab_init(void)
++{
++	sched_fence_slab = kmem_cache_create(
++		"amd_sched_fence", sizeof(struct amd_sched_fence), 0,
++		SLAB_HWCACHE_ALIGN, NULL);
++	if (!sched_fence_slab)
++		return -ENOMEM;
++
++	return 0;
++}
++
++void amd_sched_fence_slab_fini(void)
++{
++	rcu_barrier();
++	kmem_cache_destroy(sched_fence_slab);
++}
++
+ struct amd_sched_fence *amd_sched_fence_create(struct amd_sched_entity *entity,
+ 					       void *owner)
+ {
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 5de36d8dcc68..d46fa2206722 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -1490,8 +1490,6 @@ static int i915_drm_suspend(struct drm_device *dev)
+ 
+ 	dev_priv->suspend_count++;
+ 
+-	intel_display_set_init_power(dev_priv, false);
+-
+ 	intel_csr_ucode_suspend(dev_priv);
+ 
+ out:
+@@ -1508,6 +1506,8 @@ static int i915_drm_suspend_late(struct drm_device *drm_dev, bool hibernation)
+ 
+ 	disable_rpm_wakeref_asserts(dev_priv);
+ 
++	intel_display_set_init_power(dev_priv, false);
++
+ 	fw_csr = !IS_BROXTON(dev_priv) &&
+ 		suspend_to_idle(dev_priv) && dev_priv->csr.dmc_payload;
+ 	/*
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 63462f279187..e26f88965c58 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -9737,6 +9737,29 @@ static void bxt_modeset_commit_cdclk(struct drm_atomic_state *old_state)
+ 	bxt_set_cdclk(to_i915(dev), req_cdclk);
+ }
+ 
++static int bdw_adjust_min_pipe_pixel_rate(struct intel_crtc_state *crtc_state,
++					  int pixel_rate)
++{
++	struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
++
++	/* pixel rate mustn't exceed 95% of cdclk with IPS on BDW */
++	if (IS_BROADWELL(dev_priv) && crtc_state->ips_enabled)
++		pixel_rate = DIV_ROUND_UP(pixel_rate * 100, 95);
++
++	/* BSpec says "Do not use DisplayPort with CDCLK less than
++	 * 432 MHz, audio enabled, port width x4, and link rate
++	 * HBR2 (5.4 GHz), or else there may be audio corruption or
++	 * screen corruption."
++	 */
++	if (intel_crtc_has_dp_encoder(crtc_state) &&
++	    crtc_state->has_audio &&
++	    crtc_state->port_clock >= 540000 &&
++	    crtc_state->lane_count == 4)
++		pixel_rate = max(432000, pixel_rate);
++
++	return pixel_rate;
++}
++
+ /* compute the max rate for new configuration */
+ static int ilk_max_pixel_rate(struct drm_atomic_state *state)
+ {
+@@ -9762,9 +9785,9 @@ static int ilk_max_pixel_rate(struct drm_atomic_state *state)
+ 
+ 		pixel_rate = ilk_pipe_pixel_rate(crtc_state);
+ 
+-		/* pixel rate mustn't exceed 95% of cdclk with IPS on BDW */
+-		if (IS_BROADWELL(dev_priv) && crtc_state->ips_enabled)
+-			pixel_rate = DIV_ROUND_UP(pixel_rate * 100, 95);
++		if (IS_BROADWELL(dev_priv) || IS_GEN9(dev_priv))
++			pixel_rate = bdw_adjust_min_pipe_pixel_rate(crtc_state,
++								    pixel_rate);
+ 
+ 		intel_state->min_pixclk[i] = pixel_rate;
+ 	}
+diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
+index c3aa9e670d15..1421270071d2 100644
+--- a/drivers/gpu/drm/i915/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/intel_hdmi.c
+@@ -1759,6 +1759,50 @@ intel_hdmi_add_properties(struct intel_hdmi *intel_hdmi, struct drm_connector *c
+ 	intel_hdmi->aspect_ratio = HDMI_PICTURE_ASPECT_NONE;
+ }
+ 
++static u8 intel_hdmi_ddc_pin(struct drm_i915_private *dev_priv,
++			     enum port port)
++{
++	const struct ddi_vbt_port_info *info =
++		&dev_priv->vbt.ddi_port_info[port];
++	u8 ddc_pin;
++
++	if (info->alternate_ddc_pin) {
++		DRM_DEBUG_KMS("Using DDC pin 0x%x for port %c (VBT)\n",
++			      info->alternate_ddc_pin, port_name(port));
++		return info->alternate_ddc_pin;
++	}
++
++	switch (port) {
++	case PORT_B:
++		if (IS_BROXTON(dev_priv))
++			ddc_pin = GMBUS_PIN_1_BXT;
++		else
++			ddc_pin = GMBUS_PIN_DPB;
++		break;
++	case PORT_C:
++		if (IS_BROXTON(dev_priv))
++			ddc_pin = GMBUS_PIN_2_BXT;
++		else
++			ddc_pin = GMBUS_PIN_DPC;
++		break;
++	case PORT_D:
++		if (IS_CHERRYVIEW(dev_priv))
++			ddc_pin = GMBUS_PIN_DPD_CHV;
++		else
++			ddc_pin = GMBUS_PIN_DPD;
++		break;
++	default:
++		MISSING_CASE(port);
++		ddc_pin = GMBUS_PIN_DPB;
++		break;
++	}
++
++	DRM_DEBUG_KMS("Using DDC pin 0x%x for port %c (platform default)\n",
++		      ddc_pin, port_name(port));
++
++	return ddc_pin;
++}
++
+ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,
+ 			       struct intel_connector *intel_connector)
+ {
+@@ -1768,7 +1812,6 @@ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,
+ 	struct drm_device *dev = intel_encoder->base.dev;
+ 	struct drm_i915_private *dev_priv = to_i915(dev);
+ 	enum port port = intel_dig_port->port;
+-	uint8_t alternate_ddc_pin;
+ 
+ 	DRM_DEBUG_KMS("Adding HDMI connector on port %c\n",
+ 		      port_name(port));
+@@ -1786,12 +1829,10 @@ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,
+ 	connector->doublescan_allowed = 0;
+ 	connector->stereo_allowed = 1;
+ 
++	intel_hdmi->ddc_bus = intel_hdmi_ddc_pin(dev_priv, port);
++
+ 	switch (port) {
+ 	case PORT_B:
+-		if (IS_BROXTON(dev_priv))
+-			intel_hdmi->ddc_bus = GMBUS_PIN_1_BXT;
+-		else
+-			intel_hdmi->ddc_bus = GMBUS_PIN_DPB;
+ 		/*
+ 		 * On BXT A0/A1, sw needs to activate DDIA HPD logic and
+ 		 * interrupts to check the external panel connection.
+@@ -1802,46 +1843,17 @@ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,
+ 			intel_encoder->hpd_pin = HPD_PORT_B;
+ 		break;
+ 	case PORT_C:
+-		if (IS_BROXTON(dev_priv))
+-			intel_hdmi->ddc_bus = GMBUS_PIN_2_BXT;
+-		else
+-			intel_hdmi->ddc_bus = GMBUS_PIN_DPC;
+ 		intel_encoder->hpd_pin = HPD_PORT_C;
+ 		break;
+ 	case PORT_D:
+-		if (WARN_ON(IS_BROXTON(dev_priv)))
+-			intel_hdmi->ddc_bus = GMBUS_PIN_DISABLED;
+-		else if (IS_CHERRYVIEW(dev_priv))
+-			intel_hdmi->ddc_bus = GMBUS_PIN_DPD_CHV;
+-		else
+-			intel_hdmi->ddc_bus = GMBUS_PIN_DPD;
+ 		intel_encoder->hpd_pin = HPD_PORT_D;
+ 		break;
+ 	case PORT_E:
+-		/* On SKL PORT E doesn't have seperate GMBUS pin
+-		 *  We rely on VBT to set a proper alternate GMBUS pin. */
+-		alternate_ddc_pin =
+-			dev_priv->vbt.ddi_port_info[PORT_E].alternate_ddc_pin;
+-		switch (alternate_ddc_pin) {
+-		case DDC_PIN_B:
+-			intel_hdmi->ddc_bus = GMBUS_PIN_DPB;
+-			break;
+-		case DDC_PIN_C:
+-			intel_hdmi->ddc_bus = GMBUS_PIN_DPC;
+-			break;
+-		case DDC_PIN_D:
+-			intel_hdmi->ddc_bus = GMBUS_PIN_DPD;
+-			break;
+-		default:
+-			MISSING_CASE(alternate_ddc_pin);
+-		}
+ 		intel_encoder->hpd_pin = HPD_PORT_E;
+ 		break;
+-	case PORT_A:
+-		intel_encoder->hpd_pin = HPD_PORT_A;
+-		/* Internal port only for eDP. */
+ 	default:
+-		BUG();
++		MISSING_CASE(port);
++		return;
+ 	}
+ 
+ 	if (IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) {
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index 554ca7115f98..edd2d0396290 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -104,6 +104,14 @@ static const char radeon_family_name[][16] = {
+ 	"LAST",
+ };
+ 
++#if defined(CONFIG_VGA_SWITCHEROO)
++bool radeon_has_atpx_dgpu_power_cntl(void);
++bool radeon_is_atpx_hybrid(void);
++#else
++static inline bool radeon_has_atpx_dgpu_power_cntl(void) { return false; }
++static inline bool radeon_is_atpx_hybrid(void) { return false; }
++#endif
++
+ #define RADEON_PX_QUIRK_DISABLE_PX  (1 << 0)
+ #define RADEON_PX_QUIRK_LONG_WAKEUP (1 << 1)
+ 
+@@ -160,6 +168,11 @@ static void radeon_device_handle_px_quirks(struct radeon_device *rdev)
+ 
+ 	if (rdev->px_quirk_flags & RADEON_PX_QUIRK_DISABLE_PX)
+ 		rdev->flags &= ~RADEON_IS_PX;
++
++	/* disable PX is the system doesn't support dGPU power control or hybrid gfx */
++	if (!radeon_is_atpx_hybrid() &&
++	    !radeon_has_atpx_dgpu_power_cntl())
++		rdev->flags &= ~RADEON_IS_PX;
+ }
+ 
+ /**
+diff --git a/drivers/iio/accel/st_accel_core.c b/drivers/iio/accel/st_accel_core.c
+index da3fb069ec5c..ce69048c88e9 100644
+--- a/drivers/iio/accel/st_accel_core.c
++++ b/drivers/iio/accel/st_accel_core.c
+@@ -743,8 +743,8 @@ static int st_accel_read_raw(struct iio_dev *indio_dev,
+ 
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_SCALE:
+-		*val = 0;
+-		*val2 = adata->current_fullscale->gain;
++		*val = adata->current_fullscale->gain / 1000000;
++		*val2 = adata->current_fullscale->gain % 1000000;
+ 		return IIO_VAL_INT_PLUS_MICRO;
+ 	case IIO_CHAN_INFO_SAMP_FREQ:
+ 		*val = adata->odr;
+@@ -763,9 +763,13 @@ static int st_accel_write_raw(struct iio_dev *indio_dev,
+ 	int err;
+ 
+ 	switch (mask) {
+-	case IIO_CHAN_INFO_SCALE:
+-		err = st_sensors_set_fullscale_by_gain(indio_dev, val2);
++	case IIO_CHAN_INFO_SCALE: {
++		int gain;
++
++		gain = val * 1000000 + val2;
++		err = st_sensors_set_fullscale_by_gain(indio_dev, gain);
+ 		break;
++	}
+ 	case IIO_CHAN_INFO_SAMP_FREQ:
+ 		if (val2)
+ 			return -EINVAL;
+diff --git a/drivers/iio/common/hid-sensors/hid-sensor-attributes.c b/drivers/iio/common/hid-sensors/hid-sensor-attributes.c
+index dc33c1dd5191..b5beea53d6f6 100644
+--- a/drivers/iio/common/hid-sensors/hid-sensor-attributes.c
++++ b/drivers/iio/common/hid-sensors/hid-sensor-attributes.c
+@@ -30,26 +30,26 @@ static struct {
+ 	u32 usage_id;
+ 	int unit; /* 0 for default others from HID sensor spec */
+ 	int scale_val0; /* scale, whole number */
+-	int scale_val1; /* scale, fraction in micros */
++	int scale_val1; /* scale, fraction in nanos */
+ } unit_conversion[] = {
+-	{HID_USAGE_SENSOR_ACCEL_3D, 0, 9, 806650},
++	{HID_USAGE_SENSOR_ACCEL_3D, 0, 9, 806650000},
+ 	{HID_USAGE_SENSOR_ACCEL_3D,
+ 		HID_USAGE_SENSOR_UNITS_METERS_PER_SEC_SQRD, 1, 0},
+ 	{HID_USAGE_SENSOR_ACCEL_3D,
+-		HID_USAGE_SENSOR_UNITS_G, 9, 806650},
++		HID_USAGE_SENSOR_UNITS_G, 9, 806650000},
+ 
+-	{HID_USAGE_SENSOR_GYRO_3D, 0, 0, 17453},
++	{HID_USAGE_SENSOR_GYRO_3D, 0, 0, 17453293},
+ 	{HID_USAGE_SENSOR_GYRO_3D,
+ 		HID_USAGE_SENSOR_UNITS_RADIANS_PER_SECOND, 1, 0},
+ 	{HID_USAGE_SENSOR_GYRO_3D,
+-		HID_USAGE_SENSOR_UNITS_DEGREES_PER_SECOND, 0, 17453},
++		HID_USAGE_SENSOR_UNITS_DEGREES_PER_SECOND, 0, 17453293},
+ 
+-	{HID_USAGE_SENSOR_COMPASS_3D, 0, 0, 1000},
++	{HID_USAGE_SENSOR_COMPASS_3D, 0, 0, 1000000},
+ 	{HID_USAGE_SENSOR_COMPASS_3D, HID_USAGE_SENSOR_UNITS_GAUSS, 1, 0},
+ 
+-	{HID_USAGE_SENSOR_INCLINOMETER_3D, 0, 0, 17453},
++	{HID_USAGE_SENSOR_INCLINOMETER_3D, 0, 0, 17453293},
+ 	{HID_USAGE_SENSOR_INCLINOMETER_3D,
+-		HID_USAGE_SENSOR_UNITS_DEGREES, 0, 17453},
++		HID_USAGE_SENSOR_UNITS_DEGREES, 0, 17453293},
+ 	{HID_USAGE_SENSOR_INCLINOMETER_3D,
+ 		HID_USAGE_SENSOR_UNITS_RADIANS, 1, 0},
+ 
+@@ -57,7 +57,7 @@ static struct {
+ 	{HID_USAGE_SENSOR_ALS, HID_USAGE_SENSOR_UNITS_LUX, 1, 0},
+ 
+ 	{HID_USAGE_SENSOR_PRESSURE, 0, 100, 0},
+-	{HID_USAGE_SENSOR_PRESSURE, HID_USAGE_SENSOR_UNITS_PASCAL, 0, 1000},
++	{HID_USAGE_SENSOR_PRESSURE, HID_USAGE_SENSOR_UNITS_PASCAL, 0, 1000000},
+ };
+ 
+ static int pow_10(unsigned power)
+@@ -266,15 +266,15 @@ EXPORT_SYMBOL(hid_sensor_write_raw_hyst_value);
+ /*
+  * This fuction applies the unit exponent to the scale.
+  * For example:
+- * 9.806650 ->exp:2-> val0[980]val1[665000]
+- * 9.000806 ->exp:2-> val0[900]val1[80600]
+- * 0.174535 ->exp:2-> val0[17]val1[453500]
+- * 1.001745 ->exp:0-> val0[1]val1[1745]
+- * 1.001745 ->exp:2-> val0[100]val1[174500]
+- * 1.001745 ->exp:4-> val0[10017]val1[450000]
+- * 9.806650 ->exp:-2-> val0[0]val1[98066]
++ * 9.806650000 ->exp:2-> val0[980]val1[665000000]
++ * 9.000806000 ->exp:2-> val0[900]val1[80600000]
++ * 0.174535293 ->exp:2-> val0[17]val1[453529300]
++ * 1.001745329 ->exp:0-> val0[1]val1[1745329]
++ * 1.001745329 ->exp:2-> val0[100]val1[174532900]
++ * 1.001745329 ->exp:4-> val0[10017]val1[453290000]
++ * 9.806650000 ->exp:-2-> val0[0]val1[98066500]
+  */
+-static void adjust_exponent_micro(int *val0, int *val1, int scale0,
++static void adjust_exponent_nano(int *val0, int *val1, int scale0,
+ 				  int scale1, int exp)
+ {
+ 	int i;
+@@ -285,32 +285,32 @@ static void adjust_exponent_micro(int *val0, int *val1, int scale0,
+ 	if (exp > 0) {
+ 		*val0 = scale0 * pow_10(exp);
+ 		res = 0;
+-		if (exp > 6) {
++		if (exp > 9) {
+ 			*val1 = 0;
+ 			return;
+ 		}
+ 		for (i = 0; i < exp; ++i) {
+-			x = scale1 / pow_10(5 - i);
++			x = scale1 / pow_10(8 - i);
+ 			res += (pow_10(exp - 1 - i) * x);
+-			scale1 = scale1 % pow_10(5 - i);
++			scale1 = scale1 % pow_10(8 - i);
+ 		}
+ 		*val0 += res;
+ 			*val1 = scale1 * pow_10(exp);
+ 	} else if (exp < 0) {
+ 		exp = abs(exp);
+-		if (exp > 6) {
++		if (exp > 9) {
+ 			*val0 = *val1 = 0;
+ 			return;
+ 		}
+ 		*val0 = scale0 / pow_10(exp);
+ 		rem = scale0 % pow_10(exp);
+ 		res = 0;
+-		for (i = 0; i < (6 - exp); ++i) {
+-			x = scale1 / pow_10(5 - i);
+-			res += (pow_10(5 - exp - i) * x);
+-			scale1 = scale1 % pow_10(5 - i);
++		for (i = 0; i < (9 - exp); ++i) {
++			x = scale1 / pow_10(8 - i);
++			res += (pow_10(8 - exp - i) * x);
++			scale1 = scale1 % pow_10(8 - i);
+ 		}
+-		*val1 = rem * pow_10(6 - exp) + res;
++		*val1 = rem * pow_10(9 - exp) + res;
+ 	} else {
+ 		*val0 = scale0;
+ 		*val1 = scale1;
+@@ -332,14 +332,14 @@ int hid_sensor_format_scale(u32 usage_id,
+ 			unit_conversion[i].unit == attr_info->units) {
+ 			exp  = hid_sensor_convert_exponent(
+ 						attr_info->unit_expo);
+-			adjust_exponent_micro(val0, val1,
++			adjust_exponent_nano(val0, val1,
+ 					unit_conversion[i].scale_val0,
+ 					unit_conversion[i].scale_val1, exp);
+ 			break;
+ 		}
+ 	}
+ 
+-	return IIO_VAL_INT_PLUS_MICRO;
++	return IIO_VAL_INT_PLUS_NANO;
+ }
+ EXPORT_SYMBOL(hid_sensor_format_scale);
+ 
+diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
+index 2d5282e05482..32a594675a54 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_core.c
++++ b/drivers/iio/common/st_sensors/st_sensors_core.c
+@@ -619,7 +619,7 @@ EXPORT_SYMBOL(st_sensors_sysfs_sampling_frequency_avail);
+ ssize_t st_sensors_sysfs_scale_avail(struct device *dev,
+ 				struct device_attribute *attr, char *buf)
+ {
+-	int i, len = 0;
++	int i, len = 0, q, r;
+ 	struct iio_dev *indio_dev = dev_get_drvdata(dev);
+ 	struct st_sensor_data *sdata = iio_priv(indio_dev);
+ 
+@@ -628,8 +628,10 @@ ssize_t st_sensors_sysfs_scale_avail(struct device *dev,
+ 		if (sdata->sensor_settings->fs.fs_avl[i].num == 0)
+ 			break;
+ 
+-		len += scnprintf(buf + len, PAGE_SIZE - len, "0.%06u ",
+-				sdata->sensor_settings->fs.fs_avl[i].gain);
++		q = sdata->sensor_settings->fs.fs_avl[i].gain / 1000000;
++		r = sdata->sensor_settings->fs.fs_avl[i].gain % 1000000;
++
++		len += scnprintf(buf + len, PAGE_SIZE - len, "%u.%06u ", q, r);
+ 	}
+ 	mutex_unlock(&indio_dev->mlock);
+ 	buf[len - 1] = '\n';
+diff --git a/drivers/iio/orientation/hid-sensor-rotation.c b/drivers/iio/orientation/hid-sensor-rotation.c
+index b98b9d94d184..a97e802ca523 100644
+--- a/drivers/iio/orientation/hid-sensor-rotation.c
++++ b/drivers/iio/orientation/hid-sensor-rotation.c
+@@ -335,6 +335,7 @@ static struct platform_driver hid_dev_rot_platform_driver = {
+ 	.id_table = hid_dev_rot_ids,
+ 	.driver = {
+ 		.name	= KBUILD_MODNAME,
++		.pm     = &hid_sensor_pm_ops,
+ 	},
+ 	.probe		= hid_dev_rot_probe,
+ 	.remove		= hid_dev_rot_remove,
+diff --git a/drivers/input/rmi4/rmi_i2c.c b/drivers/input/rmi4/rmi_i2c.c
+index 6f2e0e4f0296..1ebc2c1debae 100644
+--- a/drivers/input/rmi4/rmi_i2c.c
++++ b/drivers/input/rmi4/rmi_i2c.c
+@@ -221,6 +221,21 @@ static const struct of_device_id rmi_i2c_of_match[] = {
+ MODULE_DEVICE_TABLE(of, rmi_i2c_of_match);
+ #endif
+ 
++static void rmi_i2c_regulator_bulk_disable(void *data)
++{
++	struct rmi_i2c_xport *rmi_i2c = data;
++
++	regulator_bulk_disable(ARRAY_SIZE(rmi_i2c->supplies),
++			       rmi_i2c->supplies);
++}
++
++static void rmi_i2c_unregister_transport(void *data)
++{
++	struct rmi_i2c_xport *rmi_i2c = data;
++
++	rmi_unregister_transport_device(&rmi_i2c->xport);
++}
++
+ static int rmi_i2c_probe(struct i2c_client *client,
+ 			 const struct i2c_device_id *id)
+ {
+@@ -264,6 +279,12 @@ static int rmi_i2c_probe(struct i2c_client *client,
+ 	if (retval < 0)
+ 		return retval;
+ 
++	retval = devm_add_action_or_reset(&client->dev,
++					  rmi_i2c_regulator_bulk_disable,
++					  rmi_i2c);
++	if (retval)
++		return retval;
++
+ 	of_property_read_u32(client->dev.of_node, "syna,startup-delay-ms",
+ 			     &rmi_i2c->startup_delay);
+ 
+@@ -294,6 +315,11 @@ static int rmi_i2c_probe(struct i2c_client *client,
+ 			client->addr);
+ 		return retval;
+ 	}
++	retval = devm_add_action_or_reset(&client->dev,
++					  rmi_i2c_unregister_transport,
++					  rmi_i2c);
++	if (retval)
++		return retval;
+ 
+ 	retval = rmi_i2c_init_irq(client);
+ 	if (retval < 0)
+@@ -304,17 +330,6 @@ static int rmi_i2c_probe(struct i2c_client *client,
+ 	return 0;
+ }
+ 
+-static int rmi_i2c_remove(struct i2c_client *client)
+-{
+-	struct rmi_i2c_xport *rmi_i2c = i2c_get_clientdata(client);
+-
+-	rmi_unregister_transport_device(&rmi_i2c->xport);
+-	regulator_bulk_disable(ARRAY_SIZE(rmi_i2c->supplies),
+-			       rmi_i2c->supplies);
+-
+-	return 0;
+-}
+-
+ #ifdef CONFIG_PM_SLEEP
+ static int rmi_i2c_suspend(struct device *dev)
+ {
+@@ -431,7 +446,6 @@ static struct i2c_driver rmi_i2c_driver = {
+ 	},
+ 	.id_table	= rmi_id,
+ 	.probe		= rmi_i2c_probe,
+-	.remove		= rmi_i2c_remove,
+ };
+ 
+ module_i2c_driver(rmi_i2c_driver);
+diff --git a/drivers/input/rmi4/rmi_spi.c b/drivers/input/rmi4/rmi_spi.c
+index 55bd1b34970c..4ebef607e214 100644
+--- a/drivers/input/rmi4/rmi_spi.c
++++ b/drivers/input/rmi4/rmi_spi.c
+@@ -396,6 +396,13 @@ static inline int rmi_spi_of_probe(struct spi_device *spi,
+ }
+ #endif
+ 
++static void rmi_spi_unregister_transport(void *data)
++{
++	struct rmi_spi_xport *rmi_spi = data;
++
++	rmi_unregister_transport_device(&rmi_spi->xport);
++}
++
+ static int rmi_spi_probe(struct spi_device *spi)
+ {
+ 	struct rmi_spi_xport *rmi_spi;
+@@ -464,6 +471,11 @@ static int rmi_spi_probe(struct spi_device *spi)
+ 		dev_err(&spi->dev, "failed to register transport.\n");
+ 		return retval;
+ 	}
++	retval = devm_add_action_or_reset(&spi->dev,
++					  rmi_spi_unregister_transport,
++					  rmi_spi);
++	if (retval)
++		return retval;
+ 
+ 	retval = rmi_spi_init_irq(spi);
+ 	if (retval < 0)
+@@ -473,15 +485,6 @@ static int rmi_spi_probe(struct spi_device *spi)
+ 	return 0;
+ }
+ 
+-static int rmi_spi_remove(struct spi_device *spi)
+-{
+-	struct rmi_spi_xport *rmi_spi = spi_get_drvdata(spi);
+-
+-	rmi_unregister_transport_device(&rmi_spi->xport);
+-
+-	return 0;
+-}
+-
+ #ifdef CONFIG_PM_SLEEP
+ static int rmi_spi_suspend(struct device *dev)
+ {
+@@ -577,7 +580,6 @@ static struct spi_driver rmi_spi_driver = {
+ 	},
+ 	.id_table	= rmi_id,
+ 	.probe		= rmi_spi_probe,
+-	.remove		= rmi_spi_remove,
+ };
+ 
+ module_spi_driver(rmi_spi_driver);
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 96de97a46079..822fc4afad3c 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -1654,6 +1654,9 @@ static void dma_ops_domain_free(struct dma_ops_domain *dom)
+ 
+ 	free_pagetable(&dom->domain);
+ 
++	if (dom->domain.id)
++		domain_id_free(dom->domain.id);
++
+ 	kfree(dom);
+ }
+ 
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index ebb5bf3ddbd9..1257b0b80296 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -1711,6 +1711,7 @@ static void disable_dmar_iommu(struct intel_iommu *iommu)
+ 	if (!iommu->domains || !iommu->domain_ids)
+ 		return;
+ 
++again:
+ 	spin_lock_irqsave(&device_domain_lock, flags);
+ 	list_for_each_entry_safe(info, tmp, &device_domain_list, global) {
+ 		struct dmar_domain *domain;
+@@ -1723,10 +1724,19 @@ static void disable_dmar_iommu(struct intel_iommu *iommu)
+ 
+ 		domain = info->domain;
+ 
+-		dmar_remove_one_dev_info(domain, info->dev);
++		__dmar_remove_one_dev_info(info);
+ 
+-		if (!domain_type_is_vm_or_si(domain))
++		if (!domain_type_is_vm_or_si(domain)) {
++			/*
++			 * The domain_exit() function  can't be called under
++			 * device_domain_lock, as it takes this lock itself.
++			 * So release the lock here and re-run the loop
++			 * afterwards.
++			 */
++			spin_unlock_irqrestore(&device_domain_lock, flags);
+ 			domain_exit(domain);
++			goto again;
++		}
+ 	}
+ 	spin_unlock_irqrestore(&device_domain_lock, flags);
+ 
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index def8ca1c982d..f50e51c1a9c8 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -633,6 +633,10 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+ {
+ 	struct arm_v7s_io_pgtable *data;
+ 
++#ifdef PHYS_OFFSET
++	if (upper_32_bits(PHYS_OFFSET))
++		return NULL;
++#endif
+ 	if (cfg->ias > ARM_V7S_ADDR_BITS || cfg->oas > ARM_V7S_ADDR_BITS)
+ 		return NULL;
+ 
+diff --git a/drivers/media/usb/dvb-usb/dib0700_core.c b/drivers/media/usb/dvb-usb/dib0700_core.c
+index bf890c3d9cda..f73e108dc980 100644
+--- a/drivers/media/usb/dvb-usb/dib0700_core.c
++++ b/drivers/media/usb/dvb-usb/dib0700_core.c
+@@ -677,7 +677,7 @@ static void dib0700_rc_urb_completion(struct urb *purb)
+ 	struct dvb_usb_device *d = purb->context;
+ 	struct dib0700_rc_response *poll_reply;
+ 	enum rc_type protocol;
+-	u32 uninitialized_var(keycode);
++	u32 keycode;
+ 	u8 toggle;
+ 
+ 	deb_info("%s()\n", __func__);
+@@ -719,7 +719,8 @@ static void dib0700_rc_urb_completion(struct urb *purb)
+ 		    poll_reply->nec.data       == 0x00 &&
+ 		    poll_reply->nec.not_data   == 0xff) {
+ 			poll_reply->data_state = 2;
+-			break;
++			rc_repeat(d->rc_dev);
++			goto resubmit;
+ 		}
+ 
+ 		if ((poll_reply->nec.data ^ poll_reply->nec.not_data) != 0xff) {
+diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
+index e9e6ea3ab73c..75b9d4ac8b1e 100644
+--- a/drivers/misc/mei/bus-fixup.c
++++ b/drivers/misc/mei/bus-fixup.c
+@@ -178,7 +178,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
+ 
+ 	ret = 0;
+ 	bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length);
+-	if (bytes_recv < 0 || bytes_recv < sizeof(struct mei_nfc_reply)) {
++	if (bytes_recv < if_version_length) {
+ 		dev_err(bus->dev, "Could not read IF version\n");
+ 		ret = -EIO;
+ 		goto err;
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index c57eb32dc075..6ef1e3c731f8 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -26,6 +26,8 @@
+ #include "mmc_ops.h"
+ #include "sd_ops.h"
+ 
++#define DEFAULT_CMD6_TIMEOUT_MS	500
++
+ static const unsigned int tran_exp[] = {
+ 	10000,		100000,		1000000,	10000000,
+ 	0,		0,		0,		0
+@@ -571,6 +573,7 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
+ 		card->erased_byte = 0x0;
+ 
+ 	/* eMMC v4.5 or later */
++	card->ext_csd.generic_cmd6_time = DEFAULT_CMD6_TIMEOUT_MS;
+ 	if (card->ext_csd.rev >= 6) {
+ 		card->ext_csd.feature_support |= MMC_DISCARD_FEATURE;
+ 
+diff --git a/drivers/mmc/host/mxs-mmc.c b/drivers/mmc/host/mxs-mmc.c
+index d839147e591d..44ecebd1ea8c 100644
+--- a/drivers/mmc/host/mxs-mmc.c
++++ b/drivers/mmc/host/mxs-mmc.c
+@@ -661,13 +661,13 @@ static int mxs_mmc_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, mmc);
+ 
++	spin_lock_init(&host->lock);
++
+ 	ret = devm_request_irq(&pdev->dev, irq_err, mxs_mmc_irq_handler, 0,
+ 			       dev_name(&pdev->dev), host);
+ 	if (ret)
+ 		goto out_free_dma;
+ 
+-	spin_lock_init(&host->lock);
+-
+ 	ret = mmc_add_host(mmc);
+ 	if (ret)
+ 		goto out_free_dma;
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 8ef44a2a2fd9..90ed2e12d345 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -647,6 +647,7 @@ static int sdhci_msm_probe(struct platform_device *pdev)
+ 	if (msm_host->pwr_irq < 0) {
+ 		dev_err(&pdev->dev, "Get pwr_irq failed (%d)\n",
+ 			msm_host->pwr_irq);
++		ret = msm_host->pwr_irq;
+ 		goto clk_disable;
+ 	}
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index a8a022a7358f..6eb8f0705c65 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2269,10 +2269,8 @@ static bool sdhci_request_done(struct sdhci_host *host)
+ 
+ 	for (i = 0; i < SDHCI_MAX_MRQS; i++) {
+ 		mrq = host->mrqs_done[i];
+-		if (mrq) {
+-			host->mrqs_done[i] = NULL;
++		if (mrq)
+ 			break;
+-		}
+ 	}
+ 
+ 	if (!mrq) {
+@@ -2303,6 +2301,17 @@ static bool sdhci_request_done(struct sdhci_host *host)
+ 	 * upon error conditions.
+ 	 */
+ 	if (sdhci_needs_reset(host, mrq)) {
++		/*
++		 * Do not finish until command and data lines are available for
++		 * reset. Note there can only be one other mrq, so it cannot
++		 * also be in mrqs_done, otherwise host->cmd and host->data_cmd
++		 * would both be null.
++		 */
++		if (host->cmd || host->data_cmd) {
++			spin_unlock_irqrestore(&host->lock, flags);
++			return true;
++		}
++
+ 		/* Some controllers need this kick or reset won't work here */
+ 		if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET)
+ 			/* This is to force an update */
+@@ -2310,10 +2319,8 @@ static bool sdhci_request_done(struct sdhci_host *host)
+ 
+ 		/* Spec says we should do both at the same time, but Ricoh
+ 		   controllers do not like that. */
+-		if (!host->cmd)
+-			sdhci_do_reset(host, SDHCI_RESET_CMD);
+-		if (!host->data_cmd)
+-			sdhci_do_reset(host, SDHCI_RESET_DATA);
++		sdhci_do_reset(host, SDHCI_RESET_CMD);
++		sdhci_do_reset(host, SDHCI_RESET_DATA);
+ 
+ 		host->pending_reset = false;
+ 	}
+@@ -2321,6 +2328,8 @@ static bool sdhci_request_done(struct sdhci_host *host)
+ 	if (!sdhci_has_requests(host))
+ 		sdhci_led_deactivate(host);
+ 
++	host->mrqs_done[i] = NULL;
++
+ 	mmiowb();
+ 	spin_unlock_irqrestore(&host->lock, flags);
+ 
+@@ -2500,9 +2509,6 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
+ 	if (!host->data) {
+ 		struct mmc_command *data_cmd = host->data_cmd;
+ 
+-		if (data_cmd)
+-			host->data_cmd = NULL;
+-
+ 		/*
+ 		 * The "data complete" interrupt is also used to
+ 		 * indicate that a busy state has ended. See comment
+@@ -2510,11 +2516,13 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
+ 		 */
+ 		if (data_cmd && (data_cmd->flags & MMC_RSP_BUSY)) {
+ 			if (intmask & SDHCI_INT_DATA_TIMEOUT) {
++				host->data_cmd = NULL;
+ 				data_cmd->error = -ETIMEDOUT;
+ 				sdhci_finish_mrq(host, data_cmd->mrq);
+ 				return;
+ 			}
+ 			if (intmask & SDHCI_INT_DATA_END) {
++				host->data_cmd = NULL;
+ 				/*
+ 				 * Some cards handle busy-end interrupt
+ 				 * before the command completed, so make
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index c74d16409941..6b46a37ba139 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -9001,7 +9001,7 @@ static int i40e_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
+ 		return 0;
+ 
+ 	return ndo_dflt_bridge_getlink(skb, pid, seq, dev, veb->bridge_mode,
+-				       nlflags, 0, 0, filter_mask, NULL);
++				       0, 0, nlflags, filter_mask, NULL);
+ }
+ 
+ /* Hardware supports L4 tunnel length of 128B (=2^7) which includes
+diff --git a/drivers/nfc/mei_phy.c b/drivers/nfc/mei_phy.c
+index 83deda4bb4d6..6f9563a96488 100644
+--- a/drivers/nfc/mei_phy.c
++++ b/drivers/nfc/mei_phy.c
+@@ -133,7 +133,7 @@ static int mei_nfc_if_version(struct nfc_mei_phy *phy)
+ 		return -ENOMEM;
+ 
+ 	bytes_recv = mei_cldev_recv(phy->cldev, (u8 *)reply, if_version_length);
+-	if (bytes_recv < 0 || bytes_recv < sizeof(struct mei_nfc_reply)) {
++	if (bytes_recv < 0 || bytes_recv < if_version_length) {
+ 		pr_err("Could not read IF version\n");
+ 		r = -EIO;
+ 		goto err;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 60f7eab11865..da134a0df7d8 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1531,9 +1531,9 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
+ 	return 0;
+ }
+ 
+-static void nvme_disable_io_queues(struct nvme_dev *dev)
++static void nvme_disable_io_queues(struct nvme_dev *dev, int queues)
+ {
+-	int pass, queues = dev->online_queues - 1;
++	int pass;
+ 	unsigned long timeout;
+ 	u8 opcode = nvme_admin_delete_sq;
+ 
+@@ -1678,7 +1678,7 @@ static void nvme_pci_disable(struct nvme_dev *dev)
+ 
+ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
+ {
+-	int i;
++	int i, queues;
+ 	u32 csts = -1;
+ 
+ 	del_timer_sync(&dev->watchdog_timer);
+@@ -1689,6 +1689,7 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
+ 		csts = readl(dev->bar + NVME_REG_CSTS);
+ 	}
+ 
++	queues = dev->online_queues - 1;
+ 	for (i = dev->queue_count - 1; i > 0; i--)
+ 		nvme_suspend_queue(dev->queues[i]);
+ 
+@@ -1700,7 +1701,7 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
+ 		if (dev->queue_count)
+ 			nvme_suspend_queue(dev->queues[0]);
+ 	} else {
+-		nvme_disable_io_queues(dev);
++		nvme_disable_io_queues(dev, queues);
+ 		nvme_disable_admin_queue(dev, shutdown);
+ 	}
+ 	nvme_pci_disable(dev);
+diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
+index 66c4d8f42233..9526e341988b 100644
+--- a/drivers/pci/setup-res.c
++++ b/drivers/pci/setup-res.c
+@@ -121,6 +121,14 @@ int pci_claim_resource(struct pci_dev *dev, int resource)
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * If we have a shadow copy in RAM, the PCI device doesn't respond
++	 * to the shadow range, so we don't need to claim it, and upstream
++	 * bridges don't need to route the range to the device.
++	 */
++	if (res->flags & IORESOURCE_ROM_SHADOW)
++		return 0;
++
+ 	root = pci_find_parent_resource(dev, res);
+ 	if (!root) {
+ 		dev_info(&dev->dev, "can't claim BAR %d %pR: no compatible bridge window\n",
+diff --git a/drivers/pinctrl/bcm/pinctrl-iproc-gpio.c b/drivers/pinctrl/bcm/pinctrl-iproc-gpio.c
+index 7f7700716398..5d1e505c3c63 100644
+--- a/drivers/pinctrl/bcm/pinctrl-iproc-gpio.c
++++ b/drivers/pinctrl/bcm/pinctrl-iproc-gpio.c
+@@ -844,6 +844,6 @@ static struct platform_driver iproc_gpio_driver = {
+ 
+ static int __init iproc_gpio_init(void)
+ {
+-	return platform_driver_probe(&iproc_gpio_driver, iproc_gpio_probe);
++	return platform_driver_register(&iproc_gpio_driver);
+ }
+ arch_initcall_sync(iproc_gpio_init);
+diff --git a/drivers/pinctrl/bcm/pinctrl-nsp-gpio.c b/drivers/pinctrl/bcm/pinctrl-nsp-gpio.c
+index 35783db1c10b..c8deb8be1da7 100644
+--- a/drivers/pinctrl/bcm/pinctrl-nsp-gpio.c
++++ b/drivers/pinctrl/bcm/pinctrl-nsp-gpio.c
+@@ -741,6 +741,6 @@ static struct platform_driver nsp_gpio_driver = {
+ 
+ static int __init nsp_gpio_init(void)
+ {
+-	return platform_driver_probe(&nsp_gpio_driver, nsp_gpio_probe);
++	return platform_driver_register(&nsp_gpio_driver);
+ }
+ arch_initcall_sync(nsp_gpio_init);
+diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
+index 0fe8fad25e4d..bc3150428d89 100644
+--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
++++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
+@@ -1634,12 +1634,15 @@ static int chv_pinctrl_remove(struct platform_device *pdev)
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+-static int chv_pinctrl_suspend(struct device *dev)
++static int chv_pinctrl_suspend_noirq(struct device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+ 	struct chv_pinctrl *pctrl = platform_get_drvdata(pdev);
++	unsigned long flags;
+ 	int i;
+ 
++	raw_spin_lock_irqsave(&chv_lock, flags);
++
+ 	pctrl->saved_intmask = readl(pctrl->regs + CHV_INTMASK);
+ 
+ 	for (i = 0; i < pctrl->community->npins; i++) {
+@@ -1660,15 +1663,20 @@ static int chv_pinctrl_suspend(struct device *dev)
+ 		ctx->padctrl1 = readl(reg);
+ 	}
+ 
++	raw_spin_unlock_irqrestore(&chv_lock, flags);
++
+ 	return 0;
+ }
+ 
+-static int chv_pinctrl_resume(struct device *dev)
++static int chv_pinctrl_resume_noirq(struct device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+ 	struct chv_pinctrl *pctrl = platform_get_drvdata(pdev);
++	unsigned long flags;
+ 	int i;
+ 
++	raw_spin_lock_irqsave(&chv_lock, flags);
++
+ 	/*
+ 	 * Mask all interrupts before restoring per-pin configuration
+ 	 * registers because we don't know in which state BIOS left them
+@@ -1713,12 +1721,15 @@ static int chv_pinctrl_resume(struct device *dev)
+ 	chv_writel(0xffff, pctrl->regs + CHV_INTSTAT);
+ 	chv_writel(pctrl->saved_intmask, pctrl->regs + CHV_INTMASK);
+ 
++	raw_spin_unlock_irqrestore(&chv_lock, flags);
++
+ 	return 0;
+ }
+ #endif
+ 
+ static const struct dev_pm_ops chv_pinctrl_pm_ops = {
+-	SET_LATE_SYSTEM_SLEEP_PM_OPS(chv_pinctrl_suspend, chv_pinctrl_resume)
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(chv_pinctrl_suspend_noirq,
++				      chv_pinctrl_resume_noirq)
+ };
+ 
+ static const struct acpi_device_id chv_pinctrl_acpi_match[] = {
+diff --git a/drivers/platform/x86/toshiba-wmi.c b/drivers/platform/x86/toshiba-wmi.c
+index feac4576b837..2df07ee8f3c3 100644
+--- a/drivers/platform/x86/toshiba-wmi.c
++++ b/drivers/platform/x86/toshiba-wmi.c
+@@ -24,14 +24,15 @@
+ #include <linux/acpi.h>
+ #include <linux/input.h>
+ #include <linux/input/sparse-keymap.h>
++#include <linux/dmi.h>
+ 
+ MODULE_AUTHOR("Azael Avalos");
+ MODULE_DESCRIPTION("Toshiba WMI Hotkey Driver");
+ MODULE_LICENSE("GPL");
+ 
+-#define TOSHIBA_WMI_EVENT_GUID	"59142400-C6A3-40FA-BADB-8A2652834100"
++#define WMI_EVENT_GUID	"59142400-C6A3-40FA-BADB-8A2652834100"
+ 
+-MODULE_ALIAS("wmi:"TOSHIBA_WMI_EVENT_GUID);
++MODULE_ALIAS("wmi:"WMI_EVENT_GUID);
+ 
+ static struct input_dev *toshiba_wmi_input_dev;
+ 
+@@ -63,6 +64,16 @@ static void toshiba_wmi_notify(u32 value, void *context)
+ 	kfree(response.pointer);
+ }
+ 
++static struct dmi_system_id toshiba_wmi_dmi_table[] __initdata = {
++	{
++		.ident = "Toshiba laptop",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++		},
++	},
++	{}
++};
++
+ static int __init toshiba_wmi_input_setup(void)
+ {
+ 	acpi_status status;
+@@ -81,7 +92,7 @@ static int __init toshiba_wmi_input_setup(void)
+ 	if (err)
+ 		goto err_free_dev;
+ 
+-	status = wmi_install_notify_handler(TOSHIBA_WMI_EVENT_GUID,
++	status = wmi_install_notify_handler(WMI_EVENT_GUID,
+ 					    toshiba_wmi_notify, NULL);
+ 	if (ACPI_FAILURE(status)) {
+ 		err = -EIO;
+@@ -95,7 +106,7 @@ static int __init toshiba_wmi_input_setup(void)
+ 	return 0;
+ 
+  err_remove_notifier:
+-	wmi_remove_notify_handler(TOSHIBA_WMI_EVENT_GUID);
++	wmi_remove_notify_handler(WMI_EVENT_GUID);
+  err_free_keymap:
+ 	sparse_keymap_free(toshiba_wmi_input_dev);
+  err_free_dev:
+@@ -105,7 +116,7 @@ static int __init toshiba_wmi_input_setup(void)
+ 
+ static void toshiba_wmi_input_destroy(void)
+ {
+-	wmi_remove_notify_handler(TOSHIBA_WMI_EVENT_GUID);
++	wmi_remove_notify_handler(WMI_EVENT_GUID);
+ 	sparse_keymap_free(toshiba_wmi_input_dev);
+ 	input_unregister_device(toshiba_wmi_input_dev);
+ }
+@@ -114,7 +125,8 @@ static int __init toshiba_wmi_init(void)
+ {
+ 	int ret;
+ 
+-	if (!wmi_has_guid(TOSHIBA_WMI_EVENT_GUID))
++	if (!wmi_has_guid(WMI_EVENT_GUID) ||
++	    !dmi_check_system(toshiba_wmi_dmi_table))
+ 		return -ENODEV;
+ 
+ 	ret = toshiba_wmi_input_setup();
+@@ -130,7 +142,7 @@ static int __init toshiba_wmi_init(void)
+ 
+ static void __exit toshiba_wmi_exit(void)
+ {
+-	if (wmi_has_guid(TOSHIBA_WMI_EVENT_GUID))
++	if (wmi_has_guid(WMI_EVENT_GUID))
+ 		toshiba_wmi_input_destroy();
+ }
+ 
+diff --git a/drivers/rtc/rtc-pcf2123.c b/drivers/rtc/rtc-pcf2123.c
+index b4478cc92b55..8895f77726e8 100644
+--- a/drivers/rtc/rtc-pcf2123.c
++++ b/drivers/rtc/rtc-pcf2123.c
+@@ -182,7 +182,8 @@ static ssize_t pcf2123_show(struct device *dev, struct device_attribute *attr,
+ }
+ 
+ static ssize_t pcf2123_store(struct device *dev, struct device_attribute *attr,
+-			     const char *buffer, size_t count) {
++			     const char *buffer, size_t count)
++{
+ 	struct pcf2123_sysfs_reg *r;
+ 	unsigned long reg;
+ 	unsigned long val;
+@@ -199,7 +200,7 @@ static ssize_t pcf2123_store(struct device *dev, struct device_attribute *attr,
+ 	if (ret)
+ 		return ret;
+ 
+-	pcf2123_write_reg(dev, reg, val);
++	ret = pcf2123_write_reg(dev, reg, val);
+ 	if (ret < 0)
+ 		return -EIO;
+ 	return count;
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index 752b5c9d1ab2..920c42151e92 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -792,6 +792,7 @@ static void alua_rtpg_work(struct work_struct *work)
+ 		WARN_ON(pg->flags & ALUA_PG_RUN_RTPG);
+ 		WARN_ON(pg->flags & ALUA_PG_RUN_STPG);
+ 		spin_unlock_irqrestore(&pg->lock, flags);
++		kref_put(&pg->kref, release_port_group);
+ 		return;
+ 	}
+ 	if (pg->flags & ALUA_SYNC_STPG)
+@@ -889,6 +890,7 @@ static void alua_rtpg_queue(struct alua_port_group *pg,
+ 		/* Do not queue if the worker is already running */
+ 		if (!(pg->flags & ALUA_PG_RUNNING)) {
+ 			kref_get(&pg->kref);
++			sdev = NULL;
+ 			start_queue = 1;
+ 		}
+ 	}
+@@ -900,7 +902,8 @@ static void alua_rtpg_queue(struct alua_port_group *pg,
+ 	if (start_queue &&
+ 	    !queue_delayed_work(alua_wq, &pg->rtpg_work,
+ 				msecs_to_jiffies(ALUA_RTPG_DELAY_MSECS))) {
+-		scsi_device_put(sdev);
++		if (sdev)
++			scsi_device_put(sdev);
+ 		kref_put(&pg->kref, release_port_group);
+ 	}
+ }
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 4cb79902e7a8..46c0f5ecd99d 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -1273,9 +1273,9 @@ scsih_target_alloc(struct scsi_target *starget)
+ 			sas_target_priv_data->handle = raid_device->handle;
+ 			sas_target_priv_data->sas_address = raid_device->wwid;
+ 			sas_target_priv_data->flags |= MPT_TARGET_FLAGS_VOLUME;
+-			sas_target_priv_data->raid_device = raid_device;
+ 			if (ioc->is_warpdrive)
+-				raid_device->starget = starget;
++				sas_target_priv_data->raid_device = raid_device;
++			raid_device->starget = starget;
+ 		}
+ 		spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+ 		return 0;
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 2674f4c16bc3..e46e2c53871a 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -2341,6 +2341,8 @@ qla2xxx_scan_finished(struct Scsi_Host *shost, unsigned long time)
+ {
+ 	scsi_qla_host_t *vha = shost_priv(shost);
+ 
++	if (test_bit(UNLOADING, &vha->dpc_flags))
++		return 1;
+ 	if (!vha->host)
+ 		return 1;
+ 	if (time > vha->hw->loop_reset_delay * HZ)
+diff --git a/drivers/staging/comedi/drivers/ni_tio.c b/drivers/staging/comedi/drivers/ni_tio.c
+index 7043eb0543f6..5ab49a798164 100644
+--- a/drivers/staging/comedi/drivers/ni_tio.c
++++ b/drivers/staging/comedi/drivers/ni_tio.c
+@@ -207,7 +207,8 @@ static int ni_tio_clock_period_ps(const struct ni_gpct *counter,
+ 		 * clock period is specified by user with prescaling
+ 		 * already taken into account.
+ 		 */
+-		return counter->clock_period_ps;
++		*period_ps = counter->clock_period_ps;
++		return 0;
+ 	}
+ 
+ 	switch (generic_clock_source & NI_GPCT_PRESCALE_MODE_CLOCK_SRC_MASK) {
+diff --git a/drivers/staging/iio/impedance-analyzer/ad5933.c b/drivers/staging/iio/impedance-analyzer/ad5933.c
+index 24c348d2f5bb..98d947338e01 100644
+--- a/drivers/staging/iio/impedance-analyzer/ad5933.c
++++ b/drivers/staging/iio/impedance-analyzer/ad5933.c
+@@ -655,6 +655,7 @@ static void ad5933_work(struct work_struct *work)
+ 	__be16 buf[2];
+ 	int val[2];
+ 	unsigned char status;
++	int ret;
+ 
+ 	mutex_lock(&indio_dev->mlock);
+ 	if (st->state == AD5933_CTRL_INIT_START_FREQ) {
+@@ -662,19 +663,22 @@ static void ad5933_work(struct work_struct *work)
+ 		ad5933_cmd(st, AD5933_CTRL_START_SWEEP);
+ 		st->state = AD5933_CTRL_START_SWEEP;
+ 		schedule_delayed_work(&st->work, st->poll_time_jiffies);
+-		mutex_unlock(&indio_dev->mlock);
+-		return;
++		goto out;
+ 	}
+ 
+-	ad5933_i2c_read(st->client, AD5933_REG_STATUS, 1, &status);
++	ret = ad5933_i2c_read(st->client, AD5933_REG_STATUS, 1, &status);
++	if (ret)
++		goto out;
+ 
+ 	if (status & AD5933_STAT_DATA_VALID) {
+ 		int scan_count = bitmap_weight(indio_dev->active_scan_mask,
+ 					       indio_dev->masklength);
+-		ad5933_i2c_read(st->client,
++		ret = ad5933_i2c_read(st->client,
+ 				test_bit(1, indio_dev->active_scan_mask) ?
+ 				AD5933_REG_REAL_DATA : AD5933_REG_IMAG_DATA,
+ 				scan_count * 2, (u8 *)buf);
++		if (ret)
++			goto out;
+ 
+ 		if (scan_count == 2) {
+ 			val[0] = be16_to_cpu(buf[0]);
+@@ -686,8 +690,7 @@ static void ad5933_work(struct work_struct *work)
+ 	} else {
+ 		/* no data available - try again later */
+ 		schedule_delayed_work(&st->work, st->poll_time_jiffies);
+-		mutex_unlock(&indio_dev->mlock);
+-		return;
++		goto out;
+ 	}
+ 
+ 	if (status & AD5933_STAT_SWEEP_DONE) {
+@@ -700,7 +703,7 @@ static void ad5933_work(struct work_struct *work)
+ 		ad5933_cmd(st, AD5933_CTRL_INC_FREQ);
+ 		schedule_delayed_work(&st->work, st->poll_time_jiffies);
+ 	}
+-
++out:
+ 	mutex_unlock(&indio_dev->mlock);
+ }
+ 
+diff --git a/drivers/staging/nvec/nvec_ps2.c b/drivers/staging/nvec/nvec_ps2.c
+index a324322ee0ad..499952c8ef39 100644
+--- a/drivers/staging/nvec/nvec_ps2.c
++++ b/drivers/staging/nvec/nvec_ps2.c
+@@ -106,13 +106,12 @@ static int nvec_mouse_probe(struct platform_device *pdev)
+ {
+ 	struct nvec_chip *nvec = dev_get_drvdata(pdev->dev.parent);
+ 	struct serio *ser_dev;
+-	char mouse_reset[] = { NVEC_PS2, SEND_COMMAND, PSMOUSE_RST, 3 };
+ 
+-	ser_dev = devm_kzalloc(&pdev->dev, sizeof(struct serio), GFP_KERNEL);
++	ser_dev = kzalloc(sizeof(struct serio), GFP_KERNEL);
+ 	if (!ser_dev)
+ 		return -ENOMEM;
+ 
+-	ser_dev->id.type = SERIO_PS_PSTHRU;
++	ser_dev->id.type = SERIO_8042;
+ 	ser_dev->write = ps2_sendcommand;
+ 	ser_dev->start = ps2_startstreaming;
+ 	ser_dev->stop = ps2_stopstreaming;
+@@ -127,9 +126,6 @@ static int nvec_mouse_probe(struct platform_device *pdev)
+ 
+ 	serio_register_port(ser_dev);
+ 
+-	/* mouse reset */
+-	nvec_write_async(nvec, mouse_reset, sizeof(mouse_reset));
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/staging/sm750fb/ddk750_reg.h b/drivers/staging/sm750fb/ddk750_reg.h
+index 955247979aaa..4ed6d8d7712a 100644
+--- a/drivers/staging/sm750fb/ddk750_reg.h
++++ b/drivers/staging/sm750fb/ddk750_reg.h
+@@ -601,13 +601,13 @@
+ 
+ #define PANEL_PLANE_TL                                0x08001C
+ #define PANEL_PLANE_TL_TOP_SHIFT                      16
+-#define PANEL_PLANE_TL_TOP_MASK                       (0xeff << 16)
+-#define PANEL_PLANE_TL_LEFT_MASK                      0xeff
++#define PANEL_PLANE_TL_TOP_MASK                       (0x7ff << 16)
++#define PANEL_PLANE_TL_LEFT_MASK                      0x7ff
+ 
+ #define PANEL_PLANE_BR                                0x080020
+ #define PANEL_PLANE_BR_BOTTOM_SHIFT                   16
+-#define PANEL_PLANE_BR_BOTTOM_MASK                    (0xeff << 16)
+-#define PANEL_PLANE_BR_RIGHT_MASK                     0xeff
++#define PANEL_PLANE_BR_BOTTOM_MASK                    (0x7ff << 16)
++#define PANEL_PLANE_BR_RIGHT_MASK                     0x7ff
+ 
+ #define PANEL_HORIZONTAL_TOTAL                        0x080024
+ #define PANEL_HORIZONTAL_TOTAL_TOTAL_SHIFT            16
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 8bbde52db376..21aeac59df48 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -2026,6 +2026,7 @@ static void atmel_serial_pm(struct uart_port *port, unsigned int state,
+ static void atmel_set_termios(struct uart_port *port, struct ktermios *termios,
+ 			      struct ktermios *old)
+ {
++	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+ 	unsigned long flags;
+ 	unsigned int old_mode, mode, imr, quot, baud;
+ 
+@@ -2129,11 +2130,29 @@ static void atmel_set_termios(struct uart_port *port, struct ktermios *termios,
+ 		mode |= ATMEL_US_USMODE_RS485;
+ 	} else if (termios->c_cflag & CRTSCTS) {
+ 		/* RS232 with hardware handshake (RTS/CTS) */
+-		if (atmel_use_dma_rx(port) && !atmel_use_fifo(port)) {
+-			dev_info(port->dev, "not enabling hardware flow control because DMA is used");
+-			termios->c_cflag &= ~CRTSCTS;
+-		} else {
++		if (atmel_use_fifo(port) &&
++		    !mctrl_gpio_to_gpiod(atmel_port->gpios, UART_GPIO_CTS)) {
++			/*
++			 * with ATMEL_US_USMODE_HWHS set, the controller will
++			 * be able to drive the RTS pin high/low when the RX
++			 * FIFO is above RXFTHRES/below RXFTHRES2.
++			 * It will also disable the transmitter when the CTS
++			 * pin is high.
++			 * This mode is not activated if CTS pin is a GPIO
++			 * because in this case, the transmitter is always
++			 * disabled (there must be an internal pull-up
++			 * responsible for this behaviour).
++			 * If the RTS pin is a GPIO, the controller won't be
++			 * able to drive it according to the FIFO thresholds,
++			 * but it will be handled by the driver.
++			 */
+ 			mode |= ATMEL_US_USMODE_HWHS;
++		} else {
++			/*
++			 * For platforms without FIFO, the flow control is
++			 * handled by the driver.
++			 */
++			mode |= ATMEL_US_USMODE_NORMAL;
+ 		}
+ 	} else {
+ 		/* RS232 without hadware handshake */
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 0f3f62e81e5b..3ca9fdb0a271 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -946,8 +946,6 @@ static int wait_serial_change(struct acm *acm, unsigned long arg)
+ 	DECLARE_WAITQUEUE(wait, current);
+ 	struct async_icount old, new;
+ 
+-	if (arg & (TIOCM_DSR | TIOCM_RI | TIOCM_CD))
+-		return -EINVAL;
+ 	do {
+ 		spin_lock_irq(&acm->read_lock);
+ 		old = acm->oldcount;
+@@ -1175,6 +1173,8 @@ static int acm_probe(struct usb_interface *intf,
+ 	if (quirks == IGNORE_DEVICE)
+ 		return -ENODEV;
+ 
++	memset(&h, 0x00, sizeof(struct usb_cdc_parsed_header));
++
+ 	num_rx_buf = (quirks == SINGLE_RX_URB) ? 1 : ACM_NR;
+ 
+ 	/* handle quirks deadly to normal probing*/
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 35d092456bec..2d47010e55af 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -669,15 +669,14 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ 	return 0;
+ 
+ err4:
+-	phy_power_off(dwc->usb2_generic_phy);
++	phy_power_off(dwc->usb3_generic_phy);
+ 
+ err3:
+-	phy_power_off(dwc->usb3_generic_phy);
++	phy_power_off(dwc->usb2_generic_phy);
+ 
+ err2:
+ 	usb_phy_set_suspend(dwc->usb2_phy, 1);
+ 	usb_phy_set_suspend(dwc->usb3_phy, 1);
+-	dwc3_core_exit(dwc);
+ 
+ err1:
+ 	usb_phy_shutdown(dwc->usb2_phy);
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index 9b9e71f2c66e..f590adaaba8e 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -585,14 +585,6 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb,
+ 
+ 	req->length = length;
+ 
+-	/* throttle high/super speed IRQ rate back slightly */
+-	if (gadget_is_dualspeed(dev->gadget))
+-		req->no_interrupt = (((dev->gadget->speed == USB_SPEED_HIGH ||
+-				       dev->gadget->speed == USB_SPEED_SUPER)) &&
+-					!list_empty(&dev->tx_reqs))
+-			? ((atomic_read(&dev->tx_qlen) % dev->qmult) != 0)
+-			: 0;
+-
+ 	retval = usb_ep_queue(in, req, GFP_ATOMIC);
+ 	switch (retval) {
+ 	default:
+diff --git a/drivers/watchdog/watchdog_core.c b/drivers/watchdog/watchdog_core.c
+index 6abb83cd7681..74265b2f806c 100644
+--- a/drivers/watchdog/watchdog_core.c
++++ b/drivers/watchdog/watchdog_core.c
+@@ -349,7 +349,7 @@ int devm_watchdog_register_device(struct device *dev,
+ 	struct watchdog_device **rcwdd;
+ 	int ret;
+ 
+-	rcwdd = devres_alloc(devm_watchdog_unregister_device, sizeof(*wdd),
++	rcwdd = devres_alloc(devm_watchdog_unregister_device, sizeof(*rcwdd),
+ 			     GFP_KERNEL);
+ 	if (!rcwdd)
+ 		return -ENOMEM;
+diff --git a/fs/coredump.c b/fs/coredump.c
+index 281b768000e6..eb9c92c9b20f 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -1,6 +1,7 @@
+ #include <linux/slab.h>
+ #include <linux/file.h>
+ #include <linux/fdtable.h>
++#include <linux/freezer.h>
+ #include <linux/mm.h>
+ #include <linux/stat.h>
+ #include <linux/fcntl.h>
+@@ -423,7 +424,9 @@ static int coredump_wait(int exit_code, struct core_state *core_state)
+ 	if (core_waiters > 0) {
+ 		struct core_thread *ptr;
+ 
++		freezer_do_not_count();
+ 		wait_for_completion(&core_state->startup);
++		freezer_count();
+ 		/*
+ 		 * Wait for all the threads to become inactive, so that
+ 		 * all the thread context (extended register state, like
+diff --git a/fs/nfs/nfs4session.c b/fs/nfs/nfs4session.c
+index b62973045a3e..150c5a1879bf 100644
+--- a/fs/nfs/nfs4session.c
++++ b/fs/nfs/nfs4session.c
+@@ -178,12 +178,14 @@ static int nfs4_slot_get_seqid(struct nfs4_slot_table  *tbl, u32 slotid,
+ 	__must_hold(&tbl->slot_tbl_lock)
+ {
+ 	struct nfs4_slot *slot;
++	int ret;
+ 
+ 	slot = nfs4_lookup_slot(tbl, slotid);
+-	if (IS_ERR(slot))
+-		return PTR_ERR(slot);
+-	*seq_nr = slot->seq_nr;
+-	return 0;
++	ret = PTR_ERR_OR_ZERO(slot);
++	if (!ret)
++		*seq_nr = slot->seq_nr;
++
++	return ret;
+ }
+ 
+ /*
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index c5eaf2f80a4c..67d1d3ebb4b2 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -318,6 +318,7 @@ struct pci_dev;
+ int acpi_pci_irq_enable (struct pci_dev *dev);
+ void acpi_penalize_isa_irq(int irq, int active);
+ bool acpi_isa_irq_available(int irq);
++void acpi_penalize_sci_irq(int irq, int trigger, int polarity);
+ void acpi_pci_irq_disable (struct pci_dev *dev);
+ 
+ extern int ec_read(u8 addr, u8 *val);
+diff --git a/include/linux/frontswap.h b/include/linux/frontswap.h
+index c46d2aa16d81..1d18af034554 100644
+--- a/include/linux/frontswap.h
++++ b/include/linux/frontswap.h
+@@ -106,8 +106,9 @@ static inline void frontswap_invalidate_area(unsigned type)
+ 
+ static inline void frontswap_init(unsigned type, unsigned long *map)
+ {
+-	if (frontswap_enabled())
+-		__frontswap_init(type, map);
++#ifdef CONFIG_FRONTSWAP
++	__frontswap_init(type, map);
++#endif
+ }
+ 
+ #endif /* _LINUX_FRONTSWAP_H */
+diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
+index d6917b896d3a..3584bc8864c4 100644
+--- a/include/linux/sunrpc/svc_rdma.h
++++ b/include/linux/sunrpc/svc_rdma.h
+@@ -86,6 +86,7 @@ struct svc_rdma_op_ctxt {
+ 	unsigned long flags;
+ 	enum dma_data_direction direction;
+ 	int count;
++	unsigned int mapped_sges;
+ 	struct ib_sge sge[RPCSVC_MAXPAGES];
+ 	struct page *pages[RPCSVC_MAXPAGES];
+ };
+@@ -193,6 +194,14 @@ struct svcxprt_rdma {
+ 
+ #define RPCSVC_MAXPAYLOAD_RDMA	RPCSVC_MAXPAYLOAD
+ 
++/* Track DMA maps for this transport and context */
++static inline void svc_rdma_count_mappings(struct svcxprt_rdma *rdma,
++					   struct svc_rdma_op_ctxt *ctxt)
++{
++	ctxt->mapped_sges++;
++	atomic_inc(&rdma->sc_dma_used);
++}
++
+ /* svc_rdma_backchannel.c */
+ extern int svc_rdma_handle_bc_reply(struct rpc_xprt *xprt,
+ 				    struct rpcrdma_msg *rmsgp,
+diff --git a/lib/genalloc.c b/lib/genalloc.c
+index 0a1139644d32..144fe6b1a03e 100644
+--- a/lib/genalloc.c
++++ b/lib/genalloc.c
+@@ -292,7 +292,7 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size,
+ 	struct gen_pool_chunk *chunk;
+ 	unsigned long addr = 0;
+ 	int order = pool->min_alloc_order;
+-	int nbits, start_bit = 0, end_bit, remain;
++	int nbits, start_bit, end_bit, remain;
+ 
+ #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
+ 	BUG_ON(in_nmi());
+@@ -307,6 +307,7 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size,
+ 		if (size > atomic_read(&chunk->avail))
+ 			continue;
+ 
++		start_bit = 0;
+ 		end_bit = chunk_size(chunk) >> order;
+ retry:
+ 		start_bit = algo(chunk->bits, end_bit, start_bit,
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 770d83eb3f48..0ddce6a1cdf7 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1826,11 +1826,17 @@ static void return_unused_surplus_pages(struct hstate *h,
+  * is not the case is if a reserve map was changed between calls.  It
+  * is the responsibility of the caller to notice the difference and
+  * take appropriate action.
++ *
++ * vma_add_reservation is used in error paths where a reservation must
++ * be restored when a newly allocated huge page must be freed.  It is
++ * to be called after calling vma_needs_reservation to determine if a
++ * reservation exists.
+  */
+ enum vma_resv_mode {
+ 	VMA_NEEDS_RESV,
+ 	VMA_COMMIT_RESV,
+ 	VMA_END_RESV,
++	VMA_ADD_RESV,
+ };
+ static long __vma_reservation_common(struct hstate *h,
+ 				struct vm_area_struct *vma, unsigned long addr,
+@@ -1856,6 +1862,14 @@ static long __vma_reservation_common(struct hstate *h,
+ 		region_abort(resv, idx, idx + 1);
+ 		ret = 0;
+ 		break;
++	case VMA_ADD_RESV:
++		if (vma->vm_flags & VM_MAYSHARE)
++			ret = region_add(resv, idx, idx + 1);
++		else {
++			region_abort(resv, idx, idx + 1);
++			ret = region_del(resv, idx, idx + 1);
++		}
++		break;
+ 	default:
+ 		BUG();
+ 	}
+@@ -1903,6 +1917,56 @@ static void vma_end_reservation(struct hstate *h,
+ 	(void)__vma_reservation_common(h, vma, addr, VMA_END_RESV);
+ }
+ 
++static long vma_add_reservation(struct hstate *h,
++			struct vm_area_struct *vma, unsigned long addr)
++{
++	return __vma_reservation_common(h, vma, addr, VMA_ADD_RESV);
++}
++
++/*
++ * This routine is called to restore a reservation on error paths.  In the
++ * specific error paths, a huge page was allocated (via alloc_huge_page)
++ * and is about to be freed.  If a reservation for the page existed,
++ * alloc_huge_page would have consumed the reservation and set PagePrivate
++ * in the newly allocated page.  When the page is freed via free_huge_page,
++ * the global reservation count will be incremented if PagePrivate is set.
++ * However, free_huge_page can not adjust the reserve map.  Adjust the
++ * reserve map here to be consistent with global reserve count adjustments
++ * to be made by free_huge_page.
++ */
++static void restore_reserve_on_error(struct hstate *h,
++			struct vm_area_struct *vma, unsigned long address,
++			struct page *page)
++{
++	if (unlikely(PagePrivate(page))) {
++		long rc = vma_needs_reservation(h, vma, address);
++
++		if (unlikely(rc < 0)) {
++			/*
++			 * Rare out of memory condition in reserve map
++			 * manipulation.  Clear PagePrivate so that
++			 * global reserve count will not be incremented
++			 * by free_huge_page.  This will make it appear
++			 * as though the reservation for this page was
++			 * consumed.  This may prevent the task from
++			 * faulting in the page at a later time.  This
++			 * is better than inconsistent global huge page
++			 * accounting of reserve counts.
++			 */
++			ClearPagePrivate(page);
++		} else if (rc) {
++			rc = vma_add_reservation(h, vma, address);
++			if (unlikely(rc < 0))
++				/*
++				 * See above comment about rare out of
++				 * memory condition.
++				 */
++				ClearPagePrivate(page);
++		} else
++			vma_end_reservation(h, vma, address);
++	}
++}
++
+ struct page *alloc_huge_page(struct vm_area_struct *vma,
+ 				    unsigned long addr, int avoid_reserve)
+ {
+@@ -3498,6 +3562,7 @@ retry_avoidcopy:
+ 	spin_unlock(ptl);
+ 	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ out_release_all:
++	restore_reserve_on_error(h, vma, address, new_page);
+ 	put_page(new_page);
+ out_release_old:
+ 	put_page(old_page);
+@@ -3680,6 +3745,7 @@ backout:
+ 	spin_unlock(ptl);
+ backout_unlocked:
+ 	unlock_page(page);
++	restore_reserve_on_error(h, vma, address, page);
+ 	put_page(page);
+ 	goto out;
+ }
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index de88f33519c0..19e796d36a62 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1112,10 +1112,10 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
+ 	}
+ 
+ 	if (!PageHuge(p) && PageTransHuge(hpage)) {
+-		lock_page(hpage);
+-		if (!PageAnon(hpage) || unlikely(split_huge_page(hpage))) {
+-			unlock_page(hpage);
+-			if (!PageAnon(hpage))
++		lock_page(p);
++		if (!PageAnon(p) || unlikely(split_huge_page(p))) {
++			unlock_page(p);
++			if (!PageAnon(p))
+ 				pr_err("Memory failure: %#lx: non anonymous thp\n",
+ 					pfn);
+ 			else
+@@ -1126,9 +1126,7 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
+ 			put_hwpoison_page(p);
+ 			return -EBUSY;
+ 		}
+-		unlock_page(hpage);
+-		get_hwpoison_page(p);
+-		put_hwpoison_page(hpage);
++		unlock_page(p);
+ 		VM_BUG_ON_PAGE(!page_count(p), p);
+ 		hpage = compound_head(p);
+ 	}
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 971fc83e6402..38aa5e0a955f 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1483,6 +1483,8 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
+ 	copy_highpage(newpage, oldpage);
+ 	flush_dcache_page(newpage);
+ 
++	__SetPageLocked(newpage);
++	__SetPageSwapBacked(newpage);
+ 	SetPageUptodate(newpage);
+ 	set_page_private(newpage, swap_index);
+ 	SetPageSwapCache(newpage);
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 71f0b28a1bec..329b03843863 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -533,8 +533,8 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
+ 
+ 	s = create_cache(cache_name, root_cache->object_size,
+ 			 root_cache->size, root_cache->align,
+-			 root_cache->flags, root_cache->ctor,
+-			 memcg, root_cache);
++			 root_cache->flags & CACHE_CREATE_MASK,
++			 root_cache->ctor, memcg, root_cache);
+ 	/*
+ 	 * If we could not create a memcg cache, do not complain, because
+ 	 * that's not critical at all as we can always proceed with the root
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 2657accc6e2b..bf262e494f68 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -2218,6 +2218,8 @@ static unsigned long read_swap_header(struct swap_info_struct *p,
+ 		swab32s(&swap_header->info.version);
+ 		swab32s(&swap_header->info.last_page);
+ 		swab32s(&swap_header->info.nr_badpages);
++		if (swap_header->info.nr_badpages > MAX_SWAP_BADPAGES)
++			return 0;
+ 		for (i = 0; i < swap_header->info.nr_badpages; i++)
+ 			swab32s(&swap_header->info.badpages[i]);
+ 	}
+diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c
+index 3940b5d24421..3e9667e467c3 100644
+--- a/net/batman-adv/originator.c
++++ b/net/batman-adv/originator.c
+@@ -537,7 +537,7 @@ batadv_hardif_neigh_create(struct batadv_hard_iface *hard_iface,
+ 	if (bat_priv->algo_ops->neigh.hardif_init)
+ 		bat_priv->algo_ops->neigh.hardif_init(hardif_neigh);
+ 
+-	hlist_add_head(&hardif_neigh->list, &hard_iface->neigh_list);
++	hlist_add_head_rcu(&hardif_neigh->list, &hard_iface->neigh_list);
+ 
+ out:
+ 	spin_unlock_bh(&hard_iface->neigh_list_lock);
+diff --git a/net/ceph/ceph_fs.c b/net/ceph/ceph_fs.c
+index 7d54e944de5e..dcbe67ff3e2b 100644
+--- a/net/ceph/ceph_fs.c
++++ b/net/ceph/ceph_fs.c
+@@ -34,7 +34,8 @@ void ceph_file_layout_from_legacy(struct ceph_file_layout *fl,
+ 	fl->stripe_count = le32_to_cpu(legacy->fl_stripe_count);
+ 	fl->object_size = le32_to_cpu(legacy->fl_object_size);
+ 	fl->pool_id = le32_to_cpu(legacy->fl_pg_pool);
+-	if (fl->pool_id == 0)
++	if (fl->pool_id == 0 && fl->stripe_unit == 0 &&
++	    fl->stripe_count == 0 && fl->object_size == 0)
+ 		fl->pool_id = -1;
+ }
+ EXPORT_SYMBOL(ceph_file_layout_from_legacy);
+diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c
+index aa5847a16713..1df2c8dac7c5 100644
+--- a/net/netfilter/nf_log.c
++++ b/net/netfilter/nf_log.c
+@@ -420,7 +420,7 @@ static int nf_log_proc_dostring(struct ctl_table *table, int write,
+ 	char buf[NFLOGGER_NAME_LEN];
+ 	int r = 0;
+ 	int tindex = (unsigned long)table->extra1;
+-	struct net *net = current->nsproxy->net_ns;
++	struct net *net = table->extra2;
+ 
+ 	if (write) {
+ 		struct ctl_table tmp = *table;
+@@ -474,7 +474,6 @@ static int netfilter_log_sysctl_init(struct net *net)
+ 				 3, "%d", i);
+ 			nf_log_sysctl_table[i].procname	=
+ 				nf_log_sysctl_fnames[i];
+-			nf_log_sysctl_table[i].data = NULL;
+ 			nf_log_sysctl_table[i].maxlen = NFLOGGER_NAME_LEN;
+ 			nf_log_sysctl_table[i].mode = 0644;
+ 			nf_log_sysctl_table[i].proc_handler =
+@@ -484,6 +483,9 @@ static int netfilter_log_sysctl_init(struct net *net)
+ 		}
+ 	}
+ 
++	for (i = NFPROTO_UNSPEC; i < NFPROTO_NUMPROTO; i++)
++		table[i].extra2 = net;
++
+ 	net->nf.nf_log_dir_header = register_net_sysctl(net,
+ 						"net/netfilter/nf_log",
+ 						table);
+diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
+index 892b5e1d9b09..2761377dcc17 100644
+--- a/net/sunrpc/xprtrdma/frwr_ops.c
++++ b/net/sunrpc/xprtrdma/frwr_ops.c
+@@ -44,18 +44,20 @@
+  * being done.
+  *
+  * When the underlying transport disconnects, MRs are left in one of
+- * three states:
++ * four states:
+  *
+  * INVALID:	The MR was not in use before the QP entered ERROR state.
+- *		(Or, the LOCAL_INV WR has not completed or flushed yet).
+- *
+- * STALE:	The MR was being registered or unregistered when the QP
+- *		entered ERROR state, and the pending WR was flushed.
+  *
+  * VALID:	The MR was registered before the QP entered ERROR state.
+  *
+- * When frwr_op_map encounters STALE and VALID MRs, they are recovered
+- * with ib_dereg_mr and then are re-initialized. Beause MR recovery
++ * FLUSHED_FR:	The MR was being registered when the QP entered ERROR
++ *		state, and the pending WR was flushed.
++ *
++ * FLUSHED_LI:	The MR was being invalidated when the QP entered ERROR
++ *		state, and the pending WR was flushed.
++ *
++ * When frwr_op_map encounters FLUSHED and VALID MRs, they are recovered
++ * with ib_dereg_mr and then are re-initialized. Because MR recovery
+  * allocates fresh resources, it is deferred to a workqueue, and the
+  * recovered MRs are placed back on the rb_mws list when recovery is
+  * complete. frwr_op_map allocates another MR for the current RPC while
+@@ -175,12 +177,15 @@ __frwr_reset_mr(struct rpcrdma_ia *ia, struct rpcrdma_mw *r)
+ static void
+ frwr_op_recover_mr(struct rpcrdma_mw *mw)
+ {
++	enum rpcrdma_frmr_state state = mw->frmr.fr_state;
+ 	struct rpcrdma_xprt *r_xprt = mw->mw_xprt;
+ 	struct rpcrdma_ia *ia = &r_xprt->rx_ia;
+ 	int rc;
+ 
+ 	rc = __frwr_reset_mr(ia, mw);
+-	ib_dma_unmap_sg(ia->ri_device, mw->mw_sg, mw->mw_nents, mw->mw_dir);
++	if (state != FRMR_FLUSHED_LI)
++		ib_dma_unmap_sg(ia->ri_device,
++				mw->mw_sg, mw->mw_nents, mw->mw_dir);
+ 	if (rc)
+ 		goto out_release;
+ 
+@@ -261,10 +266,8 @@ frwr_op_maxpages(struct rpcrdma_xprt *r_xprt)
+ }
+ 
+ static void
+-__frwr_sendcompletion_flush(struct ib_wc *wc, struct rpcrdma_frmr *frmr,
+-			    const char *wr)
++__frwr_sendcompletion_flush(struct ib_wc *wc, const char *wr)
+ {
+-	frmr->fr_state = FRMR_IS_STALE;
+ 	if (wc->status != IB_WC_WR_FLUSH_ERR)
+ 		pr_err("rpcrdma: %s: %s (%u/0x%x)\n",
+ 		       wr, ib_wc_status_msg(wc->status),
+@@ -287,7 +290,8 @@ frwr_wc_fastreg(struct ib_cq *cq, struct ib_wc *wc)
+ 	if (wc->status != IB_WC_SUCCESS) {
+ 		cqe = wc->wr_cqe;
+ 		frmr = container_of(cqe, struct rpcrdma_frmr, fr_cqe);
+-		__frwr_sendcompletion_flush(wc, frmr, "fastreg");
++		frmr->fr_state = FRMR_FLUSHED_FR;
++		__frwr_sendcompletion_flush(wc, "fastreg");
+ 	}
+ }
+ 
+@@ -307,7 +311,8 @@ frwr_wc_localinv(struct ib_cq *cq, struct ib_wc *wc)
+ 	if (wc->status != IB_WC_SUCCESS) {
+ 		cqe = wc->wr_cqe;
+ 		frmr = container_of(cqe, struct rpcrdma_frmr, fr_cqe);
+-		__frwr_sendcompletion_flush(wc, frmr, "localinv");
++		frmr->fr_state = FRMR_FLUSHED_LI;
++		__frwr_sendcompletion_flush(wc, "localinv");
+ 	}
+ }
+ 
+@@ -327,9 +332,11 @@ frwr_wc_localinv_wake(struct ib_cq *cq, struct ib_wc *wc)
+ 	/* WARNING: Only wr_cqe and status are reliable at this point */
+ 	cqe = wc->wr_cqe;
+ 	frmr = container_of(cqe, struct rpcrdma_frmr, fr_cqe);
+-	if (wc->status != IB_WC_SUCCESS)
+-		__frwr_sendcompletion_flush(wc, frmr, "localinv");
+-	complete_all(&frmr->fr_linv_done);
++	if (wc->status != IB_WC_SUCCESS) {
++		frmr->fr_state = FRMR_FLUSHED_LI;
++		__frwr_sendcompletion_flush(wc, "localinv");
++	}
++	complete(&frmr->fr_linv_done);
+ }
+ 
+ /* Post a REG_MR Work Request to register a memory region
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+index a2a7519b0f23..cd0c5581498c 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+@@ -129,7 +129,7 @@ static int svc_rdma_bc_sendto(struct svcxprt_rdma *rdma,
+ 		ret = -EIO;
+ 		goto out_unmap;
+ 	}
+-	atomic_inc(&rdma->sc_dma_used);
++	svc_rdma_count_mappings(rdma, ctxt);
+ 
+ 	memset(&send_wr, 0, sizeof(send_wr));
+ 	ctxt->cqe.done = svc_rdma_wc_send;
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index 2c25606f2561..ad1df979b3f0 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -159,7 +159,7 @@ int rdma_read_chunk_lcl(struct svcxprt_rdma *xprt,
+ 					   ctxt->sge[pno].addr);
+ 		if (ret)
+ 			goto err;
+-		atomic_inc(&xprt->sc_dma_used);
++		svc_rdma_count_mappings(xprt, ctxt);
+ 
+ 		ctxt->sge[pno].lkey = xprt->sc_pd->local_dma_lkey;
+ 		ctxt->sge[pno].length = len;
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+index 54d533300620..3b95b19fcf72 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+@@ -280,7 +280,7 @@ static int send_write(struct svcxprt_rdma *xprt, struct svc_rqst *rqstp,
+ 		if (ib_dma_mapping_error(xprt->sc_cm_id->device,
+ 					 sge[sge_no].addr))
+ 			goto err;
+-		atomic_inc(&xprt->sc_dma_used);
++		svc_rdma_count_mappings(xprt, ctxt);
+ 		sge[sge_no].lkey = xprt->sc_pd->local_dma_lkey;
+ 		ctxt->count++;
+ 		sge_off = 0;
+@@ -489,7 +489,7 @@ static int send_reply(struct svcxprt_rdma *rdma,
+ 			    ctxt->sge[0].length, DMA_TO_DEVICE);
+ 	if (ib_dma_mapping_error(rdma->sc_cm_id->device, ctxt->sge[0].addr))
+ 		goto err;
+-	atomic_inc(&rdma->sc_dma_used);
++	svc_rdma_count_mappings(rdma, ctxt);
+ 
+ 	ctxt->direction = DMA_TO_DEVICE;
+ 
+@@ -505,7 +505,7 @@ static int send_reply(struct svcxprt_rdma *rdma,
+ 		if (ib_dma_mapping_error(rdma->sc_cm_id->device,
+ 					 ctxt->sge[sge_no].addr))
+ 			goto err;
+-		atomic_inc(&rdma->sc_dma_used);
++		svc_rdma_count_mappings(rdma, ctxt);
+ 		ctxt->sge[sge_no].lkey = rdma->sc_pd->local_dma_lkey;
+ 		ctxt->sge[sge_no].length = sge_bytes;
+ 	}
+@@ -523,23 +523,9 @@ static int send_reply(struct svcxprt_rdma *rdma,
+ 		ctxt->pages[page_no+1] = rqstp->rq_respages[page_no];
+ 		ctxt->count++;
+ 		rqstp->rq_respages[page_no] = NULL;
+-		/*
+-		 * If there are more pages than SGE, terminate SGE
+-		 * list so that svc_rdma_unmap_dma doesn't attempt to
+-		 * unmap garbage.
+-		 */
+-		if (page_no+1 >= sge_no)
+-			ctxt->sge[page_no+1].length = 0;
+ 	}
+ 	rqstp->rq_next_page = rqstp->rq_respages + 1;
+ 
+-	/* The loop above bumps sc_dma_used for each sge. The
+-	 * xdr_buf.tail gets a separate sge, but resides in the
+-	 * same page as xdr_buf.head. Don't count it twice.
+-	 */
+-	if (sge_no > ctxt->count)
+-		atomic_dec(&rdma->sc_dma_used);
+-
+ 	if (sge_no > rdma->sc_max_sge) {
+ 		pr_err("svcrdma: Too many sges (%d)\n", sge_no);
+ 		goto err;
+@@ -635,7 +621,7 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
+ 	ret = send_reply(rdma, rqstp, res_page, rdma_resp, vec,
+ 			 inline_bytes);
+ 	if (ret < 0)
+-		goto err1;
++		goto err0;
+ 
+ 	svc_rdma_put_req_map(rdma, vec);
+ 	dprintk("svcrdma: send_reply returns %d\n", ret);
+@@ -692,7 +678,7 @@ void svc_rdma_send_error(struct svcxprt_rdma *xprt, struct rpcrdma_msg *rmsgp,
+ 		svc_rdma_put_context(ctxt, 1);
+ 		return;
+ 	}
+-	atomic_inc(&xprt->sc_dma_used);
++	svc_rdma_count_mappings(xprt, ctxt);
+ 
+ 	/* Prepare SEND WR */
+ 	memset(&err_wr, 0, sizeof(err_wr));
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index dd9440137834..924271c9ef3e 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -198,6 +198,7 @@ struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *xprt)
+ 
+ out:
+ 	ctxt->count = 0;
++	ctxt->mapped_sges = 0;
+ 	ctxt->frmr = NULL;
+ 	return ctxt;
+ 
+@@ -221,22 +222,27 @@ out_empty:
+ void svc_rdma_unmap_dma(struct svc_rdma_op_ctxt *ctxt)
+ {
+ 	struct svcxprt_rdma *xprt = ctxt->xprt;
+-	int i;
+-	for (i = 0; i < ctxt->count && ctxt->sge[i].length; i++) {
++	struct ib_device *device = xprt->sc_cm_id->device;
++	u32 lkey = xprt->sc_pd->local_dma_lkey;
++	unsigned int i, count;
++
++	for (count = 0, i = 0; i < ctxt->mapped_sges; i++) {
+ 		/*
+ 		 * Unmap the DMA addr in the SGE if the lkey matches
+ 		 * the local_dma_lkey, otherwise, ignore it since it is
+ 		 * an FRMR lkey and will be unmapped later when the
+ 		 * last WR that uses it completes.
+ 		 */
+-		if (ctxt->sge[i].lkey == xprt->sc_pd->local_dma_lkey) {
+-			atomic_dec(&xprt->sc_dma_used);
+-			ib_dma_unmap_page(xprt->sc_cm_id->device,
++		if (ctxt->sge[i].lkey == lkey) {
++			count++;
++			ib_dma_unmap_page(device,
+ 					    ctxt->sge[i].addr,
+ 					    ctxt->sge[i].length,
+ 					    ctxt->direction);
+ 		}
+ 	}
++	ctxt->mapped_sges = 0;
++	atomic_sub(count, &xprt->sc_dma_used);
+ }
+ 
+ void svc_rdma_put_context(struct svc_rdma_op_ctxt *ctxt, int free_pages)
+@@ -600,7 +606,7 @@ int svc_rdma_post_recv(struct svcxprt_rdma *xprt, gfp_t flags)
+ 				     DMA_FROM_DEVICE);
+ 		if (ib_dma_mapping_error(xprt->sc_cm_id->device, pa))
+ 			goto err_put_ctxt;
+-		atomic_inc(&xprt->sc_dma_used);
++		svc_rdma_count_mappings(xprt, ctxt);
+ 		ctxt->sge[sge_no].addr = pa;
+ 		ctxt->sge[sge_no].length = PAGE_SIZE;
+ 		ctxt->sge[sge_no].lkey = xprt->sc_pd->local_dma_lkey;
+diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
+index a71b0f5897d8..edc03445beed 100644
+--- a/net/sunrpc/xprtrdma/xprt_rdma.h
++++ b/net/sunrpc/xprtrdma/xprt_rdma.h
+@@ -207,7 +207,8 @@ struct rpcrdma_rep {
+ enum rpcrdma_frmr_state {
+ 	FRMR_IS_INVALID,	/* ready to be used */
+ 	FRMR_IS_VALID,		/* in use */
+-	FRMR_IS_STALE,		/* failed completion */
++	FRMR_FLUSHED_FR,	/* flushed FASTREG WR */
++	FRMR_FLUSHED_LI,	/* flushed LOCALINV WR */
+ };
+ 
+ struct rpcrdma_frmr {
+diff --git a/sound/core/info.c b/sound/core/info.c
+index 895362a696c9..8ab72e0f5932 100644
+--- a/sound/core/info.c
++++ b/sound/core/info.c
+@@ -325,10 +325,15 @@ static ssize_t snd_info_text_entry_write(struct file *file,
+ 	size_t next;
+ 	int err = 0;
+ 
++	if (!entry->c.text.write)
++		return -EIO;
+ 	pos = *offset;
+ 	if (!valid_pos(pos, count))
+ 		return -EIO;
+ 	next = pos + count;
++	/* don't handle too large text inputs */
++	if (next > 16 * 1024)
++		return -EIO;
+ 	mutex_lock(&entry->access);
+ 	buf = data->wbuffer;
+ 	if (!buf) {
+@@ -366,7 +371,9 @@ static int snd_info_seq_show(struct seq_file *seq, void *p)
+ 	struct snd_info_private_data *data = seq->private;
+ 	struct snd_info_entry *entry = data->entry;
+ 
+-	if (entry->c.text.read) {
++	if (!entry->c.text.read) {
++		return -EIO;
++	} else {
+ 		data->rbuffer->buffer = (char *)seq; /* XXX hack! */
+ 		entry->c.text.read(entry, data->rbuffer);
+ 	}
+diff --git a/sound/soc/codecs/cs4270.c b/sound/soc/codecs/cs4270.c
+index e07807d96b68..3670086b9227 100644
+--- a/sound/soc/codecs/cs4270.c
++++ b/sound/soc/codecs/cs4270.c
+@@ -148,11 +148,11 @@ SND_SOC_DAPM_OUTPUT("AOUTR"),
+ };
+ 
+ static const struct snd_soc_dapm_route cs4270_dapm_routes[] = {
+-	{ "Capture", NULL, "AINA" },
+-	{ "Capture", NULL, "AINB" },
++	{ "Capture", NULL, "AINL" },
++	{ "Capture", NULL, "AINR" },
+ 
+-	{ "AOUTA", NULL, "Playback" },
+-	{ "AOUTB", NULL, "Playback" },
++	{ "AOUTL", NULL, "Playback" },
++	{ "AOUTR", NULL, "Playback" },
+ };
+ 
+ /**
+diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
+index e3e764167765..7b7a380b1245 100644
+--- a/sound/soc/intel/skylake/skl.c
++++ b/sound/soc/intel/skylake/skl.c
+@@ -785,8 +785,7 @@ static void skl_remove(struct pci_dev *pci)
+ 
+ 	release_firmware(skl->tplg);
+ 
+-	if (pci_dev_run_wake(pci))
+-		pm_runtime_get_noresume(&pci->dev);
++	pm_runtime_get_noresume(&pci->dev);
+ 
+ 	/* codec removal, invoke bus_device_remove */
+ 	snd_hdac_ext_bus_device_remove(ebus);
+diff --git a/sound/soc/sunxi/sun4i-codec.c b/sound/soc/sunxi/sun4i-codec.c
+index 44f170c73b06..03c18db96741 100644
+--- a/sound/soc/sunxi/sun4i-codec.c
++++ b/sound/soc/sunxi/sun4i-codec.c
+@@ -738,11 +738,11 @@ static struct snd_soc_card *sun4i_codec_create_card(struct device *dev)
+ 
+ 	card = devm_kzalloc(dev, sizeof(*card), GFP_KERNEL);
+ 	if (!card)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	card->dai_link = sun4i_codec_create_link(dev, &card->num_links);
+ 	if (!card->dai_link)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	card->dev		= dev;
+ 	card->name		= "sun4i-codec";
+@@ -842,7 +842,8 @@ static int sun4i_codec_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	card = sun4i_codec_create_card(&pdev->dev);
+-	if (!card) {
++	if (IS_ERR(card)) {
++		ret = PTR_ERR(card);
+ 		dev_err(&pdev->dev, "Failed to create our card\n");
+ 		goto err_unregister_codec;
+ 	}
+diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
+index 7aee954b307f..4ad1eac23f66 100644
+--- a/tools/perf/ui/browsers/hists.c
++++ b/tools/perf/ui/browsers/hists.c
+@@ -595,7 +595,8 @@ int hist_browser__run(struct hist_browser *browser, const char *help)
+ 			u64 nr_entries;
+ 			hbt->timer(hbt->arg);
+ 
+-			if (hist_browser__has_filter(browser))
++			if (hist_browser__has_filter(browser) ||
++			    symbol_conf.report_hierarchy)
+ 				hist_browser__update_nr_entries(browser);
+ 
+ 			nr_entries = hist_browser__nr_entries(browser);
+diff --git a/tools/power/cpupower/utils/cpufreq-set.c b/tools/power/cpupower/utils/cpufreq-set.c
+index b4bf76971dc9..1eef0aed6423 100644
+--- a/tools/power/cpupower/utils/cpufreq-set.c
++++ b/tools/power/cpupower/utils/cpufreq-set.c
+@@ -296,7 +296,7 @@ int cmd_freq_set(int argc, char **argv)
+ 			struct cpufreq_affected_cpus *cpus;
+ 
+ 			if (!bitmask_isbitset(cpus_chosen, cpu) ||
+-			    cpupower_is_cpu_online(cpu))
++			    cpupower_is_cpu_online(cpu) != 1)
+ 				continue;
+ 
+ 			cpus = cpufreq_get_related_cpus(cpu);
+@@ -316,10 +316,7 @@ int cmd_freq_set(int argc, char **argv)
+ 	     cpu <= bitmask_last(cpus_chosen); cpu++) {
+ 
+ 		if (!bitmask_isbitset(cpus_chosen, cpu) ||
+-		    cpupower_is_cpu_online(cpu))
+-			continue;
+-
+-		if (cpupower_is_cpu_online(cpu) != 1)
++		    cpupower_is_cpu_online(cpu) != 1)
+ 			continue;
+ 
+ 		printf(_("Setting cpu: %d\n"), cpu);
+diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
+index 3bad3c5ed431..d1b080ca8dc9 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio.c
++++ b/virt/kvm/arm/vgic/vgic-mmio.c
+@@ -453,17 +453,33 @@ struct vgic_io_device *kvm_to_vgic_iodev(const struct kvm_io_device *dev)
+ 	return container_of(dev, struct vgic_io_device, dev);
+ }
+ 
+-static bool check_region(const struct vgic_register_region *region,
++static bool check_region(const struct kvm *kvm,
++			 const struct vgic_register_region *region,
+ 			 gpa_t addr, int len)
+ {
+-	if ((region->access_flags & VGIC_ACCESS_8bit) && len == 1)
+-		return true;
+-	if ((region->access_flags & VGIC_ACCESS_32bit) &&
+-	    len == sizeof(u32) && !(addr & 3))
+-		return true;
+-	if ((region->access_flags & VGIC_ACCESS_64bit) &&
+-	    len == sizeof(u64) && !(addr & 7))
+-		return true;
++	int flags, nr_irqs = kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS;
++
++	switch (len) {
++	case sizeof(u8):
++		flags = VGIC_ACCESS_8bit;
++		break;
++	case sizeof(u32):
++		flags = VGIC_ACCESS_32bit;
++		break;
++	case sizeof(u64):
++		flags = VGIC_ACCESS_64bit;
++		break;
++	default:
++		return false;
++	}
++
++	if ((region->access_flags & flags) && IS_ALIGNED(addr, len)) {
++		if (!region->bits_per_irq)
++			return true;
++
++		/* Do we access a non-allocated IRQ? */
++		return VGIC_ADDR_TO_INTID(addr, region->bits_per_irq) < nr_irqs;
++	}
+ 
+ 	return false;
+ }
+@@ -477,7 +493,7 @@ static int dispatch_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
+ 
+ 	region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions,
+ 				       addr - iodev->base_addr);
+-	if (!region || !check_region(region, addr, len)) {
++	if (!region || !check_region(vcpu->kvm, region, addr, len)) {
+ 		memset(val, 0, len);
+ 		return 0;
+ 	}
+@@ -510,10 +526,7 @@ static int dispatch_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
+ 
+ 	region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions,
+ 				       addr - iodev->base_addr);
+-	if (!region)
+-		return 0;
+-
+-	if (!check_region(region, addr, len))
++	if (!region || !check_region(vcpu->kvm, region, addr, len))
+ 		return 0;
+ 
+ 	switch (iodev->iodev_type) {
+diff --git a/virt/kvm/arm/vgic/vgic-mmio.h b/virt/kvm/arm/vgic/vgic-mmio.h
+index 0b3ecf9d100e..ba63d91d2619 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio.h
++++ b/virt/kvm/arm/vgic/vgic-mmio.h
+@@ -50,15 +50,15 @@ extern struct kvm_io_device_ops kvm_io_gic_ops;
+ #define VGIC_ADDR_IRQ_MASK(bits) (((bits) * 1024 / 8) - 1)
+ 
+ /*
+- * (addr & mask) gives us the byte offset for the INT ID, so we want to
+- * divide this with 'bytes per irq' to get the INT ID, which is given
+- * by '(bits) / 8'.  But we do this with fixed-point-arithmetic and
+- * take advantage of the fact that division by a fraction equals
+- * multiplication with the inverted fraction, and scale up both the
+- * numerator and denominator with 8 to support at most 64 bits per IRQ:
++ * (addr & mask) gives us the _byte_ offset for the INT ID.
++ * We multiply this by 8 the get the _bit_ offset, then divide this by
++ * the number of bits to learn the actual INT ID.
++ * But instead of a division (which requires a "long long div" implementation),
++ * we shift by the binary logarithm of <bits>.
++ * This assumes that <bits> is a power of two.
+  */
+ #define VGIC_ADDR_TO_INTID(addr, bits)  (((addr) & VGIC_ADDR_IRQ_MASK(bits)) * \
+-					64 / (bits) / 8)
++					8 >> ilog2(bits))
+ 
+ /*
+  * Some VGIC registers store per-IRQ information, with a different number



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-11-21 14:50 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-11-21 14:50 UTC (permalink / raw
  To: gentoo-commits

commit:     c8c8fca074336deefaa5af1dbf8bf3b62839878e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Nov 21 14:50:13 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Nov 21 14:50:13 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c8c8fca0

Linux patch 4.8.10

 0000_README             |    4 +
 1009_linux-4.8.10.patch | 4759 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4763 insertions(+)

diff --git a/0000_README b/0000_README
index d5af994..13976e7 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-4.8.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.9
 
+Patch:  1009_linux-4.8.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-4.8.10.patch b/1009_linux-4.8.10.patch
new file mode 100644
index 0000000..7b1d9cf
--- /dev/null
+++ b/1009_linux-4.8.10.patch
@@ -0,0 +1,4759 @@
+diff --git a/Makefile b/Makefile
+index c1519ab85258..7cf2b4985703 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
+index 37a315d0ddd4..a6847fc05a6d 100644
+--- a/arch/sparc/include/asm/uaccess_64.h
++++ b/arch/sparc/include/asm/uaccess_64.h
+@@ -98,7 +98,6 @@ struct exception_table_entry {
+         unsigned int insn, fixup;
+ };
+ 
+-void __ret_efault(void);
+ void __retl_efault(void);
+ 
+ /* Uh, these should become the main single-value transfer routines..
+@@ -205,55 +204,34 @@ int __get_user_bad(void);
+ unsigned long __must_check ___copy_from_user(void *to,
+ 					     const void __user *from,
+ 					     unsigned long size);
+-unsigned long copy_from_user_fixup(void *to, const void __user *from,
+-				   unsigned long size);
+ static inline unsigned long __must_check
+ copy_from_user(void *to, const void __user *from, unsigned long size)
+ {
+-	unsigned long ret;
+-
+ 	check_object_size(to, size, false);
+ 
+-	ret = ___copy_from_user(to, from, size);
+-	if (unlikely(ret))
+-		ret = copy_from_user_fixup(to, from, size);
+-
+-	return ret;
++	return ___copy_from_user(to, from, size);
+ }
+ #define __copy_from_user copy_from_user
+ 
+ unsigned long __must_check ___copy_to_user(void __user *to,
+ 					   const void *from,
+ 					   unsigned long size);
+-unsigned long copy_to_user_fixup(void __user *to, const void *from,
+-				 unsigned long size);
+ static inline unsigned long __must_check
+ copy_to_user(void __user *to, const void *from, unsigned long size)
+ {
+-	unsigned long ret;
+-
+ 	check_object_size(from, size, true);
+ 
+-	ret = ___copy_to_user(to, from, size);
+-	if (unlikely(ret))
+-		ret = copy_to_user_fixup(to, from, size);
+-	return ret;
++	return ___copy_to_user(to, from, size);
+ }
+ #define __copy_to_user copy_to_user
+ 
+ unsigned long __must_check ___copy_in_user(void __user *to,
+ 					   const void __user *from,
+ 					   unsigned long size);
+-unsigned long copy_in_user_fixup(void __user *to, void __user *from,
+-				 unsigned long size);
+ static inline unsigned long __must_check
+ copy_in_user(void __user *to, void __user *from, unsigned long size)
+ {
+-	unsigned long ret = ___copy_in_user(to, from, size);
+-
+-	if (unlikely(ret))
+-		ret = copy_in_user_fixup(to, from, size);
+-	return ret;
++	return ___copy_in_user(to, from, size);
+ }
+ #define __copy_in_user copy_in_user
+ 
+diff --git a/arch/sparc/kernel/head_64.S b/arch/sparc/kernel/head_64.S
+index a076b4249e62..5f1f3ae21657 100644
+--- a/arch/sparc/kernel/head_64.S
++++ b/arch/sparc/kernel/head_64.S
+@@ -922,47 +922,11 @@ prom_tba:	.xword	0
+ tlb_type:	.word	0	/* Must NOT end up in BSS */
+ 	.section	".fixup",#alloc,#execinstr
+ 
+-	.globl	__ret_efault, __retl_efault, __ret_one, __retl_one
+-ENTRY(__ret_efault)
+-	ret
+-	 restore %g0, -EFAULT, %o0
+-ENDPROC(__ret_efault)
+-
+ ENTRY(__retl_efault)
+ 	retl
+ 	 mov	-EFAULT, %o0
+ ENDPROC(__retl_efault)
+ 
+-ENTRY(__retl_one)
+-	retl
+-	 mov	1, %o0
+-ENDPROC(__retl_one)
+-
+-ENTRY(__retl_one_fp)
+-	VISExitHalf
+-	retl
+-	 mov	1, %o0
+-ENDPROC(__retl_one_fp)
+-
+-ENTRY(__ret_one_asi)
+-	wr	%g0, ASI_AIUS, %asi
+-	ret
+-	 restore %g0, 1, %o0
+-ENDPROC(__ret_one_asi)
+-
+-ENTRY(__retl_one_asi)
+-	wr	%g0, ASI_AIUS, %asi
+-	retl
+-	 mov	1, %o0
+-ENDPROC(__retl_one_asi)
+-
+-ENTRY(__retl_one_asi_fp)
+-	wr	%g0, ASI_AIUS, %asi
+-	VISExitHalf
+-	retl
+-	 mov	1, %o0
+-ENDPROC(__retl_one_asi_fp)
+-
+ ENTRY(__retl_o1)
+ 	retl
+ 	 mov	%o1, %o0
+diff --git a/arch/sparc/kernel/jump_label.c b/arch/sparc/kernel/jump_label.c
+index 59bbeff55024..07933b9e9ce0 100644
+--- a/arch/sparc/kernel/jump_label.c
++++ b/arch/sparc/kernel/jump_label.c
+@@ -13,19 +13,30 @@
+ void arch_jump_label_transform(struct jump_entry *entry,
+ 			       enum jump_label_type type)
+ {
+-	u32 val;
+ 	u32 *insn = (u32 *) (unsigned long) entry->code;
++	u32 val;
+ 
+ 	if (type == JUMP_LABEL_JMP) {
+ 		s32 off = (s32)entry->target - (s32)entry->code;
++		bool use_v9_branch = false;
++
++		BUG_ON(off & 3);
+ 
+ #ifdef CONFIG_SPARC64
+-		/* ba,pt %xcc, . + (off << 2) */
+-		val = 0x10680000 | ((u32) off >> 2);
+-#else
+-		/* ba . + (off << 2) */
+-		val = 0x10800000 | ((u32) off >> 2);
++		if (off <= 0xfffff && off >= -0x100000)
++			use_v9_branch = true;
+ #endif
++		if (use_v9_branch) {
++			/* WDISP19 - target is . + immed << 2 */
++			/* ba,pt %xcc, . + off */
++			val = 0x10680000 | (((u32) off >> 2) & 0x7ffff);
++		} else {
++			/* WDISP22 - target is . + immed << 2 */
++			BUG_ON(off > 0x7fffff);
++			BUG_ON(off < -0x800000);
++			/* ba . + off */
++			val = 0x10800000 | (((u32) off >> 2) & 0x3fffff);
++		}
+ 	} else {
+ 		val = 0x01000000;
+ 	}
+diff --git a/arch/sparc/kernel/sparc_ksyms_64.c b/arch/sparc/kernel/sparc_ksyms_64.c
+index 9e034f29dcc5..20ffb052fe38 100644
+--- a/arch/sparc/kernel/sparc_ksyms_64.c
++++ b/arch/sparc/kernel/sparc_ksyms_64.c
+@@ -27,7 +27,6 @@ EXPORT_SYMBOL(__flushw_user);
+ EXPORT_SYMBOL_GPL(real_hard_smp_processor_id);
+ 
+ /* from head_64.S */
+-EXPORT_SYMBOL(__ret_efault);
+ EXPORT_SYMBOL(tlb_type);
+ EXPORT_SYMBOL(sun4v_chip_type);
+ EXPORT_SYMBOL(prom_root_node);
+diff --git a/arch/sparc/lib/GENcopy_from_user.S b/arch/sparc/lib/GENcopy_from_user.S
+index b7d0bd6b1406..69a439fa2fc1 100644
+--- a/arch/sparc/lib/GENcopy_from_user.S
++++ b/arch/sparc/lib/GENcopy_from_user.S
+@@ -3,11 +3,11 @@
+  * Copyright (C) 2007 David S. Miller (davem@davemloft.net)
+  */
+ 
+-#define EX_LD(x)		\
++#define EX_LD(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one;	\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/GENcopy_to_user.S b/arch/sparc/lib/GENcopy_to_user.S
+index 780550e1afc7..9947427ce354 100644
+--- a/arch/sparc/lib/GENcopy_to_user.S
++++ b/arch/sparc/lib/GENcopy_to_user.S
+@@ -3,11 +3,11 @@
+  * Copyright (C) 2007 David S. Miller (davem@davemloft.net)
+  */
+ 
+-#define EX_ST(x)		\
++#define EX_ST(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one;	\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/GENmemcpy.S b/arch/sparc/lib/GENmemcpy.S
+index 89358ee94851..059ea24ad73d 100644
+--- a/arch/sparc/lib/GENmemcpy.S
++++ b/arch/sparc/lib/GENmemcpy.S
+@@ -4,21 +4,18 @@
+  */
+ 
+ #ifdef __KERNEL__
++#include <linux/linkage.h>
+ #define GLOBAL_SPARE	%g7
+ #else
+ #define GLOBAL_SPARE	%g5
+ #endif
+ 
+ #ifndef EX_LD
+-#define EX_LD(x)	x
++#define EX_LD(x,y)	x
+ #endif
+ 
+ #ifndef EX_ST
+-#define EX_ST(x)	x
+-#endif
+-
+-#ifndef EX_RETVAL
+-#define EX_RETVAL(x)	x
++#define EX_ST(x,y)	x
+ #endif
+ 
+ #ifndef LOAD
+@@ -45,6 +42,29 @@
+ 	.register	%g3,#scratch
+ 
+ 	.text
++
++#ifndef EX_RETVAL
++#define EX_RETVAL(x)	x
++ENTRY(GEN_retl_o4_1)
++	add	%o4, %o2, %o4
++	retl
++	 add	%o4, 1, %o0
++ENDPROC(GEN_retl_o4_1)
++ENTRY(GEN_retl_g1_8)
++	add	%g1, %o2, %g1
++	retl
++	 add	%g1, 8, %o0
++ENDPROC(GEN_retl_g1_8)
++ENTRY(GEN_retl_o2_4)
++	retl
++	 add	%o2, 4, %o0
++ENDPROC(GEN_retl_o2_4)
++ENTRY(GEN_retl_o2_1)
++	retl
++	 add	%o2, 1, %o0
++ENDPROC(GEN_retl_o2_1)
++#endif
++
+ 	.align		64
+ 
+ 	.globl	FUNC_NAME
+@@ -73,8 +93,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	sub		%g0, %o4, %o4
+ 	sub		%o2, %o4, %o2
+ 1:	subcc		%o4, 1, %o4
+-	EX_LD(LOAD(ldub, %o1, %g1))
+-	EX_ST(STORE(stb, %g1, %o0))
++	EX_LD(LOAD(ldub, %o1, %g1),GEN_retl_o4_1)
++	EX_ST(STORE(stb, %g1, %o0),GEN_retl_o4_1)
+ 	add		%o1, 1, %o1
+ 	bne,pt		%XCC, 1b
+ 	add		%o0, 1, %o0
+@@ -82,8 +102,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	andn		%o2, 0x7, %g1
+ 	sub		%o2, %g1, %o2
+ 1:	subcc		%g1, 0x8, %g1
+-	EX_LD(LOAD(ldx, %o1, %g2))
+-	EX_ST(STORE(stx, %g2, %o0))
++	EX_LD(LOAD(ldx, %o1, %g2),GEN_retl_g1_8)
++	EX_ST(STORE(stx, %g2, %o0),GEN_retl_g1_8)
+ 	add		%o1, 0x8, %o1
+ 	bne,pt		%XCC, 1b
+ 	 add		%o0, 0x8, %o0
+@@ -100,8 +120,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 1:
+ 	subcc		%o2, 4, %o2
+-	EX_LD(LOAD(lduw, %o1, %g1))
+-	EX_ST(STORE(stw, %g1, %o1 + %o3))
++	EX_LD(LOAD(lduw, %o1, %g1),GEN_retl_o2_4)
++	EX_ST(STORE(stw, %g1, %o1 + %o3),GEN_retl_o2_4)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%o1, 4, %o1
+ 
+@@ -111,8 +131,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	.align		32
+ 90:
+ 	subcc		%o2, 1, %o2
+-	EX_LD(LOAD(ldub, %o1, %g1))
+-	EX_ST(STORE(stb, %g1, %o1 + %o3))
++	EX_LD(LOAD(ldub, %o1, %g1),GEN_retl_o2_1)
++	EX_ST(STORE(stb, %g1, %o1 + %o3),GEN_retl_o2_1)
+ 	bgu,pt		%XCC, 90b
+ 	 add		%o1, 1, %o1
+ 	retl
+diff --git a/arch/sparc/lib/Makefile b/arch/sparc/lib/Makefile
+index 3269b0234093..4f2384a4286a 100644
+--- a/arch/sparc/lib/Makefile
++++ b/arch/sparc/lib/Makefile
+@@ -38,7 +38,7 @@ lib-$(CONFIG_SPARC64) +=  NG4patch.o NG4copy_page.o NG4clear_page.o NG4memset.o
+ lib-$(CONFIG_SPARC64) += GENmemcpy.o GENcopy_from_user.o GENcopy_to_user.o
+ lib-$(CONFIG_SPARC64) += GENpatch.o GENpage.o GENbzero.o
+ 
+-lib-$(CONFIG_SPARC64) += copy_in_user.o user_fixup.o memmove.o
++lib-$(CONFIG_SPARC64) += copy_in_user.o memmove.o
+ lib-$(CONFIG_SPARC64) += mcount.o ipcsum.o xor.o hweight.o ffs.o
+ 
+ obj-$(CONFIG_SPARC64) += iomap.o
+diff --git a/arch/sparc/lib/NG2copy_from_user.S b/arch/sparc/lib/NG2copy_from_user.S
+index d5242b8c4f94..b79a6998d87c 100644
+--- a/arch/sparc/lib/NG2copy_from_user.S
++++ b/arch/sparc/lib/NG2copy_from_user.S
+@@ -3,19 +3,19 @@
+  * Copyright (C) 2007 David S. Miller (davem@davemloft.net)
+  */
+ 
+-#define EX_LD(x)		\
++#define EX_LD(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_asi;\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+-#define EX_LD_FP(x)		\
++#define EX_LD_FP(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_asi_fp;\
++	.word 98b, y##_fp;	\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/NG2copy_to_user.S b/arch/sparc/lib/NG2copy_to_user.S
+index 4e962d993b10..dcec55f254ab 100644
+--- a/arch/sparc/lib/NG2copy_to_user.S
++++ b/arch/sparc/lib/NG2copy_to_user.S
+@@ -3,19 +3,19 @@
+  * Copyright (C) 2007 David S. Miller (davem@davemloft.net)
+  */
+ 
+-#define EX_ST(x)		\
++#define EX_ST(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_asi;\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+-#define EX_ST_FP(x)		\
++#define EX_ST_FP(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_asi_fp;\
++	.word 98b, y##_fp;	\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/NG2memcpy.S b/arch/sparc/lib/NG2memcpy.S
+index d5f585df2f3f..c629dbd121b6 100644
+--- a/arch/sparc/lib/NG2memcpy.S
++++ b/arch/sparc/lib/NG2memcpy.S
+@@ -4,6 +4,7 @@
+  */
+ 
+ #ifdef __KERNEL__
++#include <linux/linkage.h>
+ #include <asm/visasm.h>
+ #include <asm/asi.h>
+ #define GLOBAL_SPARE	%g7
+@@ -32,21 +33,17 @@
+ #endif
+ 
+ #ifndef EX_LD
+-#define EX_LD(x)	x
++#define EX_LD(x,y)	x
+ #endif
+ #ifndef EX_LD_FP
+-#define EX_LD_FP(x)	x
++#define EX_LD_FP(x,y)	x
+ #endif
+ 
+ #ifndef EX_ST
+-#define EX_ST(x)	x
++#define EX_ST(x,y)	x
+ #endif
+ #ifndef EX_ST_FP
+-#define EX_ST_FP(x)	x
+-#endif
+-
+-#ifndef EX_RETVAL
+-#define EX_RETVAL(x)	x
++#define EX_ST_FP(x,y)	x
+ #endif
+ 
+ #ifndef LOAD
+@@ -140,45 +137,110 @@
+ 	fsrc2		%x6, %f12; \
+ 	fsrc2		%x7, %f14;
+ #define FREG_LOAD_1(base, x0) \
+-	EX_LD_FP(LOAD(ldd, base + 0x00, %x0))
++	EX_LD_FP(LOAD(ldd, base + 0x00, %x0), NG2_retl_o2_plus_g1)
+ #define FREG_LOAD_2(base, x0, x1) \
+-	EX_LD_FP(LOAD(ldd, base + 0x00, %x0)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x08, %x1));
++	EX_LD_FP(LOAD(ldd, base + 0x00, %x0), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x08, %x1), NG2_retl_o2_plus_g1);
+ #define FREG_LOAD_3(base, x0, x1, x2) \
+-	EX_LD_FP(LOAD(ldd, base + 0x00, %x0)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x08, %x1)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x10, %x2));
++	EX_LD_FP(LOAD(ldd, base + 0x00, %x0), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x08, %x1), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x10, %x2), NG2_retl_o2_plus_g1);
+ #define FREG_LOAD_4(base, x0, x1, x2, x3) \
+-	EX_LD_FP(LOAD(ldd, base + 0x00, %x0)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x08, %x1)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x10, %x2)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x18, %x3));
++	EX_LD_FP(LOAD(ldd, base + 0x00, %x0), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x08, %x1), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x10, %x2), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x18, %x3), NG2_retl_o2_plus_g1);
+ #define FREG_LOAD_5(base, x0, x1, x2, x3, x4) \
+-	EX_LD_FP(LOAD(ldd, base + 0x00, %x0)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x08, %x1)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x10, %x2)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x18, %x3)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x20, %x4));
++	EX_LD_FP(LOAD(ldd, base + 0x00, %x0), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x08, %x1), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x10, %x2), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x18, %x3), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x20, %x4), NG2_retl_o2_plus_g1);
+ #define FREG_LOAD_6(base, x0, x1, x2, x3, x4, x5) \
+-	EX_LD_FP(LOAD(ldd, base + 0x00, %x0)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x08, %x1)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x10, %x2)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x18, %x3)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x20, %x4)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x28, %x5));
++	EX_LD_FP(LOAD(ldd, base + 0x00, %x0), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x08, %x1), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x10, %x2), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x18, %x3), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x20, %x4), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x28, %x5), NG2_retl_o2_plus_g1);
+ #define FREG_LOAD_7(base, x0, x1, x2, x3, x4, x5, x6) \
+-	EX_LD_FP(LOAD(ldd, base + 0x00, %x0)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x08, %x1)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x10, %x2)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x18, %x3)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x20, %x4)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x28, %x5)); \
+-	EX_LD_FP(LOAD(ldd, base + 0x30, %x6));
++	EX_LD_FP(LOAD(ldd, base + 0x00, %x0), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x08, %x1), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x10, %x2), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x18, %x3), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x20, %x4), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x28, %x5), NG2_retl_o2_plus_g1); \
++	EX_LD_FP(LOAD(ldd, base + 0x30, %x6), NG2_retl_o2_plus_g1);
+ 
+ 	.register	%g2,#scratch
+ 	.register	%g3,#scratch
+ 
+ 	.text
++#ifndef EX_RETVAL
++#define EX_RETVAL(x)	x
++__restore_fp:
++	VISExitHalf
++__restore_asi:
++	retl
++	 wr	%g0, ASI_AIUS, %asi
++ENTRY(NG2_retl_o2)
++	ba,pt	%xcc, __restore_asi
++	 mov	%o2, %o0
++ENDPROC(NG2_retl_o2)
++ENTRY(NG2_retl_o2_plus_1)
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, 1, %o0
++ENDPROC(NG2_retl_o2_plus_1)
++ENTRY(NG2_retl_o2_plus_4)
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, 4, %o0
++ENDPROC(NG2_retl_o2_plus_4)
++ENTRY(NG2_retl_o2_plus_8)
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, 8, %o0
++ENDPROC(NG2_retl_o2_plus_8)
++ENTRY(NG2_retl_o2_plus_o4_plus_1)
++	add	%o4, 1, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG2_retl_o2_plus_o4_plus_1)
++ENTRY(NG2_retl_o2_plus_o4_plus_8)
++	add	%o4, 8, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG2_retl_o2_plus_o4_plus_8)
++ENTRY(NG2_retl_o2_plus_o4_plus_16)
++	add	%o4, 16, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG2_retl_o2_plus_o4_plus_16)
++ENTRY(NG2_retl_o2_plus_g1_fp)
++	ba,pt	%xcc, __restore_fp
++	 add	%o2, %g1, %o0
++ENDPROC(NG2_retl_o2_plus_g1_fp)
++ENTRY(NG2_retl_o2_plus_g1_plus_64_fp)
++	add	%g1, 64, %g1
++	ba,pt	%xcc, __restore_fp
++	 add	%o2, %g1, %o0
++ENDPROC(NG2_retl_o2_plus_g1_plus_64_fp)
++ENTRY(NG2_retl_o2_plus_g1_plus_1)
++	add	%g1, 1, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %g1, %o0
++ENDPROC(NG2_retl_o2_plus_g1_plus_1)
++ENTRY(NG2_retl_o2_and_7_plus_o4)
++	and	%o2, 7, %o2
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG2_retl_o2_and_7_plus_o4)
++ENTRY(NG2_retl_o2_and_7_plus_o4_plus_8)
++	and	%o2, 7, %o2
++	add	%o4, 8, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG2_retl_o2_and_7_plus_o4_plus_8)
++#endif
++
+ 	.align		64
+ 
+ 	.globl	FUNC_NAME
+@@ -230,8 +292,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	sub		%g0, %o4, %o4	! bytes to align dst
+ 	sub		%o2, %o4, %o2
+ 1:	subcc		%o4, 1, %o4
+-	EX_LD(LOAD(ldub, %o1, %g1))
+-	EX_ST(STORE(stb, %g1, %o0))
++	EX_LD(LOAD(ldub, %o1, %g1), NG2_retl_o2_plus_o4_plus_1)
++	EX_ST(STORE(stb, %g1, %o0), NG2_retl_o2_plus_o4_plus_1)
+ 	add		%o1, 1, %o1
+ 	bne,pt		%XCC, 1b
+ 	add		%o0, 1, %o0
+@@ -281,11 +343,11 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	 nop
+ 	/* fall through for 0 < low bits < 8 */
+ 110:	sub		%o4, 64, %g2
+-	EX_LD_FP(LOAD_BLK(%g2, %f0))
+-1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3))
+-	EX_LD_FP(LOAD_BLK(%o4, %f16))
++	EX_LD_FP(LOAD_BLK(%g2, %f0), NG2_retl_o2_plus_g1)
++1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3), NG2_retl_o2_plus_g1)
++	EX_LD_FP(LOAD_BLK(%o4, %f16), NG2_retl_o2_plus_g1)
+ 	FREG_FROB(f0, f2, f4, f6, f8, f10, f12, f14, f16)
+-	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3))
++	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3), NG2_retl_o2_plus_g1)
+ 	FREG_MOVE_8(f16, f18, f20, f22, f24, f26, f28, f30)
+ 	subcc		%g1, 64, %g1
+ 	add		%o4, 64, %o4
+@@ -296,10 +358,10 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 120:	sub		%o4, 56, %g2
+ 	FREG_LOAD_7(%g2, f0, f2, f4, f6, f8, f10, f12)
+-1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3))
+-	EX_LD_FP(LOAD_BLK(%o4, %f16))
++1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3), NG2_retl_o2_plus_g1)
++	EX_LD_FP(LOAD_BLK(%o4, %f16), NG2_retl_o2_plus_g1)
+ 	FREG_FROB(f0, f2, f4, f6, f8, f10, f12, f16, f18)
+-	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3))
++	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3), NG2_retl_o2_plus_g1)
+ 	FREG_MOVE_7(f18, f20, f22, f24, f26, f28, f30)
+ 	subcc		%g1, 64, %g1
+ 	add		%o4, 64, %o4
+@@ -310,10 +372,10 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 130:	sub		%o4, 48, %g2
+ 	FREG_LOAD_6(%g2, f0, f2, f4, f6, f8, f10)
+-1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3))
+-	EX_LD_FP(LOAD_BLK(%o4, %f16))
++1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3), NG2_retl_o2_plus_g1)
++	EX_LD_FP(LOAD_BLK(%o4, %f16), NG2_retl_o2_plus_g1)
+ 	FREG_FROB(f0, f2, f4, f6, f8, f10, f16, f18, f20)
+-	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3))
++	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3), NG2_retl_o2_plus_g1)
+ 	FREG_MOVE_6(f20, f22, f24, f26, f28, f30)
+ 	subcc		%g1, 64, %g1
+ 	add		%o4, 64, %o4
+@@ -324,10 +386,10 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 140:	sub		%o4, 40, %g2
+ 	FREG_LOAD_5(%g2, f0, f2, f4, f6, f8)
+-1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3))
+-	EX_LD_FP(LOAD_BLK(%o4, %f16))
++1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3), NG2_retl_o2_plus_g1)
++	EX_LD_FP(LOAD_BLK(%o4, %f16), NG2_retl_o2_plus_g1)
+ 	FREG_FROB(f0, f2, f4, f6, f8, f16, f18, f20, f22)
+-	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3))
++	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3), NG2_retl_o2_plus_g1)
+ 	FREG_MOVE_5(f22, f24, f26, f28, f30)
+ 	subcc		%g1, 64, %g1
+ 	add		%o4, 64, %o4
+@@ -338,10 +400,10 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 150:	sub		%o4, 32, %g2
+ 	FREG_LOAD_4(%g2, f0, f2, f4, f6)
+-1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3))
+-	EX_LD_FP(LOAD_BLK(%o4, %f16))
++1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3), NG2_retl_o2_plus_g1)
++	EX_LD_FP(LOAD_BLK(%o4, %f16), NG2_retl_o2_plus_g1)
+ 	FREG_FROB(f0, f2, f4, f6, f16, f18, f20, f22, f24)
+-	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3))
++	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3), NG2_retl_o2_plus_g1)
+ 	FREG_MOVE_4(f24, f26, f28, f30)
+ 	subcc		%g1, 64, %g1
+ 	add		%o4, 64, %o4
+@@ -352,10 +414,10 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 160:	sub		%o4, 24, %g2
+ 	FREG_LOAD_3(%g2, f0, f2, f4)
+-1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3))
+-	EX_LD_FP(LOAD_BLK(%o4, %f16))
++1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3), NG2_retl_o2_plus_g1)
++	EX_LD_FP(LOAD_BLK(%o4, %f16), NG2_retl_o2_plus_g1)
+ 	FREG_FROB(f0, f2, f4, f16, f18, f20, f22, f24, f26)
+-	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3))
++	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3), NG2_retl_o2_plus_g1)
+ 	FREG_MOVE_3(f26, f28, f30)
+ 	subcc		%g1, 64, %g1
+ 	add		%o4, 64, %o4
+@@ -366,10 +428,10 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 170:	sub		%o4, 16, %g2
+ 	FREG_LOAD_2(%g2, f0, f2)
+-1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3))
+-	EX_LD_FP(LOAD_BLK(%o4, %f16))
++1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3), NG2_retl_o2_plus_g1)
++	EX_LD_FP(LOAD_BLK(%o4, %f16), NG2_retl_o2_plus_g1)
+ 	FREG_FROB(f0, f2, f16, f18, f20, f22, f24, f26, f28)
+-	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3))
++	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3), NG2_retl_o2_plus_g1)
+ 	FREG_MOVE_2(f28, f30)
+ 	subcc		%g1, 64, %g1
+ 	add		%o4, 64, %o4
+@@ -380,10 +442,10 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 180:	sub		%o4, 8, %g2
+ 	FREG_LOAD_1(%g2, f0)
+-1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3))
+-	EX_LD_FP(LOAD_BLK(%o4, %f16))
++1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3), NG2_retl_o2_plus_g1)
++	EX_LD_FP(LOAD_BLK(%o4, %f16), NG2_retl_o2_plus_g1)
+ 	FREG_FROB(f0, f16, f18, f20, f22, f24, f26, f28, f30)
+-	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3))
++	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3), NG2_retl_o2_plus_g1)
+ 	FREG_MOVE_1(f30)
+ 	subcc		%g1, 64, %g1
+ 	add		%o4, 64, %o4
+@@ -393,10 +455,10 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	 nop
+ 
+ 190:
+-1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3))
++1:	EX_ST_FP(STORE_INIT(%g0, %o4 + %g3), NG2_retl_o2_plus_g1)
+ 	subcc		%g1, 64, %g1
+-	EX_LD_FP(LOAD_BLK(%o4, %f0))
+-	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3))
++	EX_LD_FP(LOAD_BLK(%o4, %f0), NG2_retl_o2_plus_g1_plus_64)
++	EX_ST_FP(STORE_BLK(%f0, %o4 + %g3), NG2_retl_o2_plus_g1_plus_64)
+ 	add		%o4, 64, %o4
+ 	bne,pt		%xcc, 1b
+ 	 LOAD(prefetch, %o4 + 64, #one_read)
+@@ -423,28 +485,28 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	andn		%o2, 0xf, %o4
+ 	and		%o2, 0xf, %o2
+ 1:	subcc		%o4, 0x10, %o4
+-	EX_LD(LOAD(ldx, %o1, %o5))
++	EX_LD(LOAD(ldx, %o1, %o5), NG2_retl_o2_plus_o4_plus_16)
+ 	add		%o1, 0x08, %o1
+-	EX_LD(LOAD(ldx, %o1, %g1))
++	EX_LD(LOAD(ldx, %o1, %g1), NG2_retl_o2_plus_o4_plus_16)
+ 	sub		%o1, 0x08, %o1
+-	EX_ST(STORE(stx, %o5, %o1 + GLOBAL_SPARE))
++	EX_ST(STORE(stx, %o5, %o1 + GLOBAL_SPARE), NG2_retl_o2_plus_o4_plus_16)
+ 	add		%o1, 0x8, %o1
+-	EX_ST(STORE(stx, %g1, %o1 + GLOBAL_SPARE))
++	EX_ST(STORE(stx, %g1, %o1 + GLOBAL_SPARE), NG2_retl_o2_plus_o4_plus_8)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%o1, 0x8, %o1
+ 73:	andcc		%o2, 0x8, %g0
+ 	be,pt		%XCC, 1f
+ 	 nop
+ 	sub		%o2, 0x8, %o2
+-	EX_LD(LOAD(ldx, %o1, %o5))
+-	EX_ST(STORE(stx, %o5, %o1 + GLOBAL_SPARE))
++	EX_LD(LOAD(ldx, %o1, %o5), NG2_retl_o2_plus_8)
++	EX_ST(STORE(stx, %o5, %o1 + GLOBAL_SPARE), NG2_retl_o2_plus_8)
+ 	add		%o1, 0x8, %o1
+ 1:	andcc		%o2, 0x4, %g0
+ 	be,pt		%XCC, 1f
+ 	 nop
+ 	sub		%o2, 0x4, %o2
+-	EX_LD(LOAD(lduw, %o1, %o5))
+-	EX_ST(STORE(stw, %o5, %o1 + GLOBAL_SPARE))
++	EX_LD(LOAD(lduw, %o1, %o5), NG2_retl_o2_plus_4)
++	EX_ST(STORE(stw, %o5, %o1 + GLOBAL_SPARE), NG2_retl_o2_plus_4)
+ 	add		%o1, 0x4, %o1
+ 1:	cmp		%o2, 0
+ 	be,pt		%XCC, 85f
+@@ -460,8 +522,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	sub		%o2, %g1, %o2
+ 
+ 1:	subcc		%g1, 1, %g1
+-	EX_LD(LOAD(ldub, %o1, %o5))
+-	EX_ST(STORE(stb, %o5, %o1 + GLOBAL_SPARE))
++	EX_LD(LOAD(ldub, %o1, %o5), NG2_retl_o2_plus_g1_plus_1)
++	EX_ST(STORE(stb, %o5, %o1 + GLOBAL_SPARE), NG2_retl_o2_plus_g1_plus_1)
+ 	bgu,pt		%icc, 1b
+ 	 add		%o1, 1, %o1
+ 
+@@ -477,16 +539,16 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 8:	mov		64, GLOBAL_SPARE
+ 	andn		%o1, 0x7, %o1
+-	EX_LD(LOAD(ldx, %o1, %g2))
++	EX_LD(LOAD(ldx, %o1, %g2), NG2_retl_o2)
+ 	sub		GLOBAL_SPARE, %g1, GLOBAL_SPARE
+ 	andn		%o2, 0x7, %o4
+ 	sllx		%g2, %g1, %g2
+ 1:	add		%o1, 0x8, %o1
+-	EX_LD(LOAD(ldx, %o1, %g3))
++	EX_LD(LOAD(ldx, %o1, %g3), NG2_retl_o2_and_7_plus_o4)
+ 	subcc		%o4, 0x8, %o4
+ 	srlx		%g3, GLOBAL_SPARE, %o5
+ 	or		%o5, %g2, %o5
+-	EX_ST(STORE(stx, %o5, %o0))
++	EX_ST(STORE(stx, %o5, %o0), NG2_retl_o2_and_7_plus_o4_plus_8)
+ 	add		%o0, 0x8, %o0
+ 	bgu,pt		%icc, 1b
+ 	 sllx		%g3, %g1, %g2
+@@ -506,8 +568,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 1:
+ 	subcc		%o2, 4, %o2
+-	EX_LD(LOAD(lduw, %o1, %g1))
+-	EX_ST(STORE(stw, %g1, %o1 + GLOBAL_SPARE))
++	EX_LD(LOAD(lduw, %o1, %g1), NG2_retl_o2_plus_4)
++	EX_ST(STORE(stw, %g1, %o1 + GLOBAL_SPARE), NG2_retl_o2_plus_4)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%o1, 4, %o1
+ 
+@@ -517,8 +579,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	.align		32
+ 90:
+ 	subcc		%o2, 1, %o2
+-	EX_LD(LOAD(ldub, %o1, %g1))
+-	EX_ST(STORE(stb, %g1, %o1 + GLOBAL_SPARE))
++	EX_LD(LOAD(ldub, %o1, %g1), NG2_retl_o2_plus_1)
++	EX_ST(STORE(stb, %g1, %o1 + GLOBAL_SPARE), NG2_retl_o2_plus_1)
+ 	bgu,pt		%XCC, 90b
+ 	 add		%o1, 1, %o1
+ 	retl
+diff --git a/arch/sparc/lib/NG4copy_from_user.S b/arch/sparc/lib/NG4copy_from_user.S
+index 2e8ee7ad07a9..16a286c1a528 100644
+--- a/arch/sparc/lib/NG4copy_from_user.S
++++ b/arch/sparc/lib/NG4copy_from_user.S
+@@ -3,19 +3,19 @@
+  * Copyright (C) 2012 David S. Miller (davem@davemloft.net)
+  */
+ 
+-#define EX_LD(x)		\
++#define EX_LD(x, y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_asi;\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+-#define EX_LD_FP(x)		\
++#define EX_LD_FP(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_asi_fp;\
++	.word 98b, y##_fp;	\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/NG4copy_to_user.S b/arch/sparc/lib/NG4copy_to_user.S
+index be0bf4590df8..6b0276ffc858 100644
+--- a/arch/sparc/lib/NG4copy_to_user.S
++++ b/arch/sparc/lib/NG4copy_to_user.S
+@@ -3,19 +3,19 @@
+  * Copyright (C) 2012 David S. Miller (davem@davemloft.net)
+  */
+ 
+-#define EX_ST(x)		\
++#define EX_ST(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_asi;\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+-#define EX_ST_FP(x)		\
++#define EX_ST_FP(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_asi_fp;\
++	.word 98b, y##_fp;	\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/NG4memcpy.S b/arch/sparc/lib/NG4memcpy.S
+index 8e13ee1f4454..75bb93b1437f 100644
+--- a/arch/sparc/lib/NG4memcpy.S
++++ b/arch/sparc/lib/NG4memcpy.S
+@@ -4,6 +4,7 @@
+  */
+ 
+ #ifdef __KERNEL__
++#include <linux/linkage.h>
+ #include <asm/visasm.h>
+ #include <asm/asi.h>
+ #define GLOBAL_SPARE	%g7
+@@ -46,22 +47,19 @@
+ #endif
+ 
+ #ifndef EX_LD
+-#define EX_LD(x)	x
++#define EX_LD(x,y)	x
+ #endif
+ #ifndef EX_LD_FP
+-#define EX_LD_FP(x)	x
++#define EX_LD_FP(x,y)	x
+ #endif
+ 
+ #ifndef EX_ST
+-#define EX_ST(x)	x
++#define EX_ST(x,y)	x
+ #endif
+ #ifndef EX_ST_FP
+-#define EX_ST_FP(x)	x
++#define EX_ST_FP(x,y)	x
+ #endif
+ 
+-#ifndef EX_RETVAL
+-#define EX_RETVAL(x)	x
+-#endif
+ 
+ #ifndef LOAD
+ #define LOAD(type,addr,dest)	type [addr], dest
+@@ -94,6 +92,158 @@
+ 	.register	%g3,#scratch
+ 
+ 	.text
++#ifndef EX_RETVAL
++#define EX_RETVAL(x)	x
++__restore_asi_fp:
++	VISExitHalf
++__restore_asi:
++	retl
++	 wr	%g0, ASI_AIUS, %asi
++
++ENTRY(NG4_retl_o2)
++	ba,pt	%xcc, __restore_asi
++	 mov	%o2, %o0
++ENDPROC(NG4_retl_o2)
++ENTRY(NG4_retl_o2_plus_1)
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, 1, %o0
++ENDPROC(NG4_retl_o2_plus_1)
++ENTRY(NG4_retl_o2_plus_4)
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, 4, %o0
++ENDPROC(NG4_retl_o2_plus_4)
++ENTRY(NG4_retl_o2_plus_o5)
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o5, %o0
++ENDPROC(NG4_retl_o2_plus_o5)
++ENTRY(NG4_retl_o2_plus_o5_plus_4)
++	add	%o5, 4, %o5
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o5, %o0
++ENDPROC(NG4_retl_o2_plus_o5_plus_4)
++ENTRY(NG4_retl_o2_plus_o5_plus_8)
++	add	%o5, 8, %o5
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o5, %o0
++ENDPROC(NG4_retl_o2_plus_o5_plus_8)
++ENTRY(NG4_retl_o2_plus_o5_plus_16)
++	add	%o5, 16, %o5
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o5, %o0
++ENDPROC(NG4_retl_o2_plus_o5_plus_16)
++ENTRY(NG4_retl_o2_plus_o5_plus_24)
++	add	%o5, 24, %o5
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o5, %o0
++ENDPROC(NG4_retl_o2_plus_o5_plus_24)
++ENTRY(NG4_retl_o2_plus_o5_plus_32)
++	add	%o5, 32, %o5
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o5, %o0
++ENDPROC(NG4_retl_o2_plus_o5_plus_32)
++ENTRY(NG4_retl_o2_plus_g1)
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %g1, %o0
++ENDPROC(NG4_retl_o2_plus_g1)
++ENTRY(NG4_retl_o2_plus_g1_plus_1)
++	add	%g1, 1, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %g1, %o0
++ENDPROC(NG4_retl_o2_plus_g1_plus_1)
++ENTRY(NG4_retl_o2_plus_g1_plus_8)
++	add	%g1, 8, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %g1, %o0
++ENDPROC(NG4_retl_o2_plus_g1_plus_8)
++ENTRY(NG4_retl_o2_plus_o4)
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4)
++ENTRY(NG4_retl_o2_plus_o4_plus_8)
++	add	%o4, 8, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_8)
++ENTRY(NG4_retl_o2_plus_o4_plus_16)
++	add	%o4, 16, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_16)
++ENTRY(NG4_retl_o2_plus_o4_plus_24)
++	add	%o4, 24, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_24)
++ENTRY(NG4_retl_o2_plus_o4_plus_32)
++	add	%o4, 32, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_32)
++ENTRY(NG4_retl_o2_plus_o4_plus_40)
++	add	%o4, 40, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_40)
++ENTRY(NG4_retl_o2_plus_o4_plus_48)
++	add	%o4, 48, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_48)
++ENTRY(NG4_retl_o2_plus_o4_plus_56)
++	add	%o4, 56, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_56)
++ENTRY(NG4_retl_o2_plus_o4_plus_64)
++	add	%o4, 64, %o4
++	ba,pt	%xcc, __restore_asi
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_64)
++ENTRY(NG4_retl_o2_plus_o4_fp)
++	ba,pt	%xcc, __restore_asi_fp
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_fp)
++ENTRY(NG4_retl_o2_plus_o4_plus_8_fp)
++	add	%o4, 8, %o4
++	ba,pt	%xcc, __restore_asi_fp
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_8_fp)
++ENTRY(NG4_retl_o2_plus_o4_plus_16_fp)
++	add	%o4, 16, %o4
++	ba,pt	%xcc, __restore_asi_fp
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_16_fp)
++ENTRY(NG4_retl_o2_plus_o4_plus_24_fp)
++	add	%o4, 24, %o4
++	ba,pt	%xcc, __restore_asi_fp
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_24_fp)
++ENTRY(NG4_retl_o2_plus_o4_plus_32_fp)
++	add	%o4, 32, %o4
++	ba,pt	%xcc, __restore_asi_fp
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_32_fp)
++ENTRY(NG4_retl_o2_plus_o4_plus_40_fp)
++	add	%o4, 40, %o4
++	ba,pt	%xcc, __restore_asi_fp
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_40_fp)
++ENTRY(NG4_retl_o2_plus_o4_plus_48_fp)
++	add	%o4, 48, %o4
++	ba,pt	%xcc, __restore_asi_fp
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_48_fp)
++ENTRY(NG4_retl_o2_plus_o4_plus_56_fp)
++	add	%o4, 56, %o4
++	ba,pt	%xcc, __restore_asi_fp
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_56_fp)
++ENTRY(NG4_retl_o2_plus_o4_plus_64_fp)
++	add	%o4, 64, %o4
++	ba,pt	%xcc, __restore_asi_fp
++	 add	%o2, %o4, %o0
++ENDPROC(NG4_retl_o2_plus_o4_plus_64_fp)
++#endif
+ 	.align		64
+ 
+ 	.globl	FUNC_NAME
+@@ -124,12 +274,13 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	brz,pt		%g1, 51f
+ 	 sub		%o2, %g1, %o2
+ 
+-1:	EX_LD(LOAD(ldub, %o1 + 0x00, %g2))
++
++1:	EX_LD(LOAD(ldub, %o1 + 0x00, %g2), NG4_retl_o2_plus_g1)
+ 	add		%o1, 1, %o1
+ 	subcc		%g1, 1, %g1
+ 	add		%o0, 1, %o0
+ 	bne,pt		%icc, 1b
+-	 EX_ST(STORE(stb, %g2, %o0 - 0x01))
++	 EX_ST(STORE(stb, %g2, %o0 - 0x01), NG4_retl_o2_plus_g1_plus_1)
+ 
+ 51:	LOAD(prefetch, %o1 + 0x040, #n_reads_strong)
+ 	LOAD(prefetch, %o1 + 0x080, #n_reads_strong)
+@@ -154,43 +305,43 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	brz,pt		%g1, .Llarge_aligned
+ 	 sub		%o2, %g1, %o2
+ 
+-1:	EX_LD(LOAD(ldx, %o1 + 0x00, %g2))
++1:	EX_LD(LOAD(ldx, %o1 + 0x00, %g2), NG4_retl_o2_plus_g1)
+ 	add		%o1, 8, %o1
+ 	subcc		%g1, 8, %g1
+ 	add		%o0, 8, %o0
+ 	bne,pt		%icc, 1b
+-	 EX_ST(STORE(stx, %g2, %o0 - 0x08))
++	 EX_ST(STORE(stx, %g2, %o0 - 0x08), NG4_retl_o2_plus_g1_plus_8)
+ 
+ .Llarge_aligned:
+ 	/* len >= 0x80 && src 8-byte aligned && dest 8-byte aligned */
+ 	andn		%o2, 0x3f, %o4
+ 	sub		%o2, %o4, %o2
+ 
+-1:	EX_LD(LOAD(ldx, %o1 + 0x00, %g1))
++1:	EX_LD(LOAD(ldx, %o1 + 0x00, %g1), NG4_retl_o2_plus_o4)
+ 	add		%o1, 0x40, %o1
+-	EX_LD(LOAD(ldx, %o1 - 0x38, %g2))
++	EX_LD(LOAD(ldx, %o1 - 0x38, %g2), NG4_retl_o2_plus_o4)
+ 	subcc		%o4, 0x40, %o4
+-	EX_LD(LOAD(ldx, %o1 - 0x30, %g3))
+-	EX_LD(LOAD(ldx, %o1 - 0x28, GLOBAL_SPARE))
+-	EX_LD(LOAD(ldx, %o1 - 0x20, %o5))
+-	EX_ST(STORE_INIT(%g1, %o0))
++	EX_LD(LOAD(ldx, %o1 - 0x30, %g3), NG4_retl_o2_plus_o4_plus_64)
++	EX_LD(LOAD(ldx, %o1 - 0x28, GLOBAL_SPARE), NG4_retl_o2_plus_o4_plus_64)
++	EX_LD(LOAD(ldx, %o1 - 0x20, %o5), NG4_retl_o2_plus_o4_plus_64)
++	EX_ST(STORE_INIT(%g1, %o0), NG4_retl_o2_plus_o4_plus_64)
+ 	add		%o0, 0x08, %o0
+-	EX_ST(STORE_INIT(%g2, %o0))
++	EX_ST(STORE_INIT(%g2, %o0), NG4_retl_o2_plus_o4_plus_56)
+ 	add		%o0, 0x08, %o0
+-	EX_LD(LOAD(ldx, %o1 - 0x18, %g2))
+-	EX_ST(STORE_INIT(%g3, %o0))
++	EX_LD(LOAD(ldx, %o1 - 0x18, %g2), NG4_retl_o2_plus_o4_plus_48)
++	EX_ST(STORE_INIT(%g3, %o0), NG4_retl_o2_plus_o4_plus_48)
+ 	add		%o0, 0x08, %o0
+-	EX_LD(LOAD(ldx, %o1 - 0x10, %g3))
+-	EX_ST(STORE_INIT(GLOBAL_SPARE, %o0))
++	EX_LD(LOAD(ldx, %o1 - 0x10, %g3), NG4_retl_o2_plus_o4_plus_40)
++	EX_ST(STORE_INIT(GLOBAL_SPARE, %o0), NG4_retl_o2_plus_o4_plus_40)
+ 	add		%o0, 0x08, %o0
+-	EX_LD(LOAD(ldx, %o1 - 0x08, GLOBAL_SPARE))
+-	EX_ST(STORE_INIT(%o5, %o0))
++	EX_LD(LOAD(ldx, %o1 - 0x08, GLOBAL_SPARE), NG4_retl_o2_plus_o4_plus_32)
++	EX_ST(STORE_INIT(%o5, %o0), NG4_retl_o2_plus_o4_plus_32)
+ 	add		%o0, 0x08, %o0
+-	EX_ST(STORE_INIT(%g2, %o0))
++	EX_ST(STORE_INIT(%g2, %o0), NG4_retl_o2_plus_o4_plus_24)
+ 	add		%o0, 0x08, %o0
+-	EX_ST(STORE_INIT(%g3, %o0))
++	EX_ST(STORE_INIT(%g3, %o0), NG4_retl_o2_plus_o4_plus_16)
+ 	add		%o0, 0x08, %o0
+-	EX_ST(STORE_INIT(GLOBAL_SPARE, %o0))
++	EX_ST(STORE_INIT(GLOBAL_SPARE, %o0), NG4_retl_o2_plus_o4_plus_8)
+ 	add		%o0, 0x08, %o0
+ 	bne,pt		%icc, 1b
+ 	 LOAD(prefetch, %o1 + 0x200, #n_reads_strong)
+@@ -216,17 +367,17 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	sub		%o2, %o4, %o2
+ 	alignaddr	%o1, %g0, %g1
+ 	add		%o1, %o4, %o1
+-	EX_LD_FP(LOAD(ldd, %g1 + 0x00, %f0))
+-1:	EX_LD_FP(LOAD(ldd, %g1 + 0x08, %f2))
++	EX_LD_FP(LOAD(ldd, %g1 + 0x00, %f0), NG4_retl_o2_plus_o4)
++1:	EX_LD_FP(LOAD(ldd, %g1 + 0x08, %f2), NG4_retl_o2_plus_o4)
+ 	subcc		%o4, 0x40, %o4
+-	EX_LD_FP(LOAD(ldd, %g1 + 0x10, %f4))
+-	EX_LD_FP(LOAD(ldd, %g1 + 0x18, %f6))
+-	EX_LD_FP(LOAD(ldd, %g1 + 0x20, %f8))
+-	EX_LD_FP(LOAD(ldd, %g1 + 0x28, %f10))
+-	EX_LD_FP(LOAD(ldd, %g1 + 0x30, %f12))
+-	EX_LD_FP(LOAD(ldd, %g1 + 0x38, %f14))
++	EX_LD_FP(LOAD(ldd, %g1 + 0x10, %f4), NG4_retl_o2_plus_o4_plus_64)
++	EX_LD_FP(LOAD(ldd, %g1 + 0x18, %f6), NG4_retl_o2_plus_o4_plus_64)
++	EX_LD_FP(LOAD(ldd, %g1 + 0x20, %f8), NG4_retl_o2_plus_o4_plus_64)
++	EX_LD_FP(LOAD(ldd, %g1 + 0x28, %f10), NG4_retl_o2_plus_o4_plus_64)
++	EX_LD_FP(LOAD(ldd, %g1 + 0x30, %f12), NG4_retl_o2_plus_o4_plus_64)
++	EX_LD_FP(LOAD(ldd, %g1 + 0x38, %f14), NG4_retl_o2_plus_o4_plus_64)
+ 	faligndata	%f0, %f2, %f16
+-	EX_LD_FP(LOAD(ldd, %g1 + 0x40, %f0))
++	EX_LD_FP(LOAD(ldd, %g1 + 0x40, %f0), NG4_retl_o2_plus_o4_plus_64)
+ 	faligndata	%f2, %f4, %f18
+ 	add		%g1, 0x40, %g1
+ 	faligndata	%f4, %f6, %f20
+@@ -235,14 +386,14 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	faligndata	%f10, %f12, %f26
+ 	faligndata	%f12, %f14, %f28
+ 	faligndata	%f14, %f0, %f30
+-	EX_ST_FP(STORE(std, %f16, %o0 + 0x00))
+-	EX_ST_FP(STORE(std, %f18, %o0 + 0x08))
+-	EX_ST_FP(STORE(std, %f20, %o0 + 0x10))
+-	EX_ST_FP(STORE(std, %f22, %o0 + 0x18))
+-	EX_ST_FP(STORE(std, %f24, %o0 + 0x20))
+-	EX_ST_FP(STORE(std, %f26, %o0 + 0x28))
+-	EX_ST_FP(STORE(std, %f28, %o0 + 0x30))
+-	EX_ST_FP(STORE(std, %f30, %o0 + 0x38))
++	EX_ST_FP(STORE(std, %f16, %o0 + 0x00), NG4_retl_o2_plus_o4_plus_64)
++	EX_ST_FP(STORE(std, %f18, %o0 + 0x08), NG4_retl_o2_plus_o4_plus_56)
++	EX_ST_FP(STORE(std, %f20, %o0 + 0x10), NG4_retl_o2_plus_o4_plus_48)
++	EX_ST_FP(STORE(std, %f22, %o0 + 0x18), NG4_retl_o2_plus_o4_plus_40)
++	EX_ST_FP(STORE(std, %f24, %o0 + 0x20), NG4_retl_o2_plus_o4_plus_32)
++	EX_ST_FP(STORE(std, %f26, %o0 + 0x28), NG4_retl_o2_plus_o4_plus_24)
++	EX_ST_FP(STORE(std, %f28, %o0 + 0x30), NG4_retl_o2_plus_o4_plus_16)
++	EX_ST_FP(STORE(std, %f30, %o0 + 0x38), NG4_retl_o2_plus_o4_plus_8)
+ 	add		%o0, 0x40, %o0
+ 	bne,pt		%icc, 1b
+ 	 LOAD(prefetch, %g1 + 0x200, #n_reads_strong)
+@@ -270,37 +421,38 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	andncc		%o2, 0x20 - 1, %o5
+ 	be,pn		%icc, 2f
+ 	 sub		%o2, %o5, %o2
+-1:	EX_LD(LOAD(ldx, %o1 + 0x00, %g1))
+-	EX_LD(LOAD(ldx, %o1 + 0x08, %g2))
+-	EX_LD(LOAD(ldx, %o1 + 0x10, GLOBAL_SPARE))
+-	EX_LD(LOAD(ldx, %o1 + 0x18, %o4))
++1:	EX_LD(LOAD(ldx, %o1 + 0x00, %g1), NG4_retl_o2_plus_o5)
++	EX_LD(LOAD(ldx, %o1 + 0x08, %g2), NG4_retl_o2_plus_o5)
++	EX_LD(LOAD(ldx, %o1 + 0x10, GLOBAL_SPARE), NG4_retl_o2_plus_o5)
++	EX_LD(LOAD(ldx, %o1 + 0x18, %o4), NG4_retl_o2_plus_o5)
+ 	add		%o1, 0x20, %o1
+ 	subcc		%o5, 0x20, %o5
+-	EX_ST(STORE(stx, %g1, %o0 + 0x00))
+-	EX_ST(STORE(stx, %g2, %o0 + 0x08))
+-	EX_ST(STORE(stx, GLOBAL_SPARE, %o0 + 0x10))
+-	EX_ST(STORE(stx, %o4, %o0 + 0x18))
++	EX_ST(STORE(stx, %g1, %o0 + 0x00), NG4_retl_o2_plus_o5_plus_32)
++	EX_ST(STORE(stx, %g2, %o0 + 0x08), NG4_retl_o2_plus_o5_plus_24)
++	EX_ST(STORE(stx, GLOBAL_SPARE, %o0 + 0x10), NG4_retl_o2_plus_o5_plus_24)
++	EX_ST(STORE(stx, %o4, %o0 + 0x18), NG4_retl_o2_plus_o5_plus_8)
+ 	bne,pt		%icc, 1b
+ 	 add		%o0, 0x20, %o0
+ 2:	andcc		%o2, 0x18, %o5
+ 	be,pt		%icc, 3f
+ 	 sub		%o2, %o5, %o2
+-1:	EX_LD(LOAD(ldx, %o1 + 0x00, %g1))
++
++1:	EX_LD(LOAD(ldx, %o1 + 0x00, %g1), NG4_retl_o2_plus_o5)
+ 	add		%o1, 0x08, %o1
+ 	add		%o0, 0x08, %o0
+ 	subcc		%o5, 0x08, %o5
+ 	bne,pt		%icc, 1b
+-	 EX_ST(STORE(stx, %g1, %o0 - 0x08))
++	 EX_ST(STORE(stx, %g1, %o0 - 0x08), NG4_retl_o2_plus_o5_plus_8)
+ 3:	brz,pt		%o2, .Lexit
+ 	 cmp		%o2, 0x04
+ 	bl,pn		%icc, .Ltiny
+ 	 nop
+-	EX_LD(LOAD(lduw, %o1 + 0x00, %g1))
++	EX_LD(LOAD(lduw, %o1 + 0x00, %g1), NG4_retl_o2)
+ 	add		%o1, 0x04, %o1
+ 	add		%o0, 0x04, %o0
+ 	subcc		%o2, 0x04, %o2
+ 	bne,pn		%icc, .Ltiny
+-	 EX_ST(STORE(stw, %g1, %o0 - 0x04))
++	 EX_ST(STORE(stw, %g1, %o0 - 0x04), NG4_retl_o2_plus_4)
+ 	ba,a,pt		%icc, .Lexit
+ .Lmedium_unaligned:
+ 	/* First get dest 8 byte aligned.  */
+@@ -309,12 +461,12 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	brz,pt		%g1, 2f
+ 	 sub		%o2, %g1, %o2
+ 
+-1:	EX_LD(LOAD(ldub, %o1 + 0x00, %g2))
++1:	EX_LD(LOAD(ldub, %o1 + 0x00, %g2), NG4_retl_o2_plus_g1)
+ 	add		%o1, 1, %o1
+ 	subcc		%g1, 1, %g1
+ 	add		%o0, 1, %o0
+ 	bne,pt		%icc, 1b
+-	 EX_ST(STORE(stb, %g2, %o0 - 0x01))
++	 EX_ST(STORE(stb, %g2, %o0 - 0x01), NG4_retl_o2_plus_g1_plus_1)
+ 2:
+ 	and		%o1, 0x7, %g1
+ 	brz,pn		%g1, .Lmedium_noprefetch
+@@ -322,16 +474,16 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	mov		64, %g2
+ 	sub		%g2, %g1, %g2
+ 	andn		%o1, 0x7, %o1
+-	EX_LD(LOAD(ldx, %o1 + 0x00, %o4))
++	EX_LD(LOAD(ldx, %o1 + 0x00, %o4), NG4_retl_o2)
+ 	sllx		%o4, %g1, %o4
+ 	andn		%o2, 0x08 - 1, %o5
+ 	sub		%o2, %o5, %o2
+-1:	EX_LD(LOAD(ldx, %o1 + 0x08, %g3))
++1:	EX_LD(LOAD(ldx, %o1 + 0x08, %g3), NG4_retl_o2_plus_o5)
+ 	add		%o1, 0x08, %o1
+ 	subcc		%o5, 0x08, %o5
+ 	srlx		%g3, %g2, GLOBAL_SPARE
+ 	or		GLOBAL_SPARE, %o4, GLOBAL_SPARE
+-	EX_ST(STORE(stx, GLOBAL_SPARE, %o0 + 0x00))
++	EX_ST(STORE(stx, GLOBAL_SPARE, %o0 + 0x00), NG4_retl_o2_plus_o5_plus_8)
+ 	add		%o0, 0x08, %o0
+ 	bne,pt		%icc, 1b
+ 	 sllx		%g3, %g1, %o4
+@@ -342,17 +494,17 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	ba,pt		%icc, .Lsmall_unaligned
+ 
+ .Ltiny:
+-	EX_LD(LOAD(ldub, %o1 + 0x00, %g1))
++	EX_LD(LOAD(ldub, %o1 + 0x00, %g1), NG4_retl_o2)
+ 	subcc		%o2, 1, %o2
+ 	be,pn		%icc, .Lexit
+-	 EX_ST(STORE(stb, %g1, %o0 + 0x00))
+-	EX_LD(LOAD(ldub, %o1 + 0x01, %g1))
++	 EX_ST(STORE(stb, %g1, %o0 + 0x00), NG4_retl_o2_plus_1)
++	EX_LD(LOAD(ldub, %o1 + 0x01, %g1), NG4_retl_o2)
+ 	subcc		%o2, 1, %o2
+ 	be,pn		%icc, .Lexit
+-	 EX_ST(STORE(stb, %g1, %o0 + 0x01))
+-	EX_LD(LOAD(ldub, %o1 + 0x02, %g1))
++	 EX_ST(STORE(stb, %g1, %o0 + 0x01), NG4_retl_o2_plus_1)
++	EX_LD(LOAD(ldub, %o1 + 0x02, %g1), NG4_retl_o2)
+ 	ba,pt		%icc, .Lexit
+-	 EX_ST(STORE(stb, %g1, %o0 + 0x02))
++	 EX_ST(STORE(stb, %g1, %o0 + 0x02), NG4_retl_o2)
+ 
+ .Lsmall:
+ 	andcc		%g2, 0x3, %g0
+@@ -360,22 +512,22 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	 andn		%o2, 0x4 - 1, %o5
+ 	sub		%o2, %o5, %o2
+ 1:
+-	EX_LD(LOAD(lduw, %o1 + 0x00, %g1))
++	EX_LD(LOAD(lduw, %o1 + 0x00, %g1), NG4_retl_o2_plus_o5)
+ 	add		%o1, 0x04, %o1
+ 	subcc		%o5, 0x04, %o5
+ 	add		%o0, 0x04, %o0
+ 	bne,pt		%icc, 1b
+-	 EX_ST(STORE(stw, %g1, %o0 - 0x04))
++	 EX_ST(STORE(stw, %g1, %o0 - 0x04), NG4_retl_o2_plus_o5_plus_4)
+ 	brz,pt		%o2, .Lexit
+ 	 nop
+ 	ba,a,pt		%icc, .Ltiny
+ 
+ .Lsmall_unaligned:
+-1:	EX_LD(LOAD(ldub, %o1 + 0x00, %g1))
++1:	EX_LD(LOAD(ldub, %o1 + 0x00, %g1), NG4_retl_o2)
+ 	add		%o1, 1, %o1
+ 	add		%o0, 1, %o0
+ 	subcc		%o2, 1, %o2
+ 	bne,pt		%icc, 1b
+-	 EX_ST(STORE(stb, %g1, %o0 - 0x01))
++	 EX_ST(STORE(stb, %g1, %o0 - 0x01), NG4_retl_o2_plus_1)
+ 	ba,a,pt		%icc, .Lexit
+ 	.size		FUNC_NAME, .-FUNC_NAME
+diff --git a/arch/sparc/lib/NGcopy_from_user.S b/arch/sparc/lib/NGcopy_from_user.S
+index 5d1e4d1ac21e..9cd42fcbc781 100644
+--- a/arch/sparc/lib/NGcopy_from_user.S
++++ b/arch/sparc/lib/NGcopy_from_user.S
+@@ -3,11 +3,11 @@
+  * Copyright (C) 2006, 2007 David S. Miller (davem@davemloft.net)
+  */
+ 
+-#define EX_LD(x)		\
++#define EX_LD(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __ret_one_asi;\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/NGcopy_to_user.S b/arch/sparc/lib/NGcopy_to_user.S
+index ff630dcb273c..5c358afd464e 100644
+--- a/arch/sparc/lib/NGcopy_to_user.S
++++ b/arch/sparc/lib/NGcopy_to_user.S
+@@ -3,11 +3,11 @@
+  * Copyright (C) 2006, 2007 David S. Miller (davem@davemloft.net)
+  */
+ 
+-#define EX_ST(x)		\
++#define EX_ST(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __ret_one_asi;\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/NGmemcpy.S b/arch/sparc/lib/NGmemcpy.S
+index 96a14caf6966..d88c4ed50a00 100644
+--- a/arch/sparc/lib/NGmemcpy.S
++++ b/arch/sparc/lib/NGmemcpy.S
+@@ -4,6 +4,7 @@
+  */
+ 
+ #ifdef __KERNEL__
++#include <linux/linkage.h>
+ #include <asm/asi.h>
+ #include <asm/thread_info.h>
+ #define GLOBAL_SPARE	%g7
+@@ -27,15 +28,11 @@
+ #endif
+ 
+ #ifndef EX_LD
+-#define EX_LD(x)	x
++#define EX_LD(x,y)	x
+ #endif
+ 
+ #ifndef EX_ST
+-#define EX_ST(x)	x
+-#endif
+-
+-#ifndef EX_RETVAL
+-#define EX_RETVAL(x)	x
++#define EX_ST(x,y)	x
+ #endif
+ 
+ #ifndef LOAD
+@@ -79,6 +76,92 @@
+ 	.register	%g3,#scratch
+ 
+ 	.text
++#ifndef EX_RETVAL
++#define EX_RETVAL(x)	x
++__restore_asi:
++	ret
++	wr	%g0, ASI_AIUS, %asi
++	 restore
++ENTRY(NG_ret_i2_plus_i4_plus_1)
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %i5, %i0
++ENDPROC(NG_ret_i2_plus_i4_plus_1)
++ENTRY(NG_ret_i2_plus_g1)
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %g1, %i0
++ENDPROC(NG_ret_i2_plus_g1)
++ENTRY(NG_ret_i2_plus_g1_minus_8)
++	sub	%g1, 8, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %g1, %i0
++ENDPROC(NG_ret_i2_plus_g1_minus_8)
++ENTRY(NG_ret_i2_plus_g1_minus_16)
++	sub	%g1, 16, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %g1, %i0
++ENDPROC(NG_ret_i2_plus_g1_minus_16)
++ENTRY(NG_ret_i2_plus_g1_minus_24)
++	sub	%g1, 24, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %g1, %i0
++ENDPROC(NG_ret_i2_plus_g1_minus_24)
++ENTRY(NG_ret_i2_plus_g1_minus_32)
++	sub	%g1, 32, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %g1, %i0
++ENDPROC(NG_ret_i2_plus_g1_minus_32)
++ENTRY(NG_ret_i2_plus_g1_minus_40)
++	sub	%g1, 40, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %g1, %i0
++ENDPROC(NG_ret_i2_plus_g1_minus_40)
++ENTRY(NG_ret_i2_plus_g1_minus_48)
++	sub	%g1, 48, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %g1, %i0
++ENDPROC(NG_ret_i2_plus_g1_minus_48)
++ENTRY(NG_ret_i2_plus_g1_minus_56)
++	sub	%g1, 56, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %g1, %i0
++ENDPROC(NG_ret_i2_plus_g1_minus_56)
++ENTRY(NG_ret_i2_plus_i4)
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %i4, %i0
++ENDPROC(NG_ret_i2_plus_i4)
++ENTRY(NG_ret_i2_plus_i4_minus_8)
++	sub	%i4, 8, %i4
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %i4, %i0
++ENDPROC(NG_ret_i2_plus_i4_minus_8)
++ENTRY(NG_ret_i2_plus_8)
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, 8, %i0
++ENDPROC(NG_ret_i2_plus_8)
++ENTRY(NG_ret_i2_plus_4)
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, 4, %i0
++ENDPROC(NG_ret_i2_plus_4)
++ENTRY(NG_ret_i2_plus_1)
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, 1, %i0
++ENDPROC(NG_ret_i2_plus_1)
++ENTRY(NG_ret_i2_plus_g1_plus_1)
++	add	%g1, 1, %g1
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %g1, %i0
++ENDPROC(NG_ret_i2_plus_g1_plus_1)
++ENTRY(NG_ret_i2)
++	ba,pt	%xcc, __restore_asi
++	 mov	%i2, %i0
++ENDPROC(NG_ret_i2)
++ENTRY(NG_ret_i2_and_7_plus_i4)
++	and	%i2, 7, %i2
++	ba,pt	%xcc, __restore_asi
++	 add	%i2, %i4, %i0
++ENDPROC(NG_ret_i2_and_7_plus_i4)
++#endif
++
+ 	.align		64
+ 
+ 	.globl	FUNC_NAME
+@@ -126,8 +209,8 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 	sub		%g0, %i4, %i4	! bytes to align dst
+ 	sub		%i2, %i4, %i2
+ 1:	subcc		%i4, 1, %i4
+-	EX_LD(LOAD(ldub, %i1, %g1))
+-	EX_ST(STORE(stb, %g1, %o0))
++	EX_LD(LOAD(ldub, %i1, %g1), NG_ret_i2_plus_i4_plus_1)
++	EX_ST(STORE(stb, %g1, %o0), NG_ret_i2_plus_i4_plus_1)
+ 	add		%i1, 1, %i1
+ 	bne,pt		%XCC, 1b
+ 	add		%o0, 1, %o0
+@@ -160,7 +243,7 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 	and		%i4, 0x7, GLOBAL_SPARE
+ 	sll		GLOBAL_SPARE, 3, GLOBAL_SPARE
+ 	mov		64, %i5
+-	EX_LD(LOAD_TWIN(%i1, %g2, %g3))
++	EX_LD(LOAD_TWIN(%i1, %g2, %g3), NG_ret_i2_plus_g1)
+ 	sub		%i5, GLOBAL_SPARE, %i5
+ 	mov		16, %o4
+ 	mov		32, %o5
+@@ -178,31 +261,31 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 	srlx		WORD3, PRE_SHIFT, TMP; \
+ 	or		WORD2, TMP, WORD2;
+ 
+-8:	EX_LD(LOAD_TWIN(%i1 + %o4, %o2, %o3))
++8:	EX_LD(LOAD_TWIN(%i1 + %o4, %o2, %o3), NG_ret_i2_plus_g1)
+ 	MIX_THREE_WORDS(%g2, %g3, %o2, %i5, GLOBAL_SPARE, %o1)
+ 	LOAD(prefetch, %i1 + %i3, #one_read)
+ 
+-	EX_ST(STORE_INIT(%g2, %o0 + 0x00))
+-	EX_ST(STORE_INIT(%g3, %o0 + 0x08))
++	EX_ST(STORE_INIT(%g2, %o0 + 0x00), NG_ret_i2_plus_g1)
++	EX_ST(STORE_INIT(%g3, %o0 + 0x08), NG_ret_i2_plus_g1_minus_8)
+ 
+-	EX_LD(LOAD_TWIN(%i1 + %o5, %g2, %g3))
++	EX_LD(LOAD_TWIN(%i1 + %o5, %g2, %g3), NG_ret_i2_plus_g1_minus_16)
+ 	MIX_THREE_WORDS(%o2, %o3, %g2, %i5, GLOBAL_SPARE, %o1)
+ 
+-	EX_ST(STORE_INIT(%o2, %o0 + 0x10))
+-	EX_ST(STORE_INIT(%o3, %o0 + 0x18))
++	EX_ST(STORE_INIT(%o2, %o0 + 0x10), NG_ret_i2_plus_g1_minus_16)
++	EX_ST(STORE_INIT(%o3, %o0 + 0x18), NG_ret_i2_plus_g1_minus_24)
+ 
+-	EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3))
++	EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3), NG_ret_i2_plus_g1_minus_32)
+ 	MIX_THREE_WORDS(%g2, %g3, %o2, %i5, GLOBAL_SPARE, %o1)
+ 
+-	EX_ST(STORE_INIT(%g2, %o0 + 0x20))
+-	EX_ST(STORE_INIT(%g3, %o0 + 0x28))
++	EX_ST(STORE_INIT(%g2, %o0 + 0x20), NG_ret_i2_plus_g1_minus_32)
++	EX_ST(STORE_INIT(%g3, %o0 + 0x28), NG_ret_i2_plus_g1_minus_40)
+ 
+-	EX_LD(LOAD_TWIN(%i1 + %i3, %g2, %g3))
++	EX_LD(LOAD_TWIN(%i1 + %i3, %g2, %g3), NG_ret_i2_plus_g1_minus_48)
+ 	add		%i1, 64, %i1
+ 	MIX_THREE_WORDS(%o2, %o3, %g2, %i5, GLOBAL_SPARE, %o1)
+ 
+-	EX_ST(STORE_INIT(%o2, %o0 + 0x30))
+-	EX_ST(STORE_INIT(%o3, %o0 + 0x38))
++	EX_ST(STORE_INIT(%o2, %o0 + 0x30), NG_ret_i2_plus_g1_minus_48)
++	EX_ST(STORE_INIT(%o3, %o0 + 0x38), NG_ret_i2_plus_g1_minus_56)
+ 
+ 	subcc		%g1, 64, %g1
+ 	bne,pt		%XCC, 8b
+@@ -211,31 +294,31 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 	ba,pt		%XCC, 60f
+ 	 add		%i1, %i4, %i1
+ 
+-9:	EX_LD(LOAD_TWIN(%i1 + %o4, %o2, %o3))
++9:	EX_LD(LOAD_TWIN(%i1 + %o4, %o2, %o3), NG_ret_i2_plus_g1)
+ 	MIX_THREE_WORDS(%g3, %o2, %o3, %i5, GLOBAL_SPARE, %o1)
+ 	LOAD(prefetch, %i1 + %i3, #one_read)
+ 
+-	EX_ST(STORE_INIT(%g3, %o0 + 0x00))
+-	EX_ST(STORE_INIT(%o2, %o0 + 0x08))
++	EX_ST(STORE_INIT(%g3, %o0 + 0x00), NG_ret_i2_plus_g1)
++	EX_ST(STORE_INIT(%o2, %o0 + 0x08), NG_ret_i2_plus_g1_minus_8)
+ 
+-	EX_LD(LOAD_TWIN(%i1 + %o5, %g2, %g3))
++	EX_LD(LOAD_TWIN(%i1 + %o5, %g2, %g3), NG_ret_i2_plus_g1_minus_16)
+ 	MIX_THREE_WORDS(%o3, %g2, %g3, %i5, GLOBAL_SPARE, %o1)
+ 
+-	EX_ST(STORE_INIT(%o3, %o0 + 0x10))
+-	EX_ST(STORE_INIT(%g2, %o0 + 0x18))
++	EX_ST(STORE_INIT(%o3, %o0 + 0x10), NG_ret_i2_plus_g1_minus_16)
++	EX_ST(STORE_INIT(%g2, %o0 + 0x18), NG_ret_i2_plus_g1_minus_24)
+ 
+-	EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3))
++	EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3), NG_ret_i2_plus_g1_minus_32)
+ 	MIX_THREE_WORDS(%g3, %o2, %o3, %i5, GLOBAL_SPARE, %o1)
+ 
+-	EX_ST(STORE_INIT(%g3, %o0 + 0x20))
+-	EX_ST(STORE_INIT(%o2, %o0 + 0x28))
++	EX_ST(STORE_INIT(%g3, %o0 + 0x20), NG_ret_i2_plus_g1_minus_32)
++	EX_ST(STORE_INIT(%o2, %o0 + 0x28), NG_ret_i2_plus_g1_minus_40)
+ 
+-	EX_LD(LOAD_TWIN(%i1 + %i3, %g2, %g3))
++	EX_LD(LOAD_TWIN(%i1 + %i3, %g2, %g3), NG_ret_i2_plus_g1_minus_48)
+ 	add		%i1, 64, %i1
+ 	MIX_THREE_WORDS(%o3, %g2, %g3, %i5, GLOBAL_SPARE, %o1)
+ 
+-	EX_ST(STORE_INIT(%o3, %o0 + 0x30))
+-	EX_ST(STORE_INIT(%g2, %o0 + 0x38))
++	EX_ST(STORE_INIT(%o3, %o0 + 0x30), NG_ret_i2_plus_g1_minus_48)
++	EX_ST(STORE_INIT(%g2, %o0 + 0x38), NG_ret_i2_plus_g1_minus_56)
+ 
+ 	subcc		%g1, 64, %g1
+ 	bne,pt		%XCC, 9b
+@@ -249,25 +332,25 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 	 * one twin load ahead, then add 8 back into source when
+ 	 * we finish the loop.
+ 	 */
+-	EX_LD(LOAD_TWIN(%i1, %o4, %o5))
++	EX_LD(LOAD_TWIN(%i1, %o4, %o5), NG_ret_i2_plus_g1)
+ 	mov	16, %o7
+ 	mov	32, %g2
+ 	mov	48, %g3
+ 	mov	64, %o1
+-1:	EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3))
++1:	EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3), NG_ret_i2_plus_g1)
+ 	LOAD(prefetch, %i1 + %o1, #one_read)
+-	EX_ST(STORE_INIT(%o5, %o0 + 0x00))	! initializes cache line
+-	EX_ST(STORE_INIT(%o2, %o0 + 0x08))
+-	EX_LD(LOAD_TWIN(%i1 + %g2, %o4, %o5))
+-	EX_ST(STORE_INIT(%o3, %o0 + 0x10))
+-	EX_ST(STORE_INIT(%o4, %o0 + 0x18))
+-	EX_LD(LOAD_TWIN(%i1 + %g3, %o2, %o3))
+-	EX_ST(STORE_INIT(%o5, %o0 + 0x20))
+-	EX_ST(STORE_INIT(%o2, %o0 + 0x28))
+-	EX_LD(LOAD_TWIN(%i1 + %o1, %o4, %o5))
++	EX_ST(STORE_INIT(%o5, %o0 + 0x00), NG_ret_i2_plus_g1)	! initializes cache line
++	EX_ST(STORE_INIT(%o2, %o0 + 0x08), NG_ret_i2_plus_g1_minus_8)
++	EX_LD(LOAD_TWIN(%i1 + %g2, %o4, %o5), NG_ret_i2_plus_g1_minus_16)
++	EX_ST(STORE_INIT(%o3, %o0 + 0x10), NG_ret_i2_plus_g1_minus_16)
++	EX_ST(STORE_INIT(%o4, %o0 + 0x18), NG_ret_i2_plus_g1_minus_24)
++	EX_LD(LOAD_TWIN(%i1 + %g3, %o2, %o3), NG_ret_i2_plus_g1_minus_32)
++	EX_ST(STORE_INIT(%o5, %o0 + 0x20), NG_ret_i2_plus_g1_minus_32)
++	EX_ST(STORE_INIT(%o2, %o0 + 0x28), NG_ret_i2_plus_g1_minus_40)
++	EX_LD(LOAD_TWIN(%i1 + %o1, %o4, %o5), NG_ret_i2_plus_g1_minus_48)
+ 	add		%i1, 64, %i1
+-	EX_ST(STORE_INIT(%o3, %o0 + 0x30))
+-	EX_ST(STORE_INIT(%o4, %o0 + 0x38))
++	EX_ST(STORE_INIT(%o3, %o0 + 0x30), NG_ret_i2_plus_g1_minus_48)
++	EX_ST(STORE_INIT(%o4, %o0 + 0x38), NG_ret_i2_plus_g1_minus_56)
+ 	subcc		%g1, 64, %g1
+ 	bne,pt		%XCC, 1b
+ 	 add		%o0, 64, %o0
+@@ -282,20 +365,20 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 	mov	32, %g2
+ 	mov	48, %g3
+ 	mov	64, %o1
+-1:	EX_LD(LOAD_TWIN(%i1 + %g0, %o4, %o5))
+-	EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3))
++1:	EX_LD(LOAD_TWIN(%i1 + %g0, %o4, %o5), NG_ret_i2_plus_g1)
++	EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3), NG_ret_i2_plus_g1)
+ 	LOAD(prefetch, %i1 + %o1, #one_read)
+-	EX_ST(STORE_INIT(%o4, %o0 + 0x00))	! initializes cache line
+-	EX_ST(STORE_INIT(%o5, %o0 + 0x08))
+-	EX_LD(LOAD_TWIN(%i1 + %g2, %o4, %o5))
+-	EX_ST(STORE_INIT(%o2, %o0 + 0x10))
+-	EX_ST(STORE_INIT(%o3, %o0 + 0x18))
+-	EX_LD(LOAD_TWIN(%i1 + %g3, %o2, %o3))
++	EX_ST(STORE_INIT(%o4, %o0 + 0x00), NG_ret_i2_plus_g1)	! initializes cache line
++	EX_ST(STORE_INIT(%o5, %o0 + 0x08), NG_ret_i2_plus_g1_minus_8)
++	EX_LD(LOAD_TWIN(%i1 + %g2, %o4, %o5), NG_ret_i2_plus_g1_minus_16)
++	EX_ST(STORE_INIT(%o2, %o0 + 0x10), NG_ret_i2_plus_g1_minus_16)
++	EX_ST(STORE_INIT(%o3, %o0 + 0x18), NG_ret_i2_plus_g1_minus_24)
++	EX_LD(LOAD_TWIN(%i1 + %g3, %o2, %o3), NG_ret_i2_plus_g1_minus_32)
+ 	add	%i1, 64, %i1
+-	EX_ST(STORE_INIT(%o4, %o0 + 0x20))
+-	EX_ST(STORE_INIT(%o5, %o0 + 0x28))
+-	EX_ST(STORE_INIT(%o2, %o0 + 0x30))
+-	EX_ST(STORE_INIT(%o3, %o0 + 0x38))
++	EX_ST(STORE_INIT(%o4, %o0 + 0x20), NG_ret_i2_plus_g1_minus_32)
++	EX_ST(STORE_INIT(%o5, %o0 + 0x28), NG_ret_i2_plus_g1_minus_40)
++	EX_ST(STORE_INIT(%o2, %o0 + 0x30), NG_ret_i2_plus_g1_minus_48)
++	EX_ST(STORE_INIT(%o3, %o0 + 0x38), NG_ret_i2_plus_g1_minus_56)
+ 	subcc	%g1, 64, %g1
+ 	bne,pt	%XCC, 1b
+ 	 add	%o0, 64, %o0
+@@ -321,28 +404,28 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 	andn		%i2, 0xf, %i4
+ 	and		%i2, 0xf, %i2
+ 1:	subcc		%i4, 0x10, %i4
+-	EX_LD(LOAD(ldx, %i1, %o4))
++	EX_LD(LOAD(ldx, %i1, %o4), NG_ret_i2_plus_i4)
+ 	add		%i1, 0x08, %i1
+-	EX_LD(LOAD(ldx, %i1, %g1))
++	EX_LD(LOAD(ldx, %i1, %g1), NG_ret_i2_plus_i4)
+ 	sub		%i1, 0x08, %i1
+-	EX_ST(STORE(stx, %o4, %i1 + %i3))
++	EX_ST(STORE(stx, %o4, %i1 + %i3), NG_ret_i2_plus_i4)
+ 	add		%i1, 0x8, %i1
+-	EX_ST(STORE(stx, %g1, %i1 + %i3))
++	EX_ST(STORE(stx, %g1, %i1 + %i3), NG_ret_i2_plus_i4_minus_8)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%i1, 0x8, %i1
+ 73:	andcc		%i2, 0x8, %g0
+ 	be,pt		%XCC, 1f
+ 	 nop
+ 	sub		%i2, 0x8, %i2
+-	EX_LD(LOAD(ldx, %i1, %o4))
+-	EX_ST(STORE(stx, %o4, %i1 + %i3))
++	EX_LD(LOAD(ldx, %i1, %o4), NG_ret_i2_plus_8)
++	EX_ST(STORE(stx, %o4, %i1 + %i3), NG_ret_i2_plus_8)
+ 	add		%i1, 0x8, %i1
+ 1:	andcc		%i2, 0x4, %g0
+ 	be,pt		%XCC, 1f
+ 	 nop
+ 	sub		%i2, 0x4, %i2
+-	EX_LD(LOAD(lduw, %i1, %i5))
+-	EX_ST(STORE(stw, %i5, %i1 + %i3))
++	EX_LD(LOAD(lduw, %i1, %i5), NG_ret_i2_plus_4)
++	EX_ST(STORE(stw, %i5, %i1 + %i3), NG_ret_i2_plus_4)
+ 	add		%i1, 0x4, %i1
+ 1:	cmp		%i2, 0
+ 	be,pt		%XCC, 85f
+@@ -358,8 +441,8 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 	sub		%i2, %g1, %i2
+ 
+ 1:	subcc		%g1, 1, %g1
+-	EX_LD(LOAD(ldub, %i1, %i5))
+-	EX_ST(STORE(stb, %i5, %i1 + %i3))
++	EX_LD(LOAD(ldub, %i1, %i5), NG_ret_i2_plus_g1_plus_1)
++	EX_ST(STORE(stb, %i5, %i1 + %i3), NG_ret_i2_plus_g1_plus_1)
+ 	bgu,pt		%icc, 1b
+ 	 add		%i1, 1, %i1
+ 
+@@ -375,16 +458,16 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 
+ 8:	mov		64, %i3
+ 	andn		%i1, 0x7, %i1
+-	EX_LD(LOAD(ldx, %i1, %g2))
++	EX_LD(LOAD(ldx, %i1, %g2), NG_ret_i2)
+ 	sub		%i3, %g1, %i3
+ 	andn		%i2, 0x7, %i4
+ 	sllx		%g2, %g1, %g2
+ 1:	add		%i1, 0x8, %i1
+-	EX_LD(LOAD(ldx, %i1, %g3))
++	EX_LD(LOAD(ldx, %i1, %g3), NG_ret_i2_and_7_plus_i4)
+ 	subcc		%i4, 0x8, %i4
+ 	srlx		%g3, %i3, %i5
+ 	or		%i5, %g2, %i5
+-	EX_ST(STORE(stx, %i5, %o0))
++	EX_ST(STORE(stx, %i5, %o0), NG_ret_i2_and_7_plus_i4)
+ 	add		%o0, 0x8, %o0
+ 	bgu,pt		%icc, 1b
+ 	 sllx		%g3, %g1, %g2
+@@ -404,8 +487,8 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 
+ 1:
+ 	subcc		%i2, 4, %i2
+-	EX_LD(LOAD(lduw, %i1, %g1))
+-	EX_ST(STORE(stw, %g1, %i1 + %i3))
++	EX_LD(LOAD(lduw, %i1, %g1), NG_ret_i2_plus_4)
++	EX_ST(STORE(stw, %g1, %i1 + %i3), NG_ret_i2_plus_4)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%i1, 4, %i1
+ 
+@@ -415,8 +498,8 @@ FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+ 	.align		32
+ 90:
+ 	subcc		%i2, 1, %i2
+-	EX_LD(LOAD(ldub, %i1, %g1))
+-	EX_ST(STORE(stb, %g1, %i1 + %i3))
++	EX_LD(LOAD(ldub, %i1, %g1), NG_ret_i2_plus_1)
++	EX_ST(STORE(stb, %g1, %i1 + %i3), NG_ret_i2_plus_1)
+ 	bgu,pt		%XCC, 90b
+ 	 add		%i1, 1, %i1
+ 	ret
+diff --git a/arch/sparc/lib/U1copy_from_user.S b/arch/sparc/lib/U1copy_from_user.S
+index ecc5692fa2b4..bb6ff73229e3 100644
+--- a/arch/sparc/lib/U1copy_from_user.S
++++ b/arch/sparc/lib/U1copy_from_user.S
+@@ -3,19 +3,19 @@
+  * Copyright (C) 1999, 2000, 2004 David S. Miller (davem@redhat.com)
+  */
+ 
+-#define EX_LD(x)		\
++#define EX_LD(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one;	\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+-#define EX_LD_FP(x)		\
++#define EX_LD_FP(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_fp;\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/U1copy_to_user.S b/arch/sparc/lib/U1copy_to_user.S
+index 9eea392e44d4..ed92ce739558 100644
+--- a/arch/sparc/lib/U1copy_to_user.S
++++ b/arch/sparc/lib/U1copy_to_user.S
+@@ -3,19 +3,19 @@
+  * Copyright (C) 1999, 2000, 2004 David S. Miller (davem@redhat.com)
+  */
+ 
+-#define EX_ST(x)		\
++#define EX_ST(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one;	\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+-#define EX_ST_FP(x)		\
++#define EX_ST_FP(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_fp;\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/U1memcpy.S b/arch/sparc/lib/U1memcpy.S
+index 3e6209ebb7d7..f30d2ab2c371 100644
+--- a/arch/sparc/lib/U1memcpy.S
++++ b/arch/sparc/lib/U1memcpy.S
+@@ -5,6 +5,7 @@
+  */
+ 
+ #ifdef __KERNEL__
++#include <linux/linkage.h>
+ #include <asm/visasm.h>
+ #include <asm/asi.h>
+ #define GLOBAL_SPARE	g7
+@@ -23,21 +24,17 @@
+ #endif
+ 
+ #ifndef EX_LD
+-#define EX_LD(x)	x
++#define EX_LD(x,y)	x
+ #endif
+ #ifndef EX_LD_FP
+-#define EX_LD_FP(x)	x
++#define EX_LD_FP(x,y)	x
+ #endif
+ 
+ #ifndef EX_ST
+-#define EX_ST(x)	x
++#define EX_ST(x,y)	x
+ #endif
+ #ifndef EX_ST_FP
+-#define EX_ST_FP(x)	x
+-#endif
+-
+-#ifndef EX_RETVAL
+-#define EX_RETVAL(x)	x
++#define EX_ST_FP(x,y)	x
+ #endif
+ 
+ #ifndef LOAD
+@@ -78,53 +75,169 @@
+ 	faligndata		%f7, %f8, %f60;			\
+ 	faligndata		%f8, %f9, %f62;
+ 
+-#define MAIN_LOOP_CHUNK(src, dest, fdest, fsrc, len, jmptgt)	\
+-	EX_LD_FP(LOAD_BLK(%src, %fdest));				\
+-	EX_ST_FP(STORE_BLK(%fsrc, %dest));				\
+-	add			%src, 0x40, %src;		\
+-	subcc			%len, 0x40, %len;		\
+-	be,pn			%xcc, jmptgt;			\
+-	 add			%dest, 0x40, %dest;		\
+-
+-#define LOOP_CHUNK1(src, dest, len, branch_dest)		\
+-	MAIN_LOOP_CHUNK(src, dest, f0,  f48, len, branch_dest)
+-#define LOOP_CHUNK2(src, dest, len, branch_dest)		\
+-	MAIN_LOOP_CHUNK(src, dest, f16, f48, len, branch_dest)
+-#define LOOP_CHUNK3(src, dest, len, branch_dest)		\
+-	MAIN_LOOP_CHUNK(src, dest, f32, f48, len, branch_dest)
++#define MAIN_LOOP_CHUNK(src, dest, fdest, fsrc, jmptgt)			\
++	EX_LD_FP(LOAD_BLK(%src, %fdest), U1_gs_80_fp);			\
++	EX_ST_FP(STORE_BLK(%fsrc, %dest), U1_gs_80_fp);			\
++	add			%src, 0x40, %src;			\
++	subcc			%GLOBAL_SPARE, 0x40, %GLOBAL_SPARE;	\
++	be,pn			%xcc, jmptgt;				\
++	 add			%dest, 0x40, %dest;			\
++
++#define LOOP_CHUNK1(src, dest, branch_dest)		\
++	MAIN_LOOP_CHUNK(src, dest, f0,  f48, branch_dest)
++#define LOOP_CHUNK2(src, dest, branch_dest)		\
++	MAIN_LOOP_CHUNK(src, dest, f16, f48, branch_dest)
++#define LOOP_CHUNK3(src, dest, branch_dest)		\
++	MAIN_LOOP_CHUNK(src, dest, f32, f48, branch_dest)
+ 
+ #define DO_SYNC			membar	#Sync;
+ #define STORE_SYNC(dest, fsrc)				\
+-	EX_ST_FP(STORE_BLK(%fsrc, %dest));			\
++	EX_ST_FP(STORE_BLK(%fsrc, %dest), U1_gs_80_fp);	\
+ 	add			%dest, 0x40, %dest;	\
+ 	DO_SYNC
+ 
+ #define STORE_JUMP(dest, fsrc, target)			\
+-	EX_ST_FP(STORE_BLK(%fsrc, %dest));			\
++	EX_ST_FP(STORE_BLK(%fsrc, %dest), U1_gs_40_fp);	\
+ 	add			%dest, 0x40, %dest;	\
+ 	ba,pt			%xcc, target;		\
+ 	 nop;
+ 
+-#define FINISH_VISCHUNK(dest, f0, f1, left)	\
+-	subcc			%left, 8, %left;\
+-	bl,pn			%xcc, 95f;	\
+-	 faligndata		%f0, %f1, %f48;	\
+-	EX_ST_FP(STORE(std, %f48, %dest));		\
++#define FINISH_VISCHUNK(dest, f0, f1)			\
++	subcc			%g3, 8, %g3;		\
++	bl,pn			%xcc, 95f;		\
++	 faligndata		%f0, %f1, %f48;		\
++	EX_ST_FP(STORE(std, %f48, %dest), U1_g3_8_fp);	\
+ 	add			%dest, 8, %dest;
+ 
+-#define UNEVEN_VISCHUNK_LAST(dest, f0, f1, left)	\
+-	subcc			%left, 8, %left;	\
+-	bl,pn			%xcc, 95f;		\
++#define UNEVEN_VISCHUNK_LAST(dest, f0, f1)	\
++	subcc			%g3, 8, %g3;	\
++	bl,pn			%xcc, 95f;	\
+ 	 fsrc2			%f0, %f1;
+ 
+-#define UNEVEN_VISCHUNK(dest, f0, f1, left)		\
+-	UNEVEN_VISCHUNK_LAST(dest, f0, f1, left)	\
++#define UNEVEN_VISCHUNK(dest, f0, f1)		\
++	UNEVEN_VISCHUNK_LAST(dest, f0, f1)	\
+ 	ba,a,pt			%xcc, 93f;
+ 
+ 	.register	%g2,#scratch
+ 	.register	%g3,#scratch
+ 
+ 	.text
++#ifndef EX_RETVAL
++#define EX_RETVAL(x)	x
++ENTRY(U1_g1_1_fp)
++	VISExitHalf
++	add		%g1, 1, %g1
++	add		%g1, %g2, %g1
++	retl
++	 add		%g1, %o2, %o0
++ENDPROC(U1_g1_1_fp)
++ENTRY(U1_g2_0_fp)
++	VISExitHalf
++	retl
++	 add		%g2, %o2, %o0
++ENDPROC(U1_g2_0_fp)
++ENTRY(U1_g2_8_fp)
++	VISExitHalf
++	add		%g2, 8, %g2
++	retl
++	 add		%g2, %o2, %o0
++ENDPROC(U1_g2_8_fp)
++ENTRY(U1_gs_0_fp)
++	VISExitHalf
++	add		%GLOBAL_SPARE, %g3, %o0
++	retl
++	 add		%o0, %o2, %o0
++ENDPROC(U1_gs_0_fp)
++ENTRY(U1_gs_80_fp)
++	VISExitHalf
++	add		%GLOBAL_SPARE, 0x80, %GLOBAL_SPARE
++	add		%GLOBAL_SPARE, %g3, %o0
++	retl
++	 add		%o0, %o2, %o0
++ENDPROC(U1_gs_80_fp)
++ENTRY(U1_gs_40_fp)
++	VISExitHalf
++	add		%GLOBAL_SPARE, 0x40, %GLOBAL_SPARE
++	add		%GLOBAL_SPARE, %g3, %o0
++	retl
++	 add		%o0, %o2, %o0
++ENDPROC(U1_gs_40_fp)
++ENTRY(U1_g3_0_fp)
++	VISExitHalf
++	retl
++	 add		%g3, %o2, %o0
++ENDPROC(U1_g3_0_fp)
++ENTRY(U1_g3_8_fp)
++	VISExitHalf
++	add		%g3, 8, %g3
++	retl
++	 add		%g3, %o2, %o0
++ENDPROC(U1_g3_8_fp)
++ENTRY(U1_o2_0_fp)
++	VISExitHalf
++	retl
++	 mov		%o2, %o0
++ENDPROC(U1_o2_0_fp)
++ENTRY(U1_o2_1_fp)
++	VISExitHalf
++	retl
++	 add		%o2, 1, %o0
++ENDPROC(U1_o2_1_fp)
++ENTRY(U1_gs_0)
++	VISExitHalf
++	retl
++	 add		%GLOBAL_SPARE, %o2, %o0
++ENDPROC(U1_gs_0)
++ENTRY(U1_gs_8)
++	VISExitHalf
++	add		%GLOBAL_SPARE, %o2, %GLOBAL_SPARE
++	retl
++	 add		%GLOBAL_SPARE, 0x8, %o0
++ENDPROC(U1_gs_8)
++ENTRY(U1_gs_10)
++	VISExitHalf
++	add		%GLOBAL_SPARE, %o2, %GLOBAL_SPARE
++	retl
++	 add		%GLOBAL_SPARE, 0x10, %o0
++ENDPROC(U1_gs_10)
++ENTRY(U1_o2_0)
++	retl
++	 mov		%o2, %o0
++ENDPROC(U1_o2_0)
++ENTRY(U1_o2_8)
++	retl
++	 add		%o2, 8, %o0
++ENDPROC(U1_o2_8)
++ENTRY(U1_o2_4)
++	retl
++	 add		%o2, 4, %o0
++ENDPROC(U1_o2_4)
++ENTRY(U1_o2_1)
++	retl
++	 add		%o2, 1, %o0
++ENDPROC(U1_o2_1)
++ENTRY(U1_g1_0)
++	retl
++	 add		%g1, %o2, %o0
++ENDPROC(U1_g1_0)
++ENTRY(U1_g1_1)
++	add		%g1, 1, %g1
++	retl
++	 add		%g1, %o2, %o0
++ENDPROC(U1_g1_1)
++ENTRY(U1_gs_0_o2_adj)
++	and		%o2, 7, %o2
++	retl
++	 add		%GLOBAL_SPARE, %o2, %o0
++ENDPROC(U1_gs_0_o2_adj)
++ENTRY(U1_gs_8_o2_adj)
++	and		%o2, 7, %o2
++	add		%GLOBAL_SPARE, 8, %GLOBAL_SPARE
++	retl
++	 add		%GLOBAL_SPARE, %o2, %o0
++ENDPROC(U1_gs_8_o2_adj)
++#endif
++
+ 	.align		64
+ 
+ 	.globl		FUNC_NAME
+@@ -166,8 +279,8 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	 and		%g2, 0x38, %g2
+ 
+ 1:	subcc		%g1, 0x1, %g1
+-	EX_LD_FP(LOAD(ldub, %o1 + 0x00, %o3))
+-	EX_ST_FP(STORE(stb, %o3, %o1 + %GLOBAL_SPARE))
++	EX_LD_FP(LOAD(ldub, %o1 + 0x00, %o3), U1_g1_1_fp)
++	EX_ST_FP(STORE(stb, %o3, %o1 + %GLOBAL_SPARE), U1_g1_1_fp)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%o1, 0x1, %o1
+ 
+@@ -178,20 +291,20 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	be,pt		%icc, 3f
+ 	 alignaddr	%o1, %g0, %o1
+ 
+-	EX_LD_FP(LOAD(ldd, %o1, %f4))
+-1:	EX_LD_FP(LOAD(ldd, %o1 + 0x8, %f6))
++	EX_LD_FP(LOAD(ldd, %o1, %f4), U1_g2_0_fp)
++1:	EX_LD_FP(LOAD(ldd, %o1 + 0x8, %f6), U1_g2_0_fp)
+ 	add		%o1, 0x8, %o1
+ 	subcc		%g2, 0x8, %g2
+ 	faligndata	%f4, %f6, %f0
+-	EX_ST_FP(STORE(std, %f0, %o0))
++	EX_ST_FP(STORE(std, %f0, %o0), U1_g2_8_fp)
+ 	be,pn		%icc, 3f
+ 	 add		%o0, 0x8, %o0
+ 
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x8, %f4))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x8, %f4), U1_g2_0_fp)
+ 	add		%o1, 0x8, %o1
+ 	subcc		%g2, 0x8, %g2
+ 	faligndata	%f6, %f4, %f0
+-	EX_ST_FP(STORE(std, %f0, %o0))
++	EX_ST_FP(STORE(std, %f0, %o0), U1_g2_8_fp)
+ 	bne,pt		%icc, 1b
+ 	 add		%o0, 0x8, %o0
+ 
+@@ -214,13 +327,13 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	add		%g1, %GLOBAL_SPARE, %g1
+ 	subcc		%o2, %g3, %o2
+ 
+-	EX_LD_FP(LOAD_BLK(%o1, %f0))
++	EX_LD_FP(LOAD_BLK(%o1, %f0), U1_gs_0_fp)
+ 	add		%o1, 0x40, %o1
+ 	add		%g1, %g3, %g1
+-	EX_LD_FP(LOAD_BLK(%o1, %f16))
++	EX_LD_FP(LOAD_BLK(%o1, %f16), U1_gs_0_fp)
+ 	add		%o1, 0x40, %o1
+ 	sub		%GLOBAL_SPARE, 0x80, %GLOBAL_SPARE
+-	EX_LD_FP(LOAD_BLK(%o1, %f32))
++	EX_LD_FP(LOAD_BLK(%o1, %f32), U1_gs_80_fp)
+ 	add		%o1, 0x40, %o1
+ 
+ 	/* There are 8 instances of the unrolled loop,
+@@ -240,11 +353,11 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 
+ 	.align		64
+ 1:	FREG_FROB(f0, f2, f4, f6, f8, f10,f12,f14,f16)
+-	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
++	LOOP_CHUNK1(o1, o0, 1f)
+ 	FREG_FROB(f16,f18,f20,f22,f24,f26,f28,f30,f32)
+-	LOOP_CHUNK2(o1, o0, GLOBAL_SPARE, 2f)
++	LOOP_CHUNK2(o1, o0, 2f)
+ 	FREG_FROB(f32,f34,f36,f38,f40,f42,f44,f46,f0)
+-	LOOP_CHUNK3(o1, o0, GLOBAL_SPARE, 3f)
++	LOOP_CHUNK3(o1, o0, 3f)
+ 	ba,pt		%xcc, 1b+4
+ 	 faligndata	%f0, %f2, %f48
+ 1:	FREG_FROB(f16,f18,f20,f22,f24,f26,f28,f30,f32)
+@@ -261,11 +374,11 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	STORE_JUMP(o0, f48, 56f)
+ 
+ 1:	FREG_FROB(f2, f4, f6, f8, f10,f12,f14,f16,f18)
+-	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
++	LOOP_CHUNK1(o1, o0, 1f)
+ 	FREG_FROB(f18,f20,f22,f24,f26,f28,f30,f32,f34)
+-	LOOP_CHUNK2(o1, o0, GLOBAL_SPARE, 2f)
++	LOOP_CHUNK2(o1, o0, 2f)
+ 	FREG_FROB(f34,f36,f38,f40,f42,f44,f46,f0, f2)
+-	LOOP_CHUNK3(o1, o0, GLOBAL_SPARE, 3f)
++	LOOP_CHUNK3(o1, o0, 3f)
+ 	ba,pt		%xcc, 1b+4
+ 	 faligndata	%f2, %f4, %f48
+ 1:	FREG_FROB(f18,f20,f22,f24,f26,f28,f30,f32,f34)
+@@ -282,11 +395,11 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	STORE_JUMP(o0, f48, 57f)
+ 
+ 1:	FREG_FROB(f4, f6, f8, f10,f12,f14,f16,f18,f20)
+-	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
++	LOOP_CHUNK1(o1, o0, 1f)
+ 	FREG_FROB(f20,f22,f24,f26,f28,f30,f32,f34,f36)
+-	LOOP_CHUNK2(o1, o0, GLOBAL_SPARE, 2f)
++	LOOP_CHUNK2(o1, o0, 2f)
+ 	FREG_FROB(f36,f38,f40,f42,f44,f46,f0, f2, f4)
+-	LOOP_CHUNK3(o1, o0, GLOBAL_SPARE, 3f)
++	LOOP_CHUNK3(o1, o0, 3f)
+ 	ba,pt		%xcc, 1b+4
+ 	 faligndata	%f4, %f6, %f48
+ 1:	FREG_FROB(f20,f22,f24,f26,f28,f30,f32,f34,f36)
+@@ -303,11 +416,11 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	STORE_JUMP(o0, f48, 58f)
+ 
+ 1:	FREG_FROB(f6, f8, f10,f12,f14,f16,f18,f20,f22)
+-	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
++	LOOP_CHUNK1(o1, o0, 1f)
+ 	FREG_FROB(f22,f24,f26,f28,f30,f32,f34,f36,f38)
+-	LOOP_CHUNK2(o1, o0, GLOBAL_SPARE, 2f)
++	LOOP_CHUNK2(o1, o0, 2f)
+ 	FREG_FROB(f38,f40,f42,f44,f46,f0, f2, f4, f6) 
+-	LOOP_CHUNK3(o1, o0, GLOBAL_SPARE, 3f)
++	LOOP_CHUNK3(o1, o0, 3f)
+ 	ba,pt		%xcc, 1b+4
+ 	 faligndata	%f6, %f8, %f48
+ 1:	FREG_FROB(f22,f24,f26,f28,f30,f32,f34,f36,f38)
+@@ -324,11 +437,11 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	STORE_JUMP(o0, f48, 59f)
+ 
+ 1:	FREG_FROB(f8, f10,f12,f14,f16,f18,f20,f22,f24)
+-	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
++	LOOP_CHUNK1(o1, o0, 1f)
+ 	FREG_FROB(f24,f26,f28,f30,f32,f34,f36,f38,f40)
+-	LOOP_CHUNK2(o1, o0, GLOBAL_SPARE, 2f)
++	LOOP_CHUNK2(o1, o0, 2f)
+ 	FREG_FROB(f40,f42,f44,f46,f0, f2, f4, f6, f8)
+-	LOOP_CHUNK3(o1, o0, GLOBAL_SPARE, 3f)
++	LOOP_CHUNK3(o1, o0, 3f)
+ 	ba,pt		%xcc, 1b+4
+ 	 faligndata	%f8, %f10, %f48
+ 1:	FREG_FROB(f24,f26,f28,f30,f32,f34,f36,f38,f40)
+@@ -345,11 +458,11 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	STORE_JUMP(o0, f48, 60f)
+ 
+ 1:	FREG_FROB(f10,f12,f14,f16,f18,f20,f22,f24,f26)
+-	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
++	LOOP_CHUNK1(o1, o0, 1f)
+ 	FREG_FROB(f26,f28,f30,f32,f34,f36,f38,f40,f42)
+-	LOOP_CHUNK2(o1, o0, GLOBAL_SPARE, 2f)
++	LOOP_CHUNK2(o1, o0, 2f)
+ 	FREG_FROB(f42,f44,f46,f0, f2, f4, f6, f8, f10)
+-	LOOP_CHUNK3(o1, o0, GLOBAL_SPARE, 3f)
++	LOOP_CHUNK3(o1, o0, 3f)
+ 	ba,pt		%xcc, 1b+4
+ 	 faligndata	%f10, %f12, %f48
+ 1:	FREG_FROB(f26,f28,f30,f32,f34,f36,f38,f40,f42)
+@@ -366,11 +479,11 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	STORE_JUMP(o0, f48, 61f)
+ 
+ 1:	FREG_FROB(f12,f14,f16,f18,f20,f22,f24,f26,f28)
+-	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
++	LOOP_CHUNK1(o1, o0, 1f)
+ 	FREG_FROB(f28,f30,f32,f34,f36,f38,f40,f42,f44)
+-	LOOP_CHUNK2(o1, o0, GLOBAL_SPARE, 2f)
++	LOOP_CHUNK2(o1, o0, 2f)
+ 	FREG_FROB(f44,f46,f0, f2, f4, f6, f8, f10,f12)
+-	LOOP_CHUNK3(o1, o0, GLOBAL_SPARE, 3f)
++	LOOP_CHUNK3(o1, o0, 3f)
+ 	ba,pt		%xcc, 1b+4
+ 	 faligndata	%f12, %f14, %f48
+ 1:	FREG_FROB(f28,f30,f32,f34,f36,f38,f40,f42,f44)
+@@ -387,11 +500,11 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	STORE_JUMP(o0, f48, 62f)
+ 
+ 1:	FREG_FROB(f14,f16,f18,f20,f22,f24,f26,f28,f30)
+-	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
++	LOOP_CHUNK1(o1, o0, 1f)
+ 	FREG_FROB(f30,f32,f34,f36,f38,f40,f42,f44,f46)
+-	LOOP_CHUNK2(o1, o0, GLOBAL_SPARE, 2f)
++	LOOP_CHUNK2(o1, o0, 2f)
+ 	FREG_FROB(f46,f0, f2, f4, f6, f8, f10,f12,f14)
+-	LOOP_CHUNK3(o1, o0, GLOBAL_SPARE, 3f)
++	LOOP_CHUNK3(o1, o0, 3f)
+ 	ba,pt		%xcc, 1b+4
+ 	 faligndata	%f14, %f16, %f48
+ 1:	FREG_FROB(f30,f32,f34,f36,f38,f40,f42,f44,f46)
+@@ -407,53 +520,53 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	FREG_FROB(f30,f32,f34,f36,f38,f40,f42,f44,f46)
+ 	STORE_JUMP(o0, f48, 63f)
+ 
+-40:	FINISH_VISCHUNK(o0, f0,  f2,  g3)
+-41:	FINISH_VISCHUNK(o0, f2,  f4,  g3)
+-42:	FINISH_VISCHUNK(o0, f4,  f6,  g3)
+-43:	FINISH_VISCHUNK(o0, f6,  f8,  g3)
+-44:	FINISH_VISCHUNK(o0, f8,  f10, g3)
+-45:	FINISH_VISCHUNK(o0, f10, f12, g3)
+-46:	FINISH_VISCHUNK(o0, f12, f14, g3)
+-47:	UNEVEN_VISCHUNK(o0, f14, f0,  g3)
+-48:	FINISH_VISCHUNK(o0, f16, f18, g3)
+-49:	FINISH_VISCHUNK(o0, f18, f20, g3)
+-50:	FINISH_VISCHUNK(o0, f20, f22, g3)
+-51:	FINISH_VISCHUNK(o0, f22, f24, g3)
+-52:	FINISH_VISCHUNK(o0, f24, f26, g3)
+-53:	FINISH_VISCHUNK(o0, f26, f28, g3)
+-54:	FINISH_VISCHUNK(o0, f28, f30, g3)
+-55:	UNEVEN_VISCHUNK(o0, f30, f0,  g3)
+-56:	FINISH_VISCHUNK(o0, f32, f34, g3)
+-57:	FINISH_VISCHUNK(o0, f34, f36, g3)
+-58:	FINISH_VISCHUNK(o0, f36, f38, g3)
+-59:	FINISH_VISCHUNK(o0, f38, f40, g3)
+-60:	FINISH_VISCHUNK(o0, f40, f42, g3)
+-61:	FINISH_VISCHUNK(o0, f42, f44, g3)
+-62:	FINISH_VISCHUNK(o0, f44, f46, g3)
+-63:	UNEVEN_VISCHUNK_LAST(o0, f46, f0,  g3)
+-
+-93:	EX_LD_FP(LOAD(ldd, %o1, %f2))
++40:	FINISH_VISCHUNK(o0, f0,  f2)
++41:	FINISH_VISCHUNK(o0, f2,  f4)
++42:	FINISH_VISCHUNK(o0, f4,  f6)
++43:	FINISH_VISCHUNK(o0, f6,  f8)
++44:	FINISH_VISCHUNK(o0, f8,  f10)
++45:	FINISH_VISCHUNK(o0, f10, f12)
++46:	FINISH_VISCHUNK(o0, f12, f14)
++47:	UNEVEN_VISCHUNK(o0, f14, f0)
++48:	FINISH_VISCHUNK(o0, f16, f18)
++49:	FINISH_VISCHUNK(o0, f18, f20)
++50:	FINISH_VISCHUNK(o0, f20, f22)
++51:	FINISH_VISCHUNK(o0, f22, f24)
++52:	FINISH_VISCHUNK(o0, f24, f26)
++53:	FINISH_VISCHUNK(o0, f26, f28)
++54:	FINISH_VISCHUNK(o0, f28, f30)
++55:	UNEVEN_VISCHUNK(o0, f30, f0)
++56:	FINISH_VISCHUNK(o0, f32, f34)
++57:	FINISH_VISCHUNK(o0, f34, f36)
++58:	FINISH_VISCHUNK(o0, f36, f38)
++59:	FINISH_VISCHUNK(o0, f38, f40)
++60:	FINISH_VISCHUNK(o0, f40, f42)
++61:	FINISH_VISCHUNK(o0, f42, f44)
++62:	FINISH_VISCHUNK(o0, f44, f46)
++63:	UNEVEN_VISCHUNK_LAST(o0, f46, f0)
++
++93:	EX_LD_FP(LOAD(ldd, %o1, %f2), U1_g3_0_fp)
+ 	add		%o1, 8, %o1
+ 	subcc		%g3, 8, %g3
+ 	faligndata	%f0, %f2, %f8
+-	EX_ST_FP(STORE(std, %f8, %o0))
++	EX_ST_FP(STORE(std, %f8, %o0), U1_g3_8_fp)
+ 	bl,pn		%xcc, 95f
+ 	 add		%o0, 8, %o0
+-	EX_LD_FP(LOAD(ldd, %o1, %f0))
++	EX_LD_FP(LOAD(ldd, %o1, %f0), U1_g3_0_fp)
+ 	add		%o1, 8, %o1
+ 	subcc		%g3, 8, %g3
+ 	faligndata	%f2, %f0, %f8
+-	EX_ST_FP(STORE(std, %f8, %o0))
++	EX_ST_FP(STORE(std, %f8, %o0), U1_g3_8_fp)
+ 	bge,pt		%xcc, 93b
+ 	 add		%o0, 8, %o0
+ 
+ 95:	brz,pt		%o2, 2f
+ 	 mov		%g1, %o1
+ 
+-1:	EX_LD_FP(LOAD(ldub, %o1, %o3))
++1:	EX_LD_FP(LOAD(ldub, %o1, %o3), U1_o2_0_fp)
+ 	add		%o1, 1, %o1
+ 	subcc		%o2, 1, %o2
+-	EX_ST_FP(STORE(stb, %o3, %o0))
++	EX_ST_FP(STORE(stb, %o3, %o0), U1_o2_1_fp)
+ 	bne,pt		%xcc, 1b
+ 	 add		%o0, 1, %o0
+ 
+@@ -469,27 +582,27 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 
+ 72:	andn		%o2, 0xf, %GLOBAL_SPARE
+ 	and		%o2, 0xf, %o2
+-1:	EX_LD(LOAD(ldx, %o1 + 0x00, %o5))
+-	EX_LD(LOAD(ldx, %o1 + 0x08, %g1))
++1:	EX_LD(LOAD(ldx, %o1 + 0x00, %o5), U1_gs_0)
++	EX_LD(LOAD(ldx, %o1 + 0x08, %g1), U1_gs_0)
+ 	subcc		%GLOBAL_SPARE, 0x10, %GLOBAL_SPARE
+-	EX_ST(STORE(stx, %o5, %o1 + %o3))
++	EX_ST(STORE(stx, %o5, %o1 + %o3), U1_gs_10)
+ 	add		%o1, 0x8, %o1
+-	EX_ST(STORE(stx, %g1, %o1 + %o3))
++	EX_ST(STORE(stx, %g1, %o1 + %o3), U1_gs_8)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%o1, 0x8, %o1
+ 73:	andcc		%o2, 0x8, %g0
+ 	be,pt		%XCC, 1f
+ 	 nop
+-	EX_LD(LOAD(ldx, %o1, %o5))
++	EX_LD(LOAD(ldx, %o1, %o5), U1_o2_0)
+ 	sub		%o2, 0x8, %o2
+-	EX_ST(STORE(stx, %o5, %o1 + %o3))
++	EX_ST(STORE(stx, %o5, %o1 + %o3), U1_o2_8)
+ 	add		%o1, 0x8, %o1
+ 1:	andcc		%o2, 0x4, %g0
+ 	be,pt		%XCC, 1f
+ 	 nop
+-	EX_LD(LOAD(lduw, %o1, %o5))
++	EX_LD(LOAD(lduw, %o1, %o5), U1_o2_0)
+ 	sub		%o2, 0x4, %o2
+-	EX_ST(STORE(stw, %o5, %o1 + %o3))
++	EX_ST(STORE(stw, %o5, %o1 + %o3), U1_o2_4)
+ 	add		%o1, 0x4, %o1
+ 1:	cmp		%o2, 0
+ 	be,pt		%XCC, 85f
+@@ -503,9 +616,9 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	 sub		%g0, %g1, %g1
+ 	sub		%o2, %g1, %o2
+ 
+-1:	EX_LD(LOAD(ldub, %o1, %o5))
++1:	EX_LD(LOAD(ldub, %o1, %o5), U1_g1_0)
+ 	subcc		%g1, 1, %g1
+-	EX_ST(STORE(stb, %o5, %o1 + %o3))
++	EX_ST(STORE(stb, %o5, %o1 + %o3), U1_g1_1)
+ 	bgu,pt		%icc, 1b
+ 	 add		%o1, 1, %o1
+ 
+@@ -521,16 +634,16 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 
+ 8:	mov		64, %o3
+ 	andn		%o1, 0x7, %o1
+-	EX_LD(LOAD(ldx, %o1, %g2))
++	EX_LD(LOAD(ldx, %o1, %g2), U1_o2_0)
+ 	sub		%o3, %g1, %o3
+ 	andn		%o2, 0x7, %GLOBAL_SPARE
+ 	sllx		%g2, %g1, %g2
+-1:	EX_LD(LOAD(ldx, %o1 + 0x8, %g3))
++1:	EX_LD(LOAD(ldx, %o1 + 0x8, %g3), U1_gs_0_o2_adj)
+ 	subcc		%GLOBAL_SPARE, 0x8, %GLOBAL_SPARE
+ 	add		%o1, 0x8, %o1
+ 	srlx		%g3, %o3, %o5
+ 	or		%o5, %g2, %o5
+-	EX_ST(STORE(stx, %o5, %o0))
++	EX_ST(STORE(stx, %o5, %o0), U1_gs_8_o2_adj)
+ 	add		%o0, 0x8, %o0
+ 	bgu,pt		%icc, 1b
+ 	 sllx		%g3, %g1, %g2
+@@ -548,9 +661,9 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	bne,pn		%XCC, 90f
+ 	 sub		%o0, %o1, %o3
+ 
+-1:	EX_LD(LOAD(lduw, %o1, %g1))
++1:	EX_LD(LOAD(lduw, %o1, %g1), U1_o2_0)
+ 	subcc		%o2, 4, %o2
+-	EX_ST(STORE(stw, %g1, %o1 + %o3))
++	EX_ST(STORE(stw, %g1, %o1 + %o3), U1_o2_4)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%o1, 4, %o1
+ 
+@@ -558,9 +671,9 @@ FUNC_NAME:		/* %o0=dst, %o1=src, %o2=len */
+ 	 mov		EX_RETVAL(%o4), %o0
+ 
+ 	.align		32
+-90:	EX_LD(LOAD(ldub, %o1, %g1))
++90:	EX_LD(LOAD(ldub, %o1, %g1), U1_o2_0)
+ 	subcc		%o2, 1, %o2
+-	EX_ST(STORE(stb, %g1, %o1 + %o3))
++	EX_ST(STORE(stb, %g1, %o1 + %o3), U1_o2_1)
+ 	bgu,pt		%XCC, 90b
+ 	 add		%o1, 1, %o1
+ 	retl
+diff --git a/arch/sparc/lib/U3copy_from_user.S b/arch/sparc/lib/U3copy_from_user.S
+index 88ad73d86fe4..db73010a1af8 100644
+--- a/arch/sparc/lib/U3copy_from_user.S
++++ b/arch/sparc/lib/U3copy_from_user.S
+@@ -3,19 +3,19 @@
+  * Copyright (C) 1999, 2000, 2004 David S. Miller (davem@redhat.com)
+  */
+ 
+-#define EX_LD(x)		\
++#define EX_LD(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one;	\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+-#define EX_LD_FP(x)		\
++#define EX_LD_FP(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_fp;\
++	.word 98b, y##_fp;	\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/U3copy_to_user.S b/arch/sparc/lib/U3copy_to_user.S
+index 845139d75537..c4ee858e352a 100644
+--- a/arch/sparc/lib/U3copy_to_user.S
++++ b/arch/sparc/lib/U3copy_to_user.S
+@@ -3,19 +3,19 @@
+  * Copyright (C) 1999, 2000, 2004 David S. Miller (davem@redhat.com)
+  */
+ 
+-#define EX_ST(x)		\
++#define EX_ST(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one;	\
++	.word 98b, y;		\
+ 	.text;			\
+ 	.align 4;
+ 
+-#define EX_ST_FP(x)		\
++#define EX_ST_FP(x,y)		\
+ 98:	x;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one_fp;\
++	.word 98b, y##_fp;	\
+ 	.text;			\
+ 	.align 4;
+ 
+diff --git a/arch/sparc/lib/U3memcpy.S b/arch/sparc/lib/U3memcpy.S
+index 491ee69e4995..54f98706b03b 100644
+--- a/arch/sparc/lib/U3memcpy.S
++++ b/arch/sparc/lib/U3memcpy.S
+@@ -4,6 +4,7 @@
+  */
+ 
+ #ifdef __KERNEL__
++#include <linux/linkage.h>
+ #include <asm/visasm.h>
+ #include <asm/asi.h>
+ #define GLOBAL_SPARE	%g7
+@@ -22,21 +23,17 @@
+ #endif
+ 
+ #ifndef EX_LD
+-#define EX_LD(x)	x
++#define EX_LD(x,y)	x
+ #endif
+ #ifndef EX_LD_FP
+-#define EX_LD_FP(x)	x
++#define EX_LD_FP(x,y)	x
+ #endif
+ 
+ #ifndef EX_ST
+-#define EX_ST(x)	x
++#define EX_ST(x,y)	x
+ #endif
+ #ifndef EX_ST_FP
+-#define EX_ST_FP(x)	x
+-#endif
+-
+-#ifndef EX_RETVAL
+-#define EX_RETVAL(x)	x
++#define EX_ST_FP(x,y)	x
+ #endif
+ 
+ #ifndef LOAD
+@@ -77,6 +74,87 @@
+ 	 */
+ 
+ 	.text
++#ifndef EX_RETVAL
++#define EX_RETVAL(x)	x
++__restore_fp:
++	VISExitHalf
++	retl
++	 nop
++ENTRY(U3_retl_o2_plus_g2_plus_g1_plus_1_fp)
++	add	%g1, 1, %g1
++	add	%g2, %g1, %g2
++	ba,pt	%xcc, __restore_fp
++	 add	%o2, %g2, %o0
++ENDPROC(U3_retl_o2_plus_g2_plus_g1_plus_1_fp)
++ENTRY(U3_retl_o2_plus_g2_fp)
++	ba,pt	%xcc, __restore_fp
++	 add	%o2, %g2, %o0
++ENDPROC(U3_retl_o2_plus_g2_fp)
++ENTRY(U3_retl_o2_plus_g2_plus_8_fp)
++	add	%g2, 8, %g2
++	ba,pt	%xcc, __restore_fp
++	 add	%o2, %g2, %o0
++ENDPROC(U3_retl_o2_plus_g2_plus_8_fp)
++ENTRY(U3_retl_o2)
++	retl
++	 mov	%o2, %o0
++ENDPROC(U3_retl_o2)
++ENTRY(U3_retl_o2_plus_1)
++	retl
++	 add	%o2, 1, %o0
++ENDPROC(U3_retl_o2_plus_1)
++ENTRY(U3_retl_o2_plus_4)
++	retl
++	 add	%o2, 4, %o0
++ENDPROC(U3_retl_o2_plus_4)
++ENTRY(U3_retl_o2_plus_8)
++	retl
++	 add	%o2, 8, %o0
++ENDPROC(U3_retl_o2_plus_8)
++ENTRY(U3_retl_o2_plus_g1_plus_1)
++	add	%g1, 1, %g1
++	retl
++	 add	%o2, %g1, %o0
++ENDPROC(U3_retl_o2_plus_g1_plus_1)
++ENTRY(U3_retl_o2_fp)
++	ba,pt	%xcc, __restore_fp
++	 mov	%o2, %o0
++ENDPROC(U3_retl_o2_fp)
++ENTRY(U3_retl_o2_plus_o3_sll_6_plus_0x80_fp)
++	sll	%o3, 6, %o3
++	add	%o3, 0x80, %o3
++	ba,pt	%xcc, __restore_fp
++	 add	%o2, %o3, %o0
++ENDPROC(U3_retl_o2_plus_o3_sll_6_plus_0x80_fp)
++ENTRY(U3_retl_o2_plus_o3_sll_6_plus_0x40_fp)
++	sll	%o3, 6, %o3
++	add	%o3, 0x40, %o3
++	ba,pt	%xcc, __restore_fp
++	 add	%o2, %o3, %o0
++ENDPROC(U3_retl_o2_plus_o3_sll_6_plus_0x40_fp)
++ENTRY(U3_retl_o2_plus_GS_plus_0x10)
++	add	GLOBAL_SPARE, 0x10, GLOBAL_SPARE
++	retl
++	 add	%o2, GLOBAL_SPARE, %o0
++ENDPROC(U3_retl_o2_plus_GS_plus_0x10)
++ENTRY(U3_retl_o2_plus_GS_plus_0x08)
++	add	GLOBAL_SPARE, 0x08, GLOBAL_SPARE
++	retl
++	 add	%o2, GLOBAL_SPARE, %o0
++ENDPROC(U3_retl_o2_plus_GS_plus_0x08)
++ENTRY(U3_retl_o2_and_7_plus_GS)
++	and	%o2, 7, %o2
++	retl
++	 add	%o2, GLOBAL_SPARE, %o2
++ENDPROC(U3_retl_o2_and_7_plus_GS)
++ENTRY(U3_retl_o2_and_7_plus_GS_plus_8)
++	add	GLOBAL_SPARE, 8, GLOBAL_SPARE
++	and	%o2, 7, %o2
++	retl
++	 add	%o2, GLOBAL_SPARE, %o2
++ENDPROC(U3_retl_o2_and_7_plus_GS_plus_8)
++#endif
++
+ 	.align		64
+ 
+ 	/* The cheetah's flexible spine, oversized liver, enlarged heart,
+@@ -126,8 +204,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	 and		%g2, 0x38, %g2
+ 
+ 1:	subcc		%g1, 0x1, %g1
+-	EX_LD_FP(LOAD(ldub, %o1 + 0x00, %o3))
+-	EX_ST_FP(STORE(stb, %o3, %o1 + GLOBAL_SPARE))
++	EX_LD_FP(LOAD(ldub, %o1 + 0x00, %o3), U3_retl_o2_plus_g2_plus_g1_plus_1)
++	EX_ST_FP(STORE(stb, %o3, %o1 + GLOBAL_SPARE), U3_retl_o2_plus_g2_plus_g1_plus_1)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%o1, 0x1, %o1
+ 
+@@ -138,20 +216,20 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	be,pt		%icc, 3f
+ 	 alignaddr	%o1, %g0, %o1
+ 
+-	EX_LD_FP(LOAD(ldd, %o1, %f4))
+-1:	EX_LD_FP(LOAD(ldd, %o1 + 0x8, %f6))
++	EX_LD_FP(LOAD(ldd, %o1, %f4), U3_retl_o2_plus_g2)
++1:	EX_LD_FP(LOAD(ldd, %o1 + 0x8, %f6), U3_retl_o2_plus_g2)
+ 	add		%o1, 0x8, %o1
+ 	subcc		%g2, 0x8, %g2
+ 	faligndata	%f4, %f6, %f0
+-	EX_ST_FP(STORE(std, %f0, %o0))
++	EX_ST_FP(STORE(std, %f0, %o0), U3_retl_o2_plus_g2_plus_8)
+ 	be,pn		%icc, 3f
+ 	 add		%o0, 0x8, %o0
+ 
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x8, %f4))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x8, %f4), U3_retl_o2_plus_g2)
+ 	add		%o1, 0x8, %o1
+ 	subcc		%g2, 0x8, %g2
+ 	faligndata	%f6, %f4, %f2
+-	EX_ST_FP(STORE(std, %f2, %o0))
++	EX_ST_FP(STORE(std, %f2, %o0), U3_retl_o2_plus_g2_plus_8)
+ 	bne,pt		%icc, 1b
+ 	 add		%o0, 0x8, %o0
+ 
+@@ -161,25 +239,25 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	LOAD(prefetch, %o1 + 0x080, #one_read)
+ 	LOAD(prefetch, %o1 + 0x0c0, #one_read)
+ 	LOAD(prefetch, %o1 + 0x100, #one_read)
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x000, %f0))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x000, %f0), U3_retl_o2)
+ 	LOAD(prefetch, %o1 + 0x140, #one_read)
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x008, %f2))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x008, %f2), U3_retl_o2)
+ 	LOAD(prefetch, %o1 + 0x180, #one_read)
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x010, %f4))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x010, %f4), U3_retl_o2)
+ 	LOAD(prefetch, %o1 + 0x1c0, #one_read)
+ 	faligndata	%f0, %f2, %f16
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x018, %f6))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x018, %f6), U3_retl_o2)
+ 	faligndata	%f2, %f4, %f18
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x020, %f8))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x020, %f8), U3_retl_o2)
+ 	faligndata	%f4, %f6, %f20
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x028, %f10))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x028, %f10), U3_retl_o2)
+ 	faligndata	%f6, %f8, %f22
+ 
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x030, %f12))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x030, %f12), U3_retl_o2)
+ 	faligndata	%f8, %f10, %f24
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x038, %f14))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x038, %f14), U3_retl_o2)
+ 	faligndata	%f10, %f12, %f26
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x040, %f0))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x040, %f0), U3_retl_o2)
+ 
+ 	subcc		GLOBAL_SPARE, 0x80, GLOBAL_SPARE
+ 	add		%o1, 0x40, %o1
+@@ -190,26 +268,26 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 	.align		64
+ 1:
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x008, %f2))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x008, %f2), U3_retl_o2_plus_o3_sll_6_plus_0x80)
+ 	faligndata	%f12, %f14, %f28
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x010, %f4))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x010, %f4), U3_retl_o2_plus_o3_sll_6_plus_0x80)
+ 	faligndata	%f14, %f0, %f30
+-	EX_ST_FP(STORE_BLK(%f16, %o0))
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x018, %f6))
++	EX_ST_FP(STORE_BLK(%f16, %o0), U3_retl_o2_plus_o3_sll_6_plus_0x80)
++	EX_LD_FP(LOAD(ldd, %o1 + 0x018, %f6), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 	faligndata	%f0, %f2, %f16
+ 	add		%o0, 0x40, %o0
+ 
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x020, %f8))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x020, %f8), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 	faligndata	%f2, %f4, %f18
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x028, %f10))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x028, %f10), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 	faligndata	%f4, %f6, %f20
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x030, %f12))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x030, %f12), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 	subcc		%o3, 0x01, %o3
+ 	faligndata	%f6, %f8, %f22
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x038, %f14))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x038, %f14), U3_retl_o2_plus_o3_sll_6_plus_0x80)
+ 
+ 	faligndata	%f8, %f10, %f24
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x040, %f0))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x040, %f0), U3_retl_o2_plus_o3_sll_6_plus_0x80)
+ 	LOAD(prefetch, %o1 + 0x1c0, #one_read)
+ 	faligndata	%f10, %f12, %f26
+ 	bg,pt		%XCC, 1b
+@@ -217,29 +295,29 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 	/* Finally we copy the last full 64-byte block. */
+ 2:
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x008, %f2))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x008, %f2), U3_retl_o2_plus_o3_sll_6_plus_0x80)
+ 	faligndata	%f12, %f14, %f28
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x010, %f4))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x010, %f4), U3_retl_o2_plus_o3_sll_6_plus_0x80)
+ 	faligndata	%f14, %f0, %f30
+-	EX_ST_FP(STORE_BLK(%f16, %o0))
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x018, %f6))
++	EX_ST_FP(STORE_BLK(%f16, %o0), U3_retl_o2_plus_o3_sll_6_plus_0x80)
++	EX_LD_FP(LOAD(ldd, %o1 + 0x018, %f6), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 	faligndata	%f0, %f2, %f16
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x020, %f8))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x020, %f8), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 	faligndata	%f2, %f4, %f18
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x028, %f10))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x028, %f10), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 	faligndata	%f4, %f6, %f20
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x030, %f12))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x030, %f12), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 	faligndata	%f6, %f8, %f22
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x038, %f14))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x038, %f14), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 	faligndata	%f8, %f10, %f24
+ 	cmp		%g1, 0
+ 	be,pt		%XCC, 1f
+ 	 add		%o0, 0x40, %o0
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x040, %f0))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x040, %f0), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 1:	faligndata	%f10, %f12, %f26
+ 	faligndata	%f12, %f14, %f28
+ 	faligndata	%f14, %f0, %f30
+-	EX_ST_FP(STORE_BLK(%f16, %o0))
++	EX_ST_FP(STORE_BLK(%f16, %o0), U3_retl_o2_plus_o3_sll_6_plus_0x40)
+ 	add		%o0, 0x40, %o0
+ 	add		%o1, 0x40, %o1
+ 	membar		#Sync
+@@ -259,20 +337,20 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 	sub		%o2, %g2, %o2
+ 	be,a,pt		%XCC, 1f
+-	 EX_LD_FP(LOAD(ldd, %o1 + 0x00, %f0))
++	 EX_LD_FP(LOAD(ldd, %o1 + 0x00, %f0), U3_retl_o2_plus_g2)
+ 
+-1:	EX_LD_FP(LOAD(ldd, %o1 + 0x08, %f2))
++1:	EX_LD_FP(LOAD(ldd, %o1 + 0x08, %f2), U3_retl_o2_plus_g2)
+ 	add		%o1, 0x8, %o1
+ 	subcc		%g2, 0x8, %g2
+ 	faligndata	%f0, %f2, %f8
+-	EX_ST_FP(STORE(std, %f8, %o0))
++	EX_ST_FP(STORE(std, %f8, %o0), U3_retl_o2_plus_g2_plus_8)
+ 	be,pn		%XCC, 2f
+ 	 add		%o0, 0x8, %o0
+-	EX_LD_FP(LOAD(ldd, %o1 + 0x08, %f0))
++	EX_LD_FP(LOAD(ldd, %o1 + 0x08, %f0), U3_retl_o2_plus_g2)
+ 	add		%o1, 0x8, %o1
+ 	subcc		%g2, 0x8, %g2
+ 	faligndata	%f2, %f0, %f8
+-	EX_ST_FP(STORE(std, %f8, %o0))
++	EX_ST_FP(STORE(std, %f8, %o0), U3_retl_o2_plus_g2_plus_8)
+ 	bne,pn		%XCC, 1b
+ 	 add		%o0, 0x8, %o0
+ 
+@@ -292,30 +370,33 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	 andcc		%o2, 0x8, %g0
+ 	be,pt		%icc, 1f
+ 	 nop
+-	EX_LD(LOAD(ldx, %o1, %o5))
+-	EX_ST(STORE(stx, %o5, %o1 + %o3))
++	EX_LD(LOAD(ldx, %o1, %o5), U3_retl_o2)
++	EX_ST(STORE(stx, %o5, %o1 + %o3), U3_retl_o2)
+ 	add		%o1, 0x8, %o1
++	sub		%o2, 8, %o2
+ 
+ 1:	andcc		%o2, 0x4, %g0
+ 	be,pt		%icc, 1f
+ 	 nop
+-	EX_LD(LOAD(lduw, %o1, %o5))
+-	EX_ST(STORE(stw, %o5, %o1 + %o3))
++	EX_LD(LOAD(lduw, %o1, %o5), U3_retl_o2)
++	EX_ST(STORE(stw, %o5, %o1 + %o3), U3_retl_o2)
+ 	add		%o1, 0x4, %o1
++	sub		%o2, 4, %o2
+ 
+ 1:	andcc		%o2, 0x2, %g0
+ 	be,pt		%icc, 1f
+ 	 nop
+-	EX_LD(LOAD(lduh, %o1, %o5))
+-	EX_ST(STORE(sth, %o5, %o1 + %o3))
++	EX_LD(LOAD(lduh, %o1, %o5), U3_retl_o2)
++	EX_ST(STORE(sth, %o5, %o1 + %o3), U3_retl_o2)
+ 	add		%o1, 0x2, %o1
++	sub		%o2, 2, %o2
+ 
+ 1:	andcc		%o2, 0x1, %g0
+ 	be,pt		%icc, 85f
+ 	 nop
+-	EX_LD(LOAD(ldub, %o1, %o5))
++	EX_LD(LOAD(ldub, %o1, %o5), U3_retl_o2)
+ 	ba,pt		%xcc, 85f
+-	 EX_ST(STORE(stb, %o5, %o1 + %o3))
++	 EX_ST(STORE(stb, %o5, %o1 + %o3), U3_retl_o2)
+ 
+ 	.align		64
+ 70: /* 16 < len <= 64 */
+@@ -326,26 +407,26 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	andn		%o2, 0xf, GLOBAL_SPARE
+ 	and		%o2, 0xf, %o2
+ 1:	subcc		GLOBAL_SPARE, 0x10, GLOBAL_SPARE
+-	EX_LD(LOAD(ldx, %o1 + 0x00, %o5))
+-	EX_LD(LOAD(ldx, %o1 + 0x08, %g1))
+-	EX_ST(STORE(stx, %o5, %o1 + %o3))
++	EX_LD(LOAD(ldx, %o1 + 0x00, %o5), U3_retl_o2_plus_GS_plus_0x10)
++	EX_LD(LOAD(ldx, %o1 + 0x08, %g1), U3_retl_o2_plus_GS_plus_0x10)
++	EX_ST(STORE(stx, %o5, %o1 + %o3), U3_retl_o2_plus_GS_plus_0x10)
+ 	add		%o1, 0x8, %o1
+-	EX_ST(STORE(stx, %g1, %o1 + %o3))
++	EX_ST(STORE(stx, %g1, %o1 + %o3), U3_retl_o2_plus_GS_plus_0x08)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%o1, 0x8, %o1
+ 73:	andcc		%o2, 0x8, %g0
+ 	be,pt		%XCC, 1f
+ 	 nop
+ 	sub		%o2, 0x8, %o2
+-	EX_LD(LOAD(ldx, %o1, %o5))
+-	EX_ST(STORE(stx, %o5, %o1 + %o3))
++	EX_LD(LOAD(ldx, %o1, %o5), U3_retl_o2_plus_8)
++	EX_ST(STORE(stx, %o5, %o1 + %o3), U3_retl_o2_plus_8)
+ 	add		%o1, 0x8, %o1
+ 1:	andcc		%o2, 0x4, %g0
+ 	be,pt		%XCC, 1f
+ 	 nop
+ 	sub		%o2, 0x4, %o2
+-	EX_LD(LOAD(lduw, %o1, %o5))
+-	EX_ST(STORE(stw, %o5, %o1 + %o3))
++	EX_LD(LOAD(lduw, %o1, %o5), U3_retl_o2_plus_4)
++	EX_ST(STORE(stw, %o5, %o1 + %o3), U3_retl_o2_plus_4)
+ 	add		%o1, 0x4, %o1
+ 1:	cmp		%o2, 0
+ 	be,pt		%XCC, 85f
+@@ -361,8 +442,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	sub		%o2, %g1, %o2
+ 
+ 1:	subcc		%g1, 1, %g1
+-	EX_LD(LOAD(ldub, %o1, %o5))
+-	EX_ST(STORE(stb, %o5, %o1 + %o3))
++	EX_LD(LOAD(ldub, %o1, %o5), U3_retl_o2_plus_g1_plus_1)
++	EX_ST(STORE(stb, %o5, %o1 + %o3), U3_retl_o2_plus_g1_plus_1)
+ 	bgu,pt		%icc, 1b
+ 	 add		%o1, 1, %o1
+ 
+@@ -378,16 +459,16 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 8:	mov		64, %o3
+ 	andn		%o1, 0x7, %o1
+-	EX_LD(LOAD(ldx, %o1, %g2))
++	EX_LD(LOAD(ldx, %o1, %g2), U3_retl_o2)
+ 	sub		%o3, %g1, %o3
+ 	andn		%o2, 0x7, GLOBAL_SPARE
+ 	sllx		%g2, %g1, %g2
+-1:	EX_LD(LOAD(ldx, %o1 + 0x8, %g3))
++1:	EX_LD(LOAD(ldx, %o1 + 0x8, %g3), U3_retl_o2_and_7_plus_GS)
+ 	subcc		GLOBAL_SPARE, 0x8, GLOBAL_SPARE
+ 	add		%o1, 0x8, %o1
+ 	srlx		%g3, %o3, %o5
+ 	or		%o5, %g2, %o5
+-	EX_ST(STORE(stx, %o5, %o0))
++	EX_ST(STORE(stx, %o5, %o0), U3_retl_o2_and_7_plus_GS_plus_8)
+ 	add		%o0, 0x8, %o0
+ 	bgu,pt		%icc, 1b
+ 	 sllx		%g3, %g1, %g2
+@@ -407,8 +488,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 1:
+ 	subcc		%o2, 4, %o2
+-	EX_LD(LOAD(lduw, %o1, %g1))
+-	EX_ST(STORE(stw, %g1, %o1 + %o3))
++	EX_LD(LOAD(lduw, %o1, %g1), U3_retl_o2_plus_4)
++	EX_ST(STORE(stw, %g1, %o1 + %o3), U3_retl_o2_plus_4)
+ 	bgu,pt		%XCC, 1b
+ 	 add		%o1, 4, %o1
+ 
+@@ -418,8 +499,8 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
+ 	.align		32
+ 90:
+ 	subcc		%o2, 1, %o2
+-	EX_LD(LOAD(ldub, %o1, %g1))
+-	EX_ST(STORE(stb, %g1, %o1 + %o3))
++	EX_LD(LOAD(ldub, %o1, %g1), U3_retl_o2_plus_1)
++	EX_ST(STORE(stb, %g1, %o1 + %o3), U3_retl_o2_plus_1)
+ 	bgu,pt		%XCC, 90b
+ 	 add		%o1, 1, %o1
+ 	retl
+diff --git a/arch/sparc/lib/copy_in_user.S b/arch/sparc/lib/copy_in_user.S
+index 302c0e60dc2c..4c89b486fa0d 100644
+--- a/arch/sparc/lib/copy_in_user.S
++++ b/arch/sparc/lib/copy_in_user.S
+@@ -8,18 +8,33 @@
+ 
+ #define XCC xcc
+ 
+-#define EX(x,y)			\
++#define EX(x,y,z)		\
+ 98:	x,y;			\
+ 	.section __ex_table,"a";\
+ 	.align 4;		\
+-	.word 98b, __retl_one;	\
++	.word 98b, z;		\
+ 	.text;			\
+ 	.align 4;
+ 
++#define EX_O4(x,y) EX(x,y,__retl_o4_plus_8)
++#define EX_O2_4(x,y) EX(x,y,__retl_o2_plus_4)
++#define EX_O2_1(x,y) EX(x,y,__retl_o2_plus_1)
++
+ 	.register	%g2,#scratch
+ 	.register	%g3,#scratch
+ 
+ 	.text
++__retl_o4_plus_8:
++	add	%o4, %o2, %o4
++	retl
++	 add	%o4, 8, %o0
++__retl_o2_plus_4:
++	retl
++	 add	%o2, 4, %o0
++__retl_o2_plus_1:
++	retl
++	 add	%o2, 1, %o0
++
+ 	.align	32
+ 
+ 	/* Don't try to get too fancy here, just nice and
+@@ -44,8 +59,8 @@ ENTRY(___copy_in_user)	/* %o0=dst, %o1=src, %o2=len */
+ 	andn		%o2, 0x7, %o4
+ 	and		%o2, 0x7, %o2
+ 1:	subcc		%o4, 0x8, %o4
+-	EX(ldxa [%o1] %asi, %o5)
+-	EX(stxa %o5, [%o0] %asi)
++	EX_O4(ldxa [%o1] %asi, %o5)
++	EX_O4(stxa %o5, [%o0] %asi)
+ 	add		%o1, 0x8, %o1
+ 	bgu,pt		%XCC, 1b
+ 	 add		%o0, 0x8, %o0
+@@ -53,8 +68,8 @@ ENTRY(___copy_in_user)	/* %o0=dst, %o1=src, %o2=len */
+ 	be,pt		%XCC, 1f
+ 	 nop
+ 	sub		%o2, 0x4, %o2
+-	EX(lduwa [%o1] %asi, %o5)
+-	EX(stwa %o5, [%o0] %asi)
++	EX_O2_4(lduwa [%o1] %asi, %o5)
++	EX_O2_4(stwa %o5, [%o0] %asi)
+ 	add		%o1, 0x4, %o1
+ 	add		%o0, 0x4, %o0
+ 1:	cmp		%o2, 0
+@@ -70,8 +85,8 @@ ENTRY(___copy_in_user)	/* %o0=dst, %o1=src, %o2=len */
+ 
+ 82:
+ 	subcc		%o2, 4, %o2
+-	EX(lduwa [%o1] %asi, %g1)
+-	EX(stwa %g1, [%o0] %asi)
++	EX_O2_4(lduwa [%o1] %asi, %g1)
++	EX_O2_4(stwa %g1, [%o0] %asi)
+ 	add		%o1, 4, %o1
+ 	bgu,pt		%XCC, 82b
+ 	 add		%o0, 4, %o0
+@@ -82,8 +97,8 @@ ENTRY(___copy_in_user)	/* %o0=dst, %o1=src, %o2=len */
+ 	.align	32
+ 90:
+ 	subcc		%o2, 1, %o2
+-	EX(lduba [%o1] %asi, %g1)
+-	EX(stba %g1, [%o0] %asi)
++	EX_O2_1(lduba [%o1] %asi, %g1)
++	EX_O2_1(stba %g1, [%o0] %asi)
+ 	add		%o1, 1, %o1
+ 	bgu,pt		%XCC, 90b
+ 	 add		%o0, 1, %o0
+diff --git a/arch/sparc/lib/user_fixup.c b/arch/sparc/lib/user_fixup.c
+deleted file mode 100644
+index ac96ae236709..000000000000
+--- a/arch/sparc/lib/user_fixup.c
++++ /dev/null
+@@ -1,71 +0,0 @@
+-/* user_fixup.c: Fix up user copy faults.
+- *
+- * Copyright (C) 2004 David S. Miller <davem@redhat.com>
+- */
+-
+-#include <linux/compiler.h>
+-#include <linux/kernel.h>
+-#include <linux/string.h>
+-#include <linux/errno.h>
+-#include <linux/module.h>
+-
+-#include <asm/uaccess.h>
+-
+-/* Calculating the exact fault address when using
+- * block loads and stores can be very complicated.
+- *
+- * Instead of trying to be clever and handling all
+- * of the cases, just fix things up simply here.
+- */
+-
+-static unsigned long compute_size(unsigned long start, unsigned long size, unsigned long *offset)
+-{
+-	unsigned long fault_addr = current_thread_info()->fault_address;
+-	unsigned long end = start + size;
+-
+-	if (fault_addr < start || fault_addr >= end) {
+-		*offset = 0;
+-	} else {
+-		*offset = fault_addr - start;
+-		size = end - fault_addr;
+-	}
+-	return size;
+-}
+-
+-unsigned long copy_from_user_fixup(void *to, const void __user *from, unsigned long size)
+-{
+-	unsigned long offset;
+-
+-	size = compute_size((unsigned long) from, size, &offset);
+-	if (likely(size))
+-		memset(to + offset, 0, size);
+-
+-	return size;
+-}
+-EXPORT_SYMBOL(copy_from_user_fixup);
+-
+-unsigned long copy_to_user_fixup(void __user *to, const void *from, unsigned long size)
+-{
+-	unsigned long offset;
+-
+-	return compute_size((unsigned long) to, size, &offset);
+-}
+-EXPORT_SYMBOL(copy_to_user_fixup);
+-
+-unsigned long copy_in_user_fixup(void __user *to, void __user *from, unsigned long size)
+-{
+-	unsigned long fault_addr = current_thread_info()->fault_address;
+-	unsigned long start = (unsigned long) to;
+-	unsigned long end = start + size;
+-
+-	if (fault_addr >= start && fault_addr < end)
+-		return end - fault_addr;
+-
+-	start = (unsigned long) from;
+-	end = start + size;
+-	if (fault_addr >= start && fault_addr < end)
+-		return end - fault_addr;
+-
+-	return size;
+-}
+-EXPORT_SYMBOL(copy_in_user_fixup);
+diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
+index f2b77112e9d8..e20fbbafb0b0 100644
+--- a/arch/sparc/mm/tsb.c
++++ b/arch/sparc/mm/tsb.c
+@@ -27,6 +27,20 @@ static inline int tag_compare(unsigned long tag, unsigned long vaddr)
+ 	return (tag == (vaddr >> 22));
+ }
+ 
++static void flush_tsb_kernel_range_scan(unsigned long start, unsigned long end)
++{
++	unsigned long idx;
++
++	for (idx = 0; idx < KERNEL_TSB_NENTRIES; idx++) {
++		struct tsb *ent = &swapper_tsb[idx];
++		unsigned long match = idx << 13;
++
++		match |= (ent->tag << 22);
++		if (match >= start && match < end)
++			ent->tag = (1UL << TSB_TAG_INVALID_BIT);
++	}
++}
++
+ /* TSB flushes need only occur on the processor initiating the address
+  * space modification, not on each cpu the address space has run on.
+  * Only the TLB flush needs that treatment.
+@@ -36,6 +50,9 @@ void flush_tsb_kernel_range(unsigned long start, unsigned long end)
+ {
+ 	unsigned long v;
+ 
++	if ((end - start) >> PAGE_SHIFT >= 2 * KERNEL_TSB_NENTRIES)
++		return flush_tsb_kernel_range_scan(start, end);
++
+ 	for (v = start; v < end; v += PAGE_SIZE) {
+ 		unsigned long hash = tsb_hash(v, PAGE_SHIFT,
+ 					      KERNEL_TSB_NENTRIES);
+diff --git a/arch/sparc/mm/ultra.S b/arch/sparc/mm/ultra.S
+index b4f4733abc6e..5d2fd6cd3189 100644
+--- a/arch/sparc/mm/ultra.S
++++ b/arch/sparc/mm/ultra.S
+@@ -30,7 +30,7 @@
+ 	.text
+ 	.align		32
+ 	.globl		__flush_tlb_mm
+-__flush_tlb_mm:		/* 18 insns */
++__flush_tlb_mm:		/* 19 insns */
+ 	/* %o0=(ctx & TAG_CONTEXT_BITS), %o1=SECONDARY_CONTEXT */
+ 	ldxa		[%o1] ASI_DMMU, %g2
+ 	cmp		%g2, %o0
+@@ -81,7 +81,7 @@ __flush_tlb_page:	/* 22 insns */
+ 
+ 	.align		32
+ 	.globl		__flush_tlb_pending
+-__flush_tlb_pending:	/* 26 insns */
++__flush_tlb_pending:	/* 27 insns */
+ 	/* %o0 = context, %o1 = nr, %o2 = vaddrs[] */
+ 	rdpr		%pstate, %g7
+ 	sllx		%o1, 3, %o1
+@@ -113,12 +113,14 @@ __flush_tlb_pending:	/* 26 insns */
+ 
+ 	.align		32
+ 	.globl		__flush_tlb_kernel_range
+-__flush_tlb_kernel_range:	/* 16 insns */
++__flush_tlb_kernel_range:	/* 31 insns */
+ 	/* %o0=start, %o1=end */
+ 	cmp		%o0, %o1
+ 	be,pn		%xcc, 2f
++	 sub		%o1, %o0, %o3
++	srlx		%o3, 18, %o4
++	brnz,pn		%o4, __spitfire_flush_tlb_kernel_range_slow
+ 	 sethi		%hi(PAGE_SIZE), %o4
+-	sub		%o1, %o0, %o3
+ 	sub		%o3, %o4, %o3
+ 	or		%o0, 0x20, %o0		! Nucleus
+ 1:	stxa		%g0, [%o0 + %o3] ASI_DMMU_DEMAP
+@@ -131,6 +133,41 @@ __flush_tlb_kernel_range:	/* 16 insns */
+ 	retl
+ 	 nop
+ 	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++
++__spitfire_flush_tlb_kernel_range_slow:
++	mov		63 * 8, %o4
++1:	ldxa		[%o4] ASI_ITLB_DATA_ACCESS, %o3
++	andcc		%o3, 0x40, %g0			/* _PAGE_L_4U */
++	bne,pn		%xcc, 2f
++	 mov		TLB_TAG_ACCESS, %o3
++	stxa		%g0, [%o3] ASI_IMMU
++	stxa		%g0, [%o4] ASI_ITLB_DATA_ACCESS
++	membar		#Sync
++2:	ldxa		[%o4] ASI_DTLB_DATA_ACCESS, %o3
++	andcc		%o3, 0x40, %g0
++	bne,pn		%xcc, 2f
++	 mov		TLB_TAG_ACCESS, %o3
++	stxa		%g0, [%o3] ASI_DMMU
++	stxa		%g0, [%o4] ASI_DTLB_DATA_ACCESS
++	membar		#Sync
++2:	sub		%o4, 8, %o4
++	brgez,pt	%o4, 1b
++	 nop
++	retl
++	 nop
+ 
+ __spitfire_flush_tlb_mm_slow:
+ 	rdpr		%pstate, %g1
+@@ -285,6 +322,40 @@ __cheetah_flush_tlb_pending:	/* 27 insns */
+ 	retl
+ 	 wrpr		%g7, 0x0, %pstate
+ 
++__cheetah_flush_tlb_kernel_range:	/* 31 insns */
++	/* %o0=start, %o1=end */
++	cmp		%o0, %o1
++	be,pn		%xcc, 2f
++	 sub		%o1, %o0, %o3
++	srlx		%o3, 18, %o4
++	brnz,pn		%o4, 3f
++	 sethi		%hi(PAGE_SIZE), %o4
++	sub		%o3, %o4, %o3
++	or		%o0, 0x20, %o0		! Nucleus
++1:	stxa		%g0, [%o0 + %o3] ASI_DMMU_DEMAP
++	stxa		%g0, [%o0 + %o3] ASI_IMMU_DEMAP
++	membar		#Sync
++	brnz,pt		%o3, 1b
++	 sub		%o3, %o4, %o3
++2:	sethi		%hi(KERNBASE), %o3
++	flush		%o3
++	retl
++	 nop
++3:	mov		0x80, %o4
++	stxa		%g0, [%o4] ASI_DMMU_DEMAP
++	membar		#Sync
++	stxa		%g0, [%o4] ASI_IMMU_DEMAP
++	membar		#Sync
++	retl
++	 nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++
+ #ifdef DCACHE_ALIASING_POSSIBLE
+ __cheetah_flush_dcache_page: /* 11 insns */
+ 	sethi		%hi(PAGE_OFFSET), %g1
+@@ -309,19 +380,28 @@ __hypervisor_tlb_tl0_error:
+ 	ret
+ 	 restore
+ 
+-__hypervisor_flush_tlb_mm: /* 10 insns */
++__hypervisor_flush_tlb_mm: /* 19 insns */
+ 	mov		%o0, %o2	/* ARG2: mmu context */
+ 	mov		0, %o0		/* ARG0: CPU lists unimplemented */
+ 	mov		0, %o1		/* ARG1: CPU lists unimplemented */
+ 	mov		HV_MMU_ALL, %o3	/* ARG3: flags */
+ 	mov		HV_FAST_MMU_DEMAP_CTX, %o5
+ 	ta		HV_FAST_TRAP
+-	brnz,pn		%o0, __hypervisor_tlb_tl0_error
++	brnz,pn		%o0, 1f
+ 	 mov		HV_FAST_MMU_DEMAP_CTX, %o1
+ 	retl
+ 	 nop
++1:	sethi		%hi(__hypervisor_tlb_tl0_error), %o5
++	jmpl		%o5 + %lo(__hypervisor_tlb_tl0_error), %g0
++	 nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
+ 
+-__hypervisor_flush_tlb_page: /* 11 insns */
++__hypervisor_flush_tlb_page: /* 22 insns */
+ 	/* %o0 = context, %o1 = vaddr */
+ 	mov		%o0, %g2
+ 	mov		%o1, %o0              /* ARG0: vaddr + IMMU-bit */
+@@ -330,12 +410,23 @@ __hypervisor_flush_tlb_page: /* 11 insns */
+ 	srlx		%o0, PAGE_SHIFT, %o0
+ 	sllx		%o0, PAGE_SHIFT, %o0
+ 	ta		HV_MMU_UNMAP_ADDR_TRAP
+-	brnz,pn		%o0, __hypervisor_tlb_tl0_error
++	brnz,pn		%o0, 1f
+ 	 mov		HV_MMU_UNMAP_ADDR_TRAP, %o1
+ 	retl
+ 	 nop
++1:	sethi		%hi(__hypervisor_tlb_tl0_error), %o2
++	jmpl		%o2 + %lo(__hypervisor_tlb_tl0_error), %g0
++	 nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
+ 
+-__hypervisor_flush_tlb_pending: /* 16 insns */
++__hypervisor_flush_tlb_pending: /* 27 insns */
+ 	/* %o0 = context, %o1 = nr, %o2 = vaddrs[] */
+ 	sllx		%o1, 3, %g1
+ 	mov		%o2, %g2
+@@ -347,31 +438,57 @@ __hypervisor_flush_tlb_pending: /* 16 insns */
+ 	srlx		%o0, PAGE_SHIFT, %o0
+ 	sllx		%o0, PAGE_SHIFT, %o0
+ 	ta		HV_MMU_UNMAP_ADDR_TRAP
+-	brnz,pn		%o0, __hypervisor_tlb_tl0_error
++	brnz,pn		%o0, 1f
+ 	 mov		HV_MMU_UNMAP_ADDR_TRAP, %o1
+ 	brnz,pt		%g1, 1b
+ 	 nop
+ 	retl
+ 	 nop
++1:	sethi		%hi(__hypervisor_tlb_tl0_error), %o2
++	jmpl		%o2 + %lo(__hypervisor_tlb_tl0_error), %g0
++	 nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
+ 
+-__hypervisor_flush_tlb_kernel_range: /* 16 insns */
++__hypervisor_flush_tlb_kernel_range: /* 31 insns */
+ 	/* %o0=start, %o1=end */
+ 	cmp		%o0, %o1
+ 	be,pn		%xcc, 2f
+-	 sethi		%hi(PAGE_SIZE), %g3
+-	mov		%o0, %g1
+-	sub		%o1, %g1, %g2
++	 sub		%o1, %o0, %g2
++	srlx		%g2, 18, %g3
++	brnz,pn		%g3, 4f
++	 mov		%o0, %g1
++	sethi		%hi(PAGE_SIZE), %g3
+ 	sub		%g2, %g3, %g2
+ 1:	add		%g1, %g2, %o0	/* ARG0: virtual address */
+ 	mov		0, %o1		/* ARG1: mmu context */
+ 	mov		HV_MMU_ALL, %o2	/* ARG2: flags */
+ 	ta		HV_MMU_UNMAP_ADDR_TRAP
+-	brnz,pn		%o0, __hypervisor_tlb_tl0_error
++	brnz,pn		%o0, 3f
+ 	 mov		HV_MMU_UNMAP_ADDR_TRAP, %o1
+ 	brnz,pt		%g2, 1b
+ 	 sub		%g2, %g3, %g2
+ 2:	retl
+ 	 nop
++3:	sethi		%hi(__hypervisor_tlb_tl0_error), %o2
++	jmpl		%o2 + %lo(__hypervisor_tlb_tl0_error), %g0
++	 nop
++4:	mov		0, %o0		/* ARG0: CPU lists unimplemented */
++	mov		0, %o1		/* ARG1: CPU lists unimplemented */
++	mov		0, %o2		/* ARG2: mmu context == nucleus */
++	mov		HV_MMU_ALL, %o3	/* ARG3: flags */
++	mov		HV_FAST_MMU_DEMAP_CTX, %o5
++	ta		HV_FAST_TRAP
++	brnz,pn		%o0, 3b
++	 mov		HV_FAST_MMU_DEMAP_CTX, %o1
++	retl
++	 nop
+ 
+ #ifdef DCACHE_ALIASING_POSSIBLE
+ 	/* XXX Niagara and friends have an 8K cache, so no aliasing is
+@@ -394,43 +511,6 @@ tlb_patch_one:
+ 	retl
+ 	 nop
+ 
+-	.globl		cheetah_patch_cachetlbops
+-cheetah_patch_cachetlbops:
+-	save		%sp, -128, %sp
+-
+-	sethi		%hi(__flush_tlb_mm), %o0
+-	or		%o0, %lo(__flush_tlb_mm), %o0
+-	sethi		%hi(__cheetah_flush_tlb_mm), %o1
+-	or		%o1, %lo(__cheetah_flush_tlb_mm), %o1
+-	call		tlb_patch_one
+-	 mov		19, %o2
+-
+-	sethi		%hi(__flush_tlb_page), %o0
+-	or		%o0, %lo(__flush_tlb_page), %o0
+-	sethi		%hi(__cheetah_flush_tlb_page), %o1
+-	or		%o1, %lo(__cheetah_flush_tlb_page), %o1
+-	call		tlb_patch_one
+-	 mov		22, %o2
+-
+-	sethi		%hi(__flush_tlb_pending), %o0
+-	or		%o0, %lo(__flush_tlb_pending), %o0
+-	sethi		%hi(__cheetah_flush_tlb_pending), %o1
+-	or		%o1, %lo(__cheetah_flush_tlb_pending), %o1
+-	call		tlb_patch_one
+-	 mov		27, %o2
+-
+-#ifdef DCACHE_ALIASING_POSSIBLE
+-	sethi		%hi(__flush_dcache_page), %o0
+-	or		%o0, %lo(__flush_dcache_page), %o0
+-	sethi		%hi(__cheetah_flush_dcache_page), %o1
+-	or		%o1, %lo(__cheetah_flush_dcache_page), %o1
+-	call		tlb_patch_one
+-	 mov		11, %o2
+-#endif /* DCACHE_ALIASING_POSSIBLE */
+-
+-	ret
+-	 restore
+-
+ #ifdef CONFIG_SMP
+ 	/* These are all called by the slaves of a cross call, at
+ 	 * trap level 1, with interrupts fully disabled.
+@@ -447,7 +527,7 @@ cheetah_patch_cachetlbops:
+ 	 */
+ 	.align		32
+ 	.globl		xcall_flush_tlb_mm
+-xcall_flush_tlb_mm:	/* 21 insns */
++xcall_flush_tlb_mm:	/* 24 insns */
+ 	mov		PRIMARY_CONTEXT, %g2
+ 	ldxa		[%g2] ASI_DMMU, %g3
+ 	srlx		%g3, CTX_PGSZ1_NUC_SHIFT, %g4
+@@ -469,9 +549,12 @@ xcall_flush_tlb_mm:	/* 21 insns */
+ 	nop
+ 	nop
+ 	nop
++	nop
++	nop
++	nop
+ 
+ 	.globl		xcall_flush_tlb_page
+-xcall_flush_tlb_page:	/* 17 insns */
++xcall_flush_tlb_page:	/* 20 insns */
+ 	/* %g5=context, %g1=vaddr */
+ 	mov		PRIMARY_CONTEXT, %g4
+ 	ldxa		[%g4] ASI_DMMU, %g2
+@@ -490,15 +573,20 @@ xcall_flush_tlb_page:	/* 17 insns */
+ 	retry
+ 	nop
+ 	nop
++	nop
++	nop
++	nop
+ 
+ 	.globl		xcall_flush_tlb_kernel_range
+-xcall_flush_tlb_kernel_range:	/* 25 insns */
++xcall_flush_tlb_kernel_range:	/* 44 insns */
+ 	sethi		%hi(PAGE_SIZE - 1), %g2
+ 	or		%g2, %lo(PAGE_SIZE - 1), %g2
+ 	andn		%g1, %g2, %g1
+ 	andn		%g7, %g2, %g7
+ 	sub		%g7, %g1, %g3
+-	add		%g2, 1, %g2
++	srlx		%g3, 18, %g2
++	brnz,pn		%g2, 2f
++	 add		%g2, 1, %g2
+ 	sub		%g3, %g2, %g3
+ 	or		%g1, 0x20, %g1		! Nucleus
+ 1:	stxa		%g0, [%g1 + %g3] ASI_DMMU_DEMAP
+@@ -507,8 +595,25 @@ xcall_flush_tlb_kernel_range:	/* 25 insns */
+ 	brnz,pt		%g3, 1b
+ 	 sub		%g3, %g2, %g3
+ 	retry
+-	nop
+-	nop
++2:	mov		63 * 8, %g1
++1:	ldxa		[%g1] ASI_ITLB_DATA_ACCESS, %g2
++	andcc		%g2, 0x40, %g0			/* _PAGE_L_4U */
++	bne,pn		%xcc, 2f
++	 mov		TLB_TAG_ACCESS, %g2
++	stxa		%g0, [%g2] ASI_IMMU
++	stxa		%g0, [%g1] ASI_ITLB_DATA_ACCESS
++	membar		#Sync
++2:	ldxa		[%g1] ASI_DTLB_DATA_ACCESS, %g2
++	andcc		%g2, 0x40, %g0
++	bne,pn		%xcc, 2f
++	 mov		TLB_TAG_ACCESS, %g2
++	stxa		%g0, [%g2] ASI_DMMU
++	stxa		%g0, [%g1] ASI_DTLB_DATA_ACCESS
++	membar		#Sync
++2:	sub		%g1, 8, %g1
++	brgez,pt	%g1, 1b
++	 nop
++	retry
+ 	nop
+ 	nop
+ 	nop
+@@ -637,6 +742,52 @@ xcall_fetch_glob_pmu_n4:
+ 
+ 	retry
+ 
++__cheetah_xcall_flush_tlb_kernel_range:	/* 44 insns */
++	sethi		%hi(PAGE_SIZE - 1), %g2
++	or		%g2, %lo(PAGE_SIZE - 1), %g2
++	andn		%g1, %g2, %g1
++	andn		%g7, %g2, %g7
++	sub		%g7, %g1, %g3
++	srlx		%g3, 18, %g2
++	brnz,pn		%g2, 2f
++	 add		%g2, 1, %g2
++	sub		%g3, %g2, %g3
++	or		%g1, 0x20, %g1		! Nucleus
++1:	stxa		%g0, [%g1 + %g3] ASI_DMMU_DEMAP
++	stxa		%g0, [%g1 + %g3] ASI_IMMU_DEMAP
++	membar		#Sync
++	brnz,pt		%g3, 1b
++	 sub		%g3, %g2, %g3
++	retry
++2:	mov		0x80, %g2
++	stxa		%g0, [%g2] ASI_DMMU_DEMAP
++	membar		#Sync
++	stxa		%g0, [%g2] ASI_IMMU_DEMAP
++	membar		#Sync
++	retry
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++
+ #ifdef DCACHE_ALIASING_POSSIBLE
+ 	.align		32
+ 	.globl		xcall_flush_dcache_page_cheetah
+@@ -700,7 +851,7 @@ __hypervisor_tlb_xcall_error:
+ 	ba,a,pt	%xcc, rtrap
+ 
+ 	.globl		__hypervisor_xcall_flush_tlb_mm
+-__hypervisor_xcall_flush_tlb_mm: /* 21 insns */
++__hypervisor_xcall_flush_tlb_mm: /* 24 insns */
+ 	/* %g5=ctx, g1,g2,g3,g4,g7=scratch, %g6=unusable */
+ 	mov		%o0, %g2
+ 	mov		%o1, %g3
+@@ -714,7 +865,7 @@ __hypervisor_xcall_flush_tlb_mm: /* 21 insns */
+ 	mov		HV_FAST_MMU_DEMAP_CTX, %o5
+ 	ta		HV_FAST_TRAP
+ 	mov		HV_FAST_MMU_DEMAP_CTX, %g6
+-	brnz,pn		%o0, __hypervisor_tlb_xcall_error
++	brnz,pn		%o0, 1f
+ 	 mov		%o0, %g5
+ 	mov		%g2, %o0
+ 	mov		%g3, %o1
+@@ -723,9 +874,12 @@ __hypervisor_xcall_flush_tlb_mm: /* 21 insns */
+ 	mov		%g7, %o5
+ 	membar		#Sync
+ 	retry
++1:	sethi		%hi(__hypervisor_tlb_xcall_error), %g4
++	jmpl		%g4 + %lo(__hypervisor_tlb_xcall_error), %g0
++	 nop
+ 
+ 	.globl		__hypervisor_xcall_flush_tlb_page
+-__hypervisor_xcall_flush_tlb_page: /* 17 insns */
++__hypervisor_xcall_flush_tlb_page: /* 20 insns */
+ 	/* %g5=ctx, %g1=vaddr */
+ 	mov		%o0, %g2
+ 	mov		%o1, %g3
+@@ -737,42 +891,64 @@ __hypervisor_xcall_flush_tlb_page: /* 17 insns */
+ 	sllx		%o0, PAGE_SHIFT, %o0
+ 	ta		HV_MMU_UNMAP_ADDR_TRAP
+ 	mov		HV_MMU_UNMAP_ADDR_TRAP, %g6
+-	brnz,a,pn	%o0, __hypervisor_tlb_xcall_error
++	brnz,a,pn	%o0, 1f
+ 	 mov		%o0, %g5
+ 	mov		%g2, %o0
+ 	mov		%g3, %o1
+ 	mov		%g4, %o2
+ 	membar		#Sync
+ 	retry
++1:	sethi		%hi(__hypervisor_tlb_xcall_error), %g4
++	jmpl		%g4 + %lo(__hypervisor_tlb_xcall_error), %g0
++	 nop
+ 
+ 	.globl		__hypervisor_xcall_flush_tlb_kernel_range
+-__hypervisor_xcall_flush_tlb_kernel_range: /* 25 insns */
++__hypervisor_xcall_flush_tlb_kernel_range: /* 44 insns */
+ 	/* %g1=start, %g7=end, g2,g3,g4,g5,g6=scratch */
+ 	sethi		%hi(PAGE_SIZE - 1), %g2
+ 	or		%g2, %lo(PAGE_SIZE - 1), %g2
+ 	andn		%g1, %g2, %g1
+ 	andn		%g7, %g2, %g7
+ 	sub		%g7, %g1, %g3
++	srlx		%g3, 18, %g7
+ 	add		%g2, 1, %g2
+ 	sub		%g3, %g2, %g3
+ 	mov		%o0, %g2
+ 	mov		%o1, %g4
+-	mov		%o2, %g7
++	brnz,pn		%g7, 2f
++	 mov		%o2, %g7
+ 1:	add		%g1, %g3, %o0	/* ARG0: virtual address */
+ 	mov		0, %o1		/* ARG1: mmu context */
+ 	mov		HV_MMU_ALL, %o2	/* ARG2: flags */
+ 	ta		HV_MMU_UNMAP_ADDR_TRAP
+ 	mov		HV_MMU_UNMAP_ADDR_TRAP, %g6
+-	brnz,pn		%o0, __hypervisor_tlb_xcall_error
++	brnz,pn		%o0, 1f
+ 	 mov		%o0, %g5
+ 	sethi		%hi(PAGE_SIZE), %o2
+ 	brnz,pt		%g3, 1b
+ 	 sub		%g3, %o2, %g3
+-	mov		%g2, %o0
++5:	mov		%g2, %o0
+ 	mov		%g4, %o1
+ 	mov		%g7, %o2
+ 	membar		#Sync
+ 	retry
++1:	sethi		%hi(__hypervisor_tlb_xcall_error), %g4
++	jmpl		%g4 + %lo(__hypervisor_tlb_xcall_error), %g0
++	 nop
++2:	mov		%o3, %g1
++	mov		%o5, %g3
++	mov		0, %o0		/* ARG0: CPU lists unimplemented */
++	mov		0, %o1		/* ARG1: CPU lists unimplemented */
++	mov		0, %o2		/* ARG2: mmu context == nucleus */
++	mov		HV_MMU_ALL, %o3	/* ARG3: flags */
++	mov		HV_FAST_MMU_DEMAP_CTX, %o5
++	ta		HV_FAST_TRAP
++	mov		%g1, %o3
++	brz,pt		%o0, 5b
++	 mov		%g3, %o5
++	mov		HV_FAST_MMU_DEMAP_CTX, %g6
++	ba,pt		%xcc, 1b
++	 clr		%g5
+ 
+ 	/* These just get rescheduled to PIL vectors. */
+ 	.globl		xcall_call_function
+@@ -809,6 +985,58 @@ xcall_kgdb_capture:
+ 
+ #endif /* CONFIG_SMP */
+ 
++	.globl		cheetah_patch_cachetlbops
++cheetah_patch_cachetlbops:
++	save		%sp, -128, %sp
++
++	sethi		%hi(__flush_tlb_mm), %o0
++	or		%o0, %lo(__flush_tlb_mm), %o0
++	sethi		%hi(__cheetah_flush_tlb_mm), %o1
++	or		%o1, %lo(__cheetah_flush_tlb_mm), %o1
++	call		tlb_patch_one
++	 mov		19, %o2
++
++	sethi		%hi(__flush_tlb_page), %o0
++	or		%o0, %lo(__flush_tlb_page), %o0
++	sethi		%hi(__cheetah_flush_tlb_page), %o1
++	or		%o1, %lo(__cheetah_flush_tlb_page), %o1
++	call		tlb_patch_one
++	 mov		22, %o2
++
++	sethi		%hi(__flush_tlb_pending), %o0
++	or		%o0, %lo(__flush_tlb_pending), %o0
++	sethi		%hi(__cheetah_flush_tlb_pending), %o1
++	or		%o1, %lo(__cheetah_flush_tlb_pending), %o1
++	call		tlb_patch_one
++	 mov		27, %o2
++
++	sethi		%hi(__flush_tlb_kernel_range), %o0
++	or		%o0, %lo(__flush_tlb_kernel_range), %o0
++	sethi		%hi(__cheetah_flush_tlb_kernel_range), %o1
++	or		%o1, %lo(__cheetah_flush_tlb_kernel_range), %o1
++	call		tlb_patch_one
++	 mov		31, %o2
++
++#ifdef DCACHE_ALIASING_POSSIBLE
++	sethi		%hi(__flush_dcache_page), %o0
++	or		%o0, %lo(__flush_dcache_page), %o0
++	sethi		%hi(__cheetah_flush_dcache_page), %o1
++	or		%o1, %lo(__cheetah_flush_dcache_page), %o1
++	call		tlb_patch_one
++	 mov		11, %o2
++#endif /* DCACHE_ALIASING_POSSIBLE */
++
++#ifdef CONFIG_SMP
++	sethi		%hi(xcall_flush_tlb_kernel_range), %o0
++	or		%o0, %lo(xcall_flush_tlb_kernel_range), %o0
++	sethi		%hi(__cheetah_xcall_flush_tlb_kernel_range), %o1
++	or		%o1, %lo(__cheetah_xcall_flush_tlb_kernel_range), %o1
++	call		tlb_patch_one
++	 mov		44, %o2
++#endif /* CONFIG_SMP */
++
++	ret
++	 restore
+ 
+ 	.globl		hypervisor_patch_cachetlbops
+ hypervisor_patch_cachetlbops:
+@@ -819,28 +1047,28 @@ hypervisor_patch_cachetlbops:
+ 	sethi		%hi(__hypervisor_flush_tlb_mm), %o1
+ 	or		%o1, %lo(__hypervisor_flush_tlb_mm), %o1
+ 	call		tlb_patch_one
+-	 mov		10, %o2
++	 mov		19, %o2
+ 
+ 	sethi		%hi(__flush_tlb_page), %o0
+ 	or		%o0, %lo(__flush_tlb_page), %o0
+ 	sethi		%hi(__hypervisor_flush_tlb_page), %o1
+ 	or		%o1, %lo(__hypervisor_flush_tlb_page), %o1
+ 	call		tlb_patch_one
+-	 mov		11, %o2
++	 mov		22, %o2
+ 
+ 	sethi		%hi(__flush_tlb_pending), %o0
+ 	or		%o0, %lo(__flush_tlb_pending), %o0
+ 	sethi		%hi(__hypervisor_flush_tlb_pending), %o1
+ 	or		%o1, %lo(__hypervisor_flush_tlb_pending), %o1
+ 	call		tlb_patch_one
+-	 mov		16, %o2
++	 mov		27, %o2
+ 
+ 	sethi		%hi(__flush_tlb_kernel_range), %o0
+ 	or		%o0, %lo(__flush_tlb_kernel_range), %o0
+ 	sethi		%hi(__hypervisor_flush_tlb_kernel_range), %o1
+ 	or		%o1, %lo(__hypervisor_flush_tlb_kernel_range), %o1
+ 	call		tlb_patch_one
+-	 mov		16, %o2
++	 mov		31, %o2
+ 
+ #ifdef DCACHE_ALIASING_POSSIBLE
+ 	sethi		%hi(__flush_dcache_page), %o0
+@@ -857,21 +1085,21 @@ hypervisor_patch_cachetlbops:
+ 	sethi		%hi(__hypervisor_xcall_flush_tlb_mm), %o1
+ 	or		%o1, %lo(__hypervisor_xcall_flush_tlb_mm), %o1
+ 	call		tlb_patch_one
+-	 mov		21, %o2
++	 mov		24, %o2
+ 
+ 	sethi		%hi(xcall_flush_tlb_page), %o0
+ 	or		%o0, %lo(xcall_flush_tlb_page), %o0
+ 	sethi		%hi(__hypervisor_xcall_flush_tlb_page), %o1
+ 	or		%o1, %lo(__hypervisor_xcall_flush_tlb_page), %o1
+ 	call		tlb_patch_one
+-	 mov		17, %o2
++	 mov		20, %o2
+ 
+ 	sethi		%hi(xcall_flush_tlb_kernel_range), %o0
+ 	or		%o0, %lo(xcall_flush_tlb_kernel_range), %o0
+ 	sethi		%hi(__hypervisor_xcall_flush_tlb_kernel_range), %o1
+ 	or		%o1, %lo(__hypervisor_xcall_flush_tlb_kernel_range), %o1
+ 	call		tlb_patch_one
+-	 mov		25, %o2
++	 mov		44, %o2
+ #endif /* CONFIG_SMP */
+ 
+ 	ret
+diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
+index c4751ece76f6..45e87c9cc828 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.c
++++ b/drivers/net/ethernet/broadcom/bgmac.c
+@@ -307,6 +307,10 @@ static void bgmac_dma_rx_enable(struct bgmac *bgmac,
+ 	u32 ctl;
+ 
+ 	ctl = bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_RX_CTL);
++
++	/* preserve ONLY bits 16-17 from current hardware value */
++	ctl &= BGMAC_DMA_RX_ADDREXT_MASK;
++
+ 	if (bgmac->feature_flags & BGMAC_FEAT_RX_MASK_SETUP) {
+ 		ctl &= ~BGMAC_DMA_RX_BL_MASK;
+ 		ctl |= BGMAC_DMA_RX_BL_128 << BGMAC_DMA_RX_BL_SHIFT;
+@@ -317,7 +321,6 @@ static void bgmac_dma_rx_enable(struct bgmac *bgmac,
+ 		ctl &= ~BGMAC_DMA_RX_PT_MASK;
+ 		ctl |= BGMAC_DMA_RX_PT_1 << BGMAC_DMA_RX_PT_SHIFT;
+ 	}
+-	ctl &= BGMAC_DMA_RX_ADDREXT_MASK;
+ 	ctl |= BGMAC_DMA_RX_ENABLE;
+ 	ctl |= BGMAC_DMA_RX_PARITY_DISABLE;
+ 	ctl |= BGMAC_DMA_RX_OVERFLOW_CONT;
+diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
+index 505ceaf451e2..2c850a92ab15 100644
+--- a/drivers/net/ethernet/broadcom/bnx2.c
++++ b/drivers/net/ethernet/broadcom/bnx2.c
+@@ -49,6 +49,7 @@
+ #include <linux/firmware.h>
+ #include <linux/log2.h>
+ #include <linux/aer.h>
++#include <linux/crash_dump.h>
+ 
+ #if defined(CONFIG_CNIC) || defined(CONFIG_CNIC_MODULE)
+ #define BCM_CNIC 1
+@@ -4759,15 +4760,16 @@ bnx2_setup_msix_tbl(struct bnx2 *bp)
+ 	BNX2_WR(bp, BNX2_PCI_GRC_WINDOW3_ADDR, BNX2_MSIX_PBA_ADDR);
+ }
+ 
+-static int
+-bnx2_reset_chip(struct bnx2 *bp, u32 reset_code)
++static void
++bnx2_wait_dma_complete(struct bnx2 *bp)
+ {
+ 	u32 val;
+-	int i, rc = 0;
+-	u8 old_port;
++	int i;
+ 
+-	/* Wait for the current PCI transaction to complete before
+-	 * issuing a reset. */
++	/*
++	 * Wait for the current PCI transaction to complete before
++	 * issuing a reset.
++	 */
+ 	if ((BNX2_CHIP(bp) == BNX2_CHIP_5706) ||
+ 	    (BNX2_CHIP(bp) == BNX2_CHIP_5708)) {
+ 		BNX2_WR(bp, BNX2_MISC_ENABLE_CLR_BITS,
+@@ -4791,6 +4793,21 @@ bnx2_reset_chip(struct bnx2 *bp, u32 reset_code)
+ 		}
+ 	}
+ 
++	return;
++}
++
++
++static int
++bnx2_reset_chip(struct bnx2 *bp, u32 reset_code)
++{
++	u32 val;
++	int i, rc = 0;
++	u8 old_port;
++
++	/* Wait for the current PCI transaction to complete before
++	 * issuing a reset. */
++	bnx2_wait_dma_complete(bp);
++
+ 	/* Wait for the firmware to tell us it is ok to issue a reset. */
+ 	bnx2_fw_sync(bp, BNX2_DRV_MSG_DATA_WAIT0 | reset_code, 1, 1);
+ 
+@@ -6356,6 +6373,10 @@ bnx2_open(struct net_device *dev)
+ 	struct bnx2 *bp = netdev_priv(dev);
+ 	int rc;
+ 
++	rc = bnx2_request_firmware(bp);
++	if (rc < 0)
++		goto out;
++
+ 	netif_carrier_off(dev);
+ 
+ 	bnx2_disable_int(bp);
+@@ -6424,6 +6445,7 @@ open_err:
+ 	bnx2_free_irq(bp);
+ 	bnx2_free_mem(bp);
+ 	bnx2_del_napi(bp);
++	bnx2_release_firmware(bp);
+ 	goto out;
+ }
+ 
+@@ -8570,12 +8592,15 @@ bnx2_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	pci_set_drvdata(pdev, dev);
+ 
+-	rc = bnx2_request_firmware(bp);
+-	if (rc < 0)
+-		goto error;
+-
++	/*
++	 * In-flight DMA from 1st kernel could continue going in kdump kernel.
++	 * New io-page table has been created before bnx2 does reset at open stage.
++	 * We have to wait for the in-flight DMA to complete to avoid it look up
++	 * into the newly created io-page table.
++	 */
++	if (is_kdump_kernel())
++		bnx2_wait_dma_complete(bp);
+ 
+-	bnx2_reset_chip(bp, BNX2_DRV_MSG_CODE_RESET);
+ 	memcpy(dev->dev_addr, bp->mac_addr, ETH_ALEN);
+ 
+ 	dev->hw_features = NETIF_F_IP_CSUM | NETIF_F_SG |
+@@ -8608,7 +8633,6 @@ bnx2_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	return 0;
+ 
+ error:
+-	bnx2_release_firmware(bp);
+ 	pci_iounmap(pdev, bp->regview);
+ 	pci_release_regions(pdev);
+ 	pci_disable_device(pdev);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index d48873bcbddf..5cdc96bdd444 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -231,7 +231,7 @@ mlxsw_sp_span_entry_create(struct mlxsw_sp_port *port)
+ 
+ 	span_entry->used = true;
+ 	span_entry->id = index;
+-	span_entry->ref_count = 0;
++	span_entry->ref_count = 1;
+ 	span_entry->local_port = local_port;
+ 	return span_entry;
+ }
+@@ -268,6 +268,7 @@ struct mlxsw_sp_span_entry *mlxsw_sp_span_entry_get(struct mlxsw_sp_port *port)
+ 
+ 	span_entry = mlxsw_sp_span_entry_find(port);
+ 	if (span_entry) {
++		/* Already exists, just take a reference */
+ 		span_entry->ref_count++;
+ 		return span_entry;
+ 	}
+@@ -278,6 +279,7 @@ struct mlxsw_sp_span_entry *mlxsw_sp_span_entry_get(struct mlxsw_sp_port *port)
+ static int mlxsw_sp_span_entry_put(struct mlxsw_sp *mlxsw_sp,
+ 				   struct mlxsw_sp_span_entry *span_entry)
+ {
++	WARN_ON(!span_entry->ref_count);
+ 	if (--span_entry->ref_count == 0)
+ 		mlxsw_sp_span_entry_destroy(mlxsw_sp, span_entry);
+ 	return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 3f5c51da6d3e..62514b9bf988 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -777,6 +777,26 @@ static void mlxsw_sp_router_neigh_rec_process(struct mlxsw_sp *mlxsw_sp,
+ 	}
+ }
+ 
++static bool mlxsw_sp_router_rauhtd_is_full(char *rauhtd_pl)
++{
++	u8 num_rec, last_rec_index, num_entries;
++
++	num_rec = mlxsw_reg_rauhtd_num_rec_get(rauhtd_pl);
++	last_rec_index = num_rec - 1;
++
++	if (num_rec < MLXSW_REG_RAUHTD_REC_MAX_NUM)
++		return false;
++	if (mlxsw_reg_rauhtd_rec_type_get(rauhtd_pl, last_rec_index) ==
++	    MLXSW_REG_RAUHTD_TYPE_IPV6)
++		return true;
++
++	num_entries = mlxsw_reg_rauhtd_ipv4_rec_num_entries_get(rauhtd_pl,
++								last_rec_index);
++	if (++num_entries == MLXSW_REG_RAUHTD_IPV4_ENT_PER_REC)
++		return true;
++	return false;
++}
++
+ static int mlxsw_sp_router_neighs_update_rauhtd(struct mlxsw_sp *mlxsw_sp)
+ {
+ 	char *rauhtd_pl;
+@@ -803,7 +823,7 @@ static int mlxsw_sp_router_neighs_update_rauhtd(struct mlxsw_sp *mlxsw_sp)
+ 		for (i = 0; i < num_rec; i++)
+ 			mlxsw_sp_router_neigh_rec_process(mlxsw_sp, rauhtd_pl,
+ 							  i);
+-	} while (num_rec);
++	} while (mlxsw_sp_router_rauhtd_is_full(rauhtd_pl));
+ 	rtnl_unlock();
+ 
+ 	kfree(rauhtd_pl);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 4c8c60af7985..fe9e7b1979b8 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -871,6 +871,13 @@ static int stmmac_init_phy(struct net_device *dev)
+ 		return -ENODEV;
+ 	}
+ 
++	/* stmmac_adjust_link will change this to PHY_IGNORE_INTERRUPT to avoid
++	 * subsequent PHY polling, make sure we force a link transition if
++	 * we have a UP/DOWN/UP transition
++	 */
++	if (phydev->is_pseudo_fixed_link)
++		phydev->irq = PHY_POLL;
++
+ 	pr_debug("stmmac_init_phy:  %s: attached to PHY (UID 0x%x)"
+ 		 " Link = %d\n", dev->name, phydev->phy_id, phydev->link);
+ 
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 5c8429f23a89..3a5530d0511b 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -133,8 +133,60 @@ struct ffs_epfile {
+ 	/*
+ 	 * Buffer for holding data from partial reads which may happen since
+ 	 * we’re rounding user read requests to a multiple of a max packet size.
++	 *
++	 * The pointer is initialised with NULL value and may be set by
++	 * __ffs_epfile_read_data function to point to a temporary buffer.
++	 *
++	 * In normal operation, calls to __ffs_epfile_read_buffered will consume
++	 * data from said buffer and eventually free it.  Importantly, while the
++	 * function is using the buffer, it sets the pointer to NULL.  This is
++	 * all right since __ffs_epfile_read_data and __ffs_epfile_read_buffered
++	 * can never run concurrently (they are synchronised by epfile->mutex)
++	 * so the latter will not assign a new value to the pointer.
++	 *
++	 * Meanwhile ffs_func_eps_disable frees the buffer (if the pointer is
++	 * valid) and sets the pointer to READ_BUFFER_DROP value.  This special
++	 * value is crux of the synchronisation between ffs_func_eps_disable and
++	 * __ffs_epfile_read_data.
++	 *
++	 * Once __ffs_epfile_read_data is about to finish it will try to set the
++	 * pointer back to its old value (as described above), but seeing as the
++	 * pointer is not-NULL (namely READ_BUFFER_DROP) it will instead free
++	 * the buffer.
++	 *
++	 * == State transitions ==
++	 *
++	 * • ptr == NULL:  (initial state)
++	 *   ◦ __ffs_epfile_read_buffer_free: go to ptr == DROP
++	 *   ◦ __ffs_epfile_read_buffered:    nop
++	 *   ◦ __ffs_epfile_read_data allocates temp buffer: go to ptr == buf
++	 *   ◦ reading finishes:              n/a, not in ‘and reading’ state
++	 * • ptr == DROP:
++	 *   ◦ __ffs_epfile_read_buffer_free: nop
++	 *   ◦ __ffs_epfile_read_buffered:    go to ptr == NULL
++	 *   ◦ __ffs_epfile_read_data allocates temp buffer: free buf, nop
++	 *   ◦ reading finishes:              n/a, not in ‘and reading’ state
++	 * • ptr == buf:
++	 *   ◦ __ffs_epfile_read_buffer_free: free buf, go to ptr == DROP
++	 *   ◦ __ffs_epfile_read_buffered:    go to ptr == NULL and reading
++	 *   ◦ __ffs_epfile_read_data:        n/a, __ffs_epfile_read_buffered
++	 *                                    is always called first
++	 *   ◦ reading finishes:              n/a, not in ‘and reading’ state
++	 * • ptr == NULL and reading:
++	 *   ◦ __ffs_epfile_read_buffer_free: go to ptr == DROP and reading
++	 *   ◦ __ffs_epfile_read_buffered:    n/a, mutex is held
++	 *   ◦ __ffs_epfile_read_data:        n/a, mutex is held
++	 *   ◦ reading finishes and …
++	 *     … all data read:               free buf, go to ptr == NULL
++	 *     … otherwise:                   go to ptr == buf and reading
++	 * • ptr == DROP and reading:
++	 *   ◦ __ffs_epfile_read_buffer_free: nop
++	 *   ◦ __ffs_epfile_read_buffered:    n/a, mutex is held
++	 *   ◦ __ffs_epfile_read_data:        n/a, mutex is held
++	 *   ◦ reading finishes:              free buf, go to ptr == DROP
+ 	 */
+-	struct ffs_buffer		*read_buffer;	/* P: epfile->mutex */
++	struct ffs_buffer		*read_buffer;
++#define READ_BUFFER_DROP ((struct ffs_buffer *)ERR_PTR(-ESHUTDOWN))
+ 
+ 	char				name[5];
+ 
+@@ -733,25 +785,47 @@ static void ffs_epfile_async_io_complete(struct usb_ep *_ep,
+ 	schedule_work(&io_data->work);
+ }
+ 
++static void __ffs_epfile_read_buffer_free(struct ffs_epfile *epfile)
++{
++	/*
++	 * See comment in struct ffs_epfile for full read_buffer pointer
++	 * synchronisation story.
++	 */
++	struct ffs_buffer *buf = xchg(&epfile->read_buffer, READ_BUFFER_DROP);
++	if (buf && buf != READ_BUFFER_DROP)
++		kfree(buf);
++}
++
+ /* Assumes epfile->mutex is held. */
+ static ssize_t __ffs_epfile_read_buffered(struct ffs_epfile *epfile,
+ 					  struct iov_iter *iter)
+ {
+-	struct ffs_buffer *buf = epfile->read_buffer;
++	/*
++	 * Null out epfile->read_buffer so ffs_func_eps_disable does not free
++	 * the buffer while we are using it.  See comment in struct ffs_epfile
++	 * for full read_buffer pointer synchronisation story.
++	 */
++	struct ffs_buffer *buf = xchg(&epfile->read_buffer, NULL);
+ 	ssize_t ret;
+-	if (!buf)
++	if (!buf || buf == READ_BUFFER_DROP)
+ 		return 0;
+ 
+ 	ret = copy_to_iter(buf->data, buf->length, iter);
+ 	if (buf->length == ret) {
+ 		kfree(buf);
+-		epfile->read_buffer = NULL;
+-	} else if (unlikely(iov_iter_count(iter))) {
++		return ret;
++	}
++
++	if (unlikely(iov_iter_count(iter))) {
+ 		ret = -EFAULT;
+ 	} else {
+ 		buf->length -= ret;
+ 		buf->data += ret;
+ 	}
++
++	if (cmpxchg(&epfile->read_buffer, NULL, buf))
++		kfree(buf);
++
+ 	return ret;
+ }
+ 
+@@ -780,7 +854,15 @@ static ssize_t __ffs_epfile_read_data(struct ffs_epfile *epfile,
+ 	buf->length = data_len;
+ 	buf->data = buf->storage;
+ 	memcpy(buf->storage, data + ret, data_len);
+-	epfile->read_buffer = buf;
++
++	/*
++	 * At this point read_buffer is NULL or READ_BUFFER_DROP (if
++	 * ffs_func_eps_disable has been called in the meanwhile).  See comment
++	 * in struct ffs_epfile for full read_buffer pointer synchronisation
++	 * story.
++	 */
++	if (unlikely(cmpxchg(&epfile->read_buffer, NULL, buf)))
++		kfree(buf);
+ 
+ 	return ret;
+ }
+@@ -1094,8 +1176,7 @@ ffs_epfile_release(struct inode *inode, struct file *file)
+ 
+ 	ENTER();
+ 
+-	kfree(epfile->read_buffer);
+-	epfile->read_buffer = NULL;
++	__ffs_epfile_read_buffer_free(epfile);
+ 	ffs_data_closed(epfile->ffs);
+ 
+ 	return 0;
+@@ -1721,24 +1802,20 @@ static void ffs_func_eps_disable(struct ffs_function *func)
+ 	unsigned count            = func->ffs->eps_count;
+ 	unsigned long flags;
+ 
++	spin_lock_irqsave(&func->ffs->eps_lock, flags);
+ 	do {
+-		if (epfile)
+-			mutex_lock(&epfile->mutex);
+-		spin_lock_irqsave(&func->ffs->eps_lock, flags);
+ 		/* pending requests get nuked */
+ 		if (likely(ep->ep))
+ 			usb_ep_disable(ep->ep);
+ 		++ep;
+-		spin_unlock_irqrestore(&func->ffs->eps_lock, flags);
+ 
+ 		if (epfile) {
+ 			epfile->ep = NULL;
+-			kfree(epfile->read_buffer);
+-			epfile->read_buffer = NULL;
+-			mutex_unlock(&epfile->mutex);
++			__ffs_epfile_read_buffer_free(epfile);
+ 			++epfile;
+ 		}
+ 	} while (--count);
++	spin_unlock_irqrestore(&func->ffs->eps_lock, flags);
+ }
+ 
+ static int ffs_func_eps_enable(struct ffs_function *func)
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 156b0c11b524..0ccf6daf6f56 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -47,7 +47,6 @@ struct inet_skb_parm {
+ #define IPSKB_REROUTED		BIT(4)
+ #define IPSKB_DOREDIRECT	BIT(5)
+ #define IPSKB_FRAG_PMTU		BIT(6)
+-#define IPSKB_FRAG_SEGS		BIT(7)
+ 
+ 	u16			frag_max_size;
+ };
+diff --git a/include/net/ip6_tunnel.h b/include/net/ip6_tunnel.h
+index 43a5a0e4524c..b01d5d1d7439 100644
+--- a/include/net/ip6_tunnel.h
++++ b/include/net/ip6_tunnel.h
+@@ -145,6 +145,7 @@ static inline void ip6tunnel_xmit(struct sock *sk, struct sk_buff *skb,
+ {
+ 	int pkt_len, err;
+ 
++	memset(skb->cb, 0, sizeof(struct inet6_skb_parm));
+ 	pkt_len = skb->len - skb_inner_network_offset(skb);
+ 	err = ip6_local_out(dev_net(skb_dst(skb)->dev), sk, skb);
+ 	if (unlikely(net_xmit_eval(err)))
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 8741988e6880..c26eab962ec7 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1587,11 +1587,11 @@ static inline void sock_put(struct sock *sk)
+ void sock_gen_put(struct sock *sk);
+ 
+ int __sk_receive_skb(struct sock *sk, struct sk_buff *skb, const int nested,
+-		     unsigned int trim_cap);
++		     unsigned int trim_cap, bool refcounted);
+ static inline int sk_receive_skb(struct sock *sk, struct sk_buff *skb,
+ 				 const int nested)
+ {
+-	return __sk_receive_skb(sk, skb, nested, 1);
++	return __sk_receive_skb(sk, skb, nested, 1, true);
+ }
+ 
+ static inline void sk_tx_queue_set(struct sock *sk, int tx_queue)
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 7717302cab91..0de698940793 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1164,6 +1164,7 @@ static inline void tcp_prequeue_init(struct tcp_sock *tp)
+ }
+ 
+ bool tcp_prequeue(struct sock *sk, struct sk_buff *skb);
++int tcp_filter(struct sock *sk, struct sk_buff *skb);
+ 
+ #undef STATE_TRACE
+ 
+diff --git a/include/uapi/linux/atm_zatm.h b/include/uapi/linux/atm_zatm.h
+index 5cd4d4d2dd1d..9c9c6ad55f14 100644
+--- a/include/uapi/linux/atm_zatm.h
++++ b/include/uapi/linux/atm_zatm.h
+@@ -14,7 +14,6 @@
+ 
+ #include <linux/atmapi.h>
+ #include <linux/atmioc.h>
+-#include <linux/time.h>
+ 
+ #define ZATM_GETPOOL	_IOW('a',ATMIOC_SARPRV+1,struct atmif_sioc)
+ 						/* get pool statistics */
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 570eeca7bdfa..ad1bc67aff1b 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -687,7 +687,8 @@ static void delete_all_elements(struct bpf_htab *htab)
+ 
+ 		hlist_for_each_entry_safe(l, n, head, hash_node) {
+ 			hlist_del_rcu(&l->hash_node);
+-			htab_elem_free(htab, l);
++			if (l->state != HTAB_EXTRA_ELEM_USED)
++				htab_elem_free(htab, l);
+ 		}
+ 	}
+ }
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 44b3ba462ba1..9ce9d7284ea7 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2484,7 +2484,7 @@ int skb_checksum_help(struct sk_buff *skb)
+ 			goto out;
+ 	}
+ 
+-	*(__sum16 *)(skb->data + offset) = csum_fold(csum);
++	*(__sum16 *)(skb->data + offset) = csum_fold(csum) ?: CSUM_MANGLED_0;
+ out_set_summed:
+ 	skb->ip_summed = CHECKSUM_NONE;
+ out:
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 52742a02814f..5550a86f7264 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -118,7 +118,7 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
+ 	struct flow_dissector_key_tags *key_tags;
+ 	struct flow_dissector_key_keyid *key_keyid;
+ 	u8 ip_proto = 0;
+-	bool ret = false;
++	bool ret;
+ 
+ 	if (!data) {
+ 		data = skb->data;
+@@ -481,12 +481,17 @@ ip_proto_again:
+ out_good:
+ 	ret = true;
+ 
+-out_bad:
++	key_control->thoff = (u16)nhoff;
++out:
+ 	key_basic->n_proto = proto;
+ 	key_basic->ip_proto = ip_proto;
+-	key_control->thoff = (u16)nhoff;
+ 
+ 	return ret;
++
++out_bad:
++	ret = false;
++	key_control->thoff = min_t(u16, nhoff, skb ? skb->len : hlen);
++	goto out;
+ }
+ EXPORT_SYMBOL(__skb_flow_dissect);
+ 
+diff --git a/net/core/sock.c b/net/core/sock.c
+index fd7b41edf1ce..10acaccca5c8 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -453,7 +453,7 @@ int sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ EXPORT_SYMBOL(sock_queue_rcv_skb);
+ 
+ int __sk_receive_skb(struct sock *sk, struct sk_buff *skb,
+-		     const int nested, unsigned int trim_cap)
++		     const int nested, unsigned int trim_cap, bool refcounted)
+ {
+ 	int rc = NET_RX_SUCCESS;
+ 
+@@ -487,7 +487,8 @@ int __sk_receive_skb(struct sock *sk, struct sk_buff *skb,
+ 
+ 	bh_unlock_sock(sk);
+ out:
+-	sock_put(sk);
++	if (refcounted)
++		sock_put(sk);
+ 	return rc;
+ discard_and_relse:
+ 	kfree_skb(skb);
+@@ -1563,6 +1564,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
+ 		RCU_INIT_POINTER(newsk->sk_reuseport_cb, NULL);
+ 
+ 		newsk->sk_err	   = 0;
++		newsk->sk_err_soft = 0;
+ 		newsk->sk_priority = 0;
+ 		newsk->sk_incoming_cpu = raw_smp_processor_id();
+ 		atomic64_set(&newsk->sk_cookie, 0);
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index 345a3aeb8c7e..b567c8725aea 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -235,7 +235,7 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
+ {
+ 	const struct iphdr *iph = (struct iphdr *)skb->data;
+ 	const u8 offset = iph->ihl << 2;
+-	const struct dccp_hdr *dh = (struct dccp_hdr *)(skb->data + offset);
++	const struct dccp_hdr *dh;
+ 	struct dccp_sock *dp;
+ 	struct inet_sock *inet;
+ 	const int type = icmp_hdr(skb)->type;
+@@ -245,11 +245,13 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
+ 	int err;
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	if (skb->len < offset + sizeof(*dh) ||
+-	    skb->len < offset + __dccp_basic_hdr_len(dh)) {
+-		__ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
+-		return;
+-	}
++	/* Only need dccph_dport & dccph_sport which are the first
++	 * 4 bytes in dccp header.
++	 * Our caller (icmp_socket_deliver()) already pulled 8 bytes for us.
++	 */
++	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8);
++	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8);
++	dh = (struct dccp_hdr *)(skb->data + offset);
+ 
+ 	sk = __inet_lookup_established(net, &dccp_hashinfo,
+ 				       iph->daddr, dh->dccph_dport,
+@@ -868,7 +870,7 @@ lookup:
+ 		goto discard_and_relse;
+ 	nf_reset(skb);
+ 
+-	return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4);
++	return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4, refcounted);
+ 
+ no_dccp_socket:
+ 	if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 3828f94b234c..715e5d1dc107 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -70,7 +70,7 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 			u8 type, u8 code, int offset, __be32 info)
+ {
+ 	const struct ipv6hdr *hdr = (const struct ipv6hdr *)skb->data;
+-	const struct dccp_hdr *dh = (struct dccp_hdr *)(skb->data + offset);
++	const struct dccp_hdr *dh;
+ 	struct dccp_sock *dp;
+ 	struct ipv6_pinfo *np;
+ 	struct sock *sk;
+@@ -78,12 +78,13 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	__u64 seq;
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	if (skb->len < offset + sizeof(*dh) ||
+-	    skb->len < offset + __dccp_basic_hdr_len(dh)) {
+-		__ICMP6_INC_STATS(net, __in6_dev_get(skb->dev),
+-				  ICMP6_MIB_INERRORS);
+-		return;
+-	}
++	/* Only need dccph_dport & dccph_sport which are the first
++	 * 4 bytes in dccp header.
++	 * Our caller (icmpv6_notify()) already pulled 8 bytes for us.
++	 */
++	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8);
++	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8);
++	dh = (struct dccp_hdr *)(skb->data + offset);
+ 
+ 	sk = __inet6_lookup_established(net, &dccp_hashinfo,
+ 					&hdr->daddr, dh->dccph_dport,
+@@ -738,7 +739,8 @@ lookup:
+ 	if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
+ 		goto discard_and_relse;
+ 
+-	return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4) ? -1 : 0;
++	return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4,
++				refcounted) ? -1 : 0;
+ 
+ no_dccp_socket:
+ 	if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
+@@ -956,6 +958,7 @@ static const struct inet_connection_sock_af_ops dccp_ipv6_mapped = {
+ 	.getsockopt	   = ipv6_getsockopt,
+ 	.addr2sockaddr	   = inet6_csk_addr2sockaddr,
+ 	.sockaddr_len	   = sizeof(struct sockaddr_in6),
++	.bind_conflict	   = inet6_csk_bind_conflict,
+ #ifdef CONFIG_COMPAT
+ 	.compat_setsockopt = compat_ipv6_setsockopt,
+ 	.compat_getsockopt = compat_ipv6_getsockopt,
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index 41e65804ddf5..9fe25bf63296 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -1009,6 +1009,10 @@ void dccp_close(struct sock *sk, long timeout)
+ 		__kfree_skb(skb);
+ 	}
+ 
++	/* If socket has been already reset kill it. */
++	if (sk->sk_state == DCCP_CLOSED)
++		goto adjudge_to_death;
++
+ 	if (data_was_unread) {
+ 		/* Unread data was tossed, send an appropriate Reset Code */
+ 		DCCP_WARN("ABORT with %u bytes unread\n", data_was_unread);
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index e2ffc2a5c7db..7ef703102dca 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -2455,22 +2455,19 @@ static struct key_vector *fib_route_get_idx(struct fib_route_iter *iter,
+ 	struct key_vector *l, **tp = &iter->tnode;
+ 	t_key key;
+ 
+-	/* use cache location of next-to-find key */
++	/* use cached location of previously found key */
+ 	if (iter->pos > 0 && pos >= iter->pos) {
+-		pos -= iter->pos;
+ 		key = iter->key;
+ 	} else {
+-		iter->pos = 0;
++		iter->pos = 1;
+ 		key = 0;
+ 	}
+ 
+-	while ((l = leaf_walk_rcu(tp, key)) != NULL) {
++	pos -= iter->pos;
++
++	while ((l = leaf_walk_rcu(tp, key)) && (pos-- > 0)) {
+ 		key = l->key + 1;
+ 		iter->pos++;
+-
+-		if (--pos <= 0)
+-			break;
+-
+ 		l = NULL;
+ 
+ 		/* handle unlikely case of a key wrap */
+@@ -2479,7 +2476,7 @@ static struct key_vector *fib_route_get_idx(struct fib_route_iter *iter,
+ 	}
+ 
+ 	if (l)
+-		iter->key = key;	/* remember it */
++		iter->key = l->key;	/* remember it */
+ 	else
+ 		iter->pos = 0;		/* forget it */
+ 
+@@ -2507,7 +2504,7 @@ static void *fib_route_seq_start(struct seq_file *seq, loff_t *pos)
+ 		return fib_route_get_idx(iter, *pos);
+ 
+ 	iter->pos = 0;
+-	iter->key = 0;
++	iter->key = KEY_MAX;
+ 
+ 	return SEQ_START_TOKEN;
+ }
+@@ -2516,7 +2513,7 @@ static void *fib_route_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
+ 	struct fib_route_iter *iter = seq->private;
+ 	struct key_vector *l = NULL;
+-	t_key key = iter->key;
++	t_key key = iter->key + 1;
+ 
+ 	++*pos;
+ 
+@@ -2525,7 +2522,7 @@ static void *fib_route_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ 		l = leaf_walk_rcu(&iter->tnode, key);
+ 
+ 	if (l) {
+-		iter->key = l->key + 1;
++		iter->key = l->key;
+ 		iter->pos++;
+ 	} else {
+ 		iter->pos = 0;
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index 38abe70e595f..48734ee6293f 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -477,7 +477,7 @@ static struct rtable *icmp_route_lookup(struct net *net,
+ 	fl4->flowi4_proto = IPPROTO_ICMP;
+ 	fl4->fl4_icmp_type = type;
+ 	fl4->fl4_icmp_code = code;
+-	fl4->flowi4_oif = l3mdev_master_ifindex(skb_in->dev);
++	fl4->flowi4_oif = l3mdev_master_ifindex(skb_dst(skb_in)->dev);
+ 
+ 	security_skb_classify_flow(skb_in, flowi4_to_flowi(fl4));
+ 	rt = __ip_route_output_key_hash(net, fl4,
+@@ -502,7 +502,7 @@ static struct rtable *icmp_route_lookup(struct net *net,
+ 	if (err)
+ 		goto relookup_failed;
+ 
+-	if (inet_addr_type_dev_table(net, skb_in->dev,
++	if (inet_addr_type_dev_table(net, skb_dst(skb_in)->dev,
+ 				     fl4_dec.saddr) == RTN_LOCAL) {
+ 		rt2 = __ip_route_output_key(net, &fl4_dec);
+ 		if (IS_ERR(rt2))
+diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
+index 8b4ffd216839..9f0a7b96646f 100644
+--- a/net/ipv4/ip_forward.c
++++ b/net/ipv4/ip_forward.c
+@@ -117,7 +117,7 @@ int ip_forward(struct sk_buff *skb)
+ 	if (opt->is_strictroute && rt->rt_uses_gateway)
+ 		goto sr_failed;
+ 
+-	IPCB(skb)->flags |= IPSKB_FORWARDED | IPSKB_FRAG_SEGS;
++	IPCB(skb)->flags |= IPSKB_FORWARDED;
+ 	mtu = ip_dst_mtu_maybe_forward(&rt->dst, true);
+ 	if (ip_exceeds_mtu(skb, mtu)) {
+ 		IP_INC_STATS(net, IPSTATS_MIB_FRAGFAILS);
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index dde37fb340bf..307daed9a4b9 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -223,11 +223,9 @@ static int ip_finish_output_gso(struct net *net, struct sock *sk,
+ 	struct sk_buff *segs;
+ 	int ret = 0;
+ 
+-	/* common case: fragmentation of segments is not allowed,
+-	 * or seglen is <= mtu
++	/* common case: seglen is <= mtu
+ 	 */
+-	if (((IPCB(skb)->flags & IPSKB_FRAG_SEGS) == 0) ||
+-	      skb_gso_validate_mtu(skb, mtu))
++	if (skb_gso_validate_mtu(skb, mtu))
+ 		return ip_finish_output2(net, sk, skb);
+ 
+ 	/* Slowpath -  GSO segment length is exceeding the dst MTU.
+diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
+index 0f227db0e9ac..afd6b5968caf 100644
+--- a/net/ipv4/ip_tunnel_core.c
++++ b/net/ipv4/ip_tunnel_core.c
+@@ -63,7 +63,6 @@ void iptunnel_xmit(struct sock *sk, struct rtable *rt, struct sk_buff *skb,
+ 	int pkt_len = skb->len - skb_inner_network_offset(skb);
+ 	struct net *net = dev_net(rt->dst.dev);
+ 	struct net_device *dev = skb->dev;
+-	int skb_iif = skb->skb_iif;
+ 	struct iphdr *iph;
+ 	int err;
+ 
+@@ -73,16 +72,6 @@ void iptunnel_xmit(struct sock *sk, struct rtable *rt, struct sk_buff *skb,
+ 	skb_dst_set(skb, &rt->dst);
+ 	memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ 
+-	if (skb_iif && !(df & htons(IP_DF))) {
+-		/* Arrived from an ingress interface, got encapsulated, with
+-		 * fragmentation of encapulating frames allowed.
+-		 * If skb is gso, the resulting encapsulated network segments
+-		 * may exceed dst mtu.
+-		 * Allow IP Fragmentation of segments.
+-		 */
+-		IPCB(skb)->flags |= IPSKB_FRAG_SEGS;
+-	}
+-
+ 	/* Push down and install the IP header. */
+ 	skb_push(skb, sizeof(struct iphdr));
+ 	skb_reset_network_header(skb);
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 5f006e13de56..27089f5ebbb1 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -1749,7 +1749,7 @@ static void ipmr_queue_xmit(struct net *net, struct mr_table *mrt,
+ 		vif->dev->stats.tx_bytes += skb->len;
+ 	}
+ 
+-	IPCB(skb)->flags |= IPSKB_FORWARDED | IPSKB_FRAG_SEGS;
++	IPCB(skb)->flags |= IPSKB_FORWARDED;
+ 
+ 	/* RFC1584 teaches, that DVMRP/PIM router must deliver packets locally
+ 	 * not only before forwarding, but after forwarding on all output
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 62c3ed0b7556..2f23ef1a8486 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -753,7 +753,9 @@ static void __ip_do_redirect(struct rtable *rt, struct sk_buff *skb, struct flow
+ 			goto reject_redirect;
+ 	}
+ 
+-	n = ipv4_neigh_lookup(&rt->dst, NULL, &new_gw);
++	n = __ipv4_neigh_lookup(rt->dst.dev, new_gw);
++	if (!n)
++		n = neigh_create(&arp_tbl, &new_gw, rt->dst.dev);
+ 	if (!IS_ERR(n)) {
+ 		if (!(n->nud_state & NUD_VALID)) {
+ 			neigh_event_send(n, NULL);
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index ffbb218de520..c876f5ddc86c 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1145,7 +1145,7 @@ restart:
+ 
+ 	err = -EPIPE;
+ 	if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN))
+-		goto out_err;
++		goto do_error;
+ 
+ 	sg = !!(sk->sk_route_caps & NETIF_F_SG);
+ 
+@@ -1219,7 +1219,7 @@ new_segment:
+ 
+ 			if (!skb_can_coalesce(skb, i, pfrag->page,
+ 					      pfrag->offset)) {
+-				if (i == sysctl_max_skb_frags || !sg) {
++				if (i >= sysctl_max_skb_frags || !sg) {
+ 					tcp_mark_push(tp, skb);
+ 					goto new_segment;
+ 				}
+diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
+index 10d728b6804c..ab37c6775630 100644
+--- a/net/ipv4/tcp_dctcp.c
++++ b/net/ipv4/tcp_dctcp.c
+@@ -56,6 +56,7 @@ struct dctcp {
+ 	u32 next_seq;
+ 	u32 ce_state;
+ 	u32 delayed_ack_reserved;
++	u32 loss_cwnd;
+ };
+ 
+ static unsigned int dctcp_shift_g __read_mostly = 4; /* g = 1/2^4 */
+@@ -96,6 +97,7 @@ static void dctcp_init(struct sock *sk)
+ 		ca->dctcp_alpha = min(dctcp_alpha_on_init, DCTCP_MAX_ALPHA);
+ 
+ 		ca->delayed_ack_reserved = 0;
++		ca->loss_cwnd = 0;
+ 		ca->ce_state = 0;
+ 
+ 		dctcp_reset(tp, ca);
+@@ -111,9 +113,10 @@ static void dctcp_init(struct sock *sk)
+ 
+ static u32 dctcp_ssthresh(struct sock *sk)
+ {
+-	const struct dctcp *ca = inet_csk_ca(sk);
++	struct dctcp *ca = inet_csk_ca(sk);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 
++	ca->loss_cwnd = tp->snd_cwnd;
+ 	return max(tp->snd_cwnd - ((tp->snd_cwnd * ca->dctcp_alpha) >> 11U), 2U);
+ }
+ 
+@@ -308,12 +311,20 @@ static size_t dctcp_get_info(struct sock *sk, u32 ext, int *attr,
+ 	return 0;
+ }
+ 
++static u32 dctcp_cwnd_undo(struct sock *sk)
++{
++	const struct dctcp *ca = inet_csk_ca(sk);
++
++	return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
++}
++
+ static struct tcp_congestion_ops dctcp __read_mostly = {
+ 	.init		= dctcp_init,
+ 	.in_ack_event   = dctcp_update_alpha,
+ 	.cwnd_event	= dctcp_cwnd_event,
+ 	.ssthresh	= dctcp_ssthresh,
+ 	.cong_avoid	= tcp_reno_cong_avoid,
++	.undo_cwnd	= dctcp_cwnd_undo,
+ 	.set_state	= dctcp_state,
+ 	.get_info	= dctcp_get_info,
+ 	.flags		= TCP_CONG_NEEDS_ECN,
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 7158d4f8dae4..7b235fa12903 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1537,6 +1537,21 @@ bool tcp_prequeue(struct sock *sk, struct sk_buff *skb)
+ }
+ EXPORT_SYMBOL(tcp_prequeue);
+ 
++int tcp_filter(struct sock *sk, struct sk_buff *skb)
++{
++	struct tcphdr *th = (struct tcphdr *)skb->data;
++	unsigned int eaten = skb->len;
++	int err;
++
++	err = sk_filter_trim_cap(sk, skb, th->doff * 4);
++	if (!err) {
++		eaten -= skb->len;
++		TCP_SKB_CB(skb)->end_seq -= eaten;
++	}
++	return err;
++}
++EXPORT_SYMBOL(tcp_filter);
++
+ /*
+  *	From tcp_input.c
+  */
+@@ -1648,8 +1663,10 @@ process:
+ 
+ 	nf_reset(skb);
+ 
+-	if (sk_filter(sk, skb))
++	if (tcp_filter(sk, skb))
+ 		goto discard_and_relse;
++	th = (const struct tcphdr *)skb->data;
++	iph = ip_hdr(skb);
+ 
+ 	skb->dev = NULL;
+ 
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index bd59c343d35f..7370ad2e693a 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -448,7 +448,7 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ 	if (__ipv6_addr_needs_scope_id(addr_type))
+ 		iif = skb->dev->ifindex;
+ 	else
+-		iif = l3mdev_master_ifindex(skb->dev);
++		iif = l3mdev_master_ifindex(skb_dst(skb)->dev);
+ 
+ 	/*
+ 	 *	Must not send error if the source does not uniquely
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index fc67822c42e0..af6a09efad5b 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1228,7 +1228,7 @@ static int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
+ 	if (skb->protocol == htons(ETH_P_IP))
+ 		return tcp_v4_do_rcv(sk, skb);
+ 
+-	if (sk_filter(sk, skb))
++	if (tcp_filter(sk, skb))
+ 		goto discard;
+ 
+ 	/*
+@@ -1455,8 +1455,10 @@ process:
+ 	if (tcp_v6_inbound_md5_hash(sk, skb))
+ 		goto discard_and_relse;
+ 
+-	if (sk_filter(sk, skb))
++	if (tcp_filter(sk, skb))
+ 		goto discard_and_relse;
++	th = (const struct tcphdr *)skb->data;
++	hdr = ipv6_hdr(skb);
+ 
+ 	skb->dev = NULL;
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index baccbf3c1c60..7b0e059bf13b 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -1214,9 +1214,12 @@ static int __sctp_connect(struct sock *sk,
+ 
+ 	timeo = sock_sndtimeo(sk, f_flags & O_NONBLOCK);
+ 
+-	err = sctp_wait_for_connect(asoc, &timeo);
+-	if ((err == 0 || err == -EINPROGRESS) && assoc_id)
++	if (assoc_id)
+ 		*assoc_id = asoc->assoc_id;
++	err = sctp_wait_for_connect(asoc, &timeo);
++	/* Note: the asoc may be freed after the return of
++	 * sctp_wait_for_connect.
++	 */
+ 
+ 	/* Don't free association on exit. */
+ 	asoc = NULL;
+@@ -4278,19 +4281,18 @@ static void sctp_shutdown(struct sock *sk, int how)
+ {
+ 	struct net *net = sock_net(sk);
+ 	struct sctp_endpoint *ep;
+-	struct sctp_association *asoc;
+ 
+ 	if (!sctp_style(sk, TCP))
+ 		return;
+ 
+-	if (how & SEND_SHUTDOWN) {
++	ep = sctp_sk(sk)->ep;
++	if (how & SEND_SHUTDOWN && !list_empty(&ep->asocs)) {
++		struct sctp_association *asoc;
++
+ 		sk->sk_state = SCTP_SS_CLOSING;
+-		ep = sctp_sk(sk)->ep;
+-		if (!list_empty(&ep->asocs)) {
+-			asoc = list_entry(ep->asocs.next,
+-					  struct sctp_association, asocs);
+-			sctp_primitive_SHUTDOWN(net, asoc, NULL);
+-		}
++		asoc = list_entry(ep->asocs.next,
++				  struct sctp_association, asocs);
++		sctp_primitive_SHUTDOWN(net, asoc, NULL);
+ 	}
+ }
+ 
+diff --git a/net/socket.c b/net/socket.c
+index a1bd16106625..03bc2c289c94 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2041,6 +2041,8 @@ int __sys_sendmmsg(int fd, struct mmsghdr __user *mmsg, unsigned int vlen,
+ 		if (err)
+ 			break;
+ 		++datagrams;
++		if (msg_data_left(&msg_sys))
++			break;
+ 		cond_resched();
+ 	}
+ 
+diff --git a/tools/spi/spidev_test.c b/tools/spi/spidev_test.c
+index f3825b676e38..f046b77cfefe 100644
+--- a/tools/spi/spidev_test.c
++++ b/tools/spi/spidev_test.c
+@@ -19,6 +19,7 @@
+ #include <getopt.h>
+ #include <fcntl.h>
+ #include <sys/ioctl.h>
++#include <linux/ioctl.h>
+ #include <sys/stat.h>
+ #include <linux/types.h>
+ #include <linux/spi/spidev.h>



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-11-21 14:55 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-11-21 14:55 UTC (permalink / raw
  To: gentoo-commits

commit:     da402fa940145d444f70632399df6fdbdbb40162
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Nov 21 14:54:55 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Nov 21 14:54:55 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=da402fa9

Update gentoo kconfig patch adding CHECKPOINT_RESTORE for GENTOO_LINUX_INIT_SYSTEMD. See bug #598623

 4567_distro-Gentoo-Kconfig.patch | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index cf5a20c..acb0972 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,14 +1,15 @@
---- a/Kconfig	2016-08-30 14:30:48.508361013 -0400
-+++ b/Kconfig	2016-08-30 14:31:40.718683061 -0400
-@@ -9,3 +9,5 @@ config SRCARCH
+--- a/Kconfig	2016-07-01 19:22:17.117439707 -0400
++++ b/Kconfig	2016-07-01 19:21:54.371440596 -0400
+@@ -8,4 +8,6 @@ config SRCARCH
+ 	string
  	option env="SRCARCH"
  
- source "arch/$SRCARCH/Kconfig"
-+
 +source "distro/Kconfig"
---- /dev/null	2016-08-30 01:47:09.760073185 -0400
-+++ b/distro/Kconfig	2016-08-30 14:32:21.378933599 -0400
-@@ -0,0 +1,133 @@
++
+ source "arch/$SRCARCH/Kconfig"
+--- /dev/null	2016-11-15 00:56:18.320838834 -0500
++++ b/distro/Kconfig	2016-11-16 06:24:29.457357409 -0500
+@@ -0,0 +1,142 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -32,6 +33,7 @@
 +
 +	select DEVTMPFS
 +	select TMPFS
++	select UNIX
 +
 +	select MMU
 +	select SHMEM
@@ -111,16 +113,24 @@
 +	select AUTOFS4_FS
 +	select BLK_DEV_BSG
 +	select CGROUPS
++	select CHECKPOINT_RESTORE
++	select DEVPTS_MULTIPLE_INSTANCES
++	select DMIID
 +	select EPOLL
 +	select FANOTIFY
 +	select FHANDLE
 +	select INOTIFY_USER
++	select IPV6
 +	select NET
 +	select NET_NS
 +	select PROC_FS
++	select SECCOMP
++	select SECCOMP_FILTER
 +	select SIGNALFD
 +	select SYSFS
 +	select TIMERFD
++	select TMPFS_POSIX_ACL
++	select TMPFS_XATTR
 +
 +	select ANON_INODES
 +	select BLOCK



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-11-26 14:21 Alice Ferrazzi
  0 siblings, 0 replies; 26+ messages in thread
From: Alice Ferrazzi @ 2016-11-26 14:21 UTC (permalink / raw
  To: gentoo-commits

commit:     323a66be9ef3d4a7514e055204c780958716758d
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Nov 26 14:19:40 2016 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Nov 26 14:19:40 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=323a66be

Linux patch 4.8.11

 0000_README             |    4 +
 1010_linux-4.8.11.patch | 2351 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2355 insertions(+)

diff --git a/0000_README b/0000_README
index 13976e7..4aa1baf 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-4.8.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.10
 
+Patch:  1010_linux-4.8.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-4.8.11.patch b/1010_linux-4.8.11.patch
new file mode 100644
index 0000000..49be830
--- /dev/null
+++ b/1010_linux-4.8.11.patch
@@ -0,0 +1,2351 @@
+diff --git a/Makefile b/Makefile
+index 7cf2b4985703..2b1bcbacebcd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+@@ -399,11 +399,12 @@ KBUILD_CFLAGS   := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ 		   -fno-strict-aliasing -fno-common \
+ 		   -Werror-implicit-function-declaration \
+ 		   -Wno-format-security \
+-		   -std=gnu89
++		   -std=gnu89 $(call cc-option,-fno-PIE)
++
+ 
+ KBUILD_AFLAGS_KERNEL :=
+ KBUILD_CFLAGS_KERNEL :=
+-KBUILD_AFLAGS   := -D__ASSEMBLY__
++KBUILD_AFLAGS   := -D__ASSEMBLY__ $(call cc-option,-fno-PIE)
+ KBUILD_AFLAGS_MODULE  := -DMODULE
+ KBUILD_CFLAGS_MODULE  := -DMODULE
+ KBUILD_LDFLAGS_MODULE := -T $(srctree)/scripts/module-common.lds
+@@ -621,6 +622,7 @@ include arch/$(SRCARCH)/Makefile
+ 
+ KBUILD_CFLAGS	+= $(call cc-option,-fno-delete-null-pointer-checks,)
+ KBUILD_CFLAGS	+= $(call cc-disable-warning,maybe-uninitialized,)
++KBUILD_CFLAGS	+= $(call cc-disable-warning,frame-address,)
+ 
+ ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
+ KBUILD_CFLAGS	+= -Os
+diff --git a/arch/arm/boot/dts/imx53-qsb.dts b/arch/arm/boot/dts/imx53-qsb.dts
+index dec4b073ceb1..379939699164 100644
+--- a/arch/arm/boot/dts/imx53-qsb.dts
++++ b/arch/arm/boot/dts/imx53-qsb.dts
+@@ -64,8 +64,8 @@
+ 			};
+ 
+ 			ldo3_reg: ldo3 {
+-				regulator-min-microvolt = <600000>;
+-				regulator-max-microvolt = <1800000>;
++				regulator-min-microvolt = <1725000>;
++				regulator-max-microvolt = <3300000>;
+ 				regulator-always-on;
+ 			};
+ 
+@@ -76,8 +76,8 @@
+ 			};
+ 
+ 			ldo5_reg: ldo5 {
+-				regulator-min-microvolt = <1725000>;
+-				regulator-max-microvolt = <3300000>;
++				regulator-min-microvolt = <1200000>;
++				regulator-max-microvolt = <3600000>;
+ 				regulator-always-on;
+ 			};
+ 
+@@ -100,14 +100,14 @@
+ 			};
+ 
+ 			ldo9_reg: ldo9 {
+-				regulator-min-microvolt = <1200000>;
++				regulator-min-microvolt = <1250000>;
+ 				regulator-max-microvolt = <3600000>;
+ 				regulator-always-on;
+ 			};
+ 
+ 			ldo10_reg: ldo10 {
+-				regulator-min-microvolt = <1250000>;
+-				regulator-max-microvolt = <3650000>;
++				regulator-min-microvolt = <1200000>;
++				regulator-max-microvolt = <3600000>;
+ 				regulator-always-on;
+ 			};
+ 		};
+diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
+index 2065f46fa740..38b6a2b49d68 100644
+--- a/arch/arm64/include/asm/perf_event.h
++++ b/arch/arm64/include/asm/perf_event.h
+@@ -46,7 +46,15 @@
+ #define	ARMV8_PMU_EVTYPE_MASK	0xc800ffff	/* Mask for writable bits */
+ #define	ARMV8_PMU_EVTYPE_EVENT	0xffff		/* Mask for EVENT bits */
+ 
+-#define ARMV8_PMU_EVTYPE_EVENT_SW_INCR	0	/* Software increment event */
++/*
++ * PMUv3 event types: required events
++ */
++#define ARMV8_PMUV3_PERFCTR_SW_INCR				0x00
++#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL			0x03
++#define ARMV8_PMUV3_PERFCTR_L1D_CACHE				0x04
++#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED				0x10
++#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES				0x11
++#define ARMV8_PMUV3_PERFCTR_BR_PRED				0x12
+ 
+ /*
+  * Event filters for PMUv3
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 838ccf123307..2c4df1520c45 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -30,17 +30,9 @@
+ 
+ /*
+  * ARMv8 PMUv3 Performance Events handling code.
+- * Common event types.
++ * Common event types (some are defined in asm/perf_event.h).
+  */
+ 
+-/* Required events. */
+-#define ARMV8_PMUV3_PERFCTR_SW_INCR				0x00
+-#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL			0x03
+-#define ARMV8_PMUV3_PERFCTR_L1D_CACHE				0x04
+-#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED				0x10
+-#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES				0x11
+-#define ARMV8_PMUV3_PERFCTR_BR_PRED				0x12
+-
+ /* At least one of the following is required. */
+ #define ARMV8_PMUV3_PERFCTR_INST_RETIRED			0x08
+ #define ARMV8_PMUV3_PERFCTR_INST_SPEC				0x1B
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index e51367d159d0..31c144f7339a 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -602,8 +602,14 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
+ 
+ 			idx = ARMV8_PMU_CYCLE_IDX;
+ 		} else {
+-			BUG();
++			return false;
+ 		}
++	} else if (r->CRn == 0 && r->CRm == 9) {
++		/* PMCCNTR */
++		if (pmu_access_event_counter_el0_disabled(vcpu))
++			return false;
++
++		idx = ARMV8_PMU_CYCLE_IDX;
+ 	} else if (r->CRn == 14 && (r->CRm & 12) == 8) {
+ 		/* PMEVCNTRn_EL0 */
+ 		if (pmu_access_event_counter_el0_disabled(vcpu))
+@@ -611,7 +617,7 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
+ 
+ 		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
+ 	} else {
+-		BUG();
++		return false;
+ 	}
+ 
+ 	if (!pmu_counter_idx_valid(vcpu, idx))
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 7ac8e6eaab5b..8d586cff8a41 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -226,17 +226,25 @@ static void __init configure_exceptions(void)
+ 		if (firmware_has_feature(FW_FEATURE_OPAL))
+ 			opal_configure_cores();
+ 
+-		/* Enable AIL if supported, and we are in hypervisor mode */
+-		if (early_cpu_has_feature(CPU_FTR_HVMODE) &&
+-		    early_cpu_has_feature(CPU_FTR_ARCH_207S)) {
+-			unsigned long lpcr = mfspr(SPRN_LPCR);
+-			mtspr(SPRN_LPCR, lpcr | LPCR_AIL_3);
+-		}
++		/* AIL on native is done in cpu_ready_for_interrupts() */
+ 	}
+ }
+ 
+ static void cpu_ready_for_interrupts(void)
+ {
++	/*
++	 * Enable AIL if supported, and we are in hypervisor mode. This
++	 * is called once for every processor.
++	 *
++	 * If we are not in hypervisor mode the job is done once for
++	 * the whole partition in configure_exceptions().
++	 */
++	if (early_cpu_has_feature(CPU_FTR_HVMODE) &&
++	    early_cpu_has_feature(CPU_FTR_ARCH_207S)) {
++		unsigned long lpcr = mfspr(SPRN_LPCR);
++		mtspr(SPRN_LPCR, lpcr | LPCR_AIL_3);
++	}
++
+ 	/* Set IR and DR in PACA MSR */
+ 	get_paca()->kernel_msr = MSR_KERNEL;
+ }
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index b81fe2d63e15..1e81a37c034e 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -347,7 +347,6 @@ static void amd_detect_cmp(struct cpuinfo_x86 *c)
+ #ifdef CONFIG_SMP
+ 	unsigned bits;
+ 	int cpu = smp_processor_id();
+-	unsigned int socket_id, core_complex_id;
+ 
+ 	bits = c->x86_coreid_bits;
+ 	/* Low order bits define the core id (index of core in socket) */
+@@ -365,10 +364,7 @@ static void amd_detect_cmp(struct cpuinfo_x86 *c)
+ 	 if (c->x86 != 0x17 || !cpuid_edx(0x80000006))
+ 		return;
+ 
+-	socket_id	= (c->apicid >> bits) - 1;
+-	core_complex_id	= (c->apicid & ((1 << bits) - 1)) >> 3;
+-
+-	per_cpu(cpu_llc_id, cpu) = (socket_id << 3) | core_complex_id;
++	per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;
+ #endif
+ }
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 46f74d461f3f..2fff65794f46 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -210,7 +210,18 @@ static void kvm_on_user_return(struct user_return_notifier *urn)
+ 	struct kvm_shared_msrs *locals
+ 		= container_of(urn, struct kvm_shared_msrs, urn);
+ 	struct kvm_shared_msr_values *values;
++	unsigned long flags;
+ 
++	/*
++	 * Disabling irqs at this point since the following code could be
++	 * interrupted and executed through kvm_arch_hardware_disable()
++	 */
++	local_irq_save(flags);
++	if (locals->registered) {
++		locals->registered = false;
++		user_return_notifier_unregister(urn);
++	}
++	local_irq_restore(flags);
+ 	for (slot = 0; slot < shared_msrs_global.nr; ++slot) {
+ 		values = &locals->values[slot];
+ 		if (values->host != values->curr) {
+@@ -218,8 +229,6 @@ static void kvm_on_user_return(struct user_return_notifier *urn)
+ 			values->curr = values->host;
+ 		}
+ 	}
+-	locals->registered = false;
+-	user_return_notifier_unregister(urn);
+ }
+ 
+ static void shared_msr_update(unsigned slot, u32 msr)
+@@ -3372,6 +3381,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 	};
+ 	case KVM_SET_VAPIC_ADDR: {
+ 		struct kvm_vapic_addr va;
++		int idx;
+ 
+ 		r = -EINVAL;
+ 		if (!lapic_in_kernel(vcpu))
+@@ -3379,7 +3389,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 		r = -EFAULT;
+ 		if (copy_from_user(&va, argp, sizeof va))
+ 			goto out;
++		idx = srcu_read_lock(&vcpu->kvm->srcu);
+ 		r = kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr);
++		srcu_read_unlock(&vcpu->kvm->srcu, idx);
+ 		break;
+ 	}
+ 	case KVM_X86_SETUP_MCE: {
+diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
+index ac58c1616408..555b9fa0ad43 100644
+--- a/arch/x86/purgatory/Makefile
++++ b/arch/x86/purgatory/Makefile
+@@ -16,6 +16,7 @@ KCOV_INSTRUMENT := n
+ 
+ KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes -fno-zero-initialized-in-bss -fno-builtin -ffreestanding -c -MD -Os -mcmodel=large
+ KBUILD_CFLAGS += -m$(BITS)
++KBUILD_CFLAGS += $(call cc-option,-fno-PIE)
+ 
+ $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
+ 		$(call if_changed,ld)
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index e44944f4be77..2932a5bd892f 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1027,6 +1027,8 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
+ 	TRACE_DEVICE(dev);
+ 	TRACE_SUSPEND(0);
+ 
++	dpm_wait_for_children(dev, async);
++
+ 	if (async_error)
+ 		goto Complete;
+ 
+@@ -1038,8 +1040,6 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
+ 	if (dev->power.syscore || dev->power.direct_complete)
+ 		goto Complete;
+ 
+-	dpm_wait_for_children(dev, async);
+-
+ 	if (dev->pm_domain) {
+ 		info = "noirq power domain ";
+ 		callback = pm_noirq_op(&dev->pm_domain->ops, state);
+@@ -1174,6 +1174,8 @@ static int __device_suspend_late(struct device *dev, pm_message_t state, bool as
+ 
+ 	__pm_runtime_disable(dev, false);
+ 
++	dpm_wait_for_children(dev, async);
++
+ 	if (async_error)
+ 		goto Complete;
+ 
+@@ -1185,8 +1187,6 @@ static int __device_suspend_late(struct device *dev, pm_message_t state, bool as
+ 	if (dev->power.syscore || dev->power.direct_complete)
+ 		goto Complete;
+ 
+-	dpm_wait_for_children(dev, async);
+-
+ 	if (dev->pm_domain) {
+ 		info = "late power domain ";
+ 		callback = pm_late_early_op(&dev->pm_domain->ops, state);
+diff --git a/drivers/clk/imx/clk-pllv3.c b/drivers/clk/imx/clk-pllv3.c
+index 19f9b622981a..7a6acc3e4a92 100644
+--- a/drivers/clk/imx/clk-pllv3.c
++++ b/drivers/clk/imx/clk-pllv3.c
+@@ -223,7 +223,7 @@ static unsigned long clk_pllv3_av_recalc_rate(struct clk_hw *hw,
+ 	temp64 *= mfn;
+ 	do_div(temp64, mfd);
+ 
+-	return (parent_rate * div) + (u32)temp64;
++	return parent_rate * div + (unsigned long)temp64;
+ }
+ 
+ static long clk_pllv3_av_round_rate(struct clk_hw *hw, unsigned long rate,
+@@ -247,7 +247,11 @@ static long clk_pllv3_av_round_rate(struct clk_hw *hw, unsigned long rate,
+ 	do_div(temp64, parent_rate);
+ 	mfn = temp64;
+ 
+-	return parent_rate * div + parent_rate * mfn / mfd;
++	temp64 = (u64)parent_rate;
++	temp64 *= mfn;
++	do_div(temp64, mfd);
++
++	return parent_rate * div + (unsigned long)temp64;
+ }
+ 
+ static int clk_pllv3_av_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/clk/mmp/clk-of-mmp2.c b/drivers/clk/mmp/clk-of-mmp2.c
+index 3a51fff1b0e7..9adaf48aea23 100644
+--- a/drivers/clk/mmp/clk-of-mmp2.c
++++ b/drivers/clk/mmp/clk-of-mmp2.c
+@@ -313,7 +313,7 @@ static void __init mmp2_clk_init(struct device_node *np)
+ 	}
+ 
+ 	pxa_unit->apmu_base = of_iomap(np, 1);
+-	if (!pxa_unit->mpmu_base) {
++	if (!pxa_unit->apmu_base) {
+ 		pr_err("failed to map apmu registers\n");
+ 		return;
+ 	}
+diff --git a/drivers/clk/mmp/clk-of-pxa168.c b/drivers/clk/mmp/clk-of-pxa168.c
+index 87f2317b2a00..f110c02e83cb 100644
+--- a/drivers/clk/mmp/clk-of-pxa168.c
++++ b/drivers/clk/mmp/clk-of-pxa168.c
+@@ -262,7 +262,7 @@ static void __init pxa168_clk_init(struct device_node *np)
+ 	}
+ 
+ 	pxa_unit->apmu_base = of_iomap(np, 1);
+-	if (!pxa_unit->mpmu_base) {
++	if (!pxa_unit->apmu_base) {
+ 		pr_err("failed to map apmu registers\n");
+ 		return;
+ 	}
+diff --git a/drivers/clk/mmp/clk-of-pxa910.c b/drivers/clk/mmp/clk-of-pxa910.c
+index e22a67f76d93..64d1ef49caeb 100644
+--- a/drivers/clk/mmp/clk-of-pxa910.c
++++ b/drivers/clk/mmp/clk-of-pxa910.c
+@@ -282,7 +282,7 @@ static void __init pxa910_clk_init(struct device_node *np)
+ 	}
+ 
+ 	pxa_unit->apmu_base = of_iomap(np, 1);
+-	if (!pxa_unit->mpmu_base) {
++	if (!pxa_unit->apmu_base) {
+ 		pr_err("failed to map apmu registers\n");
+ 		return;
+ 	}
+@@ -294,7 +294,7 @@ static void __init pxa910_clk_init(struct device_node *np)
+ 	}
+ 
+ 	pxa_unit->apbcp_base = of_iomap(np, 3);
+-	if (!pxa_unit->mpmu_base) {
++	if (!pxa_unit->apbcp_base) {
+ 		pr_err("failed to map apbcp registers\n");
+ 		return;
+ 	}
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index b3044219772c..2cde3796cb82 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -4542,6 +4542,15 @@ static int __init caam_algapi_init(void)
+ 		if (!aes_inst && (alg_sel == OP_ALG_ALGSEL_AES))
+ 				continue;
+ 
++		/*
++		 * Check support for AES modes not available
++		 * on LP devices.
++		 */
++		if ((cha_vid & CHA_ID_LS_AES_MASK) == CHA_ID_LS_AES_LP)
++			if ((alg->class1_alg_type & OP_ALG_AAI_MASK) ==
++			     OP_ALG_AAI_XTS)
++				continue;
++
+ 		t_alg = caam_alg_alloc(alg);
+ 		if (IS_ERR(t_alg)) {
+ 			err = PTR_ERR(t_alg);
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 02f2a5621bb0..47d08b9da60d 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -372,14 +372,15 @@ static void pca953x_gpio_set_multiple(struct gpio_chip *gc,
+ 		break;
+ 	}
+ 
+-	memcpy(reg_val, chip->reg_output, NBANK(chip));
+ 	mutex_lock(&chip->i2c_lock);
++	memcpy(reg_val, chip->reg_output, NBANK(chip));
+ 	for(bank=0; bank<NBANK(chip); bank++) {
+ 		unsigned bankmask = mask[bank / sizeof(*mask)] >>
+ 				    ((bank % sizeof(*mask)) * 8);
+ 		if(bankmask) {
+ 			unsigned bankval  = bits[bank / sizeof(*bits)] >>
+ 					    ((bank % sizeof(*bits)) * 8);
++			bankval &= bankmask;
+ 			reg_val[bank] = (reg_val[bank] & ~bankmask) | bankval;
+ 		}
+ 	}
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index b2dee1024166..15704aaf9e4e 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -2667,8 +2667,11 @@ int gpiochip_lock_as_irq(struct gpio_chip *chip, unsigned int offset)
+ 	if (IS_ERR(desc))
+ 		return PTR_ERR(desc);
+ 
+-	/* Flush direction if something changed behind our back */
+-	if (chip->get_direction) {
++	/*
++	 * If it's fast: flush the direction setting if something changed
++	 * behind our back
++	 */
++	if (!chip->can_sleep && chip->get_direction) {
+ 		int dir = chip->get_direction(chip, offset);
+ 
+ 		if (dir)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 700c56baf2de..e443073f6ece 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -492,6 +492,7 @@ struct amdgpu_bo {
+ 	u64				metadata_flags;
+ 	void				*metadata;
+ 	u32				metadata_size;
++	unsigned			prime_shared_count;
+ 	/* list of all virtual address to which this bo
+ 	 * is associated to
+ 	 */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+index 651115dcce12..c02db01f6583 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+@@ -132,7 +132,7 @@ static int amdgpu_bo_list_set(struct amdgpu_device *adev,
+ 		entry->priority = min(info[i].bo_priority,
+ 				      AMDGPU_BO_LIST_MAX_PRIORITY);
+ 		entry->tv.bo = &entry->robj->tbo;
+-		entry->tv.shared = true;
++		entry->tv.shared = !entry->robj->prime_shared_count;
+ 
+ 		if (entry->robj->prefered_domains == AMDGPU_GEM_DOMAIN_GDS)
+ 			gds_obj = entry->robj;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+index 7700dc22f243..3826d5aea0a6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+@@ -74,20 +74,36 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
++	bo->prime_shared_count = 1;
+ 	return &bo->gem_base;
+ }
+ 
+ int amdgpu_gem_prime_pin(struct drm_gem_object *obj)
+ {
+ 	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+-	int ret = 0;
++	long ret = 0;
+ 
+ 	ret = amdgpu_bo_reserve(bo, false);
+ 	if (unlikely(ret != 0))
+ 		return ret;
+ 
++	/*
++	 * Wait for all shared fences to complete before we switch to future
++	 * use of exclusive fence on this prime shared bo.
++	 */
++	ret = reservation_object_wait_timeout_rcu(bo->tbo.resv, true, false,
++						  MAX_SCHEDULE_TIMEOUT);
++	if (unlikely(ret < 0)) {
++		DRM_DEBUG_PRIME("Fence wait failed: %li\n", ret);
++		amdgpu_bo_unreserve(bo);
++		return ret;
++	}
++
+ 	/* pin buffer into GTT */
+ 	ret = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT, NULL);
++	if (likely(ret == 0))
++		bo->prime_shared_count++;
++
+ 	amdgpu_bo_unreserve(bo);
+ 	return ret;
+ }
+@@ -102,6 +118,8 @@ void amdgpu_gem_prime_unpin(struct drm_gem_object *obj)
+ 		return;
+ 
+ 	amdgpu_bo_unpin(bo);
++	if (bo->prime_shared_count)
++		bo->prime_shared_count--;
+ 	amdgpu_bo_unreserve(bo);
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c
+index 1f8af87c6294..cf2560708e03 100644
+--- a/drivers/gpu/drm/i915/intel_bios.c
++++ b/drivers/gpu/drm/i915/intel_bios.c
+@@ -1143,7 +1143,7 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv, enum port port,
+ 	if (!child)
+ 		return;
+ 
+-	aux_channel = child->raw[25];
++	aux_channel = child->common.aux_channel;
+ 	ddc_pin = child->common.ddc_pin;
+ 
+ 	is_dvi = child->common.device_type & DEVICE_TYPE_TMDS_DVI_SIGNALING;
+@@ -1673,7 +1673,8 @@ bool intel_bios_is_port_edp(struct drm_i915_private *dev_priv, enum port port)
+ 	return false;
+ }
+ 
+-bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv, enum port port)
++static bool child_dev_is_dp_dual_mode(const union child_device_config *p_child,
++				      enum port port)
+ {
+ 	static const struct {
+ 		u16 dp, hdmi;
+@@ -1687,22 +1688,35 @@ bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv, enum por
+ 		[PORT_D] = { DVO_PORT_DPD, DVO_PORT_HDMID, },
+ 		[PORT_E] = { DVO_PORT_DPE, DVO_PORT_HDMIE, },
+ 	};
+-	int i;
+ 
+ 	if (port == PORT_A || port >= ARRAY_SIZE(port_mapping))
+ 		return false;
+ 
+-	if (!dev_priv->vbt.child_dev_num)
++	if ((p_child->common.device_type & DEVICE_TYPE_DP_DUAL_MODE_BITS) !=
++	    (DEVICE_TYPE_DP_DUAL_MODE & DEVICE_TYPE_DP_DUAL_MODE_BITS))
+ 		return false;
+ 
++	if (p_child->common.dvo_port == port_mapping[port].dp)
++		return true;
++
++	/* Only accept a HDMI dvo_port as DP++ if it has an AUX channel */
++	if (p_child->common.dvo_port == port_mapping[port].hdmi &&
++	    p_child->common.aux_channel != 0)
++		return true;
++
++	return false;
++}
++
++bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv,
++				     enum port port)
++{
++	int i;
++
+ 	for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
+ 		const union child_device_config *p_child =
+ 			&dev_priv->vbt.child_dev[i];
+ 
+-		if ((p_child->common.dvo_port == port_mapping[port].dp ||
+-		     p_child->common.dvo_port == port_mapping[port].hdmi) &&
+-		    (p_child->common.device_type & DEVICE_TYPE_DP_DUAL_MODE_BITS) ==
+-		    (DEVICE_TYPE_DP_DUAL_MODE & DEVICE_TYPE_DP_DUAL_MODE_BITS))
++		if (child_dev_is_dp_dual_mode(p_child, port))
+ 			return true;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 3051182cf483..b8aeb28e14d7 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -4323,21 +4323,11 @@ static enum drm_connector_status
+ intel_dp_detect(struct drm_connector *connector, bool force)
+ {
+ 	struct intel_dp *intel_dp = intel_attached_dp(connector);
+-	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+-	struct intel_encoder *intel_encoder = &intel_dig_port->base;
+ 	enum drm_connector_status status = connector->status;
+ 
+ 	DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
+ 		      connector->base.id, connector->name);
+ 
+-	if (intel_dp->is_mst) {
+-		/* MST devices are disconnected from a monitor POV */
+-		intel_dp_unset_edid(intel_dp);
+-		if (intel_encoder->type != INTEL_OUTPUT_EDP)
+-			intel_encoder->type = INTEL_OUTPUT_DP;
+-		return connector_status_disconnected;
+-	}
+-
+ 	/* If full detect is not performed yet, do a full detect */
+ 	if (!intel_dp->detect_done)
+ 		status = intel_dp_long_pulse(intel_dp->attached_connector);
+diff --git a/drivers/gpu/drm/i915/intel_vbt_defs.h b/drivers/gpu/drm/i915/intel_vbt_defs.h
+index 68db9621f1f0..8886cab19f98 100644
+--- a/drivers/gpu/drm/i915/intel_vbt_defs.h
++++ b/drivers/gpu/drm/i915/intel_vbt_defs.h
+@@ -280,7 +280,8 @@ struct common_child_dev_config {
+ 	u8 dp_support:1;
+ 	u8 tmds_support:1;
+ 	u8 support_reserved:5;
+-	u8 not_common3[12];
++	u8 aux_channel;
++	u8 not_common3[11];
+ 	u8 iboost_level;
+ } __packed;
+ 
+diff --git a/drivers/i2c/Kconfig b/drivers/i2c/Kconfig
+index d223650a97e4..11edabf425ae 100644
+--- a/drivers/i2c/Kconfig
++++ b/drivers/i2c/Kconfig
+@@ -59,7 +59,6 @@ config I2C_CHARDEV
+ 
+ config I2C_MUX
+ 	tristate "I2C bus multiplexing support"
+-	depends on HAS_IOMEM
+ 	help
+ 	  Say Y here if you want the I2C core to support the ability to
+ 	  handle multiplexed I2C bus topologies, by presenting each
+diff --git a/drivers/i2c/muxes/Kconfig b/drivers/i2c/muxes/Kconfig
+index e280c8ecc0b5..96de9ce5669b 100644
+--- a/drivers/i2c/muxes/Kconfig
++++ b/drivers/i2c/muxes/Kconfig
+@@ -63,6 +63,7 @@ config I2C_MUX_PINCTRL
+ 
+ config I2C_MUX_REG
+ 	tristate "Register-based I2C multiplexer"
++	depends on HAS_IOMEM
+ 	help
+ 	  If you say yes to this option, support will be included for a
+ 	  register based I2C multiplexer. This driver provides access to
+diff --git a/drivers/i2c/muxes/i2c-mux-pca954x.c b/drivers/i2c/muxes/i2c-mux-pca954x.c
+index 3278ebf1cc5c..7e6f300009c5 100644
+--- a/drivers/i2c/muxes/i2c-mux-pca954x.c
++++ b/drivers/i2c/muxes/i2c-mux-pca954x.c
+@@ -247,9 +247,9 @@ static int pca954x_probe(struct i2c_client *client,
+ 				/* discard unconfigured channels */
+ 				break;
+ 			idle_disconnect_pd = pdata->modes[num].deselect_on_exit;
+-			data->deselect |= (idle_disconnect_pd
+-					   || idle_disconnect_dt) << num;
+ 		}
++		data->deselect |= (idle_disconnect_pd ||
++				   idle_disconnect_dt) << num;
+ 
+ 		ret = i2c_mux_add_adapter(muxc, force, num, class);
+ 
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index c99525512b34..71c7c4c328ef 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -80,6 +80,8 @@ static struct ib_cm {
+ 	__be32 random_id_operand;
+ 	struct list_head timewait_list;
+ 	struct workqueue_struct *wq;
++	/* Sync on cm change port state */
++	spinlock_t state_lock;
+ } cm;
+ 
+ /* Counter indexes ordered by attribute ID */
+@@ -161,6 +163,8 @@ struct cm_port {
+ 	struct ib_mad_agent *mad_agent;
+ 	struct kobject port_obj;
+ 	u8 port_num;
++	struct list_head cm_priv_prim_list;
++	struct list_head cm_priv_altr_list;
+ 	struct cm_counter_group counter_group[CM_COUNTER_GROUPS];
+ };
+ 
+@@ -241,6 +245,12 @@ struct cm_id_private {
+ 	u8 service_timeout;
+ 	u8 target_ack_delay;
+ 
++	struct list_head prim_list;
++	struct list_head altr_list;
++	/* Indicates that the send port mad is registered and av is set */
++	int prim_send_port_not_ready;
++	int altr_send_port_not_ready;
++
+ 	struct list_head work_list;
+ 	atomic_t work_count;
+ };
+@@ -259,20 +269,47 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv,
+ 	struct ib_mad_agent *mad_agent;
+ 	struct ib_mad_send_buf *m;
+ 	struct ib_ah *ah;
++	struct cm_av *av;
++	unsigned long flags, flags2;
++	int ret = 0;
+ 
++	/* don't let the port to be released till the agent is down */
++	spin_lock_irqsave(&cm.state_lock, flags2);
++	spin_lock_irqsave(&cm.lock, flags);
++	if (!cm_id_priv->prim_send_port_not_ready)
++		av = &cm_id_priv->av;
++	else if (!cm_id_priv->altr_send_port_not_ready &&
++		 (cm_id_priv->alt_av.port))
++		av = &cm_id_priv->alt_av;
++	else {
++		pr_info("%s: not valid CM id\n", __func__);
++		ret = -ENODEV;
++		spin_unlock_irqrestore(&cm.lock, flags);
++		goto out;
++	}
++	spin_unlock_irqrestore(&cm.lock, flags);
++	/* Make sure the port haven't released the mad yet */
+ 	mad_agent = cm_id_priv->av.port->mad_agent;
+-	ah = ib_create_ah(mad_agent->qp->pd, &cm_id_priv->av.ah_attr);
+-	if (IS_ERR(ah))
+-		return PTR_ERR(ah);
++	if (!mad_agent) {
++		pr_info("%s: not a valid MAD agent\n", __func__);
++		ret = -ENODEV;
++		goto out;
++	}
++	ah = ib_create_ah(mad_agent->qp->pd, &av->ah_attr);
++	if (IS_ERR(ah)) {
++		ret = PTR_ERR(ah);
++		goto out;
++	}
+ 
+ 	m = ib_create_send_mad(mad_agent, cm_id_priv->id.remote_cm_qpn,
+-			       cm_id_priv->av.pkey_index,
++			       av->pkey_index,
+ 			       0, IB_MGMT_MAD_HDR, IB_MGMT_MAD_DATA,
+ 			       GFP_ATOMIC,
+ 			       IB_MGMT_BASE_VERSION);
+ 	if (IS_ERR(m)) {
+ 		ib_destroy_ah(ah);
+-		return PTR_ERR(m);
++		ret = PTR_ERR(m);
++		goto out;
+ 	}
+ 
+ 	/* Timeout set by caller if response is expected. */
+@@ -282,7 +319,10 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv,
+ 	atomic_inc(&cm_id_priv->refcount);
+ 	m->context[0] = cm_id_priv;
+ 	*msg = m;
+-	return 0;
++
++out:
++	spin_unlock_irqrestore(&cm.state_lock, flags2);
++	return ret;
+ }
+ 
+ static int cm_alloc_response_msg(struct cm_port *port,
+@@ -352,7 +392,8 @@ static void cm_init_av_for_response(struct cm_port *port, struct ib_wc *wc,
+ 			   grh, &av->ah_attr);
+ }
+ 
+-static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av)
++static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av,
++			      struct cm_id_private *cm_id_priv)
+ {
+ 	struct cm_device *cm_dev;
+ 	struct cm_port *port = NULL;
+@@ -387,7 +428,17 @@ static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av)
+ 			     &av->ah_attr);
+ 	av->timeout = path->packet_life_time + 1;
+ 
+-	return 0;
++	spin_lock_irqsave(&cm.lock, flags);
++	if (&cm_id_priv->av == av)
++		list_add_tail(&cm_id_priv->prim_list, &port->cm_priv_prim_list);
++	else if (&cm_id_priv->alt_av == av)
++		list_add_tail(&cm_id_priv->altr_list, &port->cm_priv_altr_list);
++	else
++		ret = -EINVAL;
++
++	spin_unlock_irqrestore(&cm.lock, flags);
++
++	return ret;
+ }
+ 
+ static int cm_alloc_id(struct cm_id_private *cm_id_priv)
+@@ -677,6 +728,8 @@ struct ib_cm_id *ib_create_cm_id(struct ib_device *device,
+ 	spin_lock_init(&cm_id_priv->lock);
+ 	init_completion(&cm_id_priv->comp);
+ 	INIT_LIST_HEAD(&cm_id_priv->work_list);
++	INIT_LIST_HEAD(&cm_id_priv->prim_list);
++	INIT_LIST_HEAD(&cm_id_priv->altr_list);
+ 	atomic_set(&cm_id_priv->work_count, -1);
+ 	atomic_set(&cm_id_priv->refcount, 1);
+ 	return &cm_id_priv->id;
+@@ -892,6 +945,15 @@ retest:
+ 		break;
+ 	}
+ 
++	spin_lock_irq(&cm.lock);
++	if (!list_empty(&cm_id_priv->altr_list) &&
++	    (!cm_id_priv->altr_send_port_not_ready))
++		list_del(&cm_id_priv->altr_list);
++	if (!list_empty(&cm_id_priv->prim_list) &&
++	    (!cm_id_priv->prim_send_port_not_ready))
++		list_del(&cm_id_priv->prim_list);
++	spin_unlock_irq(&cm.lock);
++
+ 	cm_free_id(cm_id->local_id);
+ 	cm_deref_id(cm_id_priv);
+ 	wait_for_completion(&cm_id_priv->comp);
+@@ -1192,12 +1254,13 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
+ 		goto out;
+ 	}
+ 
+-	ret = cm_init_av_by_path(param->primary_path, &cm_id_priv->av);
++	ret = cm_init_av_by_path(param->primary_path, &cm_id_priv->av,
++				 cm_id_priv);
+ 	if (ret)
+ 		goto error1;
+ 	if (param->alternate_path) {
+ 		ret = cm_init_av_by_path(param->alternate_path,
+-					 &cm_id_priv->alt_av);
++					 &cm_id_priv->alt_av, cm_id_priv);
+ 		if (ret)
+ 			goto error1;
+ 	}
+@@ -1653,7 +1716,8 @@ static int cm_req_handler(struct cm_work *work)
+ 			dev_put(gid_attr.ndev);
+ 		}
+ 		work->path[0].gid_type = gid_attr.gid_type;
+-		ret = cm_init_av_by_path(&work->path[0], &cm_id_priv->av);
++		ret = cm_init_av_by_path(&work->path[0], &cm_id_priv->av,
++					 cm_id_priv);
+ 	}
+ 	if (ret) {
+ 		int err = ib_get_cached_gid(work->port->cm_dev->ib_device,
+@@ -1672,7 +1736,8 @@ static int cm_req_handler(struct cm_work *work)
+ 		goto rejected;
+ 	}
+ 	if (req_msg->alt_local_lid) {
+-		ret = cm_init_av_by_path(&work->path[1], &cm_id_priv->alt_av);
++		ret = cm_init_av_by_path(&work->path[1], &cm_id_priv->alt_av,
++					 cm_id_priv);
+ 		if (ret) {
+ 			ib_send_cm_rej(cm_id, IB_CM_REJ_INVALID_ALT_GID,
+ 				       &work->path[0].sgid,
+@@ -2727,7 +2792,8 @@ int ib_send_cm_lap(struct ib_cm_id *cm_id,
+ 		goto out;
+ 	}
+ 
+-	ret = cm_init_av_by_path(alternate_path, &cm_id_priv->alt_av);
++	ret = cm_init_av_by_path(alternate_path, &cm_id_priv->alt_av,
++				 cm_id_priv);
+ 	if (ret)
+ 		goto out;
+ 	cm_id_priv->alt_av.timeout =
+@@ -2839,7 +2905,8 @@ static int cm_lap_handler(struct cm_work *work)
+ 	cm_init_av_for_response(work->port, work->mad_recv_wc->wc,
+ 				work->mad_recv_wc->recv_buf.grh,
+ 				&cm_id_priv->av);
+-	cm_init_av_by_path(param->alternate_path, &cm_id_priv->alt_av);
++	cm_init_av_by_path(param->alternate_path, &cm_id_priv->alt_av,
++			   cm_id_priv);
+ 	ret = atomic_inc_and_test(&cm_id_priv->work_count);
+ 	if (!ret)
+ 		list_add_tail(&work->list, &cm_id_priv->work_list);
+@@ -3031,7 +3098,7 @@ int ib_send_cm_sidr_req(struct ib_cm_id *cm_id,
+ 		return -EINVAL;
+ 
+ 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+-	ret = cm_init_av_by_path(param->path, &cm_id_priv->av);
++	ret = cm_init_av_by_path(param->path, &cm_id_priv->av, cm_id_priv);
+ 	if (ret)
+ 		goto out;
+ 
+@@ -3468,7 +3535,9 @@ out:
+ static int cm_migrate(struct ib_cm_id *cm_id)
+ {
+ 	struct cm_id_private *cm_id_priv;
++	struct cm_av tmp_av;
+ 	unsigned long flags;
++	int tmp_send_port_not_ready;
+ 	int ret = 0;
+ 
+ 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+@@ -3477,7 +3546,14 @@ static int cm_migrate(struct ib_cm_id *cm_id)
+ 	    (cm_id->lap_state == IB_CM_LAP_UNINIT ||
+ 	     cm_id->lap_state == IB_CM_LAP_IDLE)) {
+ 		cm_id->lap_state = IB_CM_LAP_IDLE;
++		/* Swap address vector */
++		tmp_av = cm_id_priv->av;
+ 		cm_id_priv->av = cm_id_priv->alt_av;
++		cm_id_priv->alt_av = tmp_av;
++		/* Swap port send ready state */
++		tmp_send_port_not_ready = cm_id_priv->prim_send_port_not_ready;
++		cm_id_priv->prim_send_port_not_ready = cm_id_priv->altr_send_port_not_ready;
++		cm_id_priv->altr_send_port_not_ready = tmp_send_port_not_ready;
+ 	} else
+ 		ret = -EINVAL;
+ 	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+@@ -3888,6 +3964,9 @@ static void cm_add_one(struct ib_device *ib_device)
+ 		port->cm_dev = cm_dev;
+ 		port->port_num = i;
+ 
++		INIT_LIST_HEAD(&port->cm_priv_prim_list);
++		INIT_LIST_HEAD(&port->cm_priv_altr_list);
++
+ 		ret = cm_create_port_fs(port);
+ 		if (ret)
+ 			goto error1;
+@@ -3945,6 +4024,8 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
+ {
+ 	struct cm_device *cm_dev = client_data;
+ 	struct cm_port *port;
++	struct cm_id_private *cm_id_priv;
++	struct ib_mad_agent *cur_mad_agent;
+ 	struct ib_port_modify port_modify = {
+ 		.clr_port_cap_mask = IB_PORT_CM_SUP
+ 	};
+@@ -3968,15 +4049,27 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
+ 
+ 		port = cm_dev->port[i-1];
+ 		ib_modify_port(ib_device, port->port_num, 0, &port_modify);
++		/* Mark all the cm_id's as not valid */
++		spin_lock_irq(&cm.lock);
++		list_for_each_entry(cm_id_priv, &port->cm_priv_altr_list, altr_list)
++			cm_id_priv->altr_send_port_not_ready = 1;
++		list_for_each_entry(cm_id_priv, &port->cm_priv_prim_list, prim_list)
++			cm_id_priv->prim_send_port_not_ready = 1;
++		spin_unlock_irq(&cm.lock);
+ 		/*
+ 		 * We flush the queue here after the going_down set, this
+ 		 * verify that no new works will be queued in the recv handler,
+ 		 * after that we can call the unregister_mad_agent
+ 		 */
+ 		flush_workqueue(cm.wq);
+-		ib_unregister_mad_agent(port->mad_agent);
++		spin_lock_irq(&cm.state_lock);
++		cur_mad_agent = port->mad_agent;
++		port->mad_agent = NULL;
++		spin_unlock_irq(&cm.state_lock);
++		ib_unregister_mad_agent(cur_mad_agent);
+ 		cm_remove_port_fs(port);
+ 	}
++
+ 	device_unregister(cm_dev->device);
+ 	kfree(cm_dev);
+ }
+@@ -3989,6 +4082,7 @@ static int __init ib_cm_init(void)
+ 	INIT_LIST_HEAD(&cm.device_list);
+ 	rwlock_init(&cm.device_lock);
+ 	spin_lock_init(&cm.lock);
++	spin_lock_init(&cm.state_lock);
+ 	cm.listen_service_table = RB_ROOT;
+ 	cm.listen_service_id = be64_to_cpu(IB_CM_ASSIGN_SERVICE_ID);
+ 	cm.remote_id_table = RB_ROOT;
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index c68746ce6624..bdab61d9103c 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -174,7 +174,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
+ 
+ 	cur_base = addr & PAGE_MASK;
+ 
+-	if (npages == 0) {
++	if (npages == 0 || npages > UINT_MAX) {
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 0012fa58c105..44b1104eb168 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -262,12 +262,9 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
+ 			container_of(uobj, struct ib_uqp_object, uevent.uobject);
+ 
+ 		idr_remove_uobj(&ib_uverbs_qp_idr, uobj);
+-		if (qp != qp->real_qp) {
+-			ib_close_qp(qp);
+-		} else {
++		if (qp == qp->real_qp)
+ 			ib_uverbs_detach_umcast(qp, uqp);
+-			ib_destroy_qp(qp);
+-		}
++		ib_destroy_qp(qp);
+ 		ib_uverbs_release_uevent(file, &uqp->uevent);
+ 		kfree(uqp);
+ 	}
+diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c
+index bcf76c33726b..e3629983f1bf 100644
+--- a/drivers/infiniband/hw/hfi1/rc.c
++++ b/drivers/infiniband/hw/hfi1/rc.c
+@@ -87,7 +87,7 @@ void hfi1_add_rnr_timer(struct rvt_qp *qp, u32 to)
+ 	struct hfi1_qp_priv *priv = qp->priv;
+ 
+ 	qp->s_flags |= RVT_S_WAIT_RNR;
+-	qp->s_timer.expires = jiffies + usecs_to_jiffies(to);
++	priv->s_rnr_timer.expires = jiffies + usecs_to_jiffies(to);
+ 	add_timer(&priv->s_rnr_timer);
+ }
+ 
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index 1694037d1eee..8f59a4fded93 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -1152,7 +1152,7 @@ static int pin_vector_pages(struct user_sdma_request *req,
+ 	rb_node = hfi1_mmu_rb_extract(pq->handler,
+ 				      (unsigned long)iovec->iov.iov_base,
+ 				      iovec->iov.iov_len);
+-	if (rb_node && !IS_ERR(rb_node))
++	if (rb_node)
+ 		node = container_of(rb_node, struct sdma_mmu_node, rb);
+ 	else
+ 		rb_node = NULL;
+diff --git a/drivers/infiniband/hw/mlx4/ah.c b/drivers/infiniband/hw/mlx4/ah.c
+index 5fc623362731..b9bf0759f10a 100644
+--- a/drivers/infiniband/hw/mlx4/ah.c
++++ b/drivers/infiniband/hw/mlx4/ah.c
+@@ -102,7 +102,10 @@ static struct ib_ah *create_iboe_ah(struct ib_pd *pd, struct ib_ah_attr *ah_attr
+ 	if (vlan_tag < 0x1000)
+ 		vlan_tag |= (ah_attr->sl & 7) << 13;
+ 	ah->av.eth.port_pd = cpu_to_be32(to_mpd(pd)->pdn | (ah_attr->port_num << 24));
+-	ah->av.eth.gid_index = mlx4_ib_gid_index_to_real_index(ibdev, ah_attr->port_num, ah_attr->grh.sgid_index);
++	ret = mlx4_ib_gid_index_to_real_index(ibdev, ah_attr->port_num, ah_attr->grh.sgid_index);
++	if (ret < 0)
++		return ERR_PTR(ret);
++	ah->av.eth.gid_index = ret;
+ 	ah->av.eth.vlan = cpu_to_be16(vlan_tag);
+ 	ah->av.eth.hop_limit = ah_attr->grh.hop_limit;
+ 	if (ah_attr->static_rate) {
+diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c
+index 5df63dacaaa3..efb6414cc5e4 100644
+--- a/drivers/infiniband/hw/mlx4/cq.c
++++ b/drivers/infiniband/hw/mlx4/cq.c
+@@ -253,11 +253,14 @@ struct ib_cq *mlx4_ib_create_cq(struct ib_device *ibdev,
+ 	if (context)
+ 		if (ib_copy_to_udata(udata, &cq->mcq.cqn, sizeof (__u32))) {
+ 			err = -EFAULT;
+-			goto err_dbmap;
++			goto err_cq_free;
+ 		}
+ 
+ 	return &cq->ibcq;
+ 
++err_cq_free:
++	mlx4_cq_free(dev->dev, &cq->mcq);
++
+ err_dbmap:
+ 	if (context)
+ 		mlx4_ib_db_unmap_user(to_mucontext(context), &cq->db);
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index e4fac9292e4a..ebe43cb7d06b 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -917,8 +917,7 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev,
+ 		if (err)
+ 			goto err_create;
+ 	} else {
+-		/* for now choose 64 bytes till we have a proper interface */
+-		cqe_size = 64;
++		cqe_size = cache_line_size() == 128 ? 128 : 64;
+ 		err = create_cq_kernel(dev, cq, entries, cqe_size, &cqb,
+ 				       &index, &inlen);
+ 		if (err)
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index bff8707a2f1f..19f8820b4e92 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2100,14 +2100,14 @@ static void mlx5_ib_event(struct mlx5_core_dev *dev, void *context,
+ {
+ 	struct mlx5_ib_dev *ibdev = (struct mlx5_ib_dev *)context;
+ 	struct ib_event ibev;
+-
++	bool fatal = false;
+ 	u8 port = 0;
+ 
+ 	switch (event) {
+ 	case MLX5_DEV_EVENT_SYS_ERROR:
+-		ibdev->ib_active = false;
+ 		ibev.event = IB_EVENT_DEVICE_FATAL;
+ 		mlx5_ib_handle_internal_error(ibdev);
++		fatal = true;
+ 		break;
+ 
+ 	case MLX5_DEV_EVENT_PORT_UP:
+@@ -2154,6 +2154,9 @@ static void mlx5_ib_event(struct mlx5_core_dev *dev, void *context,
+ 
+ 	if (ibdev->ib_active)
+ 		ib_dispatch_event(&ibev);
++
++	if (fatal)
++		ibdev->ib_active = false;
+ }
+ 
+ static void get_ext_port_caps(struct mlx5_ib_dev *dev)
+@@ -2835,7 +2838,7 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
+ 	}
+ 	err = init_node_data(dev);
+ 	if (err)
+-		goto err_dealloc;
++		goto err_free_port;
+ 
+ 	mutex_init(&dev->flow_db.lock);
+ 	mutex_init(&dev->cap_mask_mutex);
+@@ -2845,7 +2848,7 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
+ 	if (ll == IB_LINK_LAYER_ETHERNET) {
+ 		err = mlx5_enable_roce(dev);
+ 		if (err)
+-			goto err_dealloc;
++			goto err_free_port;
+ 	}
+ 
+ 	err = create_dev_resources(&dev->devr);
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index affc3f6598ca..19d590d39484 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -2037,8 +2037,8 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
+ 
+ 		mlx5_ib_dbg(dev, "ib qpnum 0x%x, mlx qpn 0x%x, rcqn 0x%x, scqn 0x%x\n",
+ 			    qp->ibqp.qp_num, qp->trans_qp.base.mqp.qpn,
+-			    to_mcq(init_attr->recv_cq)->mcq.cqn,
+-			    to_mcq(init_attr->send_cq)->mcq.cqn);
++			    init_attr->recv_cq ? to_mcq(init_attr->recv_cq)->mcq.cqn : -1,
++			    init_attr->send_cq ? to_mcq(init_attr->send_cq)->mcq.cqn : -1);
+ 
+ 		qp->trans_qp.xrcdn = xrcdn;
+ 
+@@ -4702,6 +4702,14 @@ struct ib_rwq_ind_table *mlx5_ib_create_rwq_ind_table(struct ib_device *device,
+ 				 udata->inlen))
+ 		return ERR_PTR(-EOPNOTSUPP);
+ 
++	if (init_attr->log_ind_tbl_size >
++	    MLX5_CAP_GEN(dev->mdev, log_max_rqt_size)) {
++		mlx5_ib_dbg(dev, "log_ind_tbl_size = %d is bigger than supported = %d\n",
++			    init_attr->log_ind_tbl_size,
++			    MLX5_CAP_GEN(dev->mdev, log_max_rqt_size));
++		return ERR_PTR(-EINVAL);
++	}
++
+ 	min_resp_len = offsetof(typeof(resp), reserved) + sizeof(resp.reserved);
+ 	if (udata->outlen && udata->outlen < min_resp_len)
+ 		return ERR_PTR(-EINVAL);
+diff --git a/drivers/infiniband/sw/rdmavt/dma.c b/drivers/infiniband/sw/rdmavt/dma.c
+index 33076a5eee2f..04ebbb576385 100644
+--- a/drivers/infiniband/sw/rdmavt/dma.c
++++ b/drivers/infiniband/sw/rdmavt/dma.c
+@@ -90,9 +90,6 @@ static u64 rvt_dma_map_page(struct ib_device *dev, struct page *page,
+ 	if (WARN_ON(!valid_dma_direction(direction)))
+ 		return BAD_DMA_ADDRESS;
+ 
+-	if (offset + size > PAGE_SIZE)
+-		return BAD_DMA_ADDRESS;
+-
+ 	addr = (u64)page_address(page);
+ 	if (addr)
+ 		addr += offset;
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index eedf2f1cafdf..7f5d7358c99e 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -243,10 +243,8 @@ static struct socket *rxe_setup_udp_tunnel(struct net *net, __be16 port,
+ {
+ 	int err;
+ 	struct socket *sock;
+-	struct udp_port_cfg udp_cfg;
+-	struct udp_tunnel_sock_cfg tnl_cfg;
+-
+-	memset(&udp_cfg, 0, sizeof(udp_cfg));
++	struct udp_port_cfg udp_cfg = {0};
++	struct udp_tunnel_sock_cfg tnl_cfg = {0};
+ 
+ 	if (ipv6) {
+ 		udp_cfg.family = AF_INET6;
+@@ -264,10 +262,8 @@ static struct socket *rxe_setup_udp_tunnel(struct net *net, __be16 port,
+ 		return ERR_PTR(err);
+ 	}
+ 
+-	tnl_cfg.sk_user_data = NULL;
+ 	tnl_cfg.encap_type = 1;
+ 	tnl_cfg.encap_rcv = rxe_udp_encap_recv;
+-	tnl_cfg.encap_destroy = NULL;
+ 
+ 	/* Setup UDP tunnel */
+ 	setup_udp_tunnel_sock(net, sock, &tnl_cfg);
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 22ba24f2a2c1..f724a7ef9b67 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -522,6 +522,7 @@ static void rxe_qp_reset(struct rxe_qp *qp)
+ 	if (qp->sq.queue) {
+ 		__rxe_do_task(&qp->comp.task);
+ 		__rxe_do_task(&qp->req.task);
++		rxe_queue_reset(qp->sq.queue);
+ 	}
+ 
+ 	/* cleanup attributes */
+@@ -573,6 +574,7 @@ void rxe_qp_error(struct rxe_qp *qp)
+ {
+ 	qp->req.state = QP_STATE_ERROR;
+ 	qp->resp.state = QP_STATE_ERROR;
++	qp->attr.qp_state = IB_QPS_ERR;
+ 
+ 	/* drain work and packet queues */
+ 	rxe_run_task(&qp->resp.task, 1);
+diff --git a/drivers/infiniband/sw/rxe/rxe_queue.c b/drivers/infiniband/sw/rxe/rxe_queue.c
+index 08274254eb88..d14bf496d62d 100644
+--- a/drivers/infiniband/sw/rxe/rxe_queue.c
++++ b/drivers/infiniband/sw/rxe/rxe_queue.c
+@@ -84,6 +84,15 @@ err1:
+ 	return -EINVAL;
+ }
+ 
++inline void rxe_queue_reset(struct rxe_queue *q)
++{
++	/* queue is comprised from header and the memory
++	 * of the actual queue. See "struct rxe_queue_buf" in rxe_queue.h
++	 * reset only the queue itself and not the management header
++	 */
++	memset(q->buf->data, 0, q->buf_size - sizeof(struct rxe_queue_buf));
++}
++
+ struct rxe_queue *rxe_queue_init(struct rxe_dev *rxe,
+ 				 int *num_elem,
+ 				 unsigned int elem_size)
+diff --git a/drivers/infiniband/sw/rxe/rxe_queue.h b/drivers/infiniband/sw/rxe/rxe_queue.h
+index 239fd609c31e..8c8641c87817 100644
+--- a/drivers/infiniband/sw/rxe/rxe_queue.h
++++ b/drivers/infiniband/sw/rxe/rxe_queue.h
+@@ -84,6 +84,8 @@ int do_mmap_info(struct rxe_dev *rxe,
+ 		 size_t buf_size,
+ 		 struct rxe_mmap_info **ip_p);
+ 
++void rxe_queue_reset(struct rxe_queue *q);
++
+ struct rxe_queue *rxe_queue_init(struct rxe_dev *rxe,
+ 				 int *num_elem,
+ 				 unsigned int elem_size);
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 13a848a518e8..43bb16649e44 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -695,7 +695,8 @@ next_wqe:
+ 						       qp->req.wqe_index);
+ 			wqe->state = wqe_state_done;
+ 			wqe->status = IB_WC_SUCCESS;
+-			goto complete;
++			__rxe_do_task(&qp->comp.task);
++			return 0;
+ 		}
+ 		payload = mtu;
+ 	}
+@@ -744,13 +745,17 @@ err:
+ 	wqe->status = IB_WC_LOC_PROT_ERR;
+ 	wqe->state = wqe_state_error;
+ 
+-complete:
+-	if (qp_type(qp) != IB_QPT_RC) {
+-		while (rxe_completer(qp) == 0)
+-			;
+-	}
+-
+-	return 0;
++	/*
++	 * IBA Spec. Section 10.7.3.1 SIGNALED COMPLETIONS
++	 * ---------8<---------8<-------------
++	 * ...Note that if a completion error occurs, a Work Completion
++	 * will always be generated, even if the signaling
++	 * indicator requests an Unsignaled Completion.
++	 * ---------8<---------8<-------------
++	 */
++	wqe->wr.send_flags |= IB_SEND_SIGNALED;
++	__rxe_do_task(&qp->comp.task);
++	return -EAGAIN;
+ 
+ exit:
+ 	return -EAGAIN;
+diff --git a/drivers/mfd/intel-lpss.c b/drivers/mfd/intel-lpss.c
+index 41b113875d64..70c646b0097d 100644
+--- a/drivers/mfd/intel-lpss.c
++++ b/drivers/mfd/intel-lpss.c
+@@ -502,9 +502,6 @@ int intel_lpss_suspend(struct device *dev)
+ 	for (i = 0; i < LPSS_PRIV_REG_COUNT; i++)
+ 		lpss->priv_ctx[i] = readl(lpss->priv + i * 4);
+ 
+-	/* Put the device into reset state */
+-	writel(0, lpss->priv + LPSS_PRIV_RESETS);
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(intel_lpss_suspend);
+diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
+index 3ac486a597f3..c57e407020f1 100644
+--- a/drivers/mfd/mfd-core.c
++++ b/drivers/mfd/mfd-core.c
+@@ -399,6 +399,8 @@ int mfd_clone_cell(const char *cell, const char **clones, size_t n_clones)
+ 					clones[i]);
+ 	}
+ 
++	put_device(dev);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(mfd_clone_cell);
+diff --git a/drivers/mfd/stmpe.c b/drivers/mfd/stmpe.c
+index 94c7cc02fdab..00dd7ff70e01 100644
+--- a/drivers/mfd/stmpe.c
++++ b/drivers/mfd/stmpe.c
+@@ -761,6 +761,8 @@ static int stmpe1801_reset(struct stmpe *stmpe)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	msleep(10);
++
+ 	timeout = jiffies + msecs_to_jiffies(100);
+ 	while (time_before(jiffies, timeout)) {
+ 		ret = __stmpe_reg_read(stmpe, STMPE1801_REG_SYS_CTRL);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 1b5f531eeb25..bf3fd34924bd 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2010,23 +2010,33 @@ static struct virtio_device_id id_table[] = {
+ 	{ 0 },
+ };
+ 
++#define VIRTNET_FEATURES \
++	VIRTIO_NET_F_CSUM, VIRTIO_NET_F_GUEST_CSUM, \
++	VIRTIO_NET_F_MAC, \
++	VIRTIO_NET_F_HOST_TSO4, VIRTIO_NET_F_HOST_UFO, VIRTIO_NET_F_HOST_TSO6, \
++	VIRTIO_NET_F_HOST_ECN, VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6, \
++	VIRTIO_NET_F_GUEST_ECN, VIRTIO_NET_F_GUEST_UFO, \
++	VIRTIO_NET_F_MRG_RXBUF, VIRTIO_NET_F_STATUS, VIRTIO_NET_F_CTRL_VQ, \
++	VIRTIO_NET_F_CTRL_RX, VIRTIO_NET_F_CTRL_VLAN, \
++	VIRTIO_NET_F_GUEST_ANNOUNCE, VIRTIO_NET_F_MQ, \
++	VIRTIO_NET_F_CTRL_MAC_ADDR, \
++	VIRTIO_NET_F_MTU
++
+ static unsigned int features[] = {
+-	VIRTIO_NET_F_CSUM, VIRTIO_NET_F_GUEST_CSUM,
+-	VIRTIO_NET_F_GSO, VIRTIO_NET_F_MAC,
+-	VIRTIO_NET_F_HOST_TSO4, VIRTIO_NET_F_HOST_UFO, VIRTIO_NET_F_HOST_TSO6,
+-	VIRTIO_NET_F_HOST_ECN, VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6,
+-	VIRTIO_NET_F_GUEST_ECN, VIRTIO_NET_F_GUEST_UFO,
+-	VIRTIO_NET_F_MRG_RXBUF, VIRTIO_NET_F_STATUS, VIRTIO_NET_F_CTRL_VQ,
+-	VIRTIO_NET_F_CTRL_RX, VIRTIO_NET_F_CTRL_VLAN,
+-	VIRTIO_NET_F_GUEST_ANNOUNCE, VIRTIO_NET_F_MQ,
+-	VIRTIO_NET_F_CTRL_MAC_ADDR,
++	VIRTNET_FEATURES,
++};
++
++static unsigned int features_legacy[] = {
++	VIRTNET_FEATURES,
++	VIRTIO_NET_F_GSO,
+ 	VIRTIO_F_ANY_LAYOUT,
+-	VIRTIO_NET_F_MTU,
+ };
+ 
+ static struct virtio_driver virtio_net_driver = {
+ 	.feature_table = features,
+ 	.feature_table_size = ARRAY_SIZE(features),
++	.feature_table_legacy = features_legacy,
++	.feature_table_size_legacy = ARRAY_SIZE(features_legacy),
+ 	.driver.name =	KBUILD_MODNAME,
+ 	.driver.owner =	THIS_MODULE,
+ 	.id_table =	id_table,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 4fdc3dad3e85..ea67ae9c87a0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -1087,6 +1087,15 @@ iwl_mvm_netdetect_config(struct iwl_mvm *mvm,
+ 		ret = iwl_mvm_switch_to_d3(mvm);
+ 		if (ret)
+ 			return ret;
++	} else {
++		/* In theory, we wouldn't have to stop a running sched
++		 * scan in order to start another one (for
++		 * net-detect).  But in practice this doesn't seem to
++		 * work properly, so stop any running sched_scan now.
++		 */
++		ret = iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_SCHED, true);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	/* rfkill release can be either for wowlan or netdetect */
+@@ -2088,6 +2097,16 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test)
+ 	iwl_mvm_update_changed_regdom(mvm);
+ 
+ 	if (mvm->net_detect) {
++		/* If this is a non-unified image, we restart the FW,
++		 * so no need to stop the netdetect scan.  If that
++		 * fails, continue and try to get the wake-up reasons,
++		 * but trigger a HW restart by keeping a failure code
++		 * in ret.
++		 */
++		if (unified_image)
++			ret = iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_NETDETECT,
++						false);
++
+ 		iwl_mvm_query_netdetect_reasons(mvm, vif);
+ 		/* has unlocked the mutex, so skip that */
+ 		goto out;
+@@ -2271,7 +2290,8 @@ static void iwl_mvm_d3_test_disconn_work_iter(void *_data, u8 *mac,
+ static int iwl_mvm_d3_test_release(struct inode *inode, struct file *file)
+ {
+ 	struct iwl_mvm *mvm = inode->i_private;
+-	int remaining_time = 10;
++	bool unified_image = fw_has_capa(&mvm->fw->ucode_capa,
++					 IWL_UCODE_TLV_CAPA_CNSLDTD_D3_D0_IMG);
+ 
+ 	mvm->d3_test_active = false;
+ 
+@@ -2282,17 +2302,21 @@ static int iwl_mvm_d3_test_release(struct inode *inode, struct file *file)
+ 	mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED;
+ 
+ 	iwl_abort_notification_waits(&mvm->notif_wait);
+-	ieee80211_restart_hw(mvm->hw);
++	if (!unified_image) {
++		int remaining_time = 10;
+ 
+-	/* wait for restart and disconnect all interfaces */
+-	while (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
+-	       remaining_time > 0) {
+-		remaining_time--;
+-		msleep(1000);
+-	}
++		ieee80211_restart_hw(mvm->hw);
+ 
+-	if (remaining_time == 0)
+-		IWL_ERR(mvm, "Timed out waiting for HW restart to finish!\n");
++		/* wait for restart and disconnect all interfaces */
++		while (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
++		       remaining_time > 0) {
++			remaining_time--;
++			msleep(1000);
++		}
++
++		if (remaining_time == 0)
++			IWL_ERR(mvm, "Timed out waiting for HW restart!\n");
++	}
+ 
+ 	ieee80211_iterate_active_interfaces_atomic(
+ 		mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 5dd77e336617..90a1f4a06ba1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -4097,7 +4097,6 @@ void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,
+ 				     struct iwl_mvm_internal_rxq_notif *notif,
+ 				     u32 size)
+ {
+-	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(notif_waitq);
+ 	u32 qmask = BIT(mvm->trans->num_rx_queues) - 1;
+ 	int ret;
+ 
+@@ -4119,7 +4118,7 @@ void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,
+ 	}
+ 
+ 	if (notif->sync)
+-		ret = wait_event_timeout(notif_waitq,
++		ret = wait_event_timeout(mvm->rx_sync_waitq,
+ 					 atomic_read(&mvm->queue_sync_counter) == 0,
+ 					 HZ);
+ 	WARN_ON_ONCE(!ret);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index 6a615bb73042..e9cb970139c7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -932,6 +932,7 @@ struct iwl_mvm {
+ 	/* sync d0i3_tx queue and IWL_MVM_STATUS_IN_D0I3 status flag */
+ 	spinlock_t d0i3_tx_lock;
+ 	wait_queue_head_t d0i3_exit_waitq;
++	wait_queue_head_t rx_sync_waitq;
+ 
+ 	/* BT-Coex */
+ 	struct iwl_bt_coex_profile_notif last_bt_notif;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 55d9096da68c..30bbdec97d03 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -618,6 +618,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
+ 	spin_lock_init(&mvm->refs_lock);
+ 	skb_queue_head_init(&mvm->d0i3_tx);
+ 	init_waitqueue_head(&mvm->d0i3_exit_waitq);
++	init_waitqueue_head(&mvm->rx_sync_waitq);
+ 
+ 	atomic_set(&mvm->queue_sync_counter, 0);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index afb7eb60e454..2b994be10b42 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -545,7 +545,8 @@ void iwl_mvm_rx_queue_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb,
+ 				  "Received expired RX queue sync message\n");
+ 			return;
+ 		}
+-		atomic_dec(&mvm->queue_sync_counter);
++		if (!atomic_dec_return(&mvm->queue_sync_counter))
++			wake_up(&mvm->rx_sync_waitq);
+ 	}
+ 
+ 	switch (internal_notif->type) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index dac120f8861b..3707ec60b575 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1185,6 +1185,9 @@ static int iwl_mvm_num_scans(struct iwl_mvm *mvm)
+ 
+ static int iwl_mvm_check_running_scans(struct iwl_mvm *mvm, int type)
+ {
++	bool unified_image = fw_has_capa(&mvm->fw->ucode_capa,
++					 IWL_UCODE_TLV_CAPA_CNSLDTD_D3_D0_IMG);
++
+ 	/* This looks a bit arbitrary, but the idea is that if we run
+ 	 * out of possible simultaneous scans and the userspace is
+ 	 * trying to run a scan type that is already running, we
+@@ -1211,12 +1214,30 @@ static int iwl_mvm_check_running_scans(struct iwl_mvm *mvm, int type)
+ 			return -EBUSY;
+ 		return iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_REGULAR, true);
+ 	case IWL_MVM_SCAN_NETDETECT:
+-		/* No need to stop anything for net-detect since the
+-		 * firmware is restarted anyway.  This way, any sched
+-		 * scans that were running will be restarted when we
+-		 * resume.
+-		*/
+-		return 0;
++		/* For non-unified images, there's no need to stop
++		 * anything for net-detect since the firmware is
++		 * restarted anyway.  This way, any sched scans that
++		 * were running will be restarted when we resume.
++		 */
++		if (!unified_image)
++			return 0;
++
++		/* If this is a unified image and we ran out of scans,
++		 * we need to stop something.  Prefer stopping regular
++		 * scans, because the results are useless at this
++		 * point, and we should be able to keep running
++		 * another scheduled scan while suspended.
++		 */
++		if (mvm->scan_status & IWL_MVM_SCAN_REGULAR_MASK)
++			return iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_REGULAR,
++						 true);
++		if (mvm->scan_status & IWL_MVM_SCAN_SCHED_MASK)
++			return iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_SCHED,
++						 true);
++
++		/* fall through, something is wrong if no scan was
++		 * running but we ran out of scans.
++		 */
+ 	default:
+ 		WARN_ON(1);
+ 		break;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 78cf9a7f3eac..13842ca124ab 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -526,48 +526,64 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ MODULE_DEVICE_TABLE(pci, iwl_hw_card_ids);
+ 
+ #ifdef CONFIG_ACPI
+-#define SPL_METHOD		"SPLC"
+-#define SPL_DOMAINTYPE_MODULE	BIT(0)
+-#define SPL_DOMAINTYPE_WIFI	BIT(1)
+-#define SPL_DOMAINTYPE_WIGIG	BIT(2)
+-#define SPL_DOMAINTYPE_RFEM	BIT(3)
++#define ACPI_SPLC_METHOD	"SPLC"
++#define ACPI_SPLC_DOMAIN_WIFI	(0x07)
+ 
+-static u64 splx_get_pwr_limit(struct iwl_trans *trans, union acpi_object *splx)
++static u64 splc_get_pwr_limit(struct iwl_trans *trans, union acpi_object *splc)
+ {
+-	union acpi_object *limits, *domain_type, *power_limit;
+-
+-	if (splx->type != ACPI_TYPE_PACKAGE ||
+-	    splx->package.count != 2 ||
+-	    splx->package.elements[0].type != ACPI_TYPE_INTEGER ||
+-	    splx->package.elements[0].integer.value != 0) {
+-		IWL_ERR(trans, "Unsupported splx structure\n");
++	union acpi_object *data_pkg, *dflt_pwr_limit;
++	int i;
++
++	/* We need at least two elements, one for the revision and one
++	 * for the data itself.  Also check that the revision is
++	 * supported (currently only revision 0).
++	*/
++	if (splc->type != ACPI_TYPE_PACKAGE ||
++	    splc->package.count < 2 ||
++	    splc->package.elements[0].type != ACPI_TYPE_INTEGER ||
++	    splc->package.elements[0].integer.value != 0) {
++		IWL_DEBUG_INFO(trans,
++			       "Unsupported structure returned by the SPLC method.  Ignoring.\n");
+ 		return 0;
+ 	}
+ 
+-	limits = &splx->package.elements[1];
+-	if (limits->type != ACPI_TYPE_PACKAGE ||
+-	    limits->package.count < 2 ||
+-	    limits->package.elements[0].type != ACPI_TYPE_INTEGER ||
+-	    limits->package.elements[1].type != ACPI_TYPE_INTEGER) {
+-		IWL_ERR(trans, "Invalid limits element\n");
+-		return 0;
++	/* loop through all the packages to find the one for WiFi */
++	for (i = 1; i < splc->package.count; i++) {
++		union acpi_object *domain;
++
++		data_pkg = &splc->package.elements[i];
++
++		/* Skip anything that is not a package with the right
++		 * amount of elements (i.e. at least 2 integers).
++		 */
++		if (data_pkg->type != ACPI_TYPE_PACKAGE ||
++		    data_pkg->package.count < 2 ||
++		    data_pkg->package.elements[0].type != ACPI_TYPE_INTEGER ||
++		    data_pkg->package.elements[1].type != ACPI_TYPE_INTEGER)
++			continue;
++
++		domain = &data_pkg->package.elements[0];
++		if (domain->integer.value == ACPI_SPLC_DOMAIN_WIFI)
++			break;
++
++		data_pkg = NULL;
+ 	}
+ 
+-	domain_type = &limits->package.elements[0];
+-	power_limit = &limits->package.elements[1];
+-	if (!(domain_type->integer.value & SPL_DOMAINTYPE_WIFI)) {
+-		IWL_DEBUG_INFO(trans, "WiFi power is not limited\n");
++	if (!data_pkg) {
++		IWL_DEBUG_INFO(trans,
++			       "No element for the WiFi domain returned by the SPLC method.\n");
+ 		return 0;
+ 	}
+ 
+-	return power_limit->integer.value;
++	dflt_pwr_limit = &data_pkg->package.elements[1];
++	return dflt_pwr_limit->integer.value;
+ }
+ 
+ static void set_dflt_pwr_limit(struct iwl_trans *trans, struct pci_dev *pdev)
+ {
+ 	acpi_handle pxsx_handle;
+ 	acpi_handle handle;
+-	struct acpi_buffer splx = {ACPI_ALLOCATE_BUFFER, NULL};
++	struct acpi_buffer splc = {ACPI_ALLOCATE_BUFFER, NULL};
+ 	acpi_status status;
+ 
+ 	pxsx_handle = ACPI_HANDLE(&pdev->dev);
+@@ -578,23 +594,24 @@ static void set_dflt_pwr_limit(struct iwl_trans *trans, struct pci_dev *pdev)
+ 	}
+ 
+ 	/* Get the method's handle */
+-	status = acpi_get_handle(pxsx_handle, (acpi_string)SPL_METHOD, &handle);
++	status = acpi_get_handle(pxsx_handle, (acpi_string)ACPI_SPLC_METHOD,
++				 &handle);
+ 	if (ACPI_FAILURE(status)) {
+-		IWL_DEBUG_INFO(trans, "SPL method not found\n");
++		IWL_DEBUG_INFO(trans, "SPLC method not found\n");
+ 		return;
+ 	}
+ 
+ 	/* Call SPLC with no arguments */
+-	status = acpi_evaluate_object(handle, NULL, NULL, &splx);
++	status = acpi_evaluate_object(handle, NULL, NULL, &splc);
+ 	if (ACPI_FAILURE(status)) {
+ 		IWL_ERR(trans, "SPLC invocation failed (0x%x)\n", status);
+ 		return;
+ 	}
+ 
+-	trans->dflt_pwr_limit = splx_get_pwr_limit(trans, splx.pointer);
++	trans->dflt_pwr_limit = splc_get_pwr_limit(trans, splc.pointer);
+ 	IWL_DEBUG_INFO(trans, "Default power limit set to %lld\n",
+ 		       trans->dflt_pwr_limit);
+-	kfree(splx.pointer);
++	kfree(splc.pointer);
+ }
+ 
+ #else /* CONFIG_ACPI */
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index 18650dccdb58..478bba527977 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -522,6 +522,7 @@ error:
+ static int iwl_pcie_txq_init(struct iwl_trans *trans, struct iwl_txq *txq,
+ 			      int slots_num, u32 txq_id)
+ {
++	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	int ret;
+ 
+ 	txq->need_update = false;
+@@ -536,6 +537,13 @@ static int iwl_pcie_txq_init(struct iwl_trans *trans, struct iwl_txq *txq,
+ 		return ret;
+ 
+ 	spin_lock_init(&txq->lock);
++
++	if (txq_id == trans_pcie->cmd_queue) {
++		static struct lock_class_key iwl_pcie_cmd_queue_lock_class;
++
++		lockdep_set_class(&txq->lock, &iwl_pcie_cmd_queue_lock_class);
++	}
++
+ 	__skb_queue_head_init(&txq->overflow_q);
+ 
+ 	/*
+diff --git a/drivers/rtc/rtc-omap.c b/drivers/rtc/rtc-omap.c
+index ec2e9c5fb993..22394fe30579 100644
+--- a/drivers/rtc/rtc-omap.c
++++ b/drivers/rtc/rtc-omap.c
+@@ -109,6 +109,7 @@
+ /* OMAP_RTC_OSC_REG bit fields: */
+ #define OMAP_RTC_OSC_32KCLK_EN		BIT(6)
+ #define OMAP_RTC_OSC_SEL_32KCLK_SRC	BIT(3)
++#define OMAP_RTC_OSC_OSC32K_GZ_DISABLE	BIT(4)
+ 
+ /* OMAP_RTC_IRQWAKEEN bit fields: */
+ #define OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN	BIT(1)
+@@ -646,8 +647,9 @@ static int omap_rtc_probe(struct platform_device *pdev)
+ 	 */
+ 	if (rtc->has_ext_clk) {
+ 		reg = rtc_read(rtc, OMAP_RTC_OSC_REG);
+-		rtc_write(rtc, OMAP_RTC_OSC_REG,
+-			  reg | OMAP_RTC_OSC_SEL_32KCLK_SRC);
++		reg &= ~OMAP_RTC_OSC_OSC32K_GZ_DISABLE;
++		reg |= OMAP_RTC_OSC_32KCLK_EN | OMAP_RTC_OSC_SEL_32KCLK_SRC;
++		rtc_writel(rtc, OMAP_RTC_OSC_REG, reg);
+ 	}
+ 
+ 	rtc->type->lock(rtc);
+diff --git a/drivers/uwb/lc-rc.c b/drivers/uwb/lc-rc.c
+index d059ad4d0dbd..97ee1b46db69 100644
+--- a/drivers/uwb/lc-rc.c
++++ b/drivers/uwb/lc-rc.c
+@@ -56,8 +56,11 @@ static struct uwb_rc *uwb_rc_find_by_index(int index)
+ 	struct uwb_rc *rc = NULL;
+ 
+ 	dev = class_find_device(&uwb_rc_class, NULL, &index, uwb_rc_index_match);
+-	if (dev)
++	if (dev) {
+ 		rc = dev_get_drvdata(dev);
++		put_device(dev);
++	}
++
+ 	return rc;
+ }
+ 
+@@ -467,7 +470,9 @@ struct uwb_rc *__uwb_rc_try_get(struct uwb_rc *target_rc)
+ 	if (dev) {
+ 		rc = dev_get_drvdata(dev);
+ 		__uwb_rc_get(rc);
++		put_device(dev);
+ 	}
++
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(__uwb_rc_try_get);
+@@ -520,8 +525,11 @@ struct uwb_rc *uwb_rc_get_by_grandpa(const struct device *grandpa_dev)
+ 
+ 	dev = class_find_device(&uwb_rc_class, NULL, grandpa_dev,
+ 				find_rc_grandpa);
+-	if (dev)
++	if (dev) {
+ 		rc = dev_get_drvdata(dev);
++		put_device(dev);
++	}
++
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(uwb_rc_get_by_grandpa);
+@@ -553,8 +561,10 @@ struct uwb_rc *uwb_rc_get_by_dev(const struct uwb_dev_addr *addr)
+ 	struct uwb_rc *rc = NULL;
+ 
+ 	dev = class_find_device(&uwb_rc_class, NULL, addr, find_rc_dev);
+-	if (dev)
++	if (dev) {
+ 		rc = dev_get_drvdata(dev);
++		put_device(dev);
++	}
+ 
+ 	return rc;
+ }
+diff --git a/drivers/uwb/pal.c b/drivers/uwb/pal.c
+index c1304b8d4985..678e93741ae1 100644
+--- a/drivers/uwb/pal.c
++++ b/drivers/uwb/pal.c
+@@ -97,6 +97,8 @@ static bool uwb_rc_class_device_exists(struct uwb_rc *target_rc)
+ 
+ 	dev = class_find_device(&uwb_rc_class, NULL, target_rc,	find_rc);
+ 
++	put_device(dev);
++
+ 	return (dev != NULL);
+ }
+ 
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index ea31931386ec..7bd21aaedaaf 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -235,6 +235,7 @@ struct ext4_io_submit {
+ #define	EXT4_MAX_BLOCK_SIZE		65536
+ #define EXT4_MIN_BLOCK_LOG_SIZE		10
+ #define EXT4_MAX_BLOCK_LOG_SIZE		16
++#define EXT4_MAX_CLUSTER_LOG_SIZE	30
+ #ifdef __KERNEL__
+ # define EXT4_BLOCK_SIZE(s)		((s)->s_blocksize)
+ #else
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 3ec8708989ca..ec89f5005c2d 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3518,7 +3518,15 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	if (blocksize < EXT4_MIN_BLOCK_SIZE ||
+ 	    blocksize > EXT4_MAX_BLOCK_SIZE) {
+ 		ext4_msg(sb, KERN_ERR,
+-		       "Unsupported filesystem blocksize %d", blocksize);
++		       "Unsupported filesystem blocksize %d (%d log_block_size)",
++			 blocksize, le32_to_cpu(es->s_log_block_size));
++		goto failed_mount;
++	}
++	if (le32_to_cpu(es->s_log_block_size) >
++	    (EXT4_MAX_BLOCK_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) {
++		ext4_msg(sb, KERN_ERR,
++			 "Invalid log block size: %u",
++			 le32_to_cpu(es->s_log_block_size));
+ 		goto failed_mount;
+ 	}
+ 
+@@ -3650,6 +3658,13 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 				 "block size (%d)", clustersize, blocksize);
+ 			goto failed_mount;
+ 		}
++		if (le32_to_cpu(es->s_log_cluster_size) >
++		    (EXT4_MAX_CLUSTER_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) {
++			ext4_msg(sb, KERN_ERR,
++				 "Invalid log cluster size: %u",
++				 le32_to_cpu(es->s_log_cluster_size));
++			goto failed_mount;
++		}
+ 		sbi->s_cluster_bits = le32_to_cpu(es->s_log_cluster_size) -
+ 			le32_to_cpu(es->s_log_block_size);
+ 		sbi->s_clusters_per_group =
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 3988b43c2f5a..a621dd98a865 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1985,6 +1985,10 @@ static int fuse_write_end(struct file *file, struct address_space *mapping,
+ {
+ 	struct inode *inode = page->mapping->host;
+ 
++	/* Haven't copied anything?  Skip zeroing, size extending, dirtying. */
++	if (!copied)
++		goto unlock;
++
+ 	if (!PageUptodate(page)) {
+ 		/* Zero any unwritten bytes at the end of the page */
+ 		size_t endoff = (pos + copied) & ~PAGE_MASK;
+@@ -1995,6 +1999,8 @@ static int fuse_write_end(struct file *file, struct address_space *mapping,
+ 
+ 	fuse_write_update_size(inode, pos + copied);
+ 	set_page_dirty(page);
++
++unlock:
+ 	unlock_page(page);
+ 	put_page(page);
+ 
+diff --git a/include/linux/sunrpc/svc_xprt.h b/include/linux/sunrpc/svc_xprt.h
+index ab02a457da1f..e5d193440374 100644
+--- a/include/linux/sunrpc/svc_xprt.h
++++ b/include/linux/sunrpc/svc_xprt.h
+@@ -25,6 +25,7 @@ struct svc_xprt_ops {
+ 	void		(*xpo_detach)(struct svc_xprt *);
+ 	void		(*xpo_free)(struct svc_xprt *);
+ 	int		(*xpo_secure_port)(struct svc_rqst *);
++	void		(*xpo_kill_temp_xprt)(struct svc_xprt *);
+ };
+ 
+ struct svc_xprt_class {
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 9530fcd27704..9d592c66f754 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -1341,12 +1341,12 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
+ 
+ 	} else if (new->flags & IRQF_TRIGGER_MASK) {
+ 		unsigned int nmsk = new->flags & IRQF_TRIGGER_MASK;
+-		unsigned int omsk = irq_settings_get_trigger_mask(desc);
++		unsigned int omsk = irqd_get_trigger_type(&desc->irq_data);
+ 
+ 		if (nmsk != omsk)
+ 			/* hope the handler works with current  trigger mode */
+ 			pr_warn("irq %d uses trigger mode %u; requested %u\n",
+-				irq, nmsk, omsk);
++				irq, omsk, nmsk);
+ 	}
+ 
+ 	*old_ptr = new;
+diff --git a/kernel/power/suspend_test.c b/kernel/power/suspend_test.c
+index 084452e34a12..bdff5ed57f10 100644
+--- a/kernel/power/suspend_test.c
++++ b/kernel/power/suspend_test.c
+@@ -203,8 +203,10 @@ static int __init test_suspend(void)
+ 
+ 	/* RTCs have initialized by now too ... can we use one? */
+ 	dev = class_find_device(rtc_class, NULL, NULL, has_wakealarm);
+-	if (dev)
++	if (dev) {
+ 		rtc = rtc_class_open(dev_name(dev));
++		put_device(dev);
++	}
+ 	if (!rtc) {
+ 		printk(warn_no_rtc);
+ 		return 0;
+diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
+index d0a1617b52b4..979e7bfbde7a 100644
+--- a/kernel/trace/Makefile
++++ b/kernel/trace/Makefile
+@@ -1,8 +1,4 @@
+ 
+-# We are fully aware of the dangers of __builtin_return_address()
+-FRAME_CFLAGS := $(call cc-disable-warning,frame-address)
+-KBUILD_CFLAGS += $(FRAME_CFLAGS)
+-
+ # Do not instrument the tracer itself:
+ 
+ ifdef CONFIG_FUNCTION_TRACER
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 84752c8e28b5..b1d7f1b5e791 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -1856,6 +1856,10 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
+ 
+ 	/* Update rec->flags */
+ 	do_for_each_ftrace_rec(pg, rec) {
++
++		if (rec->flags & FTRACE_FL_DISABLED)
++			continue;
++
+ 		/* We need to update only differences of filter_hash */
+ 		in_old = !!ftrace_lookup_ip(old_hash, rec->ip);
+ 		in_new = !!ftrace_lookup_ip(new_hash, rec->ip);
+@@ -1878,6 +1882,10 @@ rollback:
+ 
+ 	/* Roll back what we did above */
+ 	do_for_each_ftrace_rec(pg, rec) {
++
++		if (rec->flags & FTRACE_FL_DISABLED)
++			continue;
++
+ 		if (rec == end)
+ 			goto err_out;
+ 
+@@ -2391,6 +2399,10 @@ void __weak ftrace_replace_code(int enable)
+ 		return;
+ 
+ 	do_for_each_ftrace_rec(pg, rec) {
++
++		if (rec->flags & FTRACE_FL_DISABLED)
++			continue;
++
+ 		failed = __ftrace_replace_code(rec, enable);
+ 		if (failed) {
+ 			ftrace_bug(failed, rec);
+@@ -2757,7 +2769,7 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
+ 		struct dyn_ftrace *rec;
+ 
+ 		do_for_each_ftrace_rec(pg, rec) {
+-			if (FTRACE_WARN_ON_ONCE(rec->flags))
++			if (FTRACE_WARN_ON_ONCE(rec->flags & ~FTRACE_FL_DISABLED))
+ 				pr_warn("  %pS flags:%lx\n",
+ 					(void *)rec->ip, rec->flags);
+ 		} while_for_each_ftrace_rec();
+@@ -3592,6 +3604,10 @@ match_records(struct ftrace_hash *hash, char *func, int len, char *mod)
+ 		goto out_unlock;
+ 
+ 	do_for_each_ftrace_rec(pg, rec) {
++
++		if (rec->flags & FTRACE_FL_DISABLED)
++			continue;
++
+ 		if (ftrace_match_record(rec, &func_g, mod_match, exclude_mod)) {
+ 			ret = enter_record(hash, rec, clear_filter);
+ 			if (ret < 0) {
+@@ -3787,6 +3803,9 @@ register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
+ 
+ 	do_for_each_ftrace_rec(pg, rec) {
+ 
++		if (rec->flags & FTRACE_FL_DISABLED)
++			continue;
++
+ 		if (!ftrace_match_record(rec, &func_g, NULL, 0))
+ 			continue;
+ 
+@@ -4679,6 +4698,9 @@ ftrace_set_func(unsigned long *array, int *idx, int size, char *buffer)
+ 
+ 	do_for_each_ftrace_rec(pg, rec) {
+ 
++		if (rec->flags & FTRACE_FL_DISABLED)
++			continue;
++
+ 		if (ftrace_match_record(rec, &func_g, NULL, 0)) {
+ 			/* if it is in the array */
+ 			exists = false;
+diff --git a/mm/Makefile b/mm/Makefile
+index 2ca1faf3fa09..295bd7a9f76b 100644
+--- a/mm/Makefile
++++ b/mm/Makefile
+@@ -21,9 +21,6 @@ KCOV_INSTRUMENT_memcontrol.o := n
+ KCOV_INSTRUMENT_mmzone.o := n
+ KCOV_INSTRUMENT_vmstat.o := n
+ 
+-# Since __builtin_frame_address does work as used, disable the warning.
+-CFLAGS_usercopy.o += $(call cc-disable-warning, frame-address)
+-
+ mmu-y			:= nommu.o
+ mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
+ 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index 8e999ffdf28b..8af9d25ff988 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -1549,24 +1549,31 @@ static int bcm_connect(struct socket *sock, struct sockaddr *uaddr, int len,
+ 	struct sockaddr_can *addr = (struct sockaddr_can *)uaddr;
+ 	struct sock *sk = sock->sk;
+ 	struct bcm_sock *bo = bcm_sk(sk);
++	int ret = 0;
+ 
+ 	if (len < sizeof(*addr))
+ 		return -EINVAL;
+ 
+-	if (bo->bound)
+-		return -EISCONN;
++	lock_sock(sk);
++
++	if (bo->bound) {
++		ret = -EISCONN;
++		goto fail;
++	}
+ 
+ 	/* bind a device to this socket */
+ 	if (addr->can_ifindex) {
+ 		struct net_device *dev;
+ 
+ 		dev = dev_get_by_index(&init_net, addr->can_ifindex);
+-		if (!dev)
+-			return -ENODEV;
+-
++		if (!dev) {
++			ret = -ENODEV;
++			goto fail;
++		}
+ 		if (dev->type != ARPHRD_CAN) {
+ 			dev_put(dev);
+-			return -ENODEV;
++			ret = -ENODEV;
++			goto fail;
+ 		}
+ 
+ 		bo->ifindex = dev->ifindex;
+@@ -1577,17 +1584,24 @@ static int bcm_connect(struct socket *sock, struct sockaddr *uaddr, int len,
+ 		bo->ifindex = 0;
+ 	}
+ 
+-	bo->bound = 1;
+-
+ 	if (proc_dir) {
+ 		/* unique socket address as filename */
+ 		sprintf(bo->procname, "%lu", sock_i_ino(sk));
+ 		bo->bcm_proc_read = proc_create_data(bo->procname, 0644,
+ 						     proc_dir,
+ 						     &bcm_proc_fops, sk);
++		if (!bo->bcm_proc_read) {
++			ret = -ENOMEM;
++			goto fail;
++		}
+ 	}
+ 
+-	return 0;
++	bo->bound = 1;
++
++fail:
++	release_sock(sk);
++
++	return ret;
+ }
+ 
+ static int bcm_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 0af26699bf04..584ac76ec555 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -143,7 +143,8 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_DYNSET_TIMEOUT] != NULL) {
+ 		if (!(set->flags & NFT_SET_TIMEOUT))
+ 			return -EINVAL;
+-		timeout = be64_to_cpu(nla_get_be64(tb[NFTA_DYNSET_TIMEOUT]));
++		timeout = msecs_to_jiffies(be64_to_cpu(nla_get_be64(
++						tb[NFTA_DYNSET_TIMEOUT])));
+ 	}
+ 
+ 	priv->sreg_key = nft_parse_register(tb[NFTA_DYNSET_SREG_KEY]);
+@@ -230,7 +231,8 @@ static int nft_dynset_dump(struct sk_buff *skb, const struct nft_expr *expr)
+ 		goto nla_put_failure;
+ 	if (nla_put_string(skb, NFTA_DYNSET_SET_NAME, priv->set->name))
+ 		goto nla_put_failure;
+-	if (nla_put_be64(skb, NFTA_DYNSET_TIMEOUT, cpu_to_be64(priv->timeout),
++	if (nla_put_be64(skb, NFTA_DYNSET_TIMEOUT,
++			 cpu_to_be64(jiffies_to_msecs(priv->timeout)),
+ 			 NFTA_DYNSET_PAD))
+ 		goto nla_put_failure;
+ 	if (priv->expr && nft_expr_dump(skb, NFTA_DYNSET_EXPR, priv->expr))
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index c3f652395a80..3bc1d61694cb 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -1002,14 +1002,8 @@ static void svc_age_temp_xprts(unsigned long closure)
+ void svc_age_temp_xprts_now(struct svc_serv *serv, struct sockaddr *server_addr)
+ {
+ 	struct svc_xprt *xprt;
+-	struct svc_sock *svsk;
+-	struct socket *sock;
+ 	struct list_head *le, *next;
+ 	LIST_HEAD(to_be_closed);
+-	struct linger no_linger = {
+-		.l_onoff = 1,
+-		.l_linger = 0,
+-	};
+ 
+ 	spin_lock_bh(&serv->sv_lock);
+ 	list_for_each_safe(le, next, &serv->sv_tempsocks) {
+@@ -1027,10 +1021,7 @@ void svc_age_temp_xprts_now(struct svc_serv *serv, struct sockaddr *server_addr)
+ 		list_del_init(le);
+ 		xprt = list_entry(le, struct svc_xprt, xpt_list);
+ 		dprintk("svc_age_temp_xprts_now: closing %p\n", xprt);
+-		svsk = container_of(xprt, struct svc_sock, sk_xprt);
+-		sock = svsk->sk_sock;
+-		kernel_setsockopt(sock, SOL_SOCKET, SO_LINGER,
+-				  (char *)&no_linger, sizeof(no_linger));
++		xprt->xpt_ops->xpo_kill_temp_xprt(xprt);
+ 		svc_close_xprt(xprt);
+ 	}
+ }
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 57625f64efd5..a4bc98265d88 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -438,6 +438,21 @@ static int svc_tcp_has_wspace(struct svc_xprt *xprt)
+ 	return !test_bit(SOCK_NOSPACE, &svsk->sk_sock->flags);
+ }
+ 
++static void svc_tcp_kill_temp_xprt(struct svc_xprt *xprt)
++{
++	struct svc_sock *svsk;
++	struct socket *sock;
++	struct linger no_linger = {
++		.l_onoff = 1,
++		.l_linger = 0,
++	};
++
++	svsk = container_of(xprt, struct svc_sock, sk_xprt);
++	sock = svsk->sk_sock;
++	kernel_setsockopt(sock, SOL_SOCKET, SO_LINGER,
++			  (char *)&no_linger, sizeof(no_linger));
++}
++
+ /*
+  * See net/ipv6/ip_sockglue.c : ip_cmsg_recv_pktinfo
+  */
+@@ -648,6 +663,10 @@ static struct svc_xprt *svc_udp_accept(struct svc_xprt *xprt)
+ 	return NULL;
+ }
+ 
++static void svc_udp_kill_temp_xprt(struct svc_xprt *xprt)
++{
++}
++
+ static struct svc_xprt *svc_udp_create(struct svc_serv *serv,
+ 				       struct net *net,
+ 				       struct sockaddr *sa, int salen,
+@@ -667,6 +686,7 @@ static struct svc_xprt_ops svc_udp_ops = {
+ 	.xpo_has_wspace = svc_udp_has_wspace,
+ 	.xpo_accept = svc_udp_accept,
+ 	.xpo_secure_port = svc_sock_secure_port,
++	.xpo_kill_temp_xprt = svc_udp_kill_temp_xprt,
+ };
+ 
+ static struct svc_xprt_class svc_udp_class = {
+@@ -1242,6 +1262,7 @@ static struct svc_xprt_ops svc_tcp_ops = {
+ 	.xpo_has_wspace = svc_tcp_has_wspace,
+ 	.xpo_accept = svc_tcp_accept,
+ 	.xpo_secure_port = svc_sock_secure_port,
++	.xpo_kill_temp_xprt = svc_tcp_kill_temp_xprt,
+ };
+ 
+ static struct svc_xprt_class svc_tcp_class = {
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index 924271c9ef3e..a55b8093a7f9 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -67,6 +67,7 @@ static void svc_rdma_detach(struct svc_xprt *xprt);
+ static void svc_rdma_free(struct svc_xprt *xprt);
+ static int svc_rdma_has_wspace(struct svc_xprt *xprt);
+ static int svc_rdma_secure_port(struct svc_rqst *);
++static void svc_rdma_kill_temp_xprt(struct svc_xprt *);
+ 
+ static struct svc_xprt_ops svc_rdma_ops = {
+ 	.xpo_create = svc_rdma_create,
+@@ -79,6 +80,7 @@ static struct svc_xprt_ops svc_rdma_ops = {
+ 	.xpo_has_wspace = svc_rdma_has_wspace,
+ 	.xpo_accept = svc_rdma_accept,
+ 	.xpo_secure_port = svc_rdma_secure_port,
++	.xpo_kill_temp_xprt = svc_rdma_kill_temp_xprt,
+ };
+ 
+ struct svc_xprt_class svc_rdma_class = {
+@@ -1285,6 +1287,10 @@ static int svc_rdma_secure_port(struct svc_rqst *rqstp)
+ 	return 1;
+ }
+ 
++static void svc_rdma_kill_temp_xprt(struct svc_xprt *xprt)
++{
++}
++
+ int svc_rdma_send(struct svcxprt_rdma *xprt, struct ib_send_wr *wr)
+ {
+ 	struct ib_send_wr *bad_wr, *n_wr;
+diff --git a/scripts/gcc-x86_64-has-stack-protector.sh b/scripts/gcc-x86_64-has-stack-protector.sh
+index 973e8c141567..17867e723a51 100755
+--- a/scripts/gcc-x86_64-has-stack-protector.sh
++++ b/scripts/gcc-x86_64-has-stack-protector.sh
+@@ -1,6 +1,6 @@
+ #!/bin/sh
+ 
+-echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -mcmodel=kernel -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
++echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -mcmodel=kernel -fno-PIE -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+ if [ "$?" -eq "0" ] ; then
+ 	echo y
+ else
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 26e866f65314..162818058a5d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6905,8 +6905,6 @@ static const struct hda_fixup alc662_fixups[] = {
+ 		.v.pins = (const struct hda_pintbl[]) {
+ 			{ 0x15, 0x40f000f0 }, /* disabled */
+ 			{ 0x16, 0x40f000f0 }, /* disabled */
+-			{ 0x18, 0x01014011 }, /* LO */
+-			{ 0x1a, 0x01014012 }, /* LO */
+ 			{ }
+ 		}
+ 	},
+diff --git a/sound/pci/hda/thinkpad_helper.c b/sound/pci/hda/thinkpad_helper.c
+index 6a23302297c9..4d9d320a7971 100644
+--- a/sound/pci/hda/thinkpad_helper.c
++++ b/sound/pci/hda/thinkpad_helper.c
+@@ -13,7 +13,8 @@ static void (*old_vmaster_hook)(void *, int);
+ static bool is_thinkpad(struct hda_codec *codec)
+ {
+ 	return (codec->core.subsystem_id >> 16 == 0x17aa) &&
+-	       (acpi_dev_found("LEN0068") || acpi_dev_found("IBM0068"));
++	       (acpi_dev_found("LEN0068") || acpi_dev_found("LEN0268") ||
++		acpi_dev_found("IBM0068"));
+ }
+ 
+ static void update_tpacpi_mute_led(void *private_data, int enabled)
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 9e5276d6dda0..2ddc034673a8 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -315,7 +315,8 @@ static int snd_usb_audio_free(struct snd_usb_audio *chip)
+ 		snd_usb_endpoint_free(ep);
+ 
+ 	mutex_destroy(&chip->mutex);
+-	dev_set_drvdata(&chip->dev->dev, NULL);
++	if (!atomic_read(&chip->shutdown))
++		dev_set_drvdata(&chip->dev->dev, NULL);
+ 	kfree(chip);
+ 	return 0;
+ }
+diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
+index de15dbcdcecf..7214913861cf 100644
+--- a/tools/perf/util/hist.c
++++ b/tools/perf/util/hist.c
+@@ -1596,18 +1596,18 @@ static void hists__hierarchy_output_resort(struct hists *hists,
+ 		if (prog)
+ 			ui_progress__update(prog, 1);
+ 
++		hists->nr_entries++;
++		if (!he->filtered) {
++			hists->nr_non_filtered_entries++;
++			hists__calc_col_len(hists, he);
++		}
++
+ 		if (!he->leaf) {
+ 			hists__hierarchy_output_resort(hists, prog,
+ 						       &he->hroot_in,
+ 						       &he->hroot_out,
+ 						       min_callchain_hits,
+ 						       use_callchain);
+-			hists->nr_entries++;
+-			if (!he->filtered) {
+-				hists->nr_non_filtered_entries++;
+-				hists__calc_col_len(hists, he);
+-			}
+-
+ 			continue;
+ 		}
+ 
+diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
+index 6e9c40eea208..69ccce308458 100644
+--- a/virt/kvm/arm/pmu.c
++++ b/virt/kvm/arm/pmu.c
+@@ -305,7 +305,7 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
+ 			continue;
+ 		type = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
+ 		       & ARMV8_PMU_EVTYPE_EVENT;
+-		if ((type == ARMV8_PMU_EVTYPE_EVENT_SW_INCR)
++		if ((type == ARMV8_PMUV3_PERFCTR_SW_INCR)
+ 		    && (enable & BIT(i))) {
+ 			reg = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
+ 			reg = lower_32_bits(reg);
+@@ -379,7 +379,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+ 	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
+ 
+ 	/* Software increment event does't need to be backed by a perf event */
+-	if (eventsel == ARMV8_PMU_EVTYPE_EVENT_SW_INCR)
++	if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR &&
++	    select_idx != ARMV8_PMU_CYCLE_IDX)
+ 		return;
+ 
+ 	memset(&attr, 0, sizeof(struct perf_event_attr));
+@@ -391,7 +392,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+ 	attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
+ 	attr.exclude_hv = 1; /* Don't count EL2 events */
+ 	attr.exclude_host = 1; /* Don't count host events */
+-	attr.config = eventsel;
++	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
++		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
+ 
+ 	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
+ 	/* The initial sample period (overflow count) of an event. */



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-12-02 16:23 Alice Ferrazzi
  0 siblings, 0 replies; 26+ messages in thread
From: Alice Ferrazzi @ 2016-12-02 16:23 UTC (permalink / raw
  To: gentoo-commits

commit:     27ab52c49dea953256202d19c96202f5cf703bbe
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  2 16:22:48 2016 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Dec  2 16:22:48 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=27ab52c4

Linux patch 4.8.12

 0000_README             |    4 +
 1011_linux-4.8.12.patch | 1563 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1567 insertions(+)

diff --git a/0000_README b/0000_README
index 4aa1baf..cd56013 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-4.8.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.11
 
+Patch:  1011_linux-4.8.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-4.8.12.patch b/1011_linux-4.8.12.patch
new file mode 100644
index 0000000..9855afb
--- /dev/null
+++ b/1011_linux-4.8.12.patch
@@ -0,0 +1,1563 @@
+diff --git a/Makefile b/Makefile
+index 2b1bcbacebcd..7b0c92f53169 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index af12c2db9bb8..81c11a62b1fa 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -33,7 +33,9 @@ config PARISC
+ 	select HAVE_ARCH_HASH
+ 	select HAVE_ARCH_SECCOMP_FILTER
+ 	select HAVE_ARCH_TRACEHOOK
+-	select HAVE_UNSTABLE_SCHED_CLOCK if (SMP || !64BIT)
++	select GENERIC_SCHED_CLOCK
++	select HAVE_UNSTABLE_SCHED_CLOCK if SMP
++	select GENERIC_CLOCKEVENTS
+ 	select ARCH_NO_COHERENT_DMA_MMAP
+ 	select CPU_NO_EFFICIENT_FFS
+ 
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index 67001277256c..c2259d4a3c33 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -369,6 +369,7 @@ void __init parisc_setup_cache_timing(void)
+ {
+ 	unsigned long rangetime, alltime;
+ 	unsigned long size, start;
++	unsigned long threshold;
+ 
+ 	alltime = mfctl(16);
+ 	flush_data_cache();
+@@ -382,17 +383,12 @@ void __init parisc_setup_cache_timing(void)
+ 	printk(KERN_DEBUG "Whole cache flush %lu cycles, flushing %lu bytes %lu cycles\n",
+ 		alltime, size, rangetime);
+ 
+-	/* Racy, but if we see an intermediate value, it's ok too... */
+-	parisc_cache_flush_threshold = size * alltime / rangetime;
+-
+-	parisc_cache_flush_threshold = L1_CACHE_ALIGN(parisc_cache_flush_threshold);
+-	if (!parisc_cache_flush_threshold)
+-		parisc_cache_flush_threshold = FLUSH_THRESHOLD;
+-
+-	if (parisc_cache_flush_threshold > cache_info.dc_size)
+-		parisc_cache_flush_threshold = cache_info.dc_size;
+-
+-	printk(KERN_INFO "Setting cache flush threshold to %lu kB\n",
++	threshold = L1_CACHE_ALIGN(size * alltime / rangetime);
++	if (threshold > cache_info.dc_size)
++		threshold = cache_info.dc_size;
++	if (threshold)
++		parisc_cache_flush_threshold = threshold;
++	printk(KERN_INFO "Cache flush threshold set to %lu KiB\n",
+ 		parisc_cache_flush_threshold/1024);
+ 
+ 	/* calculate TLB flush threshold */
+@@ -401,7 +397,7 @@ void __init parisc_setup_cache_timing(void)
+ 	flush_tlb_all();
+ 	alltime = mfctl(16) - alltime;
+ 
+-	size = PAGE_SIZE;
++	size = 0;
+ 	start = (unsigned long) _text;
+ 	rangetime = mfctl(16);
+ 	while (start < (unsigned long) _end) {
+@@ -414,13 +410,10 @@ void __init parisc_setup_cache_timing(void)
+ 	printk(KERN_DEBUG "Whole TLB flush %lu cycles, flushing %lu bytes %lu cycles\n",
+ 		alltime, size, rangetime);
+ 
+-	parisc_tlb_flush_threshold = size * alltime / rangetime;
+-	parisc_tlb_flush_threshold *= num_online_cpus();
+-	parisc_tlb_flush_threshold = PAGE_ALIGN(parisc_tlb_flush_threshold);
+-	if (!parisc_tlb_flush_threshold)
+-		parisc_tlb_flush_threshold = FLUSH_TLB_THRESHOLD;
+-
+-	printk(KERN_INFO "Setting TLB flush threshold to %lu kB\n",
++	threshold = PAGE_ALIGN(num_online_cpus() * size * alltime / rangetime);
++	if (threshold)
++		parisc_tlb_flush_threshold = threshold;
++	printk(KERN_INFO "TLB flush threshold set to %lu KiB\n",
+ 		parisc_tlb_flush_threshold/1024);
+ }
+ 
+diff --git a/arch/parisc/kernel/pacache.S b/arch/parisc/kernel/pacache.S
+index b743a80eaba0..675521919229 100644
+--- a/arch/parisc/kernel/pacache.S
++++ b/arch/parisc/kernel/pacache.S
+@@ -96,7 +96,7 @@ fitmanyloop:					/* Loop if LOOP >= 2 */
+ 
+ fitmanymiddle:					/* Loop if LOOP >= 2 */
+ 	addib,COND(>)		-1, %r31, fitmanymiddle	/* Adjusted inner loop decr */
+-	pitlbe		0(%sr1, %r28)
++	pitlbe		%r0(%sr1, %r28)
+ 	pitlbe,m	%arg1(%sr1, %r28)	/* Last pitlbe and addr adjust */
+ 	addib,COND(>)		-1, %r29, fitmanymiddle	/* Middle loop decr */
+ 	copy		%arg3, %r31		/* Re-init inner loop count */
+@@ -139,7 +139,7 @@ fdtmanyloop:					/* Loop if LOOP >= 2 */
+ 
+ fdtmanymiddle:					/* Loop if LOOP >= 2 */
+ 	addib,COND(>)		-1, %r31, fdtmanymiddle	/* Adjusted inner loop decr */
+-	pdtlbe		0(%sr1, %r28)
++	pdtlbe		%r0(%sr1, %r28)
+ 	pdtlbe,m	%arg1(%sr1, %r28)	/* Last pdtlbe and addr adjust */
+ 	addib,COND(>)		-1, %r29, fdtmanymiddle	/* Middle loop decr */
+ 	copy		%arg3, %r31		/* Re-init inner loop count */
+@@ -620,12 +620,12 @@ ENTRY(copy_user_page_asm)
+ 	/* Purge any old translations */
+ 
+ #ifdef CONFIG_PA20
+-	pdtlb,l		0(%r28)
+-	pdtlb,l		0(%r29)
++	pdtlb,l		%r0(%r28)
++	pdtlb,l		%r0(%r29)
+ #else
+ 	tlb_lock	%r20,%r21,%r22
+-	pdtlb		0(%r28)
+-	pdtlb		0(%r29)
++	pdtlb		%r0(%r28)
++	pdtlb		%r0(%r29)
+ 	tlb_unlock	%r20,%r21,%r22
+ #endif
+ 
+@@ -768,10 +768,10 @@ ENTRY(clear_user_page_asm)
+ 	/* Purge any old translation */
+ 
+ #ifdef CONFIG_PA20
+-	pdtlb,l		0(%r28)
++	pdtlb,l		%r0(%r28)
+ #else
+ 	tlb_lock	%r20,%r21,%r22
+-	pdtlb		0(%r28)
++	pdtlb		%r0(%r28)
+ 	tlb_unlock	%r20,%r21,%r22
+ #endif
+ 
+@@ -852,10 +852,10 @@ ENTRY(flush_dcache_page_asm)
+ 	/* Purge any old translation */
+ 
+ #ifdef CONFIG_PA20
+-	pdtlb,l		0(%r28)
++	pdtlb,l		%r0(%r28)
+ #else
+ 	tlb_lock	%r20,%r21,%r22
+-	pdtlb		0(%r28)
++	pdtlb		%r0(%r28)
+ 	tlb_unlock	%r20,%r21,%r22
+ #endif
+ 
+@@ -892,10 +892,10 @@ ENTRY(flush_dcache_page_asm)
+ 	sync
+ 
+ #ifdef CONFIG_PA20
+-	pdtlb,l		0(%r25)
++	pdtlb,l		%r0(%r25)
+ #else
+ 	tlb_lock	%r20,%r21,%r22
+-	pdtlb		0(%r25)
++	pdtlb		%r0(%r25)
+ 	tlb_unlock	%r20,%r21,%r22
+ #endif
+ 
+@@ -925,13 +925,18 @@ ENTRY(flush_icache_page_asm)
+ 	depwi		0, 31,PAGE_SHIFT, %r28	/* Clear any offset bits */
+ #endif
+ 
+-	/* Purge any old translation */
++	/* Purge any old translation.  Note that the FIC instruction
++	 * may use either the instruction or data TLB.  Given that we
++	 * have a flat address space, it's not clear which TLB will be
++	 * used.  So, we purge both entries.  */
+ 
+ #ifdef CONFIG_PA20
++	pdtlb,l		%r0(%r28)
+ 	pitlb,l         %r0(%sr4,%r28)
+ #else
+ 	tlb_lock        %r20,%r21,%r22
+-	pitlb           (%sr4,%r28)
++	pdtlb		%r0(%r28)
++	pitlb           %r0(%sr4,%r28)
+ 	tlb_unlock      %r20,%r21,%r22
+ #endif
+ 
+@@ -970,10 +975,12 @@ ENTRY(flush_icache_page_asm)
+ 	sync
+ 
+ #ifdef CONFIG_PA20
++	pdtlb,l		%r0(%r28)
+ 	pitlb,l         %r0(%sr4,%r25)
+ #else
+ 	tlb_lock        %r20,%r21,%r22
+-	pitlb           (%sr4,%r25)
++	pdtlb		%r0(%r28)
++	pitlb           %r0(%sr4,%r25)
+ 	tlb_unlock      %r20,%r21,%r22
+ #endif
+ 
+diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c
+index 02d9ed0f3949..494ff6e8c88a 100644
+--- a/arch/parisc/kernel/pci-dma.c
++++ b/arch/parisc/kernel/pci-dma.c
+@@ -95,8 +95,8 @@ static inline int map_pte_uncached(pte_t * pte,
+ 
+ 		if (!pte_none(*pte))
+ 			printk(KERN_ERR "map_pte_uncached: page already exists\n");
+-		set_pte(pte, __mk_pte(*paddr_ptr, PAGE_KERNEL_UNC));
+ 		purge_tlb_start(flags);
++		set_pte(pte, __mk_pte(*paddr_ptr, PAGE_KERNEL_UNC));
+ 		pdtlb_kernel(orig_vaddr);
+ 		purge_tlb_end(flags);
+ 		vaddr += PAGE_SIZE;
+diff --git a/arch/parisc/kernel/setup.c b/arch/parisc/kernel/setup.c
+index 81d6f6391944..2e66a887788e 100644
+--- a/arch/parisc/kernel/setup.c
++++ b/arch/parisc/kernel/setup.c
+@@ -334,6 +334,10 @@ static int __init parisc_init(void)
+ 	/* tell PDC we're Linux. Nevermind failure. */
+ 	pdc_stable_write(0x40, &osid, sizeof(osid));
+ 	
++	/* start with known state */
++	flush_cache_all_local();
++	flush_tlb_all_local(NULL);
++
+ 	processor_init();
+ #ifdef CONFIG_SMP
+ 	pr_info("CPU(s): %d out of %d %s at %d.%06d MHz online\n",
+diff --git a/arch/parisc/kernel/time.c b/arch/parisc/kernel/time.c
+index 9b63b876a13a..325f30d82b64 100644
+--- a/arch/parisc/kernel/time.c
++++ b/arch/parisc/kernel/time.c
+@@ -14,6 +14,7 @@
+ #include <linux/module.h>
+ #include <linux/rtc.h>
+ #include <linux/sched.h>
++#include <linux/sched_clock.h>
+ #include <linux/kernel.h>
+ #include <linux/param.h>
+ #include <linux/string.h>
+@@ -39,18 +40,6 @@
+ 
+ static unsigned long clocktick __read_mostly;	/* timer cycles per tick */
+ 
+-#ifndef CONFIG_64BIT
+-/*
+- * The processor-internal cycle counter (Control Register 16) is used as time
+- * source for the sched_clock() function.  This register is 64bit wide on a
+- * 64-bit kernel and 32bit on a 32-bit kernel. Since sched_clock() always
+- * requires a 64bit counter we emulate on the 32-bit kernel the higher 32bits
+- * with a per-cpu variable which we increase every time the counter
+- * wraps-around (which happens every ~4 secounds).
+- */
+-static DEFINE_PER_CPU(unsigned long, cr16_high_32_bits);
+-#endif
+-
+ /*
+  * We keep time on PA-RISC Linux by using the Interval Timer which is
+  * a pair of registers; one is read-only and one is write-only; both
+@@ -121,12 +110,6 @@ irqreturn_t __irq_entry timer_interrupt(int irq, void *dev_id)
+ 	 */
+ 	mtctl(next_tick, 16);
+ 
+-#if !defined(CONFIG_64BIT)
+-	/* check for overflow on a 32bit kernel (every ~4 seconds). */
+-	if (unlikely(next_tick < now))
+-		this_cpu_inc(cr16_high_32_bits);
+-#endif
+-
+ 	/* Skip one clocktick on purpose if we missed next_tick.
+ 	 * The new CR16 must be "later" than current CR16 otherwise
+ 	 * itimer would not fire until CR16 wrapped - e.g 4 seconds
+@@ -208,7 +191,7 @@ EXPORT_SYMBOL(profile_pc);
+ 
+ /* clock source code */
+ 
+-static cycle_t read_cr16(struct clocksource *cs)
++static cycle_t notrace read_cr16(struct clocksource *cs)
+ {
+ 	return get_cycles();
+ }
+@@ -287,26 +270,9 @@ void read_persistent_clock(struct timespec *ts)
+ }
+ 
+ 
+-/*
+- * sched_clock() framework
+- */
+-
+-static u32 cyc2ns_mul __read_mostly;
+-static u32 cyc2ns_shift __read_mostly;
+-
+-u64 sched_clock(void)
++static u64 notrace read_cr16_sched_clock(void)
+ {
+-	u64 now;
+-
+-	/* Get current cycle counter (Control Register 16). */
+-#ifdef CONFIG_64BIT
+-	now = mfctl(16);
+-#else
+-	now = mfctl(16) + (((u64) this_cpu_read(cr16_high_32_bits)) << 32);
+-#endif
+-
+-	/* return the value in ns (cycles_2_ns) */
+-	return mul_u64_u32_shr(now, cyc2ns_mul, cyc2ns_shift);
++	return get_cycles();
+ }
+ 
+ 
+@@ -316,17 +282,16 @@ u64 sched_clock(void)
+ 
+ void __init time_init(void)
+ {
+-	unsigned long current_cr16_khz;
++	unsigned long cr16_hz;
+ 
+-	current_cr16_khz = PAGE0->mem_10msec/10;  /* kHz */
+ 	clocktick = (100 * PAGE0->mem_10msec) / HZ;
+-
+-	/* calculate mult/shift values for cr16 */
+-	clocks_calc_mult_shift(&cyc2ns_mul, &cyc2ns_shift, current_cr16_khz,
+-				NSEC_PER_MSEC, 0);
+-
+ 	start_cpu_itimer();	/* get CPU 0 started */
+ 
++	cr16_hz = 100 * PAGE0->mem_10msec;  /* Hz */
++
+ 	/* register at clocksource framework */
+-	clocksource_register_khz(&clocksource_cr16, current_cr16_khz);
++	clocksource_register_hz(&clocksource_cr16, cr16_hz);
++
++	/* register as sched_clock source */
++	sched_clock_register(read_cr16_sched_clock, BITS_PER_LONG, cr16_hz);
+ }
+diff --git a/arch/powerpc/boot/main.c b/arch/powerpc/boot/main.c
+index d80161b633f4..60522d22a428 100644
+--- a/arch/powerpc/boot/main.c
++++ b/arch/powerpc/boot/main.c
+@@ -217,8 +217,12 @@ void start(void)
+ 		console_ops.close();
+ 
+ 	kentry = (kernel_entry_t) vmlinux.addr;
+-	if (ft_addr)
+-		kentry(ft_addr, 0, NULL);
++	if (ft_addr) {
++		if(platform_ops.kentry)
++			platform_ops.kentry(ft_addr, vmlinux.addr);
++		else
++			kentry(ft_addr, 0, NULL);
++	}
+ 	else
+ 		kentry((unsigned long)initrd.addr, initrd.size,
+ 		       loader_info.promptr);
+diff --git a/arch/powerpc/boot/opal-calls.S b/arch/powerpc/boot/opal-calls.S
+index ff2f1b97bc53..2a99fc9a3ccf 100644
+--- a/arch/powerpc/boot/opal-calls.S
++++ b/arch/powerpc/boot/opal-calls.S
+@@ -12,6 +12,19 @@
+ 
+ 	.text
+ 
++	.globl opal_kentry
++opal_kentry:
++	/* r3 is the fdt ptr */
++	mtctr r4
++	li	r4, 0
++	li	r5, 0
++	li	r6, 0
++	li	r7, 0
++	ld	r11,opal@got(r2)
++	ld	r8,0(r11)
++	ld	r9,8(r11)
++	bctr
++
+ #define OPAL_CALL(name, token)				\
+ 	.globl name;					\
+ name:							\
+diff --git a/arch/powerpc/boot/opal.c b/arch/powerpc/boot/opal.c
+index 1f37e1c1d6d8..d7b4fd47eb44 100644
+--- a/arch/powerpc/boot/opal.c
++++ b/arch/powerpc/boot/opal.c
+@@ -23,14 +23,25 @@ struct opal {
+ 
+ static u32 opal_con_id;
+ 
++/* see opal-wrappers.S */
+ int64_t opal_console_write(int64_t term_number, u64 *length, const u8 *buffer);
+ int64_t opal_console_read(int64_t term_number, uint64_t *length, u8 *buffer);
+ int64_t opal_console_write_buffer_space(uint64_t term_number, uint64_t *length);
+ int64_t opal_console_flush(uint64_t term_number);
+ int64_t opal_poll_events(uint64_t *outstanding_event_mask);
+ 
++void opal_kentry(unsigned long fdt_addr, void *vmlinux_addr);
++
+ static int opal_con_open(void)
+ {
++	/*
++	 * When OPAL loads the boot kernel it stashes the OPAL base and entry
++	 * address in r8 and r9 so the kernel can use the OPAL console
++	 * before unflattening the devicetree. While executing the wrapper will
++	 * probably trash r8 and r9 so this kentry hook restores them before
++	 * entering the decompressed kernel.
++	 */
++	platform_ops.kentry = opal_kentry;
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/boot/ops.h b/arch/powerpc/boot/ops.h
+index e19b64ef977a..deeae6f6ba9c 100644
+--- a/arch/powerpc/boot/ops.h
++++ b/arch/powerpc/boot/ops.h
+@@ -30,6 +30,7 @@ struct platform_ops {
+ 	void *	(*realloc)(void *ptr, unsigned long size);
+ 	void	(*exit)(void);
+ 	void *	(*vmlinux_alloc)(unsigned long size);
++	void  	(*kentry)(unsigned long fdt_addr, void *vmlinux_addr);
+ };
+ extern struct platform_ops platform_ops;
+ 
+diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
+index e2fb408f8398..fd10b582fb2d 100644
+--- a/arch/powerpc/include/asm/mmu.h
++++ b/arch/powerpc/include/asm/mmu.h
+@@ -29,6 +29,12 @@
+  */
+ 
+ /*
++ * Kernel read only support.
++ * We added the ppp value 0b110 in ISA 2.04.
++ */
++#define MMU_FTR_KERNEL_RO		ASM_CONST(0x00004000)
++
++/*
+  * We need to clear top 16bits of va (from the remaining 64 bits )in
+  * tlbie* instructions
+  */
+@@ -103,10 +109,10 @@
+ #define MMU_FTRS_POWER4		MMU_FTRS_DEFAULT_HPTE_ARCH_V2
+ #define MMU_FTRS_PPC970		MMU_FTRS_POWER4 | MMU_FTR_TLBIE_CROP_VA
+ #define MMU_FTRS_POWER5		MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE
+-#define MMU_FTRS_POWER6		MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE
+-#define MMU_FTRS_POWER7		MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE
+-#define MMU_FTRS_POWER8		MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE
+-#define MMU_FTRS_POWER9		MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE
++#define MMU_FTRS_POWER6		MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO
++#define MMU_FTRS_POWER7		MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO
++#define MMU_FTRS_POWER8		MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO
++#define MMU_FTRS_POWER9		MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO
+ #define MMU_FTRS_CELL		MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \
+ 				MMU_FTR_CI_LARGE_PAGE
+ #define MMU_FTRS_PA6T		MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index 978dada662ae..52cbf043e960 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -355,6 +355,7 @@
+ #define     LPCR_PECE0		ASM_CONST(0x0000000000004000)	/* ext. exceptions can cause exit */
+ #define     LPCR_PECE1		ASM_CONST(0x0000000000002000)	/* decrementer can cause exit */
+ #define     LPCR_PECE2		ASM_CONST(0x0000000000001000)	/* machine check etc can cause exit */
++#define     LPCR_PECE_HVEE	ASM_CONST(0x0000400000000000)	/* P9 Wakeup on HV interrupts */
+ #define   LPCR_MER		ASM_CONST(0x0000000000000800)	/* Mediated External Exception */
+ #define   LPCR_MER_SH		11
+ #define   LPCR_TC		ASM_CONST(0x0000000000000200)	/* Translation control */
+diff --git a/arch/powerpc/kernel/cpu_setup_power.S b/arch/powerpc/kernel/cpu_setup_power.S
+index 52ff3f025437..37c027ca83b2 100644
+--- a/arch/powerpc/kernel/cpu_setup_power.S
++++ b/arch/powerpc/kernel/cpu_setup_power.S
+@@ -98,8 +98,8 @@ _GLOBAL(__setup_cpu_power9)
+ 	li	r0,0
+ 	mtspr	SPRN_LPID,r0
+ 	mfspr	r3,SPRN_LPCR
+-	ori	r3, r3, LPCR_PECEDH
+-	ori	r3, r3, LPCR_HVICE
++	LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE)
++	or	r3, r3, r4
+ 	bl	__init_LPCR
+ 	bl	__init_HFSCR
+ 	bl	__init_tlb_power9
+@@ -118,8 +118,8 @@ _GLOBAL(__restore_cpu_power9)
+ 	li	r0,0
+ 	mtspr	SPRN_LPID,r0
+ 	mfspr   r3,SPRN_LPCR
+-	ori	r3, r3, LPCR_PECEDH
+-	ori	r3, r3, LPCR_HVICE
++	LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE)
++	or	r3, r3, r4
+ 	bl	__init_LPCR
+ 	bl	__init_HFSCR
+ 	bl	__init_tlb_power9
+diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
+index 28923b2e2df1..8dff9ce6fbc1 100644
+--- a/arch/powerpc/mm/hash_utils_64.c
++++ b/arch/powerpc/mm/hash_utils_64.c
+@@ -190,8 +190,12 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags)
+ 		/*
+ 		 * Kernel read only mapped with ppp bits 0b110
+ 		 */
+-		if (!(pteflags & _PAGE_WRITE))
+-			rflags |= (HPTE_R_PP0 | 0x2);
++		if (!(pteflags & _PAGE_WRITE)) {
++			if (mmu_has_feature(MMU_FTR_KERNEL_RO))
++				rflags |= (HPTE_R_PP0 | 0x2);
++			else
++				rflags |= 0x3;
++		}
+ 	} else {
+ 		if (pteflags & _PAGE_RWX)
+ 			rflags |= 0x2;
+diff --git a/arch/tile/kernel/time.c b/arch/tile/kernel/time.c
+index 178989e6d3e3..ea960d660917 100644
+--- a/arch/tile/kernel/time.c
++++ b/arch/tile/kernel/time.c
+@@ -218,8 +218,8 @@ void do_timer_interrupt(struct pt_regs *regs, int fault_num)
+  */
+ unsigned long long sched_clock(void)
+ {
+-	return clocksource_cyc2ns(get_cycles(),
+-				  sched_clock_mult, SCHED_CLOCK_SHIFT);
++	return mult_frac(get_cycles(),
++			 sched_clock_mult, 1ULL << SCHED_CLOCK_SHIFT);
+ }
+ 
+ int setup_profiling_timer(unsigned int multiplier)
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 9b983a474253..8fc714b4f18a 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1070,20 +1070,20 @@ static void setup_pebs_sample_data(struct perf_event *event,
+ 	}
+ 
+ 	/*
+-	 * We use the interrupt regs as a base because the PEBS record
+-	 * does not contain a full regs set, specifically it seems to
+-	 * lack segment descriptors, which get used by things like
+-	 * user_mode().
++	 * We use the interrupt regs as a base because the PEBS record does not
++	 * contain a full regs set, specifically it seems to lack segment
++	 * descriptors, which get used by things like user_mode().
+ 	 *
+-	 * In the simple case fix up only the IP and BP,SP regs, for
+-	 * PERF_SAMPLE_IP and PERF_SAMPLE_CALLCHAIN to function properly.
+-	 * A possible PERF_SAMPLE_REGS will have to transfer all regs.
++	 * In the simple case fix up only the IP for PERF_SAMPLE_IP.
++	 *
++	 * We must however always use BP,SP from iregs for the unwinder to stay
++	 * sane; the record BP,SP can point into thin air when the record is
+	 * from a previous PMI context or an (I)RET happened between the record
++	 * and PMI.
+ 	 */
+ 	*regs = *iregs;
+ 	regs->flags = pebs->flags;
+ 	set_linear_ip(regs, pebs->ip);
+-	regs->bp = pebs->bp;
+-	regs->sp = pebs->sp;
+ 
+ 	if (sample_type & PERF_SAMPLE_REGS_INTR) {
+ 		regs->ax = pebs->ax;
+@@ -1092,10 +1092,21 @@ static void setup_pebs_sample_data(struct perf_event *event,
+ 		regs->dx = pebs->dx;
+ 		regs->si = pebs->si;
+ 		regs->di = pebs->di;
+-		regs->bp = pebs->bp;
+-		regs->sp = pebs->sp;
+ 
+-		regs->flags = pebs->flags;
++		/*
++		 * Per the above; only set BP,SP if we don't need callchains.
++		 *
++		 * XXX: does this make sense?
++		 */
++		if (!(sample_type & PERF_SAMPLE_CALLCHAIN)) {
++			regs->bp = pebs->bp;
++			regs->sp = pebs->sp;
++		}
++
++		/*
++		 * Preserve PERF_EFLAGS_VM from set_linear_ip().
++		 */
++		regs->flags = pebs->flags | (regs->flags & PERF_EFLAGS_VM);
+ #ifndef CONFIG_X86_32
+ 		regs->r8 = pebs->r8;
+ 		regs->r9 = pebs->r9;
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 8c4a47706296..181c238d4df9 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -113,7 +113,7 @@ struct debug_store {
+  * Per register state.
+  */
+ struct er_account {
+-	raw_spinlock_t		lock;	/* per-core: protect structure */
++	raw_spinlock_t      lock;	/* per-core: protect structure */
+ 	u64                 config;	/* extra MSR config */
+ 	u64                 reg;	/* extra MSR number */
+ 	atomic_t            ref;	/* reference count */
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index 3fc03a09a93b..c289e2f4a6e5 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -517,14 +517,14 @@ void fpu__clear(struct fpu *fpu)
+ {
+ 	WARN_ON_FPU(fpu != &current->thread.fpu); /* Almost certainly an anomaly */
+ 
+-	if (!use_eager_fpu() || !static_cpu_has(X86_FEATURE_FPU)) {
+-		/* FPU state will be reallocated lazily at the first use. */
+-		fpu__drop(fpu);
+-	} else {
+-		if (!fpu->fpstate_active) {
+-			fpu__activate_curr(fpu);
+-			user_fpu_begin();
+-		}
++	fpu__drop(fpu);
++
++	/*
++	 * Make sure fpstate is cleared and initialized.
++	 */
++	if (static_cpu_has(X86_FEATURE_FPU)) {
++		fpu__activate_curr(fpu);
++		user_fpu_begin();
+ 		copy_init_fpstate_to_fpregs();
+ 	}
+ }
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index cbd7b92585bb..a3ce9d260d68 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -2105,16 +2105,10 @@ static int em_iret(struct x86_emulate_ctxt *ctxt)
+ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
+ {
+ 	int rc;
+-	unsigned short sel, old_sel;
+-	struct desc_struct old_desc, new_desc;
+-	const struct x86_emulate_ops *ops = ctxt->ops;
++	unsigned short sel;
++	struct desc_struct new_desc;
+ 	u8 cpl = ctxt->ops->cpl(ctxt);
+ 
+-	/* Assignment of RIP may only fail in 64-bit mode */
+-	if (ctxt->mode == X86EMUL_MODE_PROT64)
+-		ops->get_segment(ctxt, &old_sel, &old_desc, NULL,
+-				 VCPU_SREG_CS);
+-
+ 	memcpy(&sel, ctxt->src.valptr + ctxt->op_bytes, 2);
+ 
+ 	rc = __load_segment_descriptor(ctxt, sel, VCPU_SREG_CS, cpl,
+@@ -2124,12 +2118,10 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
+ 		return rc;
+ 
+ 	rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
+-	if (rc != X86EMUL_CONTINUE) {
+-		WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64);
+-		/* assigning eip failed; restore the old cs */
+-		ops->set_segment(ctxt, old_sel, &old_desc, 0, VCPU_SREG_CS);
+-		return rc;
+-	}
++	/* Error handling is not implemented. */
++	if (rc != X86EMUL_CONTINUE)
++		return X86EMUL_UNHANDLEABLE;
++
+ 	return rc;
+ }
+ 
+@@ -2189,14 +2181,8 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
+ {
+ 	int rc;
+ 	unsigned long eip, cs;
+-	u16 old_cs;
+ 	int cpl = ctxt->ops->cpl(ctxt);
+-	struct desc_struct old_desc, new_desc;
+-	const struct x86_emulate_ops *ops = ctxt->ops;
+-
+-	if (ctxt->mode == X86EMUL_MODE_PROT64)
+-		ops->get_segment(ctxt, &old_cs, &old_desc, NULL,
+-				 VCPU_SREG_CS);
++	struct desc_struct new_desc;
+ 
+ 	rc = emulate_pop(ctxt, &eip, ctxt->op_bytes);
+ 	if (rc != X86EMUL_CONTINUE)
+@@ -2213,10 +2199,10 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
+ 	if (rc != X86EMUL_CONTINUE)
+ 		return rc;
+ 	rc = assign_eip_far(ctxt, eip, &new_desc);
+-	if (rc != X86EMUL_CONTINUE) {
+-		WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64);
+-		ops->set_segment(ctxt, old_cs, &old_desc, 0, VCPU_SREG_CS);
+-	}
++	/* Error handling is not implemented. */
++	if (rc != X86EMUL_CONTINUE)
++		return X86EMUL_UNHANDLEABLE;
++
+ 	return rc;
+ }
+ 
+diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
+index 1a22de70f7f7..6e219e5c07d2 100644
+--- a/arch/x86/kvm/ioapic.c
++++ b/arch/x86/kvm/ioapic.c
+@@ -94,7 +94,7 @@ static unsigned long ioapic_read_indirect(struct kvm_ioapic *ioapic,
+ static void rtc_irq_eoi_tracking_reset(struct kvm_ioapic *ioapic)
+ {
+ 	ioapic->rtc_status.pending_eoi = 0;
+-	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPUS);
++	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID);
+ }
+ 
+ static void kvm_rtc_eoi_tracking_restore_all(struct kvm_ioapic *ioapic);
+diff --git a/arch/x86/kvm/ioapic.h b/arch/x86/kvm/ioapic.h
+index 7d2692a49657..1cc6e54436db 100644
+--- a/arch/x86/kvm/ioapic.h
++++ b/arch/x86/kvm/ioapic.h
+@@ -42,13 +42,13 @@ struct kvm_vcpu;
+ 
+ struct dest_map {
+ 	/* vcpu bitmap where IRQ has been sent */
+-	DECLARE_BITMAP(map, KVM_MAX_VCPUS);
++	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID);
+ 
+ 	/*
+ 	 * Vector sent to a given vcpu, only valid when
+ 	 * the vcpu's bit in map is set
+ 	 */
+-	u8 vectors[KVM_MAX_VCPUS];
++	u8 vectors[KVM_MAX_VCPU_ID];
+ };
+ 
+ 
+diff --git a/arch/x86/kvm/irq_comm.c b/arch/x86/kvm/irq_comm.c
+index 25810b144b58..e7a112ac51a8 100644
+--- a/arch/x86/kvm/irq_comm.c
++++ b/arch/x86/kvm/irq_comm.c
+@@ -41,6 +41,15 @@ static int kvm_set_pic_irq(struct kvm_kernel_irq_routing_entry *e,
+ 			   bool line_status)
+ {
+ 	struct kvm_pic *pic = pic_irqchip(kvm);
++
++	/*
++	 * XXX: rejecting pic routes when pic isn't in use would be better,
++	 * but the default routing table is installed while kvm->arch.vpic is
++	 * NULL and KVM_CREATE_IRQCHIP can race with KVM_IRQ_LINE.
++	 */
++	if (!pic)
++		return -1;
++
+ 	return kvm_pic_set_irq(pic, e->irqchip.pin, irq_source_id, level);
+ }
+ 
+@@ -49,6 +58,10 @@ static int kvm_set_ioapic_irq(struct kvm_kernel_irq_routing_entry *e,
+ 			      bool line_status)
+ {
+ 	struct kvm_ioapic *ioapic = kvm->arch.vioapic;
++
++	if (!ioapic)
++		return -1;
++
+ 	return kvm_ioapic_set_irq(ioapic, e->irqchip.pin, irq_source_id, level,
+ 				line_status);
+ }
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index b62c85229711..d2255e4f9589 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -138,7 +138,7 @@ static inline bool kvm_apic_map_get_logical_dest(struct kvm_apic_map *map,
+ 		*mask = dest_id & 0xff;
+ 		return true;
+ 	case KVM_APIC_MODE_XAPIC_CLUSTER:
+-		*cluster = map->xapic_cluster_map[dest_id >> 4];
++		*cluster = map->xapic_cluster_map[(dest_id >> 4) & 0xf];
+ 		*mask = dest_id & 0xf;
+ 		return true;
+ 	default:
+diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
+index 832b98f822be..a3a983fd4248 100644
+--- a/arch/x86/mm/extable.c
++++ b/arch/x86/mm/extable.c
+@@ -135,7 +135,12 @@ void __init early_fixup_exception(struct pt_regs *regs, int trapnr)
+ 	if (early_recursion_flag > 2)
+ 		goto halt_loop;
+ 
+-	if (regs->cs != __KERNEL_CS)
++	/*
++	 * Old CPUs leave the high bits of CS on the stack
++	 * undefined.  I'm not sure which CPUs do this, but at least
++	 * the 486 DX works this way.
++	 */
++	if ((regs->cs & 0xFFFF) != __KERNEL_CS)
+ 		goto fail;
+ 
+ 	/*
+diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
+index 865f46ea724f..c80765b211cf 100644
+--- a/crypto/asymmetric_keys/x509_cert_parser.c
++++ b/crypto/asymmetric_keys/x509_cert_parser.c
+@@ -133,7 +133,6 @@ struct x509_certificate *x509_cert_parse(const void *data, size_t datalen)
+ 	return cert;
+ 
+ error_decode:
+-	kfree(cert->pub->key);
+ 	kfree(ctx);
+ error_no_ctx:
+ 	x509_free_certificate(cert);
+diff --git a/drivers/dax/dax.c b/drivers/dax/dax.c
+index 29f600f2c447..ff64313770bd 100644
+--- a/drivers/dax/dax.c
++++ b/drivers/dax/dax.c
+@@ -323,8 +323,8 @@ static int check_vma(struct dax_dev *dax_dev, struct vm_area_struct *vma,
+ 	if (!dax_dev->alive)
+ 		return -ENXIO;
+ 
+-	/* prevent private / writable mappings from being established */
+-	if ((vma->vm_flags & (VM_NORESERVE|VM_SHARED|VM_WRITE)) == VM_WRITE) {
++	/* prevent private mappings from being established */
++	if ((vma->vm_flags & VM_SHARED) != VM_SHARED) {
+ 		dev_info(dev, "%s: %s: fail, attempted private mapping\n",
+ 				current->comm, func);
+ 		return -EINVAL;
+diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
+index 73ae849f5170..76dd42dd7088 100644
+--- a/drivers/dax/pmem.c
++++ b/drivers/dax/pmem.c
+@@ -77,7 +77,9 @@ static int dax_pmem_probe(struct device *dev)
+ 	nsio = to_nd_namespace_io(&ndns->dev);
+ 
+ 	/* parse the 'pfn' info block via ->rw_bytes */
+-	devm_nsio_enable(dev, nsio);
++	rc = devm_nsio_enable(dev, nsio);
++	if (rc)
++		return rc;
+ 	altmap = nvdimm_setup_pfn(nd_pfn, &res, &__altmap);
+ 	if (IS_ERR(altmap))
+ 		return PTR_ERR(altmap);
+diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
+index 58470f5ced04..8c53748a769d 100644
+--- a/drivers/iommu/dmar.c
++++ b/drivers/iommu/dmar.c
+@@ -338,7 +338,9 @@ static int dmar_pci_bus_notifier(struct notifier_block *nb,
+ 	struct pci_dev *pdev = to_pci_dev(data);
+ 	struct dmar_pci_notify_info *info;
+ 
+-	/* Only care about add/remove events for physical functions */
++	/* Only care about add/remove events for physical functions.
++	 * For VFs we actually do the lookup based on the corresponding
++	 * PF in device_to_iommu() anyway. */
+ 	if (pdev->is_virtfn)
+ 		return NOTIFY_DONE;
+ 	if (action != BUS_NOTIFY_ADD_DEVICE &&
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 1257b0b80296..7fb538708cec 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -892,7 +892,13 @@ static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devf
+ 		return NULL;
+ 
+ 	if (dev_is_pci(dev)) {
++		struct pci_dev *pf_pdev;
++
+ 		pdev = to_pci_dev(dev);
++		/* VFs aren't listed in scope tables; we need to look up
++		 * the PF instead to find the IOMMU. */
++		pf_pdev = pci_physfn(pdev);
++		dev = &pf_pdev->dev;
+ 		segment = pci_domain_nr(pdev->bus);
+ 	} else if (has_acpi_companion(dev))
+ 		dev = &ACPI_COMPANION(dev)->dev;
+@@ -905,6 +911,13 @@ static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devf
+ 		for_each_active_dev_scope(drhd->devices,
+ 					  drhd->devices_cnt, i, tmp) {
+ 			if (tmp == dev) {
++				/* For a VF use its original BDF# not that of the PF
++				 * which we used for the IOMMU lookup. Strictly speaking
++				 * we could do this for all PCI devices; we only need to
++				 * get the BDF# from the scope table for ACPI matches. */
++				if (pdev->is_virtfn)
++					goto got_pdev;
++
+ 				*bus = drhd->devices[i].bus;
+ 				*devfn = drhd->devices[i].devfn;
+ 				goto out;
+diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
+index 8ebb3530afa7..cb72e0011310 100644
+--- a/drivers/iommu/intel-svm.c
++++ b/drivers/iommu/intel-svm.c
+@@ -39,10 +39,18 @@ int intel_svm_alloc_pasid_tables(struct intel_iommu *iommu)
+ 	struct page *pages;
+ 	int order;
+ 
+-	order = ecap_pss(iommu->ecap) + 7 - PAGE_SHIFT;
+-	if (order < 0)
+-		order = 0;
+-
++	/* Start at 2 because it's defined as 2^(1+PSS) */
++	iommu->pasid_max = 2 << ecap_pss(iommu->ecap);
++
++	/* Eventually I'm promised we will get a multi-level PASID table
++	 * and it won't have to be physically contiguous. Until then,
++	 * limit the size because 8MiB contiguous allocations can be hard
++	 * to come by. The limit of 0x20000, which is 1MiB for each of
++	 * the PASID and PASID-state tables, is somewhat arbitrary. */
++	if (iommu->pasid_max > 0x20000)
++		iommu->pasid_max = 0x20000;
++
++	order = get_order(sizeof(struct pasid_entry) * iommu->pasid_max);
+ 	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
+ 	if (!pages) {
+ 		pr_warn("IOMMU: %s: Failed to allocate PASID table\n",
+@@ -53,6 +61,8 @@ int intel_svm_alloc_pasid_tables(struct intel_iommu *iommu)
+ 	pr_info("%s: Allocated order %d PASID table.\n", iommu->name, order);
+ 
+ 	if (ecap_dis(iommu->ecap)) {
++		/* Just making it explicit... */
++		BUILD_BUG_ON(sizeof(struct pasid_entry) != sizeof(struct pasid_state_entry));
+ 		pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
+ 		if (pages)
+ 			iommu->pasid_state_table = page_address(pages);
+@@ -68,11 +78,7 @@ int intel_svm_alloc_pasid_tables(struct intel_iommu *iommu)
+ 
+ int intel_svm_free_pasid_tables(struct intel_iommu *iommu)
+ {
+-	int order;
+-
+-	order = ecap_pss(iommu->ecap) + 7 - PAGE_SHIFT;
+-	if (order < 0)
+-		order = 0;
++	int order = get_order(sizeof(struct pasid_entry) * iommu->pasid_max);
+ 
+ 	if (iommu->pasid_table) {
+ 		free_pages((unsigned long)iommu->pasid_table, order);
+@@ -371,8 +377,8 @@ int intel_svm_bind_mm(struct device *dev, int *pasid, int flags, struct svm_dev_
+ 		}
+ 		svm->iommu = iommu;
+ 
+-		if (pasid_max > 2 << ecap_pss(iommu->ecap))
+-			pasid_max = 2 << ecap_pss(iommu->ecap);
++		if (pasid_max > iommu->pasid_max)
++			pasid_max = iommu->pasid_max;
+ 
+ 		/* Do not use PASID 0 in caching mode (virtualised IOMMU) */
+ 		ret = idr_alloc(&iommu->pasid_idr, svm,
+diff --git a/drivers/media/tuners/tuner-xc2028.c b/drivers/media/tuners/tuner-xc2028.c
+index 317ef63ee789..8d96a22647b3 100644
+--- a/drivers/media/tuners/tuner-xc2028.c
++++ b/drivers/media/tuners/tuner-xc2028.c
+@@ -281,6 +281,14 @@ static void free_firmware(struct xc2028_data *priv)
+ 	int i;
+ 	tuner_dbg("%s called\n", __func__);
+ 
++	/* free allocated f/w string */
++	if (priv->fname != firmware_name)
++		kfree(priv->fname);
++	priv->fname = NULL;
++
++	priv->state = XC2028_NO_FIRMWARE;
++	memset(&priv->cur_fw, 0, sizeof(priv->cur_fw));
++
+ 	if (!priv->firm)
+ 		return;
+ 
+@@ -291,9 +299,6 @@ static void free_firmware(struct xc2028_data *priv)
+ 
+ 	priv->firm = NULL;
+ 	priv->firm_size = 0;
+-	priv->state = XC2028_NO_FIRMWARE;
+-
+-	memset(&priv->cur_fw, 0, sizeof(priv->cur_fw));
+ }
+ 
+ static int load_all_firmwares(struct dvb_frontend *fe,
+@@ -884,9 +889,8 @@ read_not_reliable:
+ 	return 0;
+ 
+ fail:
+-	priv->state = XC2028_NO_FIRMWARE;
++	free_firmware(priv);
+ 
+-	memset(&priv->cur_fw, 0, sizeof(priv->cur_fw));
+ 	if (retry_count < 8) {
+ 		msleep(50);
+ 		retry_count++;
+@@ -1332,11 +1336,8 @@ static int xc2028_dvb_release(struct dvb_frontend *fe)
+ 	mutex_lock(&xc2028_list_mutex);
+ 
+ 	/* only perform final cleanup if this is the last instance */
+-	if (hybrid_tuner_report_instance_count(priv) == 1) {
++	if (hybrid_tuner_report_instance_count(priv) == 1)
+ 		free_firmware(priv);
+-		kfree(priv->ctrl.fname);
+-		priv->ctrl.fname = NULL;
+-	}
+ 
+ 	if (priv)
+ 		hybrid_tuner_release_state(priv);
+@@ -1399,19 +1400,8 @@ static int xc2028_set_config(struct dvb_frontend *fe, void *priv_cfg)
+ 
+ 	/*
+ 	 * Copy the config data.
+-	 * For the firmware name, keep a local copy of the string,
+-	 * in order to avoid troubles during device release.
+ 	 */
+-	kfree(priv->ctrl.fname);
+-	priv->ctrl.fname = NULL;
+ 	memcpy(&priv->ctrl, p, sizeof(priv->ctrl));
+-	if (p->fname) {
+-		priv->ctrl.fname = kstrdup(p->fname, GFP_KERNEL);
+-		if (priv->ctrl.fname == NULL) {
+-			rc = -ENOMEM;
+-			goto unlock;
+-		}
+-	}
+ 
+ 	/*
+ 	 * If firmware name changed, frees firmware. As free_firmware will
+@@ -1426,10 +1416,15 @@ static int xc2028_set_config(struct dvb_frontend *fe, void *priv_cfg)
+ 
+ 	if (priv->state == XC2028_NO_FIRMWARE) {
+ 		if (!firmware_name[0])
+-			priv->fname = priv->ctrl.fname;
++			priv->fname = kstrdup(p->fname, GFP_KERNEL);
+ 		else
+ 			priv->fname = firmware_name;
+ 
++		if (!priv->fname) {
++			rc = -ENOMEM;
++			goto unlock;
++		}
++
+ 		rc = request_firmware_nowait(THIS_MODULE, 1,
+ 					     priv->fname,
+ 					     priv->i2c_props.adap->dev.parent,
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 239be2fde242..2267601f0ac1 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -66,6 +66,20 @@ static u32 esdhc_readl_fixup(struct sdhci_host *host,
+ 			return ret;
+ 		}
+ 	}
++	/*
++	 * The DAT[3:0] line signal levels and the CMD line signal level are
++	 * not compatible with standard SDHC register. The line signal levels
++	 * DAT[7:0] are at bits 31:24 and the command line signal level is at
++	 * bit 23. All other bits are the same as in the standard SDHC
++	 * register.
++	 */
++	if (spec_reg == SDHCI_PRESENT_STATE) {
++		ret = value & 0x000fffff;
++		ret |= (value >> 4) & SDHCI_DATA_LVL_MASK;
++		ret |= (value << 1) & SDHCI_CMD_LVL;
++		return ret;
++	}
++
+ 	ret = value;
+ 	return ret;
+ }
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index 0411c9f36461..1b3bd1c7f4f6 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -73,6 +73,7 @@
+ #define  SDHCI_DATA_LVL_MASK	0x00F00000
+ #define   SDHCI_DATA_LVL_SHIFT	20
+ #define   SDHCI_DATA_0_LVL_MASK	0x00100000
++#define  SDHCI_CMD_LVL		0x01000000
+ 
+ #define SDHCI_HOST_CONTROL	0x28
+ #define  SDHCI_CTRL_LED		0x01
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 46c0f5ecd99d..58e60298a360 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -3894,6 +3894,11 @@ _scsih_temp_threshold_events(struct MPT3SAS_ADAPTER *ioc,
+ 	}
+ }
+ 
++static inline bool ata_12_16_cmd(struct scsi_cmnd *scmd)
++{
++	return (scmd->cmnd[0] == ATA_12 || scmd->cmnd[0] == ATA_16);
++}
++
+ /**
+  * _scsih_flush_running_cmds - completing outstanding commands.
+  * @ioc: per adapter object
+@@ -3915,6 +3920,9 @@ _scsih_flush_running_cmds(struct MPT3SAS_ADAPTER *ioc)
+ 		if (!scmd)
+ 			continue;
+ 		count++;
++		if (ata_12_16_cmd(scmd))
++			scsi_internal_device_unblock(scmd->device,
++							SDEV_RUNNING);
+ 		mpt3sas_base_free_smid(ioc, smid);
+ 		scsi_dma_unmap(scmd);
+ 		if (ioc->pci_error_recovery)
+@@ -4019,8 +4027,6 @@ _scsih_eedp_error_handling(struct scsi_cmnd *scmd, u16 ioc_status)
+ 	    SAM_STAT_CHECK_CONDITION;
+ }
+ 
+-
+-
+ /**
+  * scsih_qcmd - main scsi request entry point
+  * @scmd: pointer to scsi command object
+@@ -4047,6 +4053,13 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+ 	if (ioc->logging_level & MPT_DEBUG_SCSI)
+ 		scsi_print_command(scmd);
+ 
++	/*
++	 * Lock the device for any subsequent command until command is
++	 * done.
++	 */
++	if (ata_12_16_cmd(scmd))
++		scsi_internal_device_block(scmd->device);
++
+ 	sas_device_priv_data = scmd->device->hostdata;
+ 	if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
+ 		scmd->result = DID_NO_CONNECT << 16;
+@@ -4622,6 +4635,9 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
+ 	if (scmd == NULL)
+ 		return 1;
+ 
++	if (ata_12_16_cmd(scmd))
++		scsi_internal_device_unblock(scmd->device, SDEV_RUNNING);
++
+ 	mpi_request = mpt3sas_base_get_msg_frame(ioc, smid);
+ 
+ 	if (mpi_reply == NULL) {
+diff --git a/drivers/thermal/intel_powerclamp.c b/drivers/thermal/intel_powerclamp.c
+index 7a223074df3d..afada655f861 100644
+--- a/drivers/thermal/intel_powerclamp.c
++++ b/drivers/thermal/intel_powerclamp.c
+@@ -669,9 +669,16 @@ static struct thermal_cooling_device_ops powerclamp_cooling_ops = {
+ 	.set_cur_state = powerclamp_set_cur_state,
+ };
+ 
++static const struct x86_cpu_id __initconst intel_powerclamp_ids[] = {
++	{ X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, X86_FEATURE_MWAIT },
++	{}
++};
++MODULE_DEVICE_TABLE(x86cpu, intel_powerclamp_ids);
++
+ static int __init powerclamp_probe(void)
+ {
+-	if (!boot_cpu_has(X86_FEATURE_MWAIT)) {
++
++	if (!x86_match_cpu(intel_powerclamp_ids)) {
+ 		pr_err("CPU does not support MWAIT");
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 69426e644d17..3dbb4a21ab44 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -914,6 +914,7 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ 	if (!ci)
+ 		return -ENOMEM;
+ 
++	spin_lock_init(&ci->lock);
+ 	ci->dev = dev;
+ 	ci->platdata = dev_get_platdata(dev);
+ 	ci->imx28_write_fix = !!(ci->platdata->flags &
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index b93356834bb5..bced28fa1cbd 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -1895,8 +1895,6 @@ static int udc_start(struct ci_hdrc *ci)
+ 	struct usb_otg_caps *otg_caps = &ci->platdata->ci_otg_caps;
+ 	int retval = 0;
+ 
+-	spin_lock_init(&ci->lock);
+-
+ 	ci->gadget.ops          = &usb_gadget_ops;
+ 	ci->gadget.speed        = USB_SPEED_UNKNOWN;
+ 	ci->gadget.max_speed    = USB_SPEED_HIGH;
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index f61477bed3a8..243ac5ebe46a 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -131,6 +131,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */
+ 	{ USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */
+ 	{ USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
++	{ USB_DEVICE(0x10C4, 0x8962) }, /* Brim Brothers charging dock */
+ 	{ USB_DEVICE(0x10C4, 0x8977) },	/* CEL MeshWorks DevKit Device */
+ 	{ USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */
+ 	{ USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 0ff7f38d7800..6e9fc8bcc285 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1012,6 +1012,8 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(ICPDAS_VID, ICPDAS_I7561U_PID) },
+ 	{ USB_DEVICE(ICPDAS_VID, ICPDAS_I7563U_PID) },
+ 	{ USB_DEVICE(WICED_VID, WICED_USB20706V2_PID) },
++	{ USB_DEVICE(TI_VID, TI_CC3200_LAUNCHPAD_PID),
++		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ 	{ }					/* Terminating entry */
+ };
+ 
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 21011c0a4c64..48ee04c94a75 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -596,6 +596,12 @@
+ #define STK541_PID		0x2109 /* Zigbee Controller */
+ 
+ /*
++ * Texas Instruments
++ */
++#define TI_VID			0x0451
++#define TI_CC3200_LAUNCHPAD_PID	0xC32A /* SimpleLink Wi-Fi CC3200 LaunchPad */
++
++/*
+  * Blackfin gnICE JTAG
+  * http://docs.blackfin.uclinux.org/doku.php?id=hw:jtag:gnice
+  */
+diff --git a/drivers/usb/storage/transport.c b/drivers/usb/storage/transport.c
+index ffd086733421..1a59f335b063 100644
+--- a/drivers/usb/storage/transport.c
++++ b/drivers/usb/storage/transport.c
+@@ -954,10 +954,15 @@ int usb_stor_CB_transport(struct scsi_cmnd *srb, struct us_data *us)
+ 
+ 	/* COMMAND STAGE */
+ 	/* let's send the command via the control pipe */
++	/*
+	 * The command is sometimes (e.g. after scsi_eh_prep_cmnd) on the stack,
+	 * and the stack may be vmalloc'ed.  So no DMA for us.  Make a copy.
++	 */
++	memcpy(us->iobuf, srb->cmnd, srb->cmd_len);
+ 	result = usb_stor_ctrl_transfer(us, us->send_ctrl_pipe,
+ 				      US_CBI_ADSC, 
+ 				      USB_TYPE_CLASS | USB_RECIP_INTERFACE, 0, 
+-				      us->ifnum, srb->cmnd, srb->cmd_len);
++				      us->ifnum, us->iobuf, srb->cmd_len);
+ 
+ 	/* check the return code for the command */
+ 	usb_stor_dbg(us, "Call to usb_stor_ctrl_transfer() returned %d\n",
+diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
+index 52a28311e2a4..48efe62e1302 100644
+--- a/fs/nfs/callback.c
++++ b/fs/nfs/callback.c
+@@ -261,7 +261,7 @@ static int nfs_callback_up_net(int minorversion, struct svc_serv *serv,
+ 	}
+ 
+ 	ret = -EPROTONOSUPPORT;
+-	if (minorversion == 0)
++	if (!IS_ENABLED(CONFIG_NFS_V4_1) || minorversion == 0)
+ 		ret = nfs4_callback_up_net(serv, net);
+ 	else if (xprt->ops->bc_up)
+ 		ret = xprt->ops->bc_up(serv, net);
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 2d9b650047a5..d49e26c6cdc7 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -429,6 +429,7 @@ struct intel_iommu {
+ 	struct page_req_dsc *prq;
+ 	unsigned char prq_name[16];    /* Name for PRQ interrupt */
+ 	struct idr pasid_idr;
++	u32 pasid_max;
+ #endif
+ 	struct q_inval  *qi;            /* Queued invalidation info */
+ 	u32 *iommu_state; /* Store iommu states between suspend and resume.*/
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index fc9bb2225291..f8c5f5ec666e 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -7908,6 +7908,7 @@ restart:
+  * if <size> is not specified, the range is treated as a single address.
+  */
+ enum {
++	IF_ACT_NONE = -1,
+ 	IF_ACT_FILTER,
+ 	IF_ACT_START,
+ 	IF_ACT_STOP,
+@@ -7931,6 +7932,7 @@ static const match_table_t if_tokens = {
+ 	{ IF_SRC_KERNEL,	"%u/%u" },
+ 	{ IF_SRC_FILEADDR,	"%u@%s" },
+ 	{ IF_SRC_KERNELADDR,	"%u" },
++	{ IF_ACT_NONE,		NULL },
+ };
+ 
+ /*
+diff --git a/lib/mpi/mpi-pow.c b/lib/mpi/mpi-pow.c
+index 5464c8744ea9..e24388a863a7 100644
+--- a/lib/mpi/mpi-pow.c
++++ b/lib/mpi/mpi-pow.c
+@@ -64,8 +64,13 @@ int mpi_powm(MPI res, MPI base, MPI exp, MPI mod)
+ 	if (!esize) {
+ 		/* Exponent is zero, result is 1 mod MOD, i.e., 1 or 0
+ 		 * depending on if MOD equals 1.  */
+-		rp[0] = 1;
+ 		res->nlimbs = (msize == 1 && mod->d[0] == 1) ? 0 : 1;
++		if (res->nlimbs) {
++			if (mpi_resize(res, 1) < 0)
++				goto enomem;
++			rp = res->d;
++			rp[0] = 1;
++		}
+ 		res->sign = 0;
+ 		goto leave;
+ 	}
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index a2214c64ed3c..7401e996009a 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -3161,6 +3161,16 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
+ 	if (!order || order > PAGE_ALLOC_COSTLY_ORDER)
+ 		return false;
+ 
++#ifdef CONFIG_COMPACTION
++	/*
++	 * This is a gross workaround to compensate for the lack of a reliable
++	 * compaction operation. We cannot simply go OOM with the current state of
++	 * the compaction code because this can lead to premature OOM declaration.
++	 */
++	if (order <= PAGE_ALLOC_COSTLY_ORDER)
++		return true;
++#endif
++
+ 	/*
+ 	 * There are setups with compaction disabled which would prefer to loop
+ 	 * inside the allocator rather than hit the oom killer prematurely.
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index 8af9d25ff988..436a7537e6a9 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -77,7 +77,7 @@
+ 		     (CAN_EFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG) : \
+ 		     (CAN_SFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG))
+ 
+-#define CAN_BCM_VERSION "20160617"
++#define CAN_BCM_VERSION "20161123"
+ 
+ MODULE_DESCRIPTION("PF_CAN broadcast manager protocol");
+ MODULE_LICENSE("Dual BSD/GPL");
+@@ -109,8 +109,9 @@ struct bcm_op {
+ 	u32 count;
+ 	u32 nframes;
+ 	u32 currframe;
+-	struct canfd_frame *frames;
+-	struct canfd_frame *last_frames;
++	/* void pointers to arrays of struct can[fd]_frame */
++	void *frames;
++	void *last_frames;
+ 	struct canfd_frame sframe;
+ 	struct canfd_frame last_sframe;
+ 	struct sock *sk;
+@@ -681,7 +682,7 @@ static void bcm_rx_handler(struct sk_buff *skb, void *data)
+ 
+ 	if (op->flags & RX_FILTER_ID) {
+ 		/* the easiest case */
+-		bcm_rx_update_and_send(op, &op->last_frames[0], rxframe);
++		bcm_rx_update_and_send(op, op->last_frames, rxframe);
+ 		goto rx_starttimer;
+ 	}
+ 
+@@ -1068,7 +1069,7 @@ static int bcm_rx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ 
+ 		if (msg_head->nframes) {
+ 			/* update CAN frames content */
+-			err = memcpy_from_msg((u8 *)op->frames, msg,
++			err = memcpy_from_msg(op->frames, msg,
+ 					      msg_head->nframes * op->cfsiz);
+ 			if (err < 0)
+ 				return err;
+@@ -1118,7 +1119,7 @@ static int bcm_rx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ 		}
+ 
+ 		if (msg_head->nframes) {
+-			err = memcpy_from_msg((u8 *)op->frames, msg,
++			err = memcpy_from_msg(op->frames, msg,
+ 					      msg_head->nframes * op->cfsiz);
+ 			if (err < 0) {
+ 				if (op->frames != &op->sframe)
+@@ -1163,6 +1164,7 @@ static int bcm_rx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ 	/* check flags */
+ 
+ 	if (op->flags & RX_RTR_FRAME) {
++		struct canfd_frame *frame0 = op->frames;
+ 
+ 		/* no timers in RTR-mode */
+ 		hrtimer_cancel(&op->thrtimer);
+@@ -1174,8 +1176,8 @@ static int bcm_rx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ 		 * prevent a full-load-loopback-test ... ;-]
+ 		 */
+ 		if ((op->flags & TX_CP_CAN_ID) ||
+-		    (op->frames[0].can_id == op->can_id))
+-			op->frames[0].can_id = op->can_id & ~CAN_RTR_FLAG;
++		    (frame0->can_id == op->can_id))
++			frame0->can_id = op->can_id & ~CAN_RTR_FLAG;
+ 
+ 	} else {
+ 		if (op->flags & SETTIMER) {
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 5550a86f7264..396aac7e6e79 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -945,4 +945,4 @@ static int __init init_default_flow_dissectors(void)
+ 	return 0;
+ }
+ 
+-late_initcall_sync(init_default_flow_dissectors);
++core_initcall(init_default_flow_dissectors);
+diff --git a/net/wireless/core.h b/net/wireless/core.h
+index eee91443924d..66f2a1145d7c 100644
+--- a/net/wireless/core.h
++++ b/net/wireless/core.h
+@@ -71,6 +71,7 @@ struct cfg80211_registered_device {
+ 	struct list_head bss_list;
+ 	struct rb_root bss_tree;
+ 	u32 bss_generation;
++	u32 bss_entries;
+ 	struct cfg80211_scan_request *scan_req; /* protected by RTNL */
+ 	struct sk_buff *scan_msg;
+ 	struct cfg80211_sched_scan_request __rcu *sched_scan_req;
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 0358e12be54b..438143a3827d 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -57,6 +57,19 @@
+  * also linked into the probe response struct.
+  */
+ 
++/*
++ * Limit the number of BSS entries stored in mac80211. Each one is
++ * a bit over 4k at most, so this limits to roughly 4-5M of memory.
++ * If somebody wants to really attack this though, they'd likely
++ * use small beacons, and only one type of frame, limiting each of
++ * the entries to a much smaller size (in order to generate more
++ * entries in total, so overhead is bigger.)
++ */
++static int bss_entries_limit = 1000;
++module_param(bss_entries_limit, int, 0644);
++MODULE_PARM_DESC(bss_entries_limit,
++                 "limit to number of scan BSS entries (per wiphy, default 1000)");
++
+ #define IEEE80211_SCAN_RESULT_EXPIRE	(30 * HZ)
+ 
+ static void bss_free(struct cfg80211_internal_bss *bss)
+@@ -137,6 +150,10 @@ static bool __cfg80211_unlink_bss(struct cfg80211_registered_device *rdev,
+ 
+ 	list_del_init(&bss->list);
+ 	rb_erase(&bss->rbn, &rdev->bss_tree);
++	rdev->bss_entries--;
++	WARN_ONCE((rdev->bss_entries == 0) ^ list_empty(&rdev->bss_list),
++		  "rdev bss entries[%d]/list[empty:%d] corruption\n",
++		  rdev->bss_entries, list_empty(&rdev->bss_list));
+ 	bss_ref_put(rdev, bss);
+ 	return true;
+ }
+@@ -163,6 +180,40 @@ static void __cfg80211_bss_expire(struct cfg80211_registered_device *rdev,
+ 		rdev->bss_generation++;
+ }
+ 
++static bool cfg80211_bss_expire_oldest(struct cfg80211_registered_device *rdev)
++{
++	struct cfg80211_internal_bss *bss, *oldest = NULL;
++	bool ret;
++
++	lockdep_assert_held(&rdev->bss_lock);
++
++	list_for_each_entry(bss, &rdev->bss_list, list) {
++		if (atomic_read(&bss->hold))
++			continue;
++
++		if (!list_empty(&bss->hidden_list) &&
++		    !bss->pub.hidden_beacon_bss)
++			continue;
++
++		if (oldest && time_before(oldest->ts, bss->ts))
++			continue;
++		oldest = bss;
++	}
++
++	if (WARN_ON(!oldest))
++		return false;
++
++	/*
++	 * The callers make sure to increase rdev->bss_generation if anything
++	 * gets removed (and a new entry added), so there's no need to also do
++	 * it here.
++	 */
++
++	ret = __cfg80211_unlink_bss(rdev, oldest);
++	WARN_ON(!ret);
++	return ret;
++}
++
+ void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev,
+ 			   bool send_message)
+ {
+@@ -693,6 +744,7 @@ static bool cfg80211_combine_bsses(struct cfg80211_registered_device *rdev,
+ 	const u8 *ie;
+ 	int i, ssidlen;
+ 	u8 fold = 0;
++	u32 n_entries = 0;
+ 
+ 	ies = rcu_access_pointer(new->pub.beacon_ies);
+ 	if (WARN_ON(!ies))
+@@ -716,6 +768,12 @@ static bool cfg80211_combine_bsses(struct cfg80211_registered_device *rdev,
+ 	/* This is the bad part ... */
+ 
+ 	list_for_each_entry(bss, &rdev->bss_list, list) {
++		/*
++		 * we're iterating all the entries anyway, so take the
++		 * opportunity to validate the list length accounting
++		 */
++		n_entries++;
++
+ 		if (!ether_addr_equal(bss->pub.bssid, new->pub.bssid))
+ 			continue;
+ 		if (bss->pub.channel != new->pub.channel)
+@@ -744,6 +802,10 @@ static bool cfg80211_combine_bsses(struct cfg80211_registered_device *rdev,
+ 				   new->pub.beacon_ies);
+ 	}
+ 
++	WARN_ONCE(n_entries != rdev->bss_entries,
++		  "rdev bss entries[%d]/list[len:%d] corruption\n",
++		  rdev->bss_entries, n_entries);
++
+ 	return true;
+ }
+ 
+@@ -898,7 +960,14 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 			}
+ 		}
+ 
++		if (rdev->bss_entries >= bss_entries_limit &&
++		    !cfg80211_bss_expire_oldest(rdev)) {
++			kfree(new);
++			goto drop;
++		}
++
+ 		list_add_tail(&new->list, &rdev->bss_list);
++		rdev->bss_entries++;
+ 		rb_insert_bss(rdev, new);
+ 		found = new;
+ 	}
+diff --git a/security/apparmor/domain.c b/security/apparmor/domain.c
+index fc3036b34e51..a4d90aa1045a 100644
+--- a/security/apparmor/domain.c
++++ b/security/apparmor/domain.c
+@@ -621,8 +621,8 @@ int aa_change_hat(const char *hats[], int count, u64 token, bool permtest)
+ 	/* released below */
+ 	cred = get_current_cred();
+ 	cxt = cred_cxt(cred);
+-	profile = aa_cred_profile(cred);
+-	previous_profile = cxt->previous;
++	profile = aa_get_newest_profile(aa_cred_profile(cred));
++	previous_profile = aa_get_newest_profile(cxt->previous);
+ 
+ 	if (unconfined(profile)) {
+ 		info = "unconfined";
+@@ -718,6 +718,8 @@ audit:
+ out:
+ 	aa_put_profile(hat);
+ 	kfree(name);
++	aa_put_profile(profile);
++	aa_put_profile(previous_profile);
+ 	put_cred(cred);
+ 
+ 	return error;
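
A usage note on the net/wireless/scan.c hunks above: bss_entries_limit is declared as an ordinary module parameter with mode 0644, so, assuming scan.c is built into the cfg80211 module as usual, the default cap of 1000 BSS entries per wiphy could be changed with something like cfg80211.bss_entries_limit=2000 on the kernel command line, or adjusted at runtime through /sys/module/cfg80211/parameters/bss_entries_limit. The exact paths and values here are illustrative rather than part of the patch.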


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-12-07 23:26 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-12-07 23:26 UTC (permalink / raw
  To: gentoo-commits

commit:     04658ca7a302b81aa1b6c44e4bda9850eff15279
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec  7 23:26:04 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec  7 23:26:04 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=04658ca7

packet: fix race condition in packet_set_ring. CVE-2016-8655. Bug #601926.
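
For context: the race fixed here is between packet_setsockopt(PACKET_VERSION) and packet_set_ring(). Before this change the PACKET_VERSION handler could flip po->tp_version without holding the socket lock while a ring was being set up or torn down, so packet_set_ring() could run against a version that changed underneath it, which is what made the use-after-free of CVE-2016-8655 possible; the fix below takes lock_sock() in both paths. The snippet that follows is only a minimal illustration of the user-space call pattern involved (two setsockopt() calls on one AF_PACKET socket, racing from two threads); the values are arbitrary and it is not the actual reproducer.

#include <arpa/inet.h>          /* htons() */
#include <linux/if_ether.h>     /* ETH_P_ALL */
#include <linux/if_packet.h>    /* PACKET_VERSION, PACKET_RX_RING, struct tpacket_req */
#include <sys/socket.h>

int main(void)
{
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL)); /* needs CAP_NET_RAW */
	int version = TPACKET_V1;
	struct tpacket_req req = {
		.tp_block_size = 4096,
		.tp_block_nr   = 2,
		.tp_frame_size = 2048,
		.tp_frame_nr   = 4,	/* blocks * (block_size / frame_size) */
	};

	/*
	 * On an unpatched kernel these two calls, issued concurrently from two
	 * threads on the same socket, could interleave so that tp_version
	 * changed while packet_set_ring() was still using it.  With the patch
	 * applied both paths serialize on lock_sock(sk).
	 */
	setsockopt(fd, SOL_PACKET, PACKET_VERSION, &version, sizeof(version));
	setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));
	return 0;
}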

 0000_README                                      |  4 ++
 1520_fix-race-condition-in-packet-set-ring.patch | 62 ++++++++++++++++++++++++
 2 files changed, 66 insertions(+)

diff --git a/0000_README b/0000_README
index cd56013..af402d3 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1520_fix-race-condition-in-packet-set-ring.patch
+From:   https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=84ac7260236a49c79eede91617700174c2c19b0c
+Desc:   packet: fix race condition in packet_set_ring. CVE-2016-8655. Bug #601926.
+
 Patch:  2900_dev-root-proc-mount-fix.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=438380
 Desc:   Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.

diff --git a/1520_fix-race-condition-in-packet-set-ring.patch b/1520_fix-race-condition-in-packet-set-ring.patch
new file mode 100644
index 0000000..d85527f
--- /dev/null
+++ b/1520_fix-race-condition-in-packet-set-ring.patch
@@ -0,0 +1,62 @@
+--- a/net/packet/af_packet.c	2016-12-07 18:10:25.785812861 -0500
++++ b/net/packet/af_packet.c	2016-12-07 18:18:45.597933525 -0500
+@@ -3648,19 +3648,25 @@ packet_setsockopt(struct socket *sock, i
+ 
+ 		if (optlen != sizeof(val))
+ 			return -EINVAL;
+-		if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)
+-			return -EBUSY;
+ 		if (copy_from_user(&val, optval, sizeof(val)))
+ 			return -EFAULT;
+ 		switch (val) {
+ 		case TPACKET_V1:
+ 		case TPACKET_V2:
+ 		case TPACKET_V3:
+-			po->tp_version = val;
+-			return 0;
++			break;
+ 		default:
+ 			return -EINVAL;
+ 		}
++		lock_sock(sk);
++		if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {
++			ret = -EBUSY;
++		} else {
++			po->tp_version = val;
++			ret = 0;
++		}
++		release_sock(sk);
++		return ret;
+ 	}
+ 	case PACKET_RESERVE:
+ 	{
+@@ -4164,6 +4170,7 @@ static int packet_set_ring(struct sock *
+ 	/* Added to avoid minimal code churn */
+ 	struct tpacket_req *req = &req_u->req;
+ 
++	lock_sock(sk);
+ 	/* Opening a Tx-ring is NOT supported in TPACKET_V3 */
+ 	if (!closing && tx_ring && (po->tp_version > TPACKET_V2)) {
+ 		net_warn_ratelimited("Tx-ring is not supported.\n");
+@@ -4245,8 +4252,6 @@ static int packet_set_ring(struct sock *
+ 			goto out;
+ 	}
+ 
+-	lock_sock(sk);
+-
+ 	/* Detach socket from network */
+ 	spin_lock(&po->bind_lock);
+ 	was_running = po->running;
+@@ -4294,11 +4299,11 @@ static int packet_set_ring(struct sock *
+ 		if (!tx_ring)
+ 			prb_shutdown_retire_blk_timer(po, rb_queue);
+ 	}
+-	release_sock(sk);
+ 
+ 	if (pg_vec)
+ 		free_pg_vec(pg_vec, order, req->tp_block_nr);
+ out:
++	release_sock(sk);
+ 	return err;
+ }
+ 


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-12-09  7:25 Alice Ferrazzi
  0 siblings, 0 replies; 26+ messages in thread
From: Alice Ferrazzi @ 2016-12-09  7:25 UTC (permalink / raw
  To: gentoo-commits

commit:     ab0207b2570dea0fc1ca4d158a531e50aab2b7bb
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  9 07:27:50 2016 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Dec  9 07:27:50 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ab0207b2

Linux patch 4.8.13

 0000_README             |    4 +
 1012_linux-4.8.13.patch | 1063 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1067 insertions(+)

diff --git a/0000_README b/0000_README
index af402d3..f162b9e 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-4.8.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.12
 
+Patch:  1012_linux-4.8.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-4.8.13.patch b/1012_linux-4.8.13.patch
new file mode 100644
index 0000000..63e8dae
--- /dev/null
+++ b/1012_linux-4.8.13.patch
@@ -0,0 +1,1063 @@
+diff --git a/Makefile b/Makefile
+index 7b0c92f53169..b38abe9adef8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arc/include/asm/delay.h b/arch/arc/include/asm/delay.h
+index 08e7e2a16ac1..a36e8601114d 100644
+--- a/arch/arc/include/asm/delay.h
++++ b/arch/arc/include/asm/delay.h
+@@ -22,10 +22,11 @@
+ static inline void __delay(unsigned long loops)
+ {
+ 	__asm__ __volatile__(
+-	"	lp  1f	\n"
+-	"	nop	\n"
+-	"1:		\n"
+-	: "+l"(loops));
++	"	mov lp_count, %0	\n"
++	"	lp  1f			\n"
++	"	nop			\n"
++	"1:				\n"
++	: : "r"(loops));
+ }
+ 
+ extern void __bad_udelay(void);
+diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
+index 89eeb3720051..e94ca72b974e 100644
+--- a/arch/arc/include/asm/pgtable.h
++++ b/arch/arc/include/asm/pgtable.h
+@@ -280,7 +280,7 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
+ 
+ #define pte_page(pte)		pfn_to_page(pte_pfn(pte))
+ #define mk_pte(page, prot)	pfn_pte(page_to_pfn(page), prot)
+-#define pfn_pte(pfn, prot)	__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
++#define pfn_pte(pfn, prot)	__pte(__pfn_to_phys(pfn) | pgprot_val(prot))
+ 
+ /* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/
+ #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
+diff --git a/arch/arm64/boot/dts/arm/juno-r1.dts b/arch/arm64/boot/dts/arm/juno-r1.dts
+index 123a58b29cbd..f0b857d6d73c 100644
+--- a/arch/arm64/boot/dts/arm/juno-r1.dts
++++ b/arch/arm64/boot/dts/arm/juno-r1.dts
+@@ -76,7 +76,7 @@
+ 				compatible = "arm,idle-state";
+ 				arm,psci-suspend-param = <0x1010000>;
+ 				local-timer-stop;
+-				entry-latency-us = <300>;
++				entry-latency-us = <400>;
+ 				exit-latency-us = <1200>;
+ 				min-residency-us = <2500>;
+ 			};
+diff --git a/arch/arm64/boot/dts/arm/juno-r2.dts b/arch/arm64/boot/dts/arm/juno-r2.dts
+index 007be826efce..26aaa6a7670f 100644
+--- a/arch/arm64/boot/dts/arm/juno-r2.dts
++++ b/arch/arm64/boot/dts/arm/juno-r2.dts
+@@ -76,7 +76,7 @@
+ 				compatible = "arm,idle-state";
+ 				arm,psci-suspend-param = <0x1010000>;
+ 				local-timer-stop;
+-				entry-latency-us = <300>;
++				entry-latency-us = <400>;
+ 				exit-latency-us = <1200>;
+ 				min-residency-us = <2500>;
+ 			};
+diff --git a/arch/arm64/boot/dts/arm/juno.dts b/arch/arm64/boot/dts/arm/juno.dts
+index a7270eff6939..6e154d948a80 100644
+--- a/arch/arm64/boot/dts/arm/juno.dts
++++ b/arch/arm64/boot/dts/arm/juno.dts
+@@ -76,7 +76,7 @@
+ 				compatible = "arm,idle-state";
+ 				arm,psci-suspend-param = <0x1010000>;
+ 				local-timer-stop;
+-				entry-latency-us = <300>;
++				entry-latency-us = <400>;
+ 				exit-latency-us = <1200>;
+ 				min-residency-us = <2500>;
+ 			};
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index 7099f26e3702..b96346b943b7 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -90,7 +90,7 @@ struct arm64_cpu_capabilities {
+ 	u16 capability;
+ 	int def_scope;			/* default scope */
+ 	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
+-	void (*enable)(void *);		/* Called on all active CPUs */
++	int (*enable)(void *);		/* Called on all active CPUs */
+ 	union {
+ 		struct {	/* To be used for erratum handling only */
+ 			u32 midr_model;
+diff --git a/arch/arm64/include/asm/exec.h b/arch/arm64/include/asm/exec.h
+index db0563c23482..f7865dd9d868 100644
+--- a/arch/arm64/include/asm/exec.h
++++ b/arch/arm64/include/asm/exec.h
+@@ -18,6 +18,9 @@
+ #ifndef __ASM_EXEC_H
+ #define __ASM_EXEC_H
+ 
++#include <linux/sched.h>
++
+ extern unsigned long arch_align_stack(unsigned long sp);
++void uao_thread_switch(struct task_struct *next);
+ 
+ #endif	/* __ASM_EXEC_H */
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index ace0a96e7d6e..3be0ab013e35 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -190,8 +190,8 @@ static inline void spin_lock_prefetch(const void *ptr)
+ 
+ #endif
+ 
+-void cpu_enable_pan(void *__unused);
+-void cpu_enable_uao(void *__unused);
+-void cpu_enable_cache_maint_trap(void *__unused);
++int cpu_enable_pan(void *__unused);
++int cpu_enable_uao(void *__unused);
++int cpu_enable_cache_maint_trap(void *__unused);
+ 
+ #endif /* __ASM_PROCESSOR_H */
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 62272eac1352..94a0330f7ec9 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -19,7 +19,9 @@
+ #define pr_fmt(fmt) "CPU features: " fmt
+ 
+ #include <linux/bsearch.h>
++#include <linux/cpumask.h>
+ #include <linux/sort.h>
++#include <linux/stop_machine.h>
+ #include <linux/types.h>
+ #include <asm/cpu.h>
+ #include <asm/cpufeature.h>
+@@ -936,7 +938,13 @@ void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
+ {
+ 	for (; caps->matches; caps++)
+ 		if (caps->enable && cpus_have_cap(caps->capability))
+-			on_each_cpu(caps->enable, NULL, true);
++			/*
++			 * Use stop_machine() as it schedules the work allowing
++			 * us to modify PSTATE, instead of on_each_cpu() which
++			 * uses an IPI, giving us a PSTATE that disappears when
++			 * we return.
++			 */
++			stop_machine(caps->enable, NULL, cpu_online_mask);
+ }
+ 
+ /*
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 6cd2612236dc..9cc8667212a6 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -49,6 +49,7 @@
+ #include <asm/alternative.h>
+ #include <asm/compat.h>
+ #include <asm/cacheflush.h>
++#include <asm/exec.h>
+ #include <asm/fpsimd.h>
+ #include <asm/mmu_context.h>
+ #include <asm/processor.h>
+@@ -303,7 +304,7 @@ static void tls_thread_switch(struct task_struct *next)
+ }
+ 
+ /* Restore the UAO state depending on next's addr_limit */
+-static void uao_thread_switch(struct task_struct *next)
++void uao_thread_switch(struct task_struct *next)
+ {
+ 	if (IS_ENABLED(CONFIG_ARM64_UAO)) {
+ 		if (task_thread_info(next)->addr_limit == KERNEL_DS)
+diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
+index b616e365cee3..23ddf5500b09 100644
+--- a/arch/arm64/kernel/suspend.c
++++ b/arch/arm64/kernel/suspend.c
+@@ -1,8 +1,11 @@
+ #include <linux/ftrace.h>
+ #include <linux/percpu.h>
+ #include <linux/slab.h>
++#include <asm/alternative.h>
+ #include <asm/cacheflush.h>
++#include <asm/cpufeature.h>
+ #include <asm/debug-monitors.h>
++#include <asm/exec.h>
+ #include <asm/pgtable.h>
+ #include <asm/memory.h>
+ #include <asm/mmu_context.h>
+@@ -48,6 +51,14 @@ void notrace __cpu_suspend_exit(void)
+ 	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
+ 
+ 	/*
++	 * PSTATE was not saved over suspend/resume, re-enable any detected
++	 * features that might not have been set correctly.
++	 */
++	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
++			CONFIG_ARM64_PAN));
++	uao_thread_switch(current);
++
++	/*
+ 	 * Restore HW breakpoint registers to sane values
+ 	 * before debug exceptions are possibly reenabled
+ 	 * through local_dbg_restore.
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 771a01a7fbce..9595d3d9c3db 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -428,9 +428,10 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
+ 	force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
+ }
+ 
+-void cpu_enable_cache_maint_trap(void *__unused)
++int cpu_enable_cache_maint_trap(void *__unused)
+ {
+ 	config_sctlr_el1(SCTLR_EL1_UCI, 0);
++	return 0;
+ }
+ 
+ #define __user_cache_maint(insn, address, res)			\
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 05d2bd776c69..67506c3c5476 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -29,7 +29,9 @@
+ #include <linux/sched.h>
+ #include <linux/highmem.h>
+ #include <linux/perf_event.h>
++#include <linux/preempt.h>
+ 
++#include <asm/bug.h>
+ #include <asm/cpufeature.h>
+ #include <asm/exception.h>
+ #include <asm/debug-monitors.h>
+@@ -671,9 +673,17 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
+ NOKPROBE_SYMBOL(do_debug_exception);
+ 
+ #ifdef CONFIG_ARM64_PAN
+-void cpu_enable_pan(void *__unused)
++int cpu_enable_pan(void *__unused)
+ {
++	/*
++	 * We modify PSTATE. This won't work from irq context as the PSTATE
++	 * is discarded once we return from the exception.
++	 */
++	WARN_ON_ONCE(in_interrupt());
++
+ 	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
++	asm(SET_PSTATE_PAN(1));
++	return 0;
+ }
+ #endif /* CONFIG_ARM64_PAN */
+ 
+@@ -684,8 +694,9 @@ void cpu_enable_pan(void *__unused)
+  * We need to enable the feature at runtime (instead of adding it to
+  * PSR_MODE_EL1h) as the feature may not be implemented by the cpu.
+  */
+-void cpu_enable_uao(void *__unused)
++int cpu_enable_uao(void *__unused)
+ {
+ 	asm(SET_PSTATE_UAO(1));
++	return 0;
+ }
+ #endif /* CONFIG_ARM64_UAO */
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index d0efb5cb1b00..a4e070a51584 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2344,7 +2344,7 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
+ 		frame.next_frame     = 0;
+ 		frame.return_address = 0;
+ 
+-		if (!access_ok(VERIFY_READ, fp, 8))
++		if (!valid_user_frame(fp, sizeof(frame)))
+ 			break;
+ 
+ 		bytes = __copy_from_user_nmi(&frame.next_frame, fp, 4);
+@@ -2354,9 +2354,6 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
+ 		if (bytes != 0)
+ 			break;
+ 
+-		if (!valid_user_frame(fp, sizeof(frame)))
+-			break;
+-
+ 		perf_callchain_store(entry, cs_base + frame.return_address);
+ 		fp = compat_ptr(ss_base + frame.next_frame);
+ 	}
+@@ -2405,7 +2402,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
+ 		frame.next_frame	     = NULL;
+ 		frame.return_address = 0;
+ 
+-		if (!access_ok(VERIFY_READ, fp, sizeof(*fp) * 2))
++		if (!valid_user_frame(fp, sizeof(frame)))
+ 			break;
+ 
+ 		bytes = __copy_from_user_nmi(&frame.next_frame, fp, sizeof(*fp));
+@@ -2415,9 +2412,6 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
+ 		if (bytes != 0)
+ 			break;
+ 
+-		if (!valid_user_frame(fp, sizeof(frame)))
+-			break;
+-
+ 		perf_callchain_store(entry, frame.return_address);
+ 		fp = (void __user *)frame.next_frame;
+ 	}
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index e207b33e4ce9..1e007a93bdcc 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -1088,7 +1088,7 @@ static void ata_gen_passthru_sense(struct ata_queued_cmd *qc)
+ 		desc[1] = tf->command; /* status */
+ 		desc[2] = tf->device;
+ 		desc[3] = tf->nsect;
+-		desc[0] = 0;
++		desc[7] = 0;
+ 		if (tf->flags & ATA_TFLAG_LBA48)  {
+ 			desc[8] |= 0x80;
+ 			if (tf->hob_nsect)
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 04365b17ee67..5163c8f918cb 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1403,7 +1403,8 @@ static ssize_t hot_remove_store(struct class *class,
+ 	zram = idr_find(&zram_index_idr, dev_id);
+ 	if (zram) {
+ 		ret = zram_remove(zram);
+-		idr_remove(&zram_index_idr, dev_id);
++		if (!ret)
++			idr_remove(&zram_index_idr, dev_id);
+ 	} else {
+ 		ret = -ENODEV;
+ 	}
+diff --git a/drivers/clk/sunxi/clk-sunxi.c b/drivers/clk/sunxi/clk-sunxi.c
+index 838b22aa8b67..f2c9274b8bd5 100644
+--- a/drivers/clk/sunxi/clk-sunxi.c
++++ b/drivers/clk/sunxi/clk-sunxi.c
+@@ -373,7 +373,7 @@ static void sun4i_get_apb1_factors(struct factors_request *req)
+ 	else
+ 		calcp = 3;
+ 
+-	calcm = (req->parent_rate >> calcp) - 1;
++	calcm = (div >> calcp) - 1;
+ 
+ 	req->rate = (req->parent_rate >> calcp) / (calcm + 1);
+ 	req->m = calcm;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
+index 10b5ddf2c588..1ed085f01a49 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
+@@ -33,6 +33,7 @@ struct amdgpu_atpx {
+ 
+ static struct amdgpu_atpx_priv {
+ 	bool atpx_detected;
++	bool bridge_pm_usable;
+ 	/* handle for device - and atpx */
+ 	acpi_handle dhandle;
+ 	acpi_handle other_handle;
+@@ -200,7 +201,11 @@ static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx)
+ 	atpx->is_hybrid = false;
+ 	if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) {
+ 		printk("ATPX Hybrid Graphics\n");
+-		atpx->functions.power_cntl = false;
++		/*
++		 * Disable legacy PM methods only when pcie port PM is usable,
++		 * otherwise the device might fail to power off or power on.
++		 */
++		atpx->functions.power_cntl = !amdgpu_atpx_priv.bridge_pm_usable;
+ 		atpx->is_hybrid = true;
+ 	}
+ 
+@@ -546,17 +551,25 @@ static bool amdgpu_atpx_detect(void)
+ 	struct pci_dev *pdev = NULL;
+ 	bool has_atpx = false;
+ 	int vga_count = 0;
++	bool d3_supported = false;
++	struct pci_dev *parent_pdev;
+ 
+ 	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
+ 		vga_count++;
+ 
+ 		has_atpx |= (amdgpu_atpx_pci_probe_handle(pdev) == true);
++
++		parent_pdev = pci_upstream_bridge(pdev);
++		d3_supported |= parent_pdev && parent_pdev->bridge_d3;
+ 	}
+ 
+ 	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) {
+ 		vga_count++;
+ 
+ 		has_atpx |= (amdgpu_atpx_pci_probe_handle(pdev) == true);
++
++		parent_pdev = pci_upstream_bridge(pdev);
++		d3_supported |= parent_pdev && parent_pdev->bridge_d3;
+ 	}
+ 
+ 	if (has_atpx && vga_count == 2) {
+@@ -564,6 +577,7 @@ static bool amdgpu_atpx_detect(void)
+ 		printk(KERN_INFO "vga_switcheroo: detected switching method %s handle\n",
+ 		       acpi_method_name);
+ 		amdgpu_atpx_priv.atpx_detected = true;
++		amdgpu_atpx_priv.bridge_pm_usable = d3_supported;
+ 		amdgpu_atpx_init();
+ 		return true;
+ 	}
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index a77ce9983f69..b8e3854a1f20 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -2540,7 +2540,7 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
+ 			page = shmem_read_mapping_page(mapping, i);
+ 			if (IS_ERR(page)) {
+ 				ret = PTR_ERR(page);
+-				goto err_pages;
++				goto err_sg;
+ 			}
+ 		}
+ #ifdef CONFIG_SWIOTLB
+@@ -2583,8 +2583,9 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
+ 
+ 	return 0;
+ 
+-err_pages:
++err_sg:
+ 	sg_mark_end(sg);
++err_pages:
+ 	for_each_sgt_page(page, sgt_iter, st)
+ 		put_page(page);
+ 	sg_free_table(st);
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index e26f88965c58..35d385d70d8e 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -11791,7 +11791,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
+ 	intel_crtc->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
+ 	if (__i915_reset_in_progress_or_wedged(intel_crtc->reset_counter)) {
+ 		ret = -EIO;
+-		goto cleanup;
++		goto unlock;
+ 	}
+ 
+ 	atomic_inc(&intel_crtc->unpin_work_count);
+@@ -11877,6 +11877,7 @@ cleanup_pending:
+ 	if (!IS_ERR_OR_NULL(request))
+ 		i915_add_request_no_flush(request);
+ 	atomic_dec(&intel_crtc->unpin_work_count);
++unlock:
+ 	mutex_unlock(&dev->struct_mutex);
+ cleanup:
+ 	crtc->primary->fb = old_fb;
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+index 8f62671fcfbf..54acfccee7eb 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+@@ -249,13 +249,6 @@ static int mtk_disp_ovl_probe(struct platform_device *pdev)
+ 	if (irq < 0)
+ 		return irq;
+ 
+-	ret = devm_request_irq(dev, irq, mtk_disp_ovl_irq_handler,
+-			       IRQF_TRIGGER_NONE, dev_name(dev), priv);
+-	if (ret < 0) {
+-		dev_err(dev, "Failed to request irq %d: %d\n", irq, ret);
+-		return ret;
+-	}
+-
+ 	comp_id = mtk_ddp_comp_get_id(dev->of_node, MTK_DISP_OVL);
+ 	if (comp_id < 0) {
+ 		dev_err(dev, "Failed to identify by alias: %d\n", comp_id);
+@@ -271,6 +264,13 @@ static int mtk_disp_ovl_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, priv);
+ 
++	ret = devm_request_irq(dev, irq, mtk_disp_ovl_irq_handler,
++			       IRQF_TRIGGER_NONE, dev_name(dev), priv);
++	if (ret < 0) {
++		dev_err(dev, "Failed to request irq %d: %d\n", irq, ret);
++		return ret;
++	}
++
+ 	ret = component_add(dev, &mtk_disp_ovl_component_ops);
+ 	if (ret)
+ 		dev_err(dev, "Failed to add component: %d\n", ret);
+diff --git a/drivers/gpu/drm/radeon/radeon_atpx_handler.c b/drivers/gpu/drm/radeon/radeon_atpx_handler.c
+index ddef0d494084..34b4ace5d75f 100644
+--- a/drivers/gpu/drm/radeon/radeon_atpx_handler.c
++++ b/drivers/gpu/drm/radeon/radeon_atpx_handler.c
+@@ -33,6 +33,7 @@ struct radeon_atpx {
+ 
+ static struct radeon_atpx_priv {
+ 	bool atpx_detected;
++	bool bridge_pm_usable;
+ 	/* handle for device - and atpx */
+ 	acpi_handle dhandle;
+ 	struct radeon_atpx atpx;
+@@ -198,7 +199,11 @@ static int radeon_atpx_validate(struct radeon_atpx *atpx)
+ 	atpx->is_hybrid = false;
+ 	if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) {
+ 		printk("ATPX Hybrid Graphics\n");
+-		atpx->functions.power_cntl = false;
++		/*
++		 * Disable legacy PM methods only when pcie port PM is usable,
++		 * otherwise the device might fail to power off or power on.
++		 */
++		atpx->functions.power_cntl = !radeon_atpx_priv.bridge_pm_usable;
+ 		atpx->is_hybrid = true;
+ 	}
+ 
+@@ -543,11 +548,16 @@ static bool radeon_atpx_detect(void)
+ 	struct pci_dev *pdev = NULL;
+ 	bool has_atpx = false;
+ 	int vga_count = 0;
++	bool d3_supported = false;
++	struct pci_dev *parent_pdev;
+ 
+ 	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
+ 		vga_count++;
+ 
+ 		has_atpx |= (radeon_atpx_pci_probe_handle(pdev) == true);
++
++		parent_pdev = pci_upstream_bridge(pdev);
++		d3_supported |= parent_pdev && parent_pdev->bridge_d3;
+ 	}
+ 
+ 	/* some newer PX laptops mark the dGPU as a non-VGA display device */
+@@ -555,6 +565,9 @@ static bool radeon_atpx_detect(void)
+ 		vga_count++;
+ 
+ 		has_atpx |= (radeon_atpx_pci_probe_handle(pdev) == true);
++
++		parent_pdev = pci_upstream_bridge(pdev);
++		d3_supported |= parent_pdev && parent_pdev->bridge_d3;
+ 	}
+ 
+ 	if (has_atpx && vga_count == 2) {
+@@ -562,6 +575,7 @@ static bool radeon_atpx_detect(void)
+ 		printk(KERN_INFO "vga_switcheroo: detected switching method %s handle\n",
+ 		       acpi_method_name);
+ 		radeon_atpx_priv.atpx_detected = true;
++		radeon_atpx_priv.bridge_pm_usable = d3_supported;
+ 		radeon_atpx_init();
+ 		return true;
+ 	}
+diff --git a/drivers/input/mouse/psmouse-base.c b/drivers/input/mouse/psmouse-base.c
+index 5784e20542a4..9f6203c8577c 100644
+--- a/drivers/input/mouse/psmouse-base.c
++++ b/drivers/input/mouse/psmouse-base.c
+@@ -1115,10 +1115,6 @@ static int psmouse_extensions(struct psmouse *psmouse,
+ 		if (psmouse_try_protocol(psmouse, PSMOUSE_TOUCHKIT_PS2,
+ 					 &max_proto, set_properties, true))
+ 			return PSMOUSE_TOUCHKIT_PS2;
+-
+-		if (psmouse_try_protocol(psmouse, PSMOUSE_BYD,
+-					 &max_proto, set_properties, true))
+-			return PSMOUSE_BYD;
+ 	}
+ 
+ 	/*
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index a8ff969c95c2..cbc7dfae0a0e 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -2203,8 +2203,9 @@ done:
+ 			is_scanning_required = 1;
+ 		} else {
+ 			mwifiex_dbg(priv->adapter, MSG,
+-				    "info: trying to associate to '%s' bssid %pM\n",
+-				    (char *)req_ssid.ssid, bss->bssid);
++				    "info: trying to associate to '%.*s' bssid %pM\n",
++				    req_ssid.ssid_len, (char *)req_ssid.ssid,
++				    bss->bssid);
+ 			memcpy(&priv->cfg_bssid, bss->bssid, ETH_ALEN);
+ 			break;
+ 		}
+@@ -2264,8 +2265,8 @@ mwifiex_cfg80211_connect(struct wiphy *wiphy, struct net_device *dev,
+ 	}
+ 
+ 	mwifiex_dbg(adapter, INFO,
+-		    "info: Trying to associate to %s and bssid %pM\n",
+-		    (char *)sme->ssid, sme->bssid);
++		    "info: Trying to associate to %.*s and bssid %pM\n",
++		    (int)sme->ssid_len, (char *)sme->ssid, sme->bssid);
+ 
+ 	if (!mwifiex_stop_bg_scan(priv))
+ 		cfg80211_sched_scan_stopped_rtnl(priv->wdev.wiphy);
+@@ -2398,8 +2399,8 @@ mwifiex_cfg80211_join_ibss(struct wiphy *wiphy, struct net_device *dev,
+ 	}
+ 
+ 	mwifiex_dbg(priv->adapter, MSG,
+-		    "info: trying to join to %s and bssid %pM\n",
+-		    (char *)params->ssid, params->bssid);
++		    "info: trying to join to %.*s and bssid %pM\n",
++		    params->ssid_len, (char *)params->ssid, params->bssid);
+ 
+ 	mwifiex_set_ibss_params(priv, params);
+ 
+diff --git a/drivers/pci/pcie/aer/aer_inject.c b/drivers/pci/pcie/aer/aer_inject.c
+index db553dc22c8e..2b6a59266689 100644
+--- a/drivers/pci/pcie/aer/aer_inject.c
++++ b/drivers/pci/pcie/aer/aer_inject.c
+@@ -307,20 +307,6 @@ out:
+ 	return 0;
+ }
+ 
+-static struct pci_dev *pcie_find_root_port(struct pci_dev *dev)
+-{
+-	while (1) {
+-		if (!pci_is_pcie(dev))
+-			break;
+-		if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
+-			return dev;
+-		if (!dev->bus->self)
+-			break;
+-		dev = dev->bus->self;
+-	}
+-	return NULL;
+-}
+-
+ static int find_aer_device_iter(struct device *device, void *data)
+ {
+ 	struct pcie_device **result = data;
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 93f280df3428..f6eff4aaffd7 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1439,6 +1439,21 @@ static void program_hpp_type1(struct pci_dev *dev, struct hpp_type1 *hpp)
+ 		dev_warn(&dev->dev, "PCI-X settings not supported\n");
+ }
+ 
++static bool pcie_root_rcb_set(struct pci_dev *dev)
++{
++	struct pci_dev *rp = pcie_find_root_port(dev);
++	u16 lnkctl;
++
++	if (!rp)
++		return false;
++
++	pcie_capability_read_word(rp, PCI_EXP_LNKCTL, &lnkctl);
++	if (lnkctl & PCI_EXP_LNKCTL_RCB)
++		return true;
++
++	return false;
++}
++
+ static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp)
+ {
+ 	int pos;
+@@ -1468,9 +1483,20 @@ static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp)
+ 			~hpp->pci_exp_devctl_and, hpp->pci_exp_devctl_or);
+ 
+ 	/* Initialize Link Control Register */
+-	if (pcie_cap_has_lnkctl(dev))
++	if (pcie_cap_has_lnkctl(dev)) {
++
++		/*
++		 * If the Root Port supports Read Completion Boundary of
++		 * 128, set RCB to 128.  Otherwise, clear it.
++		 */
++		hpp->pci_exp_lnkctl_and |= PCI_EXP_LNKCTL_RCB;
++		hpp->pci_exp_lnkctl_or &= ~PCI_EXP_LNKCTL_RCB;
++		if (pcie_root_rcb_set(dev))
++			hpp->pci_exp_lnkctl_or |= PCI_EXP_LNKCTL_RCB;
++
+ 		pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL,
+ 			~hpp->pci_exp_lnkctl_and, hpp->pci_exp_lnkctl_or);
++	}
+ 
+ 	/* Find Advanced Error Reporting Enhanced Capability */
+ 	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
+diff --git a/drivers/pwm/sysfs.c b/drivers/pwm/sysfs.c
+index 0296d8178ae2..a813239300c3 100644
+--- a/drivers/pwm/sysfs.c
++++ b/drivers/pwm/sysfs.c
+@@ -425,6 +425,8 @@ void pwmchip_sysfs_unexport_children(struct pwm_chip *chip)
+ 		if (test_bit(PWMF_EXPORTED, &pwm->flags))
+ 			pwm_unexport_child(parent, pwm);
+ 	}
++
++	put_device(parent);
+ }
+ 
+ static int __init pwm_sysfs_init(void)
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index 030d0023e1d2..5138a841ff4f 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -2007,7 +2007,7 @@ static struct hpsa_scsi_dev_t *lookup_hpsa_scsi_dev(struct ctlr_info *h,
+ 
+ static int hpsa_slave_alloc(struct scsi_device *sdev)
+ {
+-	struct hpsa_scsi_dev_t *sd;
++	struct hpsa_scsi_dev_t *sd = NULL;
+ 	unsigned long flags;
+ 	struct ctlr_info *h;
+ 
+@@ -2024,7 +2024,8 @@ static int hpsa_slave_alloc(struct scsi_device *sdev)
+ 			sd->target = sdev_id(sdev);
+ 			sd->lun = sdev->lun;
+ 		}
+-	} else
++	}
++	if (!sd)
+ 		sd = lookup_hpsa_scsi_dev(h, sdev_channel(sdev),
+ 					sdev_id(sdev), sdev->lun);
+ 
+@@ -3805,6 +3806,7 @@ static int hpsa_update_device_info(struct ctlr_info *h,
+ 		sizeof(this_device->vendor));
+ 	memcpy(this_device->model, &inq_buff[16],
+ 		sizeof(this_device->model));
++	this_device->rev = inq_buff[2];
+ 	memset(this_device->device_id, 0,
+ 		sizeof(this_device->device_id));
+ 	hpsa_get_device_id(h, scsi3addr, this_device->device_id, 8,
+@@ -3887,10 +3889,14 @@ static void figure_bus_target_lun(struct ctlr_info *h,
+ 
+ 	if (!is_logical_dev_addr_mode(lunaddrbytes)) {
+ 		/* physical device, target and lun filled in later */
+-		if (is_hba_lunid(lunaddrbytes))
++		if (is_hba_lunid(lunaddrbytes)) {
++			int bus = HPSA_HBA_BUS;
++
++			if (!device->rev)
++				bus = HPSA_LEGACY_HBA_BUS;
+ 			hpsa_set_bus_target_lun(device,
+-					HPSA_HBA_BUS, 0, lunid & 0x3fff);
+-		else
++					bus, 0, lunid & 0x3fff);
++		} else
+ 			/* defer target, lun assignment for physical devices */
+ 			hpsa_set_bus_target_lun(device,
+ 					HPSA_PHYSICAL_DEVICE_BUS, -1, -1);
+diff --git a/drivers/scsi/hpsa.h b/drivers/scsi/hpsa.h
+index a1487e67f7a1..9d45dde5747f 100644
+--- a/drivers/scsi/hpsa.h
++++ b/drivers/scsi/hpsa.h
+@@ -69,6 +69,7 @@ struct hpsa_scsi_dev_t {
+ 	u64 sas_address;
+ 	unsigned char vendor[8];        /* bytes 8-15 of inquiry data */
+ 	unsigned char model[16];        /* bytes 16-31 of inquiry data */
++	unsigned char rev;		/* byte 2 of inquiry data */
+ 	unsigned char raid_level;	/* from inquiry page 0xC1 */
+ 	unsigned char volume_offline;	/* discovered via TUR or VPD */
+ 	u16 queue_depth;		/* max queue_depth for this device */
+@@ -403,6 +404,7 @@ struct offline_device_entry {
+ #define HPSA_RAID_VOLUME_BUS		1
+ #define HPSA_EXTERNAL_RAID_VOLUME_BUS	2
+ #define HPSA_HBA_BUS			0
++#define HPSA_LEGACY_HBA_BUS		3
+ 
+ /*
+ 	Send the command to the hardware
+diff --git a/drivers/scsi/libfc/fc_lport.c b/drivers/scsi/libfc/fc_lport.c
+index 04ce7cfb6d1b..50c71678a156 100644
+--- a/drivers/scsi/libfc/fc_lport.c
++++ b/drivers/scsi/libfc/fc_lport.c
+@@ -308,7 +308,7 @@ struct fc_host_statistics *fc_get_host_stats(struct Scsi_Host *shost)
+ 	fc_stats = &lport->host_stats;
+ 	memset(fc_stats, 0, sizeof(struct fc_host_statistics));
+ 
+-	fc_stats->seconds_since_last_reset = (lport->boot_time - jiffies) / HZ;
++	fc_stats->seconds_since_last_reset = (jiffies - lport->boot_time) / HZ;
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		struct fc_stats *stats;
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index a78415d77434..78be4aee7064 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -329,11 +329,11 @@ static struct dentry *ovl_d_real(struct dentry *dentry,
+ 	if (!real)
+ 		goto bug;
+ 
++	/* Handle recursion */
++	real = d_real(real, inode, open_flags);
++
+ 	if (!inode || inode == d_inode(real))
+ 		return real;
+-
+-	/* Handle recursion */
+-	return d_real(real, inode, open_flags);
+ bug:
+ 	WARN(1, "ovl_d_real(%pd4, %s:%lu): real dentry not found\n", dentry,
+ 	     inode ? inode->i_sb->s_id : "NULL", inode ? inode->i_ino : 0);
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index 573c5a18908f..0a0b2d5148e1 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -256,7 +256,9 @@
+ #endif
+ #endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP && !__CHECKER__ */
+ 
+-#if GCC_VERSION >= 50000
++#if GCC_VERSION >= 70000
++#define KASAN_ABI_VERSION 5
++#elif GCC_VERSION >= 50000
+ #define KASAN_ABI_VERSION 4
+ #elif GCC_VERSION >= 40902
+ #define KASAN_ABI_VERSION 3
+diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
+index 01e84436cddf..d47cc4ab74ee 100644
+--- a/include/linux/pagemap.h
++++ b/include/linux/pagemap.h
+@@ -364,16 +364,13 @@ static inline struct page *read_mapping_page(struct address_space *mapping,
+ }
+ 
+ /*
+- * Get the offset in PAGE_SIZE.
+- * (TODO: hugepage should have ->index in PAGE_SIZE)
++ * Get index of the page with in radix-tree
++ * (TODO: remove once hugetlb pages will have ->index in PAGE_SIZE)
+  */
+-static inline pgoff_t page_to_pgoff(struct page *page)
++static inline pgoff_t page_to_index(struct page *page)
+ {
+ 	pgoff_t pgoff;
+ 
+-	if (unlikely(PageHeadHuge(page)))
+-		return page->index << compound_order(page);
+-
+ 	if (likely(!PageTransTail(page)))
+ 		return page->index;
+ 
+@@ -387,6 +384,18 @@ static inline pgoff_t page_to_pgoff(struct page *page)
+ }
+ 
+ /*
++ * Get the offset in PAGE_SIZE.
++ * (TODO: hugepage should have ->index in PAGE_SIZE)
++ */
++static inline pgoff_t page_to_pgoff(struct page *page)
++{
++	if (unlikely(PageHeadHuge(page)))
++		return page->index << compound_order(page);
++
++	return page_to_index(page);
++}
++
++/*
+  * Return byte-offset into filesystem object for page.
+  */
+ static inline loff_t page_offset(struct page *page)
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 0ab835965669..03f3df0888fe 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1896,6 +1896,20 @@ static inline int pci_pcie_type(const struct pci_dev *dev)
+ 	return (pcie_caps_reg(dev) & PCI_EXP_FLAGS_TYPE) >> 4;
+ }
+ 
++static inline struct pci_dev *pcie_find_root_port(struct pci_dev *dev)
++{
++	while (1) {
++		if (!pci_is_pcie(dev))
++			break;
++		if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
++			return dev;
++		if (!dev->bus->self)
++			break;
++		dev = dev->bus->self;
++	}
++	return NULL;
++}
++
+ void pci_request_acs(void);
+ bool pci_acs_enabled(struct pci_dev *pdev, u16 acs_flags);
+ bool pci_acs_path_enabled(struct pci_dev *start,
+diff --git a/include/uapi/linux/input-event-codes.h b/include/uapi/linux/input-event-codes.h
+index d6d071fc3c56..3af60ee69053 100644
+--- a/include/uapi/linux/input-event-codes.h
++++ b/include/uapi/linux/input-event-codes.h
+@@ -640,7 +640,7 @@
+  * Control a data application associated with the currently viewed channel,
+  * e.g. teletext or data broadcast application (MHEG, MHP, HbbTV, etc.)
+  */
+-#define KEY_DATA			0x275
++#define KEY_DATA			0x277
+ 
+ #define BTN_TRIGGER_HAPPY		0x2c0
+ #define BTN_TRIGGER_HAPPY1		0x2c0
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 0082fce402a0..85c5a883c6e3 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -2173,6 +2173,7 @@ static int rcu_nocb_kthread(void *arg)
+ 				cl++;
+ 			c++;
+ 			local_bh_enable();
++			cond_resched_rcu_qs();
+ 			list = next;
+ 		}
+ 		trace_rcu_batch_end(rdp->rsp->name, c, !!list, 0, 0, 1);
+diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
+index e5c2181fee6f..03f4545b103d 100644
+--- a/mm/kasan/kasan.h
++++ b/mm/kasan/kasan.h
+@@ -53,6 +53,9 @@ struct kasan_global {
+ #if KASAN_ABI_VERSION >= 4
+ 	struct kasan_source_location *location;
+ #endif
++#if KASAN_ABI_VERSION >= 5
++	char *odr_indicator;
++#endif
+ };
+ 
+ /**
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 728d7790dc2d..87e1a7ca3846 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -103,6 +103,7 @@ static struct khugepaged_scan khugepaged_scan = {
+ 	.mm_head = LIST_HEAD_INIT(khugepaged_scan.mm_head),
+ };
+ 
++#ifdef CONFIG_SYSFS
+ static ssize_t scan_sleep_millisecs_show(struct kobject *kobj,
+ 					 struct kobj_attribute *attr,
+ 					 char *buf)
+@@ -295,6 +296,7 @@ struct attribute_group khugepaged_attr_group = {
+ 	.attrs = khugepaged_attr,
+ 	.name = "khugepaged",
+ };
++#endif /* CONFIG_SYSFS */
+ 
+ #define VM_NO_KHUGEPAGED (VM_SPECIAL | VM_HUGETLB)
+ 
+diff --git a/mm/mlock.c b/mm/mlock.c
+index 14645be06e30..9c91acc0e328 100644
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -190,10 +190,13 @@ unsigned int munlock_vma_page(struct page *page)
+ 	 */
+ 	spin_lock_irq(zone_lru_lock(zone));
+ 
+-	nr_pages = hpage_nr_pages(page);
+-	if (!TestClearPageMlocked(page))
++	if (!TestClearPageMlocked(page)) {
++		/* Potentially, PTE-mapped THP: do not skip the rest PTEs */
++		nr_pages = 1;
+ 		goto unlock_out;
++	}
+ 
++	nr_pages = hpage_nr_pages(page);
+ 	__mod_zone_page_state(zone, NR_MLOCK, -nr_pages);
+ 
+ 	if (__munlock_isolate_lru_page(page, true)) {
+diff --git a/mm/truncate.c b/mm/truncate.c
+index a01cce450a26..8d8c62d89e6d 100644
+--- a/mm/truncate.c
++++ b/mm/truncate.c
+@@ -283,7 +283,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
+ 
+ 			if (!trylock_page(page))
+ 				continue;
+-			WARN_ON(page_to_pgoff(page) != index);
++			WARN_ON(page_to_index(page) != index);
+ 			if (PageWriteback(page)) {
+ 				unlock_page(page);
+ 				continue;
+@@ -371,7 +371,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
+ 			}
+ 
+ 			lock_page(page);
+-			WARN_ON(page_to_pgoff(page) != index);
++			WARN_ON(page_to_index(page) != index);
+ 			wait_on_page_writeback(page);
+ 			truncate_inode_page(mapping, page);
+ 			unlock_page(page);
+@@ -492,7 +492,7 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
+ 			if (!trylock_page(page))
+ 				continue;
+ 
+-			WARN_ON(page_to_pgoff(page) != index);
++			WARN_ON(page_to_index(page) != index);
+ 
+ 			/* Middle of THP: skip */
+ 			if (PageTransTail(page)) {
+@@ -612,7 +612,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
+ 			}
+ 
+ 			lock_page(page);
+-			WARN_ON(page_to_pgoff(page) != index);
++			WARN_ON(page_to_index(page) != index);
+ 			if (page->mapping != mapping) {
+ 				unlock_page(page);
+ 				continue;
+diff --git a/mm/workingset.c b/mm/workingset.c
+index 617475f529f4..fb1f9183d89a 100644
+--- a/mm/workingset.c
++++ b/mm/workingset.c
+@@ -348,7 +348,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
+ 	shadow_nodes = list_lru_shrink_count(&workingset_shadow_nodes, sc);
+ 	local_irq_enable();
+ 
+-	if (memcg_kmem_enabled()) {
++	if (sc->memcg) {
+ 		pages = mem_cgroup_node_nr_lru_pages(sc->memcg, sc->nid,
+ 						     LRU_ALL_FILE);
+ 	} else {
+diff --git a/net/batman-adv/tp_meter.c b/net/batman-adv/tp_meter.c
+index 2333777f919d..8af1611b8ab2 100644
+--- a/net/batman-adv/tp_meter.c
++++ b/net/batman-adv/tp_meter.c
+@@ -837,6 +837,7 @@ static int batadv_tp_send(void *arg)
+ 	primary_if = batadv_primary_if_get_selected(bat_priv);
+ 	if (unlikely(!primary_if)) {
+ 		err = BATADV_TP_REASON_DST_UNREACHABLE;
++		tp_vars->reason = err;
+ 		goto out;
+ 	}
+ 
+diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
+index 0bf6709d1006..6fb4314cec49 100644
+--- a/virt/kvm/arm/vgic/vgic-v2.c
++++ b/virt/kvm/arm/vgic/vgic-v2.c
+@@ -50,8 +50,10 @@ void vgic_v2_process_maintenance(struct kvm_vcpu *vcpu)
+ 
+ 			WARN_ON(cpuif->vgic_lr[lr] & GICH_LR_STATE);
+ 
+-			kvm_notify_acked_irq(vcpu->kvm, 0,
+-					     intid - VGIC_NR_PRIVATE_IRQS);
++			/* Only SPIs require notification */
++			if (vgic_valid_spi(vcpu->kvm, intid))
++				kvm_notify_acked_irq(vcpu->kvm, 0,
++						     intid - VGIC_NR_PRIVATE_IRQS);
+ 		}
+ 	}
+ 
+diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
+index 9f0dae397d9c..5c9f9745e6ca 100644
+--- a/virt/kvm/arm/vgic/vgic-v3.c
++++ b/virt/kvm/arm/vgic/vgic-v3.c
+@@ -41,8 +41,10 @@ void vgic_v3_process_maintenance(struct kvm_vcpu *vcpu)
+ 
+ 			WARN_ON(cpuif->vgic_lr[lr] & ICH_LR_STATE);
+ 
+-			kvm_notify_acked_irq(vcpu->kvm, 0,
+-					     intid - VGIC_NR_PRIVATE_IRQS);
++			/* Only SPIs require notification */
++			if (vgic_valid_spi(vcpu->kvm, intid))
++				kvm_notify_acked_irq(vcpu->kvm, 0,
++						     intid - VGIC_NR_PRIVATE_IRQS);
+ 		}
+ 
+ 		/*
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 195078225aa5..690d15eaee05 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2852,10 +2852,10 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
+ 
+ 	ret = anon_inode_getfd(ops->name, &kvm_device_fops, dev, O_RDWR | O_CLOEXEC);
+ 	if (ret < 0) {
+-		ops->destroy(dev);
+ 		mutex_lock(&kvm->lock);
+ 		list_del(&dev->vm_node);
+ 		mutex_unlock(&kvm->lock);
++		ops->destroy(dev);
+ 		return ret;
+ 	}
+ 


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-12-11  7:18 Alice Ferrazzi
  0 siblings, 0 replies; 26+ messages in thread
From: Alice Ferrazzi @ 2016-12-11  7:18 UTC (permalink / raw
  To: gentoo-commits

commit:     6a66ca4c64bcb45ae769c732d3e5540063c5e685
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 11 07:17:34 2016 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sun Dec 11 07:17:34 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6a66ca4c

Linux patch 4.8.14

 0000_README                                      |    8 +-
 1013_linux-4.8.14.patch                          | 1725 ++++++++++++++++++++++
 1520_fix-race-condition-in-packet-set-ring.patch |   62 -
 3 files changed, 1729 insertions(+), 66 deletions(-)

diff --git a/0000_README b/0000_README
index f162b9e..9b67d47 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1012_linux-4.8.13.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.13
 
+Patch:  1013_linux-4.8.14.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
@@ -103,10 +107,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1520_fix-race-condition-in-packet-set-ring.patch
-From:   https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=84ac7260236a49c79eede91617700174c2c19b0c
-Desc:   packet: fix race condition in packet_set_ring. CVE-2016-8655. Bug #601926.
-
 Patch:  2900_dev-root-proc-mount-fix.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=438380
 Desc:   Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.

diff --git a/1013_linux-4.8.14.patch b/1013_linux-4.8.14.patch
new file mode 100644
index 0000000..65e8e07
--- /dev/null
+++ b/1013_linux-4.8.14.patch
@@ -0,0 +1,1725 @@
+diff --git a/Makefile b/Makefile
+index b38abe9adef8..6a7492473a0d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/sparc/kernel/signal_32.c b/arch/sparc/kernel/signal_32.c
+index c3c12efe0bc0..9c0c8fd0b292 100644
+--- a/arch/sparc/kernel/signal_32.c
++++ b/arch/sparc/kernel/signal_32.c
+@@ -89,7 +89,7 @@ asmlinkage void do_sigreturn(struct pt_regs *regs)
+ 	sf = (struct signal_frame __user *) regs->u_regs[UREG_FP];
+ 
+ 	/* 1. Make sure we are not getting garbage from the user */
+-	if (!invalid_frame_pointer(sf, sizeof(*sf)))
++	if (invalid_frame_pointer(sf, sizeof(*sf)))
+ 		goto segv_and_exit;
+ 
+ 	if (get_user(ufp, &sf->info.si_regs.u_regs[UREG_FP]))
+@@ -150,7 +150,7 @@ asmlinkage void do_rt_sigreturn(struct pt_regs *regs)
+ 
+ 	synchronize_user_stack();
+ 	sf = (struct rt_signal_frame __user *) regs->u_regs[UREG_FP];
+-	if (!invalid_frame_pointer(sf, sizeof(*sf)))
++	if (invalid_frame_pointer(sf, sizeof(*sf)))
+ 		goto segv;
+ 
+ 	if (get_user(ufp, &sf->regs.u_regs[UREG_FP]))
+diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
+index 7ac6b62fb7c1..05c770825386 100644
+--- a/arch/sparc/mm/init_64.c
++++ b/arch/sparc/mm/init_64.c
+@@ -802,8 +802,10 @@ struct mdesc_mblock {
+ };
+ static struct mdesc_mblock *mblocks;
+ static int num_mblocks;
++static int find_numa_node_for_addr(unsigned long pa,
++				   struct node_mem_mask *pnode_mask);
+ 
+-static unsigned long ra_to_pa(unsigned long addr)
++static unsigned long __init ra_to_pa(unsigned long addr)
+ {
+ 	int i;
+ 
+@@ -819,8 +821,11 @@ static unsigned long ra_to_pa(unsigned long addr)
+ 	return addr;
+ }
+ 
+-static int find_node(unsigned long addr)
++static int __init find_node(unsigned long addr)
+ {
++	static bool search_mdesc = true;
++	static struct node_mem_mask last_mem_mask = { ~0UL, ~0UL };
++	static int last_index;
+ 	int i;
+ 
+ 	addr = ra_to_pa(addr);
+@@ -830,13 +835,30 @@ static int find_node(unsigned long addr)
+ 		if ((addr & p->mask) == p->val)
+ 			return i;
+ 	}
+-	/* The following condition has been observed on LDOM guests.*/
+-	WARN_ONCE(1, "find_node: A physical address doesn't match a NUMA node"
+-		" rule. Some physical memory will be owned by node 0.");
+-	return 0;
++	/* The following condition has been observed on LDOM guests because
++	 * node_masks only contains the best latency mask and value.
++	 * LDOM guest's mdesc can contain a single latency group to
++	 * cover multiple address range. Print warning message only if the
++	 * address cannot be found in node_masks nor mdesc.
++	 */
++	if ((search_mdesc) &&
++	    ((addr & last_mem_mask.mask) != last_mem_mask.val)) {
++		/* find the available node in the mdesc */
++		last_index = find_numa_node_for_addr(addr, &last_mem_mask);
++		numadbg("find_node: latency group for address 0x%lx is %d\n",
++			addr, last_index);
++		if ((last_index < 0) || (last_index >= num_node_masks)) {
++			/* WARN_ONCE() and use default group 0 */
++			WARN_ONCE(1, "find_node: A physical address doesn't match a NUMA node rule. Some physical memory will be owned by node 0.");
++			search_mdesc = false;
++			last_index = 0;
++		}
++	}
++
++	return last_index;
+ }
+ 
+-static u64 memblock_nid_range(u64 start, u64 end, int *nid)
++static u64 __init memblock_nid_range(u64 start, u64 end, int *nid)
+ {
+ 	*nid = find_node(start);
+ 	start += PAGE_SIZE;
+@@ -1160,6 +1182,41 @@ int __node_distance(int from, int to)
+ 	return numa_latency[from][to];
+ }
+ 
++static int find_numa_node_for_addr(unsigned long pa,
++				   struct node_mem_mask *pnode_mask)
++{
++	struct mdesc_handle *md = mdesc_grab();
++	u64 node, arc;
++	int i = 0;
++
++	node = mdesc_node_by_name(md, MDESC_NODE_NULL, "latency-groups");
++	if (node == MDESC_NODE_NULL)
++		goto out;
++
++	mdesc_for_each_node_by_name(md, node, "group") {
++		mdesc_for_each_arc(arc, md, node, MDESC_ARC_TYPE_FWD) {
++			u64 target = mdesc_arc_target(md, arc);
++			struct mdesc_mlgroup *m = find_mlgroup(target);
++
++			if (!m)
++				continue;
++			if ((pa & m->mask) == m->match) {
++				if (pnode_mask) {
++					pnode_mask->mask = m->mask;
++					pnode_mask->val = m->match;
++				}
++				mdesc_release(md);
++				return i;
++			}
++		}
++		i++;
++	}
++
++out:
++	mdesc_release(md);
++	return -1;
++}
++
+ static int __init find_best_numa_node_for_mlgroup(struct mdesc_mlgroup *grp)
+ {
+ 	int i;
+diff --git a/block/blk-map.c b/block/blk-map.c
+index b8657fa8dc9a..27fd8d92892d 100644
+--- a/block/blk-map.c
++++ b/block/blk-map.c
+@@ -118,6 +118,9 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
+ 	struct iov_iter i;
+ 	int ret;
+ 
++	if (!iter_is_iovec(iter))
++		goto fail;
++
+ 	if (map_data)
+ 		copy = true;
+ 	else if (iov_iter_alignment(iter) & align)
+@@ -140,6 +143,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
+ 
+ unmap_rq:
+ 	__blk_rq_unmap_user(bio);
++fail:
+ 	rq->bio = NULL;
+ 	return -EINVAL;
+ }
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index bda37d336736..b081929e80bc 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -904,9 +904,10 @@ static void b53_vlan_add(struct dsa_switch *ds, int port,
+ 
+ 		vl->members |= BIT(port) | BIT(cpu_port);
+ 		if (untagged)
+-			vl->untag |= BIT(port) | BIT(cpu_port);
++			vl->untag |= BIT(port);
+ 		else
+-			vl->untag &= ~(BIT(port) | BIT(cpu_port));
++			vl->untag &= ~BIT(port);
++		vl->untag &= ~BIT(cpu_port);
+ 
+ 		b53_set_vlan_entry(dev, vid, vl);
+ 		b53_fast_age_vlan(dev, vid);
+@@ -915,8 +916,6 @@ static void b53_vlan_add(struct dsa_switch *ds, int port,
+ 	if (pvid) {
+ 		b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port),
+ 			    vlan->vid_end);
+-		b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(cpu_port),
+-			    vlan->vid_end);
+ 		b53_fast_age_vlan(dev, vid);
+ 	}
+ }
+@@ -926,7 +925,6 @@ static int b53_vlan_del(struct dsa_switch *ds, int port,
+ {
+ 	struct b53_device *dev = ds_to_priv(ds);
+ 	bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+-	unsigned int cpu_port = dev->cpu_port;
+ 	struct b53_vlan *vl;
+ 	u16 vid;
+ 	u16 pvid;
+@@ -939,8 +937,6 @@ static int b53_vlan_del(struct dsa_switch *ds, int port,
+ 		b53_get_vlan_entry(dev, vid, vl);
+ 
+ 		vl->members &= ~BIT(port);
+-		if ((vl->members & BIT(cpu_port)) == BIT(cpu_port))
+-			vl->members = 0;
+ 
+ 		if (pvid == vid) {
+ 			if (is5325(dev) || is5365(dev))
+@@ -949,18 +945,14 @@ static int b53_vlan_del(struct dsa_switch *ds, int port,
+ 				pvid = 0;
+ 		}
+ 
+-		if (untagged) {
++		if (untagged)
+ 			vl->untag &= ~(BIT(port));
+-			if ((vl->untag & BIT(cpu_port)) == BIT(cpu_port))
+-				vl->untag = 0;
+-		}
+ 
+ 		b53_set_vlan_entry(dev, vid, vl);
+ 		b53_fast_age_vlan(dev, vid);
+ 	}
+ 
+ 	b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port), pvid);
+-	b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(cpu_port), pvid);
+ 	b53_fast_age_vlan(dev, pvid);
+ 
+ 	return 0;
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index b2b838724a9b..4036865b7c08 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -1167,6 +1167,7 @@ static void bcm_sf2_sw_adjust_link(struct dsa_switch *ds, int port,
+ 				   struct phy_device *phydev)
+ {
+ 	struct bcm_sf2_priv *priv = ds_to_priv(ds);
++	struct ethtool_eee *p = &priv->port_sts[port].eee;
+ 	u32 id_mode_dis = 0, port_mode;
+ 	const char *str = NULL;
+ 	u32 reg;
+@@ -1241,6 +1242,9 @@ force_link:
+ 		reg |= DUPLX_MODE;
+ 
+ 	core_writel(priv, reg, CORE_STS_OVERRIDE_GMIIP_PORT(port));
++
++	if (!phydev->is_pseudo_fixed_link)
++		p->eee_enabled = bcm_sf2_eee_init(ds, port, phydev);
+ }
+ 
+ static void bcm_sf2_sw_fixed_link_update(struct dsa_switch *ds, int port,
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 541456398dfb..842d8b90484e 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -1172,6 +1172,7 @@ static unsigned int __bcmgenet_tx_reclaim(struct net_device *dev,
+ 					  struct bcmgenet_tx_ring *ring)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
++	struct device *kdev = &priv->pdev->dev;
+ 	struct enet_cb *tx_cb_ptr;
+ 	struct netdev_queue *txq;
+ 	unsigned int pkts_compl = 0;
+@@ -1199,13 +1200,13 @@ static unsigned int __bcmgenet_tx_reclaim(struct net_device *dev,
+ 		if (tx_cb_ptr->skb) {
+ 			pkts_compl++;
+ 			bytes_compl += GENET_CB(tx_cb_ptr->skb)->bytes_sent;
+-			dma_unmap_single(&dev->dev,
++			dma_unmap_single(kdev,
+ 					 dma_unmap_addr(tx_cb_ptr, dma_addr),
+ 					 dma_unmap_len(tx_cb_ptr, dma_len),
+ 					 DMA_TO_DEVICE);
+ 			bcmgenet_free_cb(tx_cb_ptr);
+ 		} else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) {
+-			dma_unmap_page(&dev->dev,
++			dma_unmap_page(kdev,
+ 				       dma_unmap_addr(tx_cb_ptr, dma_addr),
+ 				       dma_unmap_len(tx_cb_ptr, dma_len),
+ 				       DMA_TO_DEVICE);
+@@ -1775,6 +1776,7 @@ static int bcmgenet_alloc_rx_buffers(struct bcmgenet_priv *priv,
+ 
+ static void bcmgenet_free_rx_buffers(struct bcmgenet_priv *priv)
+ {
++	struct device *kdev = &priv->pdev->dev;
+ 	struct enet_cb *cb;
+ 	int i;
+ 
+@@ -1782,7 +1784,7 @@ static void bcmgenet_free_rx_buffers(struct bcmgenet_priv *priv)
+ 		cb = &priv->rx_cbs[i];
+ 
+ 		if (dma_unmap_addr(cb, dma_addr)) {
+-			dma_unmap_single(&priv->dev->dev,
++			dma_unmap_single(kdev,
+ 					 dma_unmap_addr(cb, dma_addr),
+ 					 priv->rx_buf_len, DMA_FROM_DEVICE);
+ 			dma_unmap_addr_set(cb, dma_addr, 0);
+diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
+index d954a97b0b0b..ef0dbcfa6ce6 100644
+--- a/drivers/net/ethernet/cadence/macb.c
++++ b/drivers/net/ethernet/cadence/macb.c
+@@ -959,6 +959,7 @@ static inline void macb_init_rx_ring(struct macb *bp)
+ 		addr += bp->rx_buffer_size;
+ 	}
+ 	bp->rx_ring[RX_RING_SIZE - 1].addr |= MACB_BIT(RX_WRAP);
++	bp->rx_tail = 0;
+ }
+ 
+ static int macb_rx(struct macb *bp, int budget)
+@@ -1597,8 +1598,6 @@ static void macb_init_rings(struct macb *bp)
+ 	bp->queues[0].tx_head = 0;
+ 	bp->queues[0].tx_tail = 0;
+ 	bp->queues[0].tx_ring[TX_RING_SIZE - 1].ctrl |= MACB_BIT(TX_WRAP);
+-
+-	bp->rx_tail = 0;
+ }
+ 
+ static void macb_reset_hw(struct macb *bp)
+diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
+index 467138b423d3..d747e17d3429 100644
+--- a/drivers/net/ethernet/marvell/sky2.c
++++ b/drivers/net/ethernet/marvell/sky2.c
+@@ -5220,6 +5220,19 @@ static SIMPLE_DEV_PM_OPS(sky2_pm_ops, sky2_suspend, sky2_resume);
+ 
+ static void sky2_shutdown(struct pci_dev *pdev)
+ {
++	struct sky2_hw *hw = pci_get_drvdata(pdev);
++	int port;
++
++	for (port = 0; port < hw->ports; port++) {
++		struct net_device *ndev = hw->dev[port];
++
++		rtnl_lock();
++		if (netif_running(ndev)) {
++			dev_close(ndev);
++			netif_device_detach(ndev);
++		}
++		rtnl_unlock();
++	}
+ 	sky2_suspend(&pdev->dev);
+ 	pci_wake_from_d3(pdev, device_may_wakeup(&pdev->dev));
+ 	pci_set_power_state(pdev, PCI_D3hot);
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index 054e795df90f..92c9a95169c8 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -518,7 +518,7 @@ static struct sh_eth_cpu_data r7s72100_data = {
+ 
+ 	.ecsr_value	= ECSR_ICD,
+ 	.ecsipr_value	= ECSIPR_ICDIP,
+-	.eesipr_value	= 0xff7f009f,
++	.eesipr_value	= 0xe77f009f,
+ 
+ 	.tx_check	= EESR_TC1 | EESR_FTC,
+ 	.eesr_err_check	= EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT |
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 16af1ce99233..5ad706b99af8 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -844,7 +844,6 @@ static netdev_tx_t geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	struct geneve_dev *geneve = netdev_priv(dev);
+ 	struct geneve_sock *gs4 = geneve->sock4;
+ 	struct rtable *rt = NULL;
+-	const struct iphdr *iip; /* interior IP header */
+ 	int err = -EINVAL;
+ 	struct flowi4 fl4;
+ 	__u8 tos, ttl;
+@@ -871,8 +870,6 @@ static netdev_tx_t geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ 	skb_reset_mac_header(skb);
+ 
+-	iip = ip_hdr(skb);
+-
+ 	if (info) {
+ 		const struct ip_tunnel_key *key = &info->key;
+ 		u8 *opts = NULL;
+@@ -892,7 +889,7 @@ static netdev_tx_t geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 		if (unlikely(err))
+ 			goto tx_error;
+ 
+-		tos = ip_tunnel_ecn_encap(key->tos, iip, skb);
++		tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
+ 		ttl = key->ttl;
+ 		df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0;
+ 	} else {
+@@ -901,7 +898,7 @@ static netdev_tx_t geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 		if (unlikely(err))
+ 			goto tx_error;
+ 
+-		tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, iip, skb);
++		tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, ip_hdr(skb), skb);
+ 		ttl = geneve->ttl;
+ 		if (!ttl && IN_MULTICAST(ntohl(fl4.daddr)))
+ 			ttl = 1;
+@@ -934,7 +931,6 @@ static netdev_tx_t geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	struct geneve_dev *geneve = netdev_priv(dev);
+ 	struct geneve_sock *gs6 = geneve->sock6;
+ 	struct dst_entry *dst = NULL;
+-	const struct iphdr *iip; /* interior IP header */
+ 	int err = -EINVAL;
+ 	struct flowi6 fl6;
+ 	__u8 prio, ttl;
+@@ -959,8 +955,6 @@ static netdev_tx_t geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ 	skb_reset_mac_header(skb);
+ 
+-	iip = ip_hdr(skb);
+-
+ 	if (info) {
+ 		const struct ip_tunnel_key *key = &info->key;
+ 		u8 *opts = NULL;
+@@ -981,7 +975,7 @@ static netdev_tx_t geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 		if (unlikely(err))
+ 			goto tx_error;
+ 
+-		prio = ip_tunnel_ecn_encap(key->tos, iip, skb);
++		prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
+ 		ttl = key->ttl;
+ 		label = info->key.label;
+ 	} else {
+@@ -991,7 +985,7 @@ static netdev_tx_t geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 			goto tx_error;
+ 
+ 		prio = ip_tunnel_ecn_encap(ip6_tclass(fl6.flowlabel),
+-					   iip, skb);
++					   ip_hdr(skb), skb);
+ 		ttl = geneve->ttl;
+ 		if (!ttl && ipv6_addr_is_multicast(&fl6.daddr))
+ 			ttl = 1;
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index c47ec0a04c8e..dd623f674487 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -388,12 +388,6 @@ void usbnet_cdc_status(struct usbnet *dev, struct urb *urb)
+ 	case USB_CDC_NOTIFY_NETWORK_CONNECTION:
+ 		netif_dbg(dev, timer, dev->net, "CDC: carrier %s\n",
+ 			  event->wValue ? "on" : "off");
+-
+-		/* Work-around for devices with broken off-notifications */
+-		if (event->wValue &&
+-		    !test_bit(__LINK_STATE_NOCARRIER, &dev->net->state))
+-			usbnet_link_change(dev, 0, 0);
+-
+ 		usbnet_link_change(dev, !!event->wValue, 0);
+ 		break;
+ 	case USB_CDC_NOTIFY_SPEED_CHANGE:	/* tx/rx rates */
+@@ -466,6 +460,36 @@ static int usbnet_cdc_zte_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 	return 1;
+ }
+ 
++/* Ensure correct link state
++ *
++ * Some devices (ZTE MF823/831/910) export two carrier on notifications when
++ * connected. This causes the link state to be incorrect. Work around this by
++ * always setting the state to off, then on.
++ */
++void usbnet_cdc_zte_status(struct usbnet *dev, struct urb *urb)
++{
++	struct usb_cdc_notification *event;
++
++	if (urb->actual_length < sizeof(*event))
++		return;
++
++	event = urb->transfer_buffer;
++
++	if (event->bNotificationType != USB_CDC_NOTIFY_NETWORK_CONNECTION) {
++		usbnet_cdc_status(dev, urb);
++		return;
++	}
++
++	netif_dbg(dev, timer, dev->net, "CDC: carrier %s\n",
++		  event->wValue ? "on" : "off");
++
++	if (event->wValue &&
++	    netif_carrier_ok(dev->net))
++		netif_carrier_off(dev->net);
++
++	usbnet_link_change(dev, !!event->wValue, 0);
++}
++
+ static const struct driver_info	cdc_info = {
+ 	.description =	"CDC Ethernet Device",
+ 	.flags =	FLAG_ETHER | FLAG_POINTTOPOINT,
+@@ -481,7 +505,7 @@ static const struct driver_info	zte_cdc_info = {
+ 	.flags =	FLAG_ETHER | FLAG_POINTTOPOINT,
+ 	.bind =		usbnet_cdc_zte_bind,
+ 	.unbind =	usbnet_cdc_unbind,
+-	.status =	usbnet_cdc_status,
++	.status =	usbnet_cdc_zte_status,
+ 	.set_rx_mode =	usbnet_cdc_update_filter,
+ 	.manage_power =	usbnet_manage_power,
+ 	.rx_fixup = usbnet_cdc_zte_rx_fixup,
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index bf3fd34924bd..d8072092e5f0 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1468,6 +1468,11 @@ static void virtnet_free_queues(struct virtnet_info *vi)
+ 		netif_napi_del(&vi->rq[i].napi);
+ 	}
+ 
++	/* We called napi_hash_del() before netif_napi_del(),
++	 * we need to respect an RCU grace period before freeing vi->rq
++	 */
++	synchronize_net();
++
+ 	kfree(vi->rq);
+ 	kfree(vi->sq);
+ }
+diff --git a/include/linux/uio.h b/include/linux/uio.h
+index 75b4aaf31a9d..944e7baf17eb 100644
+--- a/include/linux/uio.h
++++ b/include/linux/uio.h
+@@ -102,12 +102,12 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages);
+ 
+ const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags);
+ 
+-static inline size_t iov_iter_count(struct iov_iter *i)
++static inline size_t iov_iter_count(const struct iov_iter *i)
+ {
+ 	return i->count;
+ }
+ 
+-static inline bool iter_is_iovec(struct iov_iter *i)
++static inline bool iter_is_iovec(const struct iov_iter *i)
+ {
+ 	return !(i->type & (ITER_BVEC | ITER_KVEC));
+ }
+diff --git a/include/net/gro_cells.h b/include/net/gro_cells.h
+index d15214d673b2..2a1abbf8da74 100644
+--- a/include/net/gro_cells.h
++++ b/include/net/gro_cells.h
+@@ -68,6 +68,9 @@ static inline int gro_cells_init(struct gro_cells *gcells, struct net_device *de
+ 		struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
+ 
+ 		__skb_queue_head_init(&cell->napi_skbs);
++
++		set_bit(NAPI_STATE_NO_BUSY_POLL, &cell->napi.state);
++
+ 		netif_napi_add(dev, &cell->napi, gro_cell_poll, 64);
+ 		napi_enable(&cell->napi);
+ 	}
+diff --git a/net/core/flow.c b/net/core/flow.c
+index 3937b1b68d5b..18e8893d4be5 100644
+--- a/net/core/flow.c
++++ b/net/core/flow.c
+@@ -95,7 +95,6 @@ static void flow_cache_gc_task(struct work_struct *work)
+ 	list_for_each_entry_safe(fce, n, &gc_list, u.gc_list) {
+ 		flow_entry_kill(fce, xfrm);
+ 		atomic_dec(&xfrm->flow_cache_gc_count);
+-		WARN_ON(atomic_read(&xfrm->flow_cache_gc_count) < 0);
+ 	}
+ }
+ 
+@@ -236,9 +235,8 @@ flow_cache_lookup(struct net *net, const struct flowi *key, u16 family, u8 dir,
+ 		if (fcp->hash_count > fc->high_watermark)
+ 			flow_cache_shrink(fc, fcp);
+ 
+-		if (fcp->hash_count > 2 * fc->high_watermark ||
+-		    atomic_read(&net->xfrm.flow_cache_gc_count) > fc->high_watermark) {
+-			atomic_inc(&net->xfrm.flow_cache_genid);
++		if (atomic_read(&net->xfrm.flow_cache_gc_count) >
++		    2 * num_online_cpus() * fc->high_watermark) {
+ 			flo = ERR_PTR(-ENOBUFS);
+ 			goto ret_object;
+ 		}
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index 2c2eb1b629b1..2e9a1c2818c7 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -217,6 +217,8 @@ int peernet2id_alloc(struct net *net, struct net *peer)
+ 	bool alloc;
+ 	int id;
+ 
++	if (atomic_read(&net->count) == 0)
++		return NETNSA_NSID_NOT_ASSIGNED;
+ 	spin_lock_irqsave(&net->nsid_lock, flags);
+ 	alloc = atomic_read(&peer->count) == 0 ? false : true;
+ 	id = __peernet2id_alloc(net, peer, &alloc);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 189cc78c77eb..08c3702f7074 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1578,7 +1578,7 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
+ 		head = &net->dev_index_head[h];
+ 		hlist_for_each_entry(dev, head, index_hlist) {
+ 			if (link_dump_filtered(dev, master_idx, kind_ops))
+-				continue;
++				goto cont;
+ 			if (idx < s_idx)
+ 				goto cont;
+ 			err = rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK,
+@@ -2791,7 +2791,10 @@ nla_put_failure:
+ 
+ static inline size_t rtnl_fdb_nlmsg_size(void)
+ {
+-	return NLMSG_ALIGN(sizeof(struct ndmsg)) + nla_total_size(ETH_ALEN);
++	return NLMSG_ALIGN(sizeof(struct ndmsg)) +
++	       nla_total_size(ETH_ALEN) +	/* NDA_LLADDR */
++	       nla_total_size(sizeof(u16)) +	/* NDA_VLAN */
++	       0;
+ }
+ 
+ static void rtnl_fdb_notify(struct net_device *dev, u8 *addr, u16 vid, int type,
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 10acaccca5c8..ba27920b6bbc 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -715,7 +715,7 @@ int sock_setsockopt(struct socket *sock, int level, int optname,
+ 		val = min_t(u32, val, sysctl_wmem_max);
+ set_sndbuf:
+ 		sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+-		sk->sk_sndbuf = max_t(u32, val * 2, SOCK_MIN_SNDBUF);
++		sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF);
+ 		/* Wake up sending tasks if we upped the value. */
+ 		sk->sk_write_space(sk);
+ 		break;
+@@ -751,7 +751,7 @@ set_rcvbuf:
+ 		 * returning the value we actually used in getsockopt
+ 		 * is the most desirable behavior.
+ 		 */
+-		sk->sk_rcvbuf = max_t(u32, val * 2, SOCK_MIN_RCVBUF);
++		sk->sk_rcvbuf = max_t(int, val * 2, SOCK_MIN_RCVBUF);
+ 		break;
+ 
+ 	case SO_RCVBUFFORCE:
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index b567c8725aea..edbe59d203ef 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -700,6 +700,7 @@ int dccp_invalid_packet(struct sk_buff *skb)
+ {
+ 	const struct dccp_hdr *dh;
+ 	unsigned int cscov;
++	u8 dccph_doff;
+ 
+ 	if (skb->pkt_type != PACKET_HOST)
+ 		return 1;
+@@ -721,18 +722,19 @@ int dccp_invalid_packet(struct sk_buff *skb)
+ 	/*
+ 	 * If P.Data Offset is too small for packet type, drop packet and return
+ 	 */
+-	if (dh->dccph_doff < dccp_hdr_len(skb) / sizeof(u32)) {
+-		DCCP_WARN("P.Data Offset(%u) too small\n", dh->dccph_doff);
++	dccph_doff = dh->dccph_doff;
++	if (dccph_doff < dccp_hdr_len(skb) / sizeof(u32)) {
++		DCCP_WARN("P.Data Offset(%u) too small\n", dccph_doff);
+ 		return 1;
+ 	}
+ 	/*
+ 	 * If P.Data Offset is too too large for packet, drop packet and return
+ 	 */
+-	if (!pskb_may_pull(skb, dh->dccph_doff * sizeof(u32))) {
+-		DCCP_WARN("P.Data Offset(%u) too large\n", dh->dccph_doff);
++	if (!pskb_may_pull(skb, dccph_doff * sizeof(u32))) {
++		DCCP_WARN("P.Data Offset(%u) too large\n", dccph_doff);
+ 		return 1;
+ 	}
+-
++	dh = dccp_hdr(skb);
+ 	/*
+ 	 * If P.type is not Data, Ack, or DataAck and P.X == 0 (the packet
+ 	 * has short sequence numbers), drop packet and return
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index f30bad9678f0..3bdecd2f9b88 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -28,8 +28,10 @@ static struct dsa_switch_tree *dsa_get_dst(u32 tree)
+ 	struct dsa_switch_tree *dst;
+ 
+ 	list_for_each_entry(dst, &dsa_switch_trees, list)
+-		if (dst->tree == tree)
++		if (dst->tree == tree) {
++			kref_get(&dst->refcount);
+ 			return dst;
++		}
+ 	return NULL;
+ }
+ 
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index eebbc0f2baa8..ed22af67c58a 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1237,7 +1237,7 @@ struct sk_buff *inet_gso_segment(struct sk_buff *skb,
+ 		fixedid = !!(skb_shinfo(skb)->gso_type & SKB_GSO_TCP_FIXEDID);
+ 
+ 		/* fixed ID is invalid if DF bit is not set */
+-		if (fixedid && !(iph->frag_off & htons(IP_DF)))
++		if (fixedid && !(ip_hdr(skb)->frag_off & htons(IP_DF)))
+ 			goto out;
+ 	}
+ 
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index d95631d09248..20fb25e3027b 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -476,7 +476,7 @@ static int esp_input(struct xfrm_state *x, struct sk_buff *skb)
+ 		esph = (void *)skb_push(skb, 4);
+ 		*seqhi = esph->spi;
+ 		esph->spi = esph->seq_no;
+-		esph->seq_no = htonl(XFRM_SKB_CB(skb)->seq.input.hi);
++		esph->seq_no = XFRM_SKB_CB(skb)->seq.input.hi;
+ 		aead_request_set_callback(req, 0, esp_input_done_esn, skb);
+ 	}
+ 
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 1b25daf8c7f1..9301308528f8 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -157,7 +157,7 @@ static void fib_replace_table(struct net *net, struct fib_table *old,
+ 
+ int fib_unmerge(struct net *net)
+ {
+-	struct fib_table *old, *new;
++	struct fib_table *old, *new, *main_table;
+ 
+ 	/* attempt to fetch local table if it has been allocated */
+ 	old = fib_get_table(net, RT_TABLE_LOCAL);
+@@ -168,11 +168,21 @@ int fib_unmerge(struct net *net)
+ 	if (!new)
+ 		return -ENOMEM;
+ 
++	/* table is already unmerged */
++	if (new == old)
++		return 0;
++
+ 	/* replace merged table with clean table */
+-	if (new != old) {
+-		fib_replace_table(net, old, new);
+-		fib_free_table(old);
+-	}
++	fib_replace_table(net, old, new);
++	fib_free_table(old);
++
++	/* attempt to fetch main table if it has been allocated */
++	main_table = fib_get_table(net, RT_TABLE_MAIN);
++	if (!main_table)
++		return 0;
++
++	/* flush local entries from main table */
++	fib_table_flush_external(main_table);
+ 
+ 	return 0;
+ }
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index 7ef703102dca..84fd727274cf 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -681,6 +681,13 @@ static unsigned char update_suffix(struct key_vector *tn)
+ {
+ 	unsigned char slen = tn->pos;
+ 	unsigned long stride, i;
++	unsigned char slen_max;
++
++	/* only vector 0 can have a suffix length greater than or equal to
++	 * tn->pos + tn->bits, the second highest node will have a suffix
++	 * length at most of tn->pos + tn->bits - 1
++	 */
++	slen_max = min_t(unsigned char, tn->pos + tn->bits - 1, tn->slen);
+ 
+ 	/* search though the list of children looking for nodes that might
+ 	 * have a suffix greater than the one we currently have.  This is
+@@ -698,12 +705,8 @@ static unsigned char update_suffix(struct key_vector *tn)
+ 		slen = n->slen;
+ 		i &= ~(stride - 1);
+ 
+-		/* if slen covers all but the last bit we can stop here
+-		 * there will be nothing longer than that since only node
+-		 * 0 and 1 << (bits - 1) could have that as their suffix
+-		 * length.
+-		 */
+-		if ((slen + 1) >= (tn->pos + tn->bits))
++		/* stop searching if we have hit the maximum possible value */
++		if (slen >= slen_max)
+ 			break;
+ 	}
+ 
+@@ -875,39 +878,27 @@ static struct key_vector *resize(struct trie *t, struct key_vector *tn)
+ 		return collapse(t, tn);
+ 
+ 	/* update parent in case halve failed */
+-	tp = node_parent(tn);
+-
+-	/* Return if at least one deflate was run */
+-	if (max_work != MAX_WORK)
+-		return tp;
+-
+-	/* push the suffix length to the parent node */
+-	if (tn->slen > tn->pos) {
+-		unsigned char slen = update_suffix(tn);
+-
+-		if (slen > tp->slen)
+-			tp->slen = slen;
+-	}
+-
+-	return tp;
++	return node_parent(tn);
+ }
+ 
+-static void leaf_pull_suffix(struct key_vector *tp, struct key_vector *l)
++static void node_pull_suffix(struct key_vector *tn, unsigned char slen)
+ {
+-	while ((tp->slen > tp->pos) && (tp->slen > l->slen)) {
+-		if (update_suffix(tp) > l->slen)
++	unsigned char node_slen = tn->slen;
++
++	while ((node_slen > tn->pos) && (node_slen > slen)) {
++		slen = update_suffix(tn);
++		if (node_slen == slen)
+ 			break;
+-		tp = node_parent(tp);
++
++		tn = node_parent(tn);
++		node_slen = tn->slen;
+ 	}
+ }
+ 
+-static void leaf_push_suffix(struct key_vector *tn, struct key_vector *l)
++static void node_push_suffix(struct key_vector *tn, unsigned char slen)
+ {
+-	/* if this is a new leaf then tn will be NULL and we can sort
+-	 * out parent suffix lengths as a part of trie_rebalance
+-	 */
+-	while (tn->slen < l->slen) {
+-		tn->slen = l->slen;
++	while (tn->slen < slen) {
++		tn->slen = slen;
+ 		tn = node_parent(tn);
+ 	}
+ }
+@@ -1028,6 +1019,7 @@ static int fib_insert_node(struct trie *t, struct key_vector *tp,
+ 	}
+ 
+ 	/* Case 3: n is NULL, and will just insert a new leaf */
++	node_push_suffix(tp, new->fa_slen);
+ 	NODE_INIT_PARENT(l, tp);
+ 	put_child_root(tp, key, l);
+ 	trie_rebalance(t, tp);
+@@ -1069,7 +1061,7 @@ static int fib_insert_alias(struct trie *t, struct key_vector *tp,
+ 	/* if we added to the tail node then we need to update slen */
+ 	if (l->slen < new->fa_slen) {
+ 		l->slen = new->fa_slen;
+-		leaf_push_suffix(tp, l);
++		node_push_suffix(tp, new->fa_slen);
+ 	}
+ 
+ 	return 0;
+@@ -1470,6 +1462,8 @@ static void fib_remove_alias(struct trie *t, struct key_vector *tp,
+ 	 * out parent suffix lengths as a part of trie_rebalance
+ 	 */
+ 	if (hlist_empty(&l->leaf)) {
++		if (tp->slen == l->slen)
++			node_pull_suffix(tp, tp->pos);
+ 		put_child_root(tp, l->key, NULL);
+ 		node_free(l);
+ 		trie_rebalance(t, tp);
+@@ -1482,7 +1476,7 @@ static void fib_remove_alias(struct trie *t, struct key_vector *tp,
+ 
+ 	/* update the trie with the latest suffix length */
+ 	l->slen = fa->fa_slen;
+-	leaf_pull_suffix(tp, l);
++	node_pull_suffix(tp, fa->fa_slen);
+ }
+ 
+ /* Caller must hold RTNL. */
+@@ -1713,8 +1707,10 @@ struct fib_table *fib_trie_unmerge(struct fib_table *oldtb)
+ 				local_l = fib_find_node(lt, &local_tp, l->key);
+ 
+ 			if (fib_insert_alias(lt, local_tp, local_l, new_fa,
+-					     NULL, l->key))
++					     NULL, l->key)) {
++				kmem_cache_free(fn_alias_kmem, new_fa);
+ 				goto out;
++			}
+ 		}
+ 
+ 		/* stop loop if key wrapped back to 0 */
+@@ -1751,6 +1747,10 @@ void fib_table_flush_external(struct fib_table *tb)
+ 			if (IS_TRIE(pn))
+ 				break;
+ 
++			/* update the suffix to address pulled leaves */
++			if (pn->slen > pn->pos)
++				update_suffix(pn);
++
+ 			/* resize completed node */
+ 			pn = resize(t, pn);
+ 			cindex = get_index(pkey, pn);
+@@ -1826,6 +1826,10 @@ int fib_table_flush(struct fib_table *tb)
+ 			if (IS_TRIE(pn))
+ 				break;
+ 
++			/* update the suffix to address pulled leaves */
++			if (pn->slen > pn->pos)
++				update_suffix(pn);
++
+ 			/* resize completed node */
+ 			pn = resize(t, pn);
+ 			cindex = get_index(pkey, pn);
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 307daed9a4b9..f4790c30c543 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -98,6 +98,9 @@ int __ip_local_out(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 
+ 	iph->tot_len = htons(skb->len);
+ 	ip_send_check(iph);
++
++	skb->protocol = htons(ETH_P_IP);
++
+ 	return nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT,
+ 		       net, sk, skb, NULL, skb_dst(skb)->dev,
+ 		       dst_output);
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 66ddcb60519a..dcdd5aed7eb1 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -662,6 +662,10 @@ int ping_common_sendmsg(int family, struct msghdr *msg, size_t len,
+ 	if (len > 0xFFFF)
+ 		return -EMSGSIZE;
+ 
++	/* Must have at least a full ICMP header. */
++	if (len < icmph_len)
++		return -EINVAL;
++
+ 	/*
+ 	 *	Check the flags.
+ 	 */
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index c0d71e7d663e..a2d54f5b0fe0 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1451,7 +1451,7 @@ static void udp_v4_rehash(struct sock *sk)
+ 	udp_lib_rehash(sk, new_hash);
+ }
+ 
+-static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
++int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ {
+ 	int rc;
+ 
+diff --git a/net/ipv4/udp_impl.h b/net/ipv4/udp_impl.h
+index 7e0fe4bdd967..feb50a16398d 100644
+--- a/net/ipv4/udp_impl.h
++++ b/net/ipv4/udp_impl.h
+@@ -25,7 +25,7 @@ int udp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int noblock,
+ 		int flags, int *addr_len);
+ int udp_sendpage(struct sock *sk, struct page *page, int offset, size_t size,
+ 		 int flags);
+-int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
++int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
+ void udp_destroy_sock(struct sock *sk);
+ 
+ #ifdef CONFIG_PROC_FS
+diff --git a/net/ipv4/udplite.c b/net/ipv4/udplite.c
+index 2eea073e27ef..705d9fbf0bcf 100644
+--- a/net/ipv4/udplite.c
++++ b/net/ipv4/udplite.c
+@@ -50,7 +50,7 @@ struct proto 	udplite_prot = {
+ 	.sendmsg	   = udp_sendmsg,
+ 	.recvmsg	   = udp_recvmsg,
+ 	.sendpage	   = udp_sendpage,
+-	.backlog_rcv	   = udp_queue_rcv_skb,
++	.backlog_rcv	   = __udp_queue_rcv_skb,
+ 	.hash		   = udp_lib_hash,
+ 	.unhash		   = udp_lib_unhash,
+ 	.get_port	   = udp_v4_get_port,
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index f5432d65e6bf..8f2e36f8afd9 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -163,7 +163,7 @@ static struct rt6_info *addrconf_get_prefix_route(const struct in6_addr *pfx,
+ 
+ static void addrconf_dad_start(struct inet6_ifaddr *ifp);
+ static void addrconf_dad_work(struct work_struct *w);
+-static void addrconf_dad_completed(struct inet6_ifaddr *ifp);
++static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id);
+ static void addrconf_dad_run(struct inet6_dev *idev);
+ static void addrconf_rs_timer(unsigned long data);
+ static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifa);
+@@ -2893,6 +2893,7 @@ static void add_addr(struct inet6_dev *idev, const struct in6_addr *addr,
+ 		spin_lock_bh(&ifp->lock);
+ 		ifp->flags &= ~IFA_F_TENTATIVE;
+ 		spin_unlock_bh(&ifp->lock);
++		rt_genid_bump_ipv6(dev_net(idev->dev));
+ 		ipv6_ifa_notify(RTM_NEWADDR, ifp);
+ 		in6_ifa_put(ifp);
+ 	}
+@@ -3736,7 +3737,7 @@ static void addrconf_dad_begin(struct inet6_ifaddr *ifp)
+ {
+ 	struct inet6_dev *idev = ifp->idev;
+ 	struct net_device *dev = idev->dev;
+-	bool notify = false;
++	bool bump_id, notify = false;
+ 
+ 	addrconf_join_solict(dev, &ifp->addr);
+ 
+@@ -3751,11 +3752,12 @@ static void addrconf_dad_begin(struct inet6_ifaddr *ifp)
+ 	    idev->cnf.accept_dad < 1 ||
+ 	    !(ifp->flags&IFA_F_TENTATIVE) ||
+ 	    ifp->flags & IFA_F_NODAD) {
++		bump_id = ifp->flags & IFA_F_TENTATIVE;
+ 		ifp->flags &= ~(IFA_F_TENTATIVE|IFA_F_OPTIMISTIC|IFA_F_DADFAILED);
+ 		spin_unlock(&ifp->lock);
+ 		read_unlock_bh(&idev->lock);
+ 
+-		addrconf_dad_completed(ifp);
++		addrconf_dad_completed(ifp, bump_id);
+ 		return;
+ 	}
+ 
+@@ -3815,8 +3817,8 @@ static void addrconf_dad_work(struct work_struct *w)
+ 						struct inet6_ifaddr,
+ 						dad_work);
+ 	struct inet6_dev *idev = ifp->idev;
++	bool bump_id, disable_ipv6 = false;
+ 	struct in6_addr mcaddr;
+-	bool disable_ipv6 = false;
+ 
+ 	enum {
+ 		DAD_PROCESS,
+@@ -3886,11 +3888,12 @@ static void addrconf_dad_work(struct work_struct *w)
+ 		 * DAD was successful
+ 		 */
+ 
++		bump_id = ifp->flags & IFA_F_TENTATIVE;
+ 		ifp->flags &= ~(IFA_F_TENTATIVE|IFA_F_OPTIMISTIC|IFA_F_DADFAILED);
+ 		spin_unlock(&ifp->lock);
+ 		write_unlock_bh(&idev->lock);
+ 
+-		addrconf_dad_completed(ifp);
++		addrconf_dad_completed(ifp, bump_id);
+ 
+ 		goto out;
+ 	}
+@@ -3927,7 +3930,7 @@ static bool ipv6_lonely_lladdr(struct inet6_ifaddr *ifp)
+ 	return true;
+ }
+ 
+-static void addrconf_dad_completed(struct inet6_ifaddr *ifp)
++static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id)
+ {
+ 	struct net_device *dev = ifp->idev->dev;
+ 	struct in6_addr lladdr;
+@@ -3978,6 +3981,9 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp)
+ 		spin_unlock(&ifp->lock);
+ 		write_unlock_bh(&ifp->idev->lock);
+ 	}
++
++	if (bump_id)
++		rt_genid_bump_ipv6(dev_net(dev));
+ }
+ 
+ static void addrconf_dad_run(struct inet6_dev *idev)
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 060a60b2f8a6..111ba55fd512 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -418,7 +418,7 @@ static int esp6_input(struct xfrm_state *x, struct sk_buff *skb)
+ 		esph = (void *)skb_push(skb, 4);
+ 		*seqhi = esph->spi;
+ 		esph->spi = esph->seq_no;
+-		esph->seq_no = htonl(XFRM_SKB_CB(skb)->seq.input.hi);
++		esph->seq_no = XFRM_SKB_CB(skb)->seq.input.hi;
+ 		aead_request_set_callback(req, 0, esp_input_done_esn, skb);
+ 	}
+ 
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index a09418bda1f8..93294cf4525c 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -98,7 +98,7 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
+ 		segs = ops->callbacks.gso_segment(skb, features);
+ 	}
+ 
+-	if (IS_ERR(segs))
++	if (IS_ERR_OR_NULL(segs))
+ 		goto out;
+ 
+ 	for (skb = segs; skb; skb = skb->next) {
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 41489f39c456..da4e7b377812 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1014,6 +1014,7 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
+ 	int mtu;
+ 	unsigned int psh_hlen = sizeof(struct ipv6hdr) + t->encap_hlen;
+ 	unsigned int max_headroom = psh_hlen;
++	bool use_cache = false;
+ 	int err = -1;
+ 
+ 	/* NBMA tunnel */
+@@ -1038,7 +1039,15 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
+ 
+ 		memcpy(&fl6->daddr, addr6, sizeof(fl6->daddr));
+ 		neigh_release(neigh);
+-	} else if (!fl6->flowi6_mark)
++	} else if (!(t->parms.flags &
++		     (IP6_TNL_F_USE_ORIG_TCLASS | IP6_TNL_F_USE_ORIG_FWMARK))) {
++		/* enable the cache only only if the routing decision does
++		 * not depend on the current inner header value
++		 */
++		use_cache = true;
++	}
++
++	if (use_cache)
+ 		dst = dst_cache_get(&t->dst_cache);
+ 
+ 	if (!ip6_tnl_xmit_ctl(t, &fl6->saddr, &fl6->daddr))
+@@ -1113,7 +1122,7 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
+ 		skb = new_skb;
+ 	}
+ 
+-	if (!fl6->flowi6_mark && ndst)
++	if (use_cache && ndst)
+ 		dst_cache_set_ip6(&t->dst_cache, ndst, &fl6->saddr);
+ 	skb_dst_set(skb, dst);
+ 
+@@ -1134,7 +1143,6 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
+ 	if (err)
+ 		return err;
+ 
+-	skb->protocol = htons(ETH_P_IPV6);
+ 	skb_push(skb, sizeof(struct ipv6hdr));
+ 	skb_reset_network_header(skb);
+ 	ipv6h = ipv6_hdr(skb);
+diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c
+index 462f2a76b5c2..1d184322a7b1 100644
+--- a/net/ipv6/output_core.c
++++ b/net/ipv6/output_core.c
+@@ -148,6 +148,8 @@ int __ip6_local_out(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 	ipv6_hdr(skb)->payload_len = htons(len);
+ 	IP6CB(skb)->nhoff = offsetof(struct ipv6hdr, nexthdr);
+ 
++	skb->protocol = htons(ETH_P_IPV6);
++
+ 	return nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT,
+ 		       net, sk, skb, NULL, skb_dst(skb)->dev,
+ 		       dst_output);
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index c2a8656c22eb..fa39ab8ec1fc 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -514,7 +514,7 @@ out:
+ 	return;
+ }
+ 
+-static int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
++int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ {
+ 	int rc;
+ 
+diff --git a/net/ipv6/udp_impl.h b/net/ipv6/udp_impl.h
+index 0682c031ccdc..3c1dbc9f74cf 100644
+--- a/net/ipv6/udp_impl.h
++++ b/net/ipv6/udp_impl.h
+@@ -26,7 +26,7 @@ int compat_udpv6_getsockopt(struct sock *sk, int level, int optname,
+ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len);
+ int udpv6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int noblock,
+ 		  int flags, int *addr_len);
+-int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
++int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
+ void udpv6_destroy_sock(struct sock *sk);
+ 
+ void udp_v6_clear_sk(struct sock *sk, int size);
+diff --git a/net/ipv6/udplite.c b/net/ipv6/udplite.c
+index fd6ef414899b..af2895c77ed6 100644
+--- a/net/ipv6/udplite.c
++++ b/net/ipv6/udplite.c
+@@ -45,7 +45,7 @@ struct proto udplitev6_prot = {
+ 	.getsockopt	   = udpv6_getsockopt,
+ 	.sendmsg	   = udpv6_sendmsg,
+ 	.recvmsg	   = udpv6_recvmsg,
+-	.backlog_rcv	   = udpv6_queue_rcv_skb,
++	.backlog_rcv	   = __udpv6_queue_rcv_skb,
+ 	.hash		   = udp_lib_hash,
+ 	.unhash		   = udp_lib_unhash,
+ 	.get_port	   = udp_v6_get_port,
+diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c
+index 42de4ccd159f..d0e906d39642 100644
+--- a/net/l2tp/l2tp_ip.c
++++ b/net/l2tp/l2tp_ip.c
+@@ -251,8 +251,6 @@ static int l2tp_ip_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 	int ret;
+ 	int chk_addr_ret;
+ 
+-	if (!sock_flag(sk, SOCK_ZAPPED))
+-		return -EINVAL;
+ 	if (addr_len < sizeof(struct sockaddr_l2tpip))
+ 		return -EINVAL;
+ 	if (addr->l2tp_family != AF_INET)
+@@ -267,6 +265,9 @@ static int l2tp_ip_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 	read_unlock_bh(&l2tp_ip_lock);
+ 
+ 	lock_sock(sk);
++	if (!sock_flag(sk, SOCK_ZAPPED))
++		goto out;
++
+ 	if (sk->sk_state != TCP_CLOSE || addr_len < sizeof(struct sockaddr_l2tpip))
+ 		goto out;
+ 
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index ea2ae6664cc8..b9c6a412b806 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -269,8 +269,6 @@ static int l2tp_ip6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 	int addr_type;
+ 	int err;
+ 
+-	if (!sock_flag(sk, SOCK_ZAPPED))
+-		return -EINVAL;
+ 	if (addr->l2tp_family != AF_INET6)
+ 		return -EINVAL;
+ 	if (addr_len < sizeof(*addr))
+@@ -296,6 +294,9 @@ static int l2tp_ip6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 	lock_sock(sk);
+ 
+ 	err = -EINVAL;
++	if (!sock_flag(sk, SOCK_ZAPPED))
++		goto out_unlock;
++
+ 	if (sk->sk_state != TCP_CLOSE)
+ 		goto out_unlock;
+ 
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 62bea4591054..246f29d365c0 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -329,7 +329,6 @@ static void netlink_sock_destruct(struct sock *sk)
+ 	if (nlk->cb_running) {
+ 		if (nlk->cb.done)
+ 			nlk->cb.done(&nlk->cb);
+-
+ 		module_put(nlk->cb.module);
+ 		kfree_skb(nlk->cb.skb);
+ 	}
+@@ -346,6 +345,14 @@ static void netlink_sock_destruct(struct sock *sk)
+ 	WARN_ON(nlk_sk(sk)->groups);
+ }
+ 
++static void netlink_sock_destruct_work(struct work_struct *work)
++{
++	struct netlink_sock *nlk = container_of(work, struct netlink_sock,
++						work);
++
++	sk_free(&nlk->sk);
++}
++
+ /* This lock without WQ_FLAG_EXCLUSIVE is good on UP and it is _very_ bad on
+  * SMP. Look, when several writers sleep and reader wakes them up, all but one
+  * immediately hit write lock and grab all the cpus. Exclusive sleep solves
+@@ -648,8 +655,18 @@ out_module:
+ static void deferred_put_nlk_sk(struct rcu_head *head)
+ {
+ 	struct netlink_sock *nlk = container_of(head, struct netlink_sock, rcu);
++	struct sock *sk = &nlk->sk;
++
++	if (!atomic_dec_and_test(&sk->sk_refcnt))
++		return;
++
++	if (nlk->cb_running && nlk->cb.done) {
++		INIT_WORK(&nlk->work, netlink_sock_destruct_work);
++		schedule_work(&nlk->work);
++		return;
++	}
+ 
+-	sock_put(&nlk->sk);
++	sk_free(sk);
+ }
+ 
+ static int netlink_release(struct socket *sock)
+diff --git a/net/netlink/af_netlink.h b/net/netlink/af_netlink.h
+index 3cfd6cc60504..4fdb38318977 100644
+--- a/net/netlink/af_netlink.h
++++ b/net/netlink/af_netlink.h
+@@ -3,6 +3,7 @@
+ 
+ #include <linux/rhashtable.h>
+ #include <linux/atomic.h>
++#include <linux/workqueue.h>
+ #include <net/sock.h>
+ 
+ #define NLGRPSZ(x)	(ALIGN(x, sizeof(unsigned long) * 8) / 8)
+@@ -33,6 +34,7 @@ struct netlink_sock {
+ 
+ 	struct rhash_head	node;
+ 	struct rcu_head		rcu;
++	struct work_struct	work;
+ };
+ 
+ static inline struct netlink_sock *nlk_sk(struct sock *sk)
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index d2238b204691..dd2332390c45 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3648,19 +3648,25 @@ packet_setsockopt(struct socket *sock, int level, int optname, char __user *optv
+ 
+ 		if (optlen != sizeof(val))
+ 			return -EINVAL;
+-		if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)
+-			return -EBUSY;
+ 		if (copy_from_user(&val, optval, sizeof(val)))
+ 			return -EFAULT;
+ 		switch (val) {
+ 		case TPACKET_V1:
+ 		case TPACKET_V2:
+ 		case TPACKET_V3:
+-			po->tp_version = val;
+-			return 0;
++			break;
+ 		default:
+ 			return -EINVAL;
+ 		}
++		lock_sock(sk);
++		if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {
++			ret = -EBUSY;
++		} else {
++			po->tp_version = val;
++			ret = 0;
++		}
++		release_sock(sk);
++		return ret;
+ 	}
+ 	case PACKET_RESERVE:
+ 	{
+@@ -4164,6 +4170,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 	/* Added to avoid minimal code churn */
+ 	struct tpacket_req *req = &req_u->req;
+ 
++	lock_sock(sk);
+ 	/* Opening a Tx-ring is NOT supported in TPACKET_V3 */
+ 	if (!closing && tx_ring && (po->tp_version > TPACKET_V2)) {
+ 		net_warn_ratelimited("Tx-ring is not supported.\n");
+@@ -4245,7 +4252,6 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 			goto out;
+ 	}
+ 
+-	lock_sock(sk);
+ 
+ 	/* Detach socket from network */
+ 	spin_lock(&po->bind_lock);
+@@ -4294,11 +4300,11 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 		if (!tx_ring)
+ 			prb_shutdown_retire_blk_timer(po, rb_queue);
+ 	}
+-	release_sock(sk);
+ 
+ 	if (pg_vec)
+ 		free_pg_vec(pg_vec, order, req->tp_block_nr);
+ out:
++	release_sock(sk);
+ 	return err;
+ }
+ 
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index b54d56d4959b..cf9b2fe8eac6 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -108,6 +108,17 @@ static void tcf_pedit_cleanup(struct tc_action *a, int bind)
+ 	kfree(keys);
+ }
+ 
++static bool offset_valid(struct sk_buff *skb, int offset)
++{
++	if (offset > 0 && offset > skb->len)
++		return false;
++
++	if  (offset < 0 && -offset > skb_headroom(skb))
++		return false;
++
++	return true;
++}
++
+ static int tcf_pedit(struct sk_buff *skb, const struct tc_action *a,
+ 		     struct tcf_result *res)
+ {
+@@ -134,6 +145,11 @@ static int tcf_pedit(struct sk_buff *skb, const struct tc_action *a,
+ 			if (tkey->offmask) {
+ 				char *d, _d;
+ 
++				if (!offset_valid(skb, off + tkey->at)) {
++					pr_info("tc filter pedit 'at' offset %d out of bounds\n",
++						off + tkey->at);
++					goto bad;
++				}
+ 				d = skb_header_pointer(skb, off + tkey->at, 1,
+ 						       &_d);
+ 				if (!d)
+@@ -146,10 +162,10 @@ static int tcf_pedit(struct sk_buff *skb, const struct tc_action *a,
+ 					" offset must be on 32 bit boundaries\n");
+ 				goto bad;
+ 			}
+-			if (offset > 0 && offset > skb->len) {
+-				pr_info("tc filter pedit"
+-					" offset %d can't exceed pkt length %d\n",
+-				       offset, skb->len);
++
++			if (!offset_valid(skb, off + offset)) {
++				pr_info("tc filter pedit offset %d out of bounds\n",
++					offset);
+ 				goto bad;
+ 			}
+ 
+diff --git a/net/sched/cls_basic.c b/net/sched/cls_basic.c
+index 0b8c3ace671f..1bf1f4517db6 100644
+--- a/net/sched/cls_basic.c
++++ b/net/sched/cls_basic.c
+@@ -62,9 +62,6 @@ static unsigned long basic_get(struct tcf_proto *tp, u32 handle)
+ 	struct basic_head *head = rtnl_dereference(tp->root);
+ 	struct basic_filter *f;
+ 
+-	if (head == NULL)
+-		return 0UL;
+-
+ 	list_for_each_entry(f, &head->flist, link) {
+ 		if (f->handle == handle) {
+ 			l = (unsigned long) f;
+@@ -109,7 +106,6 @@ static bool basic_destroy(struct tcf_proto *tp, bool force)
+ 		tcf_unbind_filter(tp, &f->res);
+ 		call_rcu(&f->rcu, basic_delete_filter);
+ 	}
+-	RCU_INIT_POINTER(tp->root, NULL);
+ 	kfree_rcu(head, rcu);
+ 	return true;
+ }
+diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
+index c3002c2c68bb..dbec45848fe1 100644
+--- a/net/sched/cls_bpf.c
++++ b/net/sched/cls_bpf.c
+@@ -200,7 +200,6 @@ static bool cls_bpf_destroy(struct tcf_proto *tp, bool force)
+ 		call_rcu(&prog->rcu, __cls_bpf_delete_prog);
+ 	}
+ 
+-	RCU_INIT_POINTER(tp->root, NULL);
+ 	kfree_rcu(head, rcu);
+ 	return true;
+ }
+@@ -211,9 +210,6 @@ static unsigned long cls_bpf_get(struct tcf_proto *tp, u32 handle)
+ 	struct cls_bpf_prog *prog;
+ 	unsigned long ret = 0UL;
+ 
+-	if (head == NULL)
+-		return 0UL;
+-
+ 	list_for_each_entry(prog, &head->plist, link) {
+ 		if (prog->handle == handle) {
+ 			ret = (unsigned long) prog;
+diff --git a/net/sched/cls_cgroup.c b/net/sched/cls_cgroup.c
+index 4c85bd3a750c..c104c2019feb 100644
+--- a/net/sched/cls_cgroup.c
++++ b/net/sched/cls_cgroup.c
+@@ -130,11 +130,10 @@ static bool cls_cgroup_destroy(struct tcf_proto *tp, bool force)
+ 
+ 	if (!force)
+ 		return false;
+-
+-	if (head) {
+-		RCU_INIT_POINTER(tp->root, NULL);
++	/* Head can still be NULL due to cls_cgroup_init(). */
++	if (head)
+ 		call_rcu(&head->rcu, cls_cgroup_destroy_rcu);
+-	}
++
+ 	return true;
+ }
+ 
+diff --git a/net/sched/cls_flow.c b/net/sched/cls_flow.c
+index fbfec6a18839..d7ba2b4ff0f3 100644
+--- a/net/sched/cls_flow.c
++++ b/net/sched/cls_flow.c
+@@ -583,7 +583,6 @@ static bool flow_destroy(struct tcf_proto *tp, bool force)
+ 		list_del_rcu(&f->list);
+ 		call_rcu(&f->rcu, flow_destroy_filter);
+ 	}
+-	RCU_INIT_POINTER(tp->root, NULL);
+ 	kfree_rcu(head, rcu);
+ 	return true;
+ }
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 5060801a2f6d..a411571c4d4a 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -13,6 +13,7 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/rhashtable.h>
++#include <linux/workqueue.h>
+ 
+ #include <linux/if_ether.h>
+ #include <linux/in6.h>
+@@ -55,7 +56,10 @@ struct cls_fl_head {
+ 	bool mask_assigned;
+ 	struct list_head filters;
+ 	struct rhashtable_params ht_params;
+-	struct rcu_head rcu;
++	union {
++		struct work_struct work;
++		struct rcu_head	rcu;
++	};
+ };
+ 
+ struct cls_fl_filter {
+@@ -239,6 +243,24 @@ static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f)
+ 	dev->netdev_ops->ndo_setup_tc(dev, tp->q->handle, tp->protocol, &tc);
+ }
+ 
++static void fl_destroy_sleepable(struct work_struct *work)
++{
++	struct cls_fl_head *head = container_of(work, struct cls_fl_head,
++						work);
++	if (head->mask_assigned)
++		rhashtable_destroy(&head->ht);
++	kfree(head);
++	module_put(THIS_MODULE);
++}
++
++static void fl_destroy_rcu(struct rcu_head *rcu)
++{
++	struct cls_fl_head *head = container_of(rcu, struct cls_fl_head, rcu);
++
++	INIT_WORK(&head->work, fl_destroy_sleepable);
++	schedule_work(&head->work);
++}
++
+ static bool fl_destroy(struct tcf_proto *tp, bool force)
+ {
+ 	struct cls_fl_head *head = rtnl_dereference(tp->root);
+@@ -252,10 +274,9 @@ static bool fl_destroy(struct tcf_proto *tp, bool force)
+ 		list_del_rcu(&f->list);
+ 		call_rcu(&f->rcu, fl_destroy_filter);
+ 	}
+-	RCU_INIT_POINTER(tp->root, NULL);
+-	if (head->mask_assigned)
+-		rhashtable_destroy(&head->ht);
+-	kfree_rcu(head, rcu);
++
++	__module_get(THIS_MODULE);
++	call_rcu(&head->rcu, fl_destroy_rcu);
+ 	return true;
+ }
+ 
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index 25927b6c4436..f935429bd5ef 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -114,7 +114,6 @@ static bool mall_destroy(struct tcf_proto *tp, bool force)
+ 
+ 		call_rcu(&f->rcu, mall_destroy_filter);
+ 	}
+-	RCU_INIT_POINTER(tp->root, NULL);
+ 	kfree_rcu(head, rcu);
+ 	return true;
+ }
+diff --git a/net/sched/cls_rsvp.h b/net/sched/cls_rsvp.h
+index f9c9fc075fe6..9992dfac6938 100644
+--- a/net/sched/cls_rsvp.h
++++ b/net/sched/cls_rsvp.h
+@@ -152,7 +152,8 @@ static int rsvp_classify(struct sk_buff *skb, const struct tcf_proto *tp,
+ 		return -1;
+ 	nhptr = ip_hdr(skb);
+ #endif
+-
++	if (unlikely(!head))
++		return -1;
+ restart:
+ 
+ #if RSVP_DST_LEN == 4
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index 944c8ff45055..403746b20263 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -503,7 +503,6 @@ static bool tcindex_destroy(struct tcf_proto *tp, bool force)
+ 	walker.fn = tcindex_destroy_element;
+ 	tcindex_walk(tp, &walker);
+ 
+-	RCU_INIT_POINTER(tp->root, NULL);
+ 	call_rcu(&p->rcu, __tcindex_destroy);
+ 	return true;
+ }
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 65b1bbf133bd..616769983bcd 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -402,6 +402,10 @@ int tipc_enable_l2_media(struct net *net, struct tipc_bearer *b,
+ 	dev = dev_get_by_name(net, driver_name);
+ 	if (!dev)
+ 		return -ENODEV;
++	if (tipc_mtu_bad(dev, 0)) {
++		dev_put(dev);
++		return -EINVAL;
++	}
+ 
+ 	/* Associate TIPC bearer with L2 bearer */
+ 	rcu_assign_pointer(b->media_ptr, dev);
+@@ -606,8 +610,6 @@ static int tipc_l2_device_event(struct notifier_block *nb, unsigned long evt,
+ 	if (!b)
+ 		return NOTIFY_DONE;
+ 
+-	b->mtu = dev->mtu;
+-
+ 	switch (evt) {
+ 	case NETDEV_CHANGE:
+ 		if (netif_carrier_ok(dev))
+@@ -621,6 +623,11 @@ static int tipc_l2_device_event(struct notifier_block *nb, unsigned long evt,
+ 		tipc_reset_bearer(net, b);
+ 		break;
+ 	case NETDEV_CHANGEMTU:
++		if (tipc_mtu_bad(dev, 0)) {
++			bearer_disable(net, b);
++			break;
++		}
++		b->mtu = dev->mtu;
+ 		tipc_reset_bearer(net, b);
+ 		break;
+ 	case NETDEV_CHANGEADDR:
+diff --git a/net/tipc/bearer.h b/net/tipc/bearer.h
+index 43757f1f9cb3..d93f1f1a21e6 100644
+--- a/net/tipc/bearer.h
++++ b/net/tipc/bearer.h
+@@ -39,6 +39,7 @@
+ 
+ #include "netlink.h"
+ #include "core.h"
++#include "msg.h"
+ #include <net/genetlink.h>
+ 
+ #define MAX_MEDIA	3
+@@ -59,6 +60,9 @@
+ #define TIPC_MEDIA_TYPE_IB	2
+ #define TIPC_MEDIA_TYPE_UDP	3
+ 
++/* minimum bearer MTU */
++#define TIPC_MIN_BEARER_MTU	(MAX_H_SIZE + INT_H_SIZE)
++
+ /**
+  * struct tipc_media_addr - destination address used by TIPC bearers
+  * @value: address info (format defined by media)
+@@ -213,4 +217,13 @@ void tipc_bearer_xmit(struct net *net, u32 bearer_id,
+ void tipc_bearer_bc_xmit(struct net *net, u32 bearer_id,
+ 			 struct sk_buff_head *xmitq);
+ 
++/* check if device MTU is too low for tipc headers */
++static inline bool tipc_mtu_bad(struct net_device *dev, unsigned int reserve)
++{
++	if (dev->mtu >= TIPC_MIN_BEARER_MTU + reserve)
++		return false;
++	netdev_warn(dev, "MTU too low for tipc bearer\n");
++	return true;
++}
++
+ #endif	/* _TIPC_BEARER_H */
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index ae7e14cae085..f60f346e75b3 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -372,6 +372,11 @@ static int tipc_udp_enable(struct net *net, struct tipc_bearer *b,
+ 		udp_conf.local_ip.s_addr = htonl(INADDR_ANY);
+ 		udp_conf.use_udp_checksums = false;
+ 		ub->ifindex = dev->ifindex;
++		if (tipc_mtu_bad(dev, sizeof(struct iphdr) +
++				      sizeof(struct udphdr))) {
++			err = -EINVAL;
++			goto err;
++		}
+ 		b->mtu = dev->mtu - sizeof(struct iphdr)
+ 			- sizeof(struct udphdr);
+ #if IS_ENABLED(CONFIG_IPV6)
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 8309687a56b0..568f307afdcf 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2199,7 +2199,8 @@ out:
+  *	Sleep until more data has arrived. But check for races..
+  */
+ static long unix_stream_data_wait(struct sock *sk, long timeo,
+-				  struct sk_buff *last, unsigned int last_len)
++				  struct sk_buff *last, unsigned int last_len,
++				  bool freezable)
+ {
+ 	struct sk_buff *tail;
+ 	DEFINE_WAIT(wait);
+@@ -2220,7 +2221,10 @@ static long unix_stream_data_wait(struct sock *sk, long timeo,
+ 
+ 		sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ 		unix_state_unlock(sk);
+-		timeo = freezable_schedule_timeout(timeo);
++		if (freezable)
++			timeo = freezable_schedule_timeout(timeo);
++		else
++			timeo = schedule_timeout(timeo);
+ 		unix_state_lock(sk);
+ 
+ 		if (sock_flag(sk, SOCK_DEAD))
+@@ -2250,7 +2254,8 @@ struct unix_stream_read_state {
+ 	unsigned int splice_flags;
+ };
+ 
+-static int unix_stream_read_generic(struct unix_stream_read_state *state)
++static int unix_stream_read_generic(struct unix_stream_read_state *state,
++				    bool freezable)
+ {
+ 	struct scm_cookie scm;
+ 	struct socket *sock = state->socket;
+@@ -2330,7 +2335,7 @@ again:
+ 			mutex_unlock(&u->iolock);
+ 
+ 			timeo = unix_stream_data_wait(sk, timeo, last,
+-						      last_len);
++						      last_len, freezable);
+ 
+ 			if (signal_pending(current)) {
+ 				err = sock_intr_errno(timeo);
+@@ -2472,7 +2477,7 @@ static int unix_stream_recvmsg(struct socket *sock, struct msghdr *msg,
+ 		.flags = flags
+ 	};
+ 
+-	return unix_stream_read_generic(&state);
++	return unix_stream_read_generic(&state, true);
+ }
+ 
+ static ssize_t skb_unix_socket_splice(struct sock *sk,
+@@ -2518,7 +2523,7 @@ static ssize_t unix_stream_splice_read(struct socket *sock,  loff_t *ppos,
+ 	    flags & SPLICE_F_NONBLOCK)
+ 		state.flags = MSG_DONTWAIT;
+ 
+-	return unix_stream_read_generic(&state);
++	return unix_stream_read_generic(&state, false);
+ }
+ 
+ static int unix_shutdown(struct socket *sock, int mode)

diff --git a/1520_fix-race-condition-in-packet-set-ring.patch b/1520_fix-race-condition-in-packet-set-ring.patch
deleted file mode 100644
index d85527f..0000000
--- a/1520_fix-race-condition-in-packet-set-ring.patch
+++ /dev/null
@@ -1,62 +0,0 @@
---- a/net/packet/af_packet.c	2016-12-07 18:10:25.785812861 -0500
-+++ b/net/packet/af_packet.c	2016-12-07 18:18:45.597933525 -0500
-@@ -3648,19 +3648,25 @@ packet_setsockopt(struct socket *sock, i
- 
- 		if (optlen != sizeof(val))
- 			return -EINVAL;
--		if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)
--			return -EBUSY;
- 		if (copy_from_user(&val, optval, sizeof(val)))
- 			return -EFAULT;
- 		switch (val) {
- 		case TPACKET_V1:
- 		case TPACKET_V2:
- 		case TPACKET_V3:
--			po->tp_version = val;
--			return 0;
-+			break;
- 		default:
- 			return -EINVAL;
- 		}
-+		lock_sock(sk);
-+		if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {
-+			ret = -EBUSY;
-+		} else {
-+			po->tp_version = val;
-+			ret = 0;
-+		}
-+		release_sock(sk);
-+		return ret;
- 	}
- 	case PACKET_RESERVE:
- 	{
-@@ -4164,6 +4170,7 @@ static int packet_set_ring(struct sock *
- 	/* Added to avoid minimal code churn */
- 	struct tpacket_req *req = &req_u->req;
- 
-+	lock_sock(sk);
- 	/* Opening a Tx-ring is NOT supported in TPACKET_V3 */
- 	if (!closing && tx_ring && (po->tp_version > TPACKET_V2)) {
- 		net_warn_ratelimited("Tx-ring is not supported.\n");
-@@ -4245,8 +4252,6 @@ static int packet_set_ring(struct sock *
- 			goto out;
- 	}
- 
--	lock_sock(sk);
--
- 	/* Detach socket from network */
- 	spin_lock(&po->bind_lock);
- 	was_running = po->running;
-@@ -4294,11 +4299,11 @@ static int packet_set_ring(struct sock *
- 		if (!tx_ring)
- 			prb_shutdown_retire_blk_timer(po, rb_queue);
- 	}
--	release_sock(sk);
- 
- 	if (pg_vec)
- 		free_pg_vec(pg_vec, order, req->tp_block_nr);
- out:
-+	release_sock(sk);
- 	return err;
- }
- 



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2016-12-15 23:43 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2016-12-15 23:43 UTC (permalink / raw
  To: gentoo-commits

commit:     5ab163abb40b21be3023de3568846f00c39d729a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 15 23:43:07 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Dec 15 23:43:07 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5ab163ab

Linux patch 4.8.15

 0000_README             |    4 +
 1014_linux-4.8.15.patch | 1042 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1046 insertions(+)

diff --git a/0000_README b/0000_README
index 9b67d47..37d0ff1 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1013_linux-4.8.14.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.14
 
+Patch:  1014_linux-4.8.15.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.15
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1014_linux-4.8.15.patch b/1014_linux-4.8.15.patch
new file mode 100644
index 0000000..fb44713
--- /dev/null
+++ b/1014_linux-4.8.15.patch
@@ -0,0 +1,1042 @@
+diff --git a/Makefile b/Makefile
+index 6a7492473a0d..c7f0e798ca34 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index 1e90bdbe3a6e..fb307de5422c 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -640,9 +640,8 @@
+ 				reg = <0x30730000 0x10000>;
+ 				interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX7D_LCDIF_PIXEL_ROOT_CLK>,
+-					<&clks IMX7D_CLK_DUMMY>,
+-					<&clks IMX7D_CLK_DUMMY>;
+-				clock-names = "pix", "axi", "disp_axi";
++					<&clks IMX7D_LCDIF_PIXEL_ROOT_CLK>;
++				clock-names = "pix", "axi";
+ 				status = "disabled";
+ 			};
+ 		};
+diff --git a/arch/arm/boot/dts/orion5x-linkstation-lsgl.dts b/arch/arm/boot/dts/orion5x-linkstation-lsgl.dts
+index 1cf644bfd7ea..51dc734cd5b9 100644
+--- a/arch/arm/boot/dts/orion5x-linkstation-lsgl.dts
++++ b/arch/arm/boot/dts/orion5x-linkstation-lsgl.dts
+@@ -82,6 +82,10 @@
+ 	gpios = <&gpio0 9 GPIO_ACTIVE_HIGH>;
+ };
+ 
++&sata {
++	nr-ports = <2>;
++};
++
+ &ehci1 {
+ 	status = "okay";
+ };
+diff --git a/arch/m68k/include/asm/delay.h b/arch/m68k/include/asm/delay.h
+index d28fa8fe26fe..c598d847d56b 100644
+--- a/arch/m68k/include/asm/delay.h
++++ b/arch/m68k/include/asm/delay.h
+@@ -114,6 +114,6 @@ static inline void __udelay(unsigned long usecs)
+  */
+ #define	HZSCALE		(268435456 / (1000000 / HZ))
+ 
+-#define ndelay(n) __delay(DIV_ROUND_UP((n) * ((((HZSCALE) >> 11) * (loops_per_jiffy >> 11)) >> 6), 1000));
++#define ndelay(n) __delay(DIV_ROUND_UP((n) * ((((HZSCALE) >> 11) * (loops_per_jiffy >> 11)) >> 6), 1000))
+ 
+ #endif /* defined(_M68K_DELAY_H) */
+diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
+index c2c43f714684..3a4ed9f91d57 100644
+--- a/arch/parisc/include/asm/pgtable.h
++++ b/arch/parisc/include/asm/pgtable.h
+@@ -65,9 +65,9 @@ static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+ 		unsigned long flags;				\
+ 		spin_lock_irqsave(&pa_tlb_lock, flags);		\
+ 		old_pte = *ptep;				\
+-		set_pte(ptep, pteval);				\
+ 		if (pte_inserted(old_pte))			\
+ 			purge_tlb_entries(mm, addr);		\
++		set_pte(ptep, pteval);				\
+ 		spin_unlock_irqrestore(&pa_tlb_lock, flags);	\
+ 	} while (0)
+ 
+@@ -478,8 +478,8 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned
+ 		spin_unlock_irqrestore(&pa_tlb_lock, flags);
+ 		return 0;
+ 	}
+-	set_pte(ptep, pte_mkold(pte));
+ 	purge_tlb_entries(vma->vm_mm, addr);
++	set_pte(ptep, pte_mkold(pte));
+ 	spin_unlock_irqrestore(&pa_tlb_lock, flags);
+ 	return 1;
+ }
+@@ -492,9 +492,9 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+ 
+ 	spin_lock_irqsave(&pa_tlb_lock, flags);
+ 	old_pte = *ptep;
+-	set_pte(ptep, __pte(0));
+ 	if (pte_inserted(old_pte))
+ 		purge_tlb_entries(mm, addr);
++	set_pte(ptep, __pte(0));
+ 	spin_unlock_irqrestore(&pa_tlb_lock, flags);
+ 
+ 	return old_pte;
+@@ -504,8 +504,8 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
+ {
+ 	unsigned long flags;
+ 	spin_lock_irqsave(&pa_tlb_lock, flags);
+-	set_pte(ptep, pte_wrprotect(*ptep));
+ 	purge_tlb_entries(mm, addr);
++	set_pte(ptep, pte_wrprotect(*ptep));
+ 	spin_unlock_irqrestore(&pa_tlb_lock, flags);
+ }
+ 
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index c2259d4a3c33..bbb314eb7027 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -393,6 +393,15 @@ void __init parisc_setup_cache_timing(void)
+ 
+ 	/* calculate TLB flush threshold */
+ 
++	/* On SMP machines, skip the TLB measure of kernel text which
++	 * has been mapped as huge pages. */
++	if (num_online_cpus() > 1 && !parisc_requires_coherency()) {
++		threshold = max(cache_info.it_size, cache_info.dt_size);
++		threshold *= PAGE_SIZE;
++		threshold /= num_online_cpus();
++		goto set_tlb_threshold;
++	}
++
+ 	alltime = mfctl(16);
+ 	flush_tlb_all();
+ 	alltime = mfctl(16) - alltime;
+@@ -411,6 +420,8 @@ void __init parisc_setup_cache_timing(void)
+ 		alltime, size, rangetime);
+ 
+ 	threshold = PAGE_ALIGN(num_online_cpus() * size * alltime / rangetime);
++
++set_tlb_threshold:
+ 	if (threshold)
+ 		parisc_tlb_flush_threshold = threshold;
+ 	printk(KERN_INFO "TLB flush threshold set to %lu KiB\n",
+diff --git a/arch/parisc/kernel/pacache.S b/arch/parisc/kernel/pacache.S
+index 675521919229..a4761b772406 100644
+--- a/arch/parisc/kernel/pacache.S
++++ b/arch/parisc/kernel/pacache.S
+@@ -886,19 +886,10 @@ ENTRY(flush_dcache_page_asm)
+ 	fdc,m		r31(%r28)
+ 	fdc,m		r31(%r28)
+ 	fdc,m		r31(%r28)
+-	cmpb,COND(<<)		%r28, %r25,1b
++	cmpb,COND(<<)	%r28, %r25,1b
+ 	fdc,m		r31(%r28)
+ 
+ 	sync
+-
+-#ifdef CONFIG_PA20
+-	pdtlb,l		%r0(%r25)
+-#else
+-	tlb_lock	%r20,%r21,%r22
+-	pdtlb		%r0(%r25)
+-	tlb_unlock	%r20,%r21,%r22
+-#endif
+-
+ 	bv		%r0(%r2)
+ 	nop
+ 	.exit
+@@ -973,17 +964,6 @@ ENTRY(flush_icache_page_asm)
+ 	fic,m		%r31(%sr4,%r28)
+ 
+ 	sync
+-
+-#ifdef CONFIG_PA20
+-	pdtlb,l		%r0(%r28)
+-	pitlb,l         %r0(%sr4,%r25)
+-#else
+-	tlb_lock        %r20,%r21,%r22
+-	pdtlb		%r0(%r28)
+-	pitlb           %r0(%sr4,%r25)
+-	tlb_unlock      %r20,%r21,%r22
+-#endif
+-
+ 	bv		%r0(%r2)
+ 	nop
+ 	.exit
+diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
+index 1a2a6e8dc40d..1894beb2c208 100644
+--- a/arch/powerpc/boot/Makefile
++++ b/arch/powerpc/boot/Makefile
+@@ -78,7 +78,8 @@ src-wlib-y := string.S crt0.S crtsavres.S stdio.c main.c \
+ 		ns16550.c serial.c simple_alloc.c div64.S util.S \
+ 		gunzip_util.c elf_util.c $(zlib) devtree.c stdlib.c \
+ 		oflib.c ofconsole.c cuboot.c mpsc.c cpm-serial.c \
+-		uartlite.c mpc52xx-psc.c opal.c opal-calls.S
++		uartlite.c mpc52xx-psc.c opal.c
++src-wlib-$(CONFIG_PPC64_BOOT_WRAPPER) +=  opal-calls.S
+ src-wlib-$(CONFIG_40x) += 4xx.c planetcore.c
+ src-wlib-$(CONFIG_44x) += 4xx.c ebony.c bamboo.c
+ src-wlib-$(CONFIG_8xx) += mpc8xx.c planetcore.c fsl-soc.c
+diff --git a/arch/powerpc/boot/opal.c b/arch/powerpc/boot/opal.c
+index d7b4fd47eb44..0272570d02de 100644
+--- a/arch/powerpc/boot/opal.c
++++ b/arch/powerpc/boot/opal.c
+@@ -13,7 +13,7 @@
+ #include <libfdt.h>
+ #include "../include/asm/opal-api.h"
+ 
+-#ifdef __powerpc64__
++#ifdef CONFIG_PPC64_BOOT_WRAPPER
+ 
+ /* Global OPAL struct used by opal-call.S */
+ struct opal {
+diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
+index 29aa8d1ce273..248f28bc4641 100644
+--- a/arch/powerpc/kernel/eeh_driver.c
++++ b/arch/powerpc/kernel/eeh_driver.c
+@@ -671,8 +671,10 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ 
+ 	/* Clear frozen state */
+ 	rc = eeh_clear_pe_frozen_state(pe, false);
+-	if (rc)
++	if (rc) {
++		pci_unlock_rescan_remove();
+ 		return rc;
++	}
+ 
+ 	/* Give the system 5 seconds to finish running the user-space
+ 	 * hotplug shutdown scripts, e.g. ifdown for ethernet.  Yes,
+diff --git a/arch/powerpc/mm/hash64_4k.c b/arch/powerpc/mm/hash64_4k.c
+index 42c702b3be1f..6fa450c12d6d 100644
+--- a/arch/powerpc/mm/hash64_4k.c
++++ b/arch/powerpc/mm/hash64_4k.c
+@@ -55,7 +55,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
+ 	 */
+ 	rflags = htab_convert_pte_flags(new_pte);
+ 
+-	if (!cpu_has_feature(CPU_FTR_NOEXECUTE) &&
++	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
+ 	    !cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
+ 		rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
+ 
+diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
+index 3bbbea07378c..1a68cb19b0e3 100644
+--- a/arch/powerpc/mm/hash64_64k.c
++++ b/arch/powerpc/mm/hash64_64k.c
+@@ -87,7 +87,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
+ 	subpg_pte = new_pte & ~subpg_prot;
+ 	rflags = htab_convert_pte_flags(subpg_pte);
+ 
+-	if (!cpu_has_feature(CPU_FTR_NOEXECUTE) &&
++	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
+ 	    !cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) {
+ 
+ 		/*
+@@ -258,7 +258,7 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
+ 
+ 	rflags = htab_convert_pte_flags(new_pte);
+ 
+-	if (!cpu_has_feature(CPU_FTR_NOEXECUTE) &&
++	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
+ 	    !cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
+ 		rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
+ 
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index a4e070a51584..8c925ecaf534 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -68,7 +68,7 @@ u64 x86_perf_event_update(struct perf_event *event)
+ 	int shift = 64 - x86_pmu.cntval_bits;
+ 	u64 prev_raw_count, new_raw_count;
+ 	int idx = hwc->idx;
+-	s64 delta;
++	u64 delta;
+ 
+ 	if (idx == INTEL_PMC_IDX_FIXED_BTS)
+ 		return 0;
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 4c9a79b9cd69..3ef34c6e63ca 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4024,7 +4024,7 @@ __init int intel_pmu_init(void)
+ 
+ 	/* Support full width counters using alternative MSR range */
+ 	if (x86_pmu.intel_cap.full_width_write) {
+-		x86_pmu.max_period = x86_pmu.cntval_mask;
++		x86_pmu.max_period = x86_pmu.cntval_mask >> 1;
+ 		x86_pmu.perfctr = MSR_IA32_PMC0;
+ 		pr_cont("full-width counters, ");
+ 	}
+diff --git a/crypto/Makefile b/crypto/Makefile
+index 99cc64ac70ef..bd6a029094e6 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -40,6 +40,7 @@ obj-$(CONFIG_CRYPTO_ECDH) += ecdh_generic.o
+ 
+ $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
+ $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
++$(obj)/rsa_helper.o: $(obj)/rsapubkey-asn1.h $(obj)/rsaprivkey-asn1.h
+ clean-files += rsapubkey-asn1.c rsapubkey-asn1.h
+ clean-files += rsaprivkey-asn1.c rsaprivkey-asn1.h
+ 
+diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c
+index 86fb59b109a9..c6e992082259 100644
+--- a/crypto/mcryptd.c
++++ b/crypto/mcryptd.c
+@@ -254,18 +254,22 @@ out_free_inst:
+ 	goto out;
+ }
+ 
+-static inline void mcryptd_check_internal(struct rtattr **tb, u32 *type,
++static inline bool mcryptd_check_internal(struct rtattr **tb, u32 *type,
+ 					  u32 *mask)
+ {
+ 	struct crypto_attr_type *algt;
+ 
+ 	algt = crypto_get_attr_type(tb);
+ 	if (IS_ERR(algt))
+-		return;
+-	if ((algt->type & CRYPTO_ALG_INTERNAL))
+-		*type |= CRYPTO_ALG_INTERNAL;
+-	if ((algt->mask & CRYPTO_ALG_INTERNAL))
+-		*mask |= CRYPTO_ALG_INTERNAL;
++		return false;
++
++	*type |= algt->type & CRYPTO_ALG_INTERNAL;
++	*mask |= algt->mask & CRYPTO_ALG_INTERNAL;
++
++	if (*type & *mask & CRYPTO_ALG_INTERNAL)
++		return true;
++	else
++		return false;
+ }
+ 
+ static int mcryptd_hash_init_tfm(struct crypto_tfm *tfm)
+@@ -492,7 +496,8 @@ static int mcryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb,
+ 	u32 mask = 0;
+ 	int err;
+ 
+-	mcryptd_check_internal(tb, &type, &mask);
++	if (!mcryptd_check_internal(tb, &type, &mask))
++		return -EINVAL;
+ 
+ 	halg = ahash_attr_alg(tb[1], type, mask);
+ 	if (IS_ERR(halg))
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 2accf784534e..93e0d8333a20 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -94,7 +94,7 @@ static struct acpi_device *to_acpi_dev(struct acpi_nfit_desc *acpi_desc)
+ 	return to_acpi_device(acpi_desc->dev);
+ }
+ 
+-static int xlat_status(void *buf, unsigned int cmd, u32 status)
++static int xlat_bus_status(void *buf, unsigned int cmd, u32 status)
+ {
+ 	struct nd_cmd_clear_error *clear_err;
+ 	struct nd_cmd_ars_status *ars_status;
+@@ -113,7 +113,7 @@ static int xlat_status(void *buf, unsigned int cmd, u32 status)
+ 		flags = ND_ARS_PERSISTENT | ND_ARS_VOLATILE;
+ 		if ((status >> 16 & flags) == 0)
+ 			return -ENOTTY;
+-		break;
++		return 0;
+ 	case ND_CMD_ARS_START:
+ 		/* ARS is in progress */
+ 		if ((status & 0xffff) == NFIT_ARS_START_BUSY)
+@@ -122,7 +122,7 @@ static int xlat_status(void *buf, unsigned int cmd, u32 status)
+ 		/* Command failed */
+ 		if (status & 0xffff)
+ 			return -EIO;
+-		break;
++		return 0;
+ 	case ND_CMD_ARS_STATUS:
+ 		ars_status = buf;
+ 		/* Command failed */
+@@ -146,7 +146,8 @@ static int xlat_status(void *buf, unsigned int cmd, u32 status)
+ 		 * then just continue with the returned results.
+ 		 */
+ 		if (status == NFIT_ARS_STATUS_INTR) {
+-			if (ars_status->flags & NFIT_ARS_F_OVERFLOW)
++			if (ars_status->out_length >= 40 && (ars_status->flags
++						& NFIT_ARS_F_OVERFLOW))
+ 				return -ENOSPC;
+ 			return 0;
+ 		}
+@@ -154,7 +155,7 @@ static int xlat_status(void *buf, unsigned int cmd, u32 status)
+ 		/* Unknown status */
+ 		if (status >> 16)
+ 			return -EIO;
+-		break;
++		return 0;
+ 	case ND_CMD_CLEAR_ERROR:
+ 		clear_err = buf;
+ 		if (status & 0xffff)
+@@ -163,7 +164,7 @@ static int xlat_status(void *buf, unsigned int cmd, u32 status)
+ 			return -EIO;
+ 		if (clear_err->length > clear_err->cleared)
+ 			return clear_err->cleared;
+-		break;
++		return 0;
+ 	default:
+ 		break;
+ 	}
+@@ -174,6 +175,16 @@ static int xlat_status(void *buf, unsigned int cmd, u32 status)
+ 	return 0;
+ }
+ 
++static int xlat_status(struct nvdimm *nvdimm, void *buf, unsigned int cmd,
++		u32 status)
++{
++	if (!nvdimm)
++		return xlat_bus_status(buf, cmd, status);
++	if (status)
++		return -EIO;
++	return 0;
++}
++
+ static int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc,
+ 		struct nvdimm *nvdimm, unsigned int cmd, void *buf,
+ 		unsigned int buf_len, int *cmd_rc)
+@@ -298,7 +309,8 @@ static int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc,
+ 
+ 	for (i = 0, offset = 0; i < desc->out_num; i++) {
+ 		u32 out_size = nd_cmd_out_size(nvdimm, cmd, desc, i, buf,
+-				(u32 *) out_obj->buffer.pointer);
++				(u32 *) out_obj->buffer.pointer,
++				out_obj->buffer.length - offset);
+ 
+ 		if (offset + out_size > out_obj->buffer.length) {
+ 			dev_dbg(dev, "%s:%s output object underflow cmd: %s field: %d\n",
+@@ -333,7 +345,8 @@ static int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc,
+ 			 */
+ 			rc = buf_len - offset - in_buf.buffer.length;
+ 			if (cmd_rc)
+-				*cmd_rc = xlat_status(buf, cmd, fw_status);
++				*cmd_rc = xlat_status(nvdimm, buf, cmd,
++						fw_status);
+ 		} else {
+ 			dev_err(dev, "%s:%s underrun cmd: %s buf_len: %d out_len: %d\n",
+ 					__func__, dimm_name, cmd_name, buf_len,
+@@ -343,7 +356,7 @@ static int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc,
+ 	} else {
+ 		rc = 0;
+ 		if (cmd_rc)
+-			*cmd_rc = xlat_status(buf, cmd, fw_status);
++			*cmd_rc = xlat_status(nvdimm, buf, cmd, fw_status);
+ 	}
+ 
+  out:
+@@ -1857,19 +1870,32 @@ static int ars_get_status(struct acpi_nfit_desc *acpi_desc)
+ 	return cmd_rc;
+ }
+ 
+-static int ars_status_process_records(struct nvdimm_bus *nvdimm_bus,
++static int ars_status_process_records(struct acpi_nfit_desc *acpi_desc,
+ 		struct nd_cmd_ars_status *ars_status)
+ {
++	struct nvdimm_bus *nvdimm_bus = acpi_desc->nvdimm_bus;
+ 	int rc;
+ 	u32 i;
+ 
++	/*
++	 * First record starts at 44 byte offset from the start of the
++	 * payload.
++	 */
++	if (ars_status->out_length < 44)
++		return 0;
+ 	for (i = 0; i < ars_status->num_records; i++) {
++		/* only process full records */
++		if (ars_status->out_length
++				< 44 + sizeof(struct nd_ars_record) * (i + 1))
++			break;
+ 		rc = nvdimm_bus_add_poison(nvdimm_bus,
+ 				ars_status->records[i].err_address,
+ 				ars_status->records[i].length);
+ 		if (rc)
+ 			return rc;
+ 	}
++	if (i < ars_status->num_records)
++		dev_warn(acpi_desc->dev, "detected truncated ars results\n");
+ 
+ 	return 0;
+ }
+@@ -2122,8 +2148,7 @@ static int acpi_nfit_query_poison(struct acpi_nfit_desc *acpi_desc,
+ 	if (rc < 0 && rc != -ENOSPC)
+ 		return rc;
+ 
+-	if (ars_status_process_records(acpi_desc->nvdimm_bus,
+-				acpi_desc->ars_status))
++	if (ars_status_process_records(acpi_desc, acpi_desc->ars_status))
+ 		return -ENOMEM;
+ 
+ 	return 0;
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 2b38c1bb0446..7a2e4d45b266 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -47,32 +47,15 @@ static void acpi_sleep_tts_switch(u32 acpi_state)
+ 	}
+ }
+ 
+-static void acpi_sleep_pts_switch(u32 acpi_state)
+-{
+-	acpi_status status;
+-
+-	status = acpi_execute_simple_method(NULL, "\\_PTS", acpi_state);
+-	if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) {
+-		/*
+-		 * OS can't evaluate the _PTS object correctly. Some warning
+-		 * message will be printed. But it won't break anything.
+-		 */
+-		printk(KERN_NOTICE "Failure in evaluating _PTS object\n");
+-	}
+-}
+-
+-static int sleep_notify_reboot(struct notifier_block *this,
++static int tts_notify_reboot(struct notifier_block *this,
+ 			unsigned long code, void *x)
+ {
+ 	acpi_sleep_tts_switch(ACPI_STATE_S5);
+-
+-	acpi_sleep_pts_switch(ACPI_STATE_S5);
+-
+ 	return NOTIFY_DONE;
+ }
+ 
+-static struct notifier_block sleep_notifier = {
+-	.notifier_call	= sleep_notify_reboot,
++static struct notifier_block tts_notifier = {
++	.notifier_call	= tts_notify_reboot,
+ 	.next		= NULL,
+ 	.priority	= 0,
+ };
+@@ -916,9 +899,9 @@ int __init acpi_sleep_init(void)
+ 	pr_info(PREFIX "(supports%s)\n", supported);
+ 
+ 	/*
+-	 * Register the sleep_notifier to reboot notifier list so that the _TTS
+-	 * and _PTS object can also be evaluated when the system enters S5.
++	 * Register the tts_notifier to reboot notifier list so that the _TTS
++	 * object can also be evaluated when the system enters S5.
+ 	 */
+-	register_reboot_notifier(&sleep_notifier);
++	register_reboot_notifier(&tts_notifier);
+ 	return 0;
+ }
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 5163c8f918cb..5497f7fc44d0 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1413,8 +1413,14 @@ static ssize_t hot_remove_store(struct class *class,
+ 	return ret ? ret : count;
+ }
+ 
++/*
++ * NOTE: hot_add attribute is not the usual read-only sysfs attribute. In a
++ * sense that reading from this file does alter the state of your system -- it
++ * creates a new un-initialized zram device and returns back this device's
++ * device_id (or an error code if it fails to create a new device).
++ */
+ static struct class_attribute zram_control_class_attrs[] = {
+-	__ATTR_RO(hot_add),
++	__ATTR(hot_add, 0400, hot_add_show, NULL),
+ 	__ATTR_WO(hot_remove),
+ 	__ATTR_NULL,
+ };
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index 0ec112ee5204..2341f3799591 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -557,8 +557,9 @@ static int caam_probe(struct platform_device *pdev)
+ 	 * Enable DECO watchdogs and, if this is a PHYS_ADDR_T_64BIT kernel,
+ 	 * long pointers in master configuration register
+ 	 */
+-	clrsetbits_32(&ctrl->mcr, MCFGR_AWCACHE_MASK, MCFGR_AWCACHE_CACH |
+-		      MCFGR_AWCACHE_BUFF | MCFGR_WDENABLE | MCFGR_LARGE_BURST |
++	clrsetbits_32(&ctrl->mcr, MCFGR_AWCACHE_MASK | MCFGR_LONG_PTR,
++		      MCFGR_AWCACHE_CACH | MCFGR_AWCACHE_BUFF |
++		      MCFGR_WDENABLE | MCFGR_LARGE_BURST |
+ 		      (sizeof(dma_addr_t) == sizeof(u64) ? MCFGR_LONG_PTR : 0));
+ 
+ 	/*
+diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
+index b111e14bac1e..13e89afdbb87 100644
+--- a/drivers/crypto/marvell/hash.c
++++ b/drivers/crypto/marvell/hash.c
+@@ -168,12 +168,11 @@ static void mv_cesa_ahash_std_step(struct ahash_request *req)
+ 	mv_cesa_adjust_op(engine, &creq->op_tmpl);
+ 	memcpy_toio(engine->sram, &creq->op_tmpl, sizeof(creq->op_tmpl));
+ 
+-	digsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(req));
+-	for (i = 0; i < digsize / 4; i++)
+-		writel_relaxed(creq->state[i], engine->regs + CESA_IVDIG(i));
+-
+-	mv_cesa_adjust_op(engine, &creq->op_tmpl);
+-	memcpy_toio(engine->sram, &creq->op_tmpl, sizeof(creq->op_tmpl));
++	if (!sreq->offset) {
++		digsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(req));
++		for (i = 0; i < digsize / 4; i++)
++			writel_relaxed(creq->state[i], engine->regs + CESA_IVDIG(i));
++	}
+ 
+ 	if (creq->cache_ptr)
+ 		memcpy_toio(engine->sram + CESA_SA_DATA_SRAM_OFFSET,
+diff --git a/drivers/dax/dax.c b/drivers/dax/dax.c
+index ff64313770bd..4894199cebab 100644
+--- a/drivers/dax/dax.c
++++ b/drivers/dax/dax.c
+@@ -324,7 +324,7 @@ static int check_vma(struct dax_dev *dax_dev, struct vm_area_struct *vma,
+ 		return -ENXIO;
+ 
+ 	/* prevent private mappings from being established */
+-	if ((vma->vm_flags & VM_SHARED) != VM_SHARED) {
++	if ((vma->vm_flags & VM_MAYSHARE) != VM_MAYSHARE) {
+ 		dev_info(dev, "%s: %s: fail, attempted private mapping\n",
+ 				current->comm, func);
+ 		return -EINVAL;
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+index bfb91d8fa460..1006af40481d 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+@@ -872,23 +872,25 @@ lbl_free_candev:
+ static void peak_usb_disconnect(struct usb_interface *intf)
+ {
+ 	struct peak_usb_device *dev;
++	struct peak_usb_device *dev_prev_siblings;
+ 
+ 	/* unregister as many netdev devices as siblings */
+-	for (dev = usb_get_intfdata(intf); dev; dev = dev->prev_siblings) {
++	for (dev = usb_get_intfdata(intf); dev; dev = dev_prev_siblings) {
+ 		struct net_device *netdev = dev->netdev;
+ 		char name[IFNAMSIZ];
+ 
++		dev_prev_siblings = dev->prev_siblings;
+ 		dev->state &= ~PCAN_USB_STATE_CONNECTED;
+ 		strncpy(name, netdev->name, IFNAMSIZ);
+ 
+ 		unregister_netdev(netdev);
+-		free_candev(netdev);
+ 
+ 		kfree(dev->cmd_buf);
+ 		dev->next_siblings = NULL;
+ 		if (dev->adapter->dev_free)
+ 			dev->adapter->dev_free(dev);
+ 
++		free_candev(netdev);
+ 		dev_info(&intf->dev, "%s removed\n", name);
+ 	}
+ 
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index a8b6949a8778..23d4a1728cdf 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -715,7 +715,7 @@ EXPORT_SYMBOL_GPL(nd_cmd_in_size);
+ 
+ u32 nd_cmd_out_size(struct nvdimm *nvdimm, int cmd,
+ 		const struct nd_cmd_desc *desc, int idx, const u32 *in_field,
+-		const u32 *out_field)
++		const u32 *out_field, unsigned long remainder)
+ {
+ 	if (idx >= desc->out_num)
+ 		return UINT_MAX;
+@@ -727,9 +727,24 @@ u32 nd_cmd_out_size(struct nvdimm *nvdimm, int cmd,
+ 		return in_field[1];
+ 	else if (nvdimm && cmd == ND_CMD_VENDOR && idx == 2)
+ 		return out_field[1];
+-	else if (!nvdimm && cmd == ND_CMD_ARS_STATUS && idx == 2)
+-		return out_field[1] - 8;
+-	else if (cmd == ND_CMD_CALL) {
++	else if (!nvdimm && cmd == ND_CMD_ARS_STATUS && idx == 2) {
++		/*
++		 * Per table 9-276 ARS Data in ACPI 6.1, out_field[1] is
++		 * "Size of Output Buffer in bytes, including this
++		 * field."
++		 */
++		if (out_field[1] < 4)
++			return 0;
++		/*
++		 * ACPI 6.1 is ambiguous if 'status' is included in the
++		 * output size. If we encounter an output size that
++		 * overshoots the remainder by 4 bytes, assume it was
++		 * including 'status'.
++		 */
++		if (out_field[1] - 8 == remainder)
++			return remainder;
++		return out_field[1] - 4;
++	} else if (cmd == ND_CMD_CALL) {
+ 		struct nd_cmd_pkg *pkg = (struct nd_cmd_pkg *) in_field;
+ 
+ 		return pkg->nd_size_out;
+@@ -876,7 +891,7 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm,
+ 	/* process an output envelope */
+ 	for (i = 0; i < desc->out_num; i++) {
+ 		u32 out_size = nd_cmd_out_size(nvdimm, cmd, desc, i,
+-				(u32 *) in_env, (u32 *) out_env);
++				(u32 *) in_env, (u32 *) out_env, 0);
+ 		u32 copy;
+ 
+ 		if (out_size == UINT_MAX) {
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 7080ce2920fd..8214ebae9d50 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -1323,18 +1323,20 @@ lpfc_sli_ringtxcmpl_put(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ {
+ 	lockdep_assert_held(&phba->hbalock);
+ 
+-	BUG_ON(!piocb || !piocb->vport);
++	BUG_ON(!piocb);
+ 
+ 	list_add_tail(&piocb->list, &pring->txcmplq);
+ 	piocb->iocb_flag |= LPFC_IO_ON_TXCMPLQ;
+ 
+ 	if ((unlikely(pring->ringno == LPFC_ELS_RING)) &&
+ 	   (piocb->iocb.ulpCommand != CMD_ABORT_XRI_CN) &&
+-	   (piocb->iocb.ulpCommand != CMD_CLOSE_XRI_CN) &&
+-	    (!(piocb->vport->load_flag & FC_UNLOADING)))
+-		mod_timer(&piocb->vport->els_tmofunc,
+-			  jiffies +
+-			  msecs_to_jiffies(1000 * (phba->fc_ratov << 1)));
++	   (piocb->iocb.ulpCommand != CMD_CLOSE_XRI_CN)) {
++		BUG_ON(!piocb->vport);
++		if (!(piocb->vport->load_flag & FC_UNLOADING))
++			mod_timer(&piocb->vport->els_tmofunc,
++				  jiffies +
++				  msecs_to_jiffies(1000 * (phba->fc_ratov << 1)));
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index e3b30ea9ece5..a504e2e003da 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -506,7 +506,7 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
+ 	 * executing.
+ 	 */
+ 
+-	if (!vhost_vsock_get(vsk->local_addr.svm_cid)) {
++	if (!vhost_vsock_get(vsk->remote_addr.svm_cid)) {
+ 		sock_set_flag(sk, SOCK_DONE);
+ 		vsk->peer_shutdown = SHUTDOWN_MASK;
+ 		sk->sk_state = SS_UNCONNECTED;
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index df4b3e6fa563..93142bfe6112 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -1257,26 +1257,30 @@ static int ceph_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 			return -ECHILD;
+ 
+ 		op = ceph_snap(dir) == CEPH_SNAPDIR ?
+-			CEPH_MDS_OP_LOOKUPSNAP : CEPH_MDS_OP_LOOKUP;
++			CEPH_MDS_OP_LOOKUPSNAP : CEPH_MDS_OP_GETATTR;
+ 		req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
+ 		if (!IS_ERR(req)) {
+ 			req->r_dentry = dget(dentry);
+-			req->r_num_caps = 2;
++			req->r_num_caps = op == CEPH_MDS_OP_GETATTR ? 1 : 2;
+ 
+ 			mask = CEPH_STAT_CAP_INODE | CEPH_CAP_AUTH_SHARED;
+ 			if (ceph_security_xattr_wanted(dir))
+ 				mask |= CEPH_CAP_XATTR_SHARED;
+ 			req->r_args.getattr.mask = mask;
+ 
+-			req->r_locked_dir = dir;
+ 			err = ceph_mdsc_do_request(mdsc, NULL, req);
+-			if (err == 0 || err == -ENOENT) {
+-				if (dentry == req->r_dentry) {
+-					valid = !d_unhashed(dentry);
+-				} else {
+-					d_invalidate(req->r_dentry);
+-					err = -EAGAIN;
+-				}
++			switch (err) {
++			case 0:
++				if (d_really_is_positive(dentry) &&
++				    d_inode(dentry) == req->r_target_inode)
++					valid = 1;
++				break;
++			case -ENOENT:
++				if (d_really_is_negative(dentry))
++					valid = 1;
++				/* Fallthrough */
++			default:
++				break;
+ 			}
+ 			ceph_mdsc_put_request(req);
+ 			dout("d_revalidate %p lookup result=%d\n",
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 4ff9251e9d3a..eb5373a026e3 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1709,8 +1709,6 @@ static int fuse_setattr(struct dentry *entry, struct iattr *attr)
+ 		return -EACCES;
+ 
+ 	if (attr->ia_valid & (ATTR_KILL_SUID | ATTR_KILL_SGID)) {
+-		int kill;
+-
+ 		attr->ia_valid &= ~(ATTR_KILL_SUID | ATTR_KILL_SGID |
+ 				    ATTR_MODE);
+ 		/*
+@@ -1722,12 +1720,11 @@ static int fuse_setattr(struct dentry *entry, struct iattr *attr)
+ 			return ret;
+ 
+ 		attr->ia_mode = inode->i_mode;
+-		kill = should_remove_suid(entry);
+-		if (kill & ATTR_KILL_SUID) {
++		if (inode->i_mode & S_ISUID) {
+ 			attr->ia_valid |= ATTR_MODE;
+ 			attr->ia_mode &= ~S_ISUID;
+ 		}
+-		if (kill & ATTR_KILL_SGID) {
++		if ((inode->i_mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
+ 			attr->ia_valid |= ATTR_MODE;
+ 			attr->ia_mode &= ~S_ISGID;
+ 		}
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 797d9c8e9a1b..c8938eb21e34 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -105,22 +105,16 @@ extern bool cpuhp_tasks_frozen;
+ 		{ .notifier_call = fn, .priority = pri };	\
+ 	__register_cpu_notifier(&fn##_nb);			\
+ }
+-#else /* #if defined(CONFIG_HOTPLUG_CPU) || !defined(MODULE) */
+-#define cpu_notifier(fn, pri)	do { (void)(fn); } while (0)
+-#define __cpu_notifier(fn, pri)	do { (void)(fn); } while (0)
+-#endif /* #else #if defined(CONFIG_HOTPLUG_CPU) || !defined(MODULE) */
+ 
+-#ifdef CONFIG_HOTPLUG_CPU
+ extern int register_cpu_notifier(struct notifier_block *nb);
+ extern int __register_cpu_notifier(struct notifier_block *nb);
+ extern void unregister_cpu_notifier(struct notifier_block *nb);
+ extern void __unregister_cpu_notifier(struct notifier_block *nb);
+-#else
+ 
+-#ifndef MODULE
+-extern int register_cpu_notifier(struct notifier_block *nb);
+-extern int __register_cpu_notifier(struct notifier_block *nb);
+-#else
++#else /* #if defined(CONFIG_HOTPLUG_CPU) || !defined(MODULE) */
++#define cpu_notifier(fn, pri)	do { (void)(fn); } while (0)
++#define __cpu_notifier(fn, pri)	do { (void)(fn); } while (0)
++
+ static inline int register_cpu_notifier(struct notifier_block *nb)
+ {
+ 	return 0;
+@@ -130,7 +124,6 @@ static inline int __register_cpu_notifier(struct notifier_block *nb)
+ {
+ 	return 0;
+ }
+-#endif
+ 
+ static inline void unregister_cpu_notifier(struct notifier_block *nb)
+ {
+diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
+index bbfce62a0bd7..d02d65dfe2d0 100644
+--- a/include/linux/libnvdimm.h
++++ b/include/linux/libnvdimm.h
+@@ -153,7 +153,7 @@ u32 nd_cmd_in_size(struct nvdimm *nvdimm, int cmd,
+ 		const struct nd_cmd_desc *desc, int idx, void *buf);
+ u32 nd_cmd_out_size(struct nvdimm *nvdimm, int cmd,
+ 		const struct nd_cmd_desc *desc, int idx, const u32 *in_field,
+-		const u32 *out_field);
++		const u32 *out_field, unsigned long remainder);
+ int nvdimm_bus_check_dimm_count(struct nvdimm_bus *nvdimm_bus, int dimm_count);
+ struct nd_region *nvdimm_pmem_region_create(struct nvdimm_bus *nvdimm_bus,
+ 		struct nd_region_desc *ndr_desc);
+diff --git a/include/uapi/linux/can.h b/include/uapi/linux/can.h
+index 9692cda5f8fc..c48d93a28d1a 100644
+--- a/include/uapi/linux/can.h
++++ b/include/uapi/linux/can.h
+@@ -196,5 +196,6 @@ struct can_filter {
+ };
+ 
+ #define CAN_INV_FILTER 0x20000000U /* to be set in can_filter.can_id */
++#define CAN_RAW_FILTER_MAX 512 /* maximum number of can_filter set via setsockopt() */
+ 
+ #endif /* !_UAPI_CAN_H */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 341bf80f80bd..73fb59fda809 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -578,7 +578,6 @@ void __init cpuhp_threads_init(void)
+ 	kthread_unpark(this_cpu_read(cpuhp_state.thread));
+ }
+ 
+-#ifdef CONFIG_HOTPLUG_CPU
+ EXPORT_SYMBOL(register_cpu_notifier);
+ EXPORT_SYMBOL(__register_cpu_notifier);
+ void unregister_cpu_notifier(struct notifier_block *nb)
+@@ -595,6 +594,7 @@ void __unregister_cpu_notifier(struct notifier_block *nb)
+ }
+ EXPORT_SYMBOL(__unregister_cpu_notifier);
+ 
++#ifdef CONFIG_HOTPLUG_CPU
+ /**
+  * clear_tasks_mm_cpumask - Safely clear tasks' mm_cpumask for a CPU
+  * @cpu: a CPU id
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 1ec0f48962b3..2c49d76f96c3 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -65,8 +65,72 @@ static inline void clear_rt_mutex_waiters(struct rt_mutex *lock)
+ 
+ static void fixup_rt_mutex_waiters(struct rt_mutex *lock)
+ {
+-	if (!rt_mutex_has_waiters(lock))
+-		clear_rt_mutex_waiters(lock);
++	unsigned long owner, *p = (unsigned long *) &lock->owner;
++
++	if (rt_mutex_has_waiters(lock))
++		return;
++
++	/*
++	 * The rbtree has no waiters enqueued, now make sure that the
++	 * lock->owner still has the waiters bit set, otherwise the
++	 * following can happen:
++	 *
++	 * CPU 0	CPU 1		CPU2
++	 * l->owner=T1
++	 *		rt_mutex_lock(l)
++	 *		lock(l->lock)
++	 *		l->owner = T1 | HAS_WAITERS;
++	 *		enqueue(T2)
++	 *		boost()
++	 *		  unlock(l->lock)
++	 *		block()
++	 *
++	 *				rt_mutex_lock(l)
++	 *				lock(l->lock)
++	 *				l->owner = T1 | HAS_WAITERS;
++	 *				enqueue(T3)
++	 *				boost()
++	 *				  unlock(l->lock)
++	 *				block()
++	 *		signal(->T2)	signal(->T3)
++	 *		lock(l->lock)
++	 *		dequeue(T2)
++	 *		deboost()
++	 *		  unlock(l->lock)
++	 *				lock(l->lock)
++	 *				dequeue(T3)
++	 *				 ==> wait list is empty
++	 *				deboost()
++	 *				 unlock(l->lock)
++	 *		lock(l->lock)
++	 *		fixup_rt_mutex_waiters()
++	 *		  if (wait_list_empty(l) {
++	 *		    l->owner = owner
++	 *		    owner = l->owner & ~HAS_WAITERS;
++	 *		      ==> l->owner = T1
++	 *		  }
++	 *				lock(l->lock)
++	 * rt_mutex_unlock(l)		fixup_rt_mutex_waiters()
++	 *				  if (wait_list_empty(l) {
++	 *				    owner = l->owner & ~HAS_WAITERS;
++	 * cmpxchg(l->owner, T1, NULL)
++	 *  ===> Success (l->owner = NULL)
++	 *
++	 *				    l->owner = owner
++	 *				      ==> l->owner = T1
++	 *				  }
++	 *
++	 * With the check for the waiter bit in place T3 on CPU2 will not
++	 * overwrite. All tasks fiddling with the waiters bit are
++	 * serialized by l->lock, so nothing else can modify the waiters
++	 * bit. If the bit is set then nothing can change l->owner either
++	 * so the simple RMW is safe. The cmpxchg() will simply fail if it
++	 * happens in the middle of the RMW because the waiters bit is
++	 * still set.
++	 */
++	owner = READ_ONCE(*p);
++	if (owner & RT_MUTEX_HAS_WAITERS)
++		WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS);
+ }
+ 
+ /*
+diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h
+index 4f5f83c7d2d3..e317e1cbb3eb 100644
+--- a/kernel/locking/rtmutex_common.h
++++ b/kernel/locking/rtmutex_common.h
+@@ -75,8 +75,9 @@ task_top_pi_waiter(struct task_struct *p)
+ 
+ static inline struct task_struct *rt_mutex_owner(struct rt_mutex *lock)
+ {
+-	return (struct task_struct *)
+-		((unsigned long)lock->owner & ~RT_MUTEX_OWNER_MASKALL);
++	unsigned long owner = (unsigned long) READ_ONCE(lock->owner);
++
++	return (struct task_struct *) (owner & ~RT_MUTEX_OWNER_MASKALL);
+ }
+ 
+ /*
+diff --git a/kernel/sched/auto_group.c b/kernel/sched/auto_group.c
+index a5d966cb8891..418d9b6276a3 100644
+--- a/kernel/sched/auto_group.c
++++ b/kernel/sched/auto_group.c
+@@ -192,6 +192,7 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice)
+ {
+ 	static unsigned long next = INITIAL_JIFFIES;
+ 	struct autogroup *ag;
++	unsigned long shares;
+ 	int err;
+ 
+ 	if (nice < MIN_NICE || nice > MAX_NICE)
+@@ -210,9 +211,10 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice)
+ 
+ 	next = HZ / 10 + jiffies;
+ 	ag = autogroup_task_get(p);
++	shares = scale_load(sched_prio_to_weight[nice + 20]);
+ 
+ 	down_write(&ag->lock);
+-	err = sched_group_set_shares(ag->tg, sched_prio_to_weight[nice + 20]);
++	err = sched_group_set_shares(ag->tg, shares);
+ 	if (!err)
+ 		ag->nice = nice;
+ 	up_write(&ag->lock);
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 7e6df7a4964a..67f8fa9fc15a 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -2849,7 +2849,7 @@ static bool batadv_send_my_tt_response(struct batadv_priv *bat_priv,
+ 							     &tvlv_tt_data,
+ 							     &tt_change,
+ 							     &tt_len);
+-		if (!tt_len)
++		if (!tt_len || !tvlv_len)
+ 			goto unlock;
+ 
+ 		/* Copy the last orig_node's OGM buffer */
+@@ -2867,7 +2867,7 @@ static bool batadv_send_my_tt_response(struct batadv_priv *bat_priv,
+ 							     &tvlv_tt_data,
+ 							     &tt_change,
+ 							     &tt_len);
+-		if (!tt_len)
++		if (!tt_len || !tvlv_len)
+ 			goto out;
+ 
+ 		/* fill the rest of the tvlv with the real TT entries */
+diff --git a/net/can/raw.c b/net/can/raw.c
+index 972c187d40ab..b075f028d7e2 100644
+--- a/net/can/raw.c
++++ b/net/can/raw.c
+@@ -499,6 +499,9 @@ static int raw_setsockopt(struct socket *sock, int level, int optname,
+ 		if (optlen % sizeof(struct can_filter) != 0)
+ 			return -EINVAL;
+ 
++		if (optlen > CAN_RAW_FILTER_MAX * sizeof(struct can_filter))
++			return -EINVAL;
++
+ 		count = optlen / sizeof(struct can_filter);
+ 
+ 		if (count > 1) {


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2017-01-06 23:11 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2017-01-06 23:11 UTC (permalink / raw
  To: gentoo-commits

commit:     81befa1208eed1da45b7a12153560ab1c0c184ce
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan  6 23:11:42 2017 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan  6 23:11:42 2017 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=81befa12

Linux patch 4.8.16

 0000_README             |    4 +
 1015_linux-4.8.16.patch | 3559 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3563 insertions(+)

diff --git a/0000_README b/0000_README
index 37d0ff1..e7fac7c 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-4.8.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.15
 
+Patch:  1015_linux-4.8.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-4.8.16.patch b/1015_linux-4.8.16.patch
new file mode 100644
index 0000000..9977d7a
--- /dev/null
+++ b/1015_linux-4.8.16.patch
@@ -0,0 +1,3559 @@
+diff --git a/Makefile b/Makefile
+index c7f0e798ca34..50f68648a79a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
+index f193414d0f6f..4986dc0c1dff 100644
+--- a/arch/arm/xen/enlighten.c
++++ b/arch/arm/xen/enlighten.c
+@@ -372,8 +372,7 @@ static int __init xen_guest_init(void)
+ 	 * for secondary CPUs as they are brought up.
+ 	 * For uniformity we use VCPUOP_register_vcpu_info even on cpu0.
+ 	 */
+-	xen_vcpu_info = __alloc_percpu(sizeof(struct vcpu_info),
+-			                       sizeof(struct vcpu_info));
++	xen_vcpu_info = alloc_percpu(struct vcpu_info);
+ 	if (xen_vcpu_info == NULL)
+ 		return -ENOMEM;
+ 
+diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
+index 5420cb0fcb3e..e517088d635f 100644
+--- a/arch/arm64/include/asm/acpi.h
++++ b/arch/arm64/include/asm/acpi.h
+@@ -12,7 +12,7 @@
+ #ifndef _ASM_ACPI_H
+ #define _ASM_ACPI_H
+ 
+-#include <linux/mm.h>
++#include <linux/memblock.h>
+ #include <linux/psci.h>
+ 
+ #include <asm/cputype.h>
+@@ -32,7 +32,11 @@
+ static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
+ 					    acpi_size size)
+ {
+-	if (!page_is_ram(phys >> PAGE_SHIFT))
++	/*
++	 * EFI's reserve_regions() call adds memory with the WB attribute
++	 * to memblock via early_init_dt_add_memory_arch().
++	 */
++	if (!memblock_is_memory(phys))
+ 		return ioremap(phys, size);
+ 
+ 	return ioremap_cache(phys, size);
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index 536dce22fe76..514b4e3ba029 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -206,10 +206,15 @@ static void __init request_standard_resources(void)
+ 
+ 	for_each_memblock(memory, region) {
+ 		res = alloc_bootmem_low(sizeof(*res));
+-		res->name  = "System RAM";
++		if (memblock_is_nomap(region)) {
++			res->name  = "reserved";
++			res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
++		} else {
++			res->name  = "System RAM";
++			res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
++		}
+ 		res->start = __pfn_to_phys(memblock_region_memory_base_pfn(region));
+ 		res->end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;
+-		res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+ 
+ 		request_resource(&iomem_resource, res);
+ 
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index c207fa9870eb..494e0d800976 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1371,9 +1371,9 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
+ 		blk_mq_put_ctx(data.ctx);
+ 		if (!old_rq)
+ 			goto done;
+-		if (!blk_mq_direct_issue_request(old_rq, &cookie))
+-			goto done;
+-		blk_mq_insert_request(old_rq, false, true, true);
++		if (test_bit(BLK_MQ_S_STOPPED, &data.hctx->state) ||
++		    blk_mq_direct_issue_request(old_rq, &cookie) != 0)
++			blk_mq_insert_request(old_rq, false, true, true);
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 0a8bdade53f2..88df65d1e6f6 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -836,11 +836,29 @@ static struct kobject *get_device_parent(struct device *dev,
+ 	return NULL;
+ }
+ 
++static inline bool live_in_glue_dir(struct kobject *kobj,
++				    struct device *dev)
++{
++	if (!kobj || !dev->class ||
++	    kobj->kset != &dev->class->p->glue_dirs)
++		return false;
++	return true;
++}
++
++static inline struct kobject *get_glue_dir(struct device *dev)
++{
++	return dev->kobj.parent;
++}
++
++/*
++ * make sure cleaning up dir as the last step, we need to make
++ * sure .release handler of kobject is run with holding the
++ * global lock
++ */
+ static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
+ {
+ 	/* see if we live in a "glue" directory */
+-	if (!glue_dir || !dev->class ||
+-	    glue_dir->kset != &dev->class->p->glue_dirs)
++	if (!live_in_glue_dir(glue_dir, dev))
+ 		return;
+ 
+ 	mutex_lock(&gdp_mutex);
+@@ -848,11 +866,6 @@ static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
+ 	mutex_unlock(&gdp_mutex);
+ }
+ 
+-static void cleanup_device_parent(struct device *dev)
+-{
+-	cleanup_glue_dir(dev, dev->kobj.parent);
+-}
+-
+ static int device_add_class_symlinks(struct device *dev)
+ {
+ 	struct device_node *of_node = dev_of_node(dev);
+@@ -1028,6 +1041,7 @@ int device_add(struct device *dev)
+ 	struct kobject *kobj;
+ 	struct class_interface *class_intf;
+ 	int error = -EINVAL;
++	struct kobject *glue_dir = NULL;
+ 
+ 	dev = get_device(dev);
+ 	if (!dev)
+@@ -1072,8 +1086,10 @@ int device_add(struct device *dev)
+ 	/* first, register with generic layer. */
+ 	/* we require the name to be set before, and pass NULL */
+ 	error = kobject_add(&dev->kobj, dev->kobj.parent, NULL);
+-	if (error)
++	if (error) {
++		glue_dir = get_glue_dir(dev);
+ 		goto Error;
++	}
+ 
+ 	/* notify platform of device entry */
+ 	if (platform_notify)
+@@ -1154,9 +1170,10 @@ done:
+ 	device_remove_file(dev, &dev_attr_uevent);
+  attrError:
+ 	kobject_uevent(&dev->kobj, KOBJ_REMOVE);
++	glue_dir = get_glue_dir(dev);
+ 	kobject_del(&dev->kobj);
+  Error:
+-	cleanup_device_parent(dev);
++	cleanup_glue_dir(dev, glue_dir);
+ 	put_device(parent);
+ name_error:
+ 	kfree(dev->p);
+@@ -1232,6 +1249,7 @@ EXPORT_SYMBOL_GPL(put_device);
+ void device_del(struct device *dev)
+ {
+ 	struct device *parent = dev->parent;
++	struct kobject *glue_dir = NULL;
+ 	struct class_interface *class_intf;
+ 
+ 	/* Notify clients of device removal.  This call must come
+@@ -1276,8 +1294,9 @@ void device_del(struct device *dev)
+ 		blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ 					     BUS_NOTIFY_REMOVED_DEVICE, dev);
+ 	kobject_uevent(&dev->kobj, KOBJ_REMOVE);
+-	cleanup_device_parent(dev);
++	glue_dir = get_glue_dir(dev);
+ 	kobject_del(&dev->kobj);
++	cleanup_glue_dir(dev, glue_dir);
+ 	put_device(parent);
+ }
+ EXPORT_SYMBOL_GPL(device_del);
+diff --git a/drivers/base/power/opp/core.c b/drivers/base/power/opp/core.c
+index df0c70963d9e..94b33ce96be7 100644
+--- a/drivers/base/power/opp/core.c
++++ b/drivers/base/power/opp/core.c
+@@ -1320,7 +1320,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_put_prop_name);
+  * that this function is *NOT* called under RCU protection or in contexts where
+  * mutex cannot be locked.
+  */
+-int dev_pm_opp_set_regulator(struct device *dev, const char *name)
++struct opp_table *dev_pm_opp_set_regulator(struct device *dev, const char *name)
+ {
+ 	struct opp_table *opp_table;
+ 	struct regulator *reg;
+@@ -1358,20 +1358,20 @@ int dev_pm_opp_set_regulator(struct device *dev, const char *name)
+ 	opp_table->regulator = reg;
+ 
+ 	mutex_unlock(&opp_table_lock);
+-	return 0;
++	return opp_table;
+ 
+ err:
+ 	_remove_opp_table(opp_table);
+ unlock:
+ 	mutex_unlock(&opp_table_lock);
+ 
+-	return ret;
++	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_set_regulator);
+ 
+ /**
+  * dev_pm_opp_put_regulator() - Releases resources blocked for regulator
+- * @dev: Device for which regulator was set.
++ * @opp_table: OPP table returned from dev_pm_opp_set_regulator().
+  *
+  * Locking: The internal opp_table and opp structures are RCU protected.
+  * Hence this function internally uses RCU updater strategy with mutex locks
+@@ -1379,22 +1379,12 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_regulator);
+  * that this function is *NOT* called under RCU protection or in contexts where
+  * mutex cannot be locked.
+  */
+-void dev_pm_opp_put_regulator(struct device *dev)
++void dev_pm_opp_put_regulator(struct opp_table *opp_table)
+ {
+-	struct opp_table *opp_table;
+-
+ 	mutex_lock(&opp_table_lock);
+ 
+-	/* Check for existing table for 'dev' first */
+-	opp_table = _find_opp_table(dev);
+-	if (IS_ERR(opp_table)) {
+-		dev_err(dev, "Failed to find opp_table: %ld\n",
+-			PTR_ERR(opp_table));
+-		goto unlock;
+-	}
+-
+ 	if (IS_ERR(opp_table->regulator)) {
+-		dev_err(dev, "%s: Doesn't have regulator set\n", __func__);
++		pr_err("%s: Doesn't have regulator set\n", __func__);
+ 		goto unlock;
+ 	}
+ 
+diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
+index ab19adb07a12..3c606c09fd5a 100644
+--- a/drivers/block/aoe/aoecmd.c
++++ b/drivers/block/aoe/aoecmd.c
+@@ -853,45 +853,6 @@ rqbiocnt(struct request *r)
+ 	return n;
+ }
+ 
+-/* This can be removed if we are certain that no users of the block
+- * layer will ever use zero-count pages in bios.  Otherwise we have to
+- * protect against the put_page sometimes done by the network layer.
+- *
+- * See http://oss.sgi.com/archives/xfs/2007-01/msg00594.html for
+- * discussion.
+- *
+- * We cannot use get_page in the workaround, because it insists on a
+- * positive page count as a precondition.  So we use _refcount directly.
+- */
+-static void
+-bio_pageinc(struct bio *bio)
+-{
+-	struct bio_vec bv;
+-	struct page *page;
+-	struct bvec_iter iter;
+-
+-	bio_for_each_segment(bv, bio, iter) {
+-		/* Non-zero page count for non-head members of
+-		 * compound pages is no longer allowed by the kernel.
+-		 */
+-		page = compound_head(bv.bv_page);
+-		page_ref_inc(page);
+-	}
+-}
+-
+-static void
+-bio_pagedec(struct bio *bio)
+-{
+-	struct page *page;
+-	struct bio_vec bv;
+-	struct bvec_iter iter;
+-
+-	bio_for_each_segment(bv, bio, iter) {
+-		page = compound_head(bv.bv_page);
+-		page_ref_dec(page);
+-	}
+-}
+-
+ static void
+ bufinit(struct buf *buf, struct request *rq, struct bio *bio)
+ {
+@@ -899,7 +860,6 @@ bufinit(struct buf *buf, struct request *rq, struct bio *bio)
+ 	buf->rq = rq;
+ 	buf->bio = bio;
+ 	buf->iter = bio->bi_iter;
+-	bio_pageinc(bio);
+ }
+ 
+ static struct buf *
+@@ -1127,7 +1087,6 @@ aoe_end_buf(struct aoedev *d, struct buf *buf)
+ 	if (buf == d->ip.buf)
+ 		d->ip.buf = NULL;
+ 	rq = buf->rq;
+-	bio_pagedec(buf->bio);
+ 	mempool_free(buf, d->bufpool);
+ 	n = (unsigned long) rq->special;
+ 	rq->special = (void *) --n;
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index c9f2107f7095..b314a57e3c86 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1646,7 +1646,7 @@ static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	blk_mq_start_request(bd->rq);
+ 
+ 	if (lo->lo_state != Lo_bound)
+-		return -EIO;
++		return BLK_MQ_RQ_QUEUE_ERROR;
+ 
+ 	switch (req_op(cmd->rq)) {
+ 	case REQ_OP_FLUSH:
+diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
+index 62028f483bba..a2ab00831df1 100644
+--- a/drivers/char/tpm/xen-tpmfront.c
++++ b/drivers/char/tpm/xen-tpmfront.c
+@@ -307,7 +307,6 @@ static int tpmfront_probe(struct xenbus_device *dev,
+ 	rv = setup_ring(dev, priv);
+ 	if (rv) {
+ 		chip = dev_get_drvdata(&dev->dev);
+-		tpm_chip_unregister(chip);
+ 		ring_free(priv);
+ 		return rv;
+ 	}
+diff --git a/drivers/clk/ti/clk-3xxx.c b/drivers/clk/ti/clk-3xxx.c
+index 8831e1a05367..11d8aa3ec186 100644
+--- a/drivers/clk/ti/clk-3xxx.c
++++ b/drivers/clk/ti/clk-3xxx.c
+@@ -22,13 +22,6 @@
+ 
+ #include "clock.h"
+ 
+-/*
+- * DPLL5_FREQ_FOR_USBHOST: USBHOST and USBTLL are the only clocks
+- * that are sourced by DPLL5, and both of these require this clock
+- * to be at 120 MHz for proper operation.
+- */
+-#define DPLL5_FREQ_FOR_USBHOST		120000000
+-
+ #define OMAP3430ES2_ST_DSS_IDLE_SHIFT			1
+ #define OMAP3430ES2_ST_HSOTGUSB_IDLE_SHIFT		5
+ #define OMAP3430ES2_ST_SSI_IDLE_SHIFT			8
+@@ -546,14 +539,21 @@ void __init omap3_clk_lock_dpll5(void)
+ 	struct clk *dpll5_clk;
+ 	struct clk *dpll5_m2_clk;
+ 
++	/*
++	 * Errata sprz319f advisory 2.1 documents a USB host clock drift issue
++	 * that can be worked around using specially crafted dpll5 settings
++	 * with a dpll5_m2 divider set to 8. Set the dpll5 rate to 8x the USB
++	 * host clock rate, its .set_rate handler() will detect that frequency
++	 * and use the errata settings.
++	 */
+ 	dpll5_clk = clk_get(NULL, "dpll5_ck");
+-	clk_set_rate(dpll5_clk, DPLL5_FREQ_FOR_USBHOST);
++	clk_set_rate(dpll5_clk, OMAP3_DPLL5_FREQ_FOR_USBHOST * 8);
+ 	clk_prepare_enable(dpll5_clk);
+ 
+-	/* Program dpll5_m2_clk divider for no division */
++	/* Program dpll5_m2_clk divider */
+ 	dpll5_m2_clk = clk_get(NULL, "dpll5_m2_ck");
+ 	clk_prepare_enable(dpll5_m2_clk);
+-	clk_set_rate(dpll5_m2_clk, DPLL5_FREQ_FOR_USBHOST);
++	clk_set_rate(dpll5_m2_clk, OMAP3_DPLL5_FREQ_FOR_USBHOST);
+ 
+ 	clk_disable_unprepare(dpll5_m2_clk);
+ 	clk_disable_unprepare(dpll5_clk);
+diff --git a/drivers/clk/ti/clock.h b/drivers/clk/ti/clock.h
+index 90f3f472ae1c..13c37f48d9d6 100644
+--- a/drivers/clk/ti/clock.h
++++ b/drivers/clk/ti/clock.h
+@@ -257,11 +257,20 @@ long omap2_dpll_round_rate(struct clk_hw *hw, unsigned long target_rate,
+ unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw,
+ 				    unsigned long parent_rate);
+ 
++/*
++ * OMAP3_DPLL5_FREQ_FOR_USBHOST: USBHOST and USBTLL are the only clocks
++ * that are sourced by DPLL5, and both of these require this clock
++ * to be at 120 MHz for proper operation.
++ */
++#define OMAP3_DPLL5_FREQ_FOR_USBHOST	120000000
++
+ unsigned long omap3_dpll_recalc(struct clk_hw *hw, unsigned long parent_rate);
+ int omap3_dpll4_set_rate(struct clk_hw *clk, unsigned long rate,
+ 			 unsigned long parent_rate);
+ int omap3_dpll4_set_rate_and_parent(struct clk_hw *hw, unsigned long rate,
+ 				    unsigned long parent_rate, u8 index);
++int omap3_dpll5_set_rate(struct clk_hw *hw, unsigned long rate,
++			 unsigned long parent_rate);
+ void omap3_clk_lock_dpll5(void);
+ 
+ unsigned long omap4_dpll_regm4xen_recalc(struct clk_hw *hw,
+diff --git a/drivers/clk/ti/dpll.c b/drivers/clk/ti/dpll.c
+index 9fc8754a6e61..4b9a419d8e14 100644
+--- a/drivers/clk/ti/dpll.c
++++ b/drivers/clk/ti/dpll.c
+@@ -114,6 +114,18 @@ static const struct clk_ops omap3_dpll_ck_ops = {
+ 	.round_rate	= &omap2_dpll_round_rate,
+ };
+ 
++static const struct clk_ops omap3_dpll5_ck_ops = {
++	.enable		= &omap3_noncore_dpll_enable,
++	.disable	= &omap3_noncore_dpll_disable,
++	.get_parent	= &omap2_init_dpll_parent,
++	.recalc_rate	= &omap3_dpll_recalc,
++	.set_rate	= &omap3_dpll5_set_rate,
++	.set_parent	= &omap3_noncore_dpll_set_parent,
++	.set_rate_and_parent	= &omap3_noncore_dpll_set_rate_and_parent,
++	.determine_rate	= &omap3_noncore_dpll_determine_rate,
++	.round_rate	= &omap2_dpll_round_rate,
++};
++
+ static const struct clk_ops omap3_dpll_per_ck_ops = {
+ 	.enable		= &omap3_noncore_dpll_enable,
+ 	.disable	= &omap3_noncore_dpll_disable,
+@@ -474,7 +486,12 @@ static void __init of_ti_omap3_dpll_setup(struct device_node *node)
+ 		.modes = (1 << DPLL_LOW_POWER_BYPASS) | (1 << DPLL_LOCKED),
+ 	};
+ 
+-	of_ti_dpll_setup(node, &omap3_dpll_ck_ops, &dd);
++	if ((of_machine_is_compatible("ti,omap3630") ||
++	     of_machine_is_compatible("ti,omap36xx")) &&
++	    !strcmp(node->name, "dpll5_ck"))
++		of_ti_dpll_setup(node, &omap3_dpll5_ck_ops, &dd);
++	else
++		of_ti_dpll_setup(node, &omap3_dpll_ck_ops, &dd);
+ }
+ CLK_OF_DECLARE(ti_omap3_dpll_clock, "ti,omap3-dpll-clock",
+ 	       of_ti_omap3_dpll_setup);
+diff --git a/drivers/clk/ti/dpll3xxx.c b/drivers/clk/ti/dpll3xxx.c
+index 88f2ce81ba55..4cdd28a25584 100644
+--- a/drivers/clk/ti/dpll3xxx.c
++++ b/drivers/clk/ti/dpll3xxx.c
+@@ -838,3 +838,70 @@ int omap3_dpll4_set_rate_and_parent(struct clk_hw *hw, unsigned long rate,
+ 	return omap3_noncore_dpll_set_rate_and_parent(hw, rate, parent_rate,
+ 						      index);
+ }
++
++/* Apply DM3730 errata sprz319 advisory 2.1. */
++static bool omap3_dpll5_apply_errata(struct clk_hw *hw,
++				     unsigned long parent_rate)
++{
++	struct omap3_dpll5_settings {
++		unsigned int rate, m, n;
++	};
++
++	static const struct omap3_dpll5_settings precomputed[] = {
++		/*
++		 * From DM3730 errata advisory 2.1, table 35 and 36.
++		 * The N value is increased by 1 compared to the tables as the
++		 * errata lists register values while last_rounded_field is the
++		 * real divider value.
++		 */
++		{ 12000000,  80,  0 + 1 },
++		{ 13000000, 443,  5 + 1 },
++		{ 19200000,  50,  0 + 1 },
++		{ 26000000, 443, 11 + 1 },
++		{ 38400000,  25,  0 + 1 }
++	};
++
++	const struct omap3_dpll5_settings *d;
++	struct clk_hw_omap *clk = to_clk_hw_omap(hw);
++	struct dpll_data *dd;
++	unsigned int i;
++
++	for (i = 0; i < ARRAY_SIZE(precomputed); ++i) {
++		if (parent_rate == precomputed[i].rate)
++			break;
++	}
++
++	if (i == ARRAY_SIZE(precomputed))
++		return false;
++
++	d = &precomputed[i];
++
++	/* Update the M, N and rounded rate values and program the DPLL. */
++	dd = clk->dpll_data;
++	dd->last_rounded_m = d->m;
++	dd->last_rounded_n = d->n;
++	dd->last_rounded_rate = div_u64((u64)parent_rate * d->m, d->n);
++	omap3_noncore_dpll_program(clk, 0);
++
++	return true;
++}
++
++/**
++ * omap3_dpll5_set_rate - set rate for omap3 dpll5
++ * @hw: clock to change
++ * @rate: target rate for clock
++ * @parent_rate: rate of the parent clock
++ *
++ * Set rate for the DPLL5 clock. Apply the sprz319 advisory 2.1 on OMAP36xx if
++ * the DPLL is used for USB host (detected through the requested rate).
++ */
++int omap3_dpll5_set_rate(struct clk_hw *hw, unsigned long rate,
++			 unsigned long parent_rate)
++{
++	if (rate == OMAP3_DPLL5_FREQ_FOR_USBHOST * 8) {
++		if (omap3_dpll5_apply_errata(hw, parent_rate))
++			return 0;
++	}
++
++	return omap3_noncore_dpll_set_rate(hw, rate, parent_rate);
++}
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index 3957de801ae8..204cd527ff34 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -26,6 +26,7 @@
+ #include <linux/thermal.h>
+ 
+ struct private_data {
++	struct opp_table *opp_table;
+ 	struct device *cpu_dev;
+ 	struct thermal_cooling_device *cdev;
+ 	const char *reg_name;
+@@ -141,6 +142,7 @@ static int resources_available(void)
+ static int cpufreq_init(struct cpufreq_policy *policy)
+ {
+ 	struct cpufreq_frequency_table *freq_table;
++	struct opp_table *opp_table = NULL;
+ 	struct private_data *priv;
+ 	struct device *cpu_dev;
+ 	struct clk *cpu_clk;
+@@ -184,8 +186,9 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 	 */
+ 	name = find_supply_name(cpu_dev);
+ 	if (name) {
+-		ret = dev_pm_opp_set_regulator(cpu_dev, name);
+-		if (ret) {
++		opp_table = dev_pm_opp_set_regulator(cpu_dev, name);
++		if (IS_ERR(opp_table)) {
++			ret = PTR_ERR(opp_table);
+ 			dev_err(cpu_dev, "Failed to set regulator for cpu%d: %d\n",
+ 				policy->cpu, ret);
+ 			goto out_put_clk;
+@@ -235,6 +238,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 	}
+ 
+ 	priv->reg_name = name;
++	priv->opp_table = opp_table;
+ 
+ 	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+ 	if (ret) {
+@@ -283,7 +287,7 @@ out_free_priv:
+ out_free_opp:
+ 	dev_pm_opp_of_cpumask_remove_table(policy->cpus);
+ 	if (name)
+-		dev_pm_opp_put_regulator(cpu_dev);
++		dev_pm_opp_put_regulator(opp_table);
+ out_put_clk:
+ 	clk_put(cpu_clk);
+ 
+@@ -298,7 +302,7 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
+ 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+ 	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
+ 	if (priv->reg_name)
+-		dev_pm_opp_put_regulator(priv->cpu_dev);
++		dev_pm_opp_put_regulator(priv->opp_table);
+ 
+ 	clk_put(policy->clk);
+ 	kfree(priv);
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index 2cde3796cb82..f3307fc38e79 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -702,7 +702,9 @@ copy_iv:
+ 
+ 	/* Will read cryptlen */
+ 	append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
+-	aead_append_src_dst(desc, FIFOLD_TYPE_MSG1OUT2);
++	append_seq_fifo_load(desc, 0, FIFOLD_CLASS_BOTH | KEY_VLF |
++			     FIFOLD_TYPE_MSG1OUT2 | FIFOLD_TYPE_LASTBOTH);
++	append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | KEY_VLF);
+ 
+ 	/* Write ICV */
+ 	append_seq_store(desc, ctx->authsize, LDST_CLASS_2_CCB |
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 6fc8923bd92a..bd56a3ecb29c 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -1503,12 +1503,15 @@ static int crypt_set_key(struct crypt_config *cc, char *key)
+ 	if (!cc->key_size && strcmp(key, "-"))
+ 		goto out;
+ 
++	/* clear the flag since following operations may invalidate previously valid key */
++	clear_bit(DM_CRYPT_KEY_VALID, &cc->flags);
++
+ 	if (cc->key_size && crypt_decode_key(cc->key, key, cc->key_size) < 0)
+ 		goto out;
+ 
+-	set_bit(DM_CRYPT_KEY_VALID, &cc->flags);
+-
+ 	r = crypt_setkey_allcpus(cc);
++	if (!r)
++		set_bit(DM_CRYPT_KEY_VALID, &cc->flags);
+ 
+ out:
+ 	/* Hex key string not needed after here, so wipe it. */
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index 6a2e8dd44a1b..3643cba71351 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -200,11 +200,13 @@ static int flakey_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 
+ 	if (!(fc->up_interval + fc->down_interval)) {
+ 		ti->error = "Total (up + down) interval is zero";
++		r = -EINVAL;
+ 		goto bad;
+ 	}
+ 
+ 	if (fc->up_interval + fc->down_interval < fc->up_interval) {
+ 		ti->error = "Interval overflow";
++		r = -EINVAL;
+ 		goto bad;
+ 	}
+ 
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 6d53810963f7..af2d79b52484 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -2994,6 +2994,9 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 		}
+ 	}
+ 
++	/* Disable/enable discard support on raid set. */
++	configure_discard_support(rs);
++
+ 	mddev_unlock(&rs->md);
+ 	return 0;
+ 
+@@ -3580,12 +3583,6 @@ static int raid_preresume(struct dm_target *ti)
+ 	if (test_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags))
+ 		rs_update_sbs(rs);
+ 
+-	/*
+-	 * Disable/enable discard support on raid set after any
+-	 * conversion, because devices can have been added
+-	 */
+-	configure_discard_support(rs);
+-
+ 	/* Load the bitmap from disk unless raid0 */
+ 	r = __load_dirty_region_bitmap(rs);
+ 	if (r)
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 2154596eedf3..b6af2860cbc6 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -218,6 +218,9 @@ static void rq_end_stats(struct mapped_device *md, struct request *orig)
+  */
+ static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
+ {
++	struct request_queue *q = md->queue;
++	unsigned long flags;
++
+ 	atomic_dec(&md->pending[rw]);
+ 
+ 	/* nudge anyone waiting on suspend queue */
+@@ -230,8 +233,11 @@ static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
+ 	 * back into ->request_fn() could deadlock attempting to grab the
+ 	 * queue lock again.
+ 	 */
+-	if (!md->queue->mq_ops && run_queue)
+-		blk_run_queue_async(md->queue);
++	if (!q->mq_ops && run_queue) {
++		spin_lock_irqsave(q->queue_lock, flags);
++		blk_run_queue_async(q);
++		spin_unlock_irqrestore(q->queue_lock, flags);
++	}
+ 
+ 	/*
+ 	 * dm_put() must be at the end of this function. See the comment above
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index c4b53b332607..5ac239d0f787 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -924,12 +924,6 @@ static int dm_table_determine_type(struct dm_table *t)
+ 
+ 	BUG_ON(!request_based); /* No targets in this table */
+ 
+-	if (list_empty(devices) && __table_type_request_based(live_md_type)) {
+-		/* inherit live MD type */
+-		t->type = live_md_type;
+-		return 0;
+-	}
+-
+ 	/*
+ 	 * The only way to establish DM_TYPE_MQ_REQUEST_BASED is by
+ 	 * having a compatible target use dm_table_set_type.
+@@ -948,6 +942,19 @@ verify_rq_based:
+ 		return -EINVAL;
+ 	}
+ 
++	if (list_empty(devices)) {
++		int srcu_idx;
++		struct dm_table *live_table = dm_get_live_table(t->md, &srcu_idx);
++
++		/* inherit live table's type and all_blk_mq */
++		if (live_table) {
++			t->type = live_table->type;
++			t->all_blk_mq = live_table->all_blk_mq;
++		}
++		dm_put_live_table(t->md, srcu_idx);
++		return 0;
++	}
++
+ 	/* Non-request-stackable devices can't be used for request-based dm */
+ 	list_for_each_entry(dd, devices, list) {
+ 		struct request_queue *q = bdev_get_queue(dd->dm_dev->bdev);
+@@ -974,6 +981,11 @@ verify_rq_based:
+ 		t->all_blk_mq = true;
+ 	}
+ 
++	if (t->type == DM_TYPE_MQ_REQUEST_BASED && !t->all_blk_mq) {
++		DMERR("table load rejected: all devices are not blk-mq request-stackable");
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/md/persistent-data/dm-space-map-metadata.c b/drivers/md/persistent-data/dm-space-map-metadata.c
+index 7e44005595c1..20557e2c60c6 100644
+--- a/drivers/md/persistent-data/dm-space-map-metadata.c
++++ b/drivers/md/persistent-data/dm-space-map-metadata.c
+@@ -775,17 +775,15 @@ int dm_sm_metadata_create(struct dm_space_map *sm,
+ 	memcpy(&smm->sm, &bootstrap_ops, sizeof(smm->sm));
+ 
+ 	r = sm_ll_new_metadata(&smm->ll, tm);
++	if (!r) {
++		if (nr_blocks > DM_SM_METADATA_MAX_BLOCKS)
++			nr_blocks = DM_SM_METADATA_MAX_BLOCKS;
++		r = sm_ll_extend(&smm->ll, nr_blocks);
++	}
++	memcpy(&smm->sm, &ops, sizeof(smm->sm));
+ 	if (r)
+ 		return r;
+ 
+-	if (nr_blocks > DM_SM_METADATA_MAX_BLOCKS)
+-		nr_blocks = DM_SM_METADATA_MAX_BLOCKS;
+-	r = sm_ll_extend(&smm->ll, nr_blocks);
+-	if (r)
+-		return r;
+-
+-	memcpy(&smm->sm, &ops, sizeof(smm->sm));
+-
+ 	/*
+ 	 * Now we need to update the newly created data structures with the
+ 	 * allocated blocks that they were built from.
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index af5e2dc4a3d5..011f88e5663e 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -271,7 +271,7 @@ static ssize_t nvmet_ns_device_path_store(struct config_item *item,
+ 
+ 	mutex_lock(&subsys->lock);
+ 	ret = -EBUSY;
+-	if (nvmet_ns_enabled(ns))
++	if (ns->enabled)
+ 		goto out_unlock;
+ 
+ 	kfree(ns->device_path);
+@@ -307,7 +307,7 @@ static ssize_t nvmet_ns_device_nguid_store(struct config_item *item,
+ 	int ret = 0;
+ 
+ 	mutex_lock(&subsys->lock);
+-	if (nvmet_ns_enabled(ns)) {
++	if (ns->enabled) {
+ 		ret = -EBUSY;
+ 		goto out_unlock;
+ 	}
+@@ -339,7 +339,7 @@ CONFIGFS_ATTR(nvmet_ns_, device_nguid);
+ 
+ static ssize_t nvmet_ns_enable_show(struct config_item *item, char *page)
+ {
+-	return sprintf(page, "%d\n", nvmet_ns_enabled(to_nvmet_ns(item)));
++	return sprintf(page, "%d\n", to_nvmet_ns(item)->enabled);
+ }
+ 
+ static ssize_t nvmet_ns_enable_store(struct config_item *item,
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 6559d5afa7bf..e9500d41b943 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -264,7 +264,7 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
+ 	int ret = 0;
+ 
+ 	mutex_lock(&subsys->lock);
+-	if (!list_empty(&ns->dev_link))
++	if (ns->enabled)
+ 		goto out_unlock;
+ 
+ 	ns->bdev = blkdev_get_by_path(ns->device_path, FMODE_READ | FMODE_WRITE,
+@@ -309,6 +309,7 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
+ 	list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
+ 		nvmet_add_async_event(ctrl, NVME_AER_TYPE_NOTICE, 0, 0);
+ 
++	ns->enabled = true;
+ 	ret = 0;
+ out_unlock:
+ 	mutex_unlock(&subsys->lock);
+@@ -325,11 +326,11 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
+ 	struct nvmet_ctrl *ctrl;
+ 
+ 	mutex_lock(&subsys->lock);
+-	if (list_empty(&ns->dev_link)) {
+-		mutex_unlock(&subsys->lock);
+-		return;
+-	}
+-	list_del_init(&ns->dev_link);
++	if (!ns->enabled)
++		goto out_unlock;
++
++	ns->enabled = false;
++	list_del_rcu(&ns->dev_link);
+ 	mutex_unlock(&subsys->lock);
+ 
+ 	/*
+@@ -351,6 +352,7 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
+ 
+ 	if (ns->bdev)
+ 		blkdev_put(ns->bdev, FMODE_WRITE|FMODE_READ);
++out_unlock:
+ 	mutex_unlock(&subsys->lock);
+ }
+ 
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index 76b6eedccaf9..7655a351320f 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -47,6 +47,7 @@ struct nvmet_ns {
+ 	loff_t			size;
+ 	u8			nguid[16];
+ 
++	bool			enabled;
+ 	struct nvmet_subsys	*subsys;
+ 	const char		*device_path;
+ 
+@@ -61,11 +62,6 @@ static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
+ 	return container_of(to_config_group(item), struct nvmet_ns, group);
+ }
+ 
+-static inline bool nvmet_ns_enabled(struct nvmet_ns *ns)
+-{
+-	return !list_empty_careful(&ns->dev_link);
+-}
+-
+ struct nvmet_cq {
+ 	u16			qid;
+ 	u16			size;
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 3ca9fdb0a271..3f9ac8be77e1 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1732,6 +1732,7 @@ static const struct usb_device_id acm_ids[] = {
+ 	{ USB_DEVICE(0x20df, 0x0001), /* Simtec Electronics Entropy Key */
+ 	.driver_info = QUIRK_CONTROL_LINE_STATE, },
+ 	{ USB_DEVICE(0x2184, 0x001c) },	/* GW Instek AFG-2225 */
++	{ USB_DEVICE(0x2184, 0x0036) },	/* GW Instek AFG-125 */
+ 	{ USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */
+ 	},
+ 	/* Motorola H24 HSPA module: */
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 1d5fc32d06d0..f3a7408d6417 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -101,6 +101,8 @@ EXPORT_SYMBOL_GPL(ehci_cf_port_reset_rwsem);
+ 
+ static void hub_release(struct kref *kref);
+ static int usb_reset_and_verify_device(struct usb_device *udev);
++static void hub_usb3_port_prepare_disable(struct usb_hub *hub,
++					  struct usb_port *port_dev);
+ 
+ static inline char *portspeed(struct usb_hub *hub, int portstatus)
+ {
+@@ -899,82 +901,28 @@ static int hub_set_port_link_state(struct usb_hub *hub, int port1,
+ }
+ 
+ /*
+- * If USB 3.0 ports are placed into the Disabled state, they will no longer
+- * detect any device connects or disconnects.  This is generally not what the
+- * USB core wants, since it expects a disabled port to produce a port status
+- * change event when a new device connects.
+- *
+- * Instead, set the link state to Disabled, wait for the link to settle into
+- * that state, clear any change bits, and then put the port into the RxDetect
+- * state.
++ * USB-3 does not have a similar link state as USB-2 that will avoid negotiating
++ * a connection with a plugged-in cable but will signal the host when the cable
++ * is unplugged. Disable remote wake and set link state to U3 for USB-3 devices
+  */
+-static int hub_usb3_port_disable(struct usb_hub *hub, int port1)
+-{
+-	int ret;
+-	int total_time;
+-	u16 portchange, portstatus;
+-
+-	if (!hub_is_superspeed(hub->hdev))
+-		return -EINVAL;
+-
+-	ret = hub_port_status(hub, port1, &portstatus, &portchange);
+-	if (ret < 0)
+-		return ret;
+-
+-	/*
+-	 * USB controller Advanced Micro Devices, Inc. [AMD] FCH USB XHCI
+-	 * Controller [1022:7814] will have spurious result making the following
+-	 * usb 3.0 device hotplugging route to the 2.0 root hub and recognized
+-	 * as high-speed device if we set the usb 3.0 port link state to
+-	 * Disabled. Since it's already in USB_SS_PORT_LS_RX_DETECT state, we
+-	 * check the state here to avoid the bug.
+-	 */
+-	if ((portstatus & USB_PORT_STAT_LINK_STATE) ==
+-				USB_SS_PORT_LS_RX_DETECT) {
+-		dev_dbg(&hub->ports[port1 - 1]->dev,
+-			 "Not disabling port; link state is RxDetect\n");
+-		return ret;
+-	}
+-
+-	ret = hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_SS_DISABLED);
+-	if (ret)
+-		return ret;
+-
+-	/* Wait for the link to enter the disabled state. */
+-	for (total_time = 0; ; total_time += HUB_DEBOUNCE_STEP) {
+-		ret = hub_port_status(hub, port1, &portstatus, &portchange);
+-		if (ret < 0)
+-			return ret;
+-
+-		if ((portstatus & USB_PORT_STAT_LINK_STATE) ==
+-				USB_SS_PORT_LS_SS_DISABLED)
+-			break;
+-		if (total_time >= HUB_DEBOUNCE_TIMEOUT)
+-			break;
+-		msleep(HUB_DEBOUNCE_STEP);
+-	}
+-	if (total_time >= HUB_DEBOUNCE_TIMEOUT)
+-		dev_warn(&hub->ports[port1 - 1]->dev,
+-				"Could not disable after %d ms\n", total_time);
+-
+-	return hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_RX_DETECT);
+-}
+-
+ static int hub_port_disable(struct usb_hub *hub, int port1, int set_state)
+ {
+ 	struct usb_port *port_dev = hub->ports[port1 - 1];
+ 	struct usb_device *hdev = hub->hdev;
+ 	int ret = 0;
+ 
+-	if (port_dev->child && set_state)
+-		usb_set_device_state(port_dev->child, USB_STATE_NOTATTACHED);
+ 	if (!hub->error) {
+-		if (hub_is_superspeed(hub->hdev))
+-			ret = hub_usb3_port_disable(hub, port1);
+-		else
++		if (hub_is_superspeed(hub->hdev)) {
++			hub_usb3_port_prepare_disable(hub, port_dev);
++			ret = hub_set_port_link_state(hub, port_dev->portnum,
++						      USB_SS_PORT_LS_U3);
++		} else {
+ 			ret = usb_clear_port_feature(hdev, port1,
+ 					USB_PORT_FEAT_ENABLE);
++		}
+ 	}
++	if (port_dev->child && set_state)
++		usb_set_device_state(port_dev->child, USB_STATE_NOTATTACHED);
+ 	if (ret && ret != -ENODEV)
+ 		dev_err(&port_dev->dev, "cannot disable (err = %d)\n", ret);
+ 	return ret;
+@@ -4142,6 +4090,26 @@ void usb_unlocked_enable_lpm(struct usb_device *udev)
+ }
+ EXPORT_SYMBOL_GPL(usb_unlocked_enable_lpm);
+ 
++/* usb3 devices use U3 for disabled, make sure remote wakeup is disabled */
++static void hub_usb3_port_prepare_disable(struct usb_hub *hub,
++					  struct usb_port *port_dev)
++{
++	struct usb_device *udev = port_dev->child;
++	int ret;
++
++	if (udev && udev->port_is_suspended && udev->do_remote_wakeup) {
++		ret = hub_set_port_link_state(hub, port_dev->portnum,
++					      USB_SS_PORT_LS_U0);
++		if (!ret) {
++			msleep(USB_RESUME_TIMEOUT);
++			ret = usb_disable_remote_wakeup(udev);
++		}
++		if (ret)
++			dev_warn(&udev->dev,
++				 "Port disable: can't disable remote wake\n");
++		udev->do_remote_wakeup = 0;
++	}
++}
+ 
+ #else	/* CONFIG_PM */
+ 
+@@ -4149,6 +4117,9 @@ EXPORT_SYMBOL_GPL(usb_unlocked_enable_lpm);
+ #define hub_resume		NULL
+ #define hub_reset_resume	NULL
+ 
++static inline void hub_usb3_port_prepare_disable(struct usb_hub *hub,
++						 struct usb_port *port_dev) { }
++
+ int usb_disable_lpm(struct usb_device *udev)
+ {
+ 	return 0;
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index dc3b5962d087..717d5f6d3e7c 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -775,6 +775,9 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+ 		unsigned length, unsigned last, unsigned chain, unsigned node)
+ {
+ 	struct dwc3_trb		*trb;
++	struct dwc3		*dwc = dep->dwc;
++	struct usb_gadget	*gadget = &dwc->gadget;
++	enum usb_device_speed	speed = gadget->speed;
+ 
+ 	dwc3_trace(trace_dwc3_gadget, "%s: req %p dma %08llx length %d%s%s",
+ 			dep->name, req, (unsigned long long) dma,
+@@ -804,10 +807,16 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+ 		break;
+ 
+ 	case USB_ENDPOINT_XFER_ISOC:
+-		if (!node)
++		if (!node) {
+ 			trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS_FIRST;
+-		else
++
++			if (speed == USB_SPEED_HIGH) {
++				struct usb_ep *ep = &dep->endpoint;
++				trb->size |= DWC3_TRB_SIZE_PCM1(ep->mult - 1);
++			}
++		} else {
+ 			trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS;
++		}
+ 
+ 		/* always enable Interrupt on Missed ISOC */
+ 		trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 5ebe6af7976e..e2abaa7943d9 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -197,11 +197,16 @@ int config_ep_by_speed(struct usb_gadget *g,
+ 
+ ep_found:
+ 	/* commit results */
+-	_ep->maxpacket = usb_endpoint_maxp(chosen_desc);
++	_ep->maxpacket = usb_endpoint_maxp(chosen_desc) & 0x7ff;
+ 	_ep->desc = chosen_desc;
+ 	_ep->comp_desc = NULL;
+ 	_ep->maxburst = 0;
+-	_ep->mult = 0;
++	_ep->mult = 1;
++
++	if (g->speed == USB_SPEED_HIGH && (usb_endpoint_xfer_isoc(_ep->desc) ||
++				usb_endpoint_xfer_int(_ep->desc)))
++		_ep->mult = usb_endpoint_maxp(_ep->desc) & 0x7ff;
++
+ 	if (!want_comp_desc)
+ 		return 0;
+ 
+@@ -218,7 +223,7 @@ ep_found:
+ 		switch (usb_endpoint_type(_ep->desc)) {
+ 		case USB_ENDPOINT_XFER_ISOC:
+ 			/* mult: bits 1:0 of bmAttributes */
+-			_ep->mult = comp_desc->bmAttributes & 0x3;
++			_ep->mult = (comp_desc->bmAttributes & 0x3) + 1;
+ 		case USB_ENDPOINT_XFER_BULK:
+ 		case USB_ENDPOINT_XFER_INT:
+ 			_ep->maxburst = comp_desc->bMaxBurst + 1;
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index cd214ec8a601..969cfe741380 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -1067,13 +1067,13 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ 	agdev->out_ep = usb_ep_autoconfig(gadget, &fs_epout_desc);
+ 	if (!agdev->out_ep) {
+ 		dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
+-		goto err;
++		return ret;
+ 	}
+ 
+ 	agdev->in_ep = usb_ep_autoconfig(gadget, &fs_epin_desc);
+ 	if (!agdev->in_ep) {
+ 		dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
+-		goto err;
++		return ret;
+ 	}
+ 
+ 	uac2->p_prm.uac2 = uac2;
+@@ -1091,7 +1091,7 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ 	ret = usb_assign_descriptors(fn, fs_audio_desc, hs_audio_desc, NULL,
+ 				     NULL);
+ 	if (ret)
+-		goto err;
++		return ret;
+ 
+ 	prm = &agdev->uac2.c_prm;
+ 	prm->max_psize = hs_epout_desc.wMaxPacketSize;
+@@ -1106,19 +1106,19 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ 	prm->rbuf = kzalloc(prm->max_psize * USB_XFERS, GFP_KERNEL);
+ 	if (!prm->rbuf) {
+ 		prm->max_psize = 0;
+-		goto err_free_descs;
++		goto err;
+ 	}
+ 
+ 	ret = alsa_uac2_init(agdev);
+ 	if (ret)
+-		goto err_free_descs;
++		goto err;
+ 	return 0;
+ 
+-err_free_descs:
+-	usb_free_all_descriptors(fn);
+ err:
+ 	kfree(agdev->uac2.p_prm.rbuf);
+ 	kfree(agdev->uac2.c_prm.rbuf);
++err_free_descs:
++	usb_free_all_descriptors(fn);
+ 	return -EINVAL;
+ }
+ 
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index 3d0d5d94a62f..0f01c04d7cbd 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -243,7 +243,7 @@ uvc_video_alloc_requests(struct uvc_video *video)
+ 
+ 	req_size = video->ep->maxpacket
+ 		 * max_t(unsigned int, video->ep->maxburst, 1)
+-		 * (video->ep->mult + 1);
++		 * (video->ep->mult);
+ 
+ 	for (i = 0; i < UVC_NUM_REQUESTS; ++i) {
+ 		video->req_buffer[i] = kmalloc(req_size, GFP_KERNEL);
+diff --git a/drivers/usb/host/uhci-pci.c b/drivers/usb/host/uhci-pci.c
+index 940304c33224..02260cfdedb1 100644
+--- a/drivers/usb/host/uhci-pci.c
++++ b/drivers/usb/host/uhci-pci.c
+@@ -129,6 +129,10 @@ static int uhci_pci_init(struct usb_hcd *hcd)
+ 	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_HP)
+ 		uhci->wait_for_hp = 1;
+ 
++	/* Intel controllers use non-PME wakeup signalling */
++	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_INTEL)
++		device_set_run_wake(uhci_dev(uhci), 1);
++
+ 	/* Set up pointers to PCI-specific functions */
+ 	uhci->reset_hc = uhci_pci_reset_hc;
+ 	uhci->check_and_reset_hc = uhci_pci_check_and_reset_hc;
+diff --git a/drivers/usb/serial/kl5kusb105.c b/drivers/usb/serial/kl5kusb105.c
+index fc5d3a791e08..6f29bfadbe33 100644
+--- a/drivers/usb/serial/kl5kusb105.c
++++ b/drivers/usb/serial/kl5kusb105.c
+@@ -296,7 +296,7 @@ static int  klsi_105_open(struct tty_struct *tty, struct usb_serial_port *port)
+ 	rc = usb_serial_generic_open(tty, port);
+ 	if (rc) {
+ 		retval = rc;
+-		goto exit;
++		goto err_free_cfg;
+ 	}
+ 
+ 	rc = usb_control_msg(port->serial->dev,
+@@ -315,17 +315,32 @@ static int  klsi_105_open(struct tty_struct *tty, struct usb_serial_port *port)
+ 		dev_dbg(&port->dev, "%s - enabled reading\n", __func__);
+ 
+ 	rc = klsi_105_get_line_state(port, &line_state);
+-	if (rc >= 0) {
+-		spin_lock_irqsave(&priv->lock, flags);
+-		priv->line_state = line_state;
+-		spin_unlock_irqrestore(&priv->lock, flags);
+-		dev_dbg(&port->dev, "%s - read line state 0x%lx\n", __func__, line_state);
+-		retval = 0;
+-	} else
++	if (rc < 0) {
+ 		retval = rc;
++		goto err_disable_read;
++	}
++
++	spin_lock_irqsave(&priv->lock, flags);
++	priv->line_state = line_state;
++	spin_unlock_irqrestore(&priv->lock, flags);
++	dev_dbg(&port->dev, "%s - read line state 0x%lx\n", __func__,
++			line_state);
++
++	return 0;
+ 
+-exit:
++err_disable_read:
++	usb_control_msg(port->serial->dev,
++			     usb_sndctrlpipe(port->serial->dev, 0),
++			     KL5KUSB105A_SIO_CONFIGURE,
++			     USB_TYPE_VENDOR | USB_DIR_OUT,
++			     KL5KUSB105A_SIO_CONFIGURE_READ_OFF,
++			     0, /* index */
++			     NULL, 0,
++			     KLSI_TIMEOUT);
++	usb_serial_generic_close(port);
++err_free_cfg:
+ 	kfree(cfg);
++
+ 	return retval;
+ }
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 9894e341c6ac..7ce31a4c7e7f 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -268,6 +268,8 @@ static void option_instat_callback(struct urb *urb);
+ #define TELIT_PRODUCT_CC864_SINGLE		0x1006
+ #define TELIT_PRODUCT_DE910_DUAL		0x1010
+ #define TELIT_PRODUCT_UE910_V2			0x1012
++#define TELIT_PRODUCT_LE922_USBCFG1		0x1040
++#define TELIT_PRODUCT_LE922_USBCFG2		0x1041
+ #define TELIT_PRODUCT_LE922_USBCFG0		0x1042
+ #define TELIT_PRODUCT_LE922_USBCFG3		0x1043
+ #define TELIT_PRODUCT_LE922_USBCFG5		0x1045
+@@ -1210,6 +1212,10 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
+ 		.driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg0 },
++	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
++		.driver_info = (kernel_ulong_t)&telit_le910_blacklist },
++	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG2),
++		.driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG3),
+ 		.driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG5, 0xff),
+@@ -1989,6 +1995,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d02, 0xff, 0x00, 0x00) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x02, 0x01) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x00, 0x00) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7d04, 0xff) },			/* D-Link DWM-158 */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e19, 0xff),			/* D-Link DWM-221 B1 */
+ 	  .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
+diff --git a/drivers/usb/usbip/vudc_transfer.c b/drivers/usb/usbip/vudc_transfer.c
+index aba6bd478045..bc0296d937d0 100644
+--- a/drivers/usb/usbip/vudc_transfer.c
++++ b/drivers/usb/usbip/vudc_transfer.c
+@@ -339,6 +339,8 @@ static void v_timer(unsigned long _vudc)
+ 		total = timer->frame_limit;
+ 	}
+ 
++	/* We have to clear ep0 flags separately as it's not on the list */
++	udc->ep[0].already_seen = 0;
+ 	list_for_each_entry(_ep, &udc->gadget.ep_list, ep_list) {
+ 		ep = to_vep(_ep);
+ 		ep->already_seen = 0;
+diff --git a/drivers/watchdog/mei_wdt.c b/drivers/watchdog/mei_wdt.c
+index 630bd189f167..2a9d5cdedea2 100644
+--- a/drivers/watchdog/mei_wdt.c
++++ b/drivers/watchdog/mei_wdt.c
+@@ -389,6 +389,8 @@ static int mei_wdt_register(struct mei_wdt *wdt)
+ 	wdt->wdd.max_timeout = MEI_WDT_MAX_TIMEOUT;
+ 
+ 	watchdog_set_drvdata(&wdt->wdd, wdt);
++	watchdog_stop_on_reboot(&wdt->wdd);
++
+ 	ret = watchdog_register_device(&wdt->wdd);
+ 	if (ret) {
+ 		dev_err(dev, "unable to register watchdog device = %d.\n", ret);
+diff --git a/drivers/watchdog/qcom-wdt.c b/drivers/watchdog/qcom-wdt.c
+index 5796b5d1b3f2..4f47b5e90956 100644
+--- a/drivers/watchdog/qcom-wdt.c
++++ b/drivers/watchdog/qcom-wdt.c
+@@ -209,7 +209,7 @@ static int qcom_wdt_probe(struct platform_device *pdev)
+ 	wdt->wdd.parent = &pdev->dev;
+ 	wdt->layout = regs;
+ 
+-	if (readl(wdt->base + WDT_STS) & 1)
++	if (readl(wdt_addr(wdt, WDT_STS)) & 1)
+ 		wdt->wdd.bootstatus = WDIOF_CARDRESET;
+ 
+ 	/*
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index bb952121ea94..2ef2b61b69df 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -1007,7 +1007,7 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ 
+ 	vma->vm_ops = &gntdev_vmops;
+ 
+-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
++	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_MIXEDMAP;
+ 
+ 	if (use_ptemod)
+ 		vma->vm_flags |= VM_DONTCOPY;
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 08ae99343d92..b010242bab32 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -837,7 +837,7 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
+ 		return true;	 /* already a holder */
+ 	else if (bdev->bd_holder != NULL)
+ 		return false; 	 /* held by someone else */
+-	else if (bdev->bd_contains == bdev)
++	else if (whole == bdev)
+ 		return true;  	 /* is a whole device which isn't held */
+ 
+ 	else if (whole->bd_holder == bd_may_claim)
+diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c
+index e0f071f6b5a7..63d197724519 100644
+--- a/fs/btrfs/async-thread.c
++++ b/fs/btrfs/async-thread.c
+@@ -86,6 +86,20 @@ btrfs_work_owner(struct btrfs_work *work)
+ 	return work->wq->fs_info;
+ }
+ 
++bool btrfs_workqueue_normal_congested(struct btrfs_workqueue *wq)
++{
++	/*
++	 * We could compare wq->normal->pending with num_online_cpus()
++	 * to support "thresh == NO_THRESHOLD" case, but it requires
++	 * moving up atomic_inc/dec in thresh_queue/exec_hook. Let's
++	 * postpone it until someone needs the support of that case.
++	 */
++	if (wq->normal->thresh == NO_THRESHOLD)
++		return false;
++
++	return atomic_read(&wq->normal->pending) > wq->normal->thresh * 2;
++}
++
+ BTRFS_WORK_HELPER(worker_helper);
+ BTRFS_WORK_HELPER(delalloc_helper);
+ BTRFS_WORK_HELPER(flush_delalloc_helper);
+diff --git a/fs/btrfs/async-thread.h b/fs/btrfs/async-thread.h
+index 8e52484cd461..1f9597355c9d 100644
+--- a/fs/btrfs/async-thread.h
++++ b/fs/btrfs/async-thread.h
+@@ -84,4 +84,5 @@ void btrfs_workqueue_set_max(struct btrfs_workqueue *wq, int max);
+ void btrfs_set_work_high_priority(struct btrfs_work *work);
+ struct btrfs_fs_info *btrfs_work_owner(struct btrfs_work *work);
+ struct btrfs_fs_info *btrfs_workqueue_owner(struct __btrfs_workqueue *wq);
++bool btrfs_workqueue_normal_congested(struct btrfs_workqueue *wq);
+ #endif
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 791e47ce9d27..469fa3268d9b 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -2210,6 +2210,8 @@ btrfs_disk_balance_args_to_cpu(struct btrfs_balance_args *cpu,
+ 	cpu->target = le64_to_cpu(disk->target);
+ 	cpu->flags = le64_to_cpu(disk->flags);
+ 	cpu->limit = le64_to_cpu(disk->limit);
++	cpu->stripes_min = le32_to_cpu(disk->stripes_min);
++	cpu->stripes_max = le32_to_cpu(disk->stripes_max);
+ }
+ 
+ static inline void
+@@ -2228,6 +2230,8 @@ btrfs_cpu_balance_args_to_disk(struct btrfs_disk_balance_args *disk,
+ 	disk->target = cpu_to_le64(cpu->target);
+ 	disk->flags = cpu_to_le64(cpu->flags);
+ 	disk->limit = cpu_to_le64(cpu->limit);
++	disk->stripes_min = cpu_to_le32(cpu->stripes_min);
++	disk->stripes_max = cpu_to_le32(cpu->stripes_max);
+ }
+ 
+ /* struct btrfs_super_block */
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 3eeb9cd8cfa5..de946dd743b7 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -1356,7 +1356,8 @@ release_path:
+ 	total_done++;
+ 
+ 	btrfs_release_prepared_delayed_node(delayed_node);
+-	if (async_work->nr == 0 || total_done < async_work->nr)
++	if ((async_work->nr == 0 && total_done < BTRFS_DELAYED_WRITEBACK) ||
++	    total_done < async_work->nr)
+ 		goto again;
+ 
+ free_path:
+@@ -1372,7 +1373,8 @@ static int btrfs_wq_run_delayed_node(struct btrfs_delayed_root *delayed_root,
+ {
+ 	struct btrfs_async_delayed_work *async_work;
+ 
+-	if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND)
++	if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND ||
++	    btrfs_workqueue_normal_congested(fs_info->delayed_workers))
+ 		return 0;
+ 
+ 	async_work = kmalloc(sizeof(*async_work), GFP_NOFS);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 3dede6d53bad..dafcfd017d37 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -559,7 +559,15 @@ static noinline int check_leaf(struct btrfs_root *root,
+ 	u32 nritems = btrfs_header_nritems(leaf);
+ 	int slot;
+ 
+-	if (nritems == 0) {
++	/*
++	 * Extent buffers from a relocation tree have a owner field that
++	 * corresponds to the subvolume tree they are based on. So just from an
++	 * extent buffer alone we can not find out what is the id of the
++	 * corresponding subvolume tree, so we can not figure out if the extent
++	 * buffer corresponds to the root of the relocation tree or not. So skip
++	 * this check for relocation trees.
++	 */
++	if (nritems == 0 && !btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_RELOC)) {
+ 		struct btrfs_root *check_root;
+ 
+ 		key.objectid = btrfs_header_owner(leaf);
+@@ -572,17 +580,24 @@ static noinline int check_leaf(struct btrfs_root *root,
+ 		 * open_ctree() some roots has not yet been set up.
+ 		 */
+ 		if (!IS_ERR_OR_NULL(check_root)) {
++			struct extent_buffer *eb;
++
++			eb = btrfs_root_node(check_root);
+ 			/* if leaf is the root, then it's fine */
+-			if (leaf->start !=
+-			    btrfs_root_bytenr(&check_root->root_item)) {
++			if (leaf != eb) {
+ 				CORRUPT("non-root leaf's nritems is 0",
+-					leaf, root, 0);
++					leaf, check_root, 0);
++				free_extent_buffer(eb);
+ 				return -EIO;
+ 			}
++			free_extent_buffer(eb);
+ 		}
+ 		return 0;
+ 	}
+ 
++	if (nritems == 0)
++		return 0;
++
+ 	/* Check the 0 item */
+ 	if (btrfs_item_offset_nr(leaf, 0) + btrfs_item_size_nr(leaf, 0) !=
+ 	    BTRFS_LEAF_DATA_SIZE(root)) {
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 665da8f66ff1..a1b40ab5770a 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -8884,14 +8884,13 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans,
+ 	ret = btrfs_lookup_extent_info(trans, root, bytenr, level - 1, 1,
+ 				       &wc->refs[level - 1],
+ 				       &wc->flags[level - 1]);
+-	if (ret < 0) {
+-		btrfs_tree_unlock(next);
+-		return ret;
+-	}
++	if (ret < 0)
++		goto out_unlock;
+ 
+ 	if (unlikely(wc->refs[level - 1] == 0)) {
+ 		btrfs_err(root->fs_info, "Missing references.");
+-		BUG();
++		ret = -EIO;
++		goto out_unlock;
+ 	}
+ 	*lookup_info = 0;
+ 
+@@ -8943,7 +8942,12 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	level--;
+-	BUG_ON(level != btrfs_header_level(next));
++	ASSERT(level == btrfs_header_level(next));
++	if (level != btrfs_header_level(next)) {
++		btrfs_err(root->fs_info, "mismatched level");
++		ret = -EIO;
++		goto out_unlock;
++	}
+ 	path->nodes[level] = next;
+ 	path->slots[level] = 0;
+ 	path->locks[level] = BTRFS_WRITE_LOCK_BLOCKING;
+@@ -8958,8 +8962,15 @@ skip:
+ 		if (wc->flags[level] & BTRFS_BLOCK_FLAG_FULL_BACKREF) {
+ 			parent = path->nodes[level]->start;
+ 		} else {
+-			BUG_ON(root->root_key.objectid !=
++			ASSERT(root->root_key.objectid ==
+ 			       btrfs_header_owner(path->nodes[level]));
++			if (root->root_key.objectid !=
++			    btrfs_header_owner(path->nodes[level])) {
++				btrfs_err(root->fs_info,
++						"mismatched block owner");
++				ret = -EIO;
++				goto out_unlock;
++			}
+ 			parent = 0;
+ 		}
+ 
+@@ -8976,12 +8987,18 @@ skip:
+ 		}
+ 		ret = btrfs_free_extent(trans, root, bytenr, blocksize, parent,
+ 				root->root_key.objectid, level - 1, 0);
+-		BUG_ON(ret); /* -ENOMEM */
++		if (ret)
++			goto out_unlock;
+ 	}
++
++	*lookup_info = 1;
++	ret = 1;
++
++out_unlock:
+ 	btrfs_tree_unlock(next);
+ 	free_extent_buffer(next);
+-	*lookup_info = 1;
+-	return 1;
++
++	return ret;
+ }
+ 
+ /*
+@@ -10127,6 +10144,11 @@ int btrfs_read_block_groups(struct btrfs_root *root)
+ 	struct extent_buffer *leaf;
+ 	int need_clear = 0;
+ 	u64 cache_gen;
++	u64 feature;
++	int mixed;
++
++	feature = btrfs_super_incompat_flags(info->super_copy);
++	mixed = !!(feature & BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS);
+ 
+ 	root = info->extent_root;
+ 	key.objectid = 0;
+@@ -10180,6 +10202,15 @@ int btrfs_read_block_groups(struct btrfs_root *root)
+ 				   btrfs_item_ptr_offset(leaf, path->slots[0]),
+ 				   sizeof(cache->item));
+ 		cache->flags = btrfs_block_group_flags(&cache->item);
++		if (!mixed &&
++		    ((cache->flags & BTRFS_BLOCK_GROUP_METADATA) &&
++		    (cache->flags & BTRFS_BLOCK_GROUP_DATA))) {
++			btrfs_err(info,
++"bg %llu is a mixed block group but filesystem hasn't enabled mixed block groups",
++				  cache->key.objectid);
++			ret = -EINVAL;
++			goto error;
++		}
+ 
+ 		key.objectid = found_key.objectid + found_key.offset;
+ 		btrfs_release_path(path);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index c3ec30dea9a5..3f9a6b40fbbd 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -5209,11 +5209,20 @@ int read_extent_buffer_pages(struct extent_io_tree *tree,
+ 			lock_page(page);
+ 		}
+ 		locked_pages++;
++	}
++	/*
++	 * We need to firstly lock all pages to make sure that
++	 * the uptodate bit of our pages won't be affected by
++	 * clear_extent_buffer_uptodate().
++	 */
++	for (i = start_i; i < num_pages; i++) {
++		page = eb->pages[i];
+ 		if (!PageUptodate(page)) {
+ 			num_reads++;
+ 			all_uptodate = 0;
+ 		}
+ 	}
++
+ 	if (all_uptodate) {
+ 		if (start_i == 0)
+ 			set_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 7fd939bfbd99..fcd2b3be21bf 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3813,6 +3813,11 @@ process_slot:
+ 		}
+ 		btrfs_release_path(path);
+ 		key.offset = next_key_min_offset;
++
++		if (fatal_signal_pending(current)) {
++			ret = -EINTR;
++			goto out;
++		}
+ 	}
+ 	ret = 0;
+ 
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 8db2e29fdcf4..9cf03ebb27cc 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2332,10 +2332,6 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+ 	int err = -ENOMEM;
+ 	int ret = 0;
+ 
+-	mutex_lock(&fs_info->qgroup_rescan_lock);
+-	fs_info->qgroup_rescan_running = true;
+-	mutex_unlock(&fs_info->qgroup_rescan_lock);
+-
+ 	path = btrfs_alloc_path();
+ 	if (!path)
+ 		goto out;
+@@ -2446,6 +2442,7 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid,
+ 		sizeof(fs_info->qgroup_rescan_progress));
+ 	fs_info->qgroup_rescan_progress.objectid = progress_objectid;
+ 	init_completion(&fs_info->qgroup_rescan_completion);
++	fs_info->qgroup_rescan_running = true;
+ 
+ 	spin_unlock(&fs_info->qgroup_lock);
+ 	mutex_unlock(&fs_info->qgroup_rescan_lock);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index c0c13dc6fe12..d08a79dbf323 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -923,9 +923,16 @@ again:
+ 			path2->slots[level]--;
+ 
+ 		eb = path2->nodes[level];
+-		WARN_ON(btrfs_node_blockptr(eb, path2->slots[level]) !=
+-			cur->bytenr);
+-
++		if (btrfs_node_blockptr(eb, path2->slots[level]) !=
++		    cur->bytenr) {
++			btrfs_err(root->fs_info,
++	"couldn't find block (%llu) (level %d) in tree (%llu) with key (%llu %u %llu)",
++				  cur->bytenr, level - 1, root->objectid,
++				  node_key->objectid, node_key->type,
++				  node_key->offset);
++			err = -ENOENT;
++			goto out;
++		}
+ 		lower = cur;
+ 		need_check = true;
+ 		for (; level < BTRFS_MAX_LEVEL; level++) {
+@@ -1387,14 +1394,23 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
+ 	root_key.offset = objectid;
+ 
+ 	if (root->root_key.objectid == objectid) {
++		u64 commit_root_gen;
++
+ 		/* called by btrfs_init_reloc_root */
+ 		ret = btrfs_copy_root(trans, root, root->commit_root, &eb,
+ 				      BTRFS_TREE_RELOC_OBJECTID);
+ 		BUG_ON(ret);
+-
+ 		last_snap = btrfs_root_last_snapshot(&root->root_item);
+-		btrfs_set_root_last_snapshot(&root->root_item,
+-					     trans->transid - 1);
++		/*
++		 * Set the last_snapshot field to the generation of the commit
++		 * root - like this ctree.c:btrfs_block_can_be_shared() behaves
++		 * correctly (returns true) when the relocation root is created
++		 * either inside the critical section of a transaction commit
++		 * (through transaction.c:qgroup_account_snapshot()) and when
++		 * it's created before the transaction commit is started.
++		 */
++		commit_root_gen = btrfs_header_generation(root->commit_root);
++		btrfs_set_root_last_snapshot(&root->root_item, commit_root_gen);
+ 	} else {
+ 		/*
+ 		 * called by btrfs_reloc_post_snapshot_hook.
+@@ -2350,6 +2366,10 @@ void free_reloc_roots(struct list_head *list)
+ 	while (!list_empty(list)) {
+ 		reloc_root = list_entry(list->next, struct btrfs_root,
+ 					root_list);
++		free_extent_buffer(reloc_root->node);
++		free_extent_buffer(reloc_root->commit_root);
++		reloc_root->node = NULL;
++		reloc_root->commit_root = NULL;
+ 		__del_reloc_root(reloc_root);
+ 	}
+ }
+@@ -2686,11 +2706,15 @@ static int do_relocation(struct btrfs_trans_handle *trans,
+ 
+ 		if (!upper->eb) {
+ 			ret = btrfs_search_slot(trans, root, key, path, 0, 1);
+-			if (ret < 0) {
+-				err = ret;
++			if (ret) {
++				if (ret < 0)
++					err = ret;
++				else
++					err = -ENOENT;
++
++				btrfs_release_path(path);
+ 				break;
+ 			}
+-			BUG_ON(ret > 0);
+ 
+ 			if (!upper->eb) {
+ 				upper->eb = path->nodes[upper->level];
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index a87675ffd02b..563878a141a3 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5802,6 +5802,64 @@ static int changed_extent(struct send_ctx *sctx,
+ 	int ret = 0;
+ 
+ 	if (sctx->cur_ino != sctx->cmp_key->objectid) {
++
++		if (result == BTRFS_COMPARE_TREE_CHANGED) {
++			struct extent_buffer *leaf_l;
++			struct extent_buffer *leaf_r;
++			struct btrfs_file_extent_item *ei_l;
++			struct btrfs_file_extent_item *ei_r;
++
++			leaf_l = sctx->left_path->nodes[0];
++			leaf_r = sctx->right_path->nodes[0];
++			ei_l = btrfs_item_ptr(leaf_l,
++					      sctx->left_path->slots[0],
++					      struct btrfs_file_extent_item);
++			ei_r = btrfs_item_ptr(leaf_r,
++					      sctx->right_path->slots[0],
++					      struct btrfs_file_extent_item);
++
++			/*
++			 * We may have found an extent item that has changed
++			 * only its disk_bytenr field and the corresponding
++			 * inode item was not updated. This case happens due to
++			 * very specific timings during relocation when a leaf
++			 * that contains file extent items is COWed while
++			 * relocation is ongoing and its in the stage where it
++			 * updates data pointers. So when this happens we can
++			 * safely ignore it since we know it's the same extent,
++			 * but just at different logical and physical locations
++			 * (when an extent is fully replaced with a new one, we
++			 * know the generation number must have changed too,
++			 * since snapshot creation implies committing the current
++			 * transaction, and the inode item must have been updated
++			 * as well).
++			 * This replacement of the disk_bytenr happens at
++			 * relocation.c:replace_file_extents() through
++			 * relocation.c:btrfs_reloc_cow_block().
++			 */
++			if (btrfs_file_extent_generation(leaf_l, ei_l) ==
++			    btrfs_file_extent_generation(leaf_r, ei_r) &&
++			    btrfs_file_extent_ram_bytes(leaf_l, ei_l) ==
++			    btrfs_file_extent_ram_bytes(leaf_r, ei_r) &&
++			    btrfs_file_extent_compression(leaf_l, ei_l) ==
++			    btrfs_file_extent_compression(leaf_r, ei_r) &&
++			    btrfs_file_extent_encryption(leaf_l, ei_l) ==
++			    btrfs_file_extent_encryption(leaf_r, ei_r) &&
++			    btrfs_file_extent_other_encoding(leaf_l, ei_l) ==
++			    btrfs_file_extent_other_encoding(leaf_r, ei_r) &&
++			    btrfs_file_extent_type(leaf_l, ei_l) ==
++			    btrfs_file_extent_type(leaf_r, ei_r) &&
++			    btrfs_file_extent_disk_bytenr(leaf_l, ei_l) !=
++			    btrfs_file_extent_disk_bytenr(leaf_r, ei_r) &&
++			    btrfs_file_extent_disk_num_bytes(leaf_l, ei_l) ==
++			    btrfs_file_extent_disk_num_bytes(leaf_r, ei_r) &&
++			    btrfs_file_extent_offset(leaf_l, ei_l) ==
++			    btrfs_file_extent_offset(leaf_r, ei_r) &&
++			    btrfs_file_extent_num_bytes(leaf_l, ei_l) ==
++			    btrfs_file_extent_num_bytes(leaf_r, ei_r))
++				return 0;
++		}
++
+ 		inconsistent_snapshot_error(sctx, result, "extent");
+ 		return -EIO;
+ 	}
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 90e1198bc63d..e63c96ca0e96 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1940,12 +1940,11 @@ static noinline int find_dir_range(struct btrfs_root *root,
+ next:
+ 	/* check the next slot in the tree to see if it is a valid item */
+ 	nritems = btrfs_header_nritems(path->nodes[0]);
++	path->slots[0]++;
+ 	if (path->slots[0] >= nritems) {
+ 		ret = btrfs_next_leaf(root, path);
+ 		if (ret)
+ 			goto out;
+-	} else {
+-		path->slots[0]++;
+ 	}
+ 
+ 	btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
+@@ -5205,6 +5204,7 @@ process_leaf:
+ 			if (di_key.type == BTRFS_ROOT_ITEM_KEY)
+ 				continue;
+ 
++			btrfs_release_path(path);
+ 			di_inode = btrfs_iget(root->fs_info->sb, &di_key,
+ 					      root, NULL);
+ 			if (IS_ERR(di_inode)) {
+@@ -5214,13 +5214,12 @@ process_leaf:
+ 
+ 			if (btrfs_inode_in_log(di_inode, trans->transid)) {
+ 				iput(di_inode);
+-				continue;
++				break;
+ 			}
+ 
+ 			ctx->log_new_dentries = false;
+ 			if (type == BTRFS_FT_DIR || type == BTRFS_FT_SYMLINK)
+ 				log_mode = LOG_INODE_ALL;
+-			btrfs_release_path(path);
+ 			ret = btrfs_log_inode(trans, root, di_inode,
+ 					      log_mode, 0, LLONG_MAX, ctx);
+ 			if (!ret &&
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 035efce603a9..7c9c6a4be4fd 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -859,7 +859,7 @@ static void btrfs_close_bdev(struct btrfs_device *device)
+ 		blkdev_put(device->bdev, device->mode);
+ }
+ 
+-static void btrfs_close_one_device(struct btrfs_device *device)
++static void btrfs_prepare_close_one_device(struct btrfs_device *device)
+ {
+ 	struct btrfs_fs_devices *fs_devices = device->fs_devices;
+ 	struct btrfs_device *new_device;
+@@ -877,8 +877,6 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 	if (device->missing)
+ 		fs_devices->missing_devices--;
+ 
+-	btrfs_close_bdev(device);
+-
+ 	new_device = btrfs_alloc_device(NULL, &device->devid,
+ 					device->uuid);
+ 	BUG_ON(IS_ERR(new_device)); /* -ENOMEM */
+@@ -892,23 +890,39 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 
+ 	list_replace_rcu(&device->dev_list, &new_device->dev_list);
+ 	new_device->fs_devices = device->fs_devices;
+-
+-	call_rcu(&device->rcu, free_device);
+ }
+ 
+ static int __btrfs_close_devices(struct btrfs_fs_devices *fs_devices)
+ {
+ 	struct btrfs_device *device, *tmp;
++	struct list_head pending_put;
++
++	INIT_LIST_HEAD(&pending_put);
+ 
+ 	if (--fs_devices->opened > 0)
+ 		return 0;
+ 
+ 	mutex_lock(&fs_devices->device_list_mutex);
+ 	list_for_each_entry_safe(device, tmp, &fs_devices->devices, dev_list) {
+-		btrfs_close_one_device(device);
++		btrfs_prepare_close_one_device(device);
++		list_add(&device->dev_list, &pending_put);
+ 	}
+ 	mutex_unlock(&fs_devices->device_list_mutex);
+ 
++	/*
++	 * btrfs_show_devname() is using the device_list_mutex,
++	 * sometimes call to blkdev_put() leads vfs calling
++	 * into this func. So do put outside of device_list_mutex,
++	 * as of now.
++	 */
++	while (!list_empty(&pending_put)) {
++		device = list_first_entry(&pending_put,
++				struct btrfs_device, dev_list);
++		list_del(&device->dev_list);
++		btrfs_close_bdev(device);
++		call_rcu(&device->rcu, free_device);
++	}
++
+ 	WARN_ON(fs_devices->open_devices);
+ 	WARN_ON(fs_devices->rw_devices);
+ 	fs_devices->opened = 0;
+@@ -1846,7 +1860,6 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path, u64 devid)
+ 	u64 num_devices;
+ 	int ret = 0;
+ 	bool clear_super = false;
+-	char *dev_name = NULL;
+ 
+ 	mutex_lock(&uuid_mutex);
+ 
+@@ -1882,11 +1895,6 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path, u64 devid)
+ 		list_del_init(&device->dev_alloc_list);
+ 		device->fs_devices->rw_devices--;
+ 		unlock_chunks(root);
+-		dev_name = kstrdup(device->name->str, GFP_KERNEL);
+-		if (!dev_name) {
+-			ret = -ENOMEM;
+-			goto error_undo;
+-		}
+ 		clear_super = true;
+ 	}
+ 
+@@ -1936,14 +1944,21 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path, u64 devid)
+ 		btrfs_sysfs_rm_device_link(root->fs_info->fs_devices, device);
+ 	}
+ 
+-	btrfs_close_bdev(device);
+-
+-	call_rcu(&device->rcu, free_device);
+-
+ 	num_devices = btrfs_super_num_devices(root->fs_info->super_copy) - 1;
+ 	btrfs_set_super_num_devices(root->fs_info->super_copy, num_devices);
+ 	mutex_unlock(&root->fs_info->fs_devices->device_list_mutex);
+ 
++	/*
++	 * at this point, the device is zero sized and detached from
++	 * the devices list.  All that's left is to zero out the old
++	 * supers and free the device.
++	 */
++	if (device->writeable)
++		btrfs_scratch_superblocks(device->bdev, device->name->str);
++
++	btrfs_close_bdev(device);
++	call_rcu(&device->rcu, free_device);
++
+ 	if (cur_devices->open_devices == 0) {
+ 		struct btrfs_fs_devices *fs_devices;
+ 		fs_devices = root->fs_info->fs_devices;
+@@ -1962,24 +1977,7 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path, u64 devid)
+ 	root->fs_info->num_tolerated_disk_barrier_failures =
+ 		btrfs_calc_num_tolerated_disk_barrier_failures(root->fs_info);
+ 
+-	/*
+-	 * at this point, the device is zero sized.  We want to
+-	 * remove it from the devices list and zero out the old super
+-	 */
+-	if (clear_super) {
+-		struct block_device *bdev;
+-
+-		bdev = blkdev_get_by_path(dev_name, FMODE_READ | FMODE_EXCL,
+-						root->fs_info->bdev_holder);
+-		if (!IS_ERR(bdev)) {
+-			btrfs_scratch_superblocks(bdev, dev_name);
+-			blkdev_put(bdev, FMODE_READ | FMODE_EXCL);
+-		}
+-	}
+-
+ out:
+-	kfree(dev_name);
+-
+ 	mutex_unlock(&uuid_mutex);
+ 	return ret;
+ 
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 65f78b7a9062..24184cae83d2 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -629,6 +629,8 @@ struct TCP_Server_Info {
+ 	unsigned int	max_read;
+ 	unsigned int	max_write;
+ 	__u8		preauth_hash[512];
++	struct delayed_work reconnect; /* reconnect workqueue job */
++	struct mutex reconnect_mutex; /* prevent simultaneous reconnects */
+ #endif /* CONFIG_CIFS_SMB2 */
+ 	unsigned long echo_interval;
+ };
+@@ -832,6 +834,7 @@ cap_unix(struct cifs_ses *ses)
+ struct cifs_tcon {
+ 	struct list_head tcon_list;
+ 	int tc_count;
++	struct list_head rlist; /* reconnect list */
+ 	struct list_head openFileList;
+ 	spinlock_t open_file_lock; /* protects list above */
+ 	struct cifs_ses *ses;	/* pointer to session associated with */
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index 95dab43646f0..ca76f8a7267f 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -204,6 +204,9 @@ extern void cifs_add_pending_open_locked(struct cifs_fid *fid,
+ 					 struct tcon_link *tlink,
+ 					 struct cifs_pending_open *open);
+ extern void cifs_del_pending_open(struct cifs_pending_open *open);
++extern void cifs_put_tcp_session(struct TCP_Server_Info *server,
++				 int from_reconnect);
++extern void cifs_put_tcon(struct cifs_tcon *tcon);
+ 
+ #if IS_ENABLED(CONFIG_CIFS_DFS_UPCALL)
+ extern void cifs_dfs_release_automount_timer(void);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 7b67179521cf..a178f3a5b052 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -52,6 +52,9 @@
+ #include "nterr.h"
+ #include "rfc1002pdu.h"
+ #include "fscache.h"
++#ifdef CONFIG_CIFS_SMB2
++#include "smb2proto.h"
++#endif
+ 
+ #define CIFS_PORT 445
+ #define RFC1001_PORT 139
+@@ -2076,8 +2079,8 @@ cifs_find_tcp_session(struct smb_vol *vol)
+ 	return NULL;
+ }
+ 
+-static void
+-cifs_put_tcp_session(struct TCP_Server_Info *server)
++void
++cifs_put_tcp_session(struct TCP_Server_Info *server, int from_reconnect)
+ {
+ 	struct task_struct *task;
+ 
+@@ -2094,6 +2097,19 @@ cifs_put_tcp_session(struct TCP_Server_Info *server)
+ 
+ 	cancel_delayed_work_sync(&server->echo);
+ 
++#ifdef CONFIG_CIFS_SMB2
++	if (from_reconnect)
++		/*
++		 * Avoid deadlock here: reconnect work calls
++		 * cifs_put_tcp_session() at its end. Need to be sure
++		 * that reconnect work does nothing with server pointer after
++		 * that step.
++		 */
++		cancel_delayed_work(&server->reconnect);
++	else
++		cancel_delayed_work_sync(&server->reconnect);
++#endif
++
+ 	spin_lock(&GlobalMid_Lock);
+ 	server->tcpStatus = CifsExiting;
+ 	spin_unlock(&GlobalMid_Lock);
+@@ -2158,6 +2174,10 @@ cifs_get_tcp_session(struct smb_vol *volume_info)
+ 	INIT_LIST_HEAD(&tcp_ses->tcp_ses_list);
+ 	INIT_LIST_HEAD(&tcp_ses->smb_ses_list);
+ 	INIT_DELAYED_WORK(&tcp_ses->echo, cifs_echo_request);
++#ifdef CONFIG_CIFS_SMB2
++	INIT_DELAYED_WORK(&tcp_ses->reconnect, smb2_reconnect_server);
++	mutex_init(&tcp_ses->reconnect_mutex);
++#endif
+ 	memcpy(&tcp_ses->srcaddr, &volume_info->srcaddr,
+ 	       sizeof(tcp_ses->srcaddr));
+ 	memcpy(&tcp_ses->dstaddr, &volume_info->dstaddr,
+@@ -2316,7 +2336,7 @@ cifs_put_smb_ses(struct cifs_ses *ses)
+ 	spin_unlock(&cifs_tcp_ses_lock);
+ 
+ 	sesInfoFree(ses);
+-	cifs_put_tcp_session(server);
++	cifs_put_tcp_session(server, 0);
+ }
+ 
+ #ifdef CONFIG_KEYS
+@@ -2490,7 +2510,7 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb_vol *volume_info)
+ 		mutex_unlock(&ses->session_mutex);
+ 
+ 		/* existing SMB ses has a server reference already */
+-		cifs_put_tcp_session(server);
++		cifs_put_tcp_session(server, 0);
+ 		free_xid(xid);
+ 		return ses;
+ 	}
+@@ -2580,7 +2600,7 @@ cifs_find_tcon(struct cifs_ses *ses, const char *unc)
+ 	return NULL;
+ }
+ 
+-static void
++void
+ cifs_put_tcon(struct cifs_tcon *tcon)
+ {
+ 	unsigned int xid;
+@@ -3762,7 +3782,7 @@ mount_fail_check:
+ 		else if (ses)
+ 			cifs_put_smb_ses(ses);
+ 		else
+-			cifs_put_tcp_session(server);
++			cifs_put_tcp_session(server, 0);
+ 		bdi_destroy(&cifs_sb->bdi);
+ 	}
+ 
+@@ -4073,7 +4093,7 @@ cifs_construct_tcon(struct cifs_sb_info *cifs_sb, kuid_t fsuid)
+ 	ses = cifs_get_smb_ses(master_tcon->ses->server, vol_info);
+ 	if (IS_ERR(ses)) {
+ 		tcon = (struct cifs_tcon *)ses;
+-		cifs_put_tcp_session(master_tcon->ses->server);
++		cifs_put_tcp_session(master_tcon->ses->server, 0);
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
+index f9e766f464be..b2aff0c6f22c 100644
+--- a/fs/cifs/smb2file.c
++++ b/fs/cifs/smb2file.c
+@@ -260,7 +260,7 @@ smb2_push_mandatory_locks(struct cifsFileInfo *cfile)
+ 	 * and check it for zero before using.
+ 	 */
+ 	max_buf = tlink_tcon(cfile->tlink)->ses->server->maxBuf;
+-	if (!max_buf) {
++	if (max_buf < sizeof(struct smb2_lock_element)) {
+ 		free_xid(xid);
+ 		return -EINVAL;
+ 	}
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 3eec96ca87d9..32e0e06f972c 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -275,7 +275,7 @@ out:
+ 	case SMB2_CHANGE_NOTIFY:
+ 	case SMB2_QUERY_INFO:
+ 	case SMB2_SET_INFO:
+-		return -EAGAIN;
++		rc = -EAGAIN;
+ 	}
+ 	unload_nls(nls_codepage);
+ 	return rc;
+@@ -1820,6 +1820,54 @@ smb2_echo_callback(struct mid_q_entry *mid)
+ 	add_credits(server, credits_received, CIFS_ECHO_OP);
+ }
+ 
++void smb2_reconnect_server(struct work_struct *work)
++{
++	struct TCP_Server_Info *server = container_of(work,
++					struct TCP_Server_Info, reconnect.work);
++	struct cifs_ses *ses;
++	struct cifs_tcon *tcon, *tcon2;
++	struct list_head tmp_list;
++	int tcon_exist = false;
++
++	/* Prevent simultaneous reconnects that can corrupt tcon->rlist list */
++	mutex_lock(&server->reconnect_mutex);
++
++	INIT_LIST_HEAD(&tmp_list);
++	cifs_dbg(FYI, "Need negotiate, reconnecting tcons\n");
++
++	spin_lock(&cifs_tcp_ses_lock);
++	list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
++		list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
++			if (tcon->need_reconnect) {
++				tcon->tc_count++;
++				list_add_tail(&tcon->rlist, &tmp_list);
++				tcon_exist = true;
++			}
++		}
++	}
++	/*
++	 * Get the reference to server struct to be sure that the last call of
++	 * cifs_put_tcon() in the loop below won't release the server pointer.
++	 */
++	if (tcon_exist)
++		server->srv_count++;
++
++	spin_unlock(&cifs_tcp_ses_lock);
++
++	list_for_each_entry_safe(tcon, tcon2, &tmp_list, rlist) {
++		smb2_reconnect(SMB2_ECHO, tcon);
++		list_del_init(&tcon->rlist);
++		cifs_put_tcon(tcon);
++	}
++
++	cifs_dbg(FYI, "Reconnecting tcons finished\n");
++	mutex_unlock(&server->reconnect_mutex);
++
++	/* now we can safely release srv struct */
++	if (tcon_exist)
++		cifs_put_tcp_session(server, 1);
++}
++
+ int
+ SMB2_echo(struct TCP_Server_Info *server)
+ {
+@@ -1832,32 +1880,11 @@ SMB2_echo(struct TCP_Server_Info *server)
+ 	cifs_dbg(FYI, "In echo request\n");
+ 
+ 	if (server->tcpStatus == CifsNeedNegotiate) {
+-		struct list_head *tmp, *tmp2;
+-		struct cifs_ses *ses;
+-		struct cifs_tcon *tcon;
+-
+-		cifs_dbg(FYI, "Need negotiate, reconnecting tcons\n");
+-		spin_lock(&cifs_tcp_ses_lock);
+-		list_for_each(tmp, &server->smb_ses_list) {
+-			ses = list_entry(tmp, struct cifs_ses, smb_ses_list);
+-			list_for_each(tmp2, &ses->tcon_list) {
+-				tcon = list_entry(tmp2, struct cifs_tcon,
+-						  tcon_list);
+-				/* add check for persistent handle reconnect */
+-				if (tcon && tcon->need_reconnect) {
+-					spin_unlock(&cifs_tcp_ses_lock);
+-					rc = smb2_reconnect(SMB2_ECHO, tcon);
+-					spin_lock(&cifs_tcp_ses_lock);
+-				}
+-			}
+-		}
+-		spin_unlock(&cifs_tcp_ses_lock);
++		/* No need to send echo on newly established connections */
++		queue_delayed_work(cifsiod_wq, &server->reconnect, 0);
++		return rc;
+ 	}
+ 
+-	/* if no session, renegotiate failed above */
+-	if (server->tcpStatus == CifsNeedNegotiate)
+-		return -EIO;
+-
+ 	rc = small_smb2_init(SMB2_ECHO, NULL, (void **)&req);
+ 	if (rc)
+ 		return rc;
+diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
+index eb2cde2f64ba..f2d511a6971b 100644
+--- a/fs/cifs/smb2proto.h
++++ b/fs/cifs/smb2proto.h
+@@ -96,6 +96,7 @@ extern int smb2_open_file(const unsigned int xid,
+ extern int smb2_unlock_range(struct cifsFileInfo *cfile,
+ 			     struct file_lock *flock, const unsigned int xid);
+ extern int smb2_push_mandatory_locks(struct cifsFileInfo *cfile);
++extern void smb2_reconnect_server(struct work_struct *work);
+ 
+ /*
+  * SMB2 Worker functions - most of protocol specific implementation details
+diff --git a/fs/exec.c b/fs/exec.c
+index 6fcfb3f7b137..eebe8be7a29d 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -19,7 +19,7 @@
+  * current->executable is only used by the procfs.  This allows a dispatch
+  * table to check for several different types  of binary formats.  We keep
+  * trying until we recognize the file or we run out of supported binary
+- * formats. 
++ * formats.
+  */
+ 
+ #include <linux/slab.h>
+@@ -1261,6 +1261,13 @@ int flush_old_exec(struct linux_binprm * bprm)
+ 	flush_thread();
+ 	current->personality &= ~bprm->per_clear;
+ 
++	/*
++	 * We have to apply CLOEXEC before we change whether the process is
++	 * dumpable (in setup_new_exec) to avoid a race with a process in userspace
++	 * trying to access the should-be-closed file descriptors of a process
++	 * undergoing exec(2).
++	 */
++	do_close_on_exec(current->files);
+ 	return 0;
+ 
+ out:
+@@ -1270,8 +1277,22 @@ EXPORT_SYMBOL(flush_old_exec);
+ 
+ void would_dump(struct linux_binprm *bprm, struct file *file)
+ {
+-	if (inode_permission(file_inode(file), MAY_READ) < 0)
++	struct inode *inode = file_inode(file);
++	if (inode_permission(inode, MAY_READ) < 0) {
++		struct user_namespace *old, *user_ns;
+ 		bprm->interp_flags |= BINPRM_FLAGS_ENFORCE_NONDUMP;
++
++		/* Ensure mm->user_ns contains the executable */
++		user_ns = old = bprm->mm->user_ns;
++		while ((user_ns != &init_user_ns) &&
++		       !privileged_wrt_inode_uidgid(user_ns, inode))
++			user_ns = user_ns->parent;
++
++		if (old != user_ns) {
++			bprm->mm->user_ns = get_user_ns(user_ns);
++			put_user_ns(old);
++		}
++	}
+ }
+ EXPORT_SYMBOL(would_dump);
+ 
+@@ -1301,7 +1322,6 @@ void setup_new_exec(struct linux_binprm * bprm)
+ 	    !gid_eq(bprm->cred->gid, current_egid())) {
+ 		current->pdeath_signal = 0;
+ 	} else {
+-		would_dump(bprm, bprm->file);
+ 		if (bprm->interp_flags & BINPRM_FLAGS_ENFORCE_NONDUMP)
+ 			set_dumpable(current->mm, suid_dumpable);
+ 	}
+@@ -1310,7 +1330,6 @@ void setup_new_exec(struct linux_binprm * bprm)
+ 	   group */
+ 	current->self_exec_id++;
+ 	flush_signal_handlers(current, 0);
+-	do_close_on_exec(current->files);
+ }
+ EXPORT_SYMBOL(setup_new_exec);
+ 
+@@ -1401,7 +1420,7 @@ static void check_unsafe_exec(struct linux_binprm *bprm)
+ 	unsigned n_fs;
+ 
+ 	if (p->ptrace) {
+-		if (p->ptrace & PT_PTRACE_CAP)
++		if (ptracer_capable(p, current_user_ns()))
+ 			bprm->unsafe |= LSM_UNSAFE_PTRACE_CAP;
+ 		else
+ 			bprm->unsafe |= LSM_UNSAFE_PTRACE;
+@@ -1736,6 +1755,8 @@ static int do_execveat_common(int fd, struct filename *filename,
+ 	if (retval < 0)
+ 		goto out;
+ 
++	would_dump(bprm, bprm->file);
++
+ 	retval = exec_binprm(bprm);
+ 	if (retval < 0)
+ 		goto out;
+diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
+index b1d52c14098e..f97611171023 100644
+--- a/fs/ext4/ext4_jbd2.h
++++ b/fs/ext4/ext4_jbd2.h
+@@ -414,17 +414,19 @@ static inline int ext4_inode_journal_mode(struct inode *inode)
+ 		return EXT4_INODE_WRITEBACK_DATA_MODE;	/* writeback */
+ 	/* We do not support data journalling with delayed allocation */
+ 	if (!S_ISREG(inode->i_mode) ||
+-	    test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA)
+-		return EXT4_INODE_JOURNAL_DATA_MODE;	/* journal data */
+-	if (ext4_test_inode_flag(inode, EXT4_INODE_JOURNAL_DATA) &&
+-	    !test_opt(inode->i_sb, DELALLOC))
++	    test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA ||
++	    (ext4_test_inode_flag(inode, EXT4_INODE_JOURNAL_DATA) &&
++	    !test_opt(inode->i_sb, DELALLOC))) {
++		/* We do not support data journalling for encrypted data */
++		if (S_ISREG(inode->i_mode) && ext4_encrypted_inode(inode))
++			return EXT4_INODE_ORDERED_DATA_MODE;  /* ordered */
+ 		return EXT4_INODE_JOURNAL_DATA_MODE;	/* journal data */
++	}
+ 	if (test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_ORDERED_DATA)
+ 		return EXT4_INODE_ORDERED_DATA_MODE;	/* ordered */
+ 	if (test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_WRITEBACK_DATA)
+ 		return EXT4_INODE_WRITEBACK_DATA_MODE;	/* writeback */
+-	else
+-		BUG();
++	BUG();
+ }
+ 
+ static inline int ext4_should_journal_data(struct inode *inode)
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index f74d5ee2cdec..d8ca4b9f9dd6 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -336,8 +336,10 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
+ 
+ 	len -= EXT4_MIN_INLINE_DATA_SIZE;
+ 	value = kzalloc(len, GFP_NOFS);
+-	if (!value)
++	if (!value) {
++		error = -ENOMEM;
+ 		goto out;
++	}
+ 
+ 	error = ext4_xattr_ibody_get(inode, i.name_index, i.name,
+ 				     value, len);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index f4cdc647ecfc..27e4348efc4e 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4438,6 +4438,7 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ 	struct inode *inode;
+ 	journal_t *journal = EXT4_SB(sb)->s_journal;
+ 	long ret;
++	loff_t size;
+ 	int block;
+ 	uid_t i_uid;
+ 	gid_t i_gid;
+@@ -4538,6 +4539,11 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ 		ei->i_file_acl |=
+ 			((__u64)le16_to_cpu(raw_inode->i_file_acl_high)) << 32;
+ 	inode->i_size = ext4_isize(raw_inode);
++	if ((size = i_size_read(inode)) < 0) {
++		EXT4_ERROR_INODE(inode, "bad i_size value: %lld", size);
++		ret = -EFSCORRUPTED;
++		goto bad_inode;
++	}
+ 	ei->i_disksize = inode->i_size;
+ #ifdef CONFIG_QUOTA
+ 	ei->i_reserved_quota = 0;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index f418f55c2bbe..7ae43c59bc79 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -669,7 +669,7 @@ static void ext4_mb_mark_free_simple(struct super_block *sb,
+ 	ext4_grpblk_t min;
+ 	ext4_grpblk_t max;
+ 	ext4_grpblk_t chunk;
+-	unsigned short border;
++	unsigned int border;
+ 
+ 	BUG_ON(len > EXT4_CLUSTERS_PER_GROUP(sb));
+ 
+@@ -2287,7 +2287,7 @@ static int ext4_mb_seq_groups_show(struct seq_file *seq, void *v)
+ 	struct ext4_group_info *grinfo;
+ 	struct sg {
+ 		struct ext4_group_info info;
+-		ext4_grpblk_t counters[16];
++		ext4_grpblk_t counters[EXT4_MAX_BLOCK_LOG_SIZE + 2];
+ 	} sg;
+ 
+ 	group--;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index ec89f5005c2d..d0d4377680c7 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3161,10 +3161,15 @@ static int count_overhead(struct super_block *sb, ext4_group_t grp,
+ 			ext4_set_bit(s++, buf);
+ 			count++;
+ 		}
+-		for (j = ext4_bg_num_gdb(sb, grp); j > 0; j--) {
+-			ext4_set_bit(EXT4_B2C(sbi, s++), buf);
+-			count++;
++		j = ext4_bg_num_gdb(sb, grp);
++		if (s + j > EXT4_BLOCKS_PER_GROUP(sb)) {
++			ext4_error(sb, "Invalid number of block group "
++				   "descriptor blocks: %d", j);
++			j = EXT4_BLOCKS_PER_GROUP(sb) - s;
+ 		}
++		count += j;
++		for (; j > 0; j--)
++			ext4_set_bit(EXT4_B2C(sbi, s++), buf);
+ 	}
+ 	if (!count)
+ 		return 0;
+@@ -3254,7 +3259,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	char *orig_data = kstrdup(data, GFP_KERNEL);
+ 	struct buffer_head *bh;
+ 	struct ext4_super_block *es = NULL;
+-	struct ext4_sb_info *sbi;
++	struct ext4_sb_info *sbi = kzalloc(sizeof(*sbi), GFP_KERNEL);
+ 	ext4_fsblk_t block;
+ 	ext4_fsblk_t sb_block = get_sb_block(&data);
+ 	ext4_fsblk_t logical_sb_block;
+@@ -3273,16 +3278,14 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	unsigned int journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
+ 	ext4_group_t first_not_zeroed;
+ 
+-	sbi = kzalloc(sizeof(*sbi), GFP_KERNEL);
+-	if (!sbi)
+-		goto out_free_orig;
++	if ((data && !orig_data) || !sbi)
++		goto out_free_base;
+ 
+ 	sbi->s_blockgroup_lock =
+ 		kzalloc(sizeof(struct blockgroup_lock), GFP_KERNEL);
+-	if (!sbi->s_blockgroup_lock) {
+-		kfree(sbi);
+-		goto out_free_orig;
+-	}
++	if (!sbi->s_blockgroup_lock)
++		goto out_free_base;
++
+ 	sb->s_fs_info = sbi;
+ 	sbi->s_sb = sb;
+ 	sbi->s_inode_readahead_blks = EXT4_DEF_INODE_READAHEAD_BLKS;
+@@ -3428,11 +3431,19 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	 */
+ 	sbi->s_li_wait_mult = EXT4_DEF_LI_WAIT_MULT;
+ 
+-	if (!parse_options((char *) sbi->s_es->s_mount_opts, sb,
+-			   &journal_devnum, &journal_ioprio, 0)) {
+-		ext4_msg(sb, KERN_WARNING,
+-			 "failed to parse options in superblock: %s",
+-			 sbi->s_es->s_mount_opts);
++	if (sbi->s_es->s_mount_opts[0]) {
++		char *s_mount_opts = kstrndup(sbi->s_es->s_mount_opts,
++					      sizeof(sbi->s_es->s_mount_opts),
++					      GFP_KERNEL);
++		if (!s_mount_opts)
++			goto failed_mount;
++		if (!parse_options(s_mount_opts, sb, &journal_devnum,
++				   &journal_ioprio, 0)) {
++			ext4_msg(sb, KERN_WARNING,
++				 "failed to parse options in superblock: %s",
++				 s_mount_opts);
++		}
++		kfree(s_mount_opts);
+ 	}
+ 	sbi->s_def_mount_opt = sbi->s_mount_opt;
+ 	if (!parse_options((char *) data, sb, &journal_devnum,
+@@ -3458,6 +3469,11 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 				 "both data=journal and dax");
+ 			goto failed_mount;
+ 		}
++		if (ext4_has_feature_encrypt(sb)) {
++			ext4_msg(sb, KERN_WARNING,
++				 "encrypted files will use data=ordered "
++				 "instead of data journaling mode");
++		}
+ 		if (test_opt(sb, DELALLOC))
+ 			clear_opt(sb, DELALLOC);
+ 	} else {
+@@ -3613,12 +3629,16 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	sbi->s_blocks_per_group = le32_to_cpu(es->s_blocks_per_group);
+ 	sbi->s_inodes_per_group = le32_to_cpu(es->s_inodes_per_group);
+-	if (EXT4_INODE_SIZE(sb) == 0 || EXT4_INODES_PER_GROUP(sb) == 0)
+-		goto cantfind_ext4;
+ 
+ 	sbi->s_inodes_per_block = blocksize / EXT4_INODE_SIZE(sb);
+ 	if (sbi->s_inodes_per_block == 0)
+ 		goto cantfind_ext4;
++	if (sbi->s_inodes_per_group < sbi->s_inodes_per_block ||
++	    sbi->s_inodes_per_group > blocksize * 8) {
++		ext4_msg(sb, KERN_ERR, "invalid inodes per group: %lu\n",
++			 sbi->s_blocks_per_group);
++		goto failed_mount;
++	}
+ 	sbi->s_itb_per_group = sbi->s_inodes_per_group /
+ 					sbi->s_inodes_per_block;
+ 	sbi->s_desc_per_block = blocksize / EXT4_DESC_SIZE(sb);
+@@ -3701,13 +3721,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	}
+ 	sbi->s_cluster_ratio = clustersize / blocksize;
+ 
+-	if (sbi->s_inodes_per_group > blocksize * 8) {
+-		ext4_msg(sb, KERN_ERR,
+-		       "#inodes per group too big: %lu",
+-		       sbi->s_inodes_per_group);
+-		goto failed_mount;
+-	}
+-
+ 	/* Do we have standard group size of clustersize * 8 blocks ? */
+ 	if (sbi->s_blocks_per_group == clustersize << 3)
+ 		set_opt2(sb, STD_GROUP_SIZE);
+@@ -4113,7 +4126,9 @@ no_journal:
+ 
+ 	if (___ratelimit(&ext4_mount_msg_ratelimit, "EXT4-fs mount"))
+ 		ext4_msg(sb, KERN_INFO, "mounted filesystem with%s. "
+-			 "Opts: %s%s%s", descr, sbi->s_es->s_mount_opts,
++			 "Opts: %.*s%s%s", descr,
++			 (int) sizeof(sbi->s_es->s_mount_opts),
++			 sbi->s_es->s_mount_opts,
+ 			 *sbi->s_es->s_mount_opts ? "; " : "", orig_data);
+ 
+ 	if (es->s_error_count)
+@@ -4192,8 +4207,8 @@ failed_mount:
+ out_fail:
+ 	sb->s_fs_info = NULL;
+ 	kfree(sbi->s_blockgroup_lock);
++out_free_base:
+ 	kfree(sbi);
+-out_free_orig:
+ 	kfree(orig_data);
+ 	return err ? err : ret;
+ }
+diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
+index badd407bb622..c05d66d98871 100644
+--- a/fs/f2fs/debug.c
++++ b/fs/f2fs/debug.c
+@@ -362,6 +362,7 @@ static int stat_open(struct inode *inode, struct file *file)
+ }
+ 
+ static const struct file_operations stat_fops = {
++	.owner = THIS_MODULE,
+ 	.open = stat_open,
+ 	.read = seq_read,
+ 	.llseek = seq_lseek,
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 14f5fe2b841e..8467f4221f57 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -451,7 +451,7 @@ struct f2fs_inode_info {
+ 	/* Use below internally in f2fs*/
+ 	unsigned long flags;		/* use to pass per-file flags */
+ 	struct rw_semaphore i_sem;	/* protect fi info */
+-	struct percpu_counter dirty_pages;	/* # of dirty pages */
++	atomic_t dirty_pages;		/* # of dirty pages */
+ 	f2fs_hash_t chash;		/* hash value of given file name */
+ 	unsigned int clevel;		/* maximum level of given file name */
+ 	nid_t i_xattr_nid;		/* node id that contains xattrs */
+@@ -1198,7 +1198,7 @@ static inline void inc_page_count(struct f2fs_sb_info *sbi, int count_type)
+ 
+ static inline void inode_inc_dirty_pages(struct inode *inode)
+ {
+-	percpu_counter_inc(&F2FS_I(inode)->dirty_pages);
++	atomic_inc(&F2FS_I(inode)->dirty_pages);
+ 	inc_page_count(F2FS_I_SB(inode), S_ISDIR(inode->i_mode) ?
+ 				F2FS_DIRTY_DENTS : F2FS_DIRTY_DATA);
+ }
+@@ -1214,7 +1214,7 @@ static inline void inode_dec_dirty_pages(struct inode *inode)
+ 			!S_ISLNK(inode->i_mode))
+ 		return;
+ 
+-	percpu_counter_dec(&F2FS_I(inode)->dirty_pages);
++	atomic_dec(&F2FS_I(inode)->dirty_pages);
+ 	dec_page_count(F2FS_I_SB(inode), S_ISDIR(inode->i_mode) ?
+ 				F2FS_DIRTY_DENTS : F2FS_DIRTY_DATA);
+ }
+@@ -1224,9 +1224,9 @@ static inline s64 get_pages(struct f2fs_sb_info *sbi, int count_type)
+ 	return percpu_counter_sum_positive(&sbi->nr_pages[count_type]);
+ }
+ 
+-static inline s64 get_dirty_pages(struct inode *inode)
++static inline int get_dirty_pages(struct inode *inode)
+ {
+-	return percpu_counter_sum_positive(&F2FS_I(inode)->dirty_pages);
++	return atomic_read(&F2FS_I(inode)->dirty_pages);
+ }
+ 
+ static inline int get_blocktype_secs(struct f2fs_sb_info *sbi, int block_type)
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 28f4f4cbb8d8..06f39db65923 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -970,7 +970,7 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
+ 				new_size = (dst + i) << PAGE_SHIFT;
+ 				if (dst_inode->i_size < new_size)
+ 					f2fs_i_size_write(dst_inode, new_size);
+-			} while ((do_replace[i] || blkaddr[i] == NULL_ADDR) && --ilen);
++			} while (--ilen && (do_replace[i] || blkaddr[i] == NULL_ADDR));
+ 
+ 			f2fs_put_dnode(&dn);
+ 		} else {
+@@ -1529,7 +1529,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ 		goto out;
+ 
+ 	f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
+-		"Unexpected flush for atomic writes: ino=%lu, npages=%lld",
++		"Unexpected flush for atomic writes: ino=%lu, npages=%u",
+ 					inode->i_ino, get_dirty_pages(inode));
+ 	ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
+ 	if (ret)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 7f863a645ab1..37edc853c66c 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -565,13 +565,9 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
+ 
+ 	init_once((void *) fi);
+ 
+-	if (percpu_counter_init(&fi->dirty_pages, 0, GFP_NOFS)) {
+-		kmem_cache_free(f2fs_inode_cachep, fi);
+-		return NULL;
+-	}
+-
+ 	/* Initialize f2fs-specific inode info */
+ 	fi->vfs_inode.i_version = 1;
++	atomic_set(&fi->dirty_pages, 0);
+ 	fi->i_current_depth = 1;
+ 	fi->i_advise = 0;
+ 	init_rwsem(&fi->i_sem);
+@@ -694,7 +690,6 @@ static void f2fs_i_callback(struct rcu_head *head)
+ 
+ static void f2fs_destroy_inode(struct inode *inode)
+ {
+-	percpu_counter_destroy(&F2FS_I(inode)->dirty_pages);
+ 	call_rcu(&inode->i_rcu, f2fs_i_callback);
+ }
+ 
+diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
+index e8638fd2c0c3..6f07b3e89ff2 100644
+--- a/fs/xfs/xfs_log_recover.c
++++ b/fs/xfs/xfs_log_recover.c
+@@ -4506,6 +4506,7 @@ xlog_recover_clear_agi_bucket(
+ 	agi->agi_unlinked[bucket] = cpu_to_be32(NULLAGINO);
+ 	offset = offsetof(xfs_agi_t, agi_unlinked) +
+ 		 (sizeof(xfs_agino_t) * bucket);
++	xfs_trans_buf_set_type(tp, agibp, XFS_BLFT_AGI_BUF);
+ 	xfs_trans_log_buf(tp, agibp, offset,
+ 			  (offset + sizeof(xfs_agino_t) - 1));
+ 
+diff --git a/include/linux/capability.h b/include/linux/capability.h
+index dbc21c719ce6..6ffb67e10c06 100644
+--- a/include/linux/capability.h
++++ b/include/linux/capability.h
+@@ -240,8 +240,10 @@ static inline bool ns_capable_noaudit(struct user_namespace *ns, int cap)
+ 	return true;
+ }
+ #endif /* CONFIG_MULTIUSER */
++extern bool privileged_wrt_inode_uidgid(struct user_namespace *ns, const struct inode *inode);
+ extern bool capable_wrt_inode_uidgid(const struct inode *inode, int cap);
+ extern bool file_ns_capable(const struct file *file, struct user_namespace *ns, int cap);
++extern bool ptracer_capable(struct task_struct *tsk, struct user_namespace *ns);
+ 
+ /* audit system wants to get cap info from files as well */
+ extern int get_vfs_caps_from_disk(const struct dentry *dentry, struct cpu_vfs_cap_data *cpu_caps);
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 903200f4ec41..39820082ce91 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -473,6 +473,7 @@ struct mm_struct {
+ 	 */
+ 	struct task_struct __rcu *owner;
+ #endif
++	struct user_namespace *user_ns;
+ 
+ 	/* store ref to file /proc/<pid>/exe symlink points to */
+ 	struct file __rcu *exe_file;
+diff --git a/include/linux/pm_opp.h b/include/linux/pm_opp.h
+index bca26157f5b6..f6bc76501912 100644
+--- a/include/linux/pm_opp.h
++++ b/include/linux/pm_opp.h
+@@ -19,6 +19,7 @@
+ 
+ struct dev_pm_opp;
+ struct device;
++struct opp_table;
+ 
+ enum dev_pm_opp_event {
+ 	OPP_EVENT_ADD, OPP_EVENT_REMOVE, OPP_EVENT_ENABLE, OPP_EVENT_DISABLE,
+@@ -62,8 +63,8 @@ int dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions,
+ void dev_pm_opp_put_supported_hw(struct device *dev);
+ int dev_pm_opp_set_prop_name(struct device *dev, const char *name);
+ void dev_pm_opp_put_prop_name(struct device *dev);
+-int dev_pm_opp_set_regulator(struct device *dev, const char *name);
+-void dev_pm_opp_put_regulator(struct device *dev);
++struct opp_table *dev_pm_opp_set_regulator(struct device *dev, const char *name);
++void dev_pm_opp_put_regulator(struct opp_table *opp_table);
+ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
+ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask);
+ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
+@@ -170,12 +171,12 @@ static inline int dev_pm_opp_set_prop_name(struct device *dev, const char *name)
+ 
+ static inline void dev_pm_opp_put_prop_name(struct device *dev) {}
+ 
+-static inline int dev_pm_opp_set_regulator(struct device *dev, const char *name)
++static inline struct opp_table *dev_pm_opp_set_regulator(struct device *dev, const char *name)
+ {
+-	return -ENOTSUPP;
++	return ERR_PTR(-ENOTSUPP);
+ }
+ 
+-static inline void dev_pm_opp_put_regulator(struct device *dev) {}
++static inline void dev_pm_opp_put_regulator(struct opp_table *opp_table) {}
+ 
+ static inline int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+ {
+diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
+index 504c98a278d4..e13bfdf7f314 100644
+--- a/include/linux/ptrace.h
++++ b/include/linux/ptrace.h
+@@ -19,7 +19,6 @@
+ #define PT_SEIZED	0x00010000	/* SEIZE used, enable new behavior */
+ #define PT_PTRACED	0x00000001
+ #define PT_DTRACE	0x00000002	/* delayed trace (used on m68k, i386) */
+-#define PT_PTRACE_CAP	0x00000004	/* ptracer can follow suid-exec */
+ 
+ #define PT_OPT_FLAG_SHIFT	3
+ /* PT_TRACE_* event enable flags */
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 62c68e513e39..f52d4cc97788 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1631,6 +1631,7 @@ struct task_struct {
+ 	struct list_head cpu_timers[3];
+ 
+ /* process credentials */
++	const struct cred __rcu *ptracer_cred; /* Tracer's credentials at attach */
+ 	const struct cred __rcu *real_cred; /* objective and real subjective task
+ 					 * credentials (COW) */
+ 	const struct cred __rcu *cred;	/* effective (overridable) subjective task
+diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
+index 445b019c2078..de45666f96b4 100644
+--- a/include/net/netfilter/nf_conntrack.h
++++ b/include/net/netfilter/nf_conntrack.h
+@@ -17,7 +17,6 @@
+ #include <linux/bitops.h>
+ #include <linux/compiler.h>
+ #include <linux/atomic.h>
+-#include <linux/rhashtable.h>
+ 
+ #include <linux/netfilter/nf_conntrack_tcp.h>
+ #include <linux/netfilter/nf_conntrack_dccp.h>
+@@ -118,9 +117,6 @@ struct nf_conn {
+ 	/* Extensions */
+ 	struct nf_ct_ext *ext;
+ 
+-#if IS_ENABLED(CONFIG_NF_NAT)
+-	struct rhash_head	nat_bysource;
+-#endif
+ 	/* Storage reserved for other modules, must be the last member */
+ 	union nf_conntrack_proto proto;
+ };
+diff --git a/include/net/netfilter/nf_conntrack_extend.h b/include/net/netfilter/nf_conntrack_extend.h
+index 1c3035dda31f..b925395fa5ed 100644
+--- a/include/net/netfilter/nf_conntrack_extend.h
++++ b/include/net/netfilter/nf_conntrack_extend.h
+@@ -99,6 +99,9 @@ void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id,
+ struct nf_ct_ext_type {
+ 	/* Destroys relationships (can be NULL). */
+ 	void (*destroy)(struct nf_conn *ct);
++	/* Called when realloacted (can be NULL).
++	   Contents has already been moved. */
++	void (*move)(void *new, void *old);
+ 
+ 	enum nf_ct_ext_id id;
+ 
+diff --git a/include/net/netfilter/nf_nat.h b/include/net/netfilter/nf_nat.h
+index c327a431a6f3..344b1ab19220 100644
+--- a/include/net/netfilter/nf_nat.h
++++ b/include/net/netfilter/nf_nat.h
+@@ -1,6 +1,5 @@
+ #ifndef _NF_NAT_H
+ #define _NF_NAT_H
+-#include <linux/rhashtable.h>
+ #include <linux/netfilter_ipv4.h>
+ #include <linux/netfilter/nf_nat.h>
+ #include <net/netfilter/nf_conntrack_tuple.h>
+@@ -30,6 +29,8 @@ struct nf_conn;
+ 
+ /* The structure embedded in the conntrack structure. */
+ struct nf_conn_nat {
++	struct hlist_node bysource;
++	struct nf_conn *ct;
+ 	union nf_conntrack_nat_help help;
+ #if IS_ENABLED(CONFIG_NF_NAT_MASQUERADE_IPV4) || \
+     IS_ENABLED(CONFIG_NF_NAT_MASQUERADE_IPV6)
+diff --git a/kernel/capability.c b/kernel/capability.c
+index 00411c82dac5..4984e1f552eb 100644
+--- a/kernel/capability.c
++++ b/kernel/capability.c
+@@ -457,6 +457,19 @@ bool file_ns_capable(const struct file *file, struct user_namespace *ns,
+ EXPORT_SYMBOL(file_ns_capable);
+ 
+ /**
++ * privileged_wrt_inode_uidgid - Do capabilities in the namespace work over the inode?
++ * @ns: The user namespace in question
++ * @inode: The inode in question
++ *
++ * Return true if the inode uid and gid are within the namespace.
++ */
++bool privileged_wrt_inode_uidgid(struct user_namespace *ns, const struct inode *inode)
++{
++	return kuid_has_mapping(ns, inode->i_uid) &&
++		kgid_has_mapping(ns, inode->i_gid);
++}
++
++/**
+  * capable_wrt_inode_uidgid - Check nsown_capable and uid and gid mapped
+  * @inode: The inode in question
+  * @cap: The capability in question
+@@ -469,7 +482,26 @@ bool capable_wrt_inode_uidgid(const struct inode *inode, int cap)
+ {
+ 	struct user_namespace *ns = current_user_ns();
+ 
+-	return ns_capable(ns, cap) && kuid_has_mapping(ns, inode->i_uid) &&
+-		kgid_has_mapping(ns, inode->i_gid);
++	return ns_capable(ns, cap) && privileged_wrt_inode_uidgid(ns, inode);
+ }
+ EXPORT_SYMBOL(capable_wrt_inode_uidgid);
++
++/**
++ * ptracer_capable - Determine if the ptracer holds CAP_SYS_PTRACE in the namespace
++ * @tsk: The task that may be ptraced
++ * @ns: The user namespace to search for CAP_SYS_PTRACE in
++ *
++ * Return true if the task that is ptracing the current task had CAP_SYS_PTRACE
++ * in the specified user namespace.
++ */
++bool ptracer_capable(struct task_struct *tsk, struct user_namespace *ns)
++{
++	int ret = 0;  /* An absent tracer adds no restrictions */
++	const struct cred *cred;
++	rcu_read_lock();
++	cred = rcu_dereference(tsk->ptracer_cred);
++	if (cred)
++		ret = security_capable_noaudit(cred, ns, CAP_SYS_PTRACE);
++	rcu_read_unlock();
++	return (ret == 0);
++}
+diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
+index 0874e2edd275..79517e5549f1 100644
+--- a/kernel/debug/debug_core.c
++++ b/kernel/debug/debug_core.c
+@@ -598,11 +598,11 @@ return_normal:
+ 	/*
+ 	 * Wait for the other CPUs to be notified and be waiting for us:
+ 	 */
+-	time_left = loops_per_jiffy * HZ;
++	time_left = MSEC_PER_SEC;
+ 	while (kgdb_do_roundup && --time_left &&
+ 	       (atomic_read(&masters_in_kgdb) + atomic_read(&slaves_in_kgdb)) !=
+ 		   online_cpus)
+-		cpu_relax();
++		udelay(1000);
+ 	if (!time_left)
+ 		pr_crit("Timed out waiting for secondary CPUs.\n");
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index beb31725f7e2..9f8dae785f07 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -598,7 +598,8 @@ static void mm_init_owner(struct mm_struct *mm, struct task_struct *p)
+ #endif
+ }
+ 
+-static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p)
++static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
++	struct user_namespace *user_ns)
+ {
+ 	mm->mmap = NULL;
+ 	mm->mm_rb = RB_ROOT;
+@@ -638,6 +639,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p)
+ 	if (init_new_context(p, mm))
+ 		goto fail_nocontext;
+ 
++	mm->user_ns = get_user_ns(user_ns);
+ 	return mm;
+ 
+ fail_nocontext:
+@@ -683,7 +685,7 @@ struct mm_struct *mm_alloc(void)
+ 		return NULL;
+ 
+ 	memset(mm, 0, sizeof(*mm));
+-	return mm_init(mm, current);
++	return mm_init(mm, current, current_user_ns());
+ }
+ 
+ /*
+@@ -698,6 +700,7 @@ void __mmdrop(struct mm_struct *mm)
+ 	destroy_context(mm);
+ 	mmu_notifier_mm_destroy(mm);
+ 	check_mm(mm);
++	put_user_ns(mm->user_ns);
+ 	free_mm(mm);
+ }
+ EXPORT_SYMBOL_GPL(__mmdrop);
+@@ -977,7 +980,7 @@ static struct mm_struct *dup_mm(struct task_struct *tsk)
+ 
+ 	memcpy(mm, oldmm, sizeof(*mm));
+ 
+-	if (!mm_init(mm, tsk))
++	if (!mm_init(mm, tsk, mm->user_ns))
+ 		goto fail_nomem;
+ 
+ 	err = dup_mmap(mm, oldmm);
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index 1d3b7665d0be..7b20baea41e7 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -39,6 +39,9 @@ void __ptrace_link(struct task_struct *child, struct task_struct *new_parent)
+ 	BUG_ON(!list_empty(&child->ptrace_entry));
+ 	list_add(&child->ptrace_entry, &new_parent->ptraced);
+ 	child->parent = new_parent;
++	rcu_read_lock();
++	child->ptracer_cred = get_cred(__task_cred(new_parent));
++	rcu_read_unlock();
+ }
+ 
+ /**
+@@ -71,10 +74,14 @@ void __ptrace_link(struct task_struct *child, struct task_struct *new_parent)
+  */
+ void __ptrace_unlink(struct task_struct *child)
+ {
++	const struct cred *old_cred;
+ 	BUG_ON(!child->ptrace);
+ 
+ 	child->parent = child->real_parent;
+ 	list_del_init(&child->ptrace_entry);
++	old_cred = child->ptracer_cred;
++	child->ptracer_cred = NULL;
++	put_cred(old_cred);
+ 
+ 	spin_lock(&child->sighand->siglock);
+ 	child->ptrace = 0;
+@@ -218,7 +225,7 @@ static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode)
+ static int __ptrace_may_access(struct task_struct *task, unsigned int mode)
+ {
+ 	const struct cred *cred = current_cred(), *tcred;
+-	int dumpable = 0;
++	struct mm_struct *mm;
+ 	kuid_t caller_uid;
+ 	kgid_t caller_gid;
+ 
+@@ -269,16 +276,11 @@ static int __ptrace_may_access(struct task_struct *task, unsigned int mode)
+ 	return -EPERM;
+ ok:
+ 	rcu_read_unlock();
+-	smp_rmb();
+-	if (task->mm)
+-		dumpable = get_dumpable(task->mm);
+-	rcu_read_lock();
+-	if (dumpable != SUID_DUMP_USER &&
+-	    !ptrace_has_cap(__task_cred(task)->user_ns, mode)) {
+-		rcu_read_unlock();
+-		return -EPERM;
+-	}
+-	rcu_read_unlock();
++	mm = task->mm;
++	if (mm &&
++	    ((get_dumpable(mm) != SUID_DUMP_USER) &&
++	     !ptrace_has_cap(mm->user_ns, mode)))
++	    return -EPERM;
+ 
+ 	return security_ptrace_access_check(task, mode);
+ }
+@@ -342,10 +344,6 @@ static int ptrace_attach(struct task_struct *task, long request,
+ 
+ 	if (seize)
+ 		flags |= PT_SEIZED;
+-	rcu_read_lock();
+-	if (ns_capable(__task_cred(task)->user_ns, CAP_SYS_PTRACE))
+-		flags |= PT_PTRACE_CAP;
+-	rcu_read_unlock();
+ 	task->ptrace = flags;
+ 
+ 	__ptrace_link(task, current);
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 9acb29f280ec..6d1020c03d41 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -344,7 +344,6 @@ static void watchdog_overflow_callback(struct perf_event *event,
+ 	 */
+ 	if (is_hardlockup()) {
+ 		int this_cpu = smp_processor_id();
+-		struct pt_regs *regs = get_irq_regs();
+ 
+ 		/* only print hardlockups once */
+ 		if (__this_cpu_read(hard_watchdog_warn) == true)
+diff --git a/mm/filemap.c b/mm/filemap.c
+index ced9ef6c06b0..f1da48d1f95f 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -1688,7 +1688,7 @@ static ssize_t do_generic_file_read(struct file *filp, loff_t *ppos,
+ 	int error = 0;
+ 
+ 	if (unlikely(*ppos >= inode->i_sb->s_maxbytes))
+-		return -EINVAL;
++		return 0;
+ 	iov_iter_truncate(iter, inode->i_sb->s_maxbytes);
+ 
+ 	index = *ppos >> PAGE_SHIFT;
+diff --git a/mm/init-mm.c b/mm/init-mm.c
+index a56a851908d2..975e49f00f34 100644
+--- a/mm/init-mm.c
++++ b/mm/init-mm.c
+@@ -6,6 +6,7 @@
+ #include <linux/cpumask.h>
+ 
+ #include <linux/atomic.h>
++#include <linux/user_namespace.h>
+ #include <asm/pgtable.h>
+ #include <asm/mmu.h>
+ 
+@@ -21,5 +22,6 @@ struct mm_struct init_mm = {
+ 	.mmap_sem	= __RWSEM_INITIALIZER(init_mm.mmap_sem),
+ 	.page_table_lock =  __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
+ 	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
++	.user_ns	= &init_user_ns,
+ 	INIT_MM_CONTEXT(init_mm)
+ };
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 7401e996009a..212a017a2f02 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -2173,7 +2173,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
+ 			unsigned long count, struct list_head *list,
+ 			int migratetype, bool cold)
+ {
+-	int i;
++	int i, alloced = 0;
+ 
+ 	spin_lock(&zone->lock);
+ 	for (i = 0; i < count; ++i) {
+@@ -2198,13 +2198,21 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
+ 		else
+ 			list_add_tail(&page->lru, list);
+ 		list = &page->lru;
++		alloced++;
+ 		if (is_migrate_cma(get_pcppage_migratetype(page)))
+ 			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
+ 					      -(1 << order));
+ 	}
++
++	/*
++	 * i pages were removed from the buddy list even if some leak due
++	 * to check_pcp_refill failing so adjust NR_FREE_PAGES based
++	 * on i. Do not confuse with 'alloced' which is the number of
++	 * pages added to the pcp list.
++	 */
+ 	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
+ 	spin_unlock(&zone->lock);
+-	return i;
++	return alloced;
+ }
+ 
+ #ifdef CONFIG_NUMA
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index ba0fad78e5d4..5c313a0d1626 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -291,6 +291,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 	int nid = shrinkctl->nid;
+ 	long batch_size = shrinker->batch ? shrinker->batch
+ 					  : SHRINK_BATCH;
++	long scanned = 0, next_deferred;
+ 
+ 	freeable = shrinker->count_objects(shrinker, shrinkctl);
+ 	if (freeable == 0)
+@@ -312,7 +313,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
+ 		       shrinker->scan_objects, total_scan);
+ 		total_scan = freeable;
+-	}
++		next_deferred = nr;
++	} else
++		next_deferred = total_scan;
+ 
+ 	/*
+ 	 * We need to avoid excessive windup on filesystem shrinkers
+@@ -369,17 +372,22 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 
+ 		count_vm_events(SLABS_SCANNED, nr_to_scan);
+ 		total_scan -= nr_to_scan;
++		scanned += nr_to_scan;
+ 
+ 		cond_resched();
+ 	}
+ 
++	if (next_deferred >= scanned)
++		next_deferred -= scanned;
++	else
++		next_deferred = 0;
+ 	/*
+ 	 * move the unused scan count back into the shrinker in a
+ 	 * manner that handles concurrent updates. If we exhausted the
+ 	 * scan, there is no need to do an update.
+ 	 */
+-	if (total_scan > 0)
+-		new_nr = atomic_long_add_return(total_scan,
++	if (next_deferred > 0)
++		new_nr = atomic_long_add_return(next_deferred,
+ 						&shrinker->nr_deferred[nid]);
+ 	else
+ 		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
+diff --git a/net/netfilter/nf_conntrack_extend.c b/net/netfilter/nf_conntrack_extend.c
+index 02bcf00c2492..1a9545965c0d 100644
+--- a/net/netfilter/nf_conntrack_extend.c
++++ b/net/netfilter/nf_conntrack_extend.c
+@@ -73,7 +73,7 @@ void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id,
+ 			     size_t var_alloc_len, gfp_t gfp)
+ {
+ 	struct nf_ct_ext *old, *new;
+-	int newlen, newoff;
++	int i, newlen, newoff;
+ 	struct nf_ct_ext_type *t;
+ 
+ 	/* Conntrack must not be confirmed to avoid races on reallocation. */
+@@ -99,8 +99,19 @@ void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id,
+ 		return NULL;
+ 
+ 	if (new != old) {
++		for (i = 0; i < NF_CT_EXT_NUM; i++) {
++			if (!__nf_ct_ext_exist(old, i))
++				continue;
++
++			rcu_read_lock();
++			t = rcu_dereference(nf_ct_ext_types[i]);
++			if (t && t->move)
++				t->move((void *)new + new->offset[i],
++					(void *)old + old->offset[i]);
++			rcu_read_unlock();
++		}
+ 		kfree_rcu(old, rcu);
+-		rcu_assign_pointer(ct->ext, new);
++		ct->ext = new;
+ 	}
+ 
+ 	new->offset[id] = newoff;
+diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
+index ecee105bbada..4bf9b50cbb4c 100644
+--- a/net/netfilter/nf_nat_core.c
++++ b/net/netfilter/nf_nat_core.c
+@@ -30,19 +30,17 @@
+ #include <net/netfilter/nf_conntrack_zones.h>
+ #include <linux/netfilter/nf_nat.h>
+ 
++static DEFINE_SPINLOCK(nf_nat_lock);
++
+ static DEFINE_MUTEX(nf_nat_proto_mutex);
+ static const struct nf_nat_l3proto __rcu *nf_nat_l3protos[NFPROTO_NUMPROTO]
+ 						__read_mostly;
+ static const struct nf_nat_l4proto __rcu **nf_nat_l4protos[NFPROTO_NUMPROTO]
+ 						__read_mostly;
+ 
+-struct nf_nat_conn_key {
+-	const struct net *net;
+-	const struct nf_conntrack_tuple *tuple;
+-	const struct nf_conntrack_zone *zone;
+-};
+-
+-static struct rhashtable nf_nat_bysource_table;
++static struct hlist_head *nf_nat_bysource __read_mostly;
++static unsigned int nf_nat_htable_size __read_mostly;
++static unsigned int nf_nat_hash_rnd __read_mostly;
+ 
+ inline const struct nf_nat_l3proto *
+ __nf_nat_l3proto_find(u8 family)
+@@ -121,17 +119,19 @@ int nf_xfrm_me_harder(struct net *net, struct sk_buff *skb, unsigned int family)
+ EXPORT_SYMBOL(nf_xfrm_me_harder);
+ #endif /* CONFIG_XFRM */
+ 
+-static u32 nf_nat_bysource_hash(const void *data, u32 len, u32 seed)
++/* We keep an extra hash for each conntrack, for fast searching. */
++static inline unsigned int
++hash_by_src(const struct net *n, const struct nf_conntrack_tuple *tuple)
+ {
+-	const struct nf_conntrack_tuple *t;
+-	const struct nf_conn *ct = data;
++	unsigned int hash;
++
++	get_random_once(&nf_nat_hash_rnd, sizeof(nf_nat_hash_rnd));
+ 
+-	t = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
+ 	/* Original src, to ensure we map it consistently if poss. */
++	hash = jhash2((u32 *)&tuple->src, sizeof(tuple->src) / sizeof(u32),
++		      tuple->dst.protonum ^ nf_nat_hash_rnd ^ net_hash_mix(n));
+ 
+-	seed ^= net_hash_mix(nf_ct_net(ct));
+-	return jhash2((const u32 *)&t->src, sizeof(t->src) / sizeof(u32),
+-		      t->dst.protonum ^ seed);
++	return reciprocal_scale(hash, nf_nat_htable_size);
+ }
+ 
+ /* Is this tuple already taken? (not by us) */
+@@ -187,26 +187,6 @@ same_src(const struct nf_conn *ct,
+ 		t->src.u.all == tuple->src.u.all);
+ }
+ 
+-static int nf_nat_bysource_cmp(struct rhashtable_compare_arg *arg,
+-			       const void *obj)
+-{
+-	const struct nf_nat_conn_key *key = arg->key;
+-	const struct nf_conn *ct = obj;
+-
+-	return same_src(ct, key->tuple) &&
+-	       net_eq(nf_ct_net(ct), key->net) &&
+-	       nf_ct_zone_equal(ct, key->zone, IP_CT_DIR_ORIGINAL);
+-}
+-
+-static struct rhashtable_params nf_nat_bysource_params = {
+-	.head_offset = offsetof(struct nf_conn, nat_bysource),
+-	.obj_hashfn = nf_nat_bysource_hash,
+-	.obj_cmpfn = nf_nat_bysource_cmp,
+-	.nelem_hint = 256,
+-	.min_size = 1024,
+-	.nulls_base = (1U << RHT_BASE_SHIFT),
+-};
+-
+ /* Only called for SRC manip */
+ static int
+ find_appropriate_src(struct net *net,
+@@ -217,23 +197,25 @@ find_appropriate_src(struct net *net,
+ 		     struct nf_conntrack_tuple *result,
+ 		     const struct nf_nat_range *range)
+ {
++	unsigned int h = hash_by_src(net, tuple);
++	const struct nf_conn_nat *nat;
+ 	const struct nf_conn *ct;
+-	struct nf_nat_conn_key key = {
+-		.net = net,
+-		.tuple = tuple,
+-		.zone = zone
+-	};
+ 
+-	ct = rhashtable_lookup_fast(&nf_nat_bysource_table, &key,
+-				    nf_nat_bysource_params);
+-	if (!ct)
+-		return 0;
+-
+-	nf_ct_invert_tuplepr(result,
+-			     &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
+-	result->dst = tuple->dst;
+-
+-	return in_range(l3proto, l4proto, result, range);
++	hlist_for_each_entry_rcu(nat, &nf_nat_bysource[h], bysource) {
++		ct = nat->ct;
++		if (same_src(ct, tuple) &&
++		    net_eq(net, nf_ct_net(ct)) &&
++		    nf_ct_zone_equal(ct, zone, IP_CT_DIR_ORIGINAL)) {
++			/* Copy source part from reply tuple. */
++			nf_ct_invert_tuplepr(result,
++				       &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
++			result->dst = tuple->dst;
++
++			if (in_range(l3proto, l4proto, result, range))
++				return 1;
++		}
++	}
++	return 0;
+ }
+ 
+ /* For [FUTURE] fragmentation handling, we want the least-used
+@@ -405,6 +387,7 @@ nf_nat_setup_info(struct nf_conn *ct,
+ 		  const struct nf_nat_range *range,
+ 		  enum nf_nat_manip_type maniptype)
+ {
++	struct net *net = nf_ct_net(ct);
+ 	struct nf_conntrack_tuple curr_tuple, new_tuple;
+ 	struct nf_conn_nat *nat;
+ 
+@@ -446,13 +429,17 @@ nf_nat_setup_info(struct nf_conn *ct,
+ 	}
+ 
+ 	if (maniptype == NF_NAT_MANIP_SRC) {
+-		int err;
+-
+-		err = rhashtable_insert_fast(&nf_nat_bysource_table,
+-					     &ct->nat_bysource,
+-					     nf_nat_bysource_params);
+-		if (err)
+-			return NF_DROP;
++		unsigned int srchash;
++
++		srchash = hash_by_src(net,
++				      &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
++		spin_lock_bh(&nf_nat_lock);
++		/* nf_conntrack_alter_reply might re-allocate extension area */
++		nat = nfct_nat(ct);
++		nat->ct = ct;
++		hlist_add_head_rcu(&nat->bysource,
++				   &nf_nat_bysource[srchash]);
++		spin_unlock_bh(&nf_nat_lock);
+ 	}
+ 
+ 	/* It's done. */
+@@ -557,7 +544,7 @@ static int nf_nat_proto_clean(struct nf_conn *ct, void *data)
+ 	if (nf_nat_proto_remove(ct, data))
+ 		return 1;
+ 
+-	if (!nat)
++	if (!nat || !nat->ct)
+ 		return 0;
+ 
+ 	/* This netns is being destroyed, and conntrack has nat null binding.
+@@ -569,10 +556,11 @@ static int nf_nat_proto_clean(struct nf_conn *ct, void *data)
+ 	if (!del_timer(&ct->timeout))
+ 		return 1;
+ 
++	spin_lock_bh(&nf_nat_lock);
++	hlist_del_rcu(&nat->bysource);
+ 	ct->status &= ~IPS_NAT_DONE_MASK;
+-
+-	rhashtable_remove_fast(&nf_nat_bysource_table, &ct->nat_bysource,
+-			       nf_nat_bysource_params);
++	nat->ct = NULL;
++	spin_unlock_bh(&nf_nat_lock);
+ 
+ 	add_timer(&ct->timeout);
+ 
+@@ -701,17 +689,35 @@ static void nf_nat_cleanup_conntrack(struct nf_conn *ct)
+ {
+ 	struct nf_conn_nat *nat = nf_ct_ext_find(ct, NF_CT_EXT_NAT);
+ 
+-	if (!nat)
++	if (nat == NULL || nat->ct == NULL)
+ 		return;
+ 
+-	rhashtable_remove_fast(&nf_nat_bysource_table, &ct->nat_bysource,
+-			       nf_nat_bysource_params);
++	NF_CT_ASSERT(nat->ct->status & IPS_SRC_NAT_DONE);
++
++	spin_lock_bh(&nf_nat_lock);
++	hlist_del_rcu(&nat->bysource);
++	spin_unlock_bh(&nf_nat_lock);
++}
++
++static void nf_nat_move_storage(void *new, void *old)
++{
++	struct nf_conn_nat *new_nat = new;
++	struct nf_conn_nat *old_nat = old;
++	struct nf_conn *ct = old_nat->ct;
++
++	if (!ct || !(ct->status & IPS_SRC_NAT_DONE))
++		return;
++
++	spin_lock_bh(&nf_nat_lock);
++	hlist_replace_rcu(&old_nat->bysource, &new_nat->bysource);
++	spin_unlock_bh(&nf_nat_lock);
+ }
+ 
+ static struct nf_ct_ext_type nat_extend __read_mostly = {
+ 	.len		= sizeof(struct nf_conn_nat),
+ 	.align		= __alignof__(struct nf_conn_nat),
+ 	.destroy	= nf_nat_cleanup_conntrack,
++	.move		= nf_nat_move_storage,
+ 	.id		= NF_CT_EXT_NAT,
+ 	.flags		= NF_CT_EXT_F_PREALLOC,
+ };
+@@ -840,13 +846,16 @@ static int __init nf_nat_init(void)
+ {
+ 	int ret;
+ 
+-	ret = rhashtable_init(&nf_nat_bysource_table, &nf_nat_bysource_params);
+-	if (ret)
+-		return ret;
++	/* Leave them the same for the moment. */
++	nf_nat_htable_size = nf_conntrack_htable_size;
++
++	nf_nat_bysource = nf_ct_alloc_hashtable(&nf_nat_htable_size, 0);
++	if (!nf_nat_bysource)
++		return -ENOMEM;
+ 
+ 	ret = nf_ct_extend_register(&nat_extend);
+ 	if (ret < 0) {
+-		rhashtable_destroy(&nf_nat_bysource_table);
++		nf_ct_free_hashtable(nf_nat_bysource, nf_nat_htable_size);
+ 		printk(KERN_ERR "nf_nat_core: Unable to register extension\n");
+ 		return ret;
+ 	}
+@@ -870,7 +879,7 @@ static int __init nf_nat_init(void)
+ 	return 0;
+ 
+  cleanup_extend:
+-	rhashtable_destroy(&nf_nat_bysource_table);
++	nf_ct_free_hashtable(nf_nat_bysource, nf_nat_htable_size);
+ 	nf_ct_extend_unregister(&nat_extend);
+ 	return ret;
+ }
+@@ -888,8 +897,8 @@ static void __exit nf_nat_cleanup(void)
+ #endif
+ 	for (i = 0; i < NFPROTO_NUMPROTO; i++)
+ 		kfree(nf_nat_l4protos[i]);
+-
+-	rhashtable_destroy(&nf_nat_bysource_table);
++	synchronize_net();
++	nf_ct_free_hashtable(nf_nat_bysource, nf_nat_htable_size);
+ }
+ 
+ MODULE_LICENSE("GPL");
+diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
+index 7f57a145a47e..a03cf68d0bcd 100644
+--- a/sound/pci/hda/hda_auto_parser.c
++++ b/sound/pci/hda/hda_auto_parser.c
+@@ -884,6 +884,8 @@ void snd_hda_apply_fixup(struct hda_codec *codec, int action)
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_apply_fixup);
+ 
++#define IGNORE_SEQ_ASSOC (~(AC_DEFCFG_SEQUENCE | AC_DEFCFG_DEF_ASSOC))
++
+ static bool pin_config_match(struct hda_codec *codec,
+ 			     const struct hda_pintbl *pins)
+ {
+@@ -901,7 +903,7 @@ static bool pin_config_match(struct hda_codec *codec,
+ 		for (; t_pins->nid; t_pins++) {
+ 			if (t_pins->nid == nid) {
+ 				found = 1;
+-				if (t_pins->val == cfg)
++				if ((t_pins->val & IGNORE_SEQ_ASSOC) == (cfg & IGNORE_SEQ_ASSOC))
+ 					break;
+ 				else if ((cfg & 0xf0000000) == 0x40000000 && (t_pins->val & 0xf0000000) == 0x40000000)
+ 					break;
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 9ceb2bc36e68..c146d0de53d8 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -780,6 +780,7 @@ static const struct hda_pintbl alienware_pincfgs[] = {
+ static const struct snd_pci_quirk ca0132_quirks[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0685, "Alienware 15 2015", QUIRK_ALIENWARE),
+ 	SND_PCI_QUIRK(0x1028, 0x0688, "Alienware 17 2015", QUIRK_ALIENWARE),
++	SND_PCI_QUIRK(0x1028, 0x0708, "Alienware 15 R2 2016", QUIRK_ALIENWARE),
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index ed62748a6d55..c15c51bea26d 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -262,6 +262,7 @@ enum {
+ 	CXT_FIXUP_CAP_MIX_AMP_5047,
+ 	CXT_FIXUP_MUTE_LED_EAPD,
+ 	CXT_FIXUP_HP_SPECTRE,
++	CXT_FIXUP_HP_GATE_MIC,
+ };
+ 
+ /* for hda_fixup_thinkpad_acpi() */
+@@ -633,6 +634,17 @@ static void cxt_fixup_cap_mix_amp_5047(struct hda_codec *codec,
+ 				  (1 << AC_AMPCAP_MUTE_SHIFT));
+ }
+ 
++static void cxt_fixup_hp_gate_mic_jack(struct hda_codec *codec,
++				       const struct hda_fixup *fix,
++				       int action)
++{
++	/* the mic pin (0x19) doesn't give an unsolicited event;
++	 * probe the mic pin together with the headphone pin (0x16)
++	 */
++	if (action == HDA_FIXUP_ACT_PROBE)
++		snd_hda_jack_set_gating_jack(codec, 0x19, 0x16);
++}
++
+ /* ThinkPad X200 & co with cxt5051 */
+ static const struct hda_pintbl cxt_pincfg_lenovo_x200[] = {
+ 	{ 0x16, 0x042140ff }, /* HP (seq# overridden) */
+@@ -774,6 +786,10 @@ static const struct hda_fixup cxt_fixups[] = {
+ 			{ }
+ 		}
+ 	},
++	[CXT_FIXUP_HP_GATE_MIC] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cxt_fixup_hp_gate_mic_jack,
++	},
+ };
+ 
+ static const struct snd_pci_quirk cxt5045_fixups[] = {
+@@ -824,6 +840,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x054c, "Acer Aspire 3830TG", CXT_FIXUP_ASPIRE_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x054f, "Acer Aspire 4830T", CXT_FIXUP_ASPIRE_DMIC),
+ 	SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
++	SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
+ 	SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 162818058a5d..512fd83407f2 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5915,6 +5915,9 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x12, 0x90a60180},
+ 		{0x14, 0x90170120},
+ 		{0x21, 0x02211030}),
++	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		{0x1b, 0x01011020},
++		{0x21, 0x02211010}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		{0x12, 0x90a60160},
+ 		{0x14, 0x90170120},
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index 52ed434cbca6..f1223045c400 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -771,6 +771,9 @@ static int sst_soc_prepare(struct device *dev)
+ 	struct sst_data *drv = dev_get_drvdata(dev);
+ 	struct snd_soc_pcm_runtime *rtd;
+ 
++	if (!drv->soc_card)
++		return 0;
++
+ 	/* suspend all pcms first */
+ 	snd_soc_suspend(drv->soc_card->dev);
+ 	snd_soc_poweroff(drv->soc_card->dev);
+@@ -793,6 +796,9 @@ static void sst_soc_complete(struct device *dev)
+ 	struct sst_data *drv = dev_get_drvdata(dev);
+ 	struct snd_soc_pcm_runtime *rtd;
+ 
++	if (!drv->soc_card)
++		return;
++
+ 	/* restart SSPs */
+ 	list_for_each_entry(rtd, &drv->soc_card->rtd_list, list) {
+ 		struct snd_soc_dai *dai = rtd->cpu_dai;
+diff --git a/sound/usb/hiface/pcm.c b/sound/usb/hiface/pcm.c
+index 2c44139b4041..33db205dd12b 100644
+--- a/sound/usb/hiface/pcm.c
++++ b/sound/usb/hiface/pcm.c
+@@ -445,6 +445,8 @@ static int hiface_pcm_prepare(struct snd_pcm_substream *alsa_sub)
+ 
+ 	mutex_lock(&rt->stream_mutex);
+ 
++	hiface_pcm_stream_stop(rt);
++
+ 	sub->dma_off = 0;
+ 	sub->period_off = 0;
+ 
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 2f8c388ef84f..4703caea56b2 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -932,9 +932,10 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ 	case USB_ID(0x046d, 0x0826): /* HD Webcam c525 */
+ 	case USB_ID(0x046d, 0x08ca): /* Logitech Quickcam Fusion */
+ 	case USB_ID(0x046d, 0x0991):
++	case USB_ID(0x046d, 0x09a2): /* QuickCam Communicate Deluxe/S7500 */
+ 	/* Most audio usb devices lie about volume resolution.
+ 	 * Most Logitech webcams have res = 384.
+-	 * Proboly there is some logitech magic behind this number --fishor
++	 * Probably there is some logitech magic behind this number --fishor
+ 	 */
+ 		if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+ 			usb_audio_info(chip,


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2017-01-06 23:43 Mike Pagano
  0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2017-01-06 23:43 UTC (permalink / raw
  To: gentoo-commits

commit:     5a0bfade39e12ebb3a33a6868f6ebfe07297d6c6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan  6 23:11:42 2017 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan  6 23:42:50 2017 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5a0bfade

Linux patch 4.8.16

 0000_README             |    4 +
 1015_linux-4.8.16.patch | 3559 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3563 insertions(+)

diff --git a/0000_README b/0000_README
index 37d0ff1..e7fac7c 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-4.8.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.15
 
+Patch:  1015_linux-4.8.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-4.8.16.patch b/1015_linux-4.8.16.patch
new file mode 100644
index 0000000..9977d7a
--- /dev/null
+++ b/1015_linux-4.8.16.patch
@@ -0,0 +1,3559 @@
+diff --git a/Makefile b/Makefile
+index c7f0e798ca34..50f68648a79a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
+index f193414d0f6f..4986dc0c1dff 100644
+--- a/arch/arm/xen/enlighten.c
++++ b/arch/arm/xen/enlighten.c
+@@ -372,8 +372,7 @@ static int __init xen_guest_init(void)
+ 	 * for secondary CPUs as they are brought up.
+ 	 * For uniformity we use VCPUOP_register_vcpu_info even on cpu0.
+ 	 */
+-	xen_vcpu_info = __alloc_percpu(sizeof(struct vcpu_info),
+-			                       sizeof(struct vcpu_info));
++	xen_vcpu_info = alloc_percpu(struct vcpu_info);
+ 	if (xen_vcpu_info == NULL)
+ 		return -ENOMEM;
+ 
+diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
+index 5420cb0fcb3e..e517088d635f 100644
+--- a/arch/arm64/include/asm/acpi.h
++++ b/arch/arm64/include/asm/acpi.h
+@@ -12,7 +12,7 @@
+ #ifndef _ASM_ACPI_H
+ #define _ASM_ACPI_H
+ 
+-#include <linux/mm.h>
++#include <linux/memblock.h>
+ #include <linux/psci.h>
+ 
+ #include <asm/cputype.h>
+@@ -32,7 +32,11 @@
+ static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
+ 					    acpi_size size)
+ {
+-	if (!page_is_ram(phys >> PAGE_SHIFT))
++	/*
++	 * EFI's reserve_regions() call adds memory with the WB attribute
++	 * to memblock via early_init_dt_add_memory_arch().
++	 */
++	if (!memblock_is_memory(phys))
+ 		return ioremap(phys, size);
+ 
+ 	return ioremap_cache(phys, size);
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index 536dce22fe76..514b4e3ba029 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -206,10 +206,15 @@ static void __init request_standard_resources(void)
+ 
+ 	for_each_memblock(memory, region) {
+ 		res = alloc_bootmem_low(sizeof(*res));
+-		res->name  = "System RAM";
++		if (memblock_is_nomap(region)) {
++			res->name  = "reserved";
++			res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
++		} else {
++			res->name  = "System RAM";
++			res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
++		}
+ 		res->start = __pfn_to_phys(memblock_region_memory_base_pfn(region));
+ 		res->end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;
+-		res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+ 
+ 		request_resource(&iomem_resource, res);
+ 
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index c207fa9870eb..494e0d800976 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1371,9 +1371,9 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
+ 		blk_mq_put_ctx(data.ctx);
+ 		if (!old_rq)
+ 			goto done;
+-		if (!blk_mq_direct_issue_request(old_rq, &cookie))
+-			goto done;
+-		blk_mq_insert_request(old_rq, false, true, true);
++		if (test_bit(BLK_MQ_S_STOPPED, &data.hctx->state) ||
++		    blk_mq_direct_issue_request(old_rq, &cookie) != 0)
++			blk_mq_insert_request(old_rq, false, true, true);
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 0a8bdade53f2..88df65d1e6f6 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -836,11 +836,29 @@ static struct kobject *get_device_parent(struct device *dev,
+ 	return NULL;
+ }
+ 
++static inline bool live_in_glue_dir(struct kobject *kobj,
++				    struct device *dev)
++{
++	if (!kobj || !dev->class ||
++	    kobj->kset != &dev->class->p->glue_dirs)
++		return false;
++	return true;
++}
++
++static inline struct kobject *get_glue_dir(struct device *dev)
++{
++	return dev->kobj.parent;
++}
++
++/*
++ * make sure cleaning up dir as the last step, we need to make
++ * sure .release handler of kobject is run with holding the
++ * global lock
++ */
+ static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
+ {
+ 	/* see if we live in a "glue" directory */
+-	if (!glue_dir || !dev->class ||
+-	    glue_dir->kset != &dev->class->p->glue_dirs)
++	if (!live_in_glue_dir(glue_dir, dev))
+ 		return;
+ 
+ 	mutex_lock(&gdp_mutex);
+@@ -848,11 +866,6 @@ static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
+ 	mutex_unlock(&gdp_mutex);
+ }
+ 
+-static void cleanup_device_parent(struct device *dev)
+-{
+-	cleanup_glue_dir(dev, dev->kobj.parent);
+-}
+-
+ static int device_add_class_symlinks(struct device *dev)
+ {
+ 	struct device_node *of_node = dev_of_node(dev);
+@@ -1028,6 +1041,7 @@ int device_add(struct device *dev)
+ 	struct kobject *kobj;
+ 	struct class_interface *class_intf;
+ 	int error = -EINVAL;
++	struct kobject *glue_dir = NULL;
+ 
+ 	dev = get_device(dev);
+ 	if (!dev)
+@@ -1072,8 +1086,10 @@ int device_add(struct device *dev)
+ 	/* first, register with generic layer. */
+ 	/* we require the name to be set before, and pass NULL */
+ 	error = kobject_add(&dev->kobj, dev->kobj.parent, NULL);
+-	if (error)
++	if (error) {
++		glue_dir = get_glue_dir(dev);
+ 		goto Error;
++	}
+ 
+ 	/* notify platform of device entry */
+ 	if (platform_notify)
+@@ -1154,9 +1170,10 @@ done:
+ 	device_remove_file(dev, &dev_attr_uevent);
+  attrError:
+ 	kobject_uevent(&dev->kobj, KOBJ_REMOVE);
++	glue_dir = get_glue_dir(dev);
+ 	kobject_del(&dev->kobj);
+  Error:
+-	cleanup_device_parent(dev);
++	cleanup_glue_dir(dev, glue_dir);
+ 	put_device(parent);
+ name_error:
+ 	kfree(dev->p);
+@@ -1232,6 +1249,7 @@ EXPORT_SYMBOL_GPL(put_device);
+ void device_del(struct device *dev)
+ {
+ 	struct device *parent = dev->parent;
++	struct kobject *glue_dir = NULL;
+ 	struct class_interface *class_intf;
+ 
+ 	/* Notify clients of device removal.  This call must come
+@@ -1276,8 +1294,9 @@ void device_del(struct device *dev)
+ 		blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ 					     BUS_NOTIFY_REMOVED_DEVICE, dev);
+ 	kobject_uevent(&dev->kobj, KOBJ_REMOVE);
+-	cleanup_device_parent(dev);
++	glue_dir = get_glue_dir(dev);
+ 	kobject_del(&dev->kobj);
++	cleanup_glue_dir(dev, glue_dir);
+ 	put_device(parent);
+ }
+ EXPORT_SYMBOL_GPL(device_del);
+diff --git a/drivers/base/power/opp/core.c b/drivers/base/power/opp/core.c
+index df0c70963d9e..94b33ce96be7 100644
+--- a/drivers/base/power/opp/core.c
++++ b/drivers/base/power/opp/core.c
+@@ -1320,7 +1320,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_put_prop_name);
+  * that this function is *NOT* called under RCU protection or in contexts where
+  * mutex cannot be locked.
+  */
+-int dev_pm_opp_set_regulator(struct device *dev, const char *name)
++struct opp_table *dev_pm_opp_set_regulator(struct device *dev, const char *name)
+ {
+ 	struct opp_table *opp_table;
+ 	struct regulator *reg;
+@@ -1358,20 +1358,20 @@ int dev_pm_opp_set_regulator(struct device *dev, const char *name)
+ 	opp_table->regulator = reg;
+ 
+ 	mutex_unlock(&opp_table_lock);
+-	return 0;
++	return opp_table;
+ 
+ err:
+ 	_remove_opp_table(opp_table);
+ unlock:
+ 	mutex_unlock(&opp_table_lock);
+ 
+-	return ret;
++	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_set_regulator);
+ 
+ /**
+  * dev_pm_opp_put_regulator() - Releases resources blocked for regulator
+- * @dev: Device for which regulator was set.
++ * @opp_table: OPP table returned from dev_pm_opp_set_regulator().
+  *
+  * Locking: The internal opp_table and opp structures are RCU protected.
+  * Hence this function internally uses RCU updater strategy with mutex locks
+@@ -1379,22 +1379,12 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_regulator);
+  * that this function is *NOT* called under RCU protection or in contexts where
+  * mutex cannot be locked.
+  */
+-void dev_pm_opp_put_regulator(struct device *dev)
++void dev_pm_opp_put_regulator(struct opp_table *opp_table)
+ {
+-	struct opp_table *opp_table;
+-
+ 	mutex_lock(&opp_table_lock);
+ 
+-	/* Check for existing table for 'dev' first */
+-	opp_table = _find_opp_table(dev);
+-	if (IS_ERR(opp_table)) {
+-		dev_err(dev, "Failed to find opp_table: %ld\n",
+-			PTR_ERR(opp_table));
+-		goto unlock;
+-	}
+-
+ 	if (IS_ERR(opp_table->regulator)) {
+-		dev_err(dev, "%s: Doesn't have regulator set\n", __func__);
++		pr_err("%s: Doesn't have regulator set\n", __func__);
+ 		goto unlock;
+ 	}
+ 
+diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
+index ab19adb07a12..3c606c09fd5a 100644
+--- a/drivers/block/aoe/aoecmd.c
++++ b/drivers/block/aoe/aoecmd.c
+@@ -853,45 +853,6 @@ rqbiocnt(struct request *r)
+ 	return n;
+ }
+ 
+-/* This can be removed if we are certain that no users of the block
+- * layer will ever use zero-count pages in bios.  Otherwise we have to
+- * protect against the put_page sometimes done by the network layer.
+- *
+- * See http://oss.sgi.com/archives/xfs/2007-01/msg00594.html for
+- * discussion.
+- *
+- * We cannot use get_page in the workaround, because it insists on a
+- * positive page count as a precondition.  So we use _refcount directly.
+- */
+-static void
+-bio_pageinc(struct bio *bio)
+-{
+-	struct bio_vec bv;
+-	struct page *page;
+-	struct bvec_iter iter;
+-
+-	bio_for_each_segment(bv, bio, iter) {
+-		/* Non-zero page count for non-head members of
+-		 * compound pages is no longer allowed by the kernel.
+-		 */
+-		page = compound_head(bv.bv_page);
+-		page_ref_inc(page);
+-	}
+-}
+-
+-static void
+-bio_pagedec(struct bio *bio)
+-{
+-	struct page *page;
+-	struct bio_vec bv;
+-	struct bvec_iter iter;
+-
+-	bio_for_each_segment(bv, bio, iter) {
+-		page = compound_head(bv.bv_page);
+-		page_ref_dec(page);
+-	}
+-}
+-
+ static void
+ bufinit(struct buf *buf, struct request *rq, struct bio *bio)
+ {
+@@ -899,7 +860,6 @@ bufinit(struct buf *buf, struct request *rq, struct bio *bio)
+ 	buf->rq = rq;
+ 	buf->bio = bio;
+ 	buf->iter = bio->bi_iter;
+-	bio_pageinc(bio);
+ }
+ 
+ static struct buf *
+@@ -1127,7 +1087,6 @@ aoe_end_buf(struct aoedev *d, struct buf *buf)
+ 	if (buf == d->ip.buf)
+ 		d->ip.buf = NULL;
+ 	rq = buf->rq;
+-	bio_pagedec(buf->bio);
+ 	mempool_free(buf, d->bufpool);
+ 	n = (unsigned long) rq->special;
+ 	rq->special = (void *) --n;
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index c9f2107f7095..b314a57e3c86 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1646,7 +1646,7 @@ static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	blk_mq_start_request(bd->rq);
+ 
+ 	if (lo->lo_state != Lo_bound)
+-		return -EIO;
++		return BLK_MQ_RQ_QUEUE_ERROR;
+ 
+ 	switch (req_op(cmd->rq)) {
+ 	case REQ_OP_FLUSH:
+diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
+index 62028f483bba..a2ab00831df1 100644
+--- a/drivers/char/tpm/xen-tpmfront.c
++++ b/drivers/char/tpm/xen-tpmfront.c
+@@ -307,7 +307,6 @@ static int tpmfront_probe(struct xenbus_device *dev,
+ 	rv = setup_ring(dev, priv);
+ 	if (rv) {
+ 		chip = dev_get_drvdata(&dev->dev);
+-		tpm_chip_unregister(chip);
+ 		ring_free(priv);
+ 		return rv;
+ 	}
+diff --git a/drivers/clk/ti/clk-3xxx.c b/drivers/clk/ti/clk-3xxx.c
+index 8831e1a05367..11d8aa3ec186 100644
+--- a/drivers/clk/ti/clk-3xxx.c
++++ b/drivers/clk/ti/clk-3xxx.c
+@@ -22,13 +22,6 @@
+ 
+ #include "clock.h"
+ 
+-/*
+- * DPLL5_FREQ_FOR_USBHOST: USBHOST and USBTLL are the only clocks
+- * that are sourced by DPLL5, and both of these require this clock
+- * to be at 120 MHz for proper operation.
+- */
+-#define DPLL5_FREQ_FOR_USBHOST		120000000
+-
+ #define OMAP3430ES2_ST_DSS_IDLE_SHIFT			1
+ #define OMAP3430ES2_ST_HSOTGUSB_IDLE_SHIFT		5
+ #define OMAP3430ES2_ST_SSI_IDLE_SHIFT			8
+@@ -546,14 +539,21 @@ void __init omap3_clk_lock_dpll5(void)
+ 	struct clk *dpll5_clk;
+ 	struct clk *dpll5_m2_clk;
+ 
++	/*
++	 * Errata sprz319f advisory 2.1 documents a USB host clock drift issue
++	 * that can be worked around using specially crafted dpll5 settings
++	 * with a dpll5_m2 divider set to 8. Set the dpll5 rate to 8x the USB
++	 * host clock rate, its .set_rate handler() will detect that frequency
++	 * and use the errata settings.
++	 */
+ 	dpll5_clk = clk_get(NULL, "dpll5_ck");
+-	clk_set_rate(dpll5_clk, DPLL5_FREQ_FOR_USBHOST);
++	clk_set_rate(dpll5_clk, OMAP3_DPLL5_FREQ_FOR_USBHOST * 8);
+ 	clk_prepare_enable(dpll5_clk);
+ 
+-	/* Program dpll5_m2_clk divider for no division */
++	/* Program dpll5_m2_clk divider */
+ 	dpll5_m2_clk = clk_get(NULL, "dpll5_m2_ck");
+ 	clk_prepare_enable(dpll5_m2_clk);
+-	clk_set_rate(dpll5_m2_clk, DPLL5_FREQ_FOR_USBHOST);
++	clk_set_rate(dpll5_m2_clk, OMAP3_DPLL5_FREQ_FOR_USBHOST);
+ 
+ 	clk_disable_unprepare(dpll5_m2_clk);
+ 	clk_disable_unprepare(dpll5_clk);
+diff --git a/drivers/clk/ti/clock.h b/drivers/clk/ti/clock.h
+index 90f3f472ae1c..13c37f48d9d6 100644
+--- a/drivers/clk/ti/clock.h
++++ b/drivers/clk/ti/clock.h
+@@ -257,11 +257,20 @@ long omap2_dpll_round_rate(struct clk_hw *hw, unsigned long target_rate,
+ unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw,
+ 				    unsigned long parent_rate);
+ 
++/*
++ * OMAP3_DPLL5_FREQ_FOR_USBHOST: USBHOST and USBTLL are the only clocks
++ * that are sourced by DPLL5, and both of these require this clock
++ * to be at 120 MHz for proper operation.
++ */
++#define OMAP3_DPLL5_FREQ_FOR_USBHOST	120000000
++
+ unsigned long omap3_dpll_recalc(struct clk_hw *hw, unsigned long parent_rate);
+ int omap3_dpll4_set_rate(struct clk_hw *clk, unsigned long rate,
+ 			 unsigned long parent_rate);
+ int omap3_dpll4_set_rate_and_parent(struct clk_hw *hw, unsigned long rate,
+ 				    unsigned long parent_rate, u8 index);
++int omap3_dpll5_set_rate(struct clk_hw *hw, unsigned long rate,
++			 unsigned long parent_rate);
+ void omap3_clk_lock_dpll5(void);
+ 
+ unsigned long omap4_dpll_regm4xen_recalc(struct clk_hw *hw,
+diff --git a/drivers/clk/ti/dpll.c b/drivers/clk/ti/dpll.c
+index 9fc8754a6e61..4b9a419d8e14 100644
+--- a/drivers/clk/ti/dpll.c
++++ b/drivers/clk/ti/dpll.c
+@@ -114,6 +114,18 @@ static const struct clk_ops omap3_dpll_ck_ops = {
+ 	.round_rate	= &omap2_dpll_round_rate,
+ };
+ 
++static const struct clk_ops omap3_dpll5_ck_ops = {
++	.enable		= &omap3_noncore_dpll_enable,
++	.disable	= &omap3_noncore_dpll_disable,
++	.get_parent	= &omap2_init_dpll_parent,
++	.recalc_rate	= &omap3_dpll_recalc,
++	.set_rate	= &omap3_dpll5_set_rate,
++	.set_parent	= &omap3_noncore_dpll_set_parent,
++	.set_rate_and_parent	= &omap3_noncore_dpll_set_rate_and_parent,
++	.determine_rate	= &omap3_noncore_dpll_determine_rate,
++	.round_rate	= &omap2_dpll_round_rate,
++};
++
+ static const struct clk_ops omap3_dpll_per_ck_ops = {
+ 	.enable		= &omap3_noncore_dpll_enable,
+ 	.disable	= &omap3_noncore_dpll_disable,
+@@ -474,7 +486,12 @@ static void __init of_ti_omap3_dpll_setup(struct device_node *node)
+ 		.modes = (1 << DPLL_LOW_POWER_BYPASS) | (1 << DPLL_LOCKED),
+ 	};
+ 
+-	of_ti_dpll_setup(node, &omap3_dpll_ck_ops, &dd);
++	if ((of_machine_is_compatible("ti,omap3630") ||
++	     of_machine_is_compatible("ti,omap36xx")) &&
++	    !strcmp(node->name, "dpll5_ck"))
++		of_ti_dpll_setup(node, &omap3_dpll5_ck_ops, &dd);
++	else
++		of_ti_dpll_setup(node, &omap3_dpll_ck_ops, &dd);
+ }
+ CLK_OF_DECLARE(ti_omap3_dpll_clock, "ti,omap3-dpll-clock",
+ 	       of_ti_omap3_dpll_setup);
+diff --git a/drivers/clk/ti/dpll3xxx.c b/drivers/clk/ti/dpll3xxx.c
+index 88f2ce81ba55..4cdd28a25584 100644
+--- a/drivers/clk/ti/dpll3xxx.c
++++ b/drivers/clk/ti/dpll3xxx.c
+@@ -838,3 +838,70 @@ int omap3_dpll4_set_rate_and_parent(struct clk_hw *hw, unsigned long rate,
+ 	return omap3_noncore_dpll_set_rate_and_parent(hw, rate, parent_rate,
+ 						      index);
+ }
++
++/* Apply DM3730 errata sprz319 advisory 2.1. */
++static bool omap3_dpll5_apply_errata(struct clk_hw *hw,
++				     unsigned long parent_rate)
++{
++	struct omap3_dpll5_settings {
++		unsigned int rate, m, n;
++	};
++
++	static const struct omap3_dpll5_settings precomputed[] = {
++		/*
++		 * From DM3730 errata advisory 2.1, table 35 and 36.
++		 * The N value is increased by 1 compared to the tables as the
++		 * errata lists register values while last_rounded_field is the
++		 * real divider value.
++		 */
++		{ 12000000,  80,  0 + 1 },
++		{ 13000000, 443,  5 + 1 },
++		{ 19200000,  50,  0 + 1 },
++		{ 26000000, 443, 11 + 1 },
++		{ 38400000,  25,  0 + 1 }
++	};
++
++	const struct omap3_dpll5_settings *d;
++	struct clk_hw_omap *clk = to_clk_hw_omap(hw);
++	struct dpll_data *dd;
++	unsigned int i;
++
++	for (i = 0; i < ARRAY_SIZE(precomputed); ++i) {
++		if (parent_rate == precomputed[i].rate)
++			break;
++	}
++
++	if (i == ARRAY_SIZE(precomputed))
++		return false;
++
++	d = &precomputed[i];
++
++	/* Update the M, N and rounded rate values and program the DPLL. */
++	dd = clk->dpll_data;
++	dd->last_rounded_m = d->m;
++	dd->last_rounded_n = d->n;
++	dd->last_rounded_rate = div_u64((u64)parent_rate * d->m, d->n);
++	omap3_noncore_dpll_program(clk, 0);
++
++	return true;
++}
++
++/**
++ * omap3_dpll5_set_rate - set rate for omap3 dpll5
++ * @hw: clock to change
++ * @rate: target rate for clock
++ * @parent_rate: rate of the parent clock
++ *
++ * Set rate for the DPLL5 clock. Apply the sprz319 advisory 2.1 on OMAP36xx if
++ * the DPLL is used for USB host (detected through the requested rate).
++ */
++int omap3_dpll5_set_rate(struct clk_hw *hw, unsigned long rate,
++			 unsigned long parent_rate)
++{
++	if (rate == OMAP3_DPLL5_FREQ_FOR_USBHOST * 8) {
++		if (omap3_dpll5_apply_errata(hw, parent_rate))
++			return 0;
++	}
++
++	return omap3_noncore_dpll_set_rate(hw, rate, parent_rate);
++}
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index 3957de801ae8..204cd527ff34 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -26,6 +26,7 @@
+ #include <linux/thermal.h>
+ 
+ struct private_data {
++	struct opp_table *opp_table;
+ 	struct device *cpu_dev;
+ 	struct thermal_cooling_device *cdev;
+ 	const char *reg_name;
+@@ -141,6 +142,7 @@ static int resources_available(void)
+ static int cpufreq_init(struct cpufreq_policy *policy)
+ {
+ 	struct cpufreq_frequency_table *freq_table;
++	struct opp_table *opp_table = NULL;
+ 	struct private_data *priv;
+ 	struct device *cpu_dev;
+ 	struct clk *cpu_clk;
+@@ -184,8 +186,9 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 	 */
+ 	name = find_supply_name(cpu_dev);
+ 	if (name) {
+-		ret = dev_pm_opp_set_regulator(cpu_dev, name);
+-		if (ret) {
++		opp_table = dev_pm_opp_set_regulator(cpu_dev, name);
++		if (IS_ERR(opp_table)) {
++			ret = PTR_ERR(opp_table);
+ 			dev_err(cpu_dev, "Failed to set regulator for cpu%d: %d\n",
+ 				policy->cpu, ret);
+ 			goto out_put_clk;
+@@ -235,6 +238,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 	}
+ 
+ 	priv->reg_name = name;
++	priv->opp_table = opp_table;
+ 
+ 	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+ 	if (ret) {
+@@ -283,7 +287,7 @@ out_free_priv:
+ out_free_opp:
+ 	dev_pm_opp_of_cpumask_remove_table(policy->cpus);
+ 	if (name)
+-		dev_pm_opp_put_regulator(cpu_dev);
++		dev_pm_opp_put_regulator(opp_table);
+ out_put_clk:
+ 	clk_put(cpu_clk);
+ 
+@@ -298,7 +302,7 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
+ 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+ 	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
+ 	if (priv->reg_name)
+-		dev_pm_opp_put_regulator(priv->cpu_dev);
++		dev_pm_opp_put_regulator(priv->opp_table);
+ 
+ 	clk_put(policy->clk);
+ 	kfree(priv);
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index 2cde3796cb82..f3307fc38e79 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -702,7 +702,9 @@ copy_iv:
+ 
+ 	/* Will read cryptlen */
+ 	append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
+-	aead_append_src_dst(desc, FIFOLD_TYPE_MSG1OUT2);
++	append_seq_fifo_load(desc, 0, FIFOLD_CLASS_BOTH | KEY_VLF |
++			     FIFOLD_TYPE_MSG1OUT2 | FIFOLD_TYPE_LASTBOTH);
++	append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | KEY_VLF);
+ 
+ 	/* Write ICV */
+ 	append_seq_store(desc, ctx->authsize, LDST_CLASS_2_CCB |
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 6fc8923bd92a..bd56a3ecb29c 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -1503,12 +1503,15 @@ static int crypt_set_key(struct crypt_config *cc, char *key)
+ 	if (!cc->key_size && strcmp(key, "-"))
+ 		goto out;
+ 
++	/* clear the flag since following operations may invalidate previously valid key */
++	clear_bit(DM_CRYPT_KEY_VALID, &cc->flags);
++
+ 	if (cc->key_size && crypt_decode_key(cc->key, key, cc->key_size) < 0)
+ 		goto out;
+ 
+-	set_bit(DM_CRYPT_KEY_VALID, &cc->flags);
+-
+ 	r = crypt_setkey_allcpus(cc);
++	if (!r)
++		set_bit(DM_CRYPT_KEY_VALID, &cc->flags);
+ 
+ out:
+ 	/* Hex key string not needed after here, so wipe it. */
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index 6a2e8dd44a1b..3643cba71351 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -200,11 +200,13 @@ static int flakey_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 
+ 	if (!(fc->up_interval + fc->down_interval)) {
+ 		ti->error = "Total (up + down) interval is zero";
++		r = -EINVAL;
+ 		goto bad;
+ 	}
+ 
+ 	if (fc->up_interval + fc->down_interval < fc->up_interval) {
+ 		ti->error = "Interval overflow";
++		r = -EINVAL;
+ 		goto bad;
+ 	}
+ 
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 6d53810963f7..af2d79b52484 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -2994,6 +2994,9 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 		}
+ 	}
+ 
++	/* Disable/enable discard support on raid set. */
++	configure_discard_support(rs);
++
+ 	mddev_unlock(&rs->md);
+ 	return 0;
+ 
+@@ -3580,12 +3583,6 @@ static int raid_preresume(struct dm_target *ti)
+ 	if (test_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags))
+ 		rs_update_sbs(rs);
+ 
+-	/*
+-	 * Disable/enable discard support on raid set after any
+-	 * conversion, because devices can have been added
+-	 */
+-	configure_discard_support(rs);
+-
+ 	/* Load the bitmap from disk unless raid0 */
+ 	r = __load_dirty_region_bitmap(rs);
+ 	if (r)
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 2154596eedf3..b6af2860cbc6 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -218,6 +218,9 @@ static void rq_end_stats(struct mapped_device *md, struct request *orig)
+  */
+ static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
+ {
++	struct request_queue *q = md->queue;
++	unsigned long flags;
++
+ 	atomic_dec(&md->pending[rw]);
+ 
+ 	/* nudge anyone waiting on suspend queue */
+@@ -230,8 +233,11 @@ static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
+ 	 * back into ->request_fn() could deadlock attempting to grab the
+ 	 * queue lock again.
+ 	 */
+-	if (!md->queue->mq_ops && run_queue)
+-		blk_run_queue_async(md->queue);
++	if (!q->mq_ops && run_queue) {
++		spin_lock_irqsave(q->queue_lock, flags);
++		blk_run_queue_async(q);
++		spin_unlock_irqrestore(q->queue_lock, flags);
++	}
+ 
+ 	/*
+ 	 * dm_put() must be at the end of this function. See the comment above
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index c4b53b332607..5ac239d0f787 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -924,12 +924,6 @@ static int dm_table_determine_type(struct dm_table *t)
+ 
+ 	BUG_ON(!request_based); /* No targets in this table */
+ 
+-	if (list_empty(devices) && __table_type_request_based(live_md_type)) {
+-		/* inherit live MD type */
+-		t->type = live_md_type;
+-		return 0;
+-	}
+-
+ 	/*
+ 	 * The only way to establish DM_TYPE_MQ_REQUEST_BASED is by
+ 	 * having a compatible target use dm_table_set_type.
+@@ -948,6 +942,19 @@ verify_rq_based:
+ 		return -EINVAL;
+ 	}
+ 
++	if (list_empty(devices)) {
++		int srcu_idx;
++		struct dm_table *live_table = dm_get_live_table(t->md, &srcu_idx);
++
++		/* inherit live table's type and all_blk_mq */
++		if (live_table) {
++			t->type = live_table->type;
++			t->all_blk_mq = live_table->all_blk_mq;
++		}
++		dm_put_live_table(t->md, srcu_idx);
++		return 0;
++	}
++
+ 	/* Non-request-stackable devices can't be used for request-based dm */
+ 	list_for_each_entry(dd, devices, list) {
+ 		struct request_queue *q = bdev_get_queue(dd->dm_dev->bdev);
+@@ -974,6 +981,11 @@ verify_rq_based:
+ 		t->all_blk_mq = true;
+ 	}
+ 
++	if (t->type == DM_TYPE_MQ_REQUEST_BASED && !t->all_blk_mq) {
++		DMERR("table load rejected: all devices are not blk-mq request-stackable");
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/md/persistent-data/dm-space-map-metadata.c b/drivers/md/persistent-data/dm-space-map-metadata.c
+index 7e44005595c1..20557e2c60c6 100644
+--- a/drivers/md/persistent-data/dm-space-map-metadata.c
++++ b/drivers/md/persistent-data/dm-space-map-metadata.c
+@@ -775,17 +775,15 @@ int dm_sm_metadata_create(struct dm_space_map *sm,
+ 	memcpy(&smm->sm, &bootstrap_ops, sizeof(smm->sm));
+ 
+ 	r = sm_ll_new_metadata(&smm->ll, tm);
++	if (!r) {
++		if (nr_blocks > DM_SM_METADATA_MAX_BLOCKS)
++			nr_blocks = DM_SM_METADATA_MAX_BLOCKS;
++		r = sm_ll_extend(&smm->ll, nr_blocks);
++	}
++	memcpy(&smm->sm, &ops, sizeof(smm->sm));
+ 	if (r)
+ 		return r;
+ 
+-	if (nr_blocks > DM_SM_METADATA_MAX_BLOCKS)
+-		nr_blocks = DM_SM_METADATA_MAX_BLOCKS;
+-	r = sm_ll_extend(&smm->ll, nr_blocks);
+-	if (r)
+-		return r;
+-
+-	memcpy(&smm->sm, &ops, sizeof(smm->sm));
+-
+ 	/*
+ 	 * Now we need to update the newly created data structures with the
+ 	 * allocated blocks that they were built from.
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index af5e2dc4a3d5..011f88e5663e 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -271,7 +271,7 @@ static ssize_t nvmet_ns_device_path_store(struct config_item *item,
+ 
+ 	mutex_lock(&subsys->lock);
+ 	ret = -EBUSY;
+-	if (nvmet_ns_enabled(ns))
++	if (ns->enabled)
+ 		goto out_unlock;
+ 
+ 	kfree(ns->device_path);
+@@ -307,7 +307,7 @@ static ssize_t nvmet_ns_device_nguid_store(struct config_item *item,
+ 	int ret = 0;
+ 
+ 	mutex_lock(&subsys->lock);
+-	if (nvmet_ns_enabled(ns)) {
++	if (ns->enabled) {
+ 		ret = -EBUSY;
+ 		goto out_unlock;
+ 	}
+@@ -339,7 +339,7 @@ CONFIGFS_ATTR(nvmet_ns_, device_nguid);
+ 
+ static ssize_t nvmet_ns_enable_show(struct config_item *item, char *page)
+ {
+-	return sprintf(page, "%d\n", nvmet_ns_enabled(to_nvmet_ns(item)));
++	return sprintf(page, "%d\n", to_nvmet_ns(item)->enabled);
+ }
+ 
+ static ssize_t nvmet_ns_enable_store(struct config_item *item,
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 6559d5afa7bf..e9500d41b943 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -264,7 +264,7 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
+ 	int ret = 0;
+ 
+ 	mutex_lock(&subsys->lock);
+-	if (!list_empty(&ns->dev_link))
++	if (ns->enabled)
+ 		goto out_unlock;
+ 
+ 	ns->bdev = blkdev_get_by_path(ns->device_path, FMODE_READ | FMODE_WRITE,
+@@ -309,6 +309,7 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
+ 	list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
+ 		nvmet_add_async_event(ctrl, NVME_AER_TYPE_NOTICE, 0, 0);
+ 
++	ns->enabled = true;
+ 	ret = 0;
+ out_unlock:
+ 	mutex_unlock(&subsys->lock);
+@@ -325,11 +326,11 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
+ 	struct nvmet_ctrl *ctrl;
+ 
+ 	mutex_lock(&subsys->lock);
+-	if (list_empty(&ns->dev_link)) {
+-		mutex_unlock(&subsys->lock);
+-		return;
+-	}
+-	list_del_init(&ns->dev_link);
++	if (!ns->enabled)
++		goto out_unlock;
++
++	ns->enabled = false;
++	list_del_rcu(&ns->dev_link);
+ 	mutex_unlock(&subsys->lock);
+ 
+ 	/*
+@@ -351,6 +352,7 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
+ 
+ 	if (ns->bdev)
+ 		blkdev_put(ns->bdev, FMODE_WRITE|FMODE_READ);
++out_unlock:
+ 	mutex_unlock(&subsys->lock);
+ }
+ 
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index 76b6eedccaf9..7655a351320f 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -47,6 +47,7 @@ struct nvmet_ns {
+ 	loff_t			size;
+ 	u8			nguid[16];
+ 
++	bool			enabled;
+ 	struct nvmet_subsys	*subsys;
+ 	const char		*device_path;
+ 
+@@ -61,11 +62,6 @@ static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
+ 	return container_of(to_config_group(item), struct nvmet_ns, group);
+ }
+ 
+-static inline bool nvmet_ns_enabled(struct nvmet_ns *ns)
+-{
+-	return !list_empty_careful(&ns->dev_link);
+-}
+-
+ struct nvmet_cq {
+ 	u16			qid;
+ 	u16			size;
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 3ca9fdb0a271..3f9ac8be77e1 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1732,6 +1732,7 @@ static const struct usb_device_id acm_ids[] = {
+ 	{ USB_DEVICE(0x20df, 0x0001), /* Simtec Electronics Entropy Key */
+ 	.driver_info = QUIRK_CONTROL_LINE_STATE, },
+ 	{ USB_DEVICE(0x2184, 0x001c) },	/* GW Instek AFG-2225 */
++	{ USB_DEVICE(0x2184, 0x0036) },	/* GW Instek AFG-125 */
+ 	{ USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */
+ 	},
+ 	/* Motorola H24 HSPA module: */
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 1d5fc32d06d0..f3a7408d6417 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -101,6 +101,8 @@ EXPORT_SYMBOL_GPL(ehci_cf_port_reset_rwsem);
+ 
+ static void hub_release(struct kref *kref);
+ static int usb_reset_and_verify_device(struct usb_device *udev);
++static void hub_usb3_port_prepare_disable(struct usb_hub *hub,
++					  struct usb_port *port_dev);
+ 
+ static inline char *portspeed(struct usb_hub *hub, int portstatus)
+ {
+@@ -899,82 +901,28 @@ static int hub_set_port_link_state(struct usb_hub *hub, int port1,
+ }
+ 
+ /*
+- * If USB 3.0 ports are placed into the Disabled state, they will no longer
+- * detect any device connects or disconnects.  This is generally not what the
+- * USB core wants, since it expects a disabled port to produce a port status
+- * change event when a new device connects.
+- *
+- * Instead, set the link state to Disabled, wait for the link to settle into
+- * that state, clear any change bits, and then put the port into the RxDetect
+- * state.
++ * Unlike USB-2, USB-3 has no link state that avoids negotiating a connection
++ * with a plugged-in cable while still signalling the host when the cable is
++ * unplugged. Disable remote wake and set the link state to U3 for USB-3 devices
+  */
+-static int hub_usb3_port_disable(struct usb_hub *hub, int port1)
+-{
+-	int ret;
+-	int total_time;
+-	u16 portchange, portstatus;
+-
+-	if (!hub_is_superspeed(hub->hdev))
+-		return -EINVAL;
+-
+-	ret = hub_port_status(hub, port1, &portstatus, &portchange);
+-	if (ret < 0)
+-		return ret;
+-
+-	/*
+-	 * USB controller Advanced Micro Devices, Inc. [AMD] FCH USB XHCI
+-	 * Controller [1022:7814] will have spurious result making the following
+-	 * usb 3.0 device hotplugging route to the 2.0 root hub and recognized
+-	 * as high-speed device if we set the usb 3.0 port link state to
+-	 * Disabled. Since it's already in USB_SS_PORT_LS_RX_DETECT state, we
+-	 * check the state here to avoid the bug.
+-	 */
+-	if ((portstatus & USB_PORT_STAT_LINK_STATE) ==
+-				USB_SS_PORT_LS_RX_DETECT) {
+-		dev_dbg(&hub->ports[port1 - 1]->dev,
+-			 "Not disabling port; link state is RxDetect\n");
+-		return ret;
+-	}
+-
+-	ret = hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_SS_DISABLED);
+-	if (ret)
+-		return ret;
+-
+-	/* Wait for the link to enter the disabled state. */
+-	for (total_time = 0; ; total_time += HUB_DEBOUNCE_STEP) {
+-		ret = hub_port_status(hub, port1, &portstatus, &portchange);
+-		if (ret < 0)
+-			return ret;
+-
+-		if ((portstatus & USB_PORT_STAT_LINK_STATE) ==
+-				USB_SS_PORT_LS_SS_DISABLED)
+-			break;
+-		if (total_time >= HUB_DEBOUNCE_TIMEOUT)
+-			break;
+-		msleep(HUB_DEBOUNCE_STEP);
+-	}
+-	if (total_time >= HUB_DEBOUNCE_TIMEOUT)
+-		dev_warn(&hub->ports[port1 - 1]->dev,
+-				"Could not disable after %d ms\n", total_time);
+-
+-	return hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_RX_DETECT);
+-}
+-
+ static int hub_port_disable(struct usb_hub *hub, int port1, int set_state)
+ {
+ 	struct usb_port *port_dev = hub->ports[port1 - 1];
+ 	struct usb_device *hdev = hub->hdev;
+ 	int ret = 0;
+ 
+-	if (port_dev->child && set_state)
+-		usb_set_device_state(port_dev->child, USB_STATE_NOTATTACHED);
+ 	if (!hub->error) {
+-		if (hub_is_superspeed(hub->hdev))
+-			ret = hub_usb3_port_disable(hub, port1);
+-		else
++		if (hub_is_superspeed(hub->hdev)) {
++			hub_usb3_port_prepare_disable(hub, port_dev);
++			ret = hub_set_port_link_state(hub, port_dev->portnum,
++						      USB_SS_PORT_LS_U3);
++		} else {
+ 			ret = usb_clear_port_feature(hdev, port1,
+ 					USB_PORT_FEAT_ENABLE);
++		}
+ 	}
++	if (port_dev->child && set_state)
++		usb_set_device_state(port_dev->child, USB_STATE_NOTATTACHED);
+ 	if (ret && ret != -ENODEV)
+ 		dev_err(&port_dev->dev, "cannot disable (err = %d)\n", ret);
+ 	return ret;
+@@ -4142,6 +4090,26 @@ void usb_unlocked_enable_lpm(struct usb_device *udev)
+ }
+ EXPORT_SYMBOL_GPL(usb_unlocked_enable_lpm);
+ 
++/* usb3 devices use U3 for disabled, make sure remote wakeup is disabled */
++static void hub_usb3_port_prepare_disable(struct usb_hub *hub,
++					  struct usb_port *port_dev)
++{
++	struct usb_device *udev = port_dev->child;
++	int ret;
++
++	if (udev && udev->port_is_suspended && udev->do_remote_wakeup) {
++		ret = hub_set_port_link_state(hub, port_dev->portnum,
++					      USB_SS_PORT_LS_U0);
++		if (!ret) {
++			msleep(USB_RESUME_TIMEOUT);
++			ret = usb_disable_remote_wakeup(udev);
++		}
++		if (ret)
++			dev_warn(&udev->dev,
++				 "Port disable: can't disable remote wake\n");
++		udev->do_remote_wakeup = 0;
++	}
++}
+ 
+ #else	/* CONFIG_PM */
+ 
+@@ -4149,6 +4117,9 @@ EXPORT_SYMBOL_GPL(usb_unlocked_enable_lpm);
+ #define hub_resume		NULL
+ #define hub_reset_resume	NULL
+ 
++static inline void hub_usb3_port_prepare_disable(struct usb_hub *hub,
++						 struct usb_port *port_dev) { }
++
+ int usb_disable_lpm(struct usb_device *udev)
+ {
+ 	return 0;
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index dc3b5962d087..717d5f6d3e7c 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -775,6 +775,9 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+ 		unsigned length, unsigned last, unsigned chain, unsigned node)
+ {
+ 	struct dwc3_trb		*trb;
++	struct dwc3		*dwc = dep->dwc;
++	struct usb_gadget	*gadget = &dwc->gadget;
++	enum usb_device_speed	speed = gadget->speed;
+ 
+ 	dwc3_trace(trace_dwc3_gadget, "%s: req %p dma %08llx length %d%s%s",
+ 			dep->name, req, (unsigned long long) dma,
+@@ -804,10 +807,16 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+ 		break;
+ 
+ 	case USB_ENDPOINT_XFER_ISOC:
+-		if (!node)
++		if (!node) {
+ 			trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS_FIRST;
+-		else
++
++			if (speed == USB_SPEED_HIGH) {
++				struct usb_ep *ep = &dep->endpoint;
++				trb->size |= DWC3_TRB_SIZE_PCM1(ep->mult - 1);
++			}
++		} else {
+ 			trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS;
++		}
+ 
+ 		/* always enable Interrupt on Missed ISOC */
+ 		trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 5ebe6af7976e..e2abaa7943d9 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -197,11 +197,16 @@ int config_ep_by_speed(struct usb_gadget *g,
+ 
+ ep_found:
+ 	/* commit results */
+-	_ep->maxpacket = usb_endpoint_maxp(chosen_desc);
++	_ep->maxpacket = usb_endpoint_maxp(chosen_desc) & 0x7ff;
+ 	_ep->desc = chosen_desc;
+ 	_ep->comp_desc = NULL;
+ 	_ep->maxburst = 0;
+-	_ep->mult = 0;
++	_ep->mult = 1;
++
++	if (g->speed == USB_SPEED_HIGH && (usb_endpoint_xfer_isoc(_ep->desc) ||
++				usb_endpoint_xfer_int(_ep->desc)))
++		_ep->mult = usb_endpoint_maxp(_ep->desc) & 0x7ff;
++
+ 	if (!want_comp_desc)
+ 		return 0;
+ 
+@@ -218,7 +223,7 @@ ep_found:
+ 		switch (usb_endpoint_type(_ep->desc)) {
+ 		case USB_ENDPOINT_XFER_ISOC:
+ 			/* mult: bits 1:0 of bmAttributes */
+-			_ep->mult = comp_desc->bmAttributes & 0x3;
++			_ep->mult = (comp_desc->bmAttributes & 0x3) + 1;
+ 		case USB_ENDPOINT_XFER_BULK:
+ 		case USB_ENDPOINT_XFER_INT:
+ 			_ep->maxburst = comp_desc->bMaxBurst + 1;
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index cd214ec8a601..969cfe741380 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -1067,13 +1067,13 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ 	agdev->out_ep = usb_ep_autoconfig(gadget, &fs_epout_desc);
+ 	if (!agdev->out_ep) {
+ 		dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
+-		goto err;
++		return ret;
+ 	}
+ 
+ 	agdev->in_ep = usb_ep_autoconfig(gadget, &fs_epin_desc);
+ 	if (!agdev->in_ep) {
+ 		dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
+-		goto err;
++		return ret;
+ 	}
+ 
+ 	uac2->p_prm.uac2 = uac2;
+@@ -1091,7 +1091,7 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ 	ret = usb_assign_descriptors(fn, fs_audio_desc, hs_audio_desc, NULL,
+ 				     NULL);
+ 	if (ret)
+-		goto err;
++		return ret;
+ 
+ 	prm = &agdev->uac2.c_prm;
+ 	prm->max_psize = hs_epout_desc.wMaxPacketSize;
+@@ -1106,19 +1106,19 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ 	prm->rbuf = kzalloc(prm->max_psize * USB_XFERS, GFP_KERNEL);
+ 	if (!prm->rbuf) {
+ 		prm->max_psize = 0;
+-		goto err_free_descs;
++		goto err;
+ 	}
+ 
+ 	ret = alsa_uac2_init(agdev);
+ 	if (ret)
+-		goto err_free_descs;
++		goto err;
+ 	return 0;
+ 
+-err_free_descs:
+-	usb_free_all_descriptors(fn);
+ err:
+ 	kfree(agdev->uac2.p_prm.rbuf);
+ 	kfree(agdev->uac2.c_prm.rbuf);
++err_free_descs:
++	usb_free_all_descriptors(fn);
+ 	return -EINVAL;
+ }
+ 
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index 3d0d5d94a62f..0f01c04d7cbd 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -243,7 +243,7 @@ uvc_video_alloc_requests(struct uvc_video *video)
+ 
+ 	req_size = video->ep->maxpacket
+ 		 * max_t(unsigned int, video->ep->maxburst, 1)
+-		 * (video->ep->mult + 1);
++		 * (video->ep->mult);
+ 
+ 	for (i = 0; i < UVC_NUM_REQUESTS; ++i) {
+ 		video->req_buffer[i] = kmalloc(req_size, GFP_KERNEL);
+diff --git a/drivers/usb/host/uhci-pci.c b/drivers/usb/host/uhci-pci.c
+index 940304c33224..02260cfdedb1 100644
+--- a/drivers/usb/host/uhci-pci.c
++++ b/drivers/usb/host/uhci-pci.c
+@@ -129,6 +129,10 @@ static int uhci_pci_init(struct usb_hcd *hcd)
+ 	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_HP)
+ 		uhci->wait_for_hp = 1;
+ 
++	/* Intel controllers use non-PME wakeup signalling */
++	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_INTEL)
++		device_set_run_wake(uhci_dev(uhci), 1);
++
+ 	/* Set up pointers to PCI-specific functions */
+ 	uhci->reset_hc = uhci_pci_reset_hc;
+ 	uhci->check_and_reset_hc = uhci_pci_check_and_reset_hc;
+diff --git a/drivers/usb/serial/kl5kusb105.c b/drivers/usb/serial/kl5kusb105.c
+index fc5d3a791e08..6f29bfadbe33 100644
+--- a/drivers/usb/serial/kl5kusb105.c
++++ b/drivers/usb/serial/kl5kusb105.c
+@@ -296,7 +296,7 @@ static int  klsi_105_open(struct tty_struct *tty, struct usb_serial_port *port)
+ 	rc = usb_serial_generic_open(tty, port);
+ 	if (rc) {
+ 		retval = rc;
+-		goto exit;
++		goto err_free_cfg;
+ 	}
+ 
+ 	rc = usb_control_msg(port->serial->dev,
+@@ -315,17 +315,32 @@ static int  klsi_105_open(struct tty_struct *tty, struct usb_serial_port *port)
+ 		dev_dbg(&port->dev, "%s - enabled reading\n", __func__);
+ 
+ 	rc = klsi_105_get_line_state(port, &line_state);
+-	if (rc >= 0) {
+-		spin_lock_irqsave(&priv->lock, flags);
+-		priv->line_state = line_state;
+-		spin_unlock_irqrestore(&priv->lock, flags);
+-		dev_dbg(&port->dev, "%s - read line state 0x%lx\n", __func__, line_state);
+-		retval = 0;
+-	} else
++	if (rc < 0) {
+ 		retval = rc;
++		goto err_disable_read;
++	}
++
++	spin_lock_irqsave(&priv->lock, flags);
++	priv->line_state = line_state;
++	spin_unlock_irqrestore(&priv->lock, flags);
++	dev_dbg(&port->dev, "%s - read line state 0x%lx\n", __func__,
++			line_state);
++
++	return 0;
+ 
+-exit:
++err_disable_read:
++	usb_control_msg(port->serial->dev,
++			     usb_sndctrlpipe(port->serial->dev, 0),
++			     KL5KUSB105A_SIO_CONFIGURE,
++			     USB_TYPE_VENDOR | USB_DIR_OUT,
++			     KL5KUSB105A_SIO_CONFIGURE_READ_OFF,
++			     0, /* index */
++			     NULL, 0,
++			     KLSI_TIMEOUT);
++	usb_serial_generic_close(port);
++err_free_cfg:
+ 	kfree(cfg);
++
+ 	return retval;
+ }
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 9894e341c6ac..7ce31a4c7e7f 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -268,6 +268,8 @@ static void option_instat_callback(struct urb *urb);
+ #define TELIT_PRODUCT_CC864_SINGLE		0x1006
+ #define TELIT_PRODUCT_DE910_DUAL		0x1010
+ #define TELIT_PRODUCT_UE910_V2			0x1012
++#define TELIT_PRODUCT_LE922_USBCFG1		0x1040
++#define TELIT_PRODUCT_LE922_USBCFG2		0x1041
+ #define TELIT_PRODUCT_LE922_USBCFG0		0x1042
+ #define TELIT_PRODUCT_LE922_USBCFG3		0x1043
+ #define TELIT_PRODUCT_LE922_USBCFG5		0x1045
+@@ -1210,6 +1212,10 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
+ 		.driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg0 },
++	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
++		.driver_info = (kernel_ulong_t)&telit_le910_blacklist },
++	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG2),
++		.driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG3),
+ 		.driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG5, 0xff),
+@@ -1989,6 +1995,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d02, 0xff, 0x00, 0x00) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x02, 0x01) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x00, 0x00) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7d04, 0xff) },			/* D-Link DWM-158 */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e19, 0xff),			/* D-Link DWM-221 B1 */
+ 	  .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
+diff --git a/drivers/usb/usbip/vudc_transfer.c b/drivers/usb/usbip/vudc_transfer.c
+index aba6bd478045..bc0296d937d0 100644
+--- a/drivers/usb/usbip/vudc_transfer.c
++++ b/drivers/usb/usbip/vudc_transfer.c
+@@ -339,6 +339,8 @@ static void v_timer(unsigned long _vudc)
+ 		total = timer->frame_limit;
+ 	}
+ 
++	/* We have to clear ep0 flags separately as it's not on the list */
++	udc->ep[0].already_seen = 0;
+ 	list_for_each_entry(_ep, &udc->gadget.ep_list, ep_list) {
+ 		ep = to_vep(_ep);
+ 		ep->already_seen = 0;
+diff --git a/drivers/watchdog/mei_wdt.c b/drivers/watchdog/mei_wdt.c
+index 630bd189f167..2a9d5cdedea2 100644
+--- a/drivers/watchdog/mei_wdt.c
++++ b/drivers/watchdog/mei_wdt.c
+@@ -389,6 +389,8 @@ static int mei_wdt_register(struct mei_wdt *wdt)
+ 	wdt->wdd.max_timeout = MEI_WDT_MAX_TIMEOUT;
+ 
+ 	watchdog_set_drvdata(&wdt->wdd, wdt);
++	watchdog_stop_on_reboot(&wdt->wdd);
++
+ 	ret = watchdog_register_device(&wdt->wdd);
+ 	if (ret) {
+ 		dev_err(dev, "unable to register watchdog device = %d.\n", ret);
+diff --git a/drivers/watchdog/qcom-wdt.c b/drivers/watchdog/qcom-wdt.c
+index 5796b5d1b3f2..4f47b5e90956 100644
+--- a/drivers/watchdog/qcom-wdt.c
++++ b/drivers/watchdog/qcom-wdt.c
+@@ -209,7 +209,7 @@ static int qcom_wdt_probe(struct platform_device *pdev)
+ 	wdt->wdd.parent = &pdev->dev;
+ 	wdt->layout = regs;
+ 
+-	if (readl(wdt->base + WDT_STS) & 1)
++	if (readl(wdt_addr(wdt, WDT_STS)) & 1)
+ 		wdt->wdd.bootstatus = WDIOF_CARDRESET;
+ 
+ 	/*
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index bb952121ea94..2ef2b61b69df 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -1007,7 +1007,7 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ 
+ 	vma->vm_ops = &gntdev_vmops;
+ 
+-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
++	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_MIXEDMAP;
+ 
+ 	if (use_ptemod)
+ 		vma->vm_flags |= VM_DONTCOPY;
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 08ae99343d92..b010242bab32 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -837,7 +837,7 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
+ 		return true;	 /* already a holder */
+ 	else if (bdev->bd_holder != NULL)
+ 		return false; 	 /* held by someone else */
+-	else if (bdev->bd_contains == bdev)
++	else if (whole == bdev)
+ 		return true;  	 /* is a whole device which isn't held */
+ 
+ 	else if (whole->bd_holder == bd_may_claim)
+diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c
+index e0f071f6b5a7..63d197724519 100644
+--- a/fs/btrfs/async-thread.c
++++ b/fs/btrfs/async-thread.c
+@@ -86,6 +86,20 @@ btrfs_work_owner(struct btrfs_work *work)
+ 	return work->wq->fs_info;
+ }
+ 
++bool btrfs_workqueue_normal_congested(struct btrfs_workqueue *wq)
++{
++	/*
++	 * We could compare wq->normal->pending with num_online_cpus()
++	 * to support the "thresh == NO_THRESHOLD" case, but it requires
++	 * moving up atomic_inc/dec in thresh_queue/exec_hook. Let's
++	 * postpone it until someone needs support for that case.
++	 */
++	if (wq->normal->thresh == NO_THRESHOLD)
++		return false;
++
++	return atomic_read(&wq->normal->pending) > wq->normal->thresh * 2;
++}
++
+ BTRFS_WORK_HELPER(worker_helper);
+ BTRFS_WORK_HELPER(delalloc_helper);
+ BTRFS_WORK_HELPER(flush_delalloc_helper);
+diff --git a/fs/btrfs/async-thread.h b/fs/btrfs/async-thread.h
+index 8e52484cd461..1f9597355c9d 100644
+--- a/fs/btrfs/async-thread.h
++++ b/fs/btrfs/async-thread.h
+@@ -84,4 +84,5 @@ void btrfs_workqueue_set_max(struct btrfs_workqueue *wq, int max);
+ void btrfs_set_work_high_priority(struct btrfs_work *work);
+ struct btrfs_fs_info *btrfs_work_owner(struct btrfs_work *work);
+ struct btrfs_fs_info *btrfs_workqueue_owner(struct __btrfs_workqueue *wq);
++bool btrfs_workqueue_normal_congested(struct btrfs_workqueue *wq);
+ #endif
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 791e47ce9d27..469fa3268d9b 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -2210,6 +2210,8 @@ btrfs_disk_balance_args_to_cpu(struct btrfs_balance_args *cpu,
+ 	cpu->target = le64_to_cpu(disk->target);
+ 	cpu->flags = le64_to_cpu(disk->flags);
+ 	cpu->limit = le64_to_cpu(disk->limit);
++	cpu->stripes_min = le32_to_cpu(disk->stripes_min);
++	cpu->stripes_max = le32_to_cpu(disk->stripes_max);
+ }
+ 
+ static inline void
+@@ -2228,6 +2230,8 @@ btrfs_cpu_balance_args_to_disk(struct btrfs_disk_balance_args *disk,
+ 	disk->target = cpu_to_le64(cpu->target);
+ 	disk->flags = cpu_to_le64(cpu->flags);
+ 	disk->limit = cpu_to_le64(cpu->limit);
++	disk->stripes_min = cpu_to_le32(cpu->stripes_min);
++	disk->stripes_max = cpu_to_le32(cpu->stripes_max);
+ }
+ 
+ /* struct btrfs_super_block */
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 3eeb9cd8cfa5..de946dd743b7 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -1356,7 +1356,8 @@ release_path:
+ 	total_done++;
+ 
+ 	btrfs_release_prepared_delayed_node(delayed_node);
+-	if (async_work->nr == 0 || total_done < async_work->nr)
++	if ((async_work->nr == 0 && total_done < BTRFS_DELAYED_WRITEBACK) ||
++	    total_done < async_work->nr)
+ 		goto again;
+ 
+ free_path:
+@@ -1372,7 +1373,8 @@ static int btrfs_wq_run_delayed_node(struct btrfs_delayed_root *delayed_root,
+ {
+ 	struct btrfs_async_delayed_work *async_work;
+ 
+-	if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND)
++	if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND ||
++	    btrfs_workqueue_normal_congested(fs_info->delayed_workers))
+ 		return 0;
+ 
+ 	async_work = kmalloc(sizeof(*async_work), GFP_NOFS);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 3dede6d53bad..dafcfd017d37 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -559,7 +559,15 @@ static noinline int check_leaf(struct btrfs_root *root,
+ 	u32 nritems = btrfs_header_nritems(leaf);
+ 	int slot;
+ 
+-	if (nritems == 0) {
++	/*
++	 * Extent buffers from a relocation tree have an owner field that
++	 * corresponds to the subvolume tree they are based on. So just from an
++	 * extent buffer alone we can not find out what is the id of the
++	 * corresponding subvolume tree, so we can not figure out if the extent
++	 * buffer corresponds to the root of the relocation tree or not. So skip
++	 * this check for relocation trees.
++	 */
++	if (nritems == 0 && !btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_RELOC)) {
+ 		struct btrfs_root *check_root;
+ 
+ 		key.objectid = btrfs_header_owner(leaf);
+@@ -572,17 +580,24 @@ static noinline int check_leaf(struct btrfs_root *root,
+ 		 * open_ctree() some roots has not yet been set up.
+ 		 */
+ 		if (!IS_ERR_OR_NULL(check_root)) {
++			struct extent_buffer *eb;
++
++			eb = btrfs_root_node(check_root);
+ 			/* if leaf is the root, then it's fine */
+-			if (leaf->start !=
+-			    btrfs_root_bytenr(&check_root->root_item)) {
++			if (leaf != eb) {
+ 				CORRUPT("non-root leaf's nritems is 0",
+-					leaf, root, 0);
++					leaf, check_root, 0);
++				free_extent_buffer(eb);
+ 				return -EIO;
+ 			}
++			free_extent_buffer(eb);
+ 		}
+ 		return 0;
+ 	}
+ 
++	if (nritems == 0)
++		return 0;
++
+ 	/* Check the 0 item */
+ 	if (btrfs_item_offset_nr(leaf, 0) + btrfs_item_size_nr(leaf, 0) !=
+ 	    BTRFS_LEAF_DATA_SIZE(root)) {
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 665da8f66ff1..a1b40ab5770a 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -8884,14 +8884,13 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans,
+ 	ret = btrfs_lookup_extent_info(trans, root, bytenr, level - 1, 1,
+ 				       &wc->refs[level - 1],
+ 				       &wc->flags[level - 1]);
+-	if (ret < 0) {
+-		btrfs_tree_unlock(next);
+-		return ret;
+-	}
++	if (ret < 0)
++		goto out_unlock;
+ 
+ 	if (unlikely(wc->refs[level - 1] == 0)) {
+ 		btrfs_err(root->fs_info, "Missing references.");
+-		BUG();
++		ret = -EIO;
++		goto out_unlock;
+ 	}
+ 	*lookup_info = 0;
+ 
+@@ -8943,7 +8942,12 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	level--;
+-	BUG_ON(level != btrfs_header_level(next));
++	ASSERT(level == btrfs_header_level(next));
++	if (level != btrfs_header_level(next)) {
++		btrfs_err(root->fs_info, "mismatched level");
++		ret = -EIO;
++		goto out_unlock;
++	}
+ 	path->nodes[level] = next;
+ 	path->slots[level] = 0;
+ 	path->locks[level] = BTRFS_WRITE_LOCK_BLOCKING;
+@@ -8958,8 +8962,15 @@ skip:
+ 		if (wc->flags[level] & BTRFS_BLOCK_FLAG_FULL_BACKREF) {
+ 			parent = path->nodes[level]->start;
+ 		} else {
+-			BUG_ON(root->root_key.objectid !=
++			ASSERT(root->root_key.objectid ==
+ 			       btrfs_header_owner(path->nodes[level]));
++			if (root->root_key.objectid !=
++			    btrfs_header_owner(path->nodes[level])) {
++				btrfs_err(root->fs_info,
++						"mismatched block owner");
++				ret = -EIO;
++				goto out_unlock;
++			}
+ 			parent = 0;
+ 		}
+ 
+@@ -8976,12 +8987,18 @@ skip:
+ 		}
+ 		ret = btrfs_free_extent(trans, root, bytenr, blocksize, parent,
+ 				root->root_key.objectid, level - 1, 0);
+-		BUG_ON(ret); /* -ENOMEM */
++		if (ret)
++			goto out_unlock;
+ 	}
++
++	*lookup_info = 1;
++	ret = 1;
++
++out_unlock:
+ 	btrfs_tree_unlock(next);
+ 	free_extent_buffer(next);
+-	*lookup_info = 1;
+-	return 1;
++
++	return ret;
+ }
+ 
+ /*
+@@ -10127,6 +10144,11 @@ int btrfs_read_block_groups(struct btrfs_root *root)
+ 	struct extent_buffer *leaf;
+ 	int need_clear = 0;
+ 	u64 cache_gen;
++	u64 feature;
++	int mixed;
++
++	feature = btrfs_super_incompat_flags(info->super_copy);
++	mixed = !!(feature & BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS);
+ 
+ 	root = info->extent_root;
+ 	key.objectid = 0;
+@@ -10180,6 +10202,15 @@ int btrfs_read_block_groups(struct btrfs_root *root)
+ 				   btrfs_item_ptr_offset(leaf, path->slots[0]),
+ 				   sizeof(cache->item));
+ 		cache->flags = btrfs_block_group_flags(&cache->item);
++		if (!mixed &&
++		    ((cache->flags & BTRFS_BLOCK_GROUP_METADATA) &&
++		    (cache->flags & BTRFS_BLOCK_GROUP_DATA))) {
++			btrfs_err(info,
++"bg %llu is a mixed block group but filesystem hasn't enabled mixed block groups",
++				  cache->key.objectid);
++			ret = -EINVAL;
++			goto error;
++		}
+ 
+ 		key.objectid = found_key.objectid + found_key.offset;
+ 		btrfs_release_path(path);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index c3ec30dea9a5..3f9a6b40fbbd 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -5209,11 +5209,20 @@ int read_extent_buffer_pages(struct extent_io_tree *tree,
+ 			lock_page(page);
+ 		}
+ 		locked_pages++;
++	}
++	/*
++	 * We need to lock all pages first to make sure that
++	 * the uptodate bit of our pages won't be affected by
++	 * clear_extent_buffer_uptodate().
++	 */
++	for (i = start_i; i < num_pages; i++) {
++		page = eb->pages[i];
+ 		if (!PageUptodate(page)) {
+ 			num_reads++;
+ 			all_uptodate = 0;
+ 		}
+ 	}
++
+ 	if (all_uptodate) {
+ 		if (start_i == 0)
+ 			set_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 7fd939bfbd99..fcd2b3be21bf 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3813,6 +3813,11 @@ process_slot:
+ 		}
+ 		btrfs_release_path(path);
+ 		key.offset = next_key_min_offset;
++
++		if (fatal_signal_pending(current)) {
++			ret = -EINTR;
++			goto out;
++		}
+ 	}
+ 	ret = 0;
+ 
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 8db2e29fdcf4..9cf03ebb27cc 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2332,10 +2332,6 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+ 	int err = -ENOMEM;
+ 	int ret = 0;
+ 
+-	mutex_lock(&fs_info->qgroup_rescan_lock);
+-	fs_info->qgroup_rescan_running = true;
+-	mutex_unlock(&fs_info->qgroup_rescan_lock);
+-
+ 	path = btrfs_alloc_path();
+ 	if (!path)
+ 		goto out;
+@@ -2446,6 +2442,7 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid,
+ 		sizeof(fs_info->qgroup_rescan_progress));
+ 	fs_info->qgroup_rescan_progress.objectid = progress_objectid;
+ 	init_completion(&fs_info->qgroup_rescan_completion);
++	fs_info->qgroup_rescan_running = true;
+ 
+ 	spin_unlock(&fs_info->qgroup_lock);
+ 	mutex_unlock(&fs_info->qgroup_rescan_lock);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index c0c13dc6fe12..d08a79dbf323 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -923,9 +923,16 @@ again:
+ 			path2->slots[level]--;
+ 
+ 		eb = path2->nodes[level];
+-		WARN_ON(btrfs_node_blockptr(eb, path2->slots[level]) !=
+-			cur->bytenr);
+-
++		if (btrfs_node_blockptr(eb, path2->slots[level]) !=
++		    cur->bytenr) {
++			btrfs_err(root->fs_info,
++	"couldn't find block (%llu) (level %d) in tree (%llu) with key (%llu %u %llu)",
++				  cur->bytenr, level - 1, root->objectid,
++				  node_key->objectid, node_key->type,
++				  node_key->offset);
++			err = -ENOENT;
++			goto out;
++		}
+ 		lower = cur;
+ 		need_check = true;
+ 		for (; level < BTRFS_MAX_LEVEL; level++) {
+@@ -1387,14 +1394,23 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
+ 	root_key.offset = objectid;
+ 
+ 	if (root->root_key.objectid == objectid) {
++		u64 commit_root_gen;
++
+ 		/* called by btrfs_init_reloc_root */
+ 		ret = btrfs_copy_root(trans, root, root->commit_root, &eb,
+ 				      BTRFS_TREE_RELOC_OBJECTID);
+ 		BUG_ON(ret);
+-
+ 		last_snap = btrfs_root_last_snapshot(&root->root_item);
+-		btrfs_set_root_last_snapshot(&root->root_item,
+-					     trans->transid - 1);
++		/*
++		 * Set the last_snapshot field to the generation of the commit
++		 * root - like this ctree.c:btrfs_block_can_be_shared() behaves
++		 * correctly (returns true) when the relocation root is created
++		 * either inside the critical section of a transaction commit
++		 * (through transaction.c:qgroup_account_snapshot()) or when
++		 * it's created before the transaction commit is started.
++		 */
++		commit_root_gen = btrfs_header_generation(root->commit_root);
++		btrfs_set_root_last_snapshot(&root->root_item, commit_root_gen);
+ 	} else {
+ 		/*
+ 		 * called by btrfs_reloc_post_snapshot_hook.
+@@ -2350,6 +2366,10 @@ void free_reloc_roots(struct list_head *list)
+ 	while (!list_empty(list)) {
+ 		reloc_root = list_entry(list->next, struct btrfs_root,
+ 					root_list);
++		free_extent_buffer(reloc_root->node);
++		free_extent_buffer(reloc_root->commit_root);
++		reloc_root->node = NULL;
++		reloc_root->commit_root = NULL;
+ 		__del_reloc_root(reloc_root);
+ 	}
+ }
+@@ -2686,11 +2706,15 @@ static int do_relocation(struct btrfs_trans_handle *trans,
+ 
+ 		if (!upper->eb) {
+ 			ret = btrfs_search_slot(trans, root, key, path, 0, 1);
+-			if (ret < 0) {
+-				err = ret;
++			if (ret) {
++				if (ret < 0)
++					err = ret;
++				else
++					err = -ENOENT;
++
++				btrfs_release_path(path);
+ 				break;
+ 			}
+-			BUG_ON(ret > 0);
+ 
+ 			if (!upper->eb) {
+ 				upper->eb = path->nodes[upper->level];
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index a87675ffd02b..563878a141a3 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5802,6 +5802,64 @@ static int changed_extent(struct send_ctx *sctx,
+ 	int ret = 0;
+ 
+ 	if (sctx->cur_ino != sctx->cmp_key->objectid) {
++
++		if (result == BTRFS_COMPARE_TREE_CHANGED) {
++			struct extent_buffer *leaf_l;
++			struct extent_buffer *leaf_r;
++			struct btrfs_file_extent_item *ei_l;
++			struct btrfs_file_extent_item *ei_r;
++
++			leaf_l = sctx->left_path->nodes[0];
++			leaf_r = sctx->right_path->nodes[0];
++			ei_l = btrfs_item_ptr(leaf_l,
++					      sctx->left_path->slots[0],
++					      struct btrfs_file_extent_item);
++			ei_r = btrfs_item_ptr(leaf_r,
++					      sctx->right_path->slots[0],
++					      struct btrfs_file_extent_item);
++
++			/*
++			 * We may have found an extent item that has changed
++			 * only its disk_bytenr field and the corresponding
++			 * inode item was not updated. This case happens due to
++			 * very specific timings during relocation when a leaf
++			 * that contains file extent items is COWed while
++			 * relocation is ongoing and it's in the stage where it
++			 * updates data pointers. So when this happens we can
++			 * safely ignore it since we know it's the same extent,
++			 * but just at different logical and physical locations
++			 * (when an extent is fully replaced with a new one, we
++			 * know the generation number must have changed too,
++			 * since snapshot creation implies committing the current
++			 * transaction, and the inode item must have been updated
++			 * as well).
++			 * This replacement of the disk_bytenr happens at
++			 * relocation.c:replace_file_extents() through
++			 * relocation.c:btrfs_reloc_cow_block().
++			 */
++			if (btrfs_file_extent_generation(leaf_l, ei_l) ==
++			    btrfs_file_extent_generation(leaf_r, ei_r) &&
++			    btrfs_file_extent_ram_bytes(leaf_l, ei_l) ==
++			    btrfs_file_extent_ram_bytes(leaf_r, ei_r) &&
++			    btrfs_file_extent_compression(leaf_l, ei_l) ==
++			    btrfs_file_extent_compression(leaf_r, ei_r) &&
++			    btrfs_file_extent_encryption(leaf_l, ei_l) ==
++			    btrfs_file_extent_encryption(leaf_r, ei_r) &&
++			    btrfs_file_extent_other_encoding(leaf_l, ei_l) ==
++			    btrfs_file_extent_other_encoding(leaf_r, ei_r) &&
++			    btrfs_file_extent_type(leaf_l, ei_l) ==
++			    btrfs_file_extent_type(leaf_r, ei_r) &&
++			    btrfs_file_extent_disk_bytenr(leaf_l, ei_l) !=
++			    btrfs_file_extent_disk_bytenr(leaf_r, ei_r) &&
++			    btrfs_file_extent_disk_num_bytes(leaf_l, ei_l) ==
++			    btrfs_file_extent_disk_num_bytes(leaf_r, ei_r) &&
++			    btrfs_file_extent_offset(leaf_l, ei_l) ==
++			    btrfs_file_extent_offset(leaf_r, ei_r) &&
++			    btrfs_file_extent_num_bytes(leaf_l, ei_l) ==
++			    btrfs_file_extent_num_bytes(leaf_r, ei_r))
++				return 0;
++		}
++
+ 		inconsistent_snapshot_error(sctx, result, "extent");
+ 		return -EIO;
+ 	}
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 90e1198bc63d..e63c96ca0e96 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1940,12 +1940,11 @@ static noinline int find_dir_range(struct btrfs_root *root,
+ next:
+ 	/* check the next slot in the tree to see if it is a valid item */
+ 	nritems = btrfs_header_nritems(path->nodes[0]);
++	path->slots[0]++;
+ 	if (path->slots[0] >= nritems) {
+ 		ret = btrfs_next_leaf(root, path);
+ 		if (ret)
+ 			goto out;
+-	} else {
+-		path->slots[0]++;
+ 	}
+ 
+ 	btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
+@@ -5205,6 +5204,7 @@ process_leaf:
+ 			if (di_key.type == BTRFS_ROOT_ITEM_KEY)
+ 				continue;
+ 
++			btrfs_release_path(path);
+ 			di_inode = btrfs_iget(root->fs_info->sb, &di_key,
+ 					      root, NULL);
+ 			if (IS_ERR(di_inode)) {
+@@ -5214,13 +5214,12 @@ process_leaf:
+ 
+ 			if (btrfs_inode_in_log(di_inode, trans->transid)) {
+ 				iput(di_inode);
+-				continue;
++				break;
+ 			}
+ 
+ 			ctx->log_new_dentries = false;
+ 			if (type == BTRFS_FT_DIR || type == BTRFS_FT_SYMLINK)
+ 				log_mode = LOG_INODE_ALL;
+-			btrfs_release_path(path);
+ 			ret = btrfs_log_inode(trans, root, di_inode,
+ 					      log_mode, 0, LLONG_MAX, ctx);
+ 			if (!ret &&
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 035efce603a9..7c9c6a4be4fd 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -859,7 +859,7 @@ static void btrfs_close_bdev(struct btrfs_device *device)
+ 		blkdev_put(device->bdev, device->mode);
+ }
+ 
+-static void btrfs_close_one_device(struct btrfs_device *device)
++static void btrfs_prepare_close_one_device(struct btrfs_device *device)
+ {
+ 	struct btrfs_fs_devices *fs_devices = device->fs_devices;
+ 	struct btrfs_device *new_device;
+@@ -877,8 +877,6 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 	if (device->missing)
+ 		fs_devices->missing_devices--;
+ 
+-	btrfs_close_bdev(device);
+-
+ 	new_device = btrfs_alloc_device(NULL, &device->devid,
+ 					device->uuid);
+ 	BUG_ON(IS_ERR(new_device)); /* -ENOMEM */
+@@ -892,23 +890,39 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 
+ 	list_replace_rcu(&device->dev_list, &new_device->dev_list);
+ 	new_device->fs_devices = device->fs_devices;
+-
+-	call_rcu(&device->rcu, free_device);
+ }
+ 
+ static int __btrfs_close_devices(struct btrfs_fs_devices *fs_devices)
+ {
+ 	struct btrfs_device *device, *tmp;
++	struct list_head pending_put;
++
++	INIT_LIST_HEAD(&pending_put);
+ 
+ 	if (--fs_devices->opened > 0)
+ 		return 0;
+ 
+ 	mutex_lock(&fs_devices->device_list_mutex);
+ 	list_for_each_entry_safe(device, tmp, &fs_devices->devices, dev_list) {
+-		btrfs_close_one_device(device);
++		btrfs_prepare_close_one_device(device);
++		list_add(&device->dev_list, &pending_put);
+ 	}
+ 	mutex_unlock(&fs_devices->device_list_mutex);
+ 
++	/*
++	 * btrfs_show_devname() uses the device_list_mutex, and a
++	 * call to blkdev_put() can sometimes lead the VFS back into
++	 * this function. So do the put outside of device_list_mutex,
++	 * for now.
++	 */
++	while (!list_empty(&pending_put)) {
++		device = list_first_entry(&pending_put,
++				struct btrfs_device, dev_list);
++		list_del(&device->dev_list);
++		btrfs_close_bdev(device);
++		call_rcu(&device->rcu, free_device);
++	}
++
+ 	WARN_ON(fs_devices->open_devices);
+ 	WARN_ON(fs_devices->rw_devices);
+ 	fs_devices->opened = 0;
+@@ -1846,7 +1860,6 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path, u64 devid)
+ 	u64 num_devices;
+ 	int ret = 0;
+ 	bool clear_super = false;
+-	char *dev_name = NULL;
+ 
+ 	mutex_lock(&uuid_mutex);
+ 
+@@ -1882,11 +1895,6 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path, u64 devid)
+ 		list_del_init(&device->dev_alloc_list);
+ 		device->fs_devices->rw_devices--;
+ 		unlock_chunks(root);
+-		dev_name = kstrdup(device->name->str, GFP_KERNEL);
+-		if (!dev_name) {
+-			ret = -ENOMEM;
+-			goto error_undo;
+-		}
+ 		clear_super = true;
+ 	}
+ 
+@@ -1936,14 +1944,21 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path, u64 devid)
+ 		btrfs_sysfs_rm_device_link(root->fs_info->fs_devices, device);
+ 	}
+ 
+-	btrfs_close_bdev(device);
+-
+-	call_rcu(&device->rcu, free_device);
+-
+ 	num_devices = btrfs_super_num_devices(root->fs_info->super_copy) - 1;
+ 	btrfs_set_super_num_devices(root->fs_info->super_copy, num_devices);
+ 	mutex_unlock(&root->fs_info->fs_devices->device_list_mutex);
+ 
++	/*
++	 * at this point, the device is zero sized and detached from
++	 * the devices list.  All that's left is to zero out the old
++	 * supers and free the device.
++	 */
++	if (device->writeable)
++		btrfs_scratch_superblocks(device->bdev, device->name->str);
++
++	btrfs_close_bdev(device);
++	call_rcu(&device->rcu, free_device);
++
+ 	if (cur_devices->open_devices == 0) {
+ 		struct btrfs_fs_devices *fs_devices;
+ 		fs_devices = root->fs_info->fs_devices;
+@@ -1962,24 +1977,7 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path, u64 devid)
+ 	root->fs_info->num_tolerated_disk_barrier_failures =
+ 		btrfs_calc_num_tolerated_disk_barrier_failures(root->fs_info);
+ 
+-	/*
+-	 * at this point, the device is zero sized.  We want to
+-	 * remove it from the devices list and zero out the old super
+-	 */
+-	if (clear_super) {
+-		struct block_device *bdev;
+-
+-		bdev = blkdev_get_by_path(dev_name, FMODE_READ | FMODE_EXCL,
+-						root->fs_info->bdev_holder);
+-		if (!IS_ERR(bdev)) {
+-			btrfs_scratch_superblocks(bdev, dev_name);
+-			blkdev_put(bdev, FMODE_READ | FMODE_EXCL);
+-		}
+-	}
+-
+ out:
+-	kfree(dev_name);
+-
+ 	mutex_unlock(&uuid_mutex);
+ 	return ret;
+ 
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 65f78b7a9062..24184cae83d2 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -629,6 +629,8 @@ struct TCP_Server_Info {
+ 	unsigned int	max_read;
+ 	unsigned int	max_write;
+ 	__u8		preauth_hash[512];
++	struct delayed_work reconnect; /* reconnect workqueue job */
++	struct mutex reconnect_mutex; /* prevent simultaneous reconnects */
+ #endif /* CONFIG_CIFS_SMB2 */
+ 	unsigned long echo_interval;
+ };
+@@ -832,6 +834,7 @@ cap_unix(struct cifs_ses *ses)
+ struct cifs_tcon {
+ 	struct list_head tcon_list;
+ 	int tc_count;
++	struct list_head rlist; /* reconnect list */
+ 	struct list_head openFileList;
+ 	spinlock_t open_file_lock; /* protects list above */
+ 	struct cifs_ses *ses;	/* pointer to session associated with */
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index 95dab43646f0..ca76f8a7267f 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -204,6 +204,9 @@ extern void cifs_add_pending_open_locked(struct cifs_fid *fid,
+ 					 struct tcon_link *tlink,
+ 					 struct cifs_pending_open *open);
+ extern void cifs_del_pending_open(struct cifs_pending_open *open);
++extern void cifs_put_tcp_session(struct TCP_Server_Info *server,
++				 int from_reconnect);
++extern void cifs_put_tcon(struct cifs_tcon *tcon);
+ 
+ #if IS_ENABLED(CONFIG_CIFS_DFS_UPCALL)
+ extern void cifs_dfs_release_automount_timer(void);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 7b67179521cf..a178f3a5b052 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -52,6 +52,9 @@
+ #include "nterr.h"
+ #include "rfc1002pdu.h"
+ #include "fscache.h"
++#ifdef CONFIG_CIFS_SMB2
++#include "smb2proto.h"
++#endif
+ 
+ #define CIFS_PORT 445
+ #define RFC1001_PORT 139
+@@ -2076,8 +2079,8 @@ cifs_find_tcp_session(struct smb_vol *vol)
+ 	return NULL;
+ }
+ 
+-static void
+-cifs_put_tcp_session(struct TCP_Server_Info *server)
++void
++cifs_put_tcp_session(struct TCP_Server_Info *server, int from_reconnect)
+ {
+ 	struct task_struct *task;
+ 
+@@ -2094,6 +2097,19 @@ cifs_put_tcp_session(struct TCP_Server_Info *server)
+ 
+ 	cancel_delayed_work_sync(&server->echo);
+ 
++#ifdef CONFIG_CIFS_SMB2
++	if (from_reconnect)
++		/*
++		 * Avoid deadlock here: reconnect work calls
++		 * cifs_put_tcp_session() at its end. Need to be sure
++		 * that reconnect work does nothing with server pointer after
++		 * that step.
++		 */
++		cancel_delayed_work(&server->reconnect);
++	else
++		cancel_delayed_work_sync(&server->reconnect);
++#endif
++
+ 	spin_lock(&GlobalMid_Lock);
+ 	server->tcpStatus = CifsExiting;
+ 	spin_unlock(&GlobalMid_Lock);
+@@ -2158,6 +2174,10 @@ cifs_get_tcp_session(struct smb_vol *volume_info)
+ 	INIT_LIST_HEAD(&tcp_ses->tcp_ses_list);
+ 	INIT_LIST_HEAD(&tcp_ses->smb_ses_list);
+ 	INIT_DELAYED_WORK(&tcp_ses->echo, cifs_echo_request);
++#ifdef CONFIG_CIFS_SMB2
++	INIT_DELAYED_WORK(&tcp_ses->reconnect, smb2_reconnect_server);
++	mutex_init(&tcp_ses->reconnect_mutex);
++#endif
+ 	memcpy(&tcp_ses->srcaddr, &volume_info->srcaddr,
+ 	       sizeof(tcp_ses->srcaddr));
+ 	memcpy(&tcp_ses->dstaddr, &volume_info->dstaddr,
+@@ -2316,7 +2336,7 @@ cifs_put_smb_ses(struct cifs_ses *ses)
+ 	spin_unlock(&cifs_tcp_ses_lock);
+ 
+ 	sesInfoFree(ses);
+-	cifs_put_tcp_session(server);
++	cifs_put_tcp_session(server, 0);
+ }
+ 
+ #ifdef CONFIG_KEYS
+@@ -2490,7 +2510,7 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb_vol *volume_info)
+ 		mutex_unlock(&ses->session_mutex);
+ 
+ 		/* existing SMB ses has a server reference already */
+-		cifs_put_tcp_session(server);
++		cifs_put_tcp_session(server, 0);
+ 		free_xid(xid);
+ 		return ses;
+ 	}
+@@ -2580,7 +2600,7 @@ cifs_find_tcon(struct cifs_ses *ses, const char *unc)
+ 	return NULL;
+ }
+ 
+-static void
++void
+ cifs_put_tcon(struct cifs_tcon *tcon)
+ {
+ 	unsigned int xid;
+@@ -3762,7 +3782,7 @@ mount_fail_check:
+ 		else if (ses)
+ 			cifs_put_smb_ses(ses);
+ 		else
+-			cifs_put_tcp_session(server);
++			cifs_put_tcp_session(server, 0);
+ 		bdi_destroy(&cifs_sb->bdi);
+ 	}
+ 
+@@ -4073,7 +4093,7 @@ cifs_construct_tcon(struct cifs_sb_info *cifs_sb, kuid_t fsuid)
+ 	ses = cifs_get_smb_ses(master_tcon->ses->server, vol_info);
+ 	if (IS_ERR(ses)) {
+ 		tcon = (struct cifs_tcon *)ses;
+-		cifs_put_tcp_session(master_tcon->ses->server);
++		cifs_put_tcp_session(master_tcon->ses->server, 0);
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
+index f9e766f464be..b2aff0c6f22c 100644
+--- a/fs/cifs/smb2file.c
++++ b/fs/cifs/smb2file.c
+@@ -260,7 +260,7 @@ smb2_push_mandatory_locks(struct cifsFileInfo *cfile)
+ 	 * and check it for zero before using.
+ 	 */
+ 	max_buf = tlink_tcon(cfile->tlink)->ses->server->maxBuf;
+-	if (!max_buf) {
++	if (max_buf < sizeof(struct smb2_lock_element)) {
+ 		free_xid(xid);
+ 		return -EINVAL;
+ 	}
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 3eec96ca87d9..32e0e06f972c 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -275,7 +275,7 @@ out:
+ 	case SMB2_CHANGE_NOTIFY:
+ 	case SMB2_QUERY_INFO:
+ 	case SMB2_SET_INFO:
+-		return -EAGAIN;
++		rc = -EAGAIN;
+ 	}
+ 	unload_nls(nls_codepage);
+ 	return rc;
+@@ -1820,6 +1820,54 @@ smb2_echo_callback(struct mid_q_entry *mid)
+ 	add_credits(server, credits_received, CIFS_ECHO_OP);
+ }
+ 
++void smb2_reconnect_server(struct work_struct *work)
++{
++	struct TCP_Server_Info *server = container_of(work,
++					struct TCP_Server_Info, reconnect.work);
++	struct cifs_ses *ses;
++	struct cifs_tcon *tcon, *tcon2;
++	struct list_head tmp_list;
++	int tcon_exist = false;
++
++	/* Prevent simultaneous reconnects that can corrupt tcon->rlist list */
++	mutex_lock(&server->reconnect_mutex);
++
++	INIT_LIST_HEAD(&tmp_list);
++	cifs_dbg(FYI, "Need negotiate, reconnecting tcons\n");
++
++	spin_lock(&cifs_tcp_ses_lock);
++	list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
++		list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
++			if (tcon->need_reconnect) {
++				tcon->tc_count++;
++				list_add_tail(&tcon->rlist, &tmp_list);
++				tcon_exist = true;
++			}
++		}
++	}
++	/*
++	 * Get the reference to server struct to be sure that the last call of
++	 * cifs_put_tcon() in the loop below won't release the server pointer.
++	 */
++	if (tcon_exist)
++		server->srv_count++;
++
++	spin_unlock(&cifs_tcp_ses_lock);
++
++	list_for_each_entry_safe(tcon, tcon2, &tmp_list, rlist) {
++		smb2_reconnect(SMB2_ECHO, tcon);
++		list_del_init(&tcon->rlist);
++		cifs_put_tcon(tcon);
++	}
++
++	cifs_dbg(FYI, "Reconnecting tcons finished\n");
++	mutex_unlock(&server->reconnect_mutex);
++
++	/* now we can safely release srv struct */
++	if (tcon_exist)
++		cifs_put_tcp_session(server, 1);
++}
++
+ int
+ SMB2_echo(struct TCP_Server_Info *server)
+ {
+@@ -1832,32 +1880,11 @@ SMB2_echo(struct TCP_Server_Info *server)
+ 	cifs_dbg(FYI, "In echo request\n");
+ 
+ 	if (server->tcpStatus == CifsNeedNegotiate) {
+-		struct list_head *tmp, *tmp2;
+-		struct cifs_ses *ses;
+-		struct cifs_tcon *tcon;
+-
+-		cifs_dbg(FYI, "Need negotiate, reconnecting tcons\n");
+-		spin_lock(&cifs_tcp_ses_lock);
+-		list_for_each(tmp, &server->smb_ses_list) {
+-			ses = list_entry(tmp, struct cifs_ses, smb_ses_list);
+-			list_for_each(tmp2, &ses->tcon_list) {
+-				tcon = list_entry(tmp2, struct cifs_tcon,
+-						  tcon_list);
+-				/* add check for persistent handle reconnect */
+-				if (tcon && tcon->need_reconnect) {
+-					spin_unlock(&cifs_tcp_ses_lock);
+-					rc = smb2_reconnect(SMB2_ECHO, tcon);
+-					spin_lock(&cifs_tcp_ses_lock);
+-				}
+-			}
+-		}
+-		spin_unlock(&cifs_tcp_ses_lock);
++		/* No need to send echo on newly established connections */
++		queue_delayed_work(cifsiod_wq, &server->reconnect, 0);
++		return rc;
+ 	}
+ 
+-	/* if no session, renegotiate failed above */
+-	if (server->tcpStatus == CifsNeedNegotiate)
+-		return -EIO;
+-
+ 	rc = small_smb2_init(SMB2_ECHO, NULL, (void **)&req);
+ 	if (rc)
+ 		return rc;
+diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
+index eb2cde2f64ba..f2d511a6971b 100644
+--- a/fs/cifs/smb2proto.h
++++ b/fs/cifs/smb2proto.h
+@@ -96,6 +96,7 @@ extern int smb2_open_file(const unsigned int xid,
+ extern int smb2_unlock_range(struct cifsFileInfo *cfile,
+ 			     struct file_lock *flock, const unsigned int xid);
+ extern int smb2_push_mandatory_locks(struct cifsFileInfo *cfile);
++extern void smb2_reconnect_server(struct work_struct *work);
+ 
+ /*
+  * SMB2 Worker functions - most of protocol specific implementation details
+diff --git a/fs/exec.c b/fs/exec.c
+index 6fcfb3f7b137..eebe8be7a29d 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -19,7 +19,7 @@
+  * current->executable is only used by the procfs.  This allows a dispatch
+  * table to check for several different types  of binary formats.  We keep
+  * trying until we recognize the file or we run out of supported binary
+- * formats. 
++ * formats.
+  */
+ 
+ #include <linux/slab.h>
+@@ -1261,6 +1261,13 @@ int flush_old_exec(struct linux_binprm * bprm)
+ 	flush_thread();
+ 	current->personality &= ~bprm->per_clear;
+ 
++	/*
++	 * We have to apply CLOEXEC before we change whether the process is
++	 * dumpable (in setup_new_exec) to avoid a race with a process in userspace
++	 * trying to access the should-be-closed file descriptors of a process
++	 * undergoing exec(2).
++	 */
++	do_close_on_exec(current->files);
+ 	return 0;
+ 
+ out:
+@@ -1270,8 +1277,22 @@ EXPORT_SYMBOL(flush_old_exec);
+ 
+ void would_dump(struct linux_binprm *bprm, struct file *file)
+ {
+-	if (inode_permission(file_inode(file), MAY_READ) < 0)
++	struct inode *inode = file_inode(file);
++	if (inode_permission(inode, MAY_READ) < 0) {
++		struct user_namespace *old, *user_ns;
+ 		bprm->interp_flags |= BINPRM_FLAGS_ENFORCE_NONDUMP;
++
++		/* Ensure mm->user_ns contains the executable */
++		user_ns = old = bprm->mm->user_ns;
++		while ((user_ns != &init_user_ns) &&
++		       !privileged_wrt_inode_uidgid(user_ns, inode))
++			user_ns = user_ns->parent;
++
++		if (old != user_ns) {
++			bprm->mm->user_ns = get_user_ns(user_ns);
++			put_user_ns(old);
++		}
++	}
+ }
+ EXPORT_SYMBOL(would_dump);
+ 
+@@ -1301,7 +1322,6 @@ void setup_new_exec(struct linux_binprm * bprm)
+ 	    !gid_eq(bprm->cred->gid, current_egid())) {
+ 		current->pdeath_signal = 0;
+ 	} else {
+-		would_dump(bprm, bprm->file);
+ 		if (bprm->interp_flags & BINPRM_FLAGS_ENFORCE_NONDUMP)
+ 			set_dumpable(current->mm, suid_dumpable);
+ 	}
+@@ -1310,7 +1330,6 @@ void setup_new_exec(struct linux_binprm * bprm)
+ 	   group */
+ 	current->self_exec_id++;
+ 	flush_signal_handlers(current, 0);
+-	do_close_on_exec(current->files);
+ }
+ EXPORT_SYMBOL(setup_new_exec);
+ 
+@@ -1401,7 +1420,7 @@ static void check_unsafe_exec(struct linux_binprm *bprm)
+ 	unsigned n_fs;
+ 
+ 	if (p->ptrace) {
+-		if (p->ptrace & PT_PTRACE_CAP)
++		if (ptracer_capable(p, current_user_ns()))
+ 			bprm->unsafe |= LSM_UNSAFE_PTRACE_CAP;
+ 		else
+ 			bprm->unsafe |= LSM_UNSAFE_PTRACE;
+@@ -1736,6 +1755,8 @@ static int do_execveat_common(int fd, struct filename *filename,
+ 	if (retval < 0)
+ 		goto out;
+ 
++	would_dump(bprm, bprm->file);
++
+ 	retval = exec_binprm(bprm);
+ 	if (retval < 0)
+ 		goto out;
+diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
+index b1d52c14098e..f97611171023 100644
+--- a/fs/ext4/ext4_jbd2.h
++++ b/fs/ext4/ext4_jbd2.h
+@@ -414,17 +414,19 @@ static inline int ext4_inode_journal_mode(struct inode *inode)
+ 		return EXT4_INODE_WRITEBACK_DATA_MODE;	/* writeback */
+ 	/* We do not support data journalling with delayed allocation */
+ 	if (!S_ISREG(inode->i_mode) ||
+-	    test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA)
+-		return EXT4_INODE_JOURNAL_DATA_MODE;	/* journal data */
+-	if (ext4_test_inode_flag(inode, EXT4_INODE_JOURNAL_DATA) &&
+-	    !test_opt(inode->i_sb, DELALLOC))
++	    test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA ||
++	    (ext4_test_inode_flag(inode, EXT4_INODE_JOURNAL_DATA) &&
++	    !test_opt(inode->i_sb, DELALLOC))) {
++		/* We do not support data journalling for encrypted data */
++		if (S_ISREG(inode->i_mode) && ext4_encrypted_inode(inode))
++			return EXT4_INODE_ORDERED_DATA_MODE;  /* ordered */
+ 		return EXT4_INODE_JOURNAL_DATA_MODE;	/* journal data */
++	}
+ 	if (test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_ORDERED_DATA)
+ 		return EXT4_INODE_ORDERED_DATA_MODE;	/* ordered */
+ 	if (test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_WRITEBACK_DATA)
+ 		return EXT4_INODE_WRITEBACK_DATA_MODE;	/* writeback */
+-	else
+-		BUG();
++	BUG();
+ }
+ 
+ static inline int ext4_should_journal_data(struct inode *inode)
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index f74d5ee2cdec..d8ca4b9f9dd6 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -336,8 +336,10 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
+ 
+ 	len -= EXT4_MIN_INLINE_DATA_SIZE;
+ 	value = kzalloc(len, GFP_NOFS);
+-	if (!value)
++	if (!value) {
++		error = -ENOMEM;
+ 		goto out;
++	}
+ 
+ 	error = ext4_xattr_ibody_get(inode, i.name_index, i.name,
+ 				     value, len);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index f4cdc647ecfc..27e4348efc4e 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4438,6 +4438,7 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ 	struct inode *inode;
+ 	journal_t *journal = EXT4_SB(sb)->s_journal;
+ 	long ret;
++	loff_t size;
+ 	int block;
+ 	uid_t i_uid;
+ 	gid_t i_gid;
+@@ -4538,6 +4539,11 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ 		ei->i_file_acl |=
+ 			((__u64)le16_to_cpu(raw_inode->i_file_acl_high)) << 32;
+ 	inode->i_size = ext4_isize(raw_inode);
++	if ((size = i_size_read(inode)) < 0) {
++		EXT4_ERROR_INODE(inode, "bad i_size value: %lld", size);
++		ret = -EFSCORRUPTED;
++		goto bad_inode;
++	}
+ 	ei->i_disksize = inode->i_size;
+ #ifdef CONFIG_QUOTA
+ 	ei->i_reserved_quota = 0;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index f418f55c2bbe..7ae43c59bc79 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -669,7 +669,7 @@ static void ext4_mb_mark_free_simple(struct super_block *sb,
+ 	ext4_grpblk_t min;
+ 	ext4_grpblk_t max;
+ 	ext4_grpblk_t chunk;
+-	unsigned short border;
++	unsigned int border;
+ 
+ 	BUG_ON(len > EXT4_CLUSTERS_PER_GROUP(sb));
+ 
+@@ -2287,7 +2287,7 @@ static int ext4_mb_seq_groups_show(struct seq_file *seq, void *v)
+ 	struct ext4_group_info *grinfo;
+ 	struct sg {
+ 		struct ext4_group_info info;
+-		ext4_grpblk_t counters[16];
++		ext4_grpblk_t counters[EXT4_MAX_BLOCK_LOG_SIZE + 2];
+ 	} sg;
+ 
+ 	group--;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index ec89f5005c2d..d0d4377680c7 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3161,10 +3161,15 @@ static int count_overhead(struct super_block *sb, ext4_group_t grp,
+ 			ext4_set_bit(s++, buf);
+ 			count++;
+ 		}
+-		for (j = ext4_bg_num_gdb(sb, grp); j > 0; j--) {
+-			ext4_set_bit(EXT4_B2C(sbi, s++), buf);
+-			count++;
++		j = ext4_bg_num_gdb(sb, grp);
++		if (s + j > EXT4_BLOCKS_PER_GROUP(sb)) {
++			ext4_error(sb, "Invalid number of block group "
++				   "descriptor blocks: %d", j);
++			j = EXT4_BLOCKS_PER_GROUP(sb) - s;
+ 		}
++		count += j;
++		for (; j > 0; j--)
++			ext4_set_bit(EXT4_B2C(sbi, s++), buf);
+ 	}
+ 	if (!count)
+ 		return 0;
+@@ -3254,7 +3259,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	char *orig_data = kstrdup(data, GFP_KERNEL);
+ 	struct buffer_head *bh;
+ 	struct ext4_super_block *es = NULL;
+-	struct ext4_sb_info *sbi;
++	struct ext4_sb_info *sbi = kzalloc(sizeof(*sbi), GFP_KERNEL);
+ 	ext4_fsblk_t block;
+ 	ext4_fsblk_t sb_block = get_sb_block(&data);
+ 	ext4_fsblk_t logical_sb_block;
+@@ -3273,16 +3278,14 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	unsigned int journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
+ 	ext4_group_t first_not_zeroed;
+ 
+-	sbi = kzalloc(sizeof(*sbi), GFP_KERNEL);
+-	if (!sbi)
+-		goto out_free_orig;
++	if ((data && !orig_data) || !sbi)
++		goto out_free_base;
+ 
+ 	sbi->s_blockgroup_lock =
+ 		kzalloc(sizeof(struct blockgroup_lock), GFP_KERNEL);
+-	if (!sbi->s_blockgroup_lock) {
+-		kfree(sbi);
+-		goto out_free_orig;
+-	}
++	if (!sbi->s_blockgroup_lock)
++		goto out_free_base;
++
+ 	sb->s_fs_info = sbi;
+ 	sbi->s_sb = sb;
+ 	sbi->s_inode_readahead_blks = EXT4_DEF_INODE_READAHEAD_BLKS;
+@@ -3428,11 +3431,19 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	 */
+ 	sbi->s_li_wait_mult = EXT4_DEF_LI_WAIT_MULT;
+ 
+-	if (!parse_options((char *) sbi->s_es->s_mount_opts, sb,
+-			   &journal_devnum, &journal_ioprio, 0)) {
+-		ext4_msg(sb, KERN_WARNING,
+-			 "failed to parse options in superblock: %s",
+-			 sbi->s_es->s_mount_opts);
++	if (sbi->s_es->s_mount_opts[0]) {
++		char *s_mount_opts = kstrndup(sbi->s_es->s_mount_opts,
++					      sizeof(sbi->s_es->s_mount_opts),
++					      GFP_KERNEL);
++		if (!s_mount_opts)
++			goto failed_mount;
++		if (!parse_options(s_mount_opts, sb, &journal_devnum,
++				   &journal_ioprio, 0)) {
++			ext4_msg(sb, KERN_WARNING,
++				 "failed to parse options in superblock: %s",
++				 s_mount_opts);
++		}
++		kfree(s_mount_opts);
+ 	}
+ 	sbi->s_def_mount_opt = sbi->s_mount_opt;
+ 	if (!parse_options((char *) data, sb, &journal_devnum,
+@@ -3458,6 +3469,11 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 				 "both data=journal and dax");
+ 			goto failed_mount;
+ 		}
++		if (ext4_has_feature_encrypt(sb)) {
++			ext4_msg(sb, KERN_WARNING,
++				 "encrypted files will use data=ordered "
++				 "instead of data journaling mode");
++		}
+ 		if (test_opt(sb, DELALLOC))
+ 			clear_opt(sb, DELALLOC);
+ 	} else {
+@@ -3613,12 +3629,16 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	sbi->s_blocks_per_group = le32_to_cpu(es->s_blocks_per_group);
+ 	sbi->s_inodes_per_group = le32_to_cpu(es->s_inodes_per_group);
+-	if (EXT4_INODE_SIZE(sb) == 0 || EXT4_INODES_PER_GROUP(sb) == 0)
+-		goto cantfind_ext4;
+ 
+ 	sbi->s_inodes_per_block = blocksize / EXT4_INODE_SIZE(sb);
+ 	if (sbi->s_inodes_per_block == 0)
+ 		goto cantfind_ext4;
++	if (sbi->s_inodes_per_group < sbi->s_inodes_per_block ||
++	    sbi->s_inodes_per_group > blocksize * 8) {
++		ext4_msg(sb, KERN_ERR, "invalid inodes per group: %lu\n",
++			 sbi->s_blocks_per_group);
++		goto failed_mount;
++	}
+ 	sbi->s_itb_per_group = sbi->s_inodes_per_group /
+ 					sbi->s_inodes_per_block;
+ 	sbi->s_desc_per_block = blocksize / EXT4_DESC_SIZE(sb);
+@@ -3701,13 +3721,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	}
+ 	sbi->s_cluster_ratio = clustersize / blocksize;
+ 
+-	if (sbi->s_inodes_per_group > blocksize * 8) {
+-		ext4_msg(sb, KERN_ERR,
+-		       "#inodes per group too big: %lu",
+-		       sbi->s_inodes_per_group);
+-		goto failed_mount;
+-	}
+-
+ 	/* Do we have standard group size of clustersize * 8 blocks ? */
+ 	if (sbi->s_blocks_per_group == clustersize << 3)
+ 		set_opt2(sb, STD_GROUP_SIZE);
+@@ -4113,7 +4126,9 @@ no_journal:
+ 
+ 	if (___ratelimit(&ext4_mount_msg_ratelimit, "EXT4-fs mount"))
+ 		ext4_msg(sb, KERN_INFO, "mounted filesystem with%s. "
+-			 "Opts: %s%s%s", descr, sbi->s_es->s_mount_opts,
++			 "Opts: %.*s%s%s", descr,
++			 (int) sizeof(sbi->s_es->s_mount_opts),
++			 sbi->s_es->s_mount_opts,
+ 			 *sbi->s_es->s_mount_opts ? "; " : "", orig_data);
+ 
+ 	if (es->s_error_count)
+@@ -4192,8 +4207,8 @@ failed_mount:
+ out_fail:
+ 	sb->s_fs_info = NULL;
+ 	kfree(sbi->s_blockgroup_lock);
++out_free_base:
+ 	kfree(sbi);
+-out_free_orig:
+ 	kfree(orig_data);
+ 	return err ? err : ret;
+ }
+diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
+index badd407bb622..c05d66d98871 100644
+--- a/fs/f2fs/debug.c
++++ b/fs/f2fs/debug.c
+@@ -362,6 +362,7 @@ static int stat_open(struct inode *inode, struct file *file)
+ }
+ 
+ static const struct file_operations stat_fops = {
++	.owner = THIS_MODULE,
+ 	.open = stat_open,
+ 	.read = seq_read,
+ 	.llseek = seq_lseek,
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 14f5fe2b841e..8467f4221f57 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -451,7 +451,7 @@ struct f2fs_inode_info {
+ 	/* Use below internally in f2fs*/
+ 	unsigned long flags;		/* use to pass per-file flags */
+ 	struct rw_semaphore i_sem;	/* protect fi info */
+-	struct percpu_counter dirty_pages;	/* # of dirty pages */
++	atomic_t dirty_pages;		/* # of dirty pages */
+ 	f2fs_hash_t chash;		/* hash value of given file name */
+ 	unsigned int clevel;		/* maximum level of given file name */
+ 	nid_t i_xattr_nid;		/* node id that contains xattrs */
+@@ -1198,7 +1198,7 @@ static inline void inc_page_count(struct f2fs_sb_info *sbi, int count_type)
+ 
+ static inline void inode_inc_dirty_pages(struct inode *inode)
+ {
+-	percpu_counter_inc(&F2FS_I(inode)->dirty_pages);
++	atomic_inc(&F2FS_I(inode)->dirty_pages);
+ 	inc_page_count(F2FS_I_SB(inode), S_ISDIR(inode->i_mode) ?
+ 				F2FS_DIRTY_DENTS : F2FS_DIRTY_DATA);
+ }
+@@ -1214,7 +1214,7 @@ static inline void inode_dec_dirty_pages(struct inode *inode)
+ 			!S_ISLNK(inode->i_mode))
+ 		return;
+ 
+-	percpu_counter_dec(&F2FS_I(inode)->dirty_pages);
++	atomic_dec(&F2FS_I(inode)->dirty_pages);
+ 	dec_page_count(F2FS_I_SB(inode), S_ISDIR(inode->i_mode) ?
+ 				F2FS_DIRTY_DENTS : F2FS_DIRTY_DATA);
+ }
+@@ -1224,9 +1224,9 @@ static inline s64 get_pages(struct f2fs_sb_info *sbi, int count_type)
+ 	return percpu_counter_sum_positive(&sbi->nr_pages[count_type]);
+ }
+ 
+-static inline s64 get_dirty_pages(struct inode *inode)
++static inline int get_dirty_pages(struct inode *inode)
+ {
+-	return percpu_counter_sum_positive(&F2FS_I(inode)->dirty_pages);
++	return atomic_read(&F2FS_I(inode)->dirty_pages);
+ }
+ 
+ static inline int get_blocktype_secs(struct f2fs_sb_info *sbi, int block_type)
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 28f4f4cbb8d8..06f39db65923 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -970,7 +970,7 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
+ 				new_size = (dst + i) << PAGE_SHIFT;
+ 				if (dst_inode->i_size < new_size)
+ 					f2fs_i_size_write(dst_inode, new_size);
+-			} while ((do_replace[i] || blkaddr[i] == NULL_ADDR) && --ilen);
++			} while (--ilen && (do_replace[i] || blkaddr[i] == NULL_ADDR));
+ 
+ 			f2fs_put_dnode(&dn);
+ 		} else {
+@@ -1529,7 +1529,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ 		goto out;
+ 
+ 	f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
+-		"Unexpected flush for atomic writes: ino=%lu, npages=%lld",
++		"Unexpected flush for atomic writes: ino=%lu, npages=%u",
+ 					inode->i_ino, get_dirty_pages(inode));
+ 	ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
+ 	if (ret)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 7f863a645ab1..37edc853c66c 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -565,13 +565,9 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
+ 
+ 	init_once((void *) fi);
+ 
+-	if (percpu_counter_init(&fi->dirty_pages, 0, GFP_NOFS)) {
+-		kmem_cache_free(f2fs_inode_cachep, fi);
+-		return NULL;
+-	}
+-
+ 	/* Initialize f2fs-specific inode info */
+ 	fi->vfs_inode.i_version = 1;
++	atomic_set(&fi->dirty_pages, 0);
+ 	fi->i_current_depth = 1;
+ 	fi->i_advise = 0;
+ 	init_rwsem(&fi->i_sem);
+@@ -694,7 +690,6 @@ static void f2fs_i_callback(struct rcu_head *head)
+ 
+ static void f2fs_destroy_inode(struct inode *inode)
+ {
+-	percpu_counter_destroy(&F2FS_I(inode)->dirty_pages);
+ 	call_rcu(&inode->i_rcu, f2fs_i_callback);
+ }
+ 
+diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
+index e8638fd2c0c3..6f07b3e89ff2 100644
+--- a/fs/xfs/xfs_log_recover.c
++++ b/fs/xfs/xfs_log_recover.c
+@@ -4506,6 +4506,7 @@ xlog_recover_clear_agi_bucket(
+ 	agi->agi_unlinked[bucket] = cpu_to_be32(NULLAGINO);
+ 	offset = offsetof(xfs_agi_t, agi_unlinked) +
+ 		 (sizeof(xfs_agino_t) * bucket);
++	xfs_trans_buf_set_type(tp, agibp, XFS_BLFT_AGI_BUF);
+ 	xfs_trans_log_buf(tp, agibp, offset,
+ 			  (offset + sizeof(xfs_agino_t) - 1));
+ 
+diff --git a/include/linux/capability.h b/include/linux/capability.h
+index dbc21c719ce6..6ffb67e10c06 100644
+--- a/include/linux/capability.h
++++ b/include/linux/capability.h
+@@ -240,8 +240,10 @@ static inline bool ns_capable_noaudit(struct user_namespace *ns, int cap)
+ 	return true;
+ }
+ #endif /* CONFIG_MULTIUSER */
++extern bool privileged_wrt_inode_uidgid(struct user_namespace *ns, const struct inode *inode);
+ extern bool capable_wrt_inode_uidgid(const struct inode *inode, int cap);
+ extern bool file_ns_capable(const struct file *file, struct user_namespace *ns, int cap);
++extern bool ptracer_capable(struct task_struct *tsk, struct user_namespace *ns);
+ 
+ /* audit system wants to get cap info from files as well */
+ extern int get_vfs_caps_from_disk(const struct dentry *dentry, struct cpu_vfs_cap_data *cpu_caps);
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 903200f4ec41..39820082ce91 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -473,6 +473,7 @@ struct mm_struct {
+ 	 */
+ 	struct task_struct __rcu *owner;
+ #endif
++	struct user_namespace *user_ns;
+ 
+ 	/* store ref to file /proc/<pid>/exe symlink points to */
+ 	struct file __rcu *exe_file;
+diff --git a/include/linux/pm_opp.h b/include/linux/pm_opp.h
+index bca26157f5b6..f6bc76501912 100644
+--- a/include/linux/pm_opp.h
++++ b/include/linux/pm_opp.h
+@@ -19,6 +19,7 @@
+ 
+ struct dev_pm_opp;
+ struct device;
++struct opp_table;
+ 
+ enum dev_pm_opp_event {
+ 	OPP_EVENT_ADD, OPP_EVENT_REMOVE, OPP_EVENT_ENABLE, OPP_EVENT_DISABLE,
+@@ -62,8 +63,8 @@ int dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions,
+ void dev_pm_opp_put_supported_hw(struct device *dev);
+ int dev_pm_opp_set_prop_name(struct device *dev, const char *name);
+ void dev_pm_opp_put_prop_name(struct device *dev);
+-int dev_pm_opp_set_regulator(struct device *dev, const char *name);
+-void dev_pm_opp_put_regulator(struct device *dev);
++struct opp_table *dev_pm_opp_set_regulator(struct device *dev, const char *name);
++void dev_pm_opp_put_regulator(struct opp_table *opp_table);
+ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
+ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask);
+ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
+@@ -170,12 +171,12 @@ static inline int dev_pm_opp_set_prop_name(struct device *dev, const char *name)
+ 
+ static inline void dev_pm_opp_put_prop_name(struct device *dev) {}
+ 
+-static inline int dev_pm_opp_set_regulator(struct device *dev, const char *name)
++static inline struct opp_table *dev_pm_opp_set_regulator(struct device *dev, const char *name)
+ {
+-	return -ENOTSUPP;
++	return ERR_PTR(-ENOTSUPP);
+ }
+ 
+-static inline void dev_pm_opp_put_regulator(struct device *dev) {}
++static inline void dev_pm_opp_put_regulator(struct opp_table *opp_table) {}
+ 
+ static inline int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+ {
+diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
+index 504c98a278d4..e13bfdf7f314 100644
+--- a/include/linux/ptrace.h
++++ b/include/linux/ptrace.h
+@@ -19,7 +19,6 @@
+ #define PT_SEIZED	0x00010000	/* SEIZE used, enable new behavior */
+ #define PT_PTRACED	0x00000001
+ #define PT_DTRACE	0x00000002	/* delayed trace (used on m68k, i386) */
+-#define PT_PTRACE_CAP	0x00000004	/* ptracer can follow suid-exec */
+ 
+ #define PT_OPT_FLAG_SHIFT	3
+ /* PT_TRACE_* event enable flags */
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 62c68e513e39..f52d4cc97788 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1631,6 +1631,7 @@ struct task_struct {
+ 	struct list_head cpu_timers[3];
+ 
+ /* process credentials */
++	const struct cred __rcu *ptracer_cred; /* Tracer's credentials at attach */
+ 	const struct cred __rcu *real_cred; /* objective and real subjective task
+ 					 * credentials (COW) */
+ 	const struct cred __rcu *cred;	/* effective (overridable) subjective task
+diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
+index 445b019c2078..de45666f96b4 100644
+--- a/include/net/netfilter/nf_conntrack.h
++++ b/include/net/netfilter/nf_conntrack.h
+@@ -17,7 +17,6 @@
+ #include <linux/bitops.h>
+ #include <linux/compiler.h>
+ #include <linux/atomic.h>
+-#include <linux/rhashtable.h>
+ 
+ #include <linux/netfilter/nf_conntrack_tcp.h>
+ #include <linux/netfilter/nf_conntrack_dccp.h>
+@@ -118,9 +117,6 @@ struct nf_conn {
+ 	/* Extensions */
+ 	struct nf_ct_ext *ext;
+ 
+-#if IS_ENABLED(CONFIG_NF_NAT)
+-	struct rhash_head	nat_bysource;
+-#endif
+ 	/* Storage reserved for other modules, must be the last member */
+ 	union nf_conntrack_proto proto;
+ };
+diff --git a/include/net/netfilter/nf_conntrack_extend.h b/include/net/netfilter/nf_conntrack_extend.h
+index 1c3035dda31f..b925395fa5ed 100644
+--- a/include/net/netfilter/nf_conntrack_extend.h
++++ b/include/net/netfilter/nf_conntrack_extend.h
+@@ -99,6 +99,9 @@ void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id,
+ struct nf_ct_ext_type {
+ 	/* Destroys relationships (can be NULL). */
+ 	void (*destroy)(struct nf_conn *ct);
++	/* Called when realloacted (can be NULL).
++	   Contents has already been moved. */
++	void (*move)(void *new, void *old);
+ 
+ 	enum nf_ct_ext_id id;
+ 
+diff --git a/include/net/netfilter/nf_nat.h b/include/net/netfilter/nf_nat.h
+index c327a431a6f3..344b1ab19220 100644
+--- a/include/net/netfilter/nf_nat.h
++++ b/include/net/netfilter/nf_nat.h
+@@ -1,6 +1,5 @@
+ #ifndef _NF_NAT_H
+ #define _NF_NAT_H
+-#include <linux/rhashtable.h>
+ #include <linux/netfilter_ipv4.h>
+ #include <linux/netfilter/nf_nat.h>
+ #include <net/netfilter/nf_conntrack_tuple.h>
+@@ -30,6 +29,8 @@ struct nf_conn;
+ 
+ /* The structure embedded in the conntrack structure. */
+ struct nf_conn_nat {
++	struct hlist_node bysource;
++	struct nf_conn *ct;
+ 	union nf_conntrack_nat_help help;
+ #if IS_ENABLED(CONFIG_NF_NAT_MASQUERADE_IPV4) || \
+     IS_ENABLED(CONFIG_NF_NAT_MASQUERADE_IPV6)
+diff --git a/kernel/capability.c b/kernel/capability.c
+index 00411c82dac5..4984e1f552eb 100644
+--- a/kernel/capability.c
++++ b/kernel/capability.c
+@@ -457,6 +457,19 @@ bool file_ns_capable(const struct file *file, struct user_namespace *ns,
+ EXPORT_SYMBOL(file_ns_capable);
+ 
+ /**
++ * privileged_wrt_inode_uidgid - Do capabilities in the namespace work over the inode?
++ * @ns: The user namespace in question
++ * @inode: The inode in question
++ *
++ * Return true if the inode uid and gid are within the namespace.
++ */
++bool privileged_wrt_inode_uidgid(struct user_namespace *ns, const struct inode *inode)
++{
++	return kuid_has_mapping(ns, inode->i_uid) &&
++		kgid_has_mapping(ns, inode->i_gid);
++}
++
++/**
+  * capable_wrt_inode_uidgid - Check nsown_capable and uid and gid mapped
+  * @inode: The inode in question
+  * @cap: The capability in question
+@@ -469,7 +482,26 @@ bool capable_wrt_inode_uidgid(const struct inode *inode, int cap)
+ {
+ 	struct user_namespace *ns = current_user_ns();
+ 
+-	return ns_capable(ns, cap) && kuid_has_mapping(ns, inode->i_uid) &&
+-		kgid_has_mapping(ns, inode->i_gid);
++	return ns_capable(ns, cap) && privileged_wrt_inode_uidgid(ns, inode);
+ }
+ EXPORT_SYMBOL(capable_wrt_inode_uidgid);
++
++/**
++ * ptracer_capable - Determine if the ptracer holds CAP_SYS_PTRACE in the namespace
++ * @tsk: The task that may be ptraced
++ * @ns: The user namespace to search for CAP_SYS_PTRACE in
++ *
++ * Return true if the task that is ptracing the current task had CAP_SYS_PTRACE
++ * in the specified user namespace.
++ */
++bool ptracer_capable(struct task_struct *tsk, struct user_namespace *ns)
++{
++	int ret = 0;  /* An absent tracer adds no restrictions */
++	const struct cred *cred;
++	rcu_read_lock();
++	cred = rcu_dereference(tsk->ptracer_cred);
++	if (cred)
++		ret = security_capable_noaudit(cred, ns, CAP_SYS_PTRACE);
++	rcu_read_unlock();
++	return (ret == 0);
++}
+diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
+index 0874e2edd275..79517e5549f1 100644
+--- a/kernel/debug/debug_core.c
++++ b/kernel/debug/debug_core.c
+@@ -598,11 +598,11 @@ return_normal:
+ 	/*
+ 	 * Wait for the other CPUs to be notified and be waiting for us:
+ 	 */
+-	time_left = loops_per_jiffy * HZ;
++	time_left = MSEC_PER_SEC;
+ 	while (kgdb_do_roundup && --time_left &&
+ 	       (atomic_read(&masters_in_kgdb) + atomic_read(&slaves_in_kgdb)) !=
+ 		   online_cpus)
+-		cpu_relax();
++		udelay(1000);
+ 	if (!time_left)
+ 		pr_crit("Timed out waiting for secondary CPUs.\n");
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index beb31725f7e2..9f8dae785f07 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -598,7 +598,8 @@ static void mm_init_owner(struct mm_struct *mm, struct task_struct *p)
+ #endif
+ }
+ 
+-static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p)
++static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
++	struct user_namespace *user_ns)
+ {
+ 	mm->mmap = NULL;
+ 	mm->mm_rb = RB_ROOT;
+@@ -638,6 +639,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p)
+ 	if (init_new_context(p, mm))
+ 		goto fail_nocontext;
+ 
++	mm->user_ns = get_user_ns(user_ns);
+ 	return mm;
+ 
+ fail_nocontext:
+@@ -683,7 +685,7 @@ struct mm_struct *mm_alloc(void)
+ 		return NULL;
+ 
+ 	memset(mm, 0, sizeof(*mm));
+-	return mm_init(mm, current);
++	return mm_init(mm, current, current_user_ns());
+ }
+ 
+ /*
+@@ -698,6 +700,7 @@ void __mmdrop(struct mm_struct *mm)
+ 	destroy_context(mm);
+ 	mmu_notifier_mm_destroy(mm);
+ 	check_mm(mm);
++	put_user_ns(mm->user_ns);
+ 	free_mm(mm);
+ }
+ EXPORT_SYMBOL_GPL(__mmdrop);
+@@ -977,7 +980,7 @@ static struct mm_struct *dup_mm(struct task_struct *tsk)
+ 
+ 	memcpy(mm, oldmm, sizeof(*mm));
+ 
+-	if (!mm_init(mm, tsk))
++	if (!mm_init(mm, tsk, mm->user_ns))
+ 		goto fail_nomem;
+ 
+ 	err = dup_mmap(mm, oldmm);
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index 1d3b7665d0be..7b20baea41e7 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -39,6 +39,9 @@ void __ptrace_link(struct task_struct *child, struct task_struct *new_parent)
+ 	BUG_ON(!list_empty(&child->ptrace_entry));
+ 	list_add(&child->ptrace_entry, &new_parent->ptraced);
+ 	child->parent = new_parent;
++	rcu_read_lock();
++	child->ptracer_cred = get_cred(__task_cred(new_parent));
++	rcu_read_unlock();
+ }
+ 
+ /**
+@@ -71,10 +74,14 @@ void __ptrace_link(struct task_struct *child, struct task_struct *new_parent)
+  */
+ void __ptrace_unlink(struct task_struct *child)
+ {
++	const struct cred *old_cred;
+ 	BUG_ON(!child->ptrace);
+ 
+ 	child->parent = child->real_parent;
+ 	list_del_init(&child->ptrace_entry);
++	old_cred = child->ptracer_cred;
++	child->ptracer_cred = NULL;
++	put_cred(old_cred);
+ 
+ 	spin_lock(&child->sighand->siglock);
+ 	child->ptrace = 0;
+@@ -218,7 +225,7 @@ static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode)
+ static int __ptrace_may_access(struct task_struct *task, unsigned int mode)
+ {
+ 	const struct cred *cred = current_cred(), *tcred;
+-	int dumpable = 0;
++	struct mm_struct *mm;
+ 	kuid_t caller_uid;
+ 	kgid_t caller_gid;
+ 
+@@ -269,16 +276,11 @@ static int __ptrace_may_access(struct task_struct *task, unsigned int mode)
+ 	return -EPERM;
+ ok:
+ 	rcu_read_unlock();
+-	smp_rmb();
+-	if (task->mm)
+-		dumpable = get_dumpable(task->mm);
+-	rcu_read_lock();
+-	if (dumpable != SUID_DUMP_USER &&
+-	    !ptrace_has_cap(__task_cred(task)->user_ns, mode)) {
+-		rcu_read_unlock();
+-		return -EPERM;
+-	}
+-	rcu_read_unlock();
++	mm = task->mm;
++	if (mm &&
++	    ((get_dumpable(mm) != SUID_DUMP_USER) &&
++	     !ptrace_has_cap(mm->user_ns, mode)))
++	    return -EPERM;
+ 
+ 	return security_ptrace_access_check(task, mode);
+ }
+@@ -342,10 +344,6 @@ static int ptrace_attach(struct task_struct *task, long request,
+ 
+ 	if (seize)
+ 		flags |= PT_SEIZED;
+-	rcu_read_lock();
+-	if (ns_capable(__task_cred(task)->user_ns, CAP_SYS_PTRACE))
+-		flags |= PT_PTRACE_CAP;
+-	rcu_read_unlock();
+ 	task->ptrace = flags;
+ 
+ 	__ptrace_link(task, current);
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 9acb29f280ec..6d1020c03d41 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -344,7 +344,6 @@ static void watchdog_overflow_callback(struct perf_event *event,
+ 	 */
+ 	if (is_hardlockup()) {
+ 		int this_cpu = smp_processor_id();
+-		struct pt_regs *regs = get_irq_regs();
+ 
+ 		/* only print hardlockups once */
+ 		if (__this_cpu_read(hard_watchdog_warn) == true)
+diff --git a/mm/filemap.c b/mm/filemap.c
+index ced9ef6c06b0..f1da48d1f95f 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -1688,7 +1688,7 @@ static ssize_t do_generic_file_read(struct file *filp, loff_t *ppos,
+ 	int error = 0;
+ 
+ 	if (unlikely(*ppos >= inode->i_sb->s_maxbytes))
+-		return -EINVAL;
++		return 0;
+ 	iov_iter_truncate(iter, inode->i_sb->s_maxbytes);
+ 
+ 	index = *ppos >> PAGE_SHIFT;
+diff --git a/mm/init-mm.c b/mm/init-mm.c
+index a56a851908d2..975e49f00f34 100644
+--- a/mm/init-mm.c
++++ b/mm/init-mm.c
+@@ -6,6 +6,7 @@
+ #include <linux/cpumask.h>
+ 
+ #include <linux/atomic.h>
++#include <linux/user_namespace.h>
+ #include <asm/pgtable.h>
+ #include <asm/mmu.h>
+ 
+@@ -21,5 +22,6 @@ struct mm_struct init_mm = {
+ 	.mmap_sem	= __RWSEM_INITIALIZER(init_mm.mmap_sem),
+ 	.page_table_lock =  __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
+ 	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
++	.user_ns	= &init_user_ns,
+ 	INIT_MM_CONTEXT(init_mm)
+ };
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 7401e996009a..212a017a2f02 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -2173,7 +2173,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
+ 			unsigned long count, struct list_head *list,
+ 			int migratetype, bool cold)
+ {
+-	int i;
++	int i, alloced = 0;
+ 
+ 	spin_lock(&zone->lock);
+ 	for (i = 0; i < count; ++i) {
+@@ -2198,13 +2198,21 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
+ 		else
+ 			list_add_tail(&page->lru, list);
+ 		list = &page->lru;
++		alloced++;
+ 		if (is_migrate_cma(get_pcppage_migratetype(page)))
+ 			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
+ 					      -(1 << order));
+ 	}
++
++	/*
++	 * i pages were removed from the buddy list even if some leak due
++	 * to check_pcp_refill failing so adjust NR_FREE_PAGES based
++	 * on i. Do not confuse with 'alloced' which is the number of
++	 * pages added to the pcp list.
++	 */
+ 	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
+ 	spin_unlock(&zone->lock);
+-	return i;
++	return alloced;
+ }
+ 
+ #ifdef CONFIG_NUMA
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index ba0fad78e5d4..5c313a0d1626 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -291,6 +291,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 	int nid = shrinkctl->nid;
+ 	long batch_size = shrinker->batch ? shrinker->batch
+ 					  : SHRINK_BATCH;
++	long scanned = 0, next_deferred;
+ 
+ 	freeable = shrinker->count_objects(shrinker, shrinkctl);
+ 	if (freeable == 0)
+@@ -312,7 +313,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
+ 		       shrinker->scan_objects, total_scan);
+ 		total_scan = freeable;
+-	}
++		next_deferred = nr;
++	} else
++		next_deferred = total_scan;
+ 
+ 	/*
+ 	 * We need to avoid excessive windup on filesystem shrinkers
+@@ -369,17 +372,22 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 
+ 		count_vm_events(SLABS_SCANNED, nr_to_scan);
+ 		total_scan -= nr_to_scan;
++		scanned += nr_to_scan;
+ 
+ 		cond_resched();
+ 	}
+ 
++	if (next_deferred >= scanned)
++		next_deferred -= scanned;
++	else
++		next_deferred = 0;
+ 	/*
+ 	 * move the unused scan count back into the shrinker in a
+ 	 * manner that handles concurrent updates. If we exhausted the
+ 	 * scan, there is no need to do an update.
+ 	 */
+-	if (total_scan > 0)
+-		new_nr = atomic_long_add_return(total_scan,
++	if (next_deferred > 0)
++		new_nr = atomic_long_add_return(next_deferred,
+ 						&shrinker->nr_deferred[nid]);
+ 	else
+ 		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
+diff --git a/net/netfilter/nf_conntrack_extend.c b/net/netfilter/nf_conntrack_extend.c
+index 02bcf00c2492..1a9545965c0d 100644
+--- a/net/netfilter/nf_conntrack_extend.c
++++ b/net/netfilter/nf_conntrack_extend.c
+@@ -73,7 +73,7 @@ void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id,
+ 			     size_t var_alloc_len, gfp_t gfp)
+ {
+ 	struct nf_ct_ext *old, *new;
+-	int newlen, newoff;
++	int i, newlen, newoff;
+ 	struct nf_ct_ext_type *t;
+ 
+ 	/* Conntrack must not be confirmed to avoid races on reallocation. */
+@@ -99,8 +99,19 @@ void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id,
+ 		return NULL;
+ 
+ 	if (new != old) {
++		for (i = 0; i < NF_CT_EXT_NUM; i++) {
++			if (!__nf_ct_ext_exist(old, i))
++				continue;
++
++			rcu_read_lock();
++			t = rcu_dereference(nf_ct_ext_types[i]);
++			if (t && t->move)
++				t->move((void *)new + new->offset[i],
++					(void *)old + old->offset[i]);
++			rcu_read_unlock();
++		}
+ 		kfree_rcu(old, rcu);
+-		rcu_assign_pointer(ct->ext, new);
++		ct->ext = new;
+ 	}
+ 
+ 	new->offset[id] = newoff;
+diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
+index ecee105bbada..4bf9b50cbb4c 100644
+--- a/net/netfilter/nf_nat_core.c
++++ b/net/netfilter/nf_nat_core.c
+@@ -30,19 +30,17 @@
+ #include <net/netfilter/nf_conntrack_zones.h>
+ #include <linux/netfilter/nf_nat.h>
+ 
++static DEFINE_SPINLOCK(nf_nat_lock);
++
+ static DEFINE_MUTEX(nf_nat_proto_mutex);
+ static const struct nf_nat_l3proto __rcu *nf_nat_l3protos[NFPROTO_NUMPROTO]
+ 						__read_mostly;
+ static const struct nf_nat_l4proto __rcu **nf_nat_l4protos[NFPROTO_NUMPROTO]
+ 						__read_mostly;
+ 
+-struct nf_nat_conn_key {
+-	const struct net *net;
+-	const struct nf_conntrack_tuple *tuple;
+-	const struct nf_conntrack_zone *zone;
+-};
+-
+-static struct rhashtable nf_nat_bysource_table;
++static struct hlist_head *nf_nat_bysource __read_mostly;
++static unsigned int nf_nat_htable_size __read_mostly;
++static unsigned int nf_nat_hash_rnd __read_mostly;
+ 
+ inline const struct nf_nat_l3proto *
+ __nf_nat_l3proto_find(u8 family)
+@@ -121,17 +119,19 @@ int nf_xfrm_me_harder(struct net *net, struct sk_buff *skb, unsigned int family)
+ EXPORT_SYMBOL(nf_xfrm_me_harder);
+ #endif /* CONFIG_XFRM */
+ 
+-static u32 nf_nat_bysource_hash(const void *data, u32 len, u32 seed)
++/* We keep an extra hash for each conntrack, for fast searching. */
++static inline unsigned int
++hash_by_src(const struct net *n, const struct nf_conntrack_tuple *tuple)
+ {
+-	const struct nf_conntrack_tuple *t;
+-	const struct nf_conn *ct = data;
++	unsigned int hash;
++
++	get_random_once(&nf_nat_hash_rnd, sizeof(nf_nat_hash_rnd));
+ 
+-	t = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
+ 	/* Original src, to ensure we map it consistently if poss. */
++	hash = jhash2((u32 *)&tuple->src, sizeof(tuple->src) / sizeof(u32),
++		      tuple->dst.protonum ^ nf_nat_hash_rnd ^ net_hash_mix(n));
+ 
+-	seed ^= net_hash_mix(nf_ct_net(ct));
+-	return jhash2((const u32 *)&t->src, sizeof(t->src) / sizeof(u32),
+-		      t->dst.protonum ^ seed);
++	return reciprocal_scale(hash, nf_nat_htable_size);
+ }
+ 
+ /* Is this tuple already taken? (not by us) */
+@@ -187,26 +187,6 @@ same_src(const struct nf_conn *ct,
+ 		t->src.u.all == tuple->src.u.all);
+ }
+ 
+-static int nf_nat_bysource_cmp(struct rhashtable_compare_arg *arg,
+-			       const void *obj)
+-{
+-	const struct nf_nat_conn_key *key = arg->key;
+-	const struct nf_conn *ct = obj;
+-
+-	return same_src(ct, key->tuple) &&
+-	       net_eq(nf_ct_net(ct), key->net) &&
+-	       nf_ct_zone_equal(ct, key->zone, IP_CT_DIR_ORIGINAL);
+-}
+-
+-static struct rhashtable_params nf_nat_bysource_params = {
+-	.head_offset = offsetof(struct nf_conn, nat_bysource),
+-	.obj_hashfn = nf_nat_bysource_hash,
+-	.obj_cmpfn = nf_nat_bysource_cmp,
+-	.nelem_hint = 256,
+-	.min_size = 1024,
+-	.nulls_base = (1U << RHT_BASE_SHIFT),
+-};
+-
+ /* Only called for SRC manip */
+ static int
+ find_appropriate_src(struct net *net,
+@@ -217,23 +197,25 @@ find_appropriate_src(struct net *net,
+ 		     struct nf_conntrack_tuple *result,
+ 		     const struct nf_nat_range *range)
+ {
++	unsigned int h = hash_by_src(net, tuple);
++	const struct nf_conn_nat *nat;
+ 	const struct nf_conn *ct;
+-	struct nf_nat_conn_key key = {
+-		.net = net,
+-		.tuple = tuple,
+-		.zone = zone
+-	};
+ 
+-	ct = rhashtable_lookup_fast(&nf_nat_bysource_table, &key,
+-				    nf_nat_bysource_params);
+-	if (!ct)
+-		return 0;
+-
+-	nf_ct_invert_tuplepr(result,
+-			     &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
+-	result->dst = tuple->dst;
+-
+-	return in_range(l3proto, l4proto, result, range);
++	hlist_for_each_entry_rcu(nat, &nf_nat_bysource[h], bysource) {
++		ct = nat->ct;
++		if (same_src(ct, tuple) &&
++		    net_eq(net, nf_ct_net(ct)) &&
++		    nf_ct_zone_equal(ct, zone, IP_CT_DIR_ORIGINAL)) {
++			/* Copy source part from reply tuple. */
++			nf_ct_invert_tuplepr(result,
++				       &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
++			result->dst = tuple->dst;
++
++			if (in_range(l3proto, l4proto, result, range))
++				return 1;
++		}
++	}
++	return 0;
+ }
+ 
+ /* For [FUTURE] fragmentation handling, we want the least-used
+@@ -405,6 +387,7 @@ nf_nat_setup_info(struct nf_conn *ct,
+ 		  const struct nf_nat_range *range,
+ 		  enum nf_nat_manip_type maniptype)
+ {
++	struct net *net = nf_ct_net(ct);
+ 	struct nf_conntrack_tuple curr_tuple, new_tuple;
+ 	struct nf_conn_nat *nat;
+ 
+@@ -446,13 +429,17 @@ nf_nat_setup_info(struct nf_conn *ct,
+ 	}
+ 
+ 	if (maniptype == NF_NAT_MANIP_SRC) {
+-		int err;
+-
+-		err = rhashtable_insert_fast(&nf_nat_bysource_table,
+-					     &ct->nat_bysource,
+-					     nf_nat_bysource_params);
+-		if (err)
+-			return NF_DROP;
++		unsigned int srchash;
++
++		srchash = hash_by_src(net,
++				      &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
++		spin_lock_bh(&nf_nat_lock);
++		/* nf_conntrack_alter_reply might re-allocate extension aera */
++		nat = nfct_nat(ct);
++		nat->ct = ct;
++		hlist_add_head_rcu(&nat->bysource,
++				   &nf_nat_bysource[srchash]);
++		spin_unlock_bh(&nf_nat_lock);
+ 	}
+ 
+ 	/* It's done. */
+@@ -557,7 +544,7 @@ static int nf_nat_proto_clean(struct nf_conn *ct, void *data)
+ 	if (nf_nat_proto_remove(ct, data))
+ 		return 1;
+ 
+-	if (!nat)
++	if (!nat || !nat->ct)
+ 		return 0;
+ 
+ 	/* This netns is being destroyed, and conntrack has nat null binding.
+@@ -569,10 +556,11 @@ static int nf_nat_proto_clean(struct nf_conn *ct, void *data)
+ 	if (!del_timer(&ct->timeout))
+ 		return 1;
+ 
++	spin_lock_bh(&nf_nat_lock);
++	hlist_del_rcu(&nat->bysource);
+ 	ct->status &= ~IPS_NAT_DONE_MASK;
+-
+-	rhashtable_remove_fast(&nf_nat_bysource_table, &ct->nat_bysource,
+-			       nf_nat_bysource_params);
++	nat->ct = NULL;
++	spin_unlock_bh(&nf_nat_lock);
+ 
+ 	add_timer(&ct->timeout);
+ 
+@@ -701,17 +689,35 @@ static void nf_nat_cleanup_conntrack(struct nf_conn *ct)
+ {
+ 	struct nf_conn_nat *nat = nf_ct_ext_find(ct, NF_CT_EXT_NAT);
+ 
+-	if (!nat)
++	if (nat == NULL || nat->ct == NULL)
+ 		return;
+ 
+-	rhashtable_remove_fast(&nf_nat_bysource_table, &ct->nat_bysource,
+-			       nf_nat_bysource_params);
++	NF_CT_ASSERT(nat->ct->status & IPS_SRC_NAT_DONE);
++
++	spin_lock_bh(&nf_nat_lock);
++	hlist_del_rcu(&nat->bysource);
++	spin_unlock_bh(&nf_nat_lock);
++}
++
++static void nf_nat_move_storage(void *new, void *old)
++{
++	struct nf_conn_nat *new_nat = new;
++	struct nf_conn_nat *old_nat = old;
++	struct nf_conn *ct = old_nat->ct;
++
++	if (!ct || !(ct->status & IPS_SRC_NAT_DONE))
++		return;
++
++	spin_lock_bh(&nf_nat_lock);
++	hlist_replace_rcu(&old_nat->bysource, &new_nat->bysource);
++	spin_unlock_bh(&nf_nat_lock);
+ }
+ 
+ static struct nf_ct_ext_type nat_extend __read_mostly = {
+ 	.len		= sizeof(struct nf_conn_nat),
+ 	.align		= __alignof__(struct nf_conn_nat),
+ 	.destroy	= nf_nat_cleanup_conntrack,
++	.move		= nf_nat_move_storage,
+ 	.id		= NF_CT_EXT_NAT,
+ 	.flags		= NF_CT_EXT_F_PREALLOC,
+ };
+@@ -840,13 +846,16 @@ static int __init nf_nat_init(void)
+ {
+ 	int ret;
+ 
+-	ret = rhashtable_init(&nf_nat_bysource_table, &nf_nat_bysource_params);
+-	if (ret)
+-		return ret;
++	/* Leave them the same for the moment. */
++	nf_nat_htable_size = nf_conntrack_htable_size;
++
++	nf_nat_bysource = nf_ct_alloc_hashtable(&nf_nat_htable_size, 0);
++	if (!nf_nat_bysource)
++		return -ENOMEM;
+ 
+ 	ret = nf_ct_extend_register(&nat_extend);
+ 	if (ret < 0) {
+-		rhashtable_destroy(&nf_nat_bysource_table);
++		nf_ct_free_hashtable(nf_nat_bysource, nf_nat_htable_size);
+ 		printk(KERN_ERR "nf_nat_core: Unable to register extension\n");
+ 		return ret;
+ 	}
+@@ -870,7 +879,7 @@ static int __init nf_nat_init(void)
+ 	return 0;
+ 
+  cleanup_extend:
+-	rhashtable_destroy(&nf_nat_bysource_table);
++	nf_ct_free_hashtable(nf_nat_bysource, nf_nat_htable_size);
+ 	nf_ct_extend_unregister(&nat_extend);
+ 	return ret;
+ }
+@@ -888,8 +897,8 @@ static void __exit nf_nat_cleanup(void)
+ #endif
+ 	for (i = 0; i < NFPROTO_NUMPROTO; i++)
+ 		kfree(nf_nat_l4protos[i]);
+-
+-	rhashtable_destroy(&nf_nat_bysource_table);
++	synchronize_net();
++	nf_ct_free_hashtable(nf_nat_bysource, nf_nat_htable_size);
+ }
+ 
+ MODULE_LICENSE("GPL");
+diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
+index 7f57a145a47e..a03cf68d0bcd 100644
+--- a/sound/pci/hda/hda_auto_parser.c
++++ b/sound/pci/hda/hda_auto_parser.c
+@@ -884,6 +884,8 @@ void snd_hda_apply_fixup(struct hda_codec *codec, int action)
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_apply_fixup);
+ 
++#define IGNORE_SEQ_ASSOC (~(AC_DEFCFG_SEQUENCE | AC_DEFCFG_DEF_ASSOC))
++
+ static bool pin_config_match(struct hda_codec *codec,
+ 			     const struct hda_pintbl *pins)
+ {
+@@ -901,7 +903,7 @@ static bool pin_config_match(struct hda_codec *codec,
+ 		for (; t_pins->nid; t_pins++) {
+ 			if (t_pins->nid == nid) {
+ 				found = 1;
+-				if (t_pins->val == cfg)
++				if ((t_pins->val & IGNORE_SEQ_ASSOC) == (cfg & IGNORE_SEQ_ASSOC))
+ 					break;
+ 				else if ((cfg & 0xf0000000) == 0x40000000 && (t_pins->val & 0xf0000000) == 0x40000000)
+ 					break;
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 9ceb2bc36e68..c146d0de53d8 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -780,6 +780,7 @@ static const struct hda_pintbl alienware_pincfgs[] = {
+ static const struct snd_pci_quirk ca0132_quirks[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0685, "Alienware 15 2015", QUIRK_ALIENWARE),
+ 	SND_PCI_QUIRK(0x1028, 0x0688, "Alienware 17 2015", QUIRK_ALIENWARE),
++	SND_PCI_QUIRK(0x1028, 0x0708, "Alienware 15 R2 2016", QUIRK_ALIENWARE),
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index ed62748a6d55..c15c51bea26d 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -262,6 +262,7 @@ enum {
+ 	CXT_FIXUP_CAP_MIX_AMP_5047,
+ 	CXT_FIXUP_MUTE_LED_EAPD,
+ 	CXT_FIXUP_HP_SPECTRE,
++	CXT_FIXUP_HP_GATE_MIC,
+ };
+ 
+ /* for hda_fixup_thinkpad_acpi() */
+@@ -633,6 +634,17 @@ static void cxt_fixup_cap_mix_amp_5047(struct hda_codec *codec,
+ 				  (1 << AC_AMPCAP_MUTE_SHIFT));
+ }
+ 
++static void cxt_fixup_hp_gate_mic_jack(struct hda_codec *codec,
++				       const struct hda_fixup *fix,
++				       int action)
++{
++	/* the mic pin (0x19) doesn't give an unsolicited event;
++	 * probe the mic pin together with the headphone pin (0x16)
++	 */
++	if (action == HDA_FIXUP_ACT_PROBE)
++		snd_hda_jack_set_gating_jack(codec, 0x19, 0x16);
++}
++
+ /* ThinkPad X200 & co with cxt5051 */
+ static const struct hda_pintbl cxt_pincfg_lenovo_x200[] = {
+ 	{ 0x16, 0x042140ff }, /* HP (seq# overridden) */
+@@ -774,6 +786,10 @@ static const struct hda_fixup cxt_fixups[] = {
+ 			{ }
+ 		}
+ 	},
++	[CXT_FIXUP_HP_GATE_MIC] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cxt_fixup_hp_gate_mic_jack,
++	},
+ };
+ 
+ static const struct snd_pci_quirk cxt5045_fixups[] = {
+@@ -824,6 +840,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x054c, "Acer Aspire 3830TG", CXT_FIXUP_ASPIRE_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x054f, "Acer Aspire 4830T", CXT_FIXUP_ASPIRE_DMIC),
+ 	SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
++	SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
+ 	SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 162818058a5d..512fd83407f2 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5915,6 +5915,9 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x12, 0x90a60180},
+ 		{0x14, 0x90170120},
+ 		{0x21, 0x02211030}),
++	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		{0x1b, 0x01011020},
++		{0x21, 0x02211010}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		{0x12, 0x90a60160},
+ 		{0x14, 0x90170120},
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index 52ed434cbca6..f1223045c400 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -771,6 +771,9 @@ static int sst_soc_prepare(struct device *dev)
+ 	struct sst_data *drv = dev_get_drvdata(dev);
+ 	struct snd_soc_pcm_runtime *rtd;
+ 
++	if (!drv->soc_card)
++		return 0;
++
+ 	/* suspend all pcms first */
+ 	snd_soc_suspend(drv->soc_card->dev);
+ 	snd_soc_poweroff(drv->soc_card->dev);
+@@ -793,6 +796,9 @@ static void sst_soc_complete(struct device *dev)
+ 	struct sst_data *drv = dev_get_drvdata(dev);
+ 	struct snd_soc_pcm_runtime *rtd;
+ 
++	if (!drv->soc_card)
++		return;
++
+ 	/* restart SSPs */
+ 	list_for_each_entry(rtd, &drv->soc_card->rtd_list, list) {
+ 		struct snd_soc_dai *dai = rtd->cpu_dai;
+diff --git a/sound/usb/hiface/pcm.c b/sound/usb/hiface/pcm.c
+index 2c44139b4041..33db205dd12b 100644
+--- a/sound/usb/hiface/pcm.c
++++ b/sound/usb/hiface/pcm.c
+@@ -445,6 +445,8 @@ static int hiface_pcm_prepare(struct snd_pcm_substream *alsa_sub)
+ 
+ 	mutex_lock(&rt->stream_mutex);
+ 
++	hiface_pcm_stream_stop(rt);
++
+ 	sub->dma_off = 0;
+ 	sub->period_off = 0;
+ 
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 2f8c388ef84f..4703caea56b2 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -932,9 +932,10 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ 	case USB_ID(0x046d, 0x0826): /* HD Webcam c525 */
+ 	case USB_ID(0x046d, 0x08ca): /* Logitech Quickcam Fusion */
+ 	case USB_ID(0x046d, 0x0991):
++	case USB_ID(0x046d, 0x09a2): /* QuickCam Communicate Deluxe/S7500 */
+ 	/* Most audio usb devices lie about volume resolution.
+ 	 * Most Logitech webcams have res = 384.
+-	 * Proboly there is some logitech magic behind this number --fishor
++	 * Probably there is some logitech magic behind this number --fishor
+ 	 */
+ 		if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+ 			usb_audio_info(chip,



* [gentoo-commits] proj/linux-patches:4.8 commit in: /
@ 2017-01-09 11:44 Alice Ferrazzi
  0 siblings, 0 replies; 26+ messages in thread
From: Alice Ferrazzi @ 2017-01-09 11:44 UTC (permalink / raw
  To: gentoo-commits

commit:     203f855f3df02c2e1878212e32716806671716cd
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon Jan  9 11:43:12 2017 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Mon Jan  9 11:43:12 2017 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=203f855f

Linux patch 4.8.17

 0000_README             |    4 +
 1016_linux-4.8.17.patch | 3229 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3233 insertions(+)

diff --git a/0000_README b/0000_README
index e7fac7c..f8302fa 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-4.8.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.8.16
 
+Patch:  1016_linux-4.8.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.8.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-4.8.17.patch b/1016_linux-4.8.17.patch
new file mode 100644
index 0000000..7782469
--- /dev/null
+++ b/1016_linux-4.8.17.patch
@@ -0,0 +1,3229 @@
+diff --git a/Documentation/sphinx/rstFlatTable.py b/Documentation/sphinx/rstFlatTable.py
+index 26db852e3c74..99163598f18b 100644
+--- a/Documentation/sphinx/rstFlatTable.py
++++ b/Documentation/sphinx/rstFlatTable.py
+@@ -151,6 +151,11 @@ class ListTableBuilder(object):
+     def buildTableNode(self):
+ 
+         colwidths    = self.directive.get_column_widths(self.max_cols)
++        if isinstance(colwidths, tuple):
++            # Since docutils 0.13, get_column_widths returns a (widths,
++            # colwidths) tuple, where widths is a string (i.e. 'auto').
++            # See https://sourceforge.net/p/docutils/patches/120/.
++            colwidths = colwidths[1]
+         stub_columns = self.directive.options.get('stub-columns', 0)
+         header_rows  = self.directive.options.get('header-rows', 0)
+ 
+diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
+index 739db9ab16b2..a7596e9fdf06 100644
+--- a/Documentation/virtual/kvm/api.txt
++++ b/Documentation/virtual/kvm/api.txt
+@@ -2039,6 +2039,7 @@ registers, find a list below:
+   PPC   | KVM_REG_PPC_TM_VSCR           | 32
+   PPC   | KVM_REG_PPC_TM_DSCR           | 64
+   PPC   | KVM_REG_PPC_TM_TAR            | 64
++  PPC   | KVM_REG_PPC_TM_XER            | 64
+         |                               |
+   MIPS  | KVM_REG_MIPS_R0               | 64
+           ...
+diff --git a/Makefile b/Makefile
+index 50f68648a79a..ace32d3bac4b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 8
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Psychotic Stoned Sheep
+ 
+diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
+index a093adbdb017..fc662f49c55a 100644
+--- a/arch/arc/include/asm/cacheflush.h
++++ b/arch/arc/include/asm/cacheflush.h
+@@ -85,6 +85,10 @@ void flush_anon_page(struct vm_area_struct *vma,
+  */
+ #define PG_dc_clean	PG_arch_1
+ 
++#define CACHE_COLORS_NUM	4
++#define CACHE_COLORS_MSK	(CACHE_COLORS_NUM - 1)
++#define CACHE_COLOR(addr)	(((unsigned long)(addr) >> (PAGE_SHIFT)) & CACHE_COLORS_MSK)
++
+ /*
+  * Simple wrapper over config option
+  * Bootup code ensures that hardware matches kernel configuration
+@@ -94,8 +98,6 @@ static inline int cache_is_vipt_aliasing(void)
+ 	return IS_ENABLED(CONFIG_ARC_CACHE_VIPT_ALIASING);
+ }
+ 
+-#define CACHE_COLOR(addr)	(((unsigned long)(addr) >> (PAGE_SHIFT)) & 1)
+-
+ /*
+  * checks if two addresses (after page aligning) index into same cache set
+  */
+diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
+index 0b10efe3a6a7..ab1aaf2a28c8 100644
+--- a/arch/arc/mm/cache.c
++++ b/arch/arc/mm/cache.c
+@@ -967,11 +967,16 @@ void arc_cache_init(void)
+ 		/* check for D-Cache aliasing on ARCompact: ARCv2 has PIPT */
+ 		if (is_isa_arcompact()) {
+ 			int handled = IS_ENABLED(CONFIG_ARC_CACHE_VIPT_ALIASING);
+-
+-			if (dc->alias && !handled)
+-				panic("Enable CONFIG_ARC_CACHE_VIPT_ALIASING\n");
+-			else if (!dc->alias && handled)
++			int num_colors = dc->sz_k/dc->assoc/TO_KB(PAGE_SIZE);
++
++			if (dc->alias) {
++				if (!handled)
++					panic("Enable CONFIG_ARC_CACHE_VIPT_ALIASING\n");
++				if (CACHE_COLORS_NUM != num_colors)
++					panic("CACHE_COLORS_NUM not optimized for config\n");
++			} else if (!dc->alias && handled) {
+ 				panic("Disable CONFIG_ARC_CACHE_VIPT_ALIASING\n");
++			}
+ 		}
+ 	}
+ 
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+index 5fda583351d7..906fb836d241 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+@@ -21,6 +21,10 @@
+ 		reg = <0x0 0x80000000 0x1 0x0>;
+ 	};
+ 
++	gpu@57000000 {
++		vdd-supply = <&vdd_gpu>;
++	};
++
+ 	/* debug port */
+ 	serial@70006000 {
+ 		status = "okay";
+@@ -291,4 +295,18 @@
+ 			clock-frequency = <32768>;
+ 		};
+ 	};
++
++	regulators {
++		vdd_gpu: regulator@100 {
++			compatible = "pwm-regulator";
++			reg = <100>;
++			pwms = <&pwm 1 4880>;
++			regulator-name = "VDD_GPU";
++			regulator-min-microvolt = <710000>;
++			regulator-max-microvolt = <1320000>;
++			enable-gpios = <&pmic 6 GPIO_ACTIVE_HIGH>;
++			regulator-ramp-delay = <80>;
++			regulator-enable-ramp-delay = <1000>;
++		};
++	};
+ };
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index 5a84b4562603..03498c8f0cf7 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -82,7 +82,13 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
+ 	write_sysreg(val, hcr_el2);
+ 	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
+ 	write_sysreg(1 << 15, hstr_el2);
+-	/* Make sure we trap PMU access from EL0 to EL2 */
++	/*
++	 * Make sure we trap PMU access from EL0 to EL2. Also sanitize
++	 * PMSELR_EL0 to make sure it never contains the cycle
++	 * counter, which could make a PMXEVCNTR_EL0 access UNDEF at
++	 * EL1 instead of being trapped to EL2.
++	 */
++	write_sysreg(0, pmselr_el0);
+ 	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
+ 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+ 	__activate_traps_arch()();
+diff --git a/arch/powerpc/boot/ps3-head.S b/arch/powerpc/boot/ps3-head.S
+index b6fcbaf5027b..3dc44b05fb97 100644
+--- a/arch/powerpc/boot/ps3-head.S
++++ b/arch/powerpc/boot/ps3-head.S
+@@ -57,11 +57,6 @@ __system_reset_overlay:
+ 	bctr
+ 
+ 1:
+-	/* Save the value at addr zero for a null pointer write check later. */
+-
+-	li	r4, 0
+-	lwz	r3, 0(r4)
+-
+ 	/* Primary delays then goes to _zimage_start in wrapper. */
+ 
+ 	or	31, 31, 31 /* db16cyc */
+diff --git a/arch/powerpc/boot/ps3.c b/arch/powerpc/boot/ps3.c
+index 4ec2d86d3c50..a05558a7e51a 100644
+--- a/arch/powerpc/boot/ps3.c
++++ b/arch/powerpc/boot/ps3.c
+@@ -119,13 +119,12 @@ void ps3_copy_vectors(void)
+ 	flush_cache((void *)0x100, 512);
+ }
+ 
+-void platform_init(unsigned long null_check)
++void platform_init(void)
+ {
+ 	const u32 heapsize = 0x1000000 - (u32)_end; /* 16MiB */
+ 	void *chosen;
+ 	unsigned long ft_addr;
+ 	u64 rm_size;
+-	unsigned long val;
+ 
+ 	console_ops.write = ps3_console_write;
+ 	platform_ops.exit = ps3_exit;
+@@ -153,11 +152,6 @@ void platform_init(unsigned long null_check)
+ 
+ 	printf(" flat tree at 0x%lx\n\r", ft_addr);
+ 
+-	val = *(unsigned long *)0;
+-
+-	if (val != null_check)
+-		printf("null check failed: %lx != %lx\n\r", val, null_check);
+-
+ 	((kernel_entry_t)0)(ft_addr, 0, NULL);
+ 
+ 	ps3_exit();
+diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
+index ec35af34a3fb..f2c5dde507c7 100644
+--- a/arch/powerpc/include/asm/kvm_host.h
++++ b/arch/powerpc/include/asm/kvm_host.h
+@@ -555,6 +555,7 @@ struct kvm_vcpu_arch {
+ 	u64 tfiar;
+ 
+ 	u32 cr_tm;
++	u64 xer_tm;
+ 	u64 lr_tm;
+ 	u64 ctr_tm;
+ 	u64 amr_tm;
+diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
+index c93cf35ce379..0fb1326c3ea2 100644
+--- a/arch/powerpc/include/uapi/asm/kvm.h
++++ b/arch/powerpc/include/uapi/asm/kvm.h
+@@ -596,6 +596,7 @@ struct kvm_get_htab_header {
+ #define KVM_REG_PPC_TM_VSCR	(KVM_REG_PPC_TM | KVM_REG_SIZE_U32 | 0x67)
+ #define KVM_REG_PPC_TM_DSCR	(KVM_REG_PPC_TM | KVM_REG_SIZE_U64 | 0x68)
+ #define KVM_REG_PPC_TM_TAR	(KVM_REG_PPC_TM | KVM_REG_SIZE_U64 | 0x69)
++#define KVM_REG_PPC_TM_XER	(KVM_REG_PPC_TM | KVM_REG_SIZE_U64 | 0x6a)
+ 
+ /* PPC64 eXternal Interrupt Controller Specification */
+ #define KVM_DEV_XICS_GRP_SOURCES	1	/* 64-bit source attributes */
+diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
+index b89d14c0352c..6ba221c76214 100644
+--- a/arch/powerpc/kernel/asm-offsets.c
++++ b/arch/powerpc/kernel/asm-offsets.c
+@@ -569,6 +569,7 @@ int main(void)
+ 	DEFINE(VCPU_VRS_TM, offsetof(struct kvm_vcpu, arch.vr_tm.vr));
+ 	DEFINE(VCPU_VRSAVE_TM, offsetof(struct kvm_vcpu, arch.vrsave_tm));
+ 	DEFINE(VCPU_CR_TM, offsetof(struct kvm_vcpu, arch.cr_tm));
++	DEFINE(VCPU_XER_TM, offsetof(struct kvm_vcpu, arch.xer_tm));
+ 	DEFINE(VCPU_LR_TM, offsetof(struct kvm_vcpu, arch.lr_tm));
+ 	DEFINE(VCPU_CTR_TM, offsetof(struct kvm_vcpu, arch.ctr_tm));
+ 	DEFINE(VCPU_AMR_TM, offsetof(struct kvm_vcpu, arch.amr_tm));
+diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
+index f765b0434731..882578929fe7 100644
+--- a/arch/powerpc/kernel/head_64.S
++++ b/arch/powerpc/kernel/head_64.S
+@@ -201,9 +201,9 @@ booting_thread_hwid:
+  */
+ _GLOBAL(book3e_start_thread)
+ 	LOAD_REG_IMMEDIATE(r5, MSR_KERNEL)
+-	cmpi	0, r3, 0
++	cmpwi	r3, 0
+ 	beq	10f
+-	cmpi	0, r3, 1
++	cmpwi	r3, 1
+ 	beq	11f
+ 	/* If the thread id is invalid, just exit. */
+ 	b	13f
+@@ -228,9 +228,9 @@ _GLOBAL(book3e_start_thread)
+  * r3 = the thread physical id
+  */
+ _GLOBAL(book3e_stop_thread)
+-	cmpi	0, r3, 0
++	cmpwi	r3, 0
+ 	beq	10f
+-	cmpi	0, r3, 1
++	cmpwi	r3, 1
+ 	beq	10f
+ 	/* If the thread id is invalid, just exit. */
+ 	b	13f
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 2fd5580c8f6e..4c8d344486e6 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -1235,6 +1235,9 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
+ 	case KVM_REG_PPC_TM_CR:
+ 		*val = get_reg_val(id, vcpu->arch.cr_tm);
+ 		break;
++	case KVM_REG_PPC_TM_XER:
++		*val = get_reg_val(id, vcpu->arch.xer_tm);
++		break;
+ 	case KVM_REG_PPC_TM_LR:
+ 		*val = get_reg_val(id, vcpu->arch.lr_tm);
+ 		break;
+@@ -1442,6 +1445,9 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
+ 	case KVM_REG_PPC_TM_CR:
+ 		vcpu->arch.cr_tm = set_reg_val(id, *val);
+ 		break;
++	case KVM_REG_PPC_TM_XER:
++		vcpu->arch.xer_tm = set_reg_val(id, *val);
++		break;
+ 	case KVM_REG_PPC_TM_LR:
+ 		vcpu->arch.lr_tm = set_reg_val(id, *val);
+ 		break;
+diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+index 99b4e9d5dd23..5420d060c6f6 100644
+--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
++++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+@@ -653,6 +653,8 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
+ 					      HPTE_V_ABSENT);
+ 			do_tlbies(kvm, &rb, 1, global_invalidates(kvm, flags),
+ 				  true);
++			/* Don't lose R/C bit updates done by hardware */
++			r |= be64_to_cpu(hpte[1]) & (HPTE_R_R | HPTE_R_C);
+ 			hpte[1] = cpu_to_be64(r);
+ 		}
+ 	}
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 975655573844..bf243a4caa63 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -2579,11 +2579,13 @@ kvmppc_save_tm:
+ 	mfctr	r7
+ 	mfspr	r8, SPRN_AMR
+ 	mfspr	r10, SPRN_TAR
++	mfxer	r11
+ 	std	r5, VCPU_LR_TM(r9)
+ 	stw	r6, VCPU_CR_TM(r9)
+ 	std	r7, VCPU_CTR_TM(r9)
+ 	std	r8, VCPU_AMR_TM(r9)
+ 	std	r10, VCPU_TAR_TM(r9)
++	std	r11, VCPU_XER_TM(r9)
+ 
+ 	/* Restore r12 as trap number. */
+ 	lwz	r12, VCPU_TRAP(r9)
+@@ -2676,11 +2678,13 @@ kvmppc_restore_tm:
+ 	ld	r7, VCPU_CTR_TM(r4)
+ 	ld	r8, VCPU_AMR_TM(r4)
+ 	ld	r9, VCPU_TAR_TM(r4)
++	ld	r10, VCPU_XER_TM(r4)
+ 	mtlr	r5
+ 	mtcr	r6
+ 	mtctr	r7
+ 	mtspr	SPRN_AMR, r8
+ 	mtspr	SPRN_TAR, r9
++	mtxer	r10
+ 
+ 	/*
+ 	 * Load up PPR and DSCR values but don't put them in the actual SPRs
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 7f7ba5f23f13..d027f2eb3559 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -445,7 +445,7 @@ static void __init setup_resources(void)
+ 	 * part of the System RAM resource.
+ 	 */
+ 	if (crashk_res.end) {
+-		memblock_add(crashk_res.start, resource_size(&crashk_res));
++		memblock_add_node(crashk_res.start, resource_size(&crashk_res), 0);
+ 		memblock_reserve(crashk_res.start, resource_size(&crashk_res));
+ 		insert_resource(&iomem_resource, &crashk_res);
+ 	}
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index 0b56666e6039..b84a3496994c 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -852,8 +852,8 @@ ftrace_graph_call:
+ 	jmp	ftrace_stub
+ #endif
+ 
+-.globl ftrace_stub
+-ftrace_stub:
++/* This is weak to keep gas from relaxing the jumps */
++WEAK(ftrace_stub)
+ 	ret
+ END(ftrace_caller)
+ 
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 8c925ecaf534..7b0f1d932c87 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -364,7 +364,11 @@ int x86_add_exclusive(unsigned int what)
+ {
+ 	int i;
+ 
+-	if (x86_pmu.lbr_pt_coexist)
++	/*
++	 * When lbr_pt_coexist we allow PT to coexist with either LBR or BTS.
++	 * LBR and BTS are still mutually exclusive.
++	 */
++	if (x86_pmu.lbr_pt_coexist && what == x86_lbr_exclusive_pt)
+ 		return 0;
+ 
+ 	if (!atomic_inc_not_zero(&x86_pmu.lbr_exclusive[what])) {
+@@ -387,7 +391,7 @@ fail_unlock:
+ 
+ void x86_del_exclusive(unsigned int what)
+ {
+-	if (x86_pmu.lbr_pt_coexist)
++	if (x86_pmu.lbr_pt_coexist && what == x86_lbr_exclusive_pt)
+ 		return;
+ 
+ 	atomic_dec(&x86_pmu.lbr_exclusive[what]);
+diff --git a/arch/x86/events/intel/cstate.c b/arch/x86/events/intel/cstate.c
+index 3ca87b5a8677..834262aec88d 100644
+--- a/arch/x86/events/intel/cstate.c
++++ b/arch/x86/events/intel/cstate.c
+@@ -571,6 +571,9 @@ static int __init cstate_probe(const struct cstate_model *cm)
+ 
+ static inline void cstate_cleanup(void)
+ {
++	cpuhp_remove_state_nocalls(CPUHP_AP_PERF_X86_CSTATE_ONLINE);
++	cpuhp_remove_state_nocalls(CPUHP_AP_PERF_X86_CSTATE_STARTING);
++
+ 	if (has_cstate_core)
+ 		perf_pmu_unregister(&cstate_core_pmu);
+ 
+@@ -583,16 +586,16 @@ static int __init cstate_init(void)
+ 	int err;
+ 
+ 	cpuhp_setup_state(CPUHP_AP_PERF_X86_CSTATE_STARTING,
+-			  "AP_PERF_X86_CSTATE_STARTING", cstate_cpu_init,
+-			  NULL);
++			  "perf/x86/cstate:starting", cstate_cpu_init, NULL);
+ 	cpuhp_setup_state(CPUHP_AP_PERF_X86_CSTATE_ONLINE,
+-			  "AP_PERF_X86_CSTATE_ONLINE", NULL, cstate_cpu_exit);
++			  "perf/x86/cstate:online", NULL, cstate_cpu_exit);
+ 
+ 	if (has_cstate_core) {
+ 		err = perf_pmu_register(&cstate_core_pmu, cstate_core_pmu.name, -1);
+ 		if (err) {
+ 			has_cstate_core = false;
+ 			pr_info("Failed to register cstate core pmu\n");
++			cstate_cleanup();
+ 			return err;
+ 		}
+ 	}
+@@ -606,8 +609,7 @@ static int __init cstate_init(void)
+ 			return err;
+ 		}
+ 	}
+-
+-	return err;
++	return 0;
+ }
+ 
+ static int __init cstate_pmu_init(void)
+@@ -632,8 +634,6 @@ module_init(cstate_pmu_init);
+ 
+ static void __exit cstate_pmu_exit(void)
+ {
+-	cpuhp_remove_state_nocalls(CPUHP_AP_PERF_X86_CSTATE_ONLINE);
+-	cpuhp_remove_state_nocalls(CPUHP_AP_PERF_X86_CSTATE_STARTING);
+ 	cstate_cleanup();
+ }
+ module_exit(cstate_pmu_exit);
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 181c238d4df9..4ab002d4c9fd 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -601,7 +601,7 @@ struct x86_pmu {
+ 	u64		lbr_sel_mask;		   /* LBR_SELECT valid bits */
+ 	const int	*lbr_sel_map;		   /* lbr_select mappings */
+ 	bool		lbr_double_abort;	   /* duplicated lbr aborts */
+-	bool		lbr_pt_coexist;		   /* LBR may coexist with PT */
++	bool		lbr_pt_coexist;		   /* (LBR|BTS) may coexist with PT */
+ 
+ 	/*
+ 	 * Intel PT/LBR/BTS are exclusive
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 5cede40e2552..7a72db5350aa 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -1336,10 +1336,10 @@ static inline bool nested_cpu_has_posted_intr(struct vmcs12 *vmcs12)
+ 	return vmcs12->pin_based_vm_exec_control & PIN_BASED_POSTED_INTR;
+ }
+ 
+-static inline bool is_exception(u32 intr_info)
++static inline bool is_nmi(u32 intr_info)
+ {
+ 	return (intr_info & (INTR_INFO_INTR_TYPE_MASK | INTR_INFO_VALID_MASK))
+-		== (INTR_TYPE_HARD_EXCEPTION | INTR_INFO_VALID_MASK);
++		== (INTR_TYPE_NMI_INTR | INTR_INFO_VALID_MASK);
+ }
+ 
+ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
+@@ -5467,7 +5467,7 @@ static int handle_exception(struct kvm_vcpu *vcpu)
+ 	if (is_machine_check(intr_info))
+ 		return handle_machine_check(vcpu);
+ 
+-	if ((intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR)
++	if (is_nmi(intr_info))
+ 		return 1;  /* already handled by vmx_vcpu_run() */
+ 
+ 	if (is_no_device(intr_info)) {
+@@ -7974,7 +7974,7 @@ static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu)
+ 
+ 	switch (exit_reason) {
+ 	case EXIT_REASON_EXCEPTION_NMI:
+-		if (!is_exception(intr_info))
++		if (is_nmi(intr_info))
+ 			return false;
+ 		else if (is_page_fault(intr_info))
+ 			return enable_ept;
+@@ -8572,8 +8572,7 @@ static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
+ 		kvm_machine_check();
+ 
+ 	/* We need to handle NMIs before interrupts are enabled */
+-	if ((exit_intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR &&
+-	    (exit_intr_info & INTR_INFO_VALID_MASK)) {
++	if (is_nmi(exit_intr_info)) {
+ 		kvm_before_handle_nmi(&vmx->vcpu);
+ 		asm("int $2");
+ 		kvm_after_handle_nmi(&vmx->vcpu);
+diff --git a/block/bsg.c b/block/bsg.c
+index d214e929ce18..b9a53615bdef 100644
+--- a/block/bsg.c
++++ b/block/bsg.c
+@@ -655,6 +655,9 @@ bsg_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
+ 
+ 	dprintk("%s: write %Zd bytes\n", bd->name, count);
+ 
++	if (unlikely(segment_eq(get_fs(), KERNEL_DS)))
++		return -EINVAL;
++
+ 	bsg_set_block(bd, file);
+ 
+ 	bytes_written = 0;
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index a6b36fc53aec..02ded25c82e4 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -296,6 +296,26 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V131"),
+ 		},
+ 	},
++	{
++	 /* https://bugzilla.redhat.com/show_bug.cgi?id=1123661 */
++	 .callback = video_detect_force_native,
++	 .ident = "Dell XPS 17 L702X",
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "Dell System XPS L702X"),
++		},
++	},
++	{
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1204476 */
++	/* https://bugs.launchpad.net/ubuntu/+source/linux-lts-trusty/+bug/1416940 */
++	.callback = video_detect_force_native,
++	.ident = "HP Pavilion dv6",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv6 Notebook PC"),
++		},
++	},
++
+ 	{ },
+ };
+ 
+diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c
+index 22d1760a4278..a95e1e572697 100644
+--- a/drivers/base/firmware_class.c
++++ b/drivers/base/firmware_class.c
+@@ -955,13 +955,14 @@ static int _request_firmware_load(struct firmware_priv *fw_priv,
+ 		timeout = MAX_JIFFY_OFFSET;
+ 	}
+ 
+-	retval = wait_for_completion_interruptible_timeout(&buf->completion,
++	timeout = wait_for_completion_interruptible_timeout(&buf->completion,
+ 			timeout);
+-	if (retval == -ERESTARTSYS || !retval) {
++	if (timeout == -ERESTARTSYS || !timeout) {
++		retval = timeout;
+ 		mutex_lock(&fw_lock);
+ 		fw_load_abort(fw_priv);
+ 		mutex_unlock(&fw_lock);
+-	} else if (retval > 0) {
++	} else if (timeout > 0) {
+ 		retval = 0;
+ 	}
+ 
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 0fc71cbaa440..3250694fd793 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -751,7 +751,9 @@ static void bcm2835_pll_divider_off(struct clk_hw *hw)
+ 	cprman_write(cprman, data->cm_reg,
+ 		     (cprman_read(cprman, data->cm_reg) &
+ 		      ~data->load_mask) | data->hold_mask);
+-	cprman_write(cprman, data->a2w_reg, A2W_PLL_CHANNEL_DISABLE);
++	cprman_write(cprman, data->a2w_reg,
++		     cprman_read(cprman, data->a2w_reg) |
++		     A2W_PLL_CHANNEL_DISABLE);
+ 	spin_unlock(&cprman->regs_lock);
+ }
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 15704aaf9e4e..a8eea8a69da3 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -984,7 +984,8 @@ static int gpio_chrdev_open(struct inode *inode, struct file *filp)
+ 		return -ENODEV;
+ 	get_device(&gdev->dev);
+ 	filp->private_data = gdev;
+-	return 0;
++
++	return nonseekable_open(inode, filp);
+ }
+ 
+ /**
+@@ -1009,7 +1010,7 @@ static const struct file_operations gpio_fileops = {
+ 	.release = gpio_chrdev_release,
+ 	.open = gpio_chrdev_open,
+ 	.owner = THIS_MODULE,
+-	.llseek = noop_llseek,
++	.llseek = no_llseek,
+ 	.unlocked_ioctl = gpio_ioctl,
+ #ifdef CONFIG_COMPAT
+ 	.compat_ioctl = gpio_ioctl_compat,
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index b8184617ca25..8199232c2fbd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -3798,8 +3798,12 @@ static int gfx_v8_0_init_save_restore_list(struct amdgpu_device *adev)
+ 	temp = mmRLC_SRM_INDEX_CNTL_ADDR_0;
+ 	data = mmRLC_SRM_INDEX_CNTL_DATA_0;
+ 	for (i = 0; i < sizeof(unique_indices) / sizeof(int); i++) {
+-		amdgpu_mm_wreg(adev, temp + i, unique_indices[i] & 0x3FFFF, false);
+-		amdgpu_mm_wreg(adev, data + i, unique_indices[i] >> 20, false);
++		if (unique_indices[i] != 0) {
++			amdgpu_mm_wreg(adev, temp + i,
++					unique_indices[i] & 0x3FFFF, false);
++			amdgpu_mm_wreg(adev, data + i,
++					unique_indices[i] >> 20, false);
++		}
+ 	}
+ 	kfree(register_list_format);
+ 
+@@ -5735,29 +5739,24 @@ static void gfx_v8_0_update_coarse_grain_clock_gating(struct amdgpu_device *adev
+ 	adev->gfx.rlc.funcs->enter_safe_mode(adev);
+ 
+ 	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG)) {
+-		/* 1 enable cntx_empty_int_enable/cntx_busy_int_enable/
+-		 * Cmp_busy/GFX_Idle interrupts
+-		 */
+-		gfx_v8_0_enable_gui_idle_interrupt(adev, true);
+-
+ 		temp1 = data1 =	RREG32(mmRLC_CGTT_MGCG_OVERRIDE);
+ 		data1 &= ~RLC_CGTT_MGCG_OVERRIDE__CGCG_MASK;
+ 		if (temp1 != data1)
+ 			WREG32(mmRLC_CGTT_MGCG_OVERRIDE, data1);
+ 
+-		/* 2 wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
++		/* : wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
+ 		gfx_v8_0_wait_for_rlc_serdes(adev);
+ 
+-		/* 3 - clear cgcg override */
++		/* 2 - clear cgcg override */
+ 		gfx_v8_0_send_serdes_cmd(adev, BPM_REG_CGCG_OVERRIDE, CLE_BPM_SERDES_CMD);
+ 
+ 		/* wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
+ 		gfx_v8_0_wait_for_rlc_serdes(adev);
+ 
+-		/* 4 - write cmd to set CGLS */
++		/* 3 - write cmd to set CGLS */
+ 		gfx_v8_0_send_serdes_cmd(adev, BPM_REG_CGLS_EN, SET_BPM_SERDES_CMD);
+ 
+-		/* 5 - enable cgcg */
++		/* 4 - enable cgcg */
+ 		data |= RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK;
+ 
+ 		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS) {
+@@ -5775,6 +5774,11 @@ static void gfx_v8_0_update_coarse_grain_clock_gating(struct amdgpu_device *adev
+ 
+ 		if (temp != data)
+ 			WREG32(mmRLC_CGCG_CGLS_CTRL, data);
++
++		/* 5 enable cntx_empty_int_enable/cntx_busy_int_enable/
++		 * Cmp_busy/GFX_Idle interrupts
++		 */
++		gfx_v8_0_enable_gui_idle_interrupt(adev, true);
+ 	} else {
+ 		/* disable cntx_empty_int_enable & GFX Idle interrupt */
+ 		gfx_v8_0_enable_gui_idle_interrupt(adev, false);
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index 904beaa932d0..f75c6421db62 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -223,7 +223,8 @@ static int ast_get_dram_info(struct drm_device *dev)
+ 	ast_write32(ast, 0x10000, 0xfc600309);
+ 
+ 	do {
+-		;
++		if (pci_channel_offline(dev->pdev))
++			return -EIO;
+ 	} while (ast_read32(ast, 0x10000) != 0x01);
+ 	data = ast_read32(ast, 0x10004);
+ 
+@@ -428,7 +429,9 @@ int ast_driver_load(struct drm_device *dev, unsigned long flags)
+ 	ast_detect_chip(dev, &need_post);
+ 
+ 	if (ast->chip != AST1180) {
+-		ast_get_dram_info(dev);
++		ret = ast_get_dram_info(dev);
++		if (ret)
++			goto out_free;
+ 		ast->vram_size = ast_get_vram_info(dev);
+ 		DRM_INFO("dram %d %d %d %08x\n", ast->mclk, ast->dram_type, ast->dram_bus_width, ast->vram_size);
+ 	}
+diff --git a/drivers/gpu/drm/gma500/psb_drv.c b/drivers/gpu/drm/gma500/psb_drv.c
+index 50eb944fb78a..8f3ca526bd1b 100644
+--- a/drivers/gpu/drm/gma500/psb_drv.c
++++ b/drivers/gpu/drm/gma500/psb_drv.c
+@@ -473,6 +473,9 @@ static const struct file_operations psb_gem_fops = {
+ 	.open = drm_open,
+ 	.release = drm_release,
+ 	.unlocked_ioctl = psb_unlocked_ioctl,
++#ifdef CONFIG_COMPAT
++	.compat_ioctl = drm_compat_ioctl,
++#endif
+ 	.mmap = drm_gem_mmap,
+ 	.poll = drm_poll,
+ 	.read = drm_read,
+diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
+index 2bb69f3c5b84..b386b3162e29 100644
+--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
++++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
+@@ -55,10 +55,9 @@ int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *dev_priv,
+ 		return -ENODEV;
+ 
+ 	/* See the comment at the drm_mm_init() call for more about this check.
+-	 * WaSkipStolenMemoryFirstPage:bdw,chv,kbl (incomplete)
++	 * WaSkipStolenMemoryFirstPage:bdw+ (incomplete)
+ 	 */
+-	if (start < 4096 && (IS_GEN8(dev_priv) ||
+-			     IS_KBL_REVID(dev_priv, 0, KBL_REVID_A0)))
++	if (start < 4096 && INTEL_GEN(dev_priv) >= 8)
+ 		start = 4096;
+ 
+ 	mutex_lock(&dev_priv->mm.stolen_lock);
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 35d385d70d8e..6bc93ba7e1dd 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -13494,8 +13494,9 @@ static int intel_modeset_checks(struct drm_atomic_state *state)
+ 
+ 		DRM_DEBUG_KMS("New cdclk calculated to be atomic %u, actual %u\n",
+ 			      intel_state->cdclk, intel_state->dev_cdclk);
+-	} else
++	} else {
+ 		to_intel_atomic_state(state)->cdclk = dev_priv->atomic_cdclk_freq;
++	}
+ 
+ 	intel_modeset_clear_plls(state);
+ 
+@@ -13596,8 +13597,9 @@ static int intel_atomic_check(struct drm_device *dev,
+ 
+ 		if (ret)
+ 			return ret;
+-	} else
+-		intel_state->cdclk = dev_priv->cdclk_freq;
++	} else {
++		intel_state->cdclk = dev_priv->atomic_cdclk_freq;
++	}
+ 
+ 	ret = drm_atomic_helper_check_planes(dev, state);
+ 	if (ret)
+@@ -15902,6 +15904,7 @@ void intel_modeset_init(struct drm_device *dev)
+ 
+ 	intel_update_czclk(dev_priv);
+ 	intel_update_cdclk(dev);
++	dev_priv->atomic_cdclk_freq = dev_priv->cdclk_freq;
+ 
+ 	intel_shared_dpll_init(dev);
+ 
+diff --git a/drivers/gpu/drm/i915/intel_dsi_panel_vbt.c b/drivers/gpu/drm/i915/intel_dsi_panel_vbt.c
+index cd154ce6b6c1..34601574fc6e 100644
+--- a/drivers/gpu/drm/i915/intel_dsi_panel_vbt.c
++++ b/drivers/gpu/drm/i915/intel_dsi_panel_vbt.c
+@@ -296,7 +296,8 @@ static void chv_exec_gpio(struct drm_i915_private *dev_priv,
+ 	mutex_lock(&dev_priv->sb_lock);
+ 	vlv_iosf_sb_write(dev_priv, port, cfg1, 0);
+ 	vlv_iosf_sb_write(dev_priv, port, cfg0,
+-			  CHV_GPIO_GPIOCFG_GPO | CHV_GPIO_GPIOTXSTATE(value));
++			  CHV_GPIO_GPIOEN | CHV_GPIO_GPIOCFG_GPO |
++			  CHV_GPIO_GPIOTXSTATE(value));
+ 	mutex_unlock(&dev_priv->sb_lock);
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
+index 1c603bbe5784..07a7cc0d4ebb 100644
+--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
++++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
+@@ -1062,7 +1062,18 @@ static bool vlv_power_well_enabled(struct drm_i915_private *dev_priv,
+ 
+ static void vlv_init_display_clock_gating(struct drm_i915_private *dev_priv)
+ {
+-	I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE);
++	u32 val;
++
++	/*
++	 * On driver load, a pipe may be active and driving a DSI display.
++	 * Preserve DPOUNIT_CLOCK_GATE_DISABLE to avoid the pipe getting stuck
++	 * (and never recovering) in this case. intel_dsi_post_disable() will
++	 * clear it when we turn off the display.
++	 */
++	val = I915_READ(DSPCLK_GATE_D);
++	val &= DPOUNIT_CLOCK_GATE_DISABLE;
++	val |= VRHUNIT_CLOCK_GATE_DISABLE;
++	I915_WRITE(DSPCLK_GATE_D, val);
+ 
+ 	/*
+ 	 * Disable trickle feed and enable pnd deadline calculation
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bios.c b/drivers/gpu/drm/nouveau/nouveau_bios.c
+index a1570b109434..23ffe8571a99 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bios.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bios.c
+@@ -333,6 +333,9 @@ get_fp_strap(struct drm_device *dev, struct nvbios *bios)
+ 	if (bios->major_version < 5 && bios->data[0x48] & 0x4)
+ 		return NVReadVgaCrtc5758(dev, 0, 0xf) & 0xf;
+ 
++	if (drm->device.info.family >= NV_DEVICE_INFO_V0_MAXWELL)
++		return nvif_rd32(device, 0x001800) & 0x0000000f;
++	else
+ 	if (drm->device.info.family >= NV_DEVICE_INFO_V0_TESLA)
+ 		return (nvif_rd32(device, NV_PEXTDEV_BOOT_0) >> 24) & 0xf;
+ 	else
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index 864323b19cf7..fad263b3c7f7 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -1209,6 +1209,7 @@ nouveau_bo_move_ntfy(struct ttm_buffer_object *bo, struct ttm_mem_reg *new_mem)
+ 			       nvbo->page_shift != vma->vm->mmu->lpg_shift)) {
+ 			nvkm_vm_map(vma, new_mem->mm_node);
+ 		} else {
++			WARN_ON(ttm_bo_wait(bo, false, false));
+ 			nvkm_vm_unmap(vma);
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+index 7218a067a6c5..e0d7f8472ac6 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+@@ -1851,7 +1851,7 @@ nvf1_chipset = {
+ 	.fb = gk104_fb_new,
+ 	.fuse = gf100_fuse_new,
+ 	.gpio = gk104_gpio_new,
+-	.i2c = gf119_i2c_new,
++	.i2c = gk104_i2c_new,
+ 	.ibus = gk104_ibus_new,
+ 	.iccsense = gf100_iccsense_new,
+ 	.imem = nv50_instmem_new,
+@@ -1965,7 +1965,7 @@ nv117_chipset = {
+ 	.fb = gm107_fb_new,
+ 	.fuse = gm107_fuse_new,
+ 	.gpio = gk104_gpio_new,
+-	.i2c = gf119_i2c_new,
++	.i2c = gk104_i2c_new,
+ 	.ibus = gk104_ibus_new,
+ 	.iccsense = gf100_iccsense_new,
+ 	.imem = nv50_instmem_new,
+@@ -1999,7 +1999,7 @@ nv118_chipset = {
+ 	.fb = gm107_fb_new,
+ 	.fuse = gm107_fuse_new,
+ 	.gpio = gk104_gpio_new,
+-	.i2c = gf119_i2c_new,
++	.i2c = gk104_i2c_new,
+ 	.ibus = gk104_ibus_new,
+ 	.iccsense = gf100_iccsense_new,
+ 	.imem = nv50_instmem_new,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gpfifogf100.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gpfifogf100.c
+index cbc67f262322..12d964260a29 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gpfifogf100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gpfifogf100.c
+@@ -60,6 +60,7 @@ gf100_fifo_gpfifo_engine_fini(struct nvkm_fifo_chan *base,
+ 	struct nvkm_gpuobj *inst = chan->base.inst;
+ 	int ret = 0;
+ 
++	mutex_lock(&subdev->mutex);
+ 	nvkm_wr32(device, 0x002634, chan->base.chid);
+ 	if (nvkm_msec(device, 2000,
+ 		if (nvkm_rd32(device, 0x002634) == chan->base.chid)
+@@ -67,10 +68,12 @@ gf100_fifo_gpfifo_engine_fini(struct nvkm_fifo_chan *base,
+ 	) < 0) {
+ 		nvkm_error(subdev, "channel %d [%s] kick timeout\n",
+ 			   chan->base.chid, chan->base.object.client->name);
+-		ret = -EBUSY;
+-		if (suspend)
+-			return ret;
++		ret = -ETIMEDOUT;
+ 	}
++	mutex_unlock(&subdev->mutex);
++
++	if (ret && suspend)
++		return ret;
+ 
+ 	if (offset) {
+ 		nvkm_kmap(inst);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gpfifogk104.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gpfifogk104.c
+index ed4351032ed6..a2df4f3e7763 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gpfifogk104.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gpfifogk104.c
+@@ -40,7 +40,9 @@ gk104_fifo_gpfifo_kick(struct gk104_fifo_chan *chan)
+ 	struct nvkm_subdev *subdev = &fifo->base.engine.subdev;
+ 	struct nvkm_device *device = subdev->device;
+ 	struct nvkm_client *client = chan->base.object.client;
++	int ret = 0;
+ 
++	mutex_lock(&subdev->mutex);
+ 	nvkm_wr32(device, 0x002634, chan->base.chid);
+ 	if (nvkm_msec(device, 2000,
+ 		if (!(nvkm_rd32(device, 0x002634) & 0x00100000))
+@@ -48,10 +50,10 @@ gk104_fifo_gpfifo_kick(struct gk104_fifo_chan *chan)
+ 	) < 0) {
+ 		nvkm_error(subdev, "channel %d [%s] kick timeout\n",
+ 			   chan->base.chid, client->name);
+-		return -EBUSY;
++		ret = -ETIMEDOUT;
+ 	}
+-
+-	return 0;
++	mutex_unlock(&subdev->mutex);
++	return ret;
+ }
+ 
+ static u32
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+index 157919c788e6..6584d505460c 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+@@ -1756,6 +1756,50 @@ gf100_gr_ = {
+ };
+ 
+ int
++gf100_gr_ctor_fw_legacy(struct gf100_gr *gr, const char *fwname,
++			struct gf100_gr_fuc *fuc, int ret)
++{
++	struct nvkm_subdev *subdev = &gr->base.engine.subdev;
++	struct nvkm_device *device = subdev->device;
++	const struct firmware *fw;
++	char f[32];
++
++	/* see if this firmware has a legacy path */
++	if (!strcmp(fwname, "fecs_inst"))
++		fwname = "fuc409c";
++	else if (!strcmp(fwname, "fecs_data"))
++		fwname = "fuc409d";
++	else if (!strcmp(fwname, "gpccs_inst"))
++		fwname = "fuc41ac";
++	else if (!strcmp(fwname, "gpccs_data"))
++		fwname = "fuc41ad";
++	else {
++		/* nope, let's just return the error we got */
++		nvkm_error(subdev, "failed to load %s\n", fwname);
++		return ret;
++	}
++
++	/* yes, try to load from the legacy path */
++	nvkm_debug(subdev, "%s: falling back to legacy path\n", fwname);
++
++	snprintf(f, sizeof(f), "nouveau/nv%02x_%s", device->chipset, fwname);
++	ret = request_firmware(&fw, f, device->dev);
++	if (ret) {
++		snprintf(f, sizeof(f), "nouveau/%s", fwname);
++		ret = request_firmware(&fw, f, device->dev);
++		if (ret) {
++			nvkm_error(subdev, "failed to load %s\n", fwname);
++			return ret;
++		}
++	}
++
++	fuc->size = fw->size;
++	fuc->data = kmemdup(fw->data, fuc->size, GFP_KERNEL);
++	release_firmware(fw);
++	return (fuc->data != NULL) ? 0 : -ENOMEM;
++}
++
++int
+ gf100_gr_ctor_fw(struct gf100_gr *gr, const char *fwname,
+ 		 struct gf100_gr_fuc *fuc)
+ {
+@@ -1765,10 +1809,8 @@ gf100_gr_ctor_fw(struct gf100_gr *gr, const char *fwname,
+ 	int ret;
+ 
+ 	ret = nvkm_firmware_get(device, fwname, &fw);
+-	if (ret) {
+-		nvkm_error(subdev, "failed to load %s\n", fwname);
+-		return ret;
+-	}
++	if (ret)
++		return gf100_gr_ctor_fw_legacy(gr, fwname, fuc, ret);
+ 
+ 	fuc->size = fw->size;
+ 	fuc->data = kmemdup(fw->data, fuc->size, GFP_KERNEL);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/priv.h
+index 212800ecdce9..7d1d3c6b4b72 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/priv.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/priv.h
+@@ -12,6 +12,7 @@ struct nvbios_source {
+ 	bool rw;
+ 	bool ignore_checksum;
+ 	bool no_pcir;
++	bool require_checksum;
+ };
+ 
+ int nvbios_extend(struct nvkm_bios *, u32 length);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
+index b2557e87afdd..7deb81b6dbac 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
+@@ -86,9 +86,12 @@ shadow_image(struct nvkm_bios *bios, int idx, u32 offset, struct shadow *mthd)
+ 		    nvbios_checksum(&bios->data[image.base], image.size)) {
+ 			nvkm_debug(subdev, "%08x: checksum failed\n",
+ 				   image.base);
+-			if (mthd->func->rw)
++			if (!mthd->func->require_checksum) {
++				if (mthd->func->rw)
++					score += 1;
+ 				score += 1;
+-			score += 1;
++			} else
++				return 0;
+ 		} else {
+ 			score += 3;
+ 		}
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowacpi.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowacpi.c
+index 8fecb5ff22a0..06572f8ce914 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowacpi.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowacpi.c
+@@ -99,6 +99,7 @@ nvbios_acpi_fast = {
+ 	.init = acpi_init,
+ 	.read = acpi_read_fast,
+ 	.rw = false,
++	.require_checksum = true,
+ };
+ 
+ const struct nvbios_source
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/ltc/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/ltc/base.c
+index 39c2a38e54f7..0c7ef250dcaf 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/ltc/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/ltc/base.c
+@@ -47,8 +47,10 @@ nvkm_ltc_tags_clear(struct nvkm_ltc *ltc, u32 first, u32 count)
+ 
+ 	BUG_ON((first > limit) || (limit >= ltc->num_tags));
+ 
++	mutex_lock(&ltc->subdev.mutex);
+ 	ltc->func->cbc_clear(ltc, first, limit);
+ 	ltc->func->cbc_wait(ltc);
++	mutex_unlock(&ltc->subdev.mutex);
+ }
+ 
+ int
+diff --git a/drivers/gpu/drm/radeon/radeon_cursor.c b/drivers/gpu/drm/radeon/radeon_cursor.c
+index 2a10e24b34b1..87a72476d313 100644
+--- a/drivers/gpu/drm/radeon/radeon_cursor.c
++++ b/drivers/gpu/drm/radeon/radeon_cursor.c
+@@ -90,6 +90,9 @@ static void radeon_show_cursor(struct drm_crtc *crtc)
+ 	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+ 	struct radeon_device *rdev = crtc->dev->dev_private;
+ 
++	if (radeon_crtc->cursor_out_of_bounds)
++		return;
++
+ 	if (ASIC_IS_DCE4(rdev)) {
+ 		WREG32(EVERGREEN_CUR_SURFACE_ADDRESS_HIGH + radeon_crtc->crtc_offset,
+ 		       upper_32_bits(radeon_crtc->cursor_addr));
+@@ -148,16 +151,17 @@ static int radeon_cursor_move_locked(struct drm_crtc *crtc, int x, int y)
+ 		x += crtc->x;
+ 		y += crtc->y;
+ 	}
+-	DRM_DEBUG("x %d y %d c->x %d c->y %d\n", x, y, crtc->x, crtc->y);
+ 
+-	if (x < 0) {
++	if (x < 0)
+ 		xorigin = min(-x, radeon_crtc->max_cursor_width - 1);
+-		x = 0;
+-	}
+-	if (y < 0) {
++	if (y < 0)
+ 		yorigin = min(-y, radeon_crtc->max_cursor_height - 1);
+-		y = 0;
++
++	if (!ASIC_IS_AVIVO(rdev)) {
++		x += crtc->x;
++		y += crtc->y;
+ 	}
++	DRM_DEBUG("x %d y %d c->x %d c->y %d\n", x, y, crtc->x, crtc->y);
+ 
+ 	/* fixed on DCE6 and newer */
+ 	if (ASIC_IS_AVIVO(rdev) && !ASIC_IS_DCE6(rdev)) {
+@@ -180,27 +184,31 @@ static int radeon_cursor_move_locked(struct drm_crtc *crtc, int x, int y)
+ 		if (i > 1) {
+ 			int cursor_end, frame_end;
+ 
+-			cursor_end = x - xorigin + w;
++			cursor_end = x + w;
+ 			frame_end = crtc->x + crtc->mode.crtc_hdisplay;
+ 			if (cursor_end >= frame_end) {
+ 				w = w - (cursor_end - frame_end);
+ 				if (!(frame_end & 0x7f))
+ 					w--;
+-			} else {
+-				if (!(cursor_end & 0x7f))
+-					w--;
++			} else if (cursor_end <= 0) {
++				goto out_of_bounds;
++			} else if (!(cursor_end & 0x7f)) {
++				w--;
+ 			}
+ 			if (w <= 0) {
+-				w = 1;
+-				cursor_end = x - xorigin + w;
+-				if (!(cursor_end & 0x7f)) {
+-					x--;
+-					WARN_ON_ONCE(x < 0);
+-				}
++				goto out_of_bounds;
+ 			}
+ 		}
+ 	}
+ 
++	if (x <= (crtc->x - w) || y <= (crtc->y - radeon_crtc->cursor_height) ||
++	    x >= (crtc->x + crtc->mode.crtc_hdisplay) ||
++	    y >= (crtc->y + crtc->mode.crtc_vdisplay))
++		goto out_of_bounds;
++
++	x += xorigin;
++	y += yorigin;
++
+ 	if (ASIC_IS_DCE4(rdev)) {
+ 		WREG32(EVERGREEN_CUR_POSITION + radeon_crtc->crtc_offset, (x << 16) | y);
+ 		WREG32(EVERGREEN_CUR_HOT_SPOT + radeon_crtc->crtc_offset, (xorigin << 16) | yorigin);
+@@ -212,6 +220,9 @@ static int radeon_cursor_move_locked(struct drm_crtc *crtc, int x, int y)
+ 		WREG32(AVIVO_D1CUR_SIZE + radeon_crtc->crtc_offset,
+ 		       ((w - 1) << 16) | (radeon_crtc->cursor_height - 1));
+ 	} else {
++		x -= crtc->x;
++		y -= crtc->y;
++
+ 		if (crtc->mode.flags & DRM_MODE_FLAG_DBLSCAN)
+ 			y *= 2;
+ 
+@@ -232,6 +243,19 @@ static int radeon_cursor_move_locked(struct drm_crtc *crtc, int x, int y)
+ 	radeon_crtc->cursor_x = x;
+ 	radeon_crtc->cursor_y = y;
+ 
++	if (radeon_crtc->cursor_out_of_bounds) {
++		radeon_crtc->cursor_out_of_bounds = false;
++		if (radeon_crtc->cursor_bo)
++			radeon_show_cursor(crtc);
++	}
++
++	return 0;
++
++ out_of_bounds:
++	if (!radeon_crtc->cursor_out_of_bounds) {
++		radeon_hide_cursor(crtc);
++		radeon_crtc->cursor_out_of_bounds = true;
++	}
+ 	return 0;
+ }
+ 
+@@ -297,22 +321,23 @@ int radeon_crtc_cursor_set2(struct drm_crtc *crtc,
+ 		return ret;
+ 	}
+ 
+-	radeon_crtc->cursor_width = width;
+-	radeon_crtc->cursor_height = height;
+-
+ 	radeon_lock_cursor(crtc, true);
+ 
+-	if (hot_x != radeon_crtc->cursor_hot_x ||
++	if (width != radeon_crtc->cursor_width ||
++	    height != radeon_crtc->cursor_height ||
++	    hot_x != radeon_crtc->cursor_hot_x ||
+ 	    hot_y != radeon_crtc->cursor_hot_y) {
+ 		int x, y;
+ 
+ 		x = radeon_crtc->cursor_x + radeon_crtc->cursor_hot_x - hot_x;
+ 		y = radeon_crtc->cursor_y + radeon_crtc->cursor_hot_y - hot_y;
+ 
+-		radeon_cursor_move_locked(crtc, x, y);
+-
++		radeon_crtc->cursor_width = width;
++		radeon_crtc->cursor_height = height;
+ 		radeon_crtc->cursor_hot_x = hot_x;
+ 		radeon_crtc->cursor_hot_y = hot_y;
++
++		radeon_cursor_move_locked(crtc, x, y);
+ 	}
+ 
+ 	radeon_show_cursor(crtc);
+diff --git a/drivers/gpu/drm/radeon/radeon_mode.h b/drivers/gpu/drm/radeon/radeon_mode.h
+index bb75201a24ba..f1da484864a9 100644
+--- a/drivers/gpu/drm/radeon/radeon_mode.h
++++ b/drivers/gpu/drm/radeon/radeon_mode.h
+@@ -330,6 +330,7 @@ struct radeon_crtc {
+ 	u16 lut_r[256], lut_g[256], lut_b[256];
+ 	bool enabled;
+ 	bool can_tile;
++	bool cursor_out_of_bounds;
+ 	uint32_t crtc_offset;
+ 	struct drm_gem_object *cursor_bo;
+ 	uint64_t cursor_addr;
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
+index 2523ca96c6c7..2297ec7954af 100644
+--- a/drivers/gpu/drm/radeon/si.c
++++ b/drivers/gpu/drm/radeon/si.c
+@@ -1722,6 +1722,7 @@ static int si_init_microcode(struct radeon_device *rdev)
+ 		    (rdev->pdev->revision == 0x80) ||
+ 		    (rdev->pdev->revision == 0x81) ||
+ 		    (rdev->pdev->revision == 0x83) ||
++		    (rdev->pdev->revision == 0x87) ||
+ 		    (rdev->pdev->device == 0x6604) ||
+ 		    (rdev->pdev->device == 0x6605))
+ 			new_smc = true;
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index c49934527a87..8b5e697f2549 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -3026,6 +3026,7 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
+ 		    (rdev->pdev->revision == 0x80) ||
+ 		    (rdev->pdev->revision == 0x81) ||
+ 		    (rdev->pdev->revision == 0x83) ||
++		    (rdev->pdev->revision == 0x87) ||
+ 		    (rdev->pdev->device == 0x6604) ||
+ 		    (rdev->pdev->device == 0x6605)) {
+ 			max_sclk = 75000;
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index b6c1211b4df7..04508526f1d5 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -348,6 +348,7 @@ void vmbus_free_channels(void)
+ {
+ 	struct vmbus_channel *channel, *tmp;
+ 
++	mutex_lock(&vmbus_connection.channel_mutex);
+ 	list_for_each_entry_safe(channel, tmp, &vmbus_connection.chn_list,
+ 		listentry) {
+ 		/* hv_process_channel_removal() needs this */
+@@ -355,6 +356,7 @@ void vmbus_free_channels(void)
+ 
+ 		vmbus_device_unregister(channel->device_obj);
+ 	}
++	mutex_unlock(&vmbus_connection.channel_mutex);
+ }
+ 
+ /*
+diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
+index 51f81d64ca37..a6ea387b5b00 100644
+--- a/drivers/hwtracing/stm/core.c
++++ b/drivers/hwtracing/stm/core.c
+@@ -361,7 +361,7 @@ static int stm_char_open(struct inode *inode, struct file *file)
+ 	struct stm_file *stmf;
+ 	struct device *dev;
+ 	unsigned int major = imajor(inode);
+-	int err = -ENODEV;
++	int err = -ENOMEM;
+ 
+ 	dev = class_find_device(&stm_class, NULL, &major, major_match);
+ 	if (!dev)
+@@ -369,8 +369,9 @@ static int stm_char_open(struct inode *inode, struct file *file)
+ 
+ 	stmf = kzalloc(sizeof(*stmf), GFP_KERNEL);
+ 	if (!stmf)
+-		return -ENOMEM;
++		goto err_put_device;
+ 
++	err = -ENODEV;
+ 	stm_output_init(&stmf->output);
+ 	stmf->stm = to_stm_device(dev);
+ 
+@@ -382,9 +383,10 @@ static int stm_char_open(struct inode *inode, struct file *file)
+ 	return nonseekable_open(inode, file);
+ 
+ err_free:
++	kfree(stmf);
++err_put_device:
+ 	/* matches class_find_device() above */
+ 	put_device(dev);
+-	kfree(stmf);
+ 
+ 	return err;
+ }
+diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
+index 2d49228f28b2..85b2bfedd6be 100644
+--- a/drivers/infiniband/core/mad.c
++++ b/drivers/infiniband/core/mad.c
+@@ -1746,7 +1746,7 @@ find_mad_agent(struct ib_mad_port_private *port_priv,
+ 			if (!class)
+ 				goto out;
+ 			if (convert_mgmt_class(mad_hdr->mgmt_class) >=
+-			    IB_MGMT_MAX_METHODS)
++			    ARRAY_SIZE(class->method_table))
+ 				goto out;
+ 			method = class->method_table[convert_mgmt_class(
+ 							mad_hdr->mgmt_class)];
+diff --git a/drivers/infiniband/core/multicast.c b/drivers/infiniband/core/multicast.c
+index 51c79b2fb0b8..45523cf1b83b 100644
+--- a/drivers/infiniband/core/multicast.c
++++ b/drivers/infiniband/core/multicast.c
+@@ -518,8 +518,11 @@ static void join_handler(int status, struct ib_sa_mcmember_rec *rec,
+ 		process_join_error(group, status);
+ 	else {
+ 		int mgids_changed, is_mgid0;
+-		ib_find_pkey(group->port->dev->device, group->port->port_num,
+-			     be16_to_cpu(rec->pkey), &pkey_index);
++
++		if (ib_find_pkey(group->port->dev->device,
++				 group->port->port_num, be16_to_cpu(rec->pkey),
++				 &pkey_index))
++			pkey_index = MCAST_INVALID_PKEY_INDEX;
+ 
+ 		spin_lock_irq(&group->port->lock);
+ 		if (group->state == MCAST_BUSY &&
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+index 6329c971c22f..4b892ca2b13a 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+@@ -2501,7 +2501,7 @@ static int i40iw_get_hw_stats(struct ib_device *ibdev,
+ 			return -ENOSYS;
+ 	}
+ 
+-	memcpy(&stats->value[0], &hw_stats, sizeof(*hw_stats));
++	memcpy(&stats->value[0], hw_stats, sizeof(*hw_stats));
+ 
+ 	return stats->num_counters;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index f724a7ef9b67..979e445186cf 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -850,4 +850,5 @@ void rxe_qp_cleanup(void *arg)
+ 	free_rd_atomic_resources(qp);
+ 
+ 	kernel_sock_shutdown(qp->sk, SHUT_RDWR);
++	sock_release(qp->sk);
+ }
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+index 1909dd252c94..fddff403d5d2 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+@@ -575,8 +575,11 @@ void ipoib_mcast_join_task(struct work_struct *work)
+ 	if (!test_bit(IPOIB_FLAG_OPER_UP, &priv->flags))
+ 		return;
+ 
+-	if (ib_query_port(priv->ca, priv->port, &port_attr) ||
+-	    port_attr.state != IB_PORT_ACTIVE) {
++	if (ib_query_port(priv->ca, priv->port, &port_attr)) {
++		ipoib_dbg(priv, "ib_query_port() failed\n");
++		return;
++	}
++	if (port_attr.state != IB_PORT_ACTIVE) {
+ 		ipoib_dbg(priv, "port state is not ACTIVE (state = %d) suspending join task\n",
+ 			  port_attr.state);
+ 		return;
+diff --git a/drivers/input/misc/drv260x.c b/drivers/input/misc/drv260x.c
+index 2adfd86c869a..930424e55439 100644
+--- a/drivers/input/misc/drv260x.c
++++ b/drivers/input/misc/drv260x.c
+@@ -592,7 +592,6 @@ static int drv260x_probe(struct i2c_client *client,
+ 	}
+ 
+ 	haptics->input_dev->name = "drv260x:haptics";
+-	haptics->input_dev->dev.parent = client->dev.parent;
+ 	haptics->input_dev->close = drv260x_close;
+ 	input_set_drvdata(haptics->input_dev, haptics);
+ 	input_set_capability(haptics->input_dev, EV_FF, FF_RUMBLE);
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index ee7fc3701700..a87549be8e53 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -7017,6 +7017,15 @@ static int raid5_run(struct mddev *mddev)
+ 			stripe = (stripe | (stripe-1)) + 1;
+ 		mddev->queue->limits.discard_alignment = stripe;
+ 		mddev->queue->limits.discard_granularity = stripe;
++
++		/*
++		 * We use 16-bit counter of active stripes in bi_phys_segments
++		 * (minus one for over-loaded initialization)
++		 */
++		blk_queue_max_hw_sectors(mddev->queue, 0xfffe * STRIPE_SECTORS);
++		blk_queue_max_discard_sectors(mddev->queue,
++					      0xfffe * STRIPE_SECTORS);
++
+ 		/*
+ 		 * unaligned part of discard request will be ignored, so can't
+ 		 * guarantee discard_zeroes_data
+diff --git a/drivers/media/dvb-frontends/mn88472.c b/drivers/media/dvb-frontends/mn88472.c
+index 18fb2df1e2bd..72650116732c 100644
+--- a/drivers/media/dvb-frontends/mn88472.c
++++ b/drivers/media/dvb-frontends/mn88472.c
+@@ -488,18 +488,6 @@ static int mn88472_probe(struct i2c_client *client,
+ 		goto err_kfree;
+ 	}
+ 
+-	/* Check demod answers with correct chip id */
+-	ret = regmap_read(dev->regmap[0], 0xff, &utmp);
+-	if (ret)
+-		goto err_regmap_0_regmap_exit;
+-
+-	dev_dbg(&client->dev, "chip id=%02x\n", utmp);
+-
+-	if (utmp != 0x02) {
+-		ret = -ENODEV;
+-		goto err_regmap_0_regmap_exit;
+-	}
+-
+ 	/*
+ 	 * Chip has three I2C addresses for different register banks. Used
+ 	 * addresses are 0x18, 0x1a and 0x1c. We register two dummy clients,
+@@ -536,6 +524,18 @@ static int mn88472_probe(struct i2c_client *client,
+ 	}
+ 	i2c_set_clientdata(dev->client[2], dev);
+ 
++	/* Check demod answers with correct chip id */
++	ret = regmap_read(dev->regmap[2], 0xff, &utmp);
++	if (ret)
++		goto err_regmap_2_regmap_exit;
++
++	dev_dbg(&client->dev, "chip id=%02x\n", utmp);
++
++	if (utmp != 0x02) {
++		ret = -ENODEV;
++		goto err_regmap_2_regmap_exit;
++	}
++
+ 	/* Sleep because chip is active by default */
+ 	ret = regmap_write(dev->regmap[2], 0x05, 0x3e);
+ 	if (ret)
+diff --git a/drivers/media/dvb-frontends/mn88473.c b/drivers/media/dvb-frontends/mn88473.c
+index 451974a1d7ed..2932bdc8fa94 100644
+--- a/drivers/media/dvb-frontends/mn88473.c
++++ b/drivers/media/dvb-frontends/mn88473.c
+@@ -485,18 +485,6 @@ static int mn88473_probe(struct i2c_client *client,
+ 		goto err_kfree;
+ 	}
+ 
+-	/* Check demod answers with correct chip id */
+-	ret = regmap_read(dev->regmap[0], 0xff, &uitmp);
+-	if (ret)
+-		goto err_regmap_0_regmap_exit;
+-
+-	dev_dbg(&client->dev, "chip id=%02x\n", uitmp);
+-
+-	if (uitmp != 0x03) {
+-		ret = -ENODEV;
+-		goto err_regmap_0_regmap_exit;
+-	}
+-
+ 	/*
+ 	 * Chip has three I2C addresses for different register banks. Used
+ 	 * addresses are 0x18, 0x1a and 0x1c. We register two dummy clients,
+@@ -533,6 +521,18 @@ static int mn88473_probe(struct i2c_client *client,
+ 	}
+ 	i2c_set_clientdata(dev->client[2], dev);
+ 
++	/* Check demod answers with correct chip id */
++	ret = regmap_read(dev->regmap[2], 0xff, &uitmp);
++	if (ret)
++		goto err_regmap_2_regmap_exit;
++
++	dev_dbg(&client->dev, "chip id=%02x\n", uitmp);
++
++	if (uitmp != 0x03) {
++		ret = -ENODEV;
++		goto err_regmap_2_regmap_exit;
++	}
++
+ 	/* Sleep because chip is active by default */
+ 	ret = regmap_write(dev->regmap[2], 0x05, 0x3e);
+ 	if (ret)
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index 0b6d46c453bf..02a8f511443a 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -815,6 +815,7 @@ static int tvp5150_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		return 0;
+ 	case V4L2_CID_HUE:
+ 		tvp5150_write(sd, TVP5150_HUE_CTL, ctrl->val);
++		break;
+ 	case V4L2_CID_TEST_PATTERN:
+ 		decoder->enable = ctrl->val ? false : true;
+ 		tvp5150_selmux(sd);
+diff --git a/drivers/media/pci/solo6x10/solo6x10.h b/drivers/media/pci/solo6x10/solo6x10.h
+index 5bd498735a66..3f8da5e8c430 100644
+--- a/drivers/media/pci/solo6x10/solo6x10.h
++++ b/drivers/media/pci/solo6x10/solo6x10.h
+@@ -284,7 +284,10 @@ static inline u32 solo_reg_read(struct solo_dev *solo_dev, int reg)
+ static inline void solo_reg_write(struct solo_dev *solo_dev, int reg,
+ 				  u32 data)
+ {
++	u16 val;
++
+ 	writel(data, solo_dev->reg_base + reg);
++	pci_read_config_word(solo_dev->pdev, PCI_STATUS, &val);
+ }
+ 
+ static inline void solo_irq_on(struct solo_dev *dev, u32 mask)
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index e3f104fafd0a..9e88c2f65333 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -1073,6 +1073,7 @@ static struct device *s5p_mfc_alloc_memdev(struct device *dev,
+ 							 idx);
+ 		if (ret == 0)
+ 			return child;
++		device_del(child);
+ 	}
+ 
+ 	put_device(child);
+diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
+index 641c1a566687..75be3d5b2430 100644
+--- a/drivers/misc/mei/client.c
++++ b/drivers/misc/mei/client.c
+@@ -675,7 +675,7 @@ void mei_host_client_init(struct mei_device *dev)
+ 
+ 	pm_runtime_mark_last_busy(dev->dev);
+ 	dev_dbg(dev->dev, "rpm: autosuspend\n");
+-	pm_runtime_autosuspend(dev->dev);
++	pm_request_autosuspend(dev->dev);
+ }
+ 
+ /**
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 7ad15d678878..c8307e8b4c16 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -122,6 +122,8 @@
+ #define MEI_DEV_ID_SPT_H      0xA13A  /* Sunrise Point H */
+ #define MEI_DEV_ID_SPT_H_2    0xA13B  /* Sunrise Point H 2 */
+ 
++#define MEI_DEV_ID_LBG        0xA1BA  /* Lewisburg (SPT) */
++
+ #define MEI_DEV_ID_BXT_M      0x1A9A  /* Broxton M */
+ #define MEI_DEV_ID_APL_I      0x5A9A  /* Apollo Lake I */
+ 
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 5eb9b75ae9ec..69fca9b545df 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -87,6 +87,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_2, mei_me_pch8_cfg)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H, mei_me_pch8_sps_cfg)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H_2, mei_me_pch8_sps_cfg)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_LBG, mei_me_pch8_cfg)},
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_BXT_M, mei_me_pch8_cfg)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_APL_I, mei_me_pch8_cfg)},
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 6eb8f0705c65..1f6205c959cd 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2074,7 +2074,27 @@ static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ 			ctrl &= ~SDHCI_CTRL_EXEC_TUNING;
+ 			sdhci_writew(host, ctrl, SDHCI_HOST_CONTROL2);
+ 
++			sdhci_do_reset(host, SDHCI_RESET_CMD);
++			sdhci_do_reset(host, SDHCI_RESET_DATA);
++
+ 			err = -EIO;
++
++			if (cmd.opcode != MMC_SEND_TUNING_BLOCK_HS200)
++				goto out;
++
++			sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
++			sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
++
++			spin_unlock_irqrestore(&host->lock, flags);
++
++			memset(&cmd, 0, sizeof(cmd));
++			cmd.opcode = MMC_STOP_TRANSMISSION;
++			cmd.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
++			cmd.busy_timeout = 50;
++			mmc_wait_for_cmd(mmc, &cmd, 0);
++
++			spin_lock_irqsave(&host->lock, flags);
++
+ 			goto out;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/marvell/mvpp2.c b/drivers/net/ethernet/marvell/mvpp2.c
+index 60227a3452a4..5588c560ec61 100644
+--- a/drivers/net/ethernet/marvell/mvpp2.c
++++ b/drivers/net/ethernet/marvell/mvpp2.c
+@@ -770,6 +770,17 @@ struct mvpp2_rx_desc {
+ 	u32 reserved8;
+ };
+ 
++struct mvpp2_txq_pcpu_buf {
++	/* Transmitted SKB */
++	struct sk_buff *skb;
++
++	/* Physical address of transmitted buffer */
++	dma_addr_t phys;
++
++	/* Size transmitted */
++	size_t size;
++};
++
+ /* Per-CPU Tx queue control */
+ struct mvpp2_txq_pcpu {
+ 	int cpu;
+@@ -785,11 +796,8 @@ struct mvpp2_txq_pcpu {
+ 	/* Number of Tx DMA descriptors reserved for each CPU */
+ 	int reserved_num;
+ 
+-	/* Array of transmitted skb */
+-	struct sk_buff **tx_skb;
+-
+-	/* Array of transmitted buffers' physical addresses */
+-	dma_addr_t *tx_buffs;
++	/* Infos about transmitted buffers */
++	struct mvpp2_txq_pcpu_buf *buffs;
+ 
+ 	/* Index of last TX DMA descriptor that was inserted */
+ 	int txq_put_index;
+@@ -979,10 +987,11 @@ static void mvpp2_txq_inc_put(struct mvpp2_txq_pcpu *txq_pcpu,
+ 			      struct sk_buff *skb,
+ 			      struct mvpp2_tx_desc *tx_desc)
+ {
+-	txq_pcpu->tx_skb[txq_pcpu->txq_put_index] = skb;
+-	if (skb)
+-		txq_pcpu->tx_buffs[txq_pcpu->txq_put_index] =
+-							 tx_desc->buf_phys_addr;
++	struct mvpp2_txq_pcpu_buf *tx_buf =
++		txq_pcpu->buffs + txq_pcpu->txq_put_index;
++	tx_buf->skb = skb;
++	tx_buf->size = tx_desc->data_size;
++	tx_buf->phys = tx_desc->buf_phys_addr;
+ 	txq_pcpu->txq_put_index++;
+ 	if (txq_pcpu->txq_put_index == txq_pcpu->size)
+ 		txq_pcpu->txq_put_index = 0;
+@@ -4401,17 +4410,16 @@ static void mvpp2_txq_bufs_free(struct mvpp2_port *port,
+ 	int i;
+ 
+ 	for (i = 0; i < num; i++) {
+-		dma_addr_t buf_phys_addr =
+-				    txq_pcpu->tx_buffs[txq_pcpu->txq_get_index];
+-		struct sk_buff *skb = txq_pcpu->tx_skb[txq_pcpu->txq_get_index];
++		struct mvpp2_txq_pcpu_buf *tx_buf =
++			txq_pcpu->buffs + txq_pcpu->txq_get_index;
+ 
+ 		mvpp2_txq_inc_get(txq_pcpu);
+ 
+-		dma_unmap_single(port->dev->dev.parent, buf_phys_addr,
+-				 skb_headlen(skb), DMA_TO_DEVICE);
+-		if (!skb)
++		dma_unmap_single(port->dev->dev.parent, tx_buf->phys,
++				 tx_buf->size, DMA_TO_DEVICE);
++		if (!tx_buf->skb)
+ 			continue;
+-		dev_kfree_skb_any(skb);
++		dev_kfree_skb_any(tx_buf->skb);
+ 	}
+ }
+ 
+@@ -4651,15 +4659,10 @@ static int mvpp2_txq_init(struct mvpp2_port *port,
+ 	for_each_present_cpu(cpu) {
+ 		txq_pcpu = per_cpu_ptr(txq->pcpu, cpu);
+ 		txq_pcpu->size = txq->size;
+-		txq_pcpu->tx_skb = kmalloc(txq_pcpu->size *
+-					   sizeof(*txq_pcpu->tx_skb),
+-					   GFP_KERNEL);
+-		if (!txq_pcpu->tx_skb)
+-			goto error;
+-
+-		txq_pcpu->tx_buffs = kmalloc(txq_pcpu->size *
+-					     sizeof(dma_addr_t), GFP_KERNEL);
+-		if (!txq_pcpu->tx_buffs)
++		txq_pcpu->buffs = kmalloc(txq_pcpu->size *
++					  sizeof(struct mvpp2_txq_pcpu_buf),
++					  GFP_KERNEL);
++		if (!txq_pcpu->buffs)
+ 			goto error;
+ 
+ 		txq_pcpu->count = 0;
+@@ -4673,8 +4676,7 @@ static int mvpp2_txq_init(struct mvpp2_port *port,
+ error:
+ 	for_each_present_cpu(cpu) {
+ 		txq_pcpu = per_cpu_ptr(txq->pcpu, cpu);
+-		kfree(txq_pcpu->tx_skb);
+-		kfree(txq_pcpu->tx_buffs);
++		kfree(txq_pcpu->buffs);
+ 	}
+ 
+ 	dma_free_coherent(port->dev->dev.parent,
+@@ -4693,8 +4695,7 @@ static void mvpp2_txq_deinit(struct mvpp2_port *port,
+ 
+ 	for_each_present_cpu(cpu) {
+ 		txq_pcpu = per_cpu_ptr(txq->pcpu, cpu);
+-		kfree(txq_pcpu->tx_skb);
+-		kfree(txq_pcpu->tx_buffs);
++		kfree(txq_pcpu->buffs);
+ 	}
+ 
+ 	if (txq->descs)
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index 14b13f07cd1f..a35f78be8dec 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -2792,7 +2792,7 @@ u32 ath9k_hw_gpio_get(struct ath_hw *ah, u32 gpio)
+ 		WARN_ON(1);
+ 	}
+ 
+-	return val;
++	return !!val;
+ }
+ EXPORT_SYMBOL(ath9k_hw_gpio_get);
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/pci.c b/drivers/net/wireless/ath/ath9k/pci.c
+index 0dd454acf22a..aff473dfa10d 100644
+--- a/drivers/net/wireless/ath/ath9k/pci.c
++++ b/drivers/net/wireless/ath/ath9k/pci.c
+@@ -26,7 +26,6 @@ static const struct pci_device_id ath_pci_id_table[] = {
+ 	{ PCI_VDEVICE(ATHEROS, 0x0023) }, /* PCI   */
+ 	{ PCI_VDEVICE(ATHEROS, 0x0024) }, /* PCI-E */
+ 	{ PCI_VDEVICE(ATHEROS, 0x0027) }, /* PCI   */
+-	{ PCI_VDEVICE(ATHEROS, 0x0029) }, /* PCI   */
+ 
+ #ifdef CONFIG_ATH9K_PCOEM
+ 	/* Mini PCI AR9220 MB92 cards: Compex WLM200NX, Wistron DNMA-92 */
+@@ -37,7 +36,7 @@ static const struct pci_device_id ath_pci_id_table[] = {
+ 	  .driver_data = ATH9K_PCI_LED_ACT_HI },
+ #endif
+ 
+-	{ PCI_VDEVICE(ATHEROS, 0x002A) }, /* PCI-E */
++	{ PCI_VDEVICE(ATHEROS, 0x0029) }, /* PCI   */
+ 
+ #ifdef CONFIG_ATH9K_PCOEM
+ 	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
+@@ -85,7 +84,11 @@ static const struct pci_device_id ath_pci_id_table[] = {
+ 			 0x10CF, /* Fujitsu */
+ 			 0x1536),
+ 	  .driver_data = ATH9K_PCI_D3_L1_WAR },
++#endif
+ 
++	{ PCI_VDEVICE(ATHEROS, 0x002A) }, /* PCI-E */
++
++#ifdef CONFIG_ATH9K_PCOEM
+ 	/* AR9285 card for Asus */
+ 	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
+ 			 0x002B,
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index c6b246aa2419..4b7807902700 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -4401,6 +4401,13 @@ void rtl8xxxu_gen1_report_connect(struct rtl8xxxu_priv *priv,
+ void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
+ 				  u8 macid, bool connect)
+ {
++#ifdef RTL8XXXU_GEN2_REPORT_CONNECT
++	/*
++	 * Barry Day reports this causes issues with 8192eu and 8723bu
++	 * devices reconnecting. The reason for this is unclear, but
++	 * until it is better understood, leave the code in place but
++	 * disabled, so it is not lost.
++	 */
+ 	struct h2c_cmd h2c;
+ 
+ 	memset(&h2c, 0, sizeof(struct h2c_cmd));
+@@ -4412,6 +4419,7 @@ void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
+ 		h2c.media_status_rpt.parm &= ~BIT(0);
+ 
+ 	rtl8xxxu_gen2_h2c_cmd(priv, &h2c, sizeof(h2c.media_status_rpt));
++#endif
+ }
+ 
+ void rtl8xxxu_gen1_init_aggregation(struct rtl8xxxu_priv *priv)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
+index 264466f59c57..4ac928bf1f8e 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.c
++++ b/drivers/net/wireless/realtek/rtlwifi/base.c
+@@ -1303,12 +1303,13 @@ EXPORT_SYMBOL_GPL(rtl_action_proc);
+ 
+ static void setup_arp_tx(struct rtl_priv *rtlpriv, struct rtl_ps_ctl *ppsc)
+ {
++	struct ieee80211_hw *hw = rtlpriv->hw;
++
+ 	rtlpriv->ra.is_special_data = true;
+ 	if (rtlpriv->cfg->ops->get_btc_status())
+ 		rtlpriv->btcoexist.btc_ops->btc_special_packet_notify(
+ 					rtlpriv, 1);
+-	rtlpriv->enter_ps = false;
+-	schedule_work(&rtlpriv->works.lps_change_work);
++	rtl_lps_leave(hw);
+ 	ppsc->last_delaylps_stamp_jiffies = jiffies;
+ }
+ 
+@@ -1381,8 +1382,7 @@ u8 rtl_is_special_data(struct ieee80211_hw *hw, struct sk_buff *skb, u8 is_tx,
+ 
+ 		if (is_tx) {
+ 			rtlpriv->ra.is_special_data = true;
+-			rtlpriv->enter_ps = false;
+-			schedule_work(&rtlpriv->works.lps_change_work);
++			rtl_lps_leave(hw);
+ 			ppsc->last_delaylps_stamp_jiffies = jiffies;
+ 		}
+ 
+diff --git a/drivers/net/wireless/realtek/rtlwifi/core.c b/drivers/net/wireless/realtek/rtlwifi/core.c
+index 41f77f8a309e..8f783efac960 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/core.c
++++ b/drivers/net/wireless/realtek/rtlwifi/core.c
+@@ -1149,10 +1149,8 @@ static void rtl_op_bss_info_changed(struct ieee80211_hw *hw,
+ 		} else {
+ 			mstatus = RT_MEDIA_DISCONNECT;
+ 
+-			if (mac->link_state == MAC80211_LINKED) {
+-				rtlpriv->enter_ps = false;
+-				schedule_work(&rtlpriv->works.lps_change_work);
+-			}
++			if (mac->link_state == MAC80211_LINKED)
++				rtl_lps_leave(hw);
+ 			if (ppsc->p2p_ps_info.p2p_ps_mode > P2P_PS_NONE)
+ 				rtl_p2p_ps_cmd(hw, P2P_PS_DISABLE);
+ 			mac->link_state = MAC80211_NOLINK;
+@@ -1430,8 +1428,7 @@ static void rtl_op_sw_scan_start(struct ieee80211_hw *hw,
+ 	}
+ 
+ 	if (mac->link_state == MAC80211_LINKED) {
+-		rtlpriv->enter_ps = false;
+-		schedule_work(&rtlpriv->works.lps_change_work);
++		rtl_lps_leave(hw);
+ 		mac->link_state = MAC80211_LINKED_SCANNING;
+ 	} else {
+ 		rtl_ips_nic_on(hw);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index d12586d4f845..e538e23baa9f 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -662,11 +662,9 @@ tx_status_ok:
+ 	}
+ 
+ 	if (((rtlpriv->link_info.num_rx_inperiod +
+-		rtlpriv->link_info.num_tx_inperiod) > 8) ||
+-		(rtlpriv->link_info.num_rx_inperiod > 2)) {
+-		rtlpriv->enter_ps = false;
+-		schedule_work(&rtlpriv->works.lps_change_work);
+-	}
++	      rtlpriv->link_info.num_tx_inperiod) > 8) ||
++	      (rtlpriv->link_info.num_rx_inperiod > 2))
++		rtl_lps_leave(hw);
+ }
+ 
+ static int _rtl_pci_init_one_rxdesc(struct ieee80211_hw *hw,
+@@ -917,10 +915,8 @@ new_trx_end:
+ 		}
+ 		if (((rtlpriv->link_info.num_rx_inperiod +
+ 		      rtlpriv->link_info.num_tx_inperiod) > 8) ||
+-		      (rtlpriv->link_info.num_rx_inperiod > 2)) {
+-			rtlpriv->enter_ps = false;
+-			schedule_work(&rtlpriv->works.lps_change_work);
+-		}
++		      (rtlpriv->link_info.num_rx_inperiod > 2))
++			rtl_lps_leave(hw);
+ 		skb = new_skb;
+ no_new:
+ 		if (rtlpriv->use_new_trx_flow) {
+diff --git a/drivers/net/wireless/realtek/rtlwifi/ps.c b/drivers/net/wireless/realtek/rtlwifi/ps.c
+index 9a64f9b703e5..424e262939c9 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/ps.c
++++ b/drivers/net/wireless/realtek/rtlwifi/ps.c
+@@ -407,8 +407,8 @@ void rtl_lps_set_psmode(struct ieee80211_hw *hw, u8 rt_psmode)
+ 	}
+ }
+ 
+-/*Enter the leisure power save mode.*/
+-void rtl_lps_enter(struct ieee80211_hw *hw)
++/* Interrupt safe routine to enter the leisure power save mode.*/
++static void rtl_lps_enter_core(struct ieee80211_hw *hw)
+ {
+ 	struct rtl_mac *mac = rtl_mac(rtl_priv(hw));
+ 	struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));
+@@ -444,10 +444,9 @@ void rtl_lps_enter(struct ieee80211_hw *hw)
+ 
+ 	spin_unlock_irqrestore(&rtlpriv->locks.lps_lock, flag);
+ }
+-EXPORT_SYMBOL(rtl_lps_enter);
+ 
+-/*Leave the leisure power save mode.*/
+-void rtl_lps_leave(struct ieee80211_hw *hw)
++/* Interrupt safe routine to leave the leisure power save mode.*/
++static void rtl_lps_leave_core(struct ieee80211_hw *hw)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));
+@@ -477,7 +476,6 @@ void rtl_lps_leave(struct ieee80211_hw *hw)
+ 	}
+ 	spin_unlock_irqrestore(&rtlpriv->locks.lps_lock, flag);
+ }
+-EXPORT_SYMBOL(rtl_lps_leave);
+ 
+ /* For sw LPS*/
+ void rtl_swlps_beacon(struct ieee80211_hw *hw, void *data, unsigned int len)
+@@ -670,12 +668,34 @@ void rtl_lps_change_work_callback(struct work_struct *work)
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 
+ 	if (rtlpriv->enter_ps)
+-		rtl_lps_enter(hw);
++		rtl_lps_enter_core(hw);
+ 	else
+-		rtl_lps_leave(hw);
++		rtl_lps_leave_core(hw);
+ }
+ EXPORT_SYMBOL_GPL(rtl_lps_change_work_callback);
+ 
++void rtl_lps_enter(struct ieee80211_hw *hw)
++{
++	struct rtl_priv *rtlpriv = rtl_priv(hw);
++
++	if (!in_interrupt())
++		return rtl_lps_enter_core(hw);
++	rtlpriv->enter_ps = true;
++	schedule_work(&rtlpriv->works.lps_change_work);
++}
++EXPORT_SYMBOL_GPL(rtl_lps_enter);
++
++void rtl_lps_leave(struct ieee80211_hw *hw)
++{
++	struct rtl_priv *rtlpriv = rtl_priv(hw);
++
++	if (!in_interrupt())
++		return rtl_lps_leave_core(hw);
++	rtlpriv->enter_ps = false;
++	schedule_work(&rtlpriv->works.lps_change_work);
++}
++EXPORT_SYMBOL_GPL(rtl_lps_leave);
++
+ void rtl_swlps_wq_callback(void *data)
+ {
+ 	struct rtl_works *rtlworks = container_of_dwork_rtl(data,
+diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
+index cea8350fbc7e..a2ac9e641aa9 100644
+--- a/drivers/nvdimm/pfn_devs.c
++++ b/drivers/nvdimm/pfn_devs.c
+@@ -108,7 +108,7 @@ static ssize_t align_show(struct device *dev,
+ {
+ 	struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev);
+ 
+-	return sprintf(buf, "%lx\n", nd_pfn->align);
++	return sprintf(buf, "%ld\n", nd_pfn->align);
+ }
+ 
+ static ssize_t __align_store(struct nd_pfn *nd_pfn, const char *buf)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index aab9d5115a5f..24db77ea5093 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2064,6 +2064,10 @@ bool pci_dev_run_wake(struct pci_dev *dev)
+ 	if (!dev->pme_support)
+ 		return false;
+ 
++	/* PME-capable in principle, but not from the intended sleep state */
++	if (!pci_pme_capable(dev, pci_target_state(dev)))
++		return false;
++
+ 	while (bus->parent) {
+ 		struct pci_dev *bridge = bus->self;
+ 
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index adecc1c555f0..c4ed3e53c0ea 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -137,6 +137,15 @@ static const struct dmi_system_id asus_quirks[] = {
+ 	},
+ 	{
+ 		.callback = dmi_matched,
++		.ident = "ASUSTeK COMPUTER INC. X45U",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "X45U"),
++		},
++		.driver_data = &quirk_asus_wapf4,
++	},
++	{
++		.callback = dmi_matched,
+ 		.ident = "ASUSTeK COMPUTER INC. X456UA",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+diff --git a/drivers/regulator/stw481x-vmmc.c b/drivers/regulator/stw481x-vmmc.c
+index 7d2ae3e9e942..342f5da79975 100644
+--- a/drivers/regulator/stw481x-vmmc.c
++++ b/drivers/regulator/stw481x-vmmc.c
+@@ -47,7 +47,8 @@ static struct regulator_desc vmmc_regulator = {
+ 	.volt_table = stw481x_vmmc_voltages,
+ 	.enable_time = 200, /* FIXME: look this up */
+ 	.enable_reg = STW_CONF1,
+-	.enable_mask = STW_CONF1_PDN_VMMC,
++	.enable_mask = STW_CONF1_PDN_VMMC | STW_CONF1_MMC_LS_STATUS,
++	.enable_val = STW_CONF1_PDN_VMMC,
+ 	.vsel_reg = STW_CONF1,
+ 	.vsel_mask = STW_CONF1_VMMC_MASK,
+ };
+diff --git a/drivers/s390/char/vmlogrdr.c b/drivers/s390/char/vmlogrdr.c
+index e883063c7258..3167e8581994 100644
+--- a/drivers/s390/char/vmlogrdr.c
++++ b/drivers/s390/char/vmlogrdr.c
+@@ -870,7 +870,7 @@ static int __init vmlogrdr_init(void)
+ 		goto cleanup;
+ 
+ 	for (i=0; i < MAXMINOR; ++i ) {
+-		sys_ser[i].buffer = (char *) get_zeroed_page(GFP_KERNEL);
++		sys_ser[i].buffer = (char *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
+ 		if (!sys_ser[i].buffer) {
+ 			rc = -ENOMEM;
+ 			break;
+diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
+index 581001989937..d5bf36ec8a75 100644
+--- a/drivers/s390/scsi/zfcp_dbf.c
++++ b/drivers/s390/scsi/zfcp_dbf.c
+@@ -289,11 +289,12 @@ void zfcp_dbf_rec_trig(char *tag, struct zfcp_adapter *adapter,
+ 
+ 
+ /**
+- * zfcp_dbf_rec_run - trace event related to running recovery
++ * zfcp_dbf_rec_run_lvl - trace event related to running recovery
++ * @level: trace level to be used for event
+  * @tag: identifier for event
+  * @erp: erp_action running
+  */
+-void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
++void zfcp_dbf_rec_run_lvl(int level, char *tag, struct zfcp_erp_action *erp)
+ {
+ 	struct zfcp_dbf *dbf = erp->adapter->dbf;
+ 	struct zfcp_dbf_rec *rec = &dbf->rec_buf;
+@@ -319,11 +320,21 @@ void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
+ 	else
+ 		rec->u.run.rec_count = atomic_read(&erp->adapter->erp_counter);
+ 
+-	debug_event(dbf->rec, 1, rec, sizeof(*rec));
++	debug_event(dbf->rec, level, rec, sizeof(*rec));
+ 	spin_unlock_irqrestore(&dbf->rec_lock, flags);
+ }
+ 
+ /**
++ * zfcp_dbf_rec_run - trace event related to running recovery
++ * @tag: identifier for event
++ * @erp: erp_action running
++ */
++void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
++{
++	zfcp_dbf_rec_run_lvl(1, tag, erp);
++}
++
++/**
+  * zfcp_dbf_rec_run_wka - trace wka port event with info like running recovery
+  * @tag: identifier for event
+  * @wka_port: well known address port
+diff --git a/drivers/s390/scsi/zfcp_dbf.h b/drivers/s390/scsi/zfcp_dbf.h
+index 36d07584271d..db186d44cfaf 100644
+--- a/drivers/s390/scsi/zfcp_dbf.h
++++ b/drivers/s390/scsi/zfcp_dbf.h
+@@ -2,7 +2,7 @@
+  * zfcp device driver
+  * debug feature declarations
+  *
+- * Copyright IBM Corp. 2008, 2015
++ * Copyright IBM Corp. 2008, 2016
+  */
+ 
+ #ifndef ZFCP_DBF_H
+@@ -283,6 +283,30 @@ struct zfcp_dbf {
+ 	struct zfcp_dbf_scsi		scsi_buf;
+ };
+ 
++/**
++ * zfcp_dbf_hba_fsf_resp_suppress - true if we should not trace by default
++ * @req: request that has been completed
++ *
++ * Returns true if FCP response with only benign residual under count.
++ */
++static inline
++bool zfcp_dbf_hba_fsf_resp_suppress(struct zfcp_fsf_req *req)
++{
++	struct fsf_qtcb *qtcb = req->qtcb;
++	u32 fsf_stat = qtcb->header.fsf_status;
++	struct fcp_resp *fcp_rsp;
++	u8 rsp_flags, fr_status;
++
++	if (qtcb->prefix.qtcb_type != FSF_IO_COMMAND)
++		return false; /* not an FCP response */
++	fcp_rsp = (struct fcp_resp *)&qtcb->bottom.io.fcp_rsp;
++	rsp_flags = fcp_rsp->fr_flags;
++	fr_status = fcp_rsp->fr_status;
++	return (fsf_stat == FSF_FCP_RSP_AVAILABLE) &&
++		(rsp_flags == FCP_RESID_UNDER) &&
++		(fr_status == SAM_STAT_GOOD);
++}
++
+ static inline
+ void zfcp_dbf_hba_fsf_resp(char *tag, int level, struct zfcp_fsf_req *req)
+ {
+@@ -304,7 +328,9 @@ void zfcp_dbf_hba_fsf_response(struct zfcp_fsf_req *req)
+ 		zfcp_dbf_hba_fsf_resp("fs_perr", 1, req);
+ 
+ 	} else if (qtcb->header.fsf_status != FSF_GOOD) {
+-		zfcp_dbf_hba_fsf_resp("fs_ferr", 1, req);
++		zfcp_dbf_hba_fsf_resp("fs_ferr",
++				      zfcp_dbf_hba_fsf_resp_suppress(req)
++				      ? 5 : 1, req);
+ 
+ 	} else if ((req->fsf_command == FSF_QTCB_OPEN_PORT_WITH_DID) ||
+ 		   (req->fsf_command == FSF_QTCB_OPEN_LUN)) {
+@@ -388,4 +414,15 @@ void zfcp_dbf_scsi_devreset(char *tag, struct scsi_cmnd *scmnd, u8 flag)
+ 	_zfcp_dbf_scsi(tmp_tag, 1, scmnd, NULL);
+ }
+ 
++/**
++ * zfcp_dbf_scsi_nullcmnd() - trace NULLify of SCSI command in dev/tgt-reset.
++ * @scmnd: SCSI command that was NULLified.
++ * @fsf_req: request that owned @scmnd.
++ */
++static inline void zfcp_dbf_scsi_nullcmnd(struct scsi_cmnd *scmnd,
++					  struct zfcp_fsf_req *fsf_req)
++{
++	_zfcp_dbf_scsi("scfc__1", 3, scmnd, fsf_req);
++}
++
+ #endif /* ZFCP_DBF_H */
+diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
+index a59d678125bd..7ccfce559034 100644
+--- a/drivers/s390/scsi/zfcp_erp.c
++++ b/drivers/s390/scsi/zfcp_erp.c
+@@ -3,7 +3,7 @@
+  *
+  * Error Recovery Procedures (ERP).
+  *
+- * Copyright IBM Corp. 2002, 2015
++ * Copyright IBM Corp. 2002, 2016
+  */
+ 
+ #define KMSG_COMPONENT "zfcp"
+@@ -1204,6 +1204,62 @@ static void zfcp_erp_action_dequeue(struct zfcp_erp_action *erp_action)
+ 	}
+ }
+ 
++/**
++ * zfcp_erp_try_rport_unblock - unblock rport if no more/new recovery
++ * @port: zfcp_port whose fc_rport we should try to unblock
++ */
++static void zfcp_erp_try_rport_unblock(struct zfcp_port *port)
++{
++	unsigned long flags;
++	struct zfcp_adapter *adapter = port->adapter;
++	int port_status;
++	struct Scsi_Host *shost = adapter->scsi_host;
++	struct scsi_device *sdev;
++
++	write_lock_irqsave(&adapter->erp_lock, flags);
++	port_status = atomic_read(&port->status);
++	if ((port_status & ZFCP_STATUS_COMMON_UNBLOCKED)    == 0 ||
++	    (port_status & (ZFCP_STATUS_COMMON_ERP_INUSE |
++			    ZFCP_STATUS_COMMON_ERP_FAILED)) != 0) {
++		/* new ERP of severity >= port triggered elsewhere meanwhile or
++		 * local link down (adapter erp_failed but not clear unblock)
++		 */
++		zfcp_dbf_rec_run_lvl(4, "ertru_p", &port->erp_action);
++		write_unlock_irqrestore(&adapter->erp_lock, flags);
++		return;
++	}
++	spin_lock(shost->host_lock);
++	__shost_for_each_device(sdev, shost) {
++		struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
++		int lun_status;
++
++		if (zsdev->port != port)
++			continue;
++		/* LUN under port of interest */
++		lun_status = atomic_read(&zsdev->status);
++		if ((lun_status & ZFCP_STATUS_COMMON_ERP_FAILED) != 0)
++			continue; /* unblock rport despite failed LUNs */
++		/* LUN recovery not given up yet [maybe follow-up pending] */
++		if ((lun_status & ZFCP_STATUS_COMMON_UNBLOCKED) == 0 ||
++		    (lun_status & ZFCP_STATUS_COMMON_ERP_INUSE) != 0) {
++			/* LUN blocked:
++			 * not yet unblocked [LUN recovery pending]
++			 * or meanwhile blocked [new LUN recovery triggered]
++			 */
++			zfcp_dbf_rec_run_lvl(4, "ertru_l", &zsdev->erp_action);
++			spin_unlock(shost->host_lock);
++			write_unlock_irqrestore(&adapter->erp_lock, flags);
++			return;
++		}
++	}
++	/* now port has no child or all children have completed recovery,
++	 * and no ERP of severity >= port was meanwhile triggered elsewhere
++	 */
++	zfcp_scsi_schedule_rport_register(port);
++	spin_unlock(shost->host_lock);
++	write_unlock_irqrestore(&adapter->erp_lock, flags);
++}
++
+ static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
+ {
+ 	struct zfcp_adapter *adapter = act->adapter;
+@@ -1214,6 +1270,7 @@ static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
+ 	case ZFCP_ERP_ACTION_REOPEN_LUN:
+ 		if (!(act->status & ZFCP_STATUS_ERP_NO_REF))
+ 			scsi_device_put(sdev);
++		zfcp_erp_try_rport_unblock(port);
+ 		break;
+ 
+ 	case ZFCP_ERP_ACTION_REOPEN_PORT:
+@@ -1224,7 +1281,7 @@ static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
+ 		 */
+ 		if (act->step != ZFCP_ERP_STEP_UNINITIALIZED)
+ 			if (result == ZFCP_ERP_SUCCEEDED)
+-				zfcp_scsi_schedule_rport_register(port);
++				zfcp_erp_try_rport_unblock(port);
+ 		/* fall through */
+ 	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
+ 		put_device(&port->dev);
+diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
+index c8fed9fa1cca..21c8c689b02b 100644
+--- a/drivers/s390/scsi/zfcp_ext.h
++++ b/drivers/s390/scsi/zfcp_ext.h
+@@ -3,7 +3,7 @@
+  *
+  * External function declarations.
+  *
+- * Copyright IBM Corp. 2002, 2015
++ * Copyright IBM Corp. 2002, 2016
+  */
+ 
+ #ifndef ZFCP_EXT_H
+@@ -35,6 +35,8 @@ extern void zfcp_dbf_adapter_unregister(struct zfcp_adapter *);
+ extern void zfcp_dbf_rec_trig(char *, struct zfcp_adapter *,
+ 			      struct zfcp_port *, struct scsi_device *, u8, u8);
+ extern void zfcp_dbf_rec_run(char *, struct zfcp_erp_action *);
++extern void zfcp_dbf_rec_run_lvl(int level, char *tag,
++				 struct zfcp_erp_action *erp);
+ extern void zfcp_dbf_rec_run_wka(char *, struct zfcp_fc_wka_port *, u64);
+ extern void zfcp_dbf_hba_fsf_uss(char *, struct zfcp_fsf_req *);
+ extern void zfcp_dbf_hba_fsf_res(char *, int, struct zfcp_fsf_req *);
+diff --git a/drivers/s390/scsi/zfcp_fsf.h b/drivers/s390/scsi/zfcp_fsf.h
+index be1c04b334c5..ea3c76ac0de1 100644
+--- a/drivers/s390/scsi/zfcp_fsf.h
++++ b/drivers/s390/scsi/zfcp_fsf.h
+@@ -3,7 +3,7 @@
+  *
+  * Interface to the FSF support functions.
+  *
+- * Copyright IBM Corp. 2002, 2015
++ * Copyright IBM Corp. 2002, 2016
+  */
+ 
+ #ifndef FSF_H
+@@ -78,6 +78,7 @@
+ #define FSF_APP_TAG_CHECK_FAILURE		0x00000082
+ #define FSF_REF_TAG_CHECK_FAILURE		0x00000083
+ #define FSF_ADAPTER_STATUS_AVAILABLE		0x000000AD
++#define FSF_FCP_RSP_AVAILABLE			0x000000AF
+ #define FSF_UNKNOWN_COMMAND			0x000000E2
+ #define FSF_UNKNOWN_OP_SUBTYPE                  0x000000E3
+ #define FSF_INVALID_COMMAND_OPTION              0x000000E5
+diff --git a/drivers/s390/scsi/zfcp_reqlist.h b/drivers/s390/scsi/zfcp_reqlist.h
+index 7c2c6194dfca..703fce59befe 100644
+--- a/drivers/s390/scsi/zfcp_reqlist.h
++++ b/drivers/s390/scsi/zfcp_reqlist.h
+@@ -4,7 +4,7 @@
+  * Data structure and helper functions for tracking pending FSF
+  * requests.
+  *
+- * Copyright IBM Corp. 2009
++ * Copyright IBM Corp. 2009, 2016
+  */
+ 
+ #ifndef ZFCP_REQLIST_H
+@@ -180,4 +180,32 @@ static inline void zfcp_reqlist_move(struct zfcp_reqlist *rl,
+ 	spin_unlock_irqrestore(&rl->lock, flags);
+ }
+ 
++/**
++ * zfcp_reqlist_apply_for_all() - apply a function to every request.
++ * @rl: the requestlist that contains the target requests.
++ * @f: the function to apply to each request; the first parameter of the
++ *     function will be the target-request; the second parameter is the same
++ *     pointer as given with the argument @data.
++ * @data: freely chosen argument; passed through to @f as second parameter.
++ *
++ * Uses :c:macro:`list_for_each_entry` to iterate over the lists in the hash-
++ * table (not a 'safe' variant, so don't modify the list).
++ *
++ * Holds @rl->lock over the entire request-iteration.
++ */
++static inline void
++zfcp_reqlist_apply_for_all(struct zfcp_reqlist *rl,
++			   void (*f)(struct zfcp_fsf_req *, void *), void *data)
++{
++	struct zfcp_fsf_req *req;
++	unsigned long flags;
++	unsigned int i;
++
++	spin_lock_irqsave(&rl->lock, flags);
++	for (i = 0; i < ZFCP_REQ_LIST_BUCKETS; i++)
++		list_for_each_entry(req, &rl->buckets[i], list)
++			f(req, data);
++	spin_unlock_irqrestore(&rl->lock, flags);
++}
++
+ #endif /* ZFCP_REQLIST_H */
+diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
+index 9069f98a1817..07ffdbb5107f 100644
+--- a/drivers/s390/scsi/zfcp_scsi.c
++++ b/drivers/s390/scsi/zfcp_scsi.c
+@@ -3,7 +3,7 @@
+  *
+  * Interface to Linux SCSI midlayer.
+  *
+- * Copyright IBM Corp. 2002, 2015
++ * Copyright IBM Corp. 2002, 2016
+  */
+ 
+ #define KMSG_COMPONENT "zfcp"
+@@ -88,9 +88,7 @@ int zfcp_scsi_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scpnt)
+ 	}
+ 
+ 	if (unlikely(!(status & ZFCP_STATUS_COMMON_UNBLOCKED))) {
+-		/* This could be either
+-		 * open LUN pending: this is temporary, will result in
+-		 *	open LUN or ERP_FAILED, so retry command
++		/* This could be
+ 		 * call to rport_delete pending: mimic retry from
+ 		 * 	fc_remote_port_chkready until rport is BLOCKED
+ 		 */
+@@ -209,6 +207,57 @@ static int zfcp_scsi_eh_abort_handler(struct scsi_cmnd *scpnt)
+ 	return retval;
+ }
+ 
++struct zfcp_scsi_req_filter {
++	u8 tmf_scope;
++	u32 lun_handle;
++	u32 port_handle;
++};
++
++static void zfcp_scsi_forget_cmnd(struct zfcp_fsf_req *old_req, void *data)
++{
++	struct zfcp_scsi_req_filter *filter =
++		(struct zfcp_scsi_req_filter *)data;
++
++	/* already aborted - prevent side-effects - or not a SCSI command */
++	if (old_req->data == NULL || old_req->fsf_command != FSF_QTCB_FCP_CMND)
++		return;
++
++	/* (tmf_scope == FCP_TMF_TGT_RESET || tmf_scope == FCP_TMF_LUN_RESET) */
++	if (old_req->qtcb->header.port_handle != filter->port_handle)
++		return;
++
++	if (filter->tmf_scope == FCP_TMF_LUN_RESET &&
++	    old_req->qtcb->header.lun_handle != filter->lun_handle)
++		return;
++
++	zfcp_dbf_scsi_nullcmnd((struct scsi_cmnd *)old_req->data, old_req);
++	old_req->data = NULL;
++}
++
++static void zfcp_scsi_forget_cmnds(struct zfcp_scsi_dev *zsdev, u8 tm_flags)
++{
++	struct zfcp_adapter *adapter = zsdev->port->adapter;
++	struct zfcp_scsi_req_filter filter = {
++		.tmf_scope = FCP_TMF_TGT_RESET,
++		.port_handle = zsdev->port->handle,
++	};
++	unsigned long flags;
++
++	if (tm_flags == FCP_TMF_LUN_RESET) {
++		filter.tmf_scope = FCP_TMF_LUN_RESET;
++		filter.lun_handle = zsdev->lun_handle;
++	}
++
++	/*
++	 * abort_lock secures against other processings - in the abort-function
++	 * and normal cmnd-handler - of (struct zfcp_fsf_req *)->data
++	 */
++	write_lock_irqsave(&adapter->abort_lock, flags);
++	zfcp_reqlist_apply_for_all(adapter->req_list, zfcp_scsi_forget_cmnd,
++				   &filter);
++	write_unlock_irqrestore(&adapter->abort_lock, flags);
++}
++
+ static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags)
+ {
+ 	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(scpnt->device);
+@@ -241,8 +290,10 @@ static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags)
+ 	if (fsf_req->status & ZFCP_STATUS_FSFREQ_TMFUNCFAILED) {
+ 		zfcp_dbf_scsi_devreset("fail", scpnt, tm_flags);
+ 		retval = FAILED;
+-	} else
++	} else {
+ 		zfcp_dbf_scsi_devreset("okay", scpnt, tm_flags);
++		zfcp_scsi_forget_cmnds(zfcp_sdev, tm_flags);
++	}
+ 
+ 	zfcp_fsf_req_free(fsf_req);
+ 	return retval;
+diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
+index 79871f3519ff..d5b26fa541d3 100644
+--- a/drivers/scsi/aacraid/linit.c
++++ b/drivers/scsi/aacraid/linit.c
+@@ -160,7 +160,6 @@ static const struct pci_device_id aac_pci_tbl[] = {
+ 	{ 0x9005, 0x028b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 62 }, /* Adaptec PMC Series 6 (Tupelo) */
+ 	{ 0x9005, 0x028c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 63 }, /* Adaptec PMC Series 7 (Denali) */
+ 	{ 0x9005, 0x028d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 64 }, /* Adaptec PMC Series 8 */
+-	{ 0x9005, 0x028f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 65 }, /* Adaptec PMC Series 9 */
+ 	{ 0,}
+ };
+ MODULE_DEVICE_TABLE(pci, aac_pci_tbl);
+@@ -239,7 +238,6 @@ static struct aac_driver_ident aac_drivers[] = {
+ 	{ aac_src_init, "aacraid", "ADAPTEC ", "RAID            ", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 6 (Tupelo) */
+ 	{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID            ", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 7 (Denali) */
+ 	{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID            ", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 8 */
+-	{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID            ", 2, AAC_QUIRK_SRC } /* Adaptec PMC Series 9 */
+ };
+ 
+ /**
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 52d8bbf7feb5..bd04bd01d34a 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -2000,6 +2000,8 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,
+ 		io_request->DevHandle = pd_sync->seq[pd_index].devHandle;
+ 		pRAID_Context->regLockFlags |=
+ 			(MR_RL_FLAGS_SEQ_NUM_ENABLE|MR_RL_FLAGS_GRANT_DESTINATION_CUDA);
++		pRAID_Context->Type = MPI2_TYPE_CUDA;
++		pRAID_Context->nseg = 0x1;
+ 	} else if (fusion->fast_path_io) {
+ 		pRAID_Context->VirtualDiskTgtId = cpu_to_le16(device_id);
+ 		pRAID_Context->configSeqNum = 0;
+@@ -2035,12 +2037,10 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,
+ 		pRAID_Context->timeoutValue =
+ 			cpu_to_le16((os_timeout_value > timeout_limit) ?
+ 			timeout_limit : os_timeout_value);
+-		if (fusion->adapter_type == INVADER_SERIES) {
+-			pRAID_Context->Type = MPI2_TYPE_CUDA;
+-			pRAID_Context->nseg = 0x1;
++		if (fusion->adapter_type == INVADER_SERIES)
+ 			io_request->IoFlags |=
+ 				cpu_to_le16(MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH);
+-		}
++
+ 		cmd->request_desc->SCSIIO.RequestFlags =
+ 			(MPI2_REQ_DESCRIPT_FLAGS_FP_IO <<
+ 				MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
+@@ -2823,6 +2823,7 @@ int megasas_wait_for_outstanding_fusion(struct megasas_instance *instance,
+ 		dev_err(&instance->pdev->dev, "pending commands remain after waiting, "
+ 		       "will reset adapter scsi%d.\n",
+ 		       instance->host->host_no);
++		*convert = 1;
+ 		retval = 1;
+ 	}
+ out:
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 07349270535d..82dfe07b1d47 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -1204,10 +1204,6 @@ int scsi_sysfs_add_sdev(struct scsi_device *sdev)
+ 	struct request_queue *rq = sdev->request_queue;
+ 	struct scsi_target *starget = sdev->sdev_target;
+ 
+-	error = scsi_device_set_state(sdev, SDEV_RUNNING);
+-	if (error)
+-		return error;
+-
+ 	error = scsi_target_add(starget);
+ 	if (error)
+ 		return error;
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index ae7d9bdf409c..a1c29b0afb22 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -592,6 +592,9 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos)
+ 	sg_io_hdr_t *hp;
+ 	unsigned char cmnd[SG_MAX_CDB_SIZE];
+ 
++	if (unlikely(segment_eq(get_fs(), KERNEL_DS)))
++		return -EINVAL;
++
+ 	if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp)))
+ 		return -ENXIO;
+ 	SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp,
+diff --git a/drivers/ssb/pci.c b/drivers/ssb/pci.c
+index 0f28c08fcb3c..77b551da5728 100644
+--- a/drivers/ssb/pci.c
++++ b/drivers/ssb/pci.c
+@@ -909,6 +909,7 @@ static int ssb_pci_sprom_get(struct ssb_bus *bus,
+ 			if (err) {
+ 				ssb_warn("WARNING: Using fallback SPROM failed (err %d)\n",
+ 					 err);
++				goto out_free;
+ 			} else {
+ 				ssb_dbg("Using SPROM revision %d provided by platform\n",
+ 					sprom->revision);
+diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c
+index 0f97d7b611d7..1c967c30e4ce 100644
+--- a/drivers/staging/comedi/drivers/ni_mio_common.c
++++ b/drivers/staging/comedi/drivers/ni_mio_common.c
+@@ -1832,7 +1832,7 @@ static int ni_ai_insn_read(struct comedi_device *dev,
+ 			   unsigned int *data)
+ {
+ 	struct ni_private *devpriv = dev->private;
+-	unsigned int mask = (s->maxdata + 1) >> 1;
++	unsigned int mask = s->maxdata;
+ 	int i, n;
+ 	unsigned int signbits;
+ 	unsigned int d;
+@@ -1875,7 +1875,7 @@ static int ni_ai_insn_read(struct comedi_device *dev,
+ 				return -ETIME;
+ 			}
+ 			d += signbits;
+-			data[n] = d;
++			data[n] = d & 0xffff;
+ 		}
+ 	} else if (devpriv->is_6143) {
+ 		for (n = 0; n < insn->n; n++) {
+@@ -1924,9 +1924,8 @@ static int ni_ai_insn_read(struct comedi_device *dev,
+ 				data[n] = dl;
+ 			} else {
+ 				d = ni_readw(dev, NI_E_AI_FIFO_DATA_REG);
+-				/* subtle: needs to be short addition */
+ 				d += signbits;
+-				data[n] = d;
++				data[n] = d & 0xffff;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c
+index 923c032f0b95..e980e2d0c2db 100644
+--- a/drivers/target/iscsi/iscsi_target_configfs.c
++++ b/drivers/target/iscsi/iscsi_target_configfs.c
+@@ -100,8 +100,10 @@ static ssize_t lio_target_np_driver_store(struct config_item *item,
+ 
+ 		tpg_np_new = iscsit_tpg_add_network_portal(tpg,
+ 					&np->np_sockaddr, tpg_np, type);
+-		if (IS_ERR(tpg_np_new))
++		if (IS_ERR(tpg_np_new)) {
++			rc = PTR_ERR(tpg_np_new);
+ 			goto out;
++		}
+ 	} else {
+ 		tpg_np_new = iscsit_tpg_locate_child_np(tpg_np, type);
+ 		if (tpg_np_new) {
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 62bf4fe5704a..b8a986c2c567 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -682,8 +682,6 @@ static int tcmu_check_expired_cmd(int id, void *p, void *data)
+ 	target_complete_cmd(cmd->se_cmd, SAM_STAT_CHECK_CONDITION);
+ 	cmd->se_cmd = NULL;
+ 
+-	kmem_cache_free(tcmu_cmd_cache, cmd);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/thermal/thermal_hwmon.c b/drivers/thermal/thermal_hwmon.c
+index c41c7742903a..2dcd4194d103 100644
+--- a/drivers/thermal/thermal_hwmon.c
++++ b/drivers/thermal/thermal_hwmon.c
+@@ -98,7 +98,7 @@ temp_crit_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 	int temperature;
+ 	int ret;
+ 
+-	ret = tz->ops->get_trip_temp(tz, 0, &temperature);
++	ret = tz->ops->get_crit_temp(tz, &temperature);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index f36e6df2fa90..e086ea4d2997 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -1240,7 +1240,7 @@ static int sc16is7xx_probe(struct device *dev,
+ 
+ 	/* Setup interrupt */
+ 	ret = devm_request_irq(dev, irq, sc16is7xx_irq,
+-			       IRQF_ONESHOT | flags, dev_name(dev), s);
++			       flags, dev_name(dev), s);
+ 	if (!ret)
+ 		return 0;
+ 
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index 0f8caae4267d..ece10e6b731b 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -982,7 +982,7 @@ static void kbd_led_trigger_activate(struct led_classdev *cdev)
+ 	KBD_LED_TRIGGER((_led_bit) + 8, _name)
+ 
+ static struct kbd_led_trigger kbd_led_triggers[] = {
+-	KBD_LED_TRIGGER(VC_SCROLLOCK, "kbd-scrollock"),
++	KBD_LED_TRIGGER(VC_SCROLLOCK, "kbd-scrolllock"),
+ 	KBD_LED_TRIGGER(VC_NUMLOCK,   "kbd-numlock"),
+ 	KBD_LED_TRIGGER(VC_CAPSLOCK,  "kbd-capslock"),
+ 	KBD_LED_TRIGGER(VC_KANALOCK,  "kbd-kanalock"),
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index b010242bab32..496d99b80fb0 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -1885,6 +1885,7 @@ void iterate_bdevs(void (*func)(struct block_device *, void *), void *arg)
+ 	spin_lock(&blockdev_superblock->s_inode_list_lock);
+ 	list_for_each_entry(inode, &blockdev_superblock->s_inodes, i_sb_list) {
+ 		struct address_space *mapping = inode->i_mapping;
++		struct block_device *bdev;
+ 
+ 		spin_lock(&inode->i_lock);
+ 		if (inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW) ||
+@@ -1905,8 +1906,12 @@ void iterate_bdevs(void (*func)(struct block_device *, void *), void *arg)
+ 		 */
+ 		iput(old_inode);
+ 		old_inode = inode;
++		bdev = I_BDEV(inode);
+ 
+-		func(I_BDEV(inode), arg);
++		mutex_lock(&bdev->bd_mutex);
++		if (bdev->bd_openers)
++			func(bdev, arg);
++		mutex_unlock(&bdev->bd_mutex);
+ 
+ 		spin_lock(&blockdev_superblock->s_inode_list_lock);
+ 	}
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index ca699ddc11c1..e6a0d22315e9 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -397,7 +397,7 @@ static int nfs_write_end(struct file *file, struct address_space *mapping,
+ 	 */
+ 	if (!PageUptodate(page)) {
+ 		unsigned pglen = nfs_page_length(page);
+-		unsigned end = offset + len;
++		unsigned end = offset + copied;
+ 
+ 		if (pglen == 0) {
+ 			zero_user_segments(page, 0, offset,
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 51b51369704c..dcc21f9e4bd1 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -28,6 +28,9 @@
+ 
+ static struct group_info	*ff_zero_group;
+ 
++static void ff_layout_read_record_layoutstats_done(struct rpc_task *task,
++		struct nfs_pgio_header *hdr);
++
+ static struct pnfs_layout_hdr *
+ ff_layout_alloc_layout_hdr(struct inode *inode, gfp_t gfp_flags)
+ {
+@@ -1293,6 +1296,7 @@ static int ff_layout_read_done_cb(struct rpc_task *task,
+ 					hdr->pgio_mirror_idx + 1,
+ 					&hdr->pgio_mirror_idx))
+ 			goto out_eagain;
++		ff_layout_read_record_layoutstats_done(task, hdr);
+ 		pnfs_read_resend_pnfs(hdr);
+ 		return task->tk_status;
+ 	case -NFS4ERR_RESET_TO_MDS:
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 2c93a85eda51..eb8edb3d52e1 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -252,6 +252,14 @@ pnfs_put_layout_hdr(struct pnfs_layout_hdr *lo)
+ 	}
+ }
+ 
++static void
++pnfs_clear_layoutreturn_info(struct pnfs_layout_hdr *lo)
++{
++	lo->plh_return_iomode = 0;
++	lo->plh_return_seq = 0;
++	clear_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags);
++}
++
+ /*
+  * Mark a pnfs_layout_hdr and all associated layout segments as invalid
+  *
+@@ -270,6 +278,7 @@ pnfs_mark_layout_stateid_invalid(struct pnfs_layout_hdr *lo,
+ 	};
+ 
+ 	set_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags);
++	pnfs_clear_layoutreturn_info(lo);
+ 	return pnfs_mark_matching_lsegs_invalid(lo, lseg_list, &range, 0);
+ }
+ 
+@@ -364,7 +373,9 @@ pnfs_layout_remove_lseg(struct pnfs_layout_hdr *lo,
+ 	list_del_init(&lseg->pls_list);
+ 	/* Matched by pnfs_get_layout_hdr in pnfs_layout_insert_lseg */
+ 	atomic_dec(&lo->plh_refcount);
+-	if (list_empty(&lo->plh_segs)) {
++	if (list_empty(&lo->plh_segs) &&
++	    !test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags) &&
++	    !test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags)) {
+ 		if (atomic_read(&lo->plh_outstanding) == 0)
+ 			set_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags);
+ 		clear_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags);
+@@ -769,14 +780,6 @@ pnfs_destroy_all_layouts(struct nfs_client *clp)
+ 	pnfs_destroy_layouts_byclid(clp, false);
+ }
+ 
+-static void
+-pnfs_clear_layoutreturn_info(struct pnfs_layout_hdr *lo)
+-{
+-	lo->plh_return_iomode = 0;
+-	lo->plh_return_seq = 0;
+-	clear_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags);
+-}
+-
+ /* update lo->plh_stateid with new if is more recent */
+ void
+ pnfs_set_layout_stateid(struct pnfs_layout_hdr *lo, const nfs4_stateid *new,
+@@ -897,6 +900,7 @@ static void pnfs_clear_layoutcommit(struct inode *inode,
+ void pnfs_clear_layoutreturn_waitbit(struct pnfs_layout_hdr *lo)
+ {
+ 	clear_bit_unlock(NFS_LAYOUT_RETURN, &lo->plh_flags);
++	clear_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags);
+ 	smp_mb__after_atomic();
+ 	wake_up_bit(&lo->plh_flags, NFS_LAYOUT_RETURN);
+ 	rpc_wake_up(&NFS_SERVER(lo->plh_inode)->roc_rpcwaitq);
+@@ -910,8 +914,9 @@ pnfs_prepare_layoutreturn(struct pnfs_layout_hdr *lo,
+ 	/* Serialise LAYOUTGET/LAYOUTRETURN */
+ 	if (atomic_read(&lo->plh_outstanding) != 0)
+ 		return false;
+-	if (test_and_set_bit(NFS_LAYOUT_RETURN, &lo->plh_flags))
++	if (test_and_set_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags))
+ 		return false;
++	set_bit(NFS_LAYOUT_RETURN, &lo->plh_flags);
+ 	pnfs_get_layout_hdr(lo);
+ 	if (test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags)) {
+ 		if (stateid != NULL) {
+@@ -1903,6 +1908,8 @@ void pnfs_error_mark_layout_for_return(struct inode *inode,
+ 
+ 	spin_lock(&inode->i_lock);
+ 	pnfs_set_plh_return_info(lo, range.iomode, 0);
++	/* Block LAYOUTGET */
++	set_bit(NFS_LAYOUT_RETURN, &lo->plh_flags);
+ 	/*
+ 	 * mark all matching lsegs so that we are sure to have no live
+ 	 * segments at hand when sending layoutreturn. See pnfs_put_lseg()
+@@ -2241,6 +2248,10 @@ void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr)
+ 	struct nfs_pageio_descriptor pgio;
+ 
+ 	if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags)) {
++		/* Prevent deadlocks with layoutreturn! */
++		pnfs_put_lseg(hdr->lseg);
++		hdr->lseg = NULL;
++
+ 		nfs_pageio_init_read(&pgio, hdr->inode, false,
+ 					hdr->completion_ops);
+ 		hdr->task.tk_status = nfs_pageio_resend(&pgio, hdr);
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index 31d99b2927b0..98dbb51fcca8 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -96,6 +96,7 @@ enum {
+ 	NFS_LAYOUT_RW_FAILED,		/* get rw layout failed stop trying */
+ 	NFS_LAYOUT_BULK_RECALL,		/* bulk recall affecting layout */
+ 	NFS_LAYOUT_RETURN,		/* layoutreturn in progress */
++	NFS_LAYOUT_RETURN_LOCK,		/* Serialise layoutreturn */
+ 	NFS_LAYOUT_RETURN_REQUESTED,	/* Return this layout ASAP */
+ 	NFS_LAYOUT_INVALID_STID,	/* layout stateid id is invalid */
+ 	NFS_LAYOUT_FIRST_LAYOUTGET,	/* Serialize first layoutget */
+diff --git a/fs/notify/inode_mark.c b/fs/notify/inode_mark.c
+index 741077deef3b..a3645249f7ec 100644
+--- a/fs/notify/inode_mark.c
++++ b/fs/notify/inode_mark.c
+@@ -150,12 +150,10 @@ int fsnotify_add_inode_mark(struct fsnotify_mark *mark,
+  */
+ void fsnotify_unmount_inodes(struct super_block *sb)
+ {
+-	struct inode *inode, *next_i, *need_iput = NULL;
++	struct inode *inode, *iput_inode = NULL;
+ 
+ 	spin_lock(&sb->s_inode_list_lock);
+-	list_for_each_entry_safe(inode, next_i, &sb->s_inodes, i_sb_list) {
+-		struct inode *need_iput_tmp;
+-
++	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
+ 		/*
+ 		 * We cannot __iget() an inode in state I_FREEING,
+ 		 * I_WILL_FREE, or I_NEW which is fine because by that point
+@@ -178,49 +176,24 @@ void fsnotify_unmount_inodes(struct super_block *sb)
+ 			continue;
+ 		}
+ 
+-		need_iput_tmp = need_iput;
+-		need_iput = NULL;
+-
+-		/* In case fsnotify_inode_delete() drops a reference. */
+-		if (inode != need_iput_tmp)
+-			__iget(inode);
+-		else
+-			need_iput_tmp = NULL;
++		__iget(inode);
+ 		spin_unlock(&inode->i_lock);
+-
+-		/* In case the dropping of a reference would nuke next_i. */
+-		while (&next_i->i_sb_list != &sb->s_inodes) {
+-			spin_lock(&next_i->i_lock);
+-			if (!(next_i->i_state & (I_FREEING | I_WILL_FREE)) &&
+-						atomic_read(&next_i->i_count)) {
+-				__iget(next_i);
+-				need_iput = next_i;
+-				spin_unlock(&next_i->i_lock);
+-				break;
+-			}
+-			spin_unlock(&next_i->i_lock);
+-			next_i = list_next_entry(next_i, i_sb_list);
+-		}
+-
+-		/*
+-		 * We can safely drop s_inode_list_lock here because either
+-		 * we actually hold references on both inode and next_i or
+-		 * end of list.  Also no new inodes will be added since the
+-		 * umount has begun.
+-		 */
+ 		spin_unlock(&sb->s_inode_list_lock);
+ 
+-		if (need_iput_tmp)
+-			iput(need_iput_tmp);
++		if (iput_inode)
++			iput(iput_inode);
+ 
+ 		/* for each watch, send FS_UNMOUNT and then remove it */
+ 		fsnotify(inode, FS_UNMOUNT, inode, FSNOTIFY_EVENT_INODE, NULL, 0);
+ 
+ 		fsnotify_inode_delete(inode);
+ 
+-		iput(inode);
++		iput_inode = inode;
+ 
+ 		spin_lock(&sb->s_inode_list_lock);
+ 	}
+ 	spin_unlock(&sb->s_inode_list_lock);
++
++	if (iput_inode)
++		iput(iput_inode);
+ }
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index beb7610d64e9..95b1b57b28a2 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -4393,6 +4393,17 @@ void cfg80211_rx_assoc_resp(struct net_device *dev,
+ void cfg80211_assoc_timeout(struct net_device *dev, struct cfg80211_bss *bss);
+ 
+ /**
++ * cfg80211_abandon_assoc - notify cfg80211 of abandoned association attempt
++ * @dev: network device
++ * @bss: The BSS entry with which association was abandoned.
++ *
++ * Call this whenever - for reasons reported through other API, like deauth RX,
++ * an association attempt was abandoned.
++ * This function may sleep. The caller must hold the corresponding wdev's mutex.
++ */
++void cfg80211_abandon_assoc(struct net_device *dev, struct cfg80211_bss *bss);
++
++/**
+  * cfg80211_tx_mlme_mgmt - notification of transmitted deauth/disassoc frame
+  * @dev: network device
+  * @buf: 802.11 frame (header + body)
+diff --git a/include/rdma/ib_addr.h b/include/rdma/ib_addr.h
+index 931a47ba4571..1beab5532035 100644
+--- a/include/rdma/ib_addr.h
++++ b/include/rdma/ib_addr.h
+@@ -205,10 +205,12 @@ static inline void iboe_addr_get_sgid(struct rdma_dev_addr *dev_addr,
+ 
+ 	dev = dev_get_by_index(&init_net, dev_addr->bound_dev_if);
+ 	if (dev) {
+-		ip4 = (struct in_device *)dev->ip_ptr;
+-		if (ip4 && ip4->ifa_list && ip4->ifa_list->ifa_address)
++		ip4 = in_dev_get(dev);
++		if (ip4 && ip4->ifa_list && ip4->ifa_list->ifa_address) {
+ 			ipv6_addr_set_v4mapped(ip4->ifa_list->ifa_address,
+ 					       (struct in6_addr *)gid);
++			in_dev_put(ip4);
++		}
+ 		dev_put(dev);
+ 	}
+ }
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 37dec7e3db43..46e312e9be38 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -299,10 +299,10 @@ u32 (*arch_gettimeoffset)(void) = default_arch_gettimeoffset;
+ static inline u32 arch_gettimeoffset(void) { return 0; }
+ #endif
+ 
+-static inline s64 timekeeping_delta_to_ns(struct tk_read_base *tkr,
++static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr,
+ 					  cycle_t delta)
+ {
+-	s64 nsec;
++	u64 nsec;
+ 
+ 	nsec = delta * tkr->mult + tkr->xtime_nsec;
+ 	nsec >>= tkr->shift;
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index 7363ccf79512..16047a818d2f 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -780,6 +780,10 @@ print_graph_entry_leaf(struct trace_iterator *iter,
+ 
+ 		cpu_data = per_cpu_ptr(data->cpu_data, cpu);
+ 
++		/* If a graph tracer ignored set_graph_notrace */
++		if (call->depth < -1)
++			call->depth += FTRACE_NOTRACE_DEPTH;
++
+ 		/*
+ 		 * Comments display at + 1 to depth. Since
+ 		 * this is a leaf function, keep the comments
+@@ -788,7 +792,8 @@ print_graph_entry_leaf(struct trace_iterator *iter,
+ 		cpu_data->depth = call->depth - 1;
+ 
+ 		/* No need to keep this function around for this depth */
+-		if (call->depth < FTRACE_RETFUNC_DEPTH)
++		if (call->depth < FTRACE_RETFUNC_DEPTH &&
++		    !WARN_ON_ONCE(call->depth < 0))
+ 			cpu_data->enter_funcs[call->depth] = 0;
+ 	}
+ 
+@@ -818,11 +823,16 @@ print_graph_entry_nested(struct trace_iterator *iter,
+ 		struct fgraph_cpu_data *cpu_data;
+ 		int cpu = iter->cpu;
+ 
++		/* If a graph tracer ignored set_graph_notrace */
++		if (call->depth < -1)
++			call->depth += FTRACE_NOTRACE_DEPTH;
++
+ 		cpu_data = per_cpu_ptr(data->cpu_data, cpu);
+ 		cpu_data->depth = call->depth;
+ 
+ 		/* Save this function pointer to see if the exit matches */
+-		if (call->depth < FTRACE_RETFUNC_DEPTH)
++		if (call->depth < FTRACE_RETFUNC_DEPTH &&
++		    !WARN_ON_ONCE(call->depth < 0))
+ 			cpu_data->enter_funcs[call->depth] = call->func;
+ 	}
+ 
+@@ -1052,7 +1062,8 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s,
+ 		 */
+ 		cpu_data->depth = trace->depth - 1;
+ 
+-		if (trace->depth < FTRACE_RETFUNC_DEPTH) {
++		if (trace->depth < FTRACE_RETFUNC_DEPTH &&
++		    !WARN_ON_ONCE(trace->depth < 0)) {
+ 			if (cpu_data->enter_funcs[trace->depth] != trace->func)
+ 				func_match = 0;
+ 			cpu_data->enter_funcs[trace->depth] = 0;
+diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
+index a5502898ea33..2efb335deada 100644
+--- a/net/ceph/messenger.c
++++ b/net/ceph/messenger.c
+@@ -2027,6 +2027,19 @@ static int process_connect(struct ceph_connection *con)
+ 
+ 	dout("process_connect on %p tag %d\n", con, (int)con->in_tag);
+ 
++	if (con->auth_reply_buf) {
++		/*
++		 * Any connection that defines ->get_authorizer()
++		 * should also define ->verify_authorizer_reply().
++		 * See get_connect_authorizer().
++		 */
++		ret = con->ops->verify_authorizer_reply(con, 0);
++		if (ret < 0) {
++			con->error_msg = "bad authorize reply";
++			return ret;
++		}
++	}
++
+ 	switch (con->in_reply.tag) {
+ 	case CEPH_MSGR_TAG_FEATURES:
+ 		pr_err("%s%lld %s feature set mismatch,"
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 8d426f637f58..b2e3c320b648 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2506,7 +2506,7 @@ static void ieee80211_destroy_auth_data(struct ieee80211_sub_if_data *sdata,
+ }
+ 
+ static void ieee80211_destroy_assoc_data(struct ieee80211_sub_if_data *sdata,
+-					 bool assoc)
++					 bool assoc, bool abandon)
+ {
+ 	struct ieee80211_mgd_assoc_data *assoc_data = sdata->u.mgd.assoc_data;
+ 
+@@ -2529,6 +2529,9 @@ static void ieee80211_destroy_assoc_data(struct ieee80211_sub_if_data *sdata,
+ 		mutex_lock(&sdata->local->mtx);
+ 		ieee80211_vif_release_channel(sdata);
+ 		mutex_unlock(&sdata->local->mtx);
++
++		if (abandon)
++			cfg80211_abandon_assoc(sdata->dev, assoc_data->bss);
+ 	}
+ 
+ 	kfree(assoc_data);
+@@ -2758,7 +2761,7 @@ static void ieee80211_rx_mgmt_deauth(struct ieee80211_sub_if_data *sdata,
+ 			   bssid, reason_code,
+ 			   ieee80211_get_reason_code_string(reason_code));
+ 
+-		ieee80211_destroy_assoc_data(sdata, false);
++		ieee80211_destroy_assoc_data(sdata, false, true);
+ 
+ 		cfg80211_rx_mlme_mgmt(sdata->dev, (u8 *)mgmt, len);
+ 		return;
+@@ -3163,14 +3166,14 @@ static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
+ 	if (status_code != WLAN_STATUS_SUCCESS) {
+ 		sdata_info(sdata, "%pM denied association (code=%d)\n",
+ 			   mgmt->sa, status_code);
+-		ieee80211_destroy_assoc_data(sdata, false);
++		ieee80211_destroy_assoc_data(sdata, false, false);
+ 		event.u.mlme.status = MLME_DENIED;
+ 		event.u.mlme.reason = status_code;
+ 		drv_event_callback(sdata->local, sdata, &event);
+ 	} else {
+ 		if (!ieee80211_assoc_success(sdata, bss, mgmt, len)) {
+ 			/* oops -- internal error -- send timeout for now */
+-			ieee80211_destroy_assoc_data(sdata, false);
++			ieee80211_destroy_assoc_data(sdata, false, false);
+ 			cfg80211_assoc_timeout(sdata->dev, bss);
+ 			return;
+ 		}
+@@ -3183,7 +3186,7 @@ static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
+ 		 * recalc after assoc_data is NULL but before associated
+ 		 * is set can cause the interface to go idle
+ 		 */
+-		ieee80211_destroy_assoc_data(sdata, true);
++		ieee80211_destroy_assoc_data(sdata, true, false);
+ 
+ 		/* get uapsd queues configuration */
+ 		uapsd_queues = 0;
+@@ -3882,7 +3885,7 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
+ 				.u.mlme.status = MLME_TIMEOUT,
+ 			};
+ 
+-			ieee80211_destroy_assoc_data(sdata, false);
++			ieee80211_destroy_assoc_data(sdata, false, false);
+ 			cfg80211_assoc_timeout(sdata->dev, bss);
+ 			drv_event_callback(sdata->local, sdata, &event);
+ 		}
+@@ -4021,7 +4024,7 @@ void ieee80211_mgd_quiesce(struct ieee80211_sub_if_data *sdata)
+ 					       WLAN_REASON_DEAUTH_LEAVING,
+ 					       false, frame_buf);
+ 		if (ifmgd->assoc_data)
+-			ieee80211_destroy_assoc_data(sdata, false);
++			ieee80211_destroy_assoc_data(sdata, false, true);
+ 		if (ifmgd->auth_data)
+ 			ieee80211_destroy_auth_data(sdata, false);
+ 		cfg80211_tx_mlme_mgmt(sdata->dev, frame_buf,
+@@ -4903,7 +4906,7 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
+ 					       IEEE80211_STYPE_DEAUTH,
+ 					       req->reason_code, tx,
+ 					       frame_buf);
+-		ieee80211_destroy_assoc_data(sdata, false);
++		ieee80211_destroy_assoc_data(sdata, false, true);
+ 		ieee80211_report_disconnect(sdata, frame_buf,
+ 					    sizeof(frame_buf), true,
+ 					    req->reason_code);
+@@ -4978,7 +4981,7 @@ void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata)
+ 	sdata_lock(sdata);
+ 	if (ifmgd->assoc_data) {
+ 		struct cfg80211_bss *bss = ifmgd->assoc_data->bss;
+-		ieee80211_destroy_assoc_data(sdata, false);
++		ieee80211_destroy_assoc_data(sdata, false, false);
+ 		cfg80211_assoc_timeout(sdata->dev, bss);
+ 	}
+ 	if (ifmgd->auth_data)
+diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
+index 976c7812bbd5..a0110c2f5ae4 100644
+--- a/net/sunrpc/auth_gss/auth_gss.c
++++ b/net/sunrpc/auth_gss/auth_gss.c
+@@ -541,9 +541,13 @@ gss_setup_upcall(struct gss_auth *gss_auth, struct rpc_cred *cred)
+ 		return gss_new;
+ 	gss_msg = gss_add_msg(gss_new);
+ 	if (gss_msg == gss_new) {
+-		int res = rpc_queue_upcall(gss_new->pipe, &gss_new->msg);
++		int res;
++		atomic_inc(&gss_msg->count);
++		res = rpc_queue_upcall(gss_new->pipe, &gss_new->msg);
+ 		if (res) {
+ 			gss_unhash_msg(gss_new);
++			atomic_dec(&gss_msg->count);
++			gss_release_msg(gss_new);
+ 			gss_msg = ERR_PTR(res);
+ 		}
+ 	} else
+@@ -836,6 +840,7 @@ gss_pipe_destroy_msg(struct rpc_pipe_msg *msg)
+ 			warn_gssd();
+ 		gss_release_msg(gss_msg);
+ 	}
++	gss_release_msg(gss_msg);
+ }
+ 
+ static void gss_pipe_dentry_destroy(struct dentry *dir,
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index a53b3a16b4f1..62c056ea403b 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -606,9 +606,9 @@ static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
+ 		return 0;
+ 
+ 	pkt = virtio_transport_alloc_pkt(&info, 0,
+-					 le32_to_cpu(pkt->hdr.dst_cid),
++					 le64_to_cpu(pkt->hdr.dst_cid),
+ 					 le32_to_cpu(pkt->hdr.dst_port),
+-					 le32_to_cpu(pkt->hdr.src_cid),
++					 le64_to_cpu(pkt->hdr.src_cid),
+ 					 le32_to_cpu(pkt->hdr.src_port));
+ 	if (!pkt)
+ 		return -ENOMEM;
+@@ -823,7 +823,7 @@ virtio_transport_send_response(struct vsock_sock *vsk,
+ 	struct virtio_vsock_pkt_info info = {
+ 		.op = VIRTIO_VSOCK_OP_RESPONSE,
+ 		.type = VIRTIO_VSOCK_TYPE_STREAM,
+-		.remote_cid = le32_to_cpu(pkt->hdr.src_cid),
++		.remote_cid = le64_to_cpu(pkt->hdr.src_cid),
+ 		.remote_port = le32_to_cpu(pkt->hdr.src_port),
+ 		.reply = true,
+ 	};
+@@ -863,9 +863,9 @@ virtio_transport_recv_listen(struct sock *sk, struct virtio_vsock_pkt *pkt)
+ 	child->sk_state = SS_CONNECTED;
+ 
+ 	vchild = vsock_sk(child);
+-	vsock_addr_init(&vchild->local_addr, le32_to_cpu(pkt->hdr.dst_cid),
++	vsock_addr_init(&vchild->local_addr, le64_to_cpu(pkt->hdr.dst_cid),
+ 			le32_to_cpu(pkt->hdr.dst_port));
+-	vsock_addr_init(&vchild->remote_addr, le32_to_cpu(pkt->hdr.src_cid),
++	vsock_addr_init(&vchild->remote_addr, le64_to_cpu(pkt->hdr.src_cid),
+ 			le32_to_cpu(pkt->hdr.src_port));
+ 
+ 	vsock_insert_connected(vchild);
+@@ -904,9 +904,9 @@ void virtio_transport_recv_pkt(struct virtio_vsock_pkt *pkt)
+ 	struct sock *sk;
+ 	bool space_available;
+ 
+-	vsock_addr_init(&src, le32_to_cpu(pkt->hdr.src_cid),
++	vsock_addr_init(&src, le64_to_cpu(pkt->hdr.src_cid),
+ 			le32_to_cpu(pkt->hdr.src_port));
+-	vsock_addr_init(&dst, le32_to_cpu(pkt->hdr.dst_cid),
++	vsock_addr_init(&dst, le64_to_cpu(pkt->hdr.dst_cid),
+ 			le32_to_cpu(pkt->hdr.dst_port));
+ 
+ 	trace_virtio_transport_recv_pkt(src.svm_cid, src.svm_port,
+diff --git a/net/wireless/core.h b/net/wireless/core.h
+index 66f2a1145d7c..b5cf21850be7 100644
+--- a/net/wireless/core.h
++++ b/net/wireless/core.h
+@@ -410,6 +410,7 @@ void cfg80211_sme_disassoc(struct wireless_dev *wdev);
+ void cfg80211_sme_deauth(struct wireless_dev *wdev);
+ void cfg80211_sme_auth_timeout(struct wireless_dev *wdev);
+ void cfg80211_sme_assoc_timeout(struct wireless_dev *wdev);
++void cfg80211_sme_abandon_assoc(struct wireless_dev *wdev);
+ 
+ /* internal helpers */
+ bool cfg80211_supported_cipher_suite(struct wiphy *wiphy, u32 cipher);
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index c284d883c349..2a62ef628f42 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -149,6 +149,18 @@ void cfg80211_assoc_timeout(struct net_device *dev, struct cfg80211_bss *bss)
+ }
+ EXPORT_SYMBOL(cfg80211_assoc_timeout);
+ 
++void cfg80211_abandon_assoc(struct net_device *dev, struct cfg80211_bss *bss)
++{
++	struct wireless_dev *wdev = dev->ieee80211_ptr;
++	struct wiphy *wiphy = wdev->wiphy;
++
++	cfg80211_sme_abandon_assoc(wdev);
++
++	cfg80211_unhold_bss(bss_from_pub(bss));
++	cfg80211_put_bss(wiphy, bss);
++}
++EXPORT_SYMBOL(cfg80211_abandon_assoc);
++
+ void cfg80211_tx_mlme_mgmt(struct net_device *dev, const u8 *buf, size_t len)
+ {
+ 	struct wireless_dev *wdev = dev->ieee80211_ptr;
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index add6824c44fd..95c713cdabf0 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -39,6 +39,7 @@ struct cfg80211_conn {
+ 		CFG80211_CONN_ASSOCIATING,
+ 		CFG80211_CONN_ASSOC_FAILED,
+ 		CFG80211_CONN_DEAUTH,
++		CFG80211_CONN_ABANDON,
+ 		CFG80211_CONN_CONNECTED,
+ 	} state;
+ 	u8 bssid[ETH_ALEN], prev_bssid[ETH_ALEN];
+@@ -206,6 +207,8 @@ static int cfg80211_conn_do_work(struct wireless_dev *wdev)
+ 		cfg80211_mlme_deauth(rdev, wdev->netdev, params->bssid,
+ 				     NULL, 0,
+ 				     WLAN_REASON_DEAUTH_LEAVING, false);
++		/* fall through */
++	case CFG80211_CONN_ABANDON:
+ 		/* free directly, disconnected event already sent */
+ 		cfg80211_sme_free(wdev);
+ 		return 0;
+@@ -423,6 +426,17 @@ void cfg80211_sme_assoc_timeout(struct wireless_dev *wdev)
+ 	schedule_work(&rdev->conn_work);
+ }
+ 
++void cfg80211_sme_abandon_assoc(struct wireless_dev *wdev)
++{
++	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
++
++	if (!wdev->conn)
++		return;
++
++	wdev->conn->state = CFG80211_CONN_ABANDON;
++	schedule_work(&rdev->conn_work);
++}
++
+ static int cfg80211_sme_get_conn_ies(struct wireless_dev *wdev,
+ 				     const u8 *ies, size_t ies_len,
+ 				     const u8 **out_ies, size_t *out_ies_len)
+diff --git a/scripts/kconfig/nconf.gui.c b/scripts/kconfig/nconf.gui.c
+index 8275f0e55106..4b2f44c20caf 100644
+--- a/scripts/kconfig/nconf.gui.c
++++ b/scripts/kconfig/nconf.gui.c
+@@ -364,12 +364,14 @@ int dialog_inputbox(WINDOW *main_window,
+ 	WINDOW *prompt_win;
+ 	WINDOW *form_win;
+ 	PANEL *panel;
+-	int i, x, y;
++	int i, x, y, lines, columns, win_lines, win_cols;
+ 	int res = -1;
+ 	int cursor_position = strlen(init);
+ 	int cursor_form_win;
+ 	char *result = *resultp;
+ 
++	getmaxyx(stdscr, lines, columns);
++
+ 	if (strlen(init)+1 > *result_len) {
+ 		*result_len = strlen(init)+1;
+ 		*resultp = result = realloc(result, *result_len);
+@@ -386,14 +388,19 @@ int dialog_inputbox(WINDOW *main_window,
+ 	if (title)
+ 		prompt_width = max(prompt_width, strlen(title));
+ 
++	win_lines = min(prompt_lines+6, lines-2);
++	win_cols = min(prompt_width+7, columns-2);
++	prompt_lines = max(win_lines-6, 0);
++	prompt_width = max(win_cols-7, 0);
++
+ 	/* place dialog in middle of screen */
+-	y = (getmaxy(stdscr)-(prompt_lines+4))/2;
+-	x = (getmaxx(stdscr)-(prompt_width+4))/2;
++	y = (lines-win_lines)/2;
++	x = (columns-win_cols)/2;
+ 
+ 	strncpy(result, init, *result_len);
+ 
+ 	/* create the windows */
+-	win = newwin(prompt_lines+6, prompt_width+7, y, x);
++	win = newwin(win_lines, win_cols, y, x);
+ 	prompt_win = derwin(win, prompt_lines+1, prompt_width, 2, 2);
+ 	form_win = derwin(win, 1, prompt_width, prompt_lines+3, 2);
+ 	keypad(form_win, TRUE);



